US20220153264A1 - Operational Risk Assessment for Autonomous Vehicle Control - Google Patents

Operational Risk Assessment for Autonomous Vehicle Control

Info

Publication number
US20220153264A1
Authority
US
United States
Prior art keywords
autonomous vehicle
operational risk
risk
planned path
operational
Prior art date
Legal status
Abandoned
Application number
US17/587,280
Inventor
Michael Woon
Michael Glenn Diaz
Current Assignee
Retrospect Technology LLC
Original Assignee
Retrospect Technology LLC
Priority date
Filing date
Publication date
Priority claimed from US16/399,901 (now US11198431B2)
Application filed by Retrospect Technology LLC
Priority to US17/587,280
Publication of US20220153264A1

Classifications

    • G05D 1/0055: Control of position, course or altitude of land, water, air, or space vehicles (e.g., automatic pilot) with safety arrangements
    • B60W 30/09: Active safety systems; taking automatic action to avoid collision, e.g. braking and steering
    • B60W 30/0956: Predicting travel path or likelihood of collision, the prediction being responsive to traffic or environmental parameters
    • B60W 60/0027: Planning or execution of driving tasks using trajectory prediction for other traffic participants
    • B60W 60/00276: Planning or execution of driving tasks using trajectory prediction for two or more other traffic participants
    • G05D 1/0088: Control characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours
    • G06N 20/00: Machine learning
    • G06N 3/08: Neural networks; learning methods
    • G06N 5/04: Inference or reasoning models
    • G06V 20/58: Recognition of moving objects or obstacles (e.g., vehicles or pedestrians) and of traffic objects (e.g., traffic signs, traffic lights, roads) exterior to a vehicle, using sensors mounted on the vehicle
    • B60W 2556/10: Input parameters relating to data; historical data
    • B60W 2556/45: Input parameters relating to data; external transmission of data to or from the vehicle
    • G05D 2201/0213: Control of position of land vehicles, e.g. car or truck

Definitions

  • This disclosure relates to operational risk assessment for autonomous vehicle control, in particular, to causing changes in the controlling of an autonomous vehicle based on an operational risk determined for the autonomous vehicle.
  • Conventional autonomous vehicle control systems rely on continuously or otherwise periodically collected sensor information to generate a current understanding of the environment in which the autonomous vehicle is located. Based on the current understanding of the environment, the control system of an autonomous vehicle decides how to operate the autonomous vehicle, for example, by controlling the autonomous vehicle to accelerate, come to a complete stop, adjust a direction of movement, or otherwise behave in accordance with traffic safety laws.
  • Disclosed herein are, inter alia, implementations of systems and techniques for operational risk assessment for autonomous vehicle control.
  • A method for controlling an autonomous vehicle based on an operational risk determined for the autonomous vehicle includes receiving input information indicating one or more objects detected within an environment in which the autonomous vehicle is located and one or more object predictions representing expected behaviors of the one or more objects.
  • A current scene synthesized for the autonomous vehicle is evaluated based on the input information to determine whether the current scene represents a violation of one or more first rules used for determining the operational risk, the operational risk representing a probability that the autonomous vehicle will encounter an event.
  • The current scene is evaluated against a previous scene synthesized for the autonomous vehicle to determine whether a degree of change between the current scene and the previous scene represents a violation of one or more second rules used for determining a risk underestimation.
  • Responsive to determining a violation of the one or more first rules or the one or more second rules, a signal used for controlling the autonomous vehicle is generated.
  • The signal is transmitted to one or more control system modules of the autonomous vehicle to cause the one or more control system modules to control the autonomous vehicle by performing a remedial action to prevent the autonomous vehicle from encountering the event.
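  • As a hedged illustration of this first technique, the following Python sketch wires the two rule checks to a control signal. All names (Scene, first_rules, second_rules, control_modules) are hypothetical; the patent does not define an API.

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    """Synthesized snapshot: detected objects plus predicted behaviors."""
    objects: list = field(default_factory=list)
    predictions: list = field(default_factory=list)

def assess_and_control(current: Scene, previous: Scene,
                       first_rules, second_rules, control_modules):
    # First rules: does the current scene itself violate an operational
    # risk rule (i.e., the vehicle is likely to encounter an event)?
    risk_violation = any(rule(current) for rule in first_rules)
    # Second rules: does the degree of change between the current and
    # previous scenes indicate the operational risk is underestimated?
    underestimation = any(rule(current, previous) for rule in second_rules)
    if risk_violation or underestimation:
        signal = {"remedial_action": True,
                  "reason": "rule_violation" if risk_violation
                            else "risk_underestimation"}
        for module in control_modules:
            module.handle(signal)  # e.g., safe stop, pullover, slow down
```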
  • A method for controlling an autonomous vehicle based on an operational risk determined for the autonomous vehicle includes receiving input information indicating one or more objects detected within an environment in which the autonomous vehicle is located and one or more object predictions representing expected behaviors of the one or more objects.
  • A planned path representing an intended sequence of actions for the autonomous vehicle to travel within the environment in which the autonomous vehicle is located is identified.
  • A first operational risk for the autonomous vehicle is determined based on the input information and based on the planned path. The first operational risk represents a probability that the autonomous vehicle will encounter an event along the planned path.
  • A second operational risk for the autonomous vehicle is determined based on the planned path and based on one or more perturbations applied to the planned path.
  • The second operational risk represents a probability that the autonomous vehicle will encounter the event along the planned path based on the one or more perturbations applied to the planned path.
  • A determination is made as to whether a normalized difference between the first operational risk and the second operational risk exceeds a sensitivity threshold. Responsive to determining that the normalized difference between the first operational risk and the second operational risk exceeds the sensitivity threshold, a signal used for controlling the autonomous vehicle is generated. The signal is transmitted to one or more control system modules of the autonomous vehicle to cause the one or more control system modules to control the autonomous vehicle by performing a remedial action to prevent the autonomous vehicle from encountering the event.
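  • A minimal sketch of the sensitivity check described above, assuming the risk model and the perturbation scheme are supplied by the vehicle's own software; the normalization by the first risk estimate and the constants are illustrative assumptions, not the patent's definitions.

```python
def sensitivity_check(planned_path, risk_fn, perturb_fn,
                      sensitivity_threshold=0.2, n_perturbations=10):
    """Return True when the normalized difference between the first and
    second operational risk estimates exceeds the sensitivity threshold.

    risk_fn(path): probability (0..1) of encountering an event along path.
    perturb_fn(path): returns a slightly modified copy of the path.
    """
    first_risk = risk_fn(planned_path)
    # Second operational risk: risk along the path under perturbations.
    second_risk = max(risk_fn(perturb_fn(planned_path))
                      for _ in range(n_perturbations))
    normalized_diff = abs(second_risk - first_risk) / max(first_risk, 1e-9)
    return normalized_diff > sensitivity_threshold
```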
  • A system for controlling an autonomous vehicle based on an operational risk determined for the autonomous vehicle includes an operational risk monitor module and a memory buffer.
  • The operational risk monitor module receives input information associated with an environment in which the autonomous vehicle is located from control system modules of the autonomous vehicle and indicates an action for performance using one or more of the control system modules based on an operational risk determined for the autonomous vehicle using the input information.
  • The memory buffer stores the information received from the control system modules, information indicative of previous operational risk estimations determined using the operational risk monitor module, and information indicative of scenes previously synthesized for the planned path.
  • FIG. 1 is a block diagram showing an example of a system for operational risk assessment for autonomous vehicle control.
  • FIG. 2 is a block diagram showing an example of an operational risk monitor module.
  • FIG. 3 is a block diagram showing examples of modules and components used for operational risk assessment for autonomous vehicle control.
  • FIG. 4 is an illustration showing an example of a systematic monitor used by an operational risk monitor module.
  • FIG. 5 is an illustration showing an example of a heuristic monitor used by an operational risk monitor module.
  • FIG. 6 is an illustration showing an example of an extended heuristic monitor used by an operational risk monitor module.
  • FIG. 7 is a flowchart showing an example of a technique for autonomous vehicle control based on a determined operational risk using systematic monitoring.
  • FIG. 8 is a flowchart showing an example of a technique for autonomous vehicle control based on a determined operational risk using heuristic monitoring.
  • FIG. 9 is a flowchart showing an example of a technique for autonomous vehicle control based on a determined operational risk using sensitivity monitoring.
  • FIG. 10 is a block diagram showing an example of a computing device which may be used in a system for operational risk assessment for autonomous vehicle control.
  • Conventional autonomous vehicle control systems determine a current path for controlling a vehicle by perceiving information detected about an environment in which the vehicle is located and predicting how that information will affect the operation of the vehicle.
  • However, changes in operational risk for a given planned path, where the operational risk includes or otherwise refers to the assessed risk along a previous planned path based on perceived and predicted information within the environment of the vehicle, are not considered.
  • That is, conventional autonomous vehicle control systems are limited to performing current analyses of operational risk for a current planned path, without regard to current analyses of operational risk for previous planned paths.
  • The failure to consider changes in operational risk for previous planned paths may, in some cases, result in a failure to diagnose or detect faults in the autonomous vehicle system that cause it to underestimate the operational risk of the vehicle.
  • Implementations of this disclosure address problems such as these by causing changes in the controlling of an autonomous vehicle based on an operational risk determined for the autonomous vehicle.
  • An operational risk monitor module of the autonomous vehicle uses information about objects detected within an environment in which the autonomous vehicle is located and predicted behaviors of those objects to assess the operational risk of the autonomous vehicle along a planned path. The operational risk is used to determine whether to cause a change in the controlling of the autonomous vehicle, for example, based on a comparison between the operational risk and a previously estimated operational risk or based on a determination that the operational risk exceeds a threshold.
  • The operational risk monitor module transmits a signal to one or more control system modules of the autonomous vehicle to indicate a change in the controlling of the autonomous vehicle based on the operational risk.
  • The operational risk monitor module can use a systematic monitor, a heuristic monitor, and/or a sensitivity monitor to determine when to cause changes in the controlling of the autonomous vehicle.
  • The systematic monitor performs continuous or otherwise frequent and periodic monitoring of operating conditions of the autonomous vehicle and of conditions of the environment in which the autonomous vehicle is located, such as to compare operational risks determined at given times against previous values.
  • For example, the systematic monitor functionality can be used to determine that a change in the controlling of the autonomous vehicle is necessary when a current operational risk estimate for a given planned path exceeds a previous operational risk estimation for that planned path, as in the sketch below.
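  • One way such a comparison could look in code; the path identifiers and the history mapping are assumptions for illustration only.

```python
def systematic_check(path_id, current_risk, risk_history):
    """Flag a needed control change when the current operational risk
    estimate for a planned path exceeds the previous estimate recorded
    for that same path."""
    previous_risk = risk_history.get(path_id)
    risk_history[path_id] = current_risk  # remember for the next cycle
    return previous_risk is not None and current_risk > previous_risk

# Usage: a rising estimate for the same path triggers the monitor.
history = {}
systematic_check("path-42", 0.10, history)         # False: nothing to compare
assert systematic_check("path-42", 0.25, history)  # True: risk increased
```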
  • The heuristic monitor applies rules used by the systematic monitor against current conditions of the autonomous vehicle and of the environment in which the autonomous vehicle is located to determine whether a rule is violated.
  • For example, a rule may indicate that all objects should be initially detected within a specific distance of the front of the autonomous vehicle while the autonomous vehicle is traveling forward.
  • Where such a rule is broken, the rule violation can be determined and used to cause a change in the controlling of the autonomous vehicle, as in the sketch below.
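  • One plausible encoding of such a rule, reading it as requiring that an object's first detection occur no closer than a minimum distance ahead of a forward-traveling vehicle (an object that suddenly appears very close suggests the perception pipeline missed it earlier). The types, field names, and the 30 m parameter are assumptions, not values from the patent.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    distance_ahead_m: float
    is_first_observation: bool

@dataclass
class VehicleState:
    moving_forward: bool

MIN_FIRST_DETECTION_M = 30.0  # assumed rule parameter

def first_detection_rule_ok(det: Detection, vehicle: VehicleState) -> bool:
    """Return False when the rule is violated, which the monitor can then
    use to cause a change in the controlling of the vehicle."""
    if not (vehicle.moving_forward and det.is_first_observation):
        return True  # rule does not apply to this detection
    return det.distance_ahead_m >= MIN_FIRST_DETECTION_M
```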
  • The operational risk monitor module may be further configured to use a combination of the systematic monitor, the heuristic monitor, and/or the sensitivity monitor simultaneously. Using a combination of the systematic monitor, the heuristic monitor, and/or the sensitivity monitor broadens the scope of system issues which can be detected, thereby further improving the time-to-detection of hazardous and non-hazardous events.
  • Operational risk refers to a calculated estimate associated with a given operation, which may be based on or otherwise account for estimated costs of potential outcomes associated with the given operation, estimates of likelihood of such potential outcomes, and/or proxies or other indicators of such aggregated risk estimates, estimated costs, and/or such likelihood.
  • Costs for a potential outcome may, for example, include or otherwise account for those costs that may be borne by the vehicle itself, occupants of the vehicle, and/or the environment of the vehicle and/or objects therein (e.g., other vehicles, pedestrians, and/or other objects around the vehicle). Costs may refer to events the autonomous vehicle is expected to encounter.
  • An event can be a hazardous event or a non-hazardous event.
  • A hazardous event refers to an event which results in or has the potential to result in injury to a person and/or damage to property.
  • A non-hazardous event refers to an event which does not result in or have the potential to result in injury to a person and/or damage to property.
  • For example, a hazardous event may be a collision involving the autonomous vehicle, the autonomous vehicle losing contact with the surface of the road on which it travels, or the like.
  • A non-hazardous event may be the autonomous vehicle driving onto rumble strips, driving through a pooling of water, or the like.
  • For example, a low operational risk estimate may be calculated when the vehicle is moving at 5 mph and a leaf is identified 500 feet ahead, since the potential costs are low (e.g., impact with a tree leaf at 5 mph would not damage the vehicle and damage to the leaf is of no cost) and the likelihood of impact with the leaf is low (e.g., at 5 mph the vehicle has sufficient time to maneuver to avoid the leaf and/or the leaf may blow away before the vehicle arrives).
  • Conversely, a high operational risk estimate may be calculated when the vehicle is moving at 70 mph and another vehicle is identified 80 feet ahead, since the potential costs are higher (e.g., impact of the two vehicles at 70 mph may result in high costs borne by the vehicles and/or occupants thereof) and the likelihood of impact is higher (e.g., given the speed, distance, and size of the vehicle ahead).
  • The operational risk may be determined, estimated, or otherwise calculated in different manners, which may, but need not, include estimates of costs based on a severity of an event associated with the operational risk and/or estimates of the probability of an encounter with an event arising (e.g., based on the speed of the vehicle and/or identification of obstacles). A minimal sketch of one such calculation appears below.
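  • A minimal sketch of one such calculation, treating operational risk as an expected cost (estimated cost of each potential outcome weighted by its likelihood). The numeric values echo the leaf and vehicle examples above but are purely illustrative.

```python
def operational_risk(outcomes):
    """Aggregate risk over (estimated_cost, probability) pairs."""
    return sum(cost * p for cost, p in outcomes)

# Leaf 500 feet ahead at 5 mph: negligible cost, low likelihood of impact.
low_risk = operational_risk([(1.0, 0.01)])
# Vehicle 80 feet ahead at 70 mph: high cost, much higher likelihood.
high_risk = operational_risk([(10_000.0, 0.4)])
assert high_risk > low_risk
```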
  • The environment in which the autonomous vehicle is located includes or otherwise refers to a geographical location of the autonomous vehicle and contextual information relevant for a control system of the autonomous vehicle to understand how to operate the autonomous vehicle at that geographical location (e.g., traffic control devices, buildings, etc.).
  • FIG. 1 is a block diagram showing an example of a system 100 for operational risk assessment for autonomous vehicle control.
  • The system 100 includes an autonomous vehicle 102 which is controlled using modules and components.
  • The autonomous vehicle 102 includes sensors 104, control system modules 106, and an operational risk monitor module 108.
  • The sensors 104 detect an object 110 within an environment in which the autonomous vehicle 102 is located.
  • The sensors 104 are hardware units used to measure or otherwise record information about the environment in which the autonomous vehicle 102 is located.
  • The sensors 104 may also include software used by those hardware units, such as to control the operation of those hardware units.
  • The sensors 104 may include a radar sensor, a LIDAR sensor, an image sensor (e.g., a camera), another sensor, multiples of one or more of the foregoing sensor types, or a combination thereof.
  • For example, the sensors 104 may include a LIDAR sensor, multiple image sensors arranged to capture a 360 degree view around the autonomous vehicle, a front radar sensor, and a rear radar sensor.
  • The object 110 represents organic or inorganic matter located within the environment in which the autonomous vehicle 102 is located.
  • The object 110 may be a pedestrian, a vehicle, a street sign, a wall, a traffic control device, an animal, a tree, multiples of one or more of the foregoing object types, or a combination thereof.
  • The control system modules 106 process the output of the sensors 104.
  • The control system modules 106 include software which receives the information measured or otherwise recorded using the sensors 104 to determine how to control the autonomous vehicle 102 and to control the autonomous vehicle 102 accordingly.
  • The control system modules 106 may include, but are not limited to, one or more of an onboard navigation module, a localization module, a control constraints module, a perception module, a prediction module, a planning module, a platform control module, another module, or a combination thereof.
  • The control system modules 106 are run on a computing device of the autonomous vehicle 102.
  • The operational risk monitor module 108 uses information from the control system modules 106 to assess an operational risk for the autonomous vehicle 102.
  • The operational risk monitor module 108 receives input information from the control system modules 106 indicating one or more objects (e.g., the object 110) detected within an environment in which the autonomous vehicle 102 is located and one or more object predictions representing expected behaviors of those detected objects.
  • The operational risk monitor module 108 further identifies planned paths determined using the control system modules 106, which planned paths represent intended sequences of actions to be used for the autonomous vehicle 102 to travel within the environment in which the autonomous vehicle 102 is located.
  • The operational risk monitor module 108 uses the information about the detected objects, the object predictions, and the planned paths to determine an operational risk for the autonomous vehicle along the planned paths.
  • The operational risk monitor module 108 then processes the determined operational risk to determine whether a remedial action should be performed, for example, by aborting operations within the environment in which the autonomous vehicle 102 is located to avoid an event which the autonomous vehicle 102 is expected to encounter or to otherwise reduce the probability that the autonomous vehicle 102 will encounter that event.
  • The remedial action may include, but is not limited to, a safe stop, a safe pullover, a blind pullover, a speed reduction, a teleoperation-related control, or another immediate action, which may, for example, be pending human analysis of the determined operational risk.
  • The operational risk monitor module 108 transmits a signal indicating or otherwise used to indicate the remedial action to the control system modules 106.
  • The control system modules 106 then cause the performance of the remedial action based on the signal received from the operational risk monitor module 108.
  • Where the operational risk monitor module 108 determines that the autonomous vehicle 102 is not sufficiently estimating the operational risk associated with the previous planned paths, the remedial action may indicate to follow only the last planned path and to stop the execution of future planned paths of the autonomous vehicle 102.
  • In such a case, the operational risk monitor module 108 generates a signal indicating the command to stop execution and transmits that signal to the control system modules 106.
  • The control system modules 106 then stop the autonomous vehicle 102 according to the signal.
  • Otherwise, the operational risk monitor module 108 transmits a signal indicating to allow the autonomous vehicle 102 to maintain a safe state, for example, by allowing the autonomous vehicle 102 to continue traveling along the current planned path or future planned paths within the environment in which the autonomous vehicle 102 is located.
  • The operational risk monitor module 108 can use a systematic monitor and/or a heuristic monitor.
  • The systematic monitor represents functionality of the operational risk monitor module 108 for systematically comparing the current operational risk for a previous planned path received from the control system modules 106 against the previous operational risk estimation, which was previously determined for a planned path previously received from the control system modules 106.
  • The heuristic monitor represents functionality of the operational risk monitor module 108 for using configurable rules to detect that an operational risk determined based on input information exceeds a threshold representing a maximum acceptable risk for the autonomous vehicle. That is, the heuristic monitor evaluates inputs used to determine a planned path which can cause future operational risk estimates to rise, for example, where a fault in prior input information could cause an operational risk to be underestimated.
  • The particular approach used by the operational risk monitor module 108 can be defined during the deployment of the operational risk monitor module 108 to a computer of the autonomous vehicle 102.
  • The approach used by the operational risk monitor module 108 can be changed after the deployment of the operational risk monitor module 108 to the computer of the autonomous vehicle.
  • A server 112 communicates with one or more computing aspects of the autonomous vehicle 102 over a network 114.
  • The server 112 may be or include a hardware server (e.g., a server device), a software server (e.g., a web server and/or a virtual server), or both.
  • For example, the server 112 may be a computing device located at a remote operating center used to manage the operations of the autonomous vehicle 102 (and/or a fleet of autonomous vehicles including the autonomous vehicle 102) or a server device located in a rack, such as of a data center.
  • The network 114 is a network used to communicate information between a computing aspect of the autonomous vehicle 102 and the server 112.
  • The network 114 may, for example, be a local area network, a wide area network, a machine-to-machine network, a virtual private network, or another public or private network.
  • The server 112 may receive information associated with the operational risk determined using the operational risk monitor module 108, a signal generated using the operational risk monitor module 108, or other information processed by or at the operational risk monitor module 108. For example, that information can be reported to the server 112 for logging.
  • The information logged at the server 112 may be used to further configure or update configurations of the operational risk monitor module 108.
  • The information logged at the server 112 may be processed using pattern recognition or other intelligence software to determine whether false positives, false negatives, or other misunderstood features are present within a particular location through which the autonomous vehicle has traveled. That information can be analyzed and then used to update an intelligence system of the autonomous vehicle, for example, by training the control system modules 106 and/or the operational risk monitor module 108 to more accurately detect and understand aspects about the environment in which the autonomous vehicle is located.
  • Implementations of the system 100 may differ from what is shown and described with respect to FIG. 1.
  • For example, the system 100 may omit the server 112 and/or the network 114.
  • In such a case, data indicative of how the operational risk monitor module 108 is used may be locally stored within a computing device of the autonomous vehicle 102 itself. The data can be retrieved directly from the computing device. In another example, data indicative of how the operational risk monitor module 108 is used may not be stored.
  • In some implementations, the operational risk monitor module 108 may not transmit a signal to the control system modules 106.
  • For example, the control system modules 106 may be configured to continue controlling the autonomous vehicle 102 along the current planned path unless a signal is received from the operational risk monitor module 108.
  • In such an implementation, the operational risk monitor module 108 may transmit a signal to the control system modules 106 only where a determination is made that a remedial action should be performed.
  • FIG. 2 is a block diagram showing an example of an operational risk monitor module 200.
  • The operational risk monitor module 200 may, for example, be the operational risk monitor module 108 shown in FIG. 1.
  • The operational risk monitor module 200 includes an object risk comparator 202, an object prediction risk comparator 204, a planned path comparator 206, and an operational risk processor 208 that uses the output of the object risk comparator 202, the object prediction risk comparator 204, and the planned path comparator 206 to determine a remedial action 210.
  • The operational risk monitor module 200 uses information about objects 212, object predictions 214, and planned paths 216, received from one or more control system modules of an autonomous vehicle (e.g., the control system modules 106 and the autonomous vehicle 102), to determine the remedial action 210.
  • The operational risk monitor module 200 then transmits a signal to the one or more control system modules to indicate whether and how to change a controlling of the autonomous vehicle.
  • The object risk comparator 202 receives information about the objects 212, which are objects located within an environment in which an autonomous vehicle using the operational risk monitor module 200 is located.
  • The objects 212 may be or include objects detected within the environment in which the autonomous vehicle is located.
  • The objects 212 may be detected using one or more sensors of the autonomous vehicle (e.g., the sensors 104).
  • The information about the objects 212 received by the object risk comparator 202 includes a list of objects currently detected within the environment in which the autonomous vehicle is located and a list of objects previously detected within that environment.
  • The objects previously detected within the environment in which the autonomous vehicle is located can be retrieved from a buffer or other memory structure that temporarily stores information used for processing at the operational risk monitor module 200.
  • A list of objects, such as the list of currently detected objects or the list of previously detected objects, may be, include, or otherwise refer to a data set which indicates the objects.
  • The object risk comparator 202 uses the information about the objects 212 to determine whether the objects currently detected within the environment in which the autonomous vehicle is located present a greater risk to the autonomous vehicle than the objects previously detected within that environment. In particular, the object risk comparator 202 compares the currently detected objects against the previously detected objects to determine whether the risk associated with the currently detected objects being within the environment in which the autonomous vehicle is located is greater than the risk associated with the previously detected objects being within that environment.
  • The processing by the object risk comparator 202 can be performed to determine whether a number of detected objects has changed over some amount of time (e.g., one second or less), whether the types of detected objects have changed over some amount of time, whether a location of one or more of the detected objects within the environment in which the autonomous vehicle is located has changed over some amount of time, or whether there is another change in the objects detected at two different times.
  • The object risk comparator 202 can use rules or definitions of risks to determine whether the currently detected objects present a greater risk to the autonomous vehicle than the previously detected objects.
  • For example, different objects can be defined to have different risks based on their type and their estimated distance from the autonomous vehicle.
  • Different rules can be used to indicate how the detection of objects should be processed based on, for example, when the object was detected, whether a previously detected object expected to still be within the environment in which the autonomous vehicle is located is no longer detected within that environment, whether the classification of a detected object has changed based on the information about the objects 212, or other criteria.
  • The rules or definitions of risk used by the object risk comparator 202 may, for example, be determined based on a machine learning model or other machine intelligence approach. For example, a neural network or other structure can be trained according to training data sets representing different objects and their relationships to a vehicle, and the information modeled based on that training can be used to determine the rules or definitions of risk used by the object risk comparator 202. In another example, the rules or definitions of risk may be determined using a deterministic risk framework other than one based in machine learning. The rules or definitions of risk used by the object risk comparator 202 may be configurable, for example, by a user of the autonomous vehicle.
  • The object risk comparator 202 outputs a signal indicating whether the objects currently detected within the environment in which the autonomous vehicle is located present a greater risk to the autonomous vehicle than the objects previously detected within that environment.
  • The signal may, for example, include a flag having a first value when the currently detected objects present a greater risk to the autonomous vehicle than the previously detected objects and having a second value when they do not, as in the sketch below.
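  • A sketch of how the comparator's flag could be produced; risk_of stands in for the configurable rules or risk definitions discussed above (e.g., risk by object type and estimated distance) and is an assumption of this example.

```python
def object_risk_flag(current_objects, previous_objects, risk_of):
    """Return the first value (True) when the currently detected objects
    present a greater risk than the previously detected objects, and the
    second value (False) otherwise."""
    current_risk = sum(risk_of(obj) for obj in current_objects)
    previous_risk = sum(risk_of(obj) for obj in previous_objects)
    return current_risk > previous_risk
```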
  • The object prediction risk comparator 204 receives information about the object predictions 214 representing expected behaviors of the objects 212 within the environment in which the autonomous vehicle is located.
  • The information about the object predictions 214 received by the object prediction risk comparator 204 includes a list of object predictions for objects currently detected within the environment in which the autonomous vehicle is located and a list of object predictions for objects previously detected within that environment.
  • The object predictions for the objects previously detected within the environment in which the autonomous vehicle is located can be retrieved from a buffer or other memory structure that temporarily stores information used for processing at the operational risk monitor module 200.
  • A list of object predictions, such as the list of object predictions for the currently detected objects or the list of object predictions for the previously detected objects, may be, include, or otherwise refer to a data set which indicates the object predictions.
  • The object prediction risk comparator 204 uses the information about the object predictions 214 to determine whether the expected behaviors of the objects currently detected within the environment in which the autonomous vehicle is located present a greater risk to the autonomous vehicle than the expected behaviors of the objects previously detected within that environment. In particular, the object prediction risk comparator 204 compares the expected behaviors of the currently detected objects against the expected behaviors of the previously detected objects to determine whether the risk associated with the expected behaviors of the currently detected objects within the environment in which the autonomous vehicle is located is greater than the risk associated with the expected behaviors of the previously detected objects within that environment.
  • The processing by the object prediction risk comparator 204 can be performed to determine whether a detected object is expected to move closer to the autonomous vehicle, whether a detected object is expected to remain in the same location within the environment in which the autonomous vehicle is located or leave that environment altogether, whether and how a velocity at which a detected object moves is expected to change within the environment in which the autonomous vehicle is located, or whether there is another change in the expected behavior of detected objects.
  • The object prediction risk comparator 204 can use rules or definitions of risks to determine whether the object predictions for the currently detected objects present a greater risk to the autonomous vehicle than the object predictions for the previously detected objects.
  • For example, different object predictions can be defined to have different risks based on the object to which they relate and the type of behaviors predicted thereby.
  • Different rules can be used to indicate how the object predictions should be processed based on, for example, whether the predicted movement path of a detected object has changed, whether a detected object predicted to remain stationary has instead moved, whether a detected object predicted to move has instead remained stationary, whether a detected object is moving too close to the autonomous vehicle, or other criteria.
  • The rules or definitions of risk used by the object prediction risk comparator 204 may, for example, be determined based on a machine learning model or other machine intelligence approach. For example, a neural network or other structure can be trained according to training data sets representing different objects and their expected behaviors, and the information modeled based on that training can be used to determine the rules or definitions of risk used by the object prediction risk comparator 204. In another example, the rules or definitions of risk may be determined using a deterministic risk framework other than one based in machine learning. The rules or definitions of risk used by the object prediction risk comparator 204 may be configurable, for example, by a user of the autonomous vehicle.
  • The object prediction risk comparator 204 outputs a signal indicating whether the object predictions for the objects currently detected within the environment in which the autonomous vehicle is located present a greater risk to the autonomous vehicle than the object predictions for the objects previously detected within that environment.
  • The signal may, for example, include a flag having a first value when the object predictions for the currently detected objects present a greater risk to the autonomous vehicle than the object predictions for the previously detected objects and having a second value when they do not.
  • The planned path comparator 206 receives information about the planned paths 216 representing intended sequences of actions used for the autonomous vehicle to travel within the environment in which the autonomous vehicle is located.
  • The information about the planned paths 216 received by the planned path comparator 206 includes a current planned path for the autonomous vehicle to travel within the environment in which it is located and one or more previously determined planned paths for the autonomous vehicle.
  • The one or more previously determined planned paths for the autonomous vehicle can be retrieved from a buffer or other memory structure that temporarily stores information used for processing at the operational risk monitor module 200.
  • The one or more previously determined planned paths for the autonomous vehicle may be planned paths determined within or less than some amount of time (e.g., one second) of the determining of the current planned path.
  • The planned path comparator 206 uses the information about the planned paths 216 to determine whether a planned path for the autonomous vehicle has changed. In particular, the planned path comparator 206 compares the current planned path against the one or more previously determined planned paths (e.g., against a most recently determined one of the one or more previously determined planned paths) to determine whether the current planned path is different from the one or more previously determined planned paths. The planned path comparator 206 outputs a signal indicating whether the planned path for the autonomous vehicle has changed. The signal may, for example, include a flag having a first value when the planned path for the autonomous vehicle has changed and having a second value when the planned path for the autonomous vehicle has not changed.
  • the operational risk monitor module determines the remedial action 210 using a systematic monitor, a heuristic monitor, and/or a sensitivity monitor. For example, where the operational risk processor 208 uses a systematic monitor, the operational risk processor 208 uses the signals output from the object risk comparator 202 , the object prediction risk comparator 204 , and the planned path comparator 206 to determine the remedial action 210 . For example, where the signals output from the object risk comparator 202 , the object prediction risk comparator 204 , and the planned path comparator 206 are or otherwise include flags, the operational risk processor 208 can use those flags to determine the operational risk for the autonomous vehicle.
  • the operational risk processor 208 can then use that operational risk to determine whether the remedial action 210 should be executed, for example, using one or more control system modules of the autonomous vehicle.
  • the operational risk processor 208 uses input information including the objects 212 , the object predictions 214 , and the constraints 218 to determine whether a systematic rule used by the systematic monitor functionality of the operational risk monitor module 200 is violated, for example, based on static scene behaviors and abnormal dynamic scene behaviors of a current scene synthesized based on the objects 212 , the object predictions 214 , and the constraints 218 and further based on previous abnormal dynamic scene behaviors of a previous scene synthesized based on previous input information.
  • the operational risk monitor 200 uses a sensitivity monitor
  • the operational risk processes uses input information including the objects 212 , the object predictions 214 , and the planned paths 216 to evaluate planned paths based on updated scene information synthesized based on the newly detected objects and object predictions. Implementations and examples for using a systematic monitor, a heuristic monitor, or a sensitivity monitor for autonomous vehicle control are respectively described below with respect to FIGS. 7, 8, and 9 .
  • Implementations of the operational risk monitor module 200 may differ from what is shown and described with respect to FIG. 2 .
  • the object prediction risk comparator 204 may also receive the information about the objects 212 .
  • the object prediction risk comparator 204 can apply rules against the objects 212 using the object predictions 214 to determine whether a rule violation has occurred.
  • the same machine learning model, other machine intelligence approach, or other deterministic risk framework, or the same combination thereof, can be used by both of the object risk comparator 202 and the object prediction risk comparator 204 .
  • the operational risk processor 208 may perform pre-processing against the input received from the object risk comparator 202 , the object prediction risk comparator 204 , and the planned path comparator 206 .
  • the pre-processing can include determining whether further processing of the input received from the object risk comparator 202 , the object prediction risk comparator 204 , and the planned path comparator 206 based on the values of that input.
  • the operational risk processor 208 may be configured to stop processing input received from the object risk comparator 202 , the object prediction risk comparator 204 , and the planned path comparator 206 in some cases.
  • the operational risk processor 208 can pre-process the values of the flags output by the object risk comparator 202 , the object prediction risk comparator 204 , and the planned path comparator 206 .
  • the operational risk processor 208 can use the pre-processing functionality thereof to stop further processing by the operational risk processor 208 where the value of the flag output by the object risk comparator 202 indicates that the currently detected objects do not present a greater risk to the autonomous vehicle than the previously detected objects, the value of the flag output by the object prediction risk comparator 204 indicates that the object predictions for the currently detected objects do not present a greater risk to the autonomous vehicle than the object predictions for the previously detected object, and the value of the flag output by the planned path comparator 206 indicates that the current planned path for the autonomous vehicle remains unchanged as compared to the one or more previously determined planned paths. Where the operational risk processor 208 stops further processing, the remedial action 210 is not produced, and the operational risk monitor module 200 does not cause a change in the controlling of the autonomous vehicle.
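  • The pre-processing gate, sketched under the assumption that each comparator reduces to a boolean flag; when nothing signals increased risk or a path change, the processor simply skips the remedial-action determination.

```python
def should_continue_processing(objects_riskier: bool,
                               predictions_riskier: bool,
                               path_changed: bool) -> bool:
    """Stop further processing (return False) only when all three flags
    indicate no increased risk and an unchanged planned path."""
    return objects_riskier or predictions_riskier or path_changed

# Usage: run the operational risk determination only when the gate opens.
if should_continue_processing(False, False, False):
    pass  # determine operational risk and, if warranted, remedial action
```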
  • FIG. 3 is a block diagram showing examples of modules and components used for operational risk assessment for autonomous vehicle control.
  • The modules and components shown in FIG. 3 include or otherwise refer to software and hardware of an autonomous vehicle (e.g., the autonomous vehicle 102 shown in FIG. 1), which are used to monitor the environment in which the autonomous vehicle is located and control the autonomous vehicle based on the monitoring.
  • The modules and components shown in FIG. 3 are included in or otherwise refer to aspects of the autonomous vehicle 102 shown in FIG. 1, including the sensors 104, the control system modules 106, and the operational risk monitor module 108.
  • An operational risk monitor module 300 receives signals output from each of a perception module 302, a prediction module 304, and a planning module 306.
  • The perception module 302 is a control system module (e.g., of the control system modules 106) which processes information measured or otherwise recorded using sensors 308 (e.g., the sensors 104) to detect objects within an environment in which the autonomous vehicle is located.
  • The signal output by the perception module 302 is or includes information about objects, for example, the objects 212 shown in FIG. 2.
  • The prediction module 304 is a control system module (e.g., of the control system modules 106) which processes information about the objects detected using the perception module 302 to determine object predictions for those objects.
  • The signal output by the prediction module 304 is or includes information about object predictions, for example, the object predictions 214 shown in FIG. 2.
  • The planning module 306 is a control system module (e.g., of the control system modules 106) which processes information received from a control constraints module 310 (e.g., of the control system modules 106) along with the object predictions determined using the prediction module 304 to determine planned paths for the autonomous vehicle.
  • The signal output by the planning module 306 is or includes information about planned paths, for example, the planned paths 216 shown in FIG. 2.
  • The operational risk monitor module 300 uses systematic monitoring, heuristic monitoring, and/or sensitivity monitoring to evaluate changes in operational risk for the autonomous vehicle.
  • With systematic monitoring, the operational risk monitor module 300 uses the signals output by the perception module 302, the prediction module 304, and the planning module 306 to determine an operational risk for the autonomous vehicle and further to determine whether and how to cause a change in the controlling of the autonomous vehicle based on that operational risk.
  • With heuristic monitoring, the operational risk monitor module 300 uses information taken as input to the planning module 306 to determine whether a planned path produced using that information could cause future operational risk estimates to exceed a threshold.
  • With sensitivity monitoring, the operational risk monitor module 300 determines whether perturbations to a planned path cause the operational risk to increase enough to merit the performance of a remedial action.
  • The operational risk monitor module 300 outputs a signal indicating whether and how to cause a change in the controlling of the autonomous vehicle, based on the determined operational risk, to a platform control module 312 (e.g., of the control system modules 106).
  • The platform control module 312 represents functionality used to control the autonomous vehicle based on the signal output by the operational risk monitor module 300.
  • For example, the platform control module 312 can generate one or more actuator outputs for use in controlling aspects of the autonomous vehicle based on the signal received from the operational risk monitor module 300.
  • The signal transmitted to the platform control module 312 from the operational risk monitor module 300 can be or include a control abort command that the platform control module 312 uses to perform an emergency stop of the autonomous vehicle.
  • The signal can instead be or include a velocity change command that the platform control module 312 uses to change a current speed of the autonomous vehicle.
  • Alternatively, the signal can be or include a validation signal that the platform control module 312 uses to keep the autonomous vehicle traveling along its current planned path. These three signal types are sketched below.
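  • The three signal types lend themselves to a small dispatch, sketched here with a hypothetical actuator interface (emergency_stop, set_target_speed, continue_planned_path are invented names, as is the halved-speed policy).

```python
from enum import Enum, auto

class MonitorSignal(Enum):
    CONTROL_ABORT = auto()    # perform an emergency stop
    VELOCITY_CHANGE = auto()  # change the current speed
    VALIDATION = auto()       # keep following the current planned path

def handle_monitor_signal(signal: MonitorSignal, platform_ctrl) -> None:
    """Translate the operational risk monitor's signal into actuator
    outputs via the platform control module."""
    if signal is MonitorSignal.CONTROL_ABORT:
        platform_ctrl.emergency_stop()
    elif signal is MonitorSignal.VELOCITY_CHANGE:
        platform_ctrl.set_target_speed(platform_ctrl.current_speed * 0.5)
    else:
        platform_ctrl.continue_planned_path()
```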
  • The operational risk monitor module 300 can also transmit other signals to different control system modules of the autonomous vehicle.
  • For example, the operational risk monitor module 300 can transmit a signal indicating a perception incident flag to the perception module 302, such as to alert the perception module 302 as to a determination by the operational risk monitor module 300 that an object detected within the environment in which the autonomous vehicle is located presents an incident risk to the autonomous vehicle.
  • The incident risk indicated using the perception incident flag is a first incident risk representing a risk of an incident associated with an object.
  • Similarly, the operational risk monitor module 300 can transmit a signal indicating a prediction incident flag to the prediction module 304, such as to alert the prediction module 304 as to a determination by the operational risk monitor module 300 that an expected behavior of an object detected within the environment in which the autonomous vehicle is located presents an incident risk to the autonomous vehicle.
  • The incident risk indicated using the prediction incident flag is a second incident risk representing a risk of an incident associated with an object prediction.
  • The operational risk monitor module 300 can transmit a signal indicating information to be reported or stored remotely to a communication interface 314.
  • The communication interface 314 is a hardware component of the autonomous vehicle which is used to communicate over a network with one or more remote computers (e.g., a server or other computer at a remote operating center).
  • The communication interface 314 communicates using one or more network protocols, such as Ethernet, TCP, IP, power line communication, Wi-Fi, Bluetooth®, infrared, GPRS, GSM, CDMA, Z-Wave, ZigBee, another protocol, or a combination thereof.
  • The operational risk monitor module 300 can transmit a platform control constraints flag to the control constraints module 310, such as to cause the control constraints module 310 to change one or more platform control constraints of the autonomous vehicle.
  • For example, the platform control constraints flag can indicate to adjust one or more constraints and/or the priorities of those constraints used for an operational aspect of the autonomous vehicle, such as a propulsion system of the autonomous vehicle, a braking system of the autonomous vehicle, or another system of the autonomous vehicle.
  • The operational risk monitor module 300 can also receive a signal directly from the control constraints module 310, for example, for use in determining the operational risk for the autonomous vehicle and/or for use in determining whether and how to cause a change in the controlling of the autonomous vehicle based on that operational risk.
  • The operational risk monitor module 300 uses information stored in a memory buffer 316 to determine an operational risk for the autonomous vehicle and further to determine whether and how to cause a change in the controlling of the autonomous vehicle based on that operational risk.
  • The memory buffer 316 stores information about objects, object predictions, and planned paths, respectively received from the perception module 302, the prediction module 304, and the planning module 306, as previously detected objects, previously determined object predictions, and previously determined planned paths.
  • The operational risk monitor module 300 retrieves the previously detected objects, previously determined object predictions, and previously determined planned paths from the memory buffer 316 as necessary.
  • The memory buffer 316 stores the information about the previously detected objects, previously determined object predictions, and previously determined planned paths for a limited amount of time, such as according to an eviction policy for the memory buffer 316.
  • The eviction policy for the memory buffer 316 may be fixed or configurable. Upon the expiration of the time set by the eviction policy for given data, the memory buffer 316 deletes the given data, such as to prepare space for storing new data received from the perception module 302, the prediction module 304, and the planning module 306. A minimal buffer sketch appears below.
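  • A minimal sketch of such a buffer with a fixed-age eviction policy; the one second default mirrors the time scales mentioned earlier but, like the rest of the class, is an assumption of this example.

```python
import time
from collections import deque

class TimedBuffer:
    """Store items and evict any entry older than max_age_s, making room
    for new data from the perception, prediction, and planning modules."""

    def __init__(self, max_age_s: float = 1.0):
        self.max_age_s = max_age_s
        self._entries = deque()  # (monotonic timestamp, item) pairs

    def push(self, item) -> None:
        self._entries.append((time.monotonic(), item))
        self._evict()

    def items(self) -> list:
        self._evict()
        return [item for _, item in self._entries]

    def _evict(self) -> None:
        cutoff = time.monotonic() - self.max_age_s
        while self._entries and self._entries[0][0] < cutoff:
            self._entries.popleft()
```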
  • The control system modules may include other modules used for controlling aspects of the autonomous vehicle.
  • For example, an onboard navigation module 318 can be used to indicate geographical information about the location of the autonomous vehicle.
  • A localization module 320 can use a navigation waypoint command (e.g., indicating the geographical information) output from the onboard navigation module 318 along with information about objects detected within an environment in which the autonomous vehicle is located (e.g., as output by the perception module 302) to classify and determine the location of the autonomous vehicle.
  • A localized waypoint command (e.g., including information about the classified and determined location of the autonomous vehicle) may then be made available, for example, to the control constraints module 310.
  • Implementations of the modules and components used for operational risk assessment for autonomous vehicle control may differ from what is shown and described with respect to FIG. 3 .
  • information determined at and/or output by the operational risk monitor module 300 can be reported to a user of the autonomous vehicle.
  • the information reported to the user of the autonomous vehicle can include the operational risk determined using the inputs to the operational risk monitor module 300 and/or one or more of the outputs from the operational risk monitor module 300 .
  • the user of the autonomous vehicle may refer to the driver of the autonomous vehicle, a passenger of the autonomous vehicle, an owner or other registrant of the autonomous vehicle, or a person who manages the operation of the autonomous vehicle (e.g., at a remote operation center), or a third party observer, such as an insurance or governing authority agent who requires or requests data related to the operation of the autonomous vehicle.
  • the communication interface 314 can be used for the reporting by transmitting the information to a user device of the user of the autonomous vehicle.
  • the onboard navigation module 318 can be used for reporting the information to the user of the autonomous vehicle.
  • FIG. 4 is an illustration showing an example of a systematic monitor used by an operational risk monitor module, which may, for example, be the operational risk monitor module 300 shown in FIG. 3 .
  • the systematic monitor represents functionality of the operational risk monitor module for determining an operational risk of an autonomous vehicle 400 , which may, for example, be the autonomous vehicle 102 shown in FIG. 1 , and determining whether and how to cause a change in the controlling of the autonomous vehicle 400 based on a comparison between the operational risk and a previous operational risk estimation.
  • An iso-boundary operational risk range 402 represents the field of view of the sensors of the autonomous vehicle 400 (e.g., the sensors 104 ), which may cover a 360 degree rotation around a center point of the autonomous vehicle 400 so as to cover an area around the autonomous vehicle 400 .
  • a planned path 404 indicates the direction of travel of the autonomous vehicle 400 .
  • An object 406 (e.g., the object 110) is located nearby to the autonomous vehicle 400.
  • the operational risk monitor module of the autonomous vehicle 400 detects the object 406 (e.g., a pedestrian) within the environment in which the autonomous vehicle 400 is located.
  • the operational risk monitor module determines a current operational risk for the autonomous vehicle 400 along the planned path 404 based on the detection of the object 406 and a prediction of how the object 406 will behave. For example, where the object 406 is a pedestrian, the object prediction may be a prediction of whether the pedestrian will walk in front of the autonomous vehicle 400.
  • the operational risk monitor module compares the current operational risk against a previous operational risk estimation, which may, for example, have been determined one second earlier or another time earlier when the object 406 was farther from the autonomous vehicle 400 .
  • the systematic monitor of the operational risk monitor module determines a remedial action to perform to prevent a potential incident related to the predicted behavior of the object 406 .
  • the operational risk monitor module generates a signal usable by one or more control system modules of the autonomous vehicle 400 (e.g., the control system modules 106 ) to change a controlling of the autonomous vehicle 400 .
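  • A minimal Python sketch of the comparison just described follows; the "reduce_speed" placeholder action and the numeric risks are illustrative assumptions.
```python
def systematic_monitor_step(current_risk, previous_risk_estimation):
    """Compare the current operational risk against the previous estimation.

    A current risk above the previous estimation indicates the earlier
    estimate was too low (an operational risk underestimation).
    """
    if current_risk > previous_risk_estimation:
        return {"risk_underestimation_flag": True, "remedial_action": "reduce_speed"}
    return {"risk_underestimation_flag": False, "remedial_action": None}

# e.g., risk rose from 0.2 to 0.6 as the pedestrian neared the planned path
print(systematic_monitor_step(current_risk=0.6, previous_risk_estimation=0.2))
```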
  • FIG. 5 is an illustration showing an example of a heuristic monitor used by an operational risk monitor module, which may, for example, be the operational risk monitor module 300 shown in FIG. 3 .
  • the heuristic monitor represents functionality of the operational risk monitor module for determining an operational risk of an autonomous vehicle 500 , which may, for example, be the autonomous vehicle 102 shown in FIG. 1 , and determining whether and how to cause a change in the controlling of the autonomous vehicle 500 based on whether the operational risk results in an anticipated rule violation.
  • An iso-boundary operational risk range 502 represents the field of view of the sensors of the autonomous vehicle 500 (e.g., the sensors 104 ), which may cover a 360 degree rotation around a center point of the autonomous vehicle 500 so as to cover an area around the autonomous vehicle 500 .
  • a first distance 504 from the autonomous vehicle 500 and a second distance 506 from the autonomous vehicle 500 are each associated with different rules used by the heuristic monitor of the operational risk monitor module.
  • An object 508 (e.g., the object 110 ) is located nearby to the autonomous vehicle 500 .
  • the operational risk monitor module of the autonomous vehicle 500 detects the object 508 (e.g., a pedestrian) within the environment in which the autonomous vehicle 500 is located.
  • the operational risk monitor module detects the object 508 as being within the second distance 506 from the autonomous vehicle 500 , but not within the first distance 504 .
  • a rule associated with the first distance 504 is thus not implicated; however, a rule associated with the second distance 506 is.
  • the rule associated with the second distance 506 may indicate that if an operational risk determined for the autonomous vehicle 500 exceeds a threshold when a detected object (e.g., the object 508 ) is within the second distance 506 , a remedial action should be executed to cause a change in the controlling of the autonomous vehicle 500 .
  • the operational risk monitor module of the autonomous vehicle 500 determines an operational risk of the autonomous vehicle 500 based on the detection of the object 508 and a prediction of how the object 508 will behave. For example, where the object 508 is a pedestrian, the object prediction may be a prediction of whether the pedestrian will walk in front of the autonomous vehicle 500 . The operational risk monitor module then determines whether the operational risk exceeds a threshold for the rule associated with the second distance 506 .
  • the heuristic monitor of the operational risk monitor module determines a remedial action for execution to prevent a potential incident related to the predicted behavior of the object 508 .
  • the operational risk monitor module generates a signal usable by one or more control system modules of the autonomous vehicle 500 (e.g., the control system modules 106 ) to change a controlling of the autonomous vehicle 500 .
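  • The distance-band rule check just described might look like the following Python sketch; the two-meter and ten-meter distances, the 0.5 risk threshold, and the action names are assumptions for illustration.
```python
import math

def heuristic_monitor_step(vehicle_xy, object_xy, operational_risk,
                           first_distance_m=2.0, second_distance_m=10.0,
                           risk_threshold=0.5):
    """Return a remedial action if a distance-band rule is implicated."""
    d = math.dist(vehicle_xy, object_xy)
    if d <= first_distance_m:
        return "emergency_stop"        # rule associated with the first distance
    if d <= second_distance_m and operational_risk > risk_threshold:
        return "reduce_speed"          # rule associated with the second distance
    return None                        # no rule implicated

# object within the second distance and risk above threshold -> remedial action
print(heuristic_monitor_step((0.0, 0.0), (0.0, 6.0), operational_risk=0.7))
```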
  • FIG. 6 is an illustration showing an example of an extended heuristic monitor used by an operational risk monitor module, which may, for example, be the operational risk monitor module 300 shown in FIG. 3 .
  • the extended heuristic monitor represents extended functionality of the heuristic monitor described with respect to FIG. 5 , in which rules are violated by abnormal behaviors of objects within an environment in which an autonomous vehicle 600 , which may, for example, be the autonomous vehicle 102 shown in FIG. 1 , is located, regardless of whether an operational risk determined based on an object detection exceeds a threshold.
  • An iso-boundary operational risk range 602 represents the field of view of the sensors of the autonomous vehicle 600 (e.g., the sensors 104 ), which may cover a 360 degree rotation around a center point of the autonomous vehicle 600 so as to cover an area around the autonomous vehicle 600 .
  • a first distance 604 from the autonomous vehicle 600 and a second distance 606 from the autonomous vehicle 600 are each associated with different rules used by the extended heuristic monitor of the operational risk monitor module.
  • a first object 608 and a second object 610 (e.g., each of which being the object 110 or another object) are each located nearby to the autonomous vehicle 600 .
  • the operational risk monitor module of the autonomous vehicle 600 detects the first object 608 and the second object 610 (e.g., one or both of which being a pedestrian) within the environment in which the autonomous vehicle 600 is located.
  • the operational risk monitor module detects the first object 608 and the second object 610 as each being within the second distance 606 from the autonomous vehicle 600 , but not within the first distance 604 .
  • a rule associated with the first distance 604 is thus not implicated; however, a rule associated with the second distance 606 is.
  • the rule associated with the second distance 606 may indicate that if a detected object (e.g., the first object 608 and/or the second object 610 ) is within the second distance 606 and performs an unexpected behavior, a remedial action should be executed to cause a change in the controlling of the autonomous vehicle 600 .
  • the operational risk monitor module of the autonomous vehicle 600 determines whether either of the first object 608 or the second object 610 performs an unexpected behavior based on the detections of the first object 608 and the second object 610 and based on predictions of how the first object 608 and the second object 610 will behave. For example, where the first object 608 is a pedestrian and the second object 610 is a garbage can, the object prediction for the first object 608 may be a prediction of whether the pedestrian will walk in front of the autonomous vehicle 600 and the object prediction for the second object 610 may be a prediction that the garbage can will remain in place.
  • the extended heuristic monitor of the operational risk monitor module determines a remedial action to perform to prevent a potential incident related to the unexpected behavior of the second object 610.
  • the operational risk monitor module generates a signal usable by one or more control system modules of the autonomous vehicle 600 (e.g., the control system modules 106 ) to change a controlling of the autonomous vehicle 600 .
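  • As a sketch of the extended behavior just described, the check below triggers on any unexpected behavior within the second distance, regardless of a risk threshold; the detection dictionary shape is an assumption for illustration.
```python
def extended_heuristic_step(detections, second_distance_m=10.0):
    """Trigger a remedial action on unexpected behavior inside the second
    distance, regardless of whether an operational risk threshold is exceeded.

    Each detection is assumed to be a dict with keys
    'distance_m', 'predicted_behavior', and 'observed_behavior'.
    """
    for det in detections:
        if (det["distance_m"] <= second_distance_m
                and det["observed_behavior"] != det["predicted_behavior"]):
            return "reduce_speed"  # unexpected behavior -> remedial action
    return None

detections = [
    {"distance_m": 6.0, "predicted_behavior": "walk_along_sidewalk",
     "observed_behavior": "walk_along_sidewalk"},     # pedestrian as expected
    {"distance_m": 8.0, "predicted_behavior": "remain_in_place",
     "observed_behavior": "moving"},                  # garbage can moved
]
print(extended_heuristic_step(detections))
```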
  • FIG. 7 is a flowchart showing an example of a technique 700 for autonomous vehicle control based on a determined operational risk using systematic monitoring.
  • FIG. 8 is a flowchart showing an example of a technique 800 for autonomous vehicle control based on a determined operational risk using heuristic monitoring.
  • FIG. 9 is a flowchart showing an example of a technique 900 for autonomous vehicle control based on a determined operational risk using sensitivity monitoring.
  • the technique 700 , the technique 800 , and/or the technique 900 can be executed using computing devices, such as the systems, hardware, and software described with respect to FIGS. 1-6 .
  • the technique 700 , the technique 800 , and/or the technique 900 can be performed, for example, by executing a machine-readable program or other computer-executable instructions, such as routines, instructions, programs, or other code.
  • the steps, or operations, of the technique 700 , the technique 800 , the technique 900 , or another technique, method, process, or algorithm described in connection with the implementations disclosed herein can be implemented directly in hardware, firmware, software executed by hardware, circuitry, or a combination thereof.
  • the technique 700 , the technique 800 , and the technique 900 are each depicted and described herein as a series of steps or operations. However, the steps or operations in accordance with this disclosure can occur in various orders and/or concurrently. Additionally, other steps or operations not presented and described herein may be used. Furthermore, not all illustrated steps or operations may be required to implement a technique in accordance with the disclosed subject matter.
  • the technique 700 for autonomous vehicle control based on a determined operational risk using systematic monitoring is shown.
  • the technique 700 may, for example, represent operations performed at an operational risk monitor module (e.g., the operational risk monitor module 300 shown in FIG. 3 ) using systematic monitoring.
  • input information associated with the autonomous vehicle is received at an operational risk monitor module (e.g., the operational risk monitor module 300 shown in FIG. 3).
  • the input information indicates one or more objects detected within an environment in which the autonomous vehicle is located and one or more object predictions representing expected behaviors of the one or more objects.
  • the input information may, for example, be received from one or more control system modules of the autonomous vehicle.
  • the one or more objects may be detected using a perception module that processes information measured or otherwise recorded using sensors of the autonomous vehicle.
  • the one or more object predictions may be determined using a prediction module that processes the detected objects based on expected behaviors of those detected objects.
  • a planned path, referred to as the Nth planned path, is identified. Identifying the Nth planned path includes reading the Nth planned path from memory. For example, the Nth planned path can be identified by retrieving data associated with the Nth planned path from a buffer (e.g., the memory buffer 316 shown in FIG. 3).
  • the Nth planned path represents an intended sequence of actions used for the autonomous vehicle to travel within the environment in which the autonomous vehicle is located.
  • the planned path is one of a number of planned path candidates stored in the buffer.
  • the planned path candidates include planned paths which have been selected (e.g., by the planning module 306 ) as winning path candidates.
  • the planned path candidates can also include planned paths which were determined (e.g., by the planning module 306 ) as being non-winning path candidates. For example, a number of planned path candidates may be evaluated at a given processing instant to select one of those planned path candidates as a winning path candidate, such as based on the winning path candidate having a highest probability of safety or a lowest probability of risk. The other planned path candidates which were not selected as the winning path candidate are non-winning path candidates.
  • an updated scene for the Nth planned path is synthesized using the input information.
  • the updated scene for the Nth planned path refers to an understanding of the environment in which the autonomous vehicle is located as the autonomous vehicle travels along the Nth planned path, according to the newly detected objects within that environment and the newly predicted behaviors of those detected objects.
  • the Nth planned path may be a planned path which has already been evaluated using an operational risk monitor module as described herein. During such prior evaluation, a scene for the Nth planned path may have been synthesized based on then-detected objects and then-predicted behaviors of those objects.
  • the updated scene can be synthesized by applying the Nth planned path to the environment in which the autonomous vehicle is located.
  • the scene previously synthesized for the Nth planned path is updated according to the newly detected objects and the newly predicted behaviors of those objects.
  • the updated scene can be synthesized by projecting behaviors of the detected objects according to the object predictions against the Nth planned path, such as to predict a relationship between the expected behaviors of the detected objects and the autonomous vehicle as the autonomous vehicle travels along the Nth planned path.
  • an operational risk is determined for the updated scene. Determining the operational risk can include retrieving a cost value for the updated scene, such as based on the input information and based on the Nth planned path. For example, a cost function can be called to retrieve the cost value, for example, from a control system module of the autonomous vehicle. Alternatively, the cost value can be determined as part of the operations for determining the operational risk, for example, where the control system modules are not used to determine or otherwise maintain cost value information.
  • the operational risk represents a probability that the autonomous vehicle will encounter an event by continuing to travel along the Nth planned path based on the detected objects and object predictions.
  • the operational risk further represents a severity of the event, for example, based on the type of event and the expected location and behavior of the event.
  • the operational risk is determined using the cost value.
  • the cost value can be a value associated with the updated scene information.
  • a deterministic risk framework or other modeling is used to identify cost values for different scenes based on the relationships between the detected objects, the expected behaviors of those objects, and the planned path.
  • the previous operational risk estimation is an estimation of operational risk determined for an N-1th planned path.
  • the Nth planned path may represent a planned path determined some amount of time (e.g., one hundredth of a second) after the N-1th planned path was determined.
  • the Nth planned path may therefore represent the planned path of the autonomous vehicle some amount of time after the N-1th planned path.
  • the previous operational risk estimation may be stored in a buffer (e.g., the memory buffer 316). Determining whether the operational risk is greater than the previous operational risk estimation may include retrieving the previous operational risk estimation from memory and comparing the retrieved estimation against the operational risk.
  • the determination as to whether the operational risk is greater than the previous operational risk estimation is used to indicate whether an operational risk underestimation occurred. That is, where the operational risk is greater than the previous operational risk estimation, a determination can be made that the previous operational risk estimation is too low.
  • a signal is generated to indicate the operational risk underestimation.
  • the signal is transmitted to one or more control system modules to cause a change in the controlling of the autonomous vehicle.
  • the signal can be transmitted to cause one or more control system modules of the autonomous vehicle to control the autonomous vehicle by performing a remedial action to prevent the autonomous vehicle from encountering an event associated with the operational risk.
  • a signal is generated to indicate that there is no underestimation in operational risk as represented by the previous operational risk estimate.
  • the signal is transmitted to one or more control system modules to allow the autonomous vehicle to maintain a safe state of the autonomous vehicle, for example, by allowing the autonomous vehicle to continue traveling along its current planned path.
  • the value of N is increased by one.
  • the technique 700 then returns to 702 where new input information indicating objects and object predictions is received, and then to 704 where a planned path corresponding to the new value of N is read from memory (e.g., retrieved from the buffer).
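  • A skeleton of this loop in Python is sketched below; the five hook functions are hypothetical stand-ins for the surrounding control system modules and buffer, not interfaces defined by this disclosure.
```python
def run_systematic_monitoring(num_steps, receive_input, read_planned_path,
                              synthesize_scene, cost_value, send_signal):
    """Sketch of the systematic monitoring loop described above.

    Each pass receives new input, reads the Nth planned path from memory,
    synthesizes the updated scene, determines the operational risk from a
    cost value, compares it to the N-1th estimation, and signals the result.
    """
    previous_estimation = 0.0
    for n in range(1, num_steps + 1):
        objects, predictions = receive_input()   # detected objects and predictions
        path = read_planned_path(n)              # Nth planned path from the buffer
        scene = synthesize_scene(path, objects, predictions)
        risk = cost_value(scene)                 # operational risk for the scene
        send_signal({"n": n,
                     "risk_underestimation_flag": risk > previous_estimation})
        previous_estimation = risk               # the N-1th estimation next pass
```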
  • the objects and object predictions of the new input information may be the same as those of the input information used to process the planned path at the previous value of N. In other cases, they may not be exactly the same, but may nonetheless not differ significantly.
  • the signals generated at 712 or at 716 may indicate a value of a risk underestimation flag.
  • the risk underestimation flag may have a first value where an operational risk underestimation represented within the previous operational risk estimation is determined and a second value where no such underestimation is determined.
  • the control system modules which receive the signal indicating the value of the risk underestimation flag can use the value of the flag indicated within the signal to determine whether to cause a change in the controlling of the autonomous vehicle.
  • in some implementations, a signal is not generated when no operational risk increase is determined.
  • the technique 700 may simply end rather than generate a signal.
  • determining whether the Nth operational risk is greater than the previous operational risk estimation includes determining whether the Nth operational risk exceeds a threshold. For example, even where the Nth operational risk is greater than the previous operational risk estimation (e.g., where the previous operational risk estimation is zero), the Nth operational risk may still not be great enough to merit the performance of a remedial action to cause a change in the controlling of the autonomous vehicle.
  • the threshold may be configurable. For example, subsequent modeling performed using information processed to determine the Nth operational risk can be used to determine that the threshold should be changed.
  • the technique 700 can include selecting the planned path from a list of candidate planned paths.
  • each of the candidate planned paths may be a planned path identified using a control system module (e.g., the planning module 306 ).
  • the candidate planned paths may be evaluated based on an instantaneous operational risk limit defining a maximum acceptable operational risk at a given instance along the candidate planned paths.
  • the candidate planned path having the lowest probability of exceeding the instantaneous operational risk limit can be selected as the planned path.
  • the selection of a planned path may be constrained according to an instantaneous operational risk limit.
  • new candidate planned paths can be generated until the operational risk for a selected planned path is less than or equal to the instantaneous operational risk limit.
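  • One way to express this selection in Python, under the assumption that each candidate carries a list of instantaneous risks along its path, is sketched below.
```python
def select_planned_path(candidates, risk_limit, generate_candidates):
    """Select the candidate least likely to exceed the instantaneous limit;
    regenerate candidates until the selected path is within the limit."""
    while True:
        best = min(candidates, key=lambda c: max(c["instantaneous_risks"]))
        if max(best["instantaneous_risks"]) <= risk_limit:
            return best                        # winning path candidate
        candidates = generate_candidates()     # all candidates exceeded the limit

paths = [{"id": "A", "instantaneous_risks": [0.2, 0.4]},
         {"id": "B", "instantaneous_risks": [0.1, 0.7]}]
print(select_planned_path(paths, risk_limit=0.5,
                          generate_candidates=lambda: paths)["id"])  # -> A
```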
  • the technique 800 may, for example, represent operations performed at an operational risk monitor module (e.g., the operational risk monitor module 300 shown in FIG. 3 ) using heuristic monitoring.
  • input information associated with the autonomous vehicle is received at an operational risk monitor module (e.g., the operational risk monitor module 300 shown in FIG. 3).
  • the input information indicates one or more objects detected within an environment in which the autonomous vehicle is located and one or more object predictions representing expected behaviors of the one or more objects.
  • the input information may, for example, be received from one or more control system modules of the autonomous vehicle.
  • the one or more objects may be detected using a perception module that processes information measured or otherwise recorded using sensors of the autonomous vehicle.
  • the one or more object predictions may be determined using a prediction module that processes the detected objects based on expected behaviors of those detected objects.
  • the input information may further include one or more actuator control constraints representing actuation limits for the autonomous vehicle.
  • the actuator control constraints can relate to one or more of a maximum lateral or longitudinal traction limit, a maximum acceleration limit, a maximum braking limit, or a maximum steering angle.
  • a current scene synthesized for the autonomous vehicle based on the input information is evaluated to determine whether the current scene represents a violation of one or more first rules used for determining an operational risk for the autonomous vehicle.
  • the current scene can be synthesized based on the input information.
  • the first rules correspond to static scene behaviors of the current scene in which the static scene behaviors relate to objects being located too close to other objects, objects not being detected within the environment in which the autonomous vehicle is located, an excessive gap in the current scene, actuator control constraints of the autonomous vehicle being too low, or the like.
  • Evaluating the current scene to determine the current operational risk can include identifying a rule which is implicated by the detected objects and the object predictions. For example, one or more rules for controlling the autonomous vehicle based on the presence of certain objects and the prediction of certain behaviors of those objects may be used to determine the current operational risk for the autonomous vehicle.
  • Data indicating the object or objects detected within the environment in which the vehicle is located can be used to identify the rule to use for determining the current operational risk.
  • data associated with the rule (e.g., including the rule definition, conditions for using the rule, cost functions or inputs thereto, etc.) may be stored in a database or other data store.
  • the rule can be retrieved from the database or other data store using a query or other request indicating the objects and/or the object predictions.
  • the current operational risk represents a probability that the autonomous vehicle will encounter an event by continuing to travel within the environment in which the autonomous vehicle is located, such as based on the detected objects and object predictions.
  • the current operational risk is determined using a cost value.
  • the cost value can be a value calculated by applying values representative of the objects and/or the object predictions to a cost function.
  • a deterministic risk framework or other modeling is used to identify cost values for different scenes based on the relationships between the detected objects, the expected behaviors of those objects, and the relevant rules.
  • the current scene is evaluated against a previous scene synthesized for the autonomous vehicle to determine whether a degree of change between the current scene and the previous scene represents a violation of one or more second rules used for determining a risk underestimation.
  • the previous scene can be synthesized based on previous input information (e.g., based on information previously stored in a buffer).
  • the second rules correspond to abnormal dynamic scene behaviors of the current scene or of the previous scene in which the abnormal dynamic scene behaviors relate to objects appearing within the environment in which the autonomous vehicle is located too late, objects disappearing from the environment in which the autonomous vehicle is located too quickly, objects accelerating within the environment in which the autonomous vehicle is located too quickly, discontinuity between predicted object behaviors, or the like.
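  • The second-rule checks just enumerated might be sketched as follows; the acceleration limit and the treatment of every unseen track as "appearing too late" are crude, illustrative assumptions.
```python
def violates_second_rules(current_scene, previous_scene, dt_s, max_accel=8.0):
    """Flag abnormal dynamic scene behaviors between consecutive scenes.

    Scenes are assumed to map object id -> {'speed': float}; a missing track
    history is treated (crudely) as an object appearing too late.
    """
    for obj_id, cur in current_scene.items():
        prev = previous_scene.get(obj_id)
        if prev is None:
            return True   # object appeared within the environment too late
        if abs(cur["speed"] - prev["speed"]) / dt_s > max_accel:
            return True   # object accelerating too quickly
    for obj_id in previous_scene:
        if obj_id not in current_scene:
            return True   # object disappeared too quickly
    return False

prev = {1: {"speed": 1.2}, 2: {"speed": 0.0}}
cur = {1: {"speed": 1.3}}  # object 2 vanished between scenes
print(violates_second_rules(cur, prev, dt_s=0.1))  # -> True
```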
  • a signal indicating or otherwise used to indicate a remedial action to perform is generated.
  • the remedial action may be specified based on the first and/or second rules. For example, where the rule is used for situations where pedestrians are detected in front of the autonomous vehicle and are expected to remain in front of the autonomous vehicle, the remedial actions available for the autonomous vehicle to perform may be limited to an emergency stop. Alternatively, the remedial action may be one of a number of remedial actions selected using the operational risk monitor module.
  • the first and second rules are rules used by systematic monitor functionality of the operational risk monitor module.
  • a rule can be a configuration or definition of the autonomous vehicle, the environment in which the autonomous vehicle is located, or both, which triggers the systematic monitor to determine that a remedial action needs to be performed.
  • the rule can be a rule indicating to perform an emergency stop of the autonomous vehicle if a pedestrian or other living being is first detected within a defined distance of the front of the autonomous vehicle based on input information, or to abort operation if poor object behavior is predicted.
  • the rules may be defined at the time in which the operational risk monitor module is deployed within the autonomous vehicle. Alternatively, some or all of the rules may be configurable, for example, by a user of the autonomous vehicle.
  • the particular remedial action to perform can be selected using a deterministic risk framework indicating a most effective or otherwise most useful remedial action to perform given the current operational cost and given the detected objects and predicted object behaviors.
  • the particular remedial action to perform can be selected based on the particular value of the current operational risk.
  • each of the available remedial actions can be associated with a range of operational risks.
  • the operational risk monitor module can output a signal (e.g., including a value of a flag and/or other data) used by one or more control system modules of the autonomous vehicle to select the remedial action.
  • the operational risk monitor module can select the remedial action having a range which includes the current operational risk.
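  • The range-based selection just described can be sketched as below; the particular ranges and action names are assumptions for illustration.
```python
# Assumed mapping of operational risk ranges to remedial actions.
ACTION_RANGES = [
    (0.00, 0.30, None),              # low risk: maintain a safe state
    (0.30, 0.60, "reduce_speed"),
    (0.60, 0.85, "pull_over"),
    (0.85, 1.01, "emergency_stop"),
]

def select_remedial_action(operational_risk):
    """Select the remedial action whose range includes the current risk."""
    for low, high, action in ACTION_RANGES:
        if low <= operational_risk < high:
            return action
    raise ValueError("operational risk outside all configured ranges")

print(select_remedial_action(0.7))  # -> pull_over
```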
  • the signal indicating or otherwise used to indicate the remedial action is transmitted to one or more control system modules to cause a change in the controlling of the autonomous vehicle.
  • the remedial action can be initiated based on actuator control constraints of the system (e.g., as received within the input information or otherwise).
  • the signal can be transmitted to cause one or more control system modules of the autonomous vehicle to control the autonomous vehicle by performing the remedial action to prevent the autonomous vehicle from encountering an event associated with the current operational risk.
  • a flag received from the operational risk monitor module can be used by a perception module to adjust a confidence in a prediction for the autonomous vehicle.
  • a flag received from the operational risk monitor module can be used by a planning module to reduce a speed operation of the autonomous vehicle.
  • the heuristic monitoring described with respect to the technique 800 can be extended heuristic monitoring.
  • the rules used by the technique 800 may generally be rules used to determine operational risks based on the detection of objects and the prediction of object behaviors nearby the front of the autonomous vehicle while the autonomous vehicle travels forward.
  • extended heuristic monitoring may instead use rules which consider objects and object predictions within any direction of the autonomous vehicle regardless of travel direction.
  • an extended heuristic monitor may indicate to slow down or stop the autonomous vehicle when input information indicates behavior which can result in an identification of an underestimation of a previous operational risk estimate, such as based on a first detection of an unobstructed pedestrian within some distance (e.g., two meters) in any direction of the autonomous vehicle.
  • an extended heuristic monitor may indicate to keep a speed low when traffic density on a freeway is high and the autonomous vehicle is generally surrounded by slow moving traffic.
  • the technique 800 can include selecting the planned path from a list of candidate planned paths.
  • each of the candidate planned paths may be a planned path identified using a control system module (e.g., the planning module 306 ).
  • the candidate planned paths may be evaluated based on an instantaneous operational risk limit defining a maximum acceptable operational risk at a given instance along the candidate planned paths.
  • one of the candidate planned paths having operational risk that does not exceed the instantaneous operational risk limit can be selected as the planned path, which may be the candidate planned path having the greatest operational risk without exceeding the instantaneous operational risk limit.
  • the selection of a planned path may be constrained according to an instantaneous operational risk limit.
  • new candidate planned paths can be generated until the operational risk for a selected planned path is less than or equal to the instantaneous operational risk limit.
  • the technique 800 can include operations based on a determination that the current scene does not represent the violation of the one or more first rules or that the degree of change does not represent the violation of the one or more second rules. For example, in such a case, a signal can be generated and transmitted to the one or more control system modules to cause the one or more control system modules to control the autonomous vehicle by maintaining a safe state.
  • the technique 800 can include storing, within a buffer, data indicative of one or more of the current scene, the previous scene, the static scene behaviors, the abnormal dynamic scene behaviors, or the previous abnormal dynamic scene behaviors.
  • the stored data may be used for heuristic processing of a next current scene.
  • the technique 800 can further include deleting irrelevant data from the buffer.
  • the technique 800 can include determining that the previous abnormal dynamic scene behaviors are unrelated to the operational risk and then deleting data indicative of those previous abnormal dynamic scene behaviors from the buffer based on such determination.
  • the technique 900 may, for example, represent operations performed at an operational risk monitor module (e.g., the operational risk monitor module 300 shown in FIG. 3 ) using sensitivity monitoring.
  • input information associated with the autonomous vehicle is received at an operational risk monitor module (e.g., the operational risk monitor module 300 shown in FIG. 3).
  • the input information indicates one or more objects detected within an environment in which the autonomous vehicle is located and one or more object predictions representing expected behaviors of the one or more objects.
  • the input information may, for example, be received from one or more control system modules of the autonomous vehicle.
  • the one or more objects may be detected using a perception module that processes information measured or otherwise recorded using sensors of the autonomous vehicle.
  • the one or more object predictions may be determined using a prediction module that processes the detected objects based on expected behaviors of those detected objects.
  • a planned path is identified. Identifying the planned path includes reading the planned path from memory. For example, the planned path can be identified by retrieving data associated with the planned path from a buffer (e.g., the memory buffer 316 shown in FIG. 3 ).
  • the planned path represents an intended sequence of actions used for the autonomous vehicle to travel within the environment in which the autonomous vehicle is located.
  • the planned path is a planned path which has already been evaluated using the control system modules of the autonomous vehicle (e.g., by the planning module 306 ).
  • a first operational risk is determined for the autonomous vehicle based on the input information and based on the planned path.
  • Determining the first operational risk can include synthesizing an updated scene for the planned path, for example, by updating a previously synthesized scene for the planned path according to the newly detected one or more objects from the input information and according to the newly predicted one or more object predictions from the input information.
  • a cost value can then be retrieved for the updated scene, such as based on the input information and based on the planned path.
  • a cost function can be called to retrieve the cost value, for example, from a control system module of the autonomous vehicle.
  • the cost value can be determined as part of the operations for determining the operational risk, for example, where the control system modules are not used to determine or otherwise maintain cost value information.
  • a second operational risk is determined for the autonomous vehicle based on the input information and based on perturbations to the planned path.
  • the perturbations to the planned path represent changes to one or more aspects of the planned path, for example, temporal or spatial adjustments to one or more actions of the intended sequence of actions represented by the planned path.
  • the perturbations may be minor, for example, by adjusting a speed of the autonomous vehicle by a small amount (e.g., increasing or decreasing the speed of the autonomous vehicle by less than 0.5 miles or kilometers per hour).
  • the perturbations may be major, for example, by causing an emergency stop of the autonomous vehicle or by causing the autonomous vehicle to make a turn.
  • the perturbations can be identified from a set of candidate perturbations available for evaluation by the heuristic monitor. For example, a maximized sensitivity ratio can be used to identify the one or more perturbations from a set of candidate perturbations. The maximized sensitivity ratio corresponds to a maximum acceptable risk increase normalized by a minimal perturbation magnitude.
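  • A sketch of selecting a perturbation by this ratio follows; the candidate dictionaries, magnitudes, and the toy risk model are illustrative assumptions, not values from this disclosure.
```python
def pick_perturbation(candidates, baseline_risk, risk_with_perturbation):
    """Pick the candidate maximizing the sensitivity ratio: the risk increase
    it exposes, normalized by its magnitude (smaller nudges preferred)."""
    def sensitivity_ratio(p):
        return (risk_with_perturbation(p) - baseline_risk) / p["magnitude"]
    return max(candidates, key=sensitivity_ratio)

candidates = [
    {"name": "speed_+0.4_kph", "magnitude": 0.4},   # minor perturbation
    {"name": "emergency_stop", "magnitude": 10.0},  # major perturbation
]
toy_risks = {"speed_+0.4_kph": 0.45, "emergency_stop": 0.90}
choice = pick_perturbation(candidates, baseline_risk=0.40,
                           risk_with_perturbation=lambda p: toy_risks[p["name"]])
print(choice["name"])  # the small nudge exposes more risk per unit change
```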
  • the operational risk can be normalized by the perturbations, in some cases. For example, where the perturbations or parameters thereof are determined to be discrete, parameters for normalizing the operational risk using the perturbations can be used to adjust the perturbations, such as by retrieving those parameters for normalizing from a lookup table. For example, a lookup table may store discrete classification data for a perturbation.
  • Changing classifications may result in a delta needing to be applied to the perturbation, expressed as either a risk overestimation or a risk underestimation.
  • a delta of a perturbation for changing a garbage can detected nearby to the autonomous vehicle to a child can be 1.0 units of underestimation (e.g., -1.0 units of overestimation) of operational risk.
  • a delta of a perturbation for changing a child detected nearby the autonomous vehicle to a garbage can may be -1.0 units of underestimation (e.g., 1.0 units of overestimation) of operational risk.
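  • The lookup of such deltas might be as simple as the following sketch; the table contents mirror the garbage can/child example above, and the key shape is an assumption.
```python
# Assumed lookup table of reclassification deltas, in units of risk
# underestimation (a negative value is a risk overestimation).
RECLASSIFICATION_DELTA = {
    ("garbage_can", "child"): 1.0,    # scene was riskier than estimated
    ("child", "garbage_can"): -1.0,   # scene was safer than estimated
}

def perturbation_delta(old_class, new_class):
    """Look up the discrete delta applied when a perturbation reclassifies
    a detected object; unknown pairs get no adjustment."""
    return RECLASSIFICATION_DELTA.get((old_class, new_class), 0.0)

print(perturbation_delta("garbage_can", "child"))  # -> 1.0
```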
  • the first and second operational risks each represents a probability that the autonomous vehicle will encounter an event by continuing to travel along the planned path based on the detected objects and object predictions.
  • the first and second operational risks are determined using the cost values.
  • the cost value can be a value associated with the updated scene information.
  • a deterministic risk framework or other modeling is used to identify cost values for different scenes based on the relationships between the detected objects, the expected behaviors of those objects, and the planned path.
  • the operational risk sensitivity threshold represents a maximum acceptable amount of change to the operational risk of the autonomous vehicle based on the perturbations introduced to the planned path.
  • the normalized difference is determined by first determining a difference between the first and second operational risks and then by normalizing that difference based on perturbation magnitudes. For example, the perturbation magnitudes may be stored in a lookup table.
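  • Putting the two preceding bullets together, a minimal check might look like the sketch below; the numeric values are illustrative.
```python
def exceeds_sensitivity_threshold(first_risk, second_risk,
                                  perturbation_magnitude, threshold):
    """Normalize the difference between the two operational risks by the
    perturbation magnitude and compare it to the sensitivity threshold."""
    normalized_difference = abs(second_risk - first_risk) / perturbation_magnitude
    return normalized_difference > threshold

# a small perturbation produced a large risk swing -> remedial action warranted
print(exceeds_sensitivity_threshold(0.30, 0.75,
                                    perturbation_magnitude=0.5, threshold=0.5))
```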
  • a signal indicating or otherwise used to indicate a remedial action to perform is generated.
  • the particular remedial action to perform can be selected using a deterministic risk framework indicating a most effective or otherwise most useful remedial action to perform given the current operational cost and given the detected objects and predicted object behaviors.
  • the particular remedial action to perform can be selected based on the particular value of the first operational risk, based on the particular value of the second operational risk, or based on the particular value of the difference between the first and second operational risks.
  • each of the available remedial actions can be associated with a range of operational risks.
  • the operational risk monitor module can output a signal (e.g., including a value of a flag and/or other data) used by one or more control system modules of the autonomous vehicle to select the remedial action.
  • the operational risk monitor module can select the remedial action having a range which includes the current operational risk.
  • the signal indicating or otherwise used to indicate the remedial action is transmitted to one or more control system modules to cause a change in the controlling of the autonomous vehicle.
  • the signal can be transmitted to cause one or more control system modules of the autonomous vehicle to control the autonomous vehicle by performing the remedial action to prevent the autonomous vehicle from encountering an event associated with the current operational risk.
  • a flag received from the operational risk monitor module can be used by a perception module to adjust a confidence in a prediction for the autonomous vehicle.
  • a flag received from the operational risk monitor module can be used by a planning module to reduce a speed operation of the autonomous vehicle.
  • the technique 900 can include selecting the planned path from a list of candidate planned paths.
  • each of the candidate planned paths may be a planned path identified using a control system module (e.g., the planning module 306 ).
  • the candidate planned paths may be evaluated based on an instantaneous operational risk limit defining a maximum acceptable operational risk at a given instance along the candidate planned paths.
  • one of the candidate planned paths having operational risk that does not exceed the instantaneous operational risk limit can be selected as the planned path, which may be the candidate planned path having the greatest operational risk without exceeding the instantaneous operational risk limit.
  • the selection of a planned path may be constrained according to an instantaneous operational risk limit.
  • new candidate planned paths can be generated until the operational risk for a selected planned path is less than or equal to the instantaneous operational risk limit.
  • the technique 900 can include operations based on a determination that the normalized difference between the first and second operational risks is below the operational risk sensitivity threshold. For example, where the normalized difference is below the operational risk sensitivity threshold, a signal can be generated to indicate that the second operational risk represents an acceptable amount of change to the first operational risk. The signal can then be transmitted to the one or more control system modules to cause the one or more control system modules to control the autonomous vehicle by maintaining a safe state associated with the planned path.
  • FIG. 10 is a block diagram showing an example of a computing device 1000 which may be used in a system for operational risk assessment for autonomous vehicle control, for example, the system 100 shown in FIG. 1 .
  • the computing device 1000 may be used to implement a computer that runs or otherwise performs functionality of an operational risk monitor module, for example, the operational risk monitor module 108 .
  • the computing device 1000 may be used to implement a single computer used to run or otherwise perform functionality of both the operational risk monitor module and control system modules, for example, the control system modules 106 .
  • the computing device 1000 may otherwise be used to implement one or more computers according to implementations of this disclosure.
  • the computing device 1000 includes components or units, such as a processor 1002 , a memory 1004 , a bus 1006 , a power source 1008 , peripherals 1010 , a user interface 1012 , and a network interface 1014 .
  • One or more of the memory 1004, the power source 1008, the peripherals 1010, the user interface 1012, or the network interface 1014 can communicate with the processor 1002 using the bus 1006.
  • the processor 1002 is a central processing unit, such as a microprocessor, and can include single or multiple processors having single or multiple processing cores. Alternatively, the processor 1002 can include another type of device, or multiple devices, now existing or hereafter developed, configured for manipulating or processing information. For example, the processor 1002 can include multiple processors interconnected in any manner, including hardwired or networked, including wirelessly networked. For example, the operations of the processor 1002 can be distributed across multiple devices or units that can be coupled directly or across a local area or other suitable type of network.
  • the processor 1002 can include a cache, or cache memory, for local storage of operating data or instructions.
  • the memory 1004 includes one or more memory components, which may each be volatile memory or non-volatile memory.
  • the volatile memory of the memory 1004 can be random access memory (RAM) (e.g., a DRAM module, such as DDR SDRAM) or another form of volatile memory.
  • the non-volatile memory of the memory 1004 can be a disk drive, a solid state drive, flash memory, phase-change memory, or another form of non-volatile memory configured for persistent electronic information storage.
  • the memory 1004 may also include other types of devices, now existing or hereafter developed, configured for storing data or instructions for processing by the processor 1002 .
  • the memory 1004 can include data for immediate access by the processor 1002 .
  • the memory 1004 can include executable instructions 1016 , application data 1018 , and an operating system 1020 .
  • the executable instructions 1016 can include one or more application programs, which can be loaded or copied, in whole or in part, from non-volatile memory to volatile memory to be executed by the processor 1002 .
  • the executable instructions 1016 can include instructions for performing some or all of the techniques of this disclosure.
  • the application data 1018 can include user data, database data (e.g., database catalogs or dictionaries), or the like.
  • the operating system 1020 can be, for example, Microsoft Windows®, Mac OS X®, or Linux®; an operating system for a small device, such as a smartphone or tablet device; or an operating system for a large device, such as a mainframe computer.
  • the power source 1008 includes a source for providing power to the computing device 1000 .
  • the power source 1008 can be an interface to an external power distribution system.
  • the power source 1008 can be a battery, such as where the computing device 1000 is a mobile device or is otherwise configured to operate independently of an external power distribution system.
  • the peripherals 1010 includes one or more sensors, detectors, or other devices configured for monitoring the computing device 1000 or the environment around the computing device 1000 .
  • the peripherals 1010 can include a geolocation component, such as a global positioning system location unit.
  • the peripherals can include a temperature sensor for measuring temperatures of components of the computing device 1000 , such as the processor 1002 .
  • the user interface 1012 includes one or more input interfaces and/or output interfaces.
  • An input interface may, for example, be a positional input device, such as a mouse, touchpad, touchscreen, or the like; a keyboard; or another suitable human or machine interface device.
  • An output interface may, for example, be a display, such as a liquid crystal display, a cathode-ray tube, a light emitting diode display, or other suitable display.
  • the network interface 1014 provides a connection or link to a network (e.g., the network 114 shown in FIG. 1 ).
  • the network interface 1014 can be a wired network interface or a wireless network interface.
  • the computing device 1000 can communicate with other devices via the network interface 1014 using one or more network protocols, such as using Ethernet, TCP, IP, power line communication, Wi-Fi, Bluetooth, infrared, GPRS, GSM, CDMA, Z-Wave, ZigBee, another protocol, or a combination thereof.
  • Implementations of the computing device 1000 may differ from what is shown and described with respect to FIG. 10.
  • the computing device 1000 can omit the peripherals 1010 .
  • the memory 1004 can be distributed across multiple devices.
  • the memory 1004 can include network-based memory or memory in multiple clients or servers performing the operations of those multiple devices.
  • the application data 1018 can include functional programs, such as a web browser, a web server, a database server, another program, or a combination thereof.
  • the implementations of this disclosure can be described in terms of functional block components and various processing operations. Such functional block components can be realized by a number of hardware or software components that perform the specified functions.
  • the disclosed implementations can employ various integrated circuit components (e.g., memory elements, processing elements, logic elements, look-up tables, and the like), which can carry out a variety of functions under the control of one or more microprocessors or other control devices.
  • the systems and techniques can be implemented with a programming or scripting language, such as C, C++, Java, JavaScript, assembler, or the like, with the various algorithms being implemented with a combination of data structures, objects, processes, routines, or other programming elements.
  • the terms "system" or "module," as used herein and in the figures, but in any event based on their context, may be understood as corresponding to a functional unit implemented using software, hardware (e.g., an integrated circuit, such as an ASIC), or a combination of software and hardware.
  • such systems or mechanisms may be understood to be processor-implemented software systems or processor-implemented software mechanisms that are part of or callable by an executable program, which may itself be wholly or partly composed of such linked systems or mechanisms.
  • Implementations or portions of implementations of the above disclosure can take the form of a computer program product accessible from, for example, a computer-usable or computer-readable medium.
  • a computer-usable or computer-readable medium can be any device that can, for example, tangibly contain, store, communicate, or transport a program or data structure for use by or in connection with any processor.
  • the medium can be, for example, an electronic, magnetic, optical, electromagnetic, or semiconductor device.
  • Such computer-usable or computer-readable media can be referred to as non-transitory memory or media, and can include volatile memory or non-volatile memory that can change over time.
  • a memory of an apparatus described herein, unless otherwise specified, does not have to be physically contained by the apparatus, but is one that can be accessed remotely by the apparatus, and does not have to be contiguous with other memory that might be physically contained by the apparatus.

Abstract

Changes in the controlling of an autonomous vehicle are caused based on an operational risk determined for the autonomous vehicle. An operational risk monitor module of the autonomous vehicle uses information about objects detected within an environment in which the autonomous vehicle is located and predicted behaviors of those objects to assess the operational risk of the autonomous vehicle along a planned path. The operational risk is used to determine whether to cause a change in the controlling of the autonomous vehicle, for example, based on a comparison between the operational risk and a previously estimated operational risk or based on a determination that the operational risk exceeds a threshold. The operational risk monitor module transmits a signal to one or more control system modules of the autonomous vehicle to indicate to change the controlling of the autonomous vehicle based on the operational risk.

Description

    CROSS REFERENCE TO RELATED APPLICATION(S)
  • This disclosure is a continuation of U.S. application Ser. No. 16/404,208, filed May 6, 2019, which is a continuation-in-part of U.S. application Ser. No. 16/399,901, filed Apr. 30, 2019, the disclosures of which are herein incorporated by reference in their entirety.
  • TECHNICAL FIELD
  • This disclosure relates to operational risk assessment for autonomous vehicle control, in particular, to causing changes in the controlling of an autonomous vehicle based on an operational risk determined for the autonomous vehicle.
  • BACKGROUND
  • Conventional autonomous vehicle control systems rely on continuously or otherwise periodically collected sensor information to generate a current understanding of the environment in which the autonomous vehicle is located. Based on the current understanding of the environment, the control system of an autonomous vehicle decides how to operate the autonomous vehicle, for example, by controlling the autonomous vehicle to accelerate, come to a complete stop, adjust a direction of movement, or otherwise behave in accordance with traffic safety laws.
  • SUMMARY
  • Disclosed herein are, inter alia, implementations of systems and techniques for operational risk assessment for autonomous vehicle control.
  • In one implementation, a method for controlling an autonomous vehicle based on an operational risk determined for the autonomous vehicle is provided. The method includes receiving input information indicating one or more objects detected within an environment in which the autonomous vehicle is located and one or more object predictions representing expected behaviors of the one or more objects. A current scene synthesized for the autonomous vehicle is evaluated based on the input information to determine whether the current scene represents a violation of one or more first rules used for determining the operational risk, the operational risk representing a probability that the autonomous vehicle will encounter an event. The current scene is evaluated against a previous scene synthesized for the autonomous vehicle to determine whether a degree of change between the current scene and the previous scene represents a violation of one or more second rules used for determining a risk underestimation. Responsive to determining that the current scene represents the violation of the one or more first rules and that the degree of change represents the violation of the one or more second rules, a signal used for controlling the autonomous vehicle is generated. The signal is transmitted to one or more control system modules of the autonomous vehicle to cause the one or more control system modules to control the autonomous vehicle by performing a remedial action to prevent the autonomous vehicle from encountering the event.
  • In another implementation, a method for controlling an autonomous vehicle based on an operational risk determined for the autonomous vehicle is provided. The method includes receiving input information indicating one or more objects detected within an environment in which the autonomous vehicle is located and one or more object predictions representing expected behaviors of the one or more objects. A planned path representing an intended sequence of actions for the autonomous vehicle to travel within the environment in which the autonomous vehicle is located is identified. A first operational risk for the autonomous vehicle is determined based on the input information and based on the planned path. The first operational risk represents a probability that the autonomous vehicle will encounter an event along the planned path. A second operational risk for the autonomous vehicle is determined based on the planned path and based on one or more perturbations applied to the planned path. The second operational risk represents a probability that the autonomous vehicle will encounter the event along the planned path based on the one or more perturbations applied to the planned path. A determination is made as to whether a normalized difference between the first operational risk and the second operational risk exceeds a sensitivity threshold. Responsive to determining that the normalized difference between the first operational risk and the second operational risk exceeds the sensitivity threshold, a signal used for controlling the autonomous vehicle is generated. The signal is transmitted to one or more control system modules of the autonomous vehicle to cause the one or more control system modules to control the autonomous vehicle by performing a remedial action to prevent the autonomous vehicle from encountering the event.
  • In yet another implementation, a system for controlling an autonomous vehicle based on an operational risk determined for the autonomous vehicle is provided. The system includes an operational risk monitor module and a memory buffer. The operational risk monitor module receives input information associated with an environment in which the autonomous vehicle is located from control system modules of the autonomous vehicle and indicates an action for performance using one or more of the control system modules based on an operational risk determined for the autonomous vehicle using the input information. The memory buffer stores the input information received from the control system modules, information indicative of previous operational risk estimations determined using the operational risk monitor module, and information indicative of scenes previously synthesized for a planned path of the autonomous vehicle.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The disclosure is best understood from the following detailed description when read in conjunction with the accompanying drawings. It is emphasized that, according to common practice, the various features of the drawings are not to scale. On the contrary, the dimensions of the various features are arbitrarily expanded or reduced for clarity.
  • FIG. 1 is a block diagram showing an example of a system for operational risk assessment for autonomous vehicle control.
  • FIG. 2 is a block diagram showing an example of an operational risk monitor module.
  • FIG. 3 is a block diagram showing examples of modules and components used for operational risk assessment for autonomous vehicle control.
  • FIG. 4 is an illustration showing an example of a systematic monitor used by an operational risk monitor module.
  • FIG. 5 is an illustration showing an example of a heuristic monitor used by an operational risk monitor module.
  • FIG. 6 is an illustration showing an example of an extended heuristic monitor used by an operational risk monitor module.
  • FIG. 7 is a flowchart showing an example of a technique for autonomous vehicle control based on a determined operational risk using systematic monitoring.
  • FIG. 8 is a flowchart showing an example of a technique for autonomous vehicle control based on a determined operational risk using heuristic monitoring.
  • FIG. 9 is a flowchart showing an example of a technique for autonomous vehicle control based on a determined operational risk using sensitivity monitoring.
  • FIG. 10 is a block diagram showing an example of a computing device which may be used in a system for operational risk assessment for autonomous vehicle control.
  • DETAILED DESCRIPTION
  • Conventional autonomous vehicle control systems determine a current path for controlling a vehicle by perceiving information detected about an environment in which the vehicle is located and predicting how that information will affect the operation of the vehicle. However, changes in operational risk for a given planned path, which includes or otherwise refers to the assessed risk along a previous planned path based on perceived and predicted information within the environment of the vehicle, are not considered. Instead, conventional autonomous vehicle control systems are limited to performing current analyses of operational risk for a current planned path without regard to current analyses of operational risk for previous planned paths. The failure to consider changes in operational risk for previous planned paths may, in some cases, result in the failure to diagnose or detect faults in the autonomous vehicle system that cause the autonomous vehicle system to underestimate the operational risk of the vehicle. Underestimating the operational risk of an autonomous vehicle may allow the autonomous vehicle to encounter events which may, in some cases, be hazardous, such as where the event may cause injury or damage. The failure to diagnose or detect faults which underestimate the operational risk for the autonomous vehicle increases the exposure of the autonomous vehicle to potential hazards. Further, this may in some cases violate functional safety requirements. Conventional autonomous vehicle control systems thus fail to validate and generate evidence of safety.
  • Implementations of this disclosure address problems such as these by causing changes in the controlling of an autonomous vehicle based on an operational risk determined for the autonomous vehicle. An operational risk monitor module of the autonomous vehicle uses information about objects detected within an environment in which the autonomous vehicle is located and predicted behaviors of those objects to assess the operational risk of the autonomous vehicle along a planned path. The operational risk is used to determine whether to cause a change in the controlling of the autonomous vehicle, for example, based on a comparison between the operational risk and a previously estimated operational risk or based on a determination that the operational risk exceeds a threshold. The operational risk monitor module transmits a signal to one or more control system modules of the autonomous vehicle to indicate whether and how to change the controlling of the autonomous vehicle based on the operational risk.
  • The operational risk monitor module can use a systematic monitor, a heuristic monitor, and/or a sensitivity monitor to determine whether to cause the changes in the controlling of the autonomous vehicle. The systematic monitor performs continuous or otherwise frequent and periodic monitoring of operating conditions of the autonomous vehicle and of conditions of the environment in which the autonomous vehicle is located, such as to compare operational risks determined at given times against previous values. For example, the systematic monitor functionality can be used to determine that a change in the controlling of the autonomous vehicle is necessary when a current operational risk estimate for a given planned path exceeds a previous operational risk estimation for that planned path. The heuristic monitor applies rules used by the systematic monitor against current conditions of the autonomous vehicle and of the environment in which the autonomous vehicle is located to determine whether a rule is violated. For example, a rule may indicate that all objects should be initially detected within a specific distance of the front of the autonomous vehicle while the autonomous vehicle is traveling forward. The rule violation can be determined and used to cause a change in the controlling of the autonomous vehicle. The operational risk monitor module may be further configured to use a combination of the systematic monitor, the heuristic monitor, and/or the sensitivity monitor simultaneously. Using a combination of the systematic monitor, the heuristic monitor, and/or the sensitivity monitor broadens the scope of system issues which can be detected, thereby further improving the time-to-detection of hazardous and non-hazardous events.
  • As used herein, operational risk refers to a calculated estimate associated with a given operation, which may be based on or otherwise account for estimated costs of potential outcomes associated with the given operation, estimates of likelihood of such potential outcomes, and/or proxies or other indicators of such aggregated risk estimates, estimated costs, and/or such likelihood. Costs for a potential outcome may, for example, include or otherwise account for those costs that may be borne by the vehicle itself, occupants of the vehicle, and/or the environment of the vehicle and/or objects therein (e.g., other vehicles, pedestrians, and/or other objects around the vehicle). Costs may refer to events the autonomous vehicle is expected to encounter. The events are associated with operational risks, for example, in which an operational risk represents a probability that the event will occur and/or a severity of the event. An event can be a hazardous event or a non-hazardous event. A hazardous event refers to an event which results in or has the potential to result in injury to person and/or damage to property. A non-hazardous event refers to an event which does not result in or have the potential to result in injury to person and/or damage to property. As non-limiting illustrative examples, a hazardous event may be a collision involving the autonomous vehicle, the autonomous vehicle losing contact with the surface of the road on which it travels, or the like. In other non-limiting illustrative examples, a non-hazardous event may be the autonomous vehicle driving onto rumble strips, driving through a pooling of water, or the like. As non-limiting illustrative examples, a low operational risk estimate may be calculated when the vehicle is moving at 5 mph and a leaf is identified 500 feet ahead, since the potential costs are low (e.g., impact with a tree leaf at 5 mph would not damage the vehicle and damage to the leaf is of no cost) and the likelihood of impact with the leaf is low (e.g., at 5 mph the vehicle has sufficient time to maneuver to avoid the leaf and/or the leaf may blow away before the vehicle arrives). In other non-limiting illustrative examples, a high operational risk estimate may be calculated when the vehicle is moving at 70 mph and another vehicle is identified 80 feet ahead, since the potential costs are higher (e.g., impact of the two vehicles at 70 mph may result in high costs borne by the vehicles and/or occupants thereof) and the likelihood of impact is higher (e.g., given the speed, distance, and size of the vehicle ahead). These examples additionally illustrate that the operational risk may be determined, estimated, or otherwise calculated in different manners, which may, but need not, include estimates of costs based on a severity of an event associated with the operational risk and/or estimates of probability of an encounter with an event arising (e.g., being based on speed of the vehicle and/or identification of obstacles).
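  • One way to make the cost-and-likelihood framing above concrete is an expected-cost product, shown in the Python sketch below; this formula and the numeric values for the two examples are illustrative assumptions, as the disclosure leaves the calculation open.

```python
# Illustrative expected-cost combination of severity and likelihood; the
# patent does not prescribe this formula, and the values are invented to
# mirror the leaf and vehicle examples in the text.
def operational_risk(severity_cost: float, probability: float) -> float:
    # Expected cost: severity of the event weighted by its likelihood.
    return severity_cost * probability

# Leaf 500 feet ahead at 5 mph: negligible cost, low likelihood of impact.
low = operational_risk(severity_cost=0.01, probability=0.05)
# Vehicle 80 feet ahead at 70 mph: high cost, high likelihood of impact.
high = operational_risk(severity_cost=0.95, probability=0.70)
print(f"low={low:.4f} high={high:.3f}")  # low=0.0005 high=0.665
```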
  • As used herein, the environment in which the autonomous vehicle is located includes or otherwise refers to a geographical location of the autonomous vehicle and contextual information relevant for a control system of the autonomous vehicle to understand how to operate the autonomous vehicle at that geographical location (e.g., traffic control devices, buildings, etc.).
  • To describe some implementations in greater detail, reference is first made to examples of hardware and software structures used to implement a system for operational risk assessment for autonomous vehicle control. FIG. 1 is a block diagram showing an example of a system 100 for operational risk assessment for autonomous vehicle control. The system 100 includes an autonomous vehicle 102 which is controlled using modules and components. The autonomous vehicle 102 includes sensors 104, control system modules 106, and an operational risk monitor module 108.
  • The sensors 104 detect an object 110 within an environment in which the autonomous vehicle 102 is located. The sensors 104 are hardware units used to measure or otherwise record information about the environment in which the autonomous vehicle 102 is located. The sensors 104 may also include software used by those hardware units, such as to control the operation of those hardware units. The sensors 104 may include a radar sensor, a LIDAR sensor, an image sensor (e.g., a camera), another sensor, multiples of one or more of the foregoing sensor types, or a combination thereof. For example, the sensors 104 may include a LIDAR sensor, multiple image sensors arranged to capture a 360 degree view around the autonomous vehicle, a front radar sensor, and a rear radar sensor.
  • The object 110 represents organic or inorganic matter located within the environment in which the autonomous vehicle 102 is located. For example, the object 110 may be a pedestrian, a vehicle, a street sign, a wall, a traffic control device, an animal, a tree, multiples of one or more of the foregoing object types, or a combination thereof.
  • The control system modules 106 process the output of the sensors 104. The control system modules 106 include software which receives the information measured or otherwise recorded using the sensors 104 to determine how to control the autonomous vehicle 102 and to control the autonomous vehicle 102 accordingly. For example, the control system modules 106 may include, but are not limited to, one or more of an onboard navigation module, a localization module, a control constraints module, a perception module, a prediction module, a planning module, a platform control module, another module, or a combination thereof. The control system modules 106 are run on a computing device of the autonomous vehicle 102.
  • The operational risk monitor module 108 uses information from the control system modules 106 to assess an operational risk for the autonomous vehicle 102. The operational risk monitor module 108 receives input information from the control system modules 106 indicating one or more objects (e.g., the object 110) detected within an environment in which the autonomous vehicle 102 is located and one or more object predictions representing expected behaviors of those detected objects. The operational risk monitor module 108 further identifies planned paths determined using the control system modules 106, which planned paths represent intended sequences of actions to be used for the autonomous vehicle 102 to travel within the environment in which the autonomous vehicle 102 is located.
  • The operational risk monitor module 108 uses the information about the detected objects, the object predictions, and the planned paths to determine an operational risk for the autonomous vehicle 102 along the planned paths. The operational risk monitor module 108 then processes the determined operational risk to determine whether a remedial action should be performed, for example, by aborting operations within the environment in which the autonomous vehicle 102 is located to avoid an event which the autonomous vehicle 102 is expected to encounter or to otherwise reduce the probability that the autonomous vehicle 102 will encounter that event. For example, the remedial action may include, but is not limited to, a safe stop, a safe pullover, a blind pullover, a speed reduction, a teleoperation-related control, or another immediate action, which may, for example, be pending human analysis of the determined operational risk.
  • Where a determination is made that a remedial action should be performed, the operational risk monitor module 108 transmits a signal indicating or otherwise used to indicate the remedial action to the control system modules 106. The control system modules 106 then cause the performance of the remedial action based on the signal received from the operational risk monitor module 108. For example, where the operational risk monitor module 108 determines that the autonomous vehicle 102 is not sufficiently estimating the operational risk associated with the previous planned paths, the remedial action may indicate to follow only the last planned path and to stop the execution of future planned paths of the autonomous vehicle 102. The operational risk monitor module 108 generates a signal indicating the command to stop execution and transmits that signal to the control system modules 106. The control system modules 106 then stop the autonomous vehicle 102 according to the signal.
  • Alternatively, where a determination is made that a remedial action does not need to be performed, such as because the operational risk determined for the current planned path is low or otherwise does not merit the performance of such an action, the operational risk monitor module 108 transmits a signal indicating to allow the autonomous vehicle 102 to maintain a safe state, for example by allowing the autonomous vehicle 102 to continue traveling along the current planned path or future planned paths within the environment in which the autonomous vehicle 102 is located.
  • The operational risk monitor module 108 can use a systematic monitor and/or a heuristic monitor. The systematic monitor represents functionality of the operational risk monitor module 108 for systematically comparing the current operational risk for a previous planned path received from the control system modules 106 against the previous operational risk estimation, which was determined for a planned path previously received from the control system modules 106. The heuristic monitor represents functionality of the operational risk monitor module 108 for using configurable rules to detect that an operational risk determined based on input information exceeds a threshold representing a maximum acceptable risk for the autonomous vehicle. That is, the heuristic monitor evaluates inputs used to determine a planned path which can cause future operational risk estimates to rise, for example, where a fault in prior input information could cause an operational risk to be underestimated. The particular approach used by the operational risk monitor module 108 can be defined during the deployment of the operational risk monitor module 108 to a computer of the autonomous vehicle 102. Alternatively, or additionally, the approach used by the operational risk monitor module 108 can be changed after the deployment of the operational risk monitor module 108 to the computer of the autonomous vehicle 102.
  • A server 112 communicates with one or more computing aspects of the autonomous vehicle 102 over a network 114. The server 112 may be or include a hardware server (e.g., a server device), a software server (e.g., a web server and/or a virtual server), or both. For example, where the server 112 is or includes a hardware server, the server 112 may be a computing device located at a remote operating center used to manage the operations of the autonomous vehicle 102 (and/or a fleet of autonomous vehicles including the autonomous vehicle 102) or a server device located in a rack, such as of a data center. The network 114 is a network used to communicate information between a computing aspect of the autonomous vehicle 102 and the server 112. The network 114 may, for example, be a local area network, a wide area network, a machine-to-machine network, a virtual private network, or another public or private network.
  • The server 112 may receive information associated with the operational risk determined using the operational risk monitor module 108, a signal generated using the operational risk monitor module 108, or other information processed by or at the operational risk monitor module 108. For example, that information can be reported to the server 112 for logging. The information logged at the server 112 may be used to further configure or update configurations of the operational risk monitor module 108. For example, the information logged at the server 112 may be processed using pattern recognition or other intelligence software to determine whether false positives, false negatives, or other misunderstood features are present within a particular location through which the autonomous vehicle has traveled. That information can be analyzed and then used to update an intelligence system of the autonomous vehicle, for example, by training the control system modules 106 and/or the operational risk monitor module 108 to more accurately detect and understand aspects about the environment in which the autonomous vehicle is located.
  • Implementations of the system 100 may differ from what is shown and described with respect to FIG. 1. In some implementations, the system 100 may omit the server 112 and/or the network 114. For example, data indicative of how the operational risk monitor module 108 is used may be locally stored within a computing device of the autonomous vehicle 102 itself. The data can be retrieved directly from the computing device. In another example, data indicative of how the operational risk monitor module 108 is used may not be stored.
  • In some implementations, where a determination is made that a remedial action does not need to be performed, the operational risk monitor module 108 may not transmit a signal to the control system modules 106. For example, the control system modules 106 may be configured to continue controlling the autonomous vehicle 102 along the current planned path unless a signal is received from the operational risk monitor module 108. In such an implementation, the operational risk monitor module 108 may only transmit a signal to the control system modules 106 where a determination is made that a remedial action should be performed.
  • FIG. 2 is a block diagram showing an example of an operational risk monitor module 200. The operational risk monitor module 200 may, for example, be the operational risk monitor module 108 shown in FIG. 1. The operational risk monitor module 200 includes an object risk comparator 202, an object prediction risk comparator 204, a planned path comparator 206, and an operational risk processor 208 that uses the output of the object risk comparator 202, the object prediction risk comparator 204, and the planned path comparator 206 to determine a remedial action 210.
  • The operational risk monitor module 200 uses information about objects 212, object predictions 214, and planned paths 216, received from one or more control system modules of an autonomous vehicle (e.g., the control system modules 106 and the autonomous vehicle 102), to determine the remedial action 210. The operational risk monitor module 200 then transmits a signal to the one or more control system modules to indicate whether and how to change a controlling of the autonomous vehicle.
  • The object risk comparator 202 receives information about the objects 212, which are objects located within an environment in which an autonomous vehicle using the operational risk monitor module 200 is located. The objects 212 may be or include objects detected within the environment in which the autonomous vehicle is located. The objects 212 may be detected using one or more sensors of the autonomous vehicle (e.g., the sensors 104).
  • The information about the objects 212 received by the object risk comparator 202 includes a list of objects currently detected within the environment in which the autonomous vehicle is located and a list of objects previously detected within that environment. For example, the objects previously detected within the environment in which the autonomous vehicle is located can be retrieved from a buffer or other memory structure that temporarily stores information used for processing at the operational risk monitor module 200. A list of objects, such as the list of currently detected objects or the list of previously detected objects, may be, include, or otherwise refer to a data set which indicates the objects.
  • The object risk comparator 202 uses the information about the objects 212 to determine whether the objects currently detected within the environment in which the autonomous vehicle is located present a greater risk to the autonomous vehicle than the objects previously detected within that environment. In particular, the object risk comparator 202 compares the currently detected objects against the previously detected objects to determine whether the risk associated with the currently detected objects being within the environment in which the autonomous vehicle is located is greater than the risk associated with the previously detected objects being within that environment.
  • For example, the processing by the object risk comparator 202 can be performed to determine whether a number of detected objects has changed over some amount of time (e.g., one second or less), whether the types of detected objects have changed over some amount of time, whether a location of one or more of the detected objects within the environment in which the autonomous vehicle is located has changed over some amount of time, or another change in the objects detected at two different times. The object risk comparator 202 can use rules or definitions of risks to determine whether the currently detected objects present a greater risk to the autonomous vehicle than the previously detected objects.
  • For example, different objects can be defined to have different risks based on their type and their estimated distance from the autonomous vehicle. In another example, different rules can be used to indicate how the detection of objects should be processed based on, for example, when the object was detected, whether a previously detected object expected to still be within the environment in which the autonomous vehicle is located is no longer detected within that environment, whether the classification of a detected object has changed based on the information about the objects 212, or other criteria.
  • The rules or definitions of risk used by the object risk comparator 202 may, for example, be determined based on a machine learning model or other machine intelligence approach. For example, a neural network or other structure can be trained according to training data sets representing different objects and their relationships to a vehicle, and the information modeled based on that training can be used to determine the rules or definitions of risk used by the object risk comparator 202. In another example, the rules or definitions of risk used by the object risk comparator 202 may be determined using a deterministic risk framework other than one based in machine learning. The rules or definitions of risk used by the object risk comparator 202 may be configurable, for example, by a user of the autonomous vehicle.
  • The object risk comparator 202 outputs a signal indicating whether the objects currently detected within the environment in which the autonomous vehicle is located present a greater risk to the autonomous vehicle than the objects previously detected within that environment. The signal may, for example, include a flag having a first value when the currently detected objects present a greater risk to the autonomous vehicle than the previously detected objects and having a second value when the currently detected objects do not present a greater risk to the autonomous vehicle than the previously detected objects.
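  • A minimal sketch of such a comparator follows; the per-type weights and the inverse-distance scoring are assumptions made for illustration, not the rules or definitions of risk actually used by the object risk comparator 202.

```python
# Sketch of an object risk comparator that outputs a boolean flag, assuming
# invented per-type risk weights and an inverse-distance score.
TYPE_RISK = {"pedestrian": 1.0, "vehicle": 0.8, "tree": 0.2}  # assumed weights

def object_risk(obj_type: str, distance_m: float) -> float:
    # Nearer objects of riskier types score higher.
    return TYPE_RISK.get(obj_type, 0.5) / max(distance_m, 1.0)

def object_risk_flag(current: list[tuple[str, float]],
                     previous: list[tuple[str, float]]) -> bool:
    # True when the currently detected objects present a greater aggregate
    # risk than the previously detected objects.
    return (sum(object_risk(t, d) for t, d in current) >
            sum(object_risk(t, d) for t, d in previous))

previous = [("tree", 50.0)]
current = [("tree", 50.0), ("pedestrian", 8.0)]  # a pedestrian appears nearby
print(object_risk_flag(current, previous))  # -> True
```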
  • The object prediction risk comparator 204 receives information about the object predictions 214 representing expected behaviors of the objects 212 within the environment in which the autonomous vehicle is located. The information about the object predictions 214 received by the object prediction risk comparator 204 includes a list of object predictions for objects currently detected within the environment in which the autonomous vehicle is located and a list of object predictions for objects previously detected within that environment. For example, the object predictions for the objects previously detected within the environment in which the autonomous vehicle is located can be retrieved from a buffer or other memory structure that temporarily stores information used for processing at the operational risk monitor module 200. A list of object predictions, such as the list of object predictions for the currently detected objects or the list of object predictions for the previously detected objects, may be, include, or otherwise refer to a data set which indicates the object predictions.
  • The object prediction risk comparator 204 uses the information about the object predictions 214 to determine whether the expected behaviors of the objects currently detected within the environment in which the autonomous vehicle is located present a greater risk to the autonomous vehicle than the expected behaviors of the objects previously detected within that environment. In particular, the object prediction risk comparator 204 compares the expected behaviors of the currently detected objects against the expected behaviors of the previously detected objects to determine whether the risk associated with the expected behaviors of the currently detected objects within the environment in which the autonomous vehicle is located is greater than the risk associated with the expected behaviors of the previously detected objects within that environment.
  • For example, the processing by the object prediction risk comparator 204 can be performed to determine whether a detected object is expected to move closer to the autonomous vehicle, whether a detected object is expected to remain in the same location within the environment in which the autonomous vehicle is located or leave that environment altogether, whether and how a velocity at which a detected object moves is expected to change within the environment in which the autonomous vehicle is located, or another change in the expected behavior of detected objects. The object prediction risk comparator 204 can use rules or definitions of risks to determine whether the object predictions for the currently detected objects present a greater risk to the autonomous vehicle than the object predictions for the previously detected objects.
  • For example, different object predictions can be defined to have different risks based on the object to which they relate and the type of behaviors predicted thereby. In another example, different rules can be used to indicate how the object predictions should be processed based on, for example, whether the predicted movement path of a detected object has changed, whether a detected object predicted to remain stationary has instead moved, whether a detected object predicted to move has instead remained stationary, whether the detected object is moving too close to the autonomous vehicle, or other criteria.
  • The rules or definitions of risk used by the object prediction risk comparator 204 may, for example, be determined based on a machine learning model or other machine intelligence approach. For example, a neural network or other structure can be trained according to training data sets representing different objects and their expected behaviors, and the information modeled based on that training can be used to determine the rules or definitions of risk used by the object prediction risk comparator 204. In another example, the rules or definitions of risk used by the object prediction risk comparator 204 may be determined using a deterministic risk framework other than one based in machine learning. The rules or definitions of risk used by the object prediction risk comparator 204 may be configurable, for example, by a user of the autonomous vehicle.
  • The object prediction risk comparator 204 outputs a signal indicating whether the object predictions for the objects currently detected within the environment in which the autonomous vehicle is located present a greater risk to the autonomous vehicle than the object predictions for the objects previously detected within that environment. The signal may, for example, include a flag having a first value when the object predictions for the currently detected objects present a greater risk to the autonomous vehicle than the object predictions for the previously detected objects and having a second value when the object predictions for the currently detected objects do not present a greater risk to the autonomous vehicle than the object predictions for the previously detected objects.
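  • The sketch below illustrates one possible shape for this comparator, assuming each object prediction has already been reduced to a predicted closest approach to the autonomous vehicle; that reduction is an assumption, as the disclosure leaves the prediction representation open.

```python
# Sketch of an object prediction risk comparator that outputs a boolean flag.
def prediction_risk(predicted_closest_approach_m: float) -> float:
    # The nearer an object is expected to come, the greater the risk.
    return 1.0 / max(predicted_closest_approach_m, 0.5)

def prediction_risk_flag(current_predictions: list[float],
                         previous_predictions: list[float]) -> bool:
    # True when the current predictions are, in aggregate, riskier than the
    # previous predictions.
    return (sum(map(prediction_risk, current_predictions)) >
            sum(map(prediction_risk, previous_predictions)))

# An object previously expected to pass 15 m away is now expected to come
# within 2 m, so the flag is raised.
print(prediction_risk_flag([2.0], [15.0]))  # -> True
```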
  • The planned path comparator 206 receives information about the planned paths 216 representing intended sequences of actions used for the autonomous vehicle to travel within the environment in which the autonomous vehicle is located. The information about the planned paths 216 received by the planned path comparator 206 includes a current planned path for the autonomous vehicle to travel within the environment in which it is located and one or more previously determined planned paths for the autonomous vehicle. For example, the one or more previously determined planned paths for the autonomous vehicle can be retrieved from a buffer or other memory structure that temporarily stores information used for processing at the operational risk monitor module 200. The one or more previously determined planned paths for the autonomous vehicle may be planned paths determined within or less than some amount of time (e.g., one second) of the determining of the current planned path.
  • The planned path comparator 206 uses the information about the planned paths 216 to determine whether a planned path for the autonomous vehicle has changed. In particular, the planned path comparator 206 compares the current planned path against the one or more previously determined planned paths (e.g., against a most recently determined one of the one or more previously determined planned paths) to determine whether the current planned path is different from the one or more previously determined planned paths. The planned path comparator 206 outputs a signal indicating whether the planned path for the autonomous vehicle has changed. The signal may, for example, include a flag having a first value when the planned path for the autonomous vehicle has changed and having a second value when the planned path for the autonomous vehicle has not changed.
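  • A simple sketch of such a comparison follows, assuming a planned path is represented as a sequence of (x, y) waypoints; the representation and the deviation tolerance are illustrative assumptions.

```python
# Sketch of a planned path comparator that flags whether the path changed.
import math

def path_changed(current: list[tuple[float, float]],
                 previous: list[tuple[float, float]],
                 tolerance_m: float = 0.25) -> bool:
    # A different waypoint count, or any waypoint deviating beyond the
    # tolerance, counts as a changed path.
    if len(current) != len(previous):
        return True
    return any(math.dist(a, b) > tolerance_m for a, b in zip(current, previous))

previous = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
current = [(0.0, 0.0), (1.0, 0.4), (2.0, 0.8)]  # the path now swerves left
print(path_changed(current, previous))  # -> True
```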
  • The operational risk monitor module 200 determines the remedial action 210 using a systematic monitor, a heuristic monitor, and/or a sensitivity monitor. For example, where the operational risk processor 208 uses a systematic monitor, the operational risk processor 208 uses the signals output from the object risk comparator 202, the object prediction risk comparator 204, and the planned path comparator 206 to determine the remedial action 210. For example, where the signals output from the object risk comparator 202, the object prediction risk comparator 204, and the planned path comparator 206 are or otherwise include flags, the operational risk processor 208 can use those flags to determine the operational risk for the autonomous vehicle. The operational risk processor 208 can then use that operational risk to determine whether the remedial action 210 should be executed, for example, using one or more control system modules of the autonomous vehicle. In another example, where the operational risk monitor module 200 uses a heuristic monitor, the operational risk processor 208 uses input information including the objects 212, the object predictions 214, and the constraints 218 to determine whether a systematic rule used by the systematic monitor functionality of the operational risk monitor module 200 is violated, for example, based on static scene behaviors and abnormal dynamic scene behaviors of a current scene synthesized based on the objects 212, the object predictions 214, and the constraints 218 and further based on previous abnormal dynamic scene behaviors of a previous scene synthesized based on previous input information. In yet another example, where the operational risk monitor module 200 uses a sensitivity monitor, the operational risk processor 208 uses input information including the objects 212, the object predictions 214, and the planned paths 216 to evaluate planned paths based on updated scene information synthesized based on the newly detected objects and object predictions. Implementations and examples for using a systematic monitor, a heuristic monitor, or a sensitivity monitor for autonomous vehicle control are respectively described below with respect to FIGS. 7, 8, and 9.
  • Implementations of the operational risk monitor module 200 may differ from what is shown and described with respect to FIG. 2. In some implementations, the object prediction risk comparator 204 may also receive the information about the objects 212. For example, the object prediction risk comparator 204 can apply rules against the objects 212 using the object predictions 214 to determine whether a rule violation has occurred. In some implementations, the same machine learning model, other machine intelligence approach, or other deterministic risk framework, or the same combination thereof, can be used by both of the object risk comparator 202 and the object prediction risk comparator 204.
  • In some implementations, the operational risk processor 208 may perform pre-processing against the input received from the object risk comparator 202, the object prediction risk comparator 204, and the planned path comparator 206. For example, the pre-processing can include determining whether further processing of the input received from the object risk comparator 202, the object prediction risk comparator 204, and the planned path comparator 206 is warranted based on the values of that input. For example, the operational risk processor 208 may be configured to stop processing input received from the object risk comparator 202, the object prediction risk comparator 204, and the planned path comparator 206 in some cases.
  • In some such implementations, the operational risk processor 208 can pre-process the values of the flags output by the object risk comparator 202, the object prediction risk comparator 204, and the planned path comparator 206. For example, the operational risk processor 208 can use its pre-processing functionality to stop further processing where the flag output by the object risk comparator 202 indicates that the currently detected objects do not present a greater risk to the autonomous vehicle than the previously detected objects, the flag output by the object prediction risk comparator 204 indicates that the object predictions for the currently detected objects do not present a greater risk to the autonomous vehicle than the object predictions for the previously detected objects, and the flag output by the planned path comparator 206 indicates that the current planned path for the autonomous vehicle remains unchanged as compared to the one or more previously determined planned paths. Where the operational risk processor 208 stops further processing, the remedial action 210 is not produced, and the operational risk monitor module 200 does not cause a change in the controlling of the autonomous vehicle.
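  • The short-circuit just described can be sketched as a simple gate over the three flags; the function and signal names below are illustrative, not the module's actual interface.

```python
# Sketch of the pre-processing gate: when no flag indicates increased risk or
# a changed path, processing stops and no remedial action is produced.
from typing import Optional

def pre_process(object_risk_flag: bool, prediction_risk_flag: bool,
                path_changed_flag: bool) -> Optional[str]:
    if not (object_risk_flag or prediction_risk_flag or path_changed_flag):
        return None  # stop further processing; no change in control
    # ... the full operational risk evaluation would run here ...
    return "EVALUATE"

print(pre_process(False, False, False))  # -> None (no remedial action)
print(pre_process(True, False, False))   # -> EVALUATE
```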
  • FIG. 3 is a block diagram showing examples of modules and components used for operational risk assessment for autonomous vehicle control. The modules and components shown in FIG. 3 include or otherwise refer to software and hardware of an autonomous vehicle (e.g., the autonomous vehicle 102 shown in FIG. 1), which are used to monitor the environment in which the autonomous vehicle is located and control the autonomous vehicle based on the monitoring. The modules and components shown in FIG. 3 are included in or otherwise refer to aspects of the autonomous vehicle 102 shown in FIG. 1, including the sensors 104, the control system modules 106, and the operational risk monitor module 108.
  • As shown, an operational risk monitor module 300, which may, for example, be the operational risk monitor module 200 shown in FIG. 2, receives signals output from each of a perception module 302, a prediction module 304, and a planning module 306. The perception module 302 is a control system module (e.g., of the control system modules 106) which processes information measured or otherwise recorded using sensors 308 (e.g., the sensors 104) to detect objects within an environment in which the autonomous vehicle is located. The signal output by the perception module 302 is or includes information about objects, for example, the objects 212 shown in FIG. 2.
  • The prediction module 304 is a control system module (e.g., of the control system modules 106) which processes information about the objects detected using the perception module 302 to determine object predictions for those objects. The signal output by the prediction module 304 is or includes information about object predictions, for example, the object predictions 214 shown in FIG. 2.
  • The planning module 306 is a control system module (e.g., of the control system modules 106) which processes information received from a control constraints module 310 (e.g., of the control system modules 106) along with the object predictions determined using the prediction module 304 to determine planned paths for the autonomous vehicle. The signal output by the planning module 306 is or includes information about planned paths, for example, the planned paths 216 shown in FIG. 2.
  • The operational risk monitor module 300 uses systematic monitoring, heuristic monitoring, and/or sensitivity monitoring to evaluate changes in operational risk for the autonomous vehicle. With systematic monitoring, the operational risk monitor module 300 uses the signals output by the perception module 302, the prediction module 304, and the planning module 306 to determine an operational risk for the autonomous vehicle and further to determine whether and how to cause a change in the controlling of the autonomous vehicle based on that operational risk. With heuristic monitoring, the operational risk monitor module 300 uses information taken as input to the planning module 306 to determine whether a planned path produced using that information could cause future operational risk estimates to exceed a threshold. With sensitivity monitoring, the operational risk monitor module 300 determines whether perturbations to a planned path cause the operational risk to increase enough to merit the performance of a remedial action. The operational risk monitor module 300 generates a signal indicating whether and how to cause a change in the controlling of the autonomous vehicle based on the determined operational risk and transmits that signal to a platform control module 312 (e.g., of the control system modules 106).
  • The platform control module 312 represents functionality for the autonomous vehicle used to control the autonomous vehicle based on the signal output by the operational risk monitor module 300. For example, the platform control module 312 can generate one or more actuator outputs for use in controlling aspects of the autonomous vehicle based on the signal received from the operational risk monitor module 300.
  • For example, the signal transmitted to the platform control module 312 from the operational risk monitor module 300 can be or include a control abort command that the platform control module 312 uses to perform an emergency stop of the autonomous vehicle. In another example, the signal transmitted to the platform control module 312 from the operational risk monitor module 300 can be or include a velocity change command that the platform control module 312 uses to change a current speed of the autonomous vehicle. In yet another example, the signal transmitted to the platform control module 312 from the operational risk monitor module 300 can be or include a validation signal that the platform control module 312 uses to keep the autonomous vehicle traveling along its current planned path.
  • The operational risk monitor module 300 can also transmit other signals to different control system modules of the autonomous vehicle. For example, the operational risk monitor module 300 can transmit a signal indicating a perception incident flag to the perception module 302, such as to alert the perception module 302 as to a determination by the operational risk monitor module 300 that an object detected within the environment in which the autonomous vehicle is located presents an incident risk to the autonomous vehicle. The incident risk indicated using the perception incident flag is a first incident risk representing a risk of an incident associated with an object. In another example, the operational risk monitor module 300 can transmit a signal indicating a prediction incident flag to the prediction module 304, such as to alert the prediction module 304 as to a determination by the operational risk monitor module 300 that an expected behavior of an object detected within the environment in which the autonomous vehicle is located presents an incident risk to the autonomous vehicle. The incident risk indicated using the prediction incident flag is a second incident risk representing a risk of an incident associated with an object prediction.
  • In another example, the operational risk monitor module 300 can transmit a signal indicating information to be reported or stored remotely to a communication interface 314. The communication interface 314 is a hardware component of the autonomous vehicle which is used to communicate over a network with one or more remote computers (e.g., a server or other computer at a remote operating center). The communication interface 314 communicates using one or more network protocols, such as using Ethernet, TCP, IP, power line communication, Wi-Fi, Bluetooth®, infrared, GPRS, GSM, CDMA, Z-Wave, ZigBee, another protocol, or a combination thereof.
  • In another example, the operational risk monitor module 300 can transmit a platform control constraints flag to the control constraints module 310, such as to cause the control constraints module 310 to change one or more platform control constraints of the autonomous vehicle. For example, the platform control constraints flag can indicate to adjust one or more constraints and/or the priorities of those constraints used for an operational aspect of the autonomous vehicle, such as a propulsion system of the autonomous vehicle, a braking system of the autonomous vehicle, or another system of the autonomous vehicle. In some cases, the operational risk monitor module 300 can receive a signal directly from the control constraints module 310, for example, for use in determining the operational risk for the autonomous vehicle and/or for use in determining whether and how to cause a change in the controlling of the autonomous vehicle based on that operational risk.
  • The operational risk monitor module 300 uses information stored in a memory buffer 316 to determine an operational risk for the autonomous vehicle and further to determine whether and how to cause a change in the controlling of the autonomous vehicle based on that operational risk. The memory buffer 316 stores information about objects, object predictions, and planned paths, respectively received from the perception module 302, the prediction module 304, and the planning module 306, as previously detected objects, previously determined object predictions, and previously determined planned paths. The operational risk monitor module 300 retrieves the previously detected objects, previously determined object predictions, and previously determined planned paths from the memory buffer 316 as necessary.
  • The memory buffer 316 stores the information about the previously detected objects, previously determined object predictions, and previously determined planned paths for a limited amount of time, such as according to an eviction policy for the memory buffer 316. The eviction policy for the memory buffer 316 may be fixed or configurable. Upon the expiration of the time set by the eviction policy for given data, the memory buffer 316 deletes the given data, such as to prepare space for storing new data received from the perception module 302, the prediction module 304, and the planning module 306.
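  • One way to realize such a buffer is an age-based queue, sketched below; the one-second eviction window and the API are assumptions chosen for illustration, since the disclosure leaves the eviction policy open.

```python
# Sketch of a time-bounded buffer in the spirit of the memory buffer 316,
# assuming a simple age-based eviction policy.
import time
from collections import deque

class TimedBuffer:
    def __init__(self, max_age_s: float = 1.0):
        self.max_age_s = max_age_s
        self._items: deque = deque()  # (timestamp, payload) pairs, oldest first

    def push(self, payload) -> None:
        self._evict()
        self._items.append((time.monotonic(), payload))

    def snapshot(self) -> list:
        self._evict()
        return [payload for _, payload in self._items]

    def _evict(self) -> None:
        # Delete entries older than the eviction window to make room for new
        # perception, prediction, and planning data.
        cutoff = time.monotonic() - self.max_age_s
        while self._items and self._items[0][0] < cutoff:
            self._items.popleft()

buffer = TimedBuffer(max_age_s=1.0)
buffer.push({"objects": ["pedestrian"], "planned_path": [(0.0, 0.0)]})
print(buffer.snapshot())  # only entries younger than one second remain
```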
  • The control system modules may include other modules used for controlling aspects of the autonomous vehicle. For example, an onboard navigation module 318 can be used to indicate geographical information about the location of the autonomous vehicle. In another example, a localization module 320 can use a navigation waypoint command (e.g., indicating the geographical information) output from the onboard navigation module along with information about objects detected within an environment in which the autonomous vehicle is located (e.g., as output by the perception module 302) to classify and determine the location of the autonomous vehicle. A localized waypoint command (e.g., including information about the classified and determined location of the autonomous vehicle) may then be made available, for example, to the control constraints module 310.
  • Implementations of the modules and components used for operational risk assessment for autonomous vehicle control may differ from what is shown and described with respect to FIG. 3. In some implementations, information determined at and/or output by the operational risk monitor module 300 can be reported to a user of the autonomous vehicle. For example, the information reported to the user of the autonomous vehicle can include the operational risk determined using the inputs to the operational risk monitor module 300 and/or one or more of the outputs from the operational risk monitor module 300. The user of the autonomous vehicle may refer to the driver of the autonomous vehicle, a passenger of the autonomous vehicle, an owner or other registrant of the autonomous vehicle, a person who manages the operation of the autonomous vehicle (e.g., at a remote operating center), or a third party observer, such as an insurance or governing authority agent who requires or requests data related to the operation of the autonomous vehicle.
  • In such an implementation, the communication interface 314 can be used for the reporting by transmitting the information to a user device of the user of the autonomous vehicle. Alternatively, or additionally, the onboard navigation module 318 can be used for reporting the information to the user of the autonomous vehicle.
  • FIG. 4 is an illustration showing an example of a systematic monitor used by an operational risk monitor module, which may, for example, be the operational risk monitor module 300 shown in FIG. 3. The systematic monitor represents functionality of the operational risk monitor module for determining an operational risk of an autonomous vehicle 400, which may, for example, be the autonomous vehicle 102 shown in FIG. 1, and determining whether and how to cause a change in the controlling of the autonomous vehicle 400 based on a comparison between the operational risk and a previous operational risk estimation.
  • The autonomous vehicle 400 is traveling in an environment. An iso-boundary operational risk range 402 represents the field of view of the sensors of the autonomous vehicle 400 (e.g., the sensors 104), which may cover a 360 degree rotation around a center point of the autonomous vehicle 400 so as to cover an area around the autonomous vehicle 400. A planned path 404 indicates the direction of travel of the autonomous vehicle 400. An object 406 (e.g., the object 110) is located nearby to the autonomous vehicle 400. The operational risk monitor module of the autonomous vehicle 400 detects the object 406 (e.g., a pedestrian) within the environment in which the autonomous vehicle 400 is located.
  • The operational risk monitor module determines a current operational risk for the autonomous vehicle 400 along the planned path 404 based on the detection of the object 406 and a prediction of how the object 406 will behave. For example, where the object 406 is a pedestrian, the object prediction may be a prediction of whether the pedestrian will walk in front of the autonomous vehicle 400. The operational risk monitor module compares the current operational risk against a previous operational risk estimation, which may, for example, have been determined one second earlier or at another earlier time when the object 406 was farther from the autonomous vehicle 400.
  • If the current operational risk represents an increase from the previous operational risk estimation that exceeds a threshold, such that the previous operational risk estimation represents an operational risk underestimation, the systematic monitor of the operational risk monitor module determines a remedial action to perform to prevent a potential incident related to the predicted behavior of the object 406. The operational risk monitor module generates a signal usable by one or more control system modules of the autonomous vehicle 400 (e.g., the control system modules 106) to change a controlling of the autonomous vehicle 400.
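  • The systematic comparison just described reduces to a threshold test on the increase between the two estimates, as in the sketch below; the scalar risk values and the increase threshold are illustrative assumptions.

```python
# Sketch of the systematic monitor's underestimation test.
def risk_underestimated(current_risk: float, previous_estimate: float,
                        increase_threshold: float = 0.15) -> bool:
    # An increase beyond the threshold suggests the previous estimate was an
    # operational risk underestimation.
    return (current_risk - previous_estimate) > increase_threshold

# The pedestrian has closed in, and the risk jumps well past the threshold.
if risk_underestimated(current_risk=0.45, previous_estimate=0.10):
    print("signal remedial action")  # e.g., a safe stop or speed reduction
```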
  • FIG. 5 is an illustration showing an example of a heuristic monitor used by an operational risk monitor module, which may, for example, be the operational risk monitor module 300 shown in FIG. 3. The heuristic monitor represents functionality of the operational risk monitor module for determining an operational risk of an autonomous vehicle 500, which may, for example, be the autonomous vehicle 102 shown in FIG. 1, and determining whether and how to cause a change in the controlling of the autonomous vehicle 500 based on whether the operational risk results in an anticipated rule violation.
  • The autonomous vehicle 500 is traveling in an environment. An iso-boundary operational risk range 502 represents the field of view of the sensors of the autonomous vehicle 500 (e.g., the sensors 104), which may cover a 360 degree rotation around a center point of the autonomous vehicle 500 so as to cover an area around the autonomous vehicle 500. A first distance 504 from the autonomous vehicle 500 and a second distance 506 from the autonomous vehicle 500 are each associated with different rules used by a systematic monitor of the operational risk monitor module. An object 508 (e.g., the object 110) is located nearby to the autonomous vehicle 500. The operational risk monitor module of the autonomous vehicle 500 detects the object 508 (e.g., a pedestrian) within the environment in which the autonomous vehicle 500 is located.
  • The operational risk monitor module detects the object 508 as being within the second distance 506 from the autonomous vehicle 500, but not within the first distance 504. A rule associated with the first distance 504 is thus not implicated; however, a rule associated with the second distance 506 is. The rule associated with the second distance 506 may indicate that if an operational risk determined for the autonomous vehicle 500 exceeds a threshold when a detected object (e.g., the object 508) is within the second distance 506, a remedial action should be executed to cause a change in the controlling of the autonomous vehicle 500.
  • The operational risk monitor module of the autonomous vehicle 500 determines an operational risk of the autonomous vehicle 500 based on the detection of the object 508 and a prediction of how the object 508 will behave. For example, where the object 508 is a pedestrian, the object prediction may be a prediction of whether the pedestrian will walk in front of the autonomous vehicle 500. The operational risk monitor module then determines whether the operational risk exceeds a threshold for the rule associated with the second distance 506.
  • If the operational risk exceeds the threshold for the rule associated with the second distance 506, the heuristic monitor of the operational risk monitor module determines a remedial action for execution to prevent a potential incident related to the predicted behavior of the object 508. The operational risk monitor module generates a signal usable by one or more control system modules of the autonomous vehicle 500 (e.g., the control system modules 106) to change a controlling of the autonomous vehicle 500.
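  • The distance-banded rule described over the last few paragraphs can be sketched as follows; the band distances, the risk threshold, and the inner-band rule are all invented for illustration.

```python
# Sketch of a heuristic rule keyed to two distance bands around the vehicle.
def heuristic_violation(object_distance_m: float, operational_risk: float,
                        first_distance_m: float = 10.0,
                        second_distance_m: float = 30.0,
                        risk_threshold: float = 0.4) -> bool:
    if object_distance_m <= first_distance_m:
        return True  # assumed inner-band rule: any object this close triggers action
    if object_distance_m <= second_distance_m:
        # Rule for the second distance: act only if the determined operational
        # risk exceeds the threshold.
        return operational_risk > risk_threshold
    return False  # outside both bands, neither rule is implicated

print(heuristic_violation(object_distance_m=22.0, operational_risk=0.55))  # -> True
```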
  • FIG. 6 is an illustration showing an example of an extended heuristic monitor used by an operational risk monitor module, which may, for example, be the operational risk monitor module 300 shown in FIG. 3. The extended heuristic monitor represents extended functionality of the heuristic monitor described with respect to FIG. 5, in which rules are violated by abnormal behaviors of objects within an environment in which an autonomous vehicle 600, which may, for example, be the autonomous vehicle 102 shown in FIG. 1, is located, regardless of whether an operational risk determined based on an object detection exceeds a threshold.
  • The autonomous vehicle 600 is traveling in an environment. An iso-boundary operational risk range 602 represents the field of view of the sensors of the autonomous vehicle 600 (e.g., the sensors 104), which may cover a 360 degree rotation around a center point of the autonomous vehicle 600 so as to cover an area around the autonomous vehicle 600. A first distance 604 from the autonomous vehicle 600 and a second distance 606 from the autonomous vehicle 600 are each associated with different rules used by a systematic monitor of the operational risk monitor module. A first object 608 and a second object 610 (e.g., each of which being the object 110 or another object) are each located nearby to the autonomous vehicle 600. The operational risk monitor module of the autonomous vehicle 600 detects the first object 608 and the second object 610 (e.g., one or both of which being a pedestrian) within the environment in which the autonomous vehicle 600 is located.
  • The operational risk monitor module detects the first object 608 and the second object 610 as each being within the second distance 606 from the autonomous vehicle 600, but not within the first distance 604. A rule associated with the first distance 604 is thus not implicated; however, a rule associated with the second distance 606 is. The rule associated with the second distance 606 may indicate that if a detected object (e.g., the first object 608 and/or the second object 610) is within the second distance 606 and performs an unexpected behavior, a remedial action should be executed to cause a change in the controlling of the autonomous vehicle 600.
  • The operational risk monitor module of the autonomous vehicle 600 determines whether either of the first object 608 or the second object 610 performs an unexpected behavior based on the detections of the first object 608 and the second object 610 and based on predictions of how the first object 608 and the second object 610 will behave. For example, where the first object 608 is a pedestrian and the second object 610 is a garbage can, the object prediction for the first object 608 may be a prediction of whether the pedestrian will walk in front of the autonomous vehicle 600 and the object prediction for the second object 610 may be a prediction that the garbage can will remain in place.
  • If the operational risk monitor module determines that the second object 610 has moved (e.g., based on new input information indicating a detection of the second object 610 after the time at which the earlier detection of the second object 610 occurred), the heuristic monitor of the operational risk monitor module determines a remedial action to perform to prevent a potential incident related to the unexpected behavior of the second object 610. The operational risk monitor module generates a signal usable by one or more control system modules of the autonomous vehicle 600 (e.g., the control system modules 106) to change a controlling of the autonomous vehicle 600.
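  • In this extended case, the rule reduces to comparing an object's observed position against the position its behavior prediction implied between detection cycles. The sketch below is a minimal illustration under that assumption; the tolerance value and position format are invented for the example.

```python
import math

def unexpected_behavior(curr_pos, predicted_pos, tol_m=0.5):
    """Flag an object whose observed position deviates from the position
    its prediction implied (e.g., a garbage can expected to stay in place)."""
    return math.dist(curr_pos, predicted_pos) > tol_m

# a garbage can predicted to remain at (5.0, 2.0) but detected about 1.2 m away
if unexpected_behavior((5.9, 2.8), (5.0, 2.0)):
    pass  # generate a signal usable by the control system modules
```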
  • To further describe some implementations in greater detail, reference is next made to examples of techniques which may be performed by or using a system for operational risk assessment for autonomous vehicle control, such as described with respect to FIGS. 1-6. FIG. 7 is a flowchart showing an example of a technique 700 for autonomous vehicle control based on a determined operational risk using systematic monitoring. FIG. 8 is a flowchart showing an example of a technique 800 for autonomous vehicle control based on a determined operational risk using heuristic monitoring. FIG. 9 is a flowchart showing an example of a technique 900 for autonomous vehicle control based on a determined operational risk using sensitivity monitoring.
  • The technique 700, the technique 800, and/or the technique 900 can be executed using computing devices, such as the systems, hardware, and software described with respect to FIGS. 1-6. The technique 700, the technique 800, and/or the technique 900 can be performed, for example, by executing a machine-readable program or other computer-executable instructions, such as routines, instructions, programs, or other code. The steps, or operations, of the technique 700, the technique 800, the technique 900, or another technique, method, process, or algorithm described in connection with the implementations disclosed herein can be implemented directly in hardware, firmware, software executed by hardware, circuitry, or a combination thereof.
  • For simplicity of explanation, the technique 700, the technique 800, and the technique 900 are each depicted and described herein as a series of steps or operations. However, the steps or operations in accordance with this disclosure can occur in various orders and/or concurrently. Additionally, other steps or operations not presented and described herein may be used. Furthermore, not all illustrated steps or operations may be required to implement a technique in accordance with the disclosed subject matter.
  • Referring first to FIG. 7, the technique 700 for autonomous vehicle control based on a determined operational risk using systematic monitoring is shown. The technique 700 may, for example, represent operations performed at an operational risk monitor module (e.g., the operational risk monitor module 300 shown in FIG. 3) using systematic monitoring.
  • At 702, input information is received. The input information indicates one or more objects detected within an environment in which the autonomous vehicle is located and one or more object predictions representing expected behaviors of the one or more objects. The input information may, for example, be received from one or more control system modules of the autonomous vehicle. For example, the one or more objects may be detected using a perception module that processes information measured or otherwise recorded using sensors of the autonomous vehicle. In another example, the one or more object predictions may be determined using a prediction module that processes the detected objects based on expected behaviors of those detected objects.
  • At 704, a planned path, referred to as the Nth planned path, is identified. Identifying the Nth planned path includes reading the Nth planned path from memory. For example, the Nth planned path can be identified by retrieving data associated with the Nth planned path from a buffer (e.g., the memory buffer 316 shown in FIG. 3). The Nth planned path represents an intended sequence of actions used for the autonomous vehicle to travel within the environment in which the autonomous vehicle is located. The planned path is one of a number of planned path candidates stored in the buffer. The planned path candidates include planned paths which have been selected (e.g., by the planning module 306) as winning path candidates. The planned path candidates can also include planned paths which were determined (e.g., by the planning module 306) as being non-winning path candidates. For example, a number of planned path candidates may be evaluated at a given processing instant to select one of those planned path candidates as a winning path candidate, such as based on the winning path candidate having a highest probability of safety or a lowest probability of risk. The other planned path candidates which were not selected as the winning path candidate are non-winning path candidates.
  • At 706, an updated scene for the Nth planned path is synthesized using the input information. The updated scene for the Nth planned path refers to an understanding of the environment in which the autonomous vehicle is located as the autonomous vehicle travels along the Nth planned path, according to the newly detected objects within that environment and the newly predicted behaviors of those detected objects. For example, the Nth planned path may be a planned path which has already been evaluated using an operational risk monitor module as described herein. During such prior evaluation, a scene for the Nth planned path may have been synthesized based on then-detected objects and then-predicted behaviors of those objects. Now, based on the newly detected objects and newly predicted behaviors of those objects, the updated scene can be synthesized by applying the Nth planned path to the environment in which the autonomous vehicle is located. In this way, the scene previously synthesized for the Nth planned path is updated according to the newly detected objects and the newly predicted behaviors of those objects. In some cases, the updated scene can be synthesized by projecting behaviors of the detected objects according to the object predictions against the Nth planned path, such as to predict a relationship between the expected behaviors of the detected objects and the autonomous vehicle as the autonomous vehicle travels along the Nth planned path.
  • At 708, an operational risk is determined for the updated scene. Determining the operational risk can include retrieving a cost value for the updated scene, such as based on the input information and based on the Nth planned path. For example, a cost function can be called to retrieve the cost value from a control system module of the autonomous vehicle. Alternatively, the cost value can be determined as part of the operations for determining the operational risk, for example, where the control system modules are not used to determine or otherwise maintain cost value information.
  • The operational risk represents a probability that the autonomous vehicle will encounter an event by continuing to travel along the Nth planned path based on the detected objects and object predictions. The operational risk further represents a severity of the event, for example, based on the type of event and the expected location and behavior of the event. The operational risk is determined using the cost value. For example, the cost value can be a value associated with the updated scene information. A deterministic risk framework or other modeling is used to identify cost values for different scenes based on the relationships between the detected objects, the expected behaviors of those objects, and the planned path.
  • At 710, a determination is made as to whether the operational risk is greater than a previous operational risk estimation. The previous operational risk estimation is an estimation of operational risk determined for an N-1th planned path. For example, the Nth planned path may represent a planned path determined some amount of time (e.g., one hundredth of a second) after the N-1th planned path was determined. The Nth planned path may therefore represent the planned path of the autonomous vehicle some amount of time after the N-1th planned path. The previous operational risk estimation may be stored in a buffer (e.g., the memory buffer 316). Determining whether the operational risk is greater than the previous operational risk estimation may include retrieving the previous operational risk estimation from memory and comparing it against the operational risk. The determination as to whether the operational risk is greater than the previous operational risk estimation is used to indicate whether an operational risk underestimation occurred. That is, where the operational risk is greater than the previous operational risk estimation, a determination can be made that the previous operational risk estimation is too low.
  • At 712, where a determination is made that the operational risk is greater than the previous operational risk estimation, such that there is an operational risk underestimation, a signal is generated to indicate the operational risk underestimation. The signal is transmitted to one or more control system modules to cause a change in the controlling of the autonomous vehicle. For example, the signal can be transmitted to cause one or more control system modules of the autonomous vehicle to control the autonomous vehicle by performing a remedial action to prevent the autonomous vehicle from encountering an event associated with the operational risk.
  • At 714, where a determination is made that the operational risk is not greater than the previous operational risk estimation, a determination is made as to whether the Nth planned path is the last planned path stored in memory. For example, where the Nth planned path is identified by retrieving data associated with the Nth planned path from a buffer (e.g., the memory buffer 316), determining whether the Nth planned path is the last planned path includes determining whether the memory buffer 316 stores data associated with a planned path which has not yet been considered using the technique 700.
  • At 716, where the Nth planned path is the last planned path stored in memory, a signal is generated to indicate that there is no underestimation in operational risk as represented by the previous operational risk estimate. The signal is transmitted to one or more control system modules to allow the autonomous vehicle to maintain a safe state of the autonomous vehicle, for example, by allowing the autonomous vehicle to continue traveling along its current planned path.
  • At 718, where the Nth planned path is not the last planned path stored in memory, the value of N is increased by one. The technique 700 then returns to 702 where new input information indicating objects and object predictions is received, and then to 704 where a planned path corresponding to the new value of N is read from memory (e.g., retrieved from the buffer). In some cases, the objects and object predictions of the new input information may be the same as the objects and object predictions of the input information used to process the planned path at the previous value of N. In other cases, the objects and object predictions of the new input information may not be exactly the same, but may nonetheless not be significantly different.
  • In some implementations, the signals generated at 712 or at 716 may indicate a value of a risk underestimation flag. For example, the risk underestimation flag may have a first value where an operational risk underestimation represented within the previous operational risk estimation is determined and a second value where no such underestimation is determined. The control system modules which receive the signal indicating the value of the risk underestimation flag can use the value of the flag indicated within the signal to determine whether to cause a change in the controlling of the autonomous vehicle. In some implementations, a signal is not generated to indicate that no operational risk increase is determined. For example, at 716, the technique 700 may simply end rather than generate a signal.
  • In some implementations, determining whether the Nth operational risk is greater than the previous operational risk estimation includes determining whether the Nth operational risk exceeds a threshold. For example, even where the Nth operational risk is greater than the previous operational risk estimation (e.g., where the previous operational risk estimation is zero), the Nth operational risk may still not be great enough to merit the performance of a remedial action to cause a change in the controlling of the autonomous vehicle. In some such implementations, the threshold may be configurable. For example, subsequent modeling performed using information processed to determine the Nth operational risk can be used to determine that the threshold should be changed.
  • In some implementations, the technique 700 can include selecting the planned path from a list of candidate planned paths. For example, each of the candidate planned paths may be a planned path identified using a control system module (e.g., the planning module 306). The candidate planned paths may be evaluated based on an instantaneous operational risk limit defining a maximum acceptable operational risk at a given instance along the candidate planned paths. For example, the candidate planned path having the lowest probability of exceeding the instantaneous operational risk limit can be selected as the planned path. As such, the selection of a planned path may be constrained according to an instantaneous operational risk limit. In some such implementations, new candidate planned paths can be generated until the operational risk for a selected planned path is less than or equal to the instantaneous operational risk limit.
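  • Taken together, the loop from 702 through 718 can be sketched compactly, as shown below. This is a hedged illustration, assuming a simple buffer layout and a stand-in cost function; the patent leaves both to the deterministic risk framework and the memory buffer 316, so every name here is an assumption.

```python
# Self-contained sketch of technique 700 (systematic monitoring).
from dataclasses import dataclass, field

@dataclass
class PlannedPath:
    path_id: int
    actions: list  # intended sequence of actions for the vehicle

@dataclass
class Buffer:
    planned_paths: list = field(default_factory=list)
    prev_risk_estimates: dict = field(default_factory=dict)  # path_id -> risk

def operational_risk(path: PlannedPath, objects: list, predictions: list) -> float:
    """Stand-in cost function: risk grows with objects predicted to cross
    the path. A real system would call the deterministic risk framework."""
    return min(1.0, 0.2 * sum(1 for p in predictions if p.get("crosses_path")))

def systematic_monitor(buf: Buffer, objects, predictions, threshold=0.0):
    for path in buf.planned_paths:                           # steps 704, 718
        risk = operational_risk(path, objects, predictions)  # steps 706, 708
        prev = buf.prev_risk_estimates.get(path.path_id, 0.0)
        if risk > prev and risk > threshold:                 # step 710
            return {"risk_underestimation": True, "path_id": path.path_id}  # 712
    return {"risk_underestimation": False}                   # step 716
```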
  • Referring next to FIG. 8, the technique 800 for autonomous vehicle control based on a determined operational risk using heuristic monitoring is shown. The technique 800 may, for example, represent operations performed at an operational risk monitor module (e.g., the operational risk monitor module 300 shown in FIG. 3) using heuristic monitoring.
  • At 802, input information is received. The input information indicates one or more objects detected within an environment in which the autonomous vehicle is located and one or more object predictions representing expected behaviors of the one or more objects. The input information may, for example, be received from one or more control system modules of the autonomous vehicle. For example, the one or more objects may be detected using a perception module that processes information measured or otherwise recorded using sensors of the autonomous vehicle. In another example, the one or more object predictions may be determined using a prediction module that processes the detected objects based on expected behaviors of those detected objects. The input information may further include one or more actuator control constraints representing actuation limits for the autonomous vehicle. For example, the actuator control constraints can relate to one or more of a maximum lateral or longitudinal traction limit, a maximum acceleration limit, a maximum braking limit, or a maximum steering angle.
  • At 804, a current scene synthesized for the autonomous vehicle based on the input information is evaluated to determine whether the current scene represents a violation of one or more first rules used for determining an operational risk for the autonomous vehicle. The current scene can be synthesized based on the input information. The first rules correspond to static scene behaviors of the current scene, in which the static scene behaviors relate to objects being located too close to other objects, objects not being detected within the environment in which the autonomous vehicle is located, an excessive gap in the current scene, actuator control constraints of the autonomous vehicle being too low, or the like. Evaluating the current scene to determine the current operational risk can include identifying a rule which is implicated by the detected objects and the object predictions. For example, one or more rules for controlling the autonomous vehicle based on the presence of certain objects and the prediction of certain behaviors of those objects may be used to determine the current operational risk for the autonomous vehicle.
  • Data indicating the object or objects detected within the environment in which the vehicle is located can be used to identify the rule to use for determining the current operational risk. For example, data associated with the rule (e.g., including the rule definition, conditions for using the rule, cost functions or inputs thereto, etc.) can be stored in a database or other data store. The rule can be retrieved from the database or other data store using a query or other request indicating the objects and/or the object predictions.
  • The current operational risk represents a probability that the autonomous vehicle will encounter an event by continuing to travel within the environment in which the autonomous vehicle is located, such as based on the detected objects and object predictions. The current operational risk is determined using a cost value. For example, the cost value can be a value calculated by applying values representative of the objects and/or the object predictions to a cost function. A deterministic risk framework or other modeling is used to identify cost values for different scenes based on the relationships between the detected objects, the expected behaviors of those objects, and the relevant rules.
  • At 806, the current scene is evaluated against a previous scene synthesized for the autonomous vehicle to determine whether a degree of change between the current scene and the previous scene represents a violation of one or more second rules used for determining a risk underestimation. The previous scene can be synthesized based on previous input information (e.g., based on information previously stored in a buffer). The second rules correspond to abnormal dynamic scene behaviors of the current scene or of the previous scene, in which the abnormal dynamic scene behaviors relate to objects appearing within the environment in which the autonomous vehicle is located too late, objects disappearing from the environment in which the autonomous vehicle is located too quickly, objects accelerating within the environment in which the autonomous vehicle is located too quickly, discontinuity between predicted object behaviors, or the like.
  • At 808, responsive to a determination that the current scene represents the violation of the one or more first rules and that the degree of change represents the violation of the one or more second rules, a signal indicating or otherwise used to indicate a remedial action to perform is generated. The remedial action may be specified based on the first and/or second rules. For example, where the rule is used for situations where pedestrians are detected in front of the autonomous vehicle and are expected to remain in front of the autonomous vehicle, the remedial actions available for the autonomous vehicle to perform may be limited to an emergency stop. Alternatively, the remedial action may be one of a number of remedial actions selected using the operational risk monitor module.
  • The first and second rules are rules used by systematic monitor functionality of the operational risk monitor module. For example, a rule can be a configuration or definition of the autonomous vehicle, the environment in which the autonomous vehicle is located, or both, which triggers the systematic monitor to determine that a remedial action needs to be performed. For example, the rule can be a rule indicating to perform an emergency stop of the autonomous vehicle if a pedestrian or other living being is first detected within a defined distance of the front of the autonomous vehicle based on input information, or to abort operation if poor object behavior is predicted. There may be a number of first rules used for different types of control of the autonomous vehicle. The rules may be defined at the time the operational risk monitor module is deployed within the autonomous vehicle. Alternatively, some or all of the rules may be configurable, for example, by a user of the autonomous vehicle.
  • For example, the particular remedial action to perform can be selected using a deterministic risk framework indicating a most effective or otherwise most useful remedial action to perform given the current operational risk and given the detected objects and predicted object behaviors. In another example, the particular remedial action to perform can be selected based on the particular value of the current operational risk. For example, each of the available remedial actions can be associated with a range of operational risks. The operational risk monitor module can output a signal (e.g., including a value of a flag and/or other data) used by one or more control system modules of the autonomous vehicle to select the remedial action. Alternatively, the operational risk monitor module can select the remedial action having a range which includes the current operational risk.
  • At 810, the signal indicating or otherwise used to indicate the remedial action is transmitted to one or more control system modules to cause a change in the controlling of the autonomous vehicle. The remedial action can be initiated based on actuator control constraints of the system (e.g., as received within the input information or otherwise). For example, the signal can be transmitted to cause one or more control system modules of the autonomous vehicle to control the autonomous vehicle by performing the remedial action to prevent the autonomous vehicle from encountering an event associated with the current operational risk. For example, a flag received from the operational risk monitor module can be used by a perception module to adjust a confidence in a prediction for the autonomous vehicle. In another example, a flag received from the operational risk monitor module can be used by a planning module to reduce a speed operation of the autonomous vehicle.
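  • One way to read steps 804 through 810 is as two predicate families evaluated in sequence: first rules over the current scene, and second rules over the scene-to-scene change. The sketch below assumes dictionary-based scenes and invented rule predicates; it is an illustration under those assumptions, not the claimed implementation.

```python
# Hypothetical rule predicates for technique 800 (heuristic monitoring).
def violates_first_rules(scene: dict) -> bool:
    """Static scene behaviors, e.g., an object located too close to the vehicle."""
    return any(obj["range_m"] < 1.0 for obj in scene["objects"])

def violates_second_rules(scene: dict, prev_scene: dict,
                          max_accel_mps2: float = 4.0) -> bool:
    """Abnormal dynamic scene behaviors, e.g., objects appearing or vanishing
    between consecutive scenes, or accelerating implausibly quickly."""
    prev_ids = {obj["id"] for obj in prev_scene["objects"]}
    curr_ids = {obj["id"] for obj in scene["objects"]}
    appeared_or_vanished = prev_ids != curr_ids
    too_fast = any(obj.get("accel_mps2", 0.0) > max_accel_mps2
                   for obj in scene["objects"])
    return appeared_or_vanished or too_fast

def heuristic_monitor(scene: dict, prev_scene: dict) -> dict:
    if violates_first_rules(scene) and violates_second_rules(scene, prev_scene):
        return {"remedial_action": "emergency_stop"}  # step 808
    return {"remedial_action": None}                  # maintain a safe state
```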
  • In some implementations, the heuristic monitoring described with respect to the technique 800 can be extended heuristic monitoring. For example, the rules used by the technique 800 may generally be rules used to determine operational risks based on the detection of objects and the prediction of object behaviors nearby the front of the autonomous vehicle while the autonomous vehicle travels forward. However, extended heuristic monitoring may instead use rules which consider objects and object predictions in any direction of the autonomous vehicle regardless of travel direction. For example, an extended heuristic monitor may indicate to slow down or stop the autonomous vehicle when input information indicates behavior which can result in an identification of an underestimation of a previous operational risk estimate, such as based on a first detection of an unobstructed pedestrian within some distance (e.g., two meters) in any direction of the autonomous vehicle. In another example, an extended heuristic monitor may indicate to keep a speed low when traffic density on a freeway is high and the autonomous vehicle is generally surrounded by slow moving traffic.
  • In some implementations, the technique 800 can include selecting the planned path from a list of candidate planned paths. For example, each of the candidate planned paths may be a planned path identified using a control system module (e.g., the planning module 306). The candidate planned paths may be evaluated based on an instantaneous operational risk limit defining a maximum acceptable operational risk at a given instance along the candidate planned paths. For example, one of the candidate planned paths having operational risk that does not exceed the instantaneous operational risk limit can be selected as the planned path, which may be the candidate planned path having the greatest operational risk without exceeding the instantaneous operational risk limit. As such, the selection of a planned path may be constrained according to an instantaneous operational risk limit. In some such implementations, new candidate planned paths can be generated until the operational risk for a selected planned path is less than or equal to the instantaneous operational risk limit.
  • In some implementations, the technique 800 can include operations based on a determination that the current scene does not represent the violation of the one or more first rules or that the degree of change does not represent the violation of the one or more second rules. For example, in such a case, a signal can be generated and transmitted to the one or more control system modules to cause the one or more control system modules to control the autonomous vehicle by maintaining a safe state.
  • In some implementations, the technique 800 can include storing, within a buffer, data indicative of one or more of the current scene, the previous scene, the static scene behaviors, the abnormal dynamic scene behaviors, or the previous abnormal dynamic scene behaviors. For example, the stored data may be used for heuristic processing of a next current scene. In some such implementations, the technique 800 can further include deleting irrelevant data from the buffer. For example, the technique 800 can include determining that the previous abnormal dynamic scene behaviors are unrelated to the operational risk and then removing data indicative of those previous abnormal dynamic scene behaviors from the buffer based on such determination.
  • Referring next to FIG. 9, the technique 900 for autonomous vehicle control based on a determined operational risk using sensitivity monitoring is shown. The technique 900 may, for example, represent operations performed at an operational risk monitor module (e.g., the operational risk monitor module 300 shown in FIG. 3) using sensitivity monitoring.
  • At 902, input information is received. The input information indicates one or more objects detected within an environment in which the autonomous vehicle is located and one or more object predictions representing expected behaviors of the one or more objects. The input information may, for example, be received from one or more control system modules of the autonomous vehicle. For example, the one or more objects may be detected using a perception module that processes information measured or otherwise recorded using sensors of the autonomous vehicle. In another example, the one or more object predictions may be determined using a prediction module that processes the detected objects based on expected behaviors of those detected objects.
  • At 904, a planned path is identified. Identifying the planned path includes reading the planned path from memory. For example, the planned path can be identified by retrieving data associated with the planned path from a buffer (e.g., the memory buffer 316 shown in FIG. 3). The planned path represents an intended sequence of actions used for the autonomous vehicle to travel within the environment in which the autonomous vehicle is located. The planned path is a planned path which has already been evaluated using the control system modules of the autonomous vehicle (e.g., by the planning module 306).
  • At 906, a first operational risk is determined for the autonomous vehicle based on the input information and based on the planned path. Determining the first operational risk can include synthesizing an updated scene for the planned path, for example, by updating a previously synthesized scene for the planned path according to the newly detected one or more objects from the input information and according to the newly predicted one or more object predictions from the input information. A cost value can then be retrieved for the updated scene, such as based on the input information and based on the planned path. For example, a cost function can be called to retrieve the cost value from a control system module of the autonomous vehicle. Alternatively, the cost value can be determined as part of the operations for determining the operational risk, for example, where the control system modules are not used to determine or otherwise maintain cost value information.
  • At 908, a second operational risk is determined for the autonomous vehicle based on the input information and based on perturbations to the planned path. The perturbations to the planned path represent changes to one or more aspects of the planned path, for example, temporal or spatial adjustments to one or more actions of the intended sequence of actions represented by the planned path. In some cases, the perturbations may be minor, for example, by adjusting a speed of the autonomous vehicle by a small amount (e.g., increasing or decreasing the speed of the autonomous vehicle by less than 0.5 miles or kilometers per hour). In other cases, the perturbations may be major, for example, by causing an emergency stop of the autonomous vehicle or by causing the autonomous vehicle to make a turn. The perturbations can be identified from a set of candidate perturbations available for evaluation by the sensitivity monitor. For example, a maximized sensitivity ratio can be used to identify the one or more perturbations from a set of candidate perturbations. The maximized sensitivity ratio corresponds to a maximum acceptable risk increase normalized by a minimal perturbation magnitude. The operational risk can be normalized by the perturbations, in some cases. For example, where the perturbations or parameters thereof are determined to be discrete, parameters for normalizing the operational risk using the perturbations can be used to adjust the perturbations, such as by retrieving those parameters for normalizing from a lookup table. For example, a lookup table may store discrete classification data for a perturbation. Changing classifications may result in some delta needing to be applied to the perturbation of either a risk overestimation or a risk underestimation. For example, a delta of a perturbation for changing a garbage can detected nearby to the autonomous vehicle to a child can be 1.0 units of underestimation (e.g., −1.0 units of overestimation) of operational risk, whereas a delta of a perturbation for changing a child detected nearby the autonomous vehicle to a garbage can may be −1.0 units of underestimation (e.g., 1.0 units of overestimation) of operational risk.
  • For example, simply generating fixed or random perturbations (e.g., +0.5 kilometers per hour) may generate white noise or meaningless increases in operational risk. However, following rules outlining best practices, or exploiting known trends in operational risk, even to the point of optimally evaluating or searching for the maximum increases in operational risk with the lowest changes in perturbation energies or deltas, can create very meaningful and practical results, such as those useful for initiating a remedial action or for maintaining a safe state.
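  • For the discrete-classification case described above, the normalization lookup can be as simple as a table keyed by (old label, new label) pairs. A minimal sketch follows; the table values are invented for illustration.

```python
# Hypothetical lookup of perturbation deltas for discrete reclassifications,
# in units of operational risk underestimation.
RECLASSIFICATION_DELTA = {
    ("garbage_can", "child"): +1.0,  # underestimation grows
    ("child", "garbage_can"): -1.0,  # i.e., 1.0 units of overestimation
}

def classification_delta(old_label: str, new_label: str) -> float:
    return RECLASSIFICATION_DELTA.get((old_label, new_label), 0.0)
```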
  • The first and second operational risks each represents a probability that the autonomous vehicle will encounter an event by continuing to travel along the planned path based on the detected objects and object predictions. The first and second operational risks are determined using the cost values. For example, the cost value can be a value associated with the updated scene information. A deterministic risk framework or other modeling is used to identify cost values for different scenes based on the relationships between the detected objects, the expected behaviors of those objects, and the planned path.
  • At 910, a determination is made that a normalized difference between the first and second operational risks exceeds an operational risk sensitivity threshold. The operational risk sensitivity threshold represents a maximum acceptable amount of change to the operational risk of the autonomous vehicle based on the perturbations introduced to the planned path. The normalized difference is determined by first determining a difference between the first and second operational risks and then by normalizing that difference based on perturbation magnitudes. For example, the perturbation magnitudes may be stored in a lookup table.
  • At 912, responsive to a determination that the normalized difference between the first and second operational risks exceeds the operational risk sensitivity threshold, a signal indicating or otherwise used to indicate a remedial action to perform is generated. For example, the particular remedial action to perform can be selected using a deterministic risk framework indicating a most effective or otherwise most useful remedial action to perform given the current operational risk and given the detected objects and predicted object behaviors. In another example, the particular remedial action to perform can be selected based on the particular value of the first operational risk, based on the particular value of the second operational risk, or based on the particular value of the difference between the first and second operational risks. For example, each of the available remedial actions can be associated with a range of operational risks. The operational risk monitor module can output a signal (e.g., including a value of a flag and/or other data) used by one or more control system modules of the autonomous vehicle to select the remedial action. Alternatively, the operational risk monitor module can select the remedial action having a range which includes the current operational risk.
  • At 914, the signal indicating or otherwise used to indicate the remedial action is transmitted to one or more control system modules to cause a change in the controlling of the autonomous vehicle. For example, the signal can be transmitted to cause one or more control system modules of the autonomous vehicle to control the autonomous vehicle by performing the remedial action to prevent the autonomous vehicle from encountering an event associated with the current operational risk. For example, a flag received from the operational risk monitor module can be used by a perception module to adjust a confidence in a prediction for the autonomous vehicle. In another example, a flag received from the operational risk monitor module can be used by a planning module to reduce a speed operation of the autonomous vehicle.
  • In some implementations, the technique 900 can include selecting the planned path from a list of candidate planned paths. For example, each of the candidate planned paths may be a planned path identified using a control system module (e.g., the planning module 306). The candidate planned paths may be evaluated based on an instantaneous operational risk limit defining a maximum acceptable operational risk at a given instance along the candidate planned paths. For example, one of the candidate planned paths having operational risk that does not exceed the instantaneous operational risk limit can be selected as the planned path, which may be the candidate planned path having the greatest operational risk without exceeding the instantaneous operational risk limit. As such, the selection of a planned path may be constrained according to an instantaneous operational risk limit. In some such implementations, new candidate planned paths can be generated until the operational risk for a selected planned path is less than or equal to the instantaneous operational risk limit.
  • In some implementations, the technique 900 can include operations based on a determination that the normalized difference between the first and second operational risks is below the operational risk sensitivity threshold. For example, where the normalized difference is below the operational risk sensitivity threshold, a signal can be generated to indicate that the second operational risk represents an acceptable amount of change to the first operational risk. The signal can then be transmitted to the one or more control system modules to cause the one or more control system modules to control the autonomous vehicle by maintaining a safe state associated with the planned path.
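  • The core comparison of the technique 900 can be sketched as follows, assuming a caller-supplied risk function and perturbations that carry their own magnitudes for normalization; all names and numbers here are illustrative assumptions.

```python
# Hypothetical sketch of technique 900 (sensitivity monitoring).
def sensitivity_monitor(risk_fn, path, perturbations, threshold=0.2):
    base_risk = risk_fn(path)                                    # step 906
    for perturb in perturbations:                                # step 908
        perturbed_risk = risk_fn(perturb["apply"](path))
        normalized = abs(perturbed_risk - base_risk) / perturb["magnitude"]
        if normalized > threshold:                               # steps 910, 912
            return {"remedial_action": True, "perturbation": perturb["name"]}
    return {"remedial_action": False}  # maintain a safe state

# usage with a trivial risk function and a small speed perturbation
risk = lambda p: 0.1 + 0.05 * p["speed_mps"]
path = {"speed_mps": 10.0}
result = sensitivity_monitor(
    risk, path,
    [{"name": "+0.5 km/h", "magnitude": 0.14,
      "apply": lambda p: {**p, "speed_mps": p["speed_mps"] + 0.14}}],
)
```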
  • FIG. 10 is a block diagram showing an example of a computing device 1000 which may be used in a system for operational risk assessment for autonomous vehicle control, for example, the system 100 shown in FIG. 1. The computing device 1000 may be used to implement a computer that runs or otherwise performs functionality of an operational risk monitor module, for example, the operational risk monitor module 108. Alternatively, the computing device 1000 may be used to implement a single computer used to run or otherwise perform functionality of both the operational risk monitor module and control system modules, for example, the control system modules 106. As a further alternative, the computing device 1000 may otherwise be used to implement one or more computers according to implementations of this disclosure.
  • The computing device 1000 includes components or units, such as a processor 1002, a memory 1004, a bus 1006, a power source 1008, peripherals 1010, a user interface 1012, and a network interface 1014. One or more of the memory 1004, the power source 1008, the peripherals 1010, the user interface 1012, or the network interface 1014 can communicate with the processor 1002 using the bus 1006.
  • The processor 1002 is a central processing unit, such as a microprocessor, and can include single or multiple processors having single or multiple processing cores. Alternatively, the processor 1002 can include another type of device, or multiple devices, now existing or hereafter developed, configured for manipulating or processing information. For example, the processor 1002 can include multiple processors interconnected in any manner, including hardwired or networked, including wirelessly networked. For example, the operations of the processor 1002 can be distributed across multiple devices or units that can be coupled directly or across a local area or other suitable type of network. The processor 1002 can include a cache, or cache memory, for local storage of operating data or instructions.
  • The memory 1004 includes one or more memory components, which may each be volatile memory or non-volatile memory. For example, the volatile memory of the memory 1004 can be random access memory (RAM) (e.g., a DRAM module, such as DDR SDRAM) or another form of volatile memory. In another example, the non-volatile memory of the memory 1004 can be a disk drive, a solid state drive, flash memory, phase-change memory, or another form of non-volatile memory configured for persistent electronic information storage. The memory 1004 may also include other types of devices, now existing or hereafter developed, configured for storing data or instructions for processing by the processor 1002.
  • The memory 1004 can include data for immediate access by the processor 1002. For example, the memory 1004 can include executable instructions 1016, application data 1018, and an operating system 1020. The executable instructions 1016 can include one or more application programs, which can be loaded or copied, in whole or in part, from non-volatile memory to volatile memory to be executed by the processor 1002. For example, the executable instructions 1016 can include instructions for performing some or all of the techniques of this disclosure. The application data 1018 can include user data, database data (e.g., database catalogs or dictionaries), or the like. The operating system 1020 can be, for example, Microsoft Windows®, Mac OS X®, or Linux®; an operating system for a small device, such as a smartphone or tablet device; or an operating system for a large device, such as a mainframe computer.
  • The power source 1008 includes a source for providing power to the computing device 1000. For example, the power source 1008 can be an interface to an external power distribution system. In another example, the power source 1008 can be a battery, such as where the computing device 1000 is a mobile device or is otherwise configured to operate independently of an external power distribution system.
  • The peripherals 1010 includes one or more sensors, detectors, or other devices configured for monitoring the computing device 1000 or the environment around the computing device 1000. For example, the peripherals 1010 can include a geolocation component, such as a global positioning system location unit. In another example, the peripherals can include a temperature sensor for measuring temperatures of components of the computing device 1000, such as the processor 1002.
  • The user interface 1012 includes one or more input interfaces and/or output interfaces. An input interface may, for example, be a positional input device, such as a mouse, touchpad, touchscreen, or the like; a keyboard; or another suitable human or machine interface device. An output interface may, for example, be a display, such as a liquid crystal display, a cathode-ray tube, a light emitting diode display, or other suitable display.
  • The network interface 1014 provides a connection or link to a network (e.g., the network 114 shown in FIG. 1). The network interface 1014 can be a wired network interface or a wireless network interface. The computing device 1000 can communicate with other devices via the network interface 1014 using one or more network protocols, such as using Ethernet, TCP, IP, power line communication, Wi-Fi, Bluetooth, infrared, GPRS, GSM, CDMA, Z-Wave, ZigBee, another protocol, or a combination thereof.
  • Implementations of the computing device 1000 may differ from what is shown and described with respect to FIG. 10. In some implementations, the computing device 1000 can omit the peripherals 1010. In some implementations, the memory 1004 can be distributed across multiple devices. For example, the memory 1004 can include network-based memory or memory in multiple clients or servers performing the operations of those multiple devices. In some implementations, the application data 1018 can include functional programs, such as a web browser, a web server, a database server, another program, or a combination thereof.
  • The implementations of this disclosure can be described in terms of functional block components and various processing operations. Such functional block components can be realized by a number of hardware or software components that perform the specified functions. For example, the disclosed implementations can employ various integrated circuit components (e.g., memory elements, processing elements, logic elements, look-up tables, and the like), which can carry out a variety of functions under the control of one or more microprocessors or other control devices. Similarly, where the elements of the disclosed implementations are implemented using software programming or software elements, the systems and techniques can be implemented with a programming or scripting language, such as C, C++, Java, JavaScript, assembler, or the like, with the various algorithms being implemented with a combination of data structures, objects, processes, routines, or other programming elements.
  • Functional aspects can be implemented in algorithms that execute on one or more processors. Furthermore, the implementations of the systems and techniques disclosed herein could employ a number of conventional techniques for electronics configuration, signal processing or control, data processing, and the like. The words “mechanism” and “component” are used broadly and are not limited to mechanical or physical implementations, but can include software routines in conjunction with processors, etc.
  • Likewise, the terms “system” or “module” as used herein and in the figures, but in any event based on their context, may be understood as corresponding to a functional unit implemented using software, hardware (e.g., an integrated circuit, such as an ASIC), or a combination of software and hardware. In certain contexts, such systems or mechanisms may be understood to be a processor-implemented software system or processor-implemented software mechanism that is part of or callable by an executable program, which may itself be wholly or partly composed of such linked systems or mechanisms.
  • Implementations or portions of implementations of the above disclosure can take the form of a computer program product accessible from, for example, a computer-usable or computer-readable medium. A computer-usable or computer-readable medium can be any device that can, for example, tangibly contain, store, communicate, or transport a program or data structure for use by or in connection with any processor. The medium can be, for example, an electronic, magnetic, optical, electromagnetic, or semiconductor device.
  • Other suitable mediums are also available. Such computer-usable or computer-readable media can be referred to as non-transitory memory or media, and can include volatile memory or non-volatile memory that can change over time. A memory of an apparatus described herein, unless otherwise specified, does not have to be physically contained by the apparatus, but is one that can be accessed remotely by the apparatus, and does not have to be contiguous with other memory that might be physically contained by the apparatus.
  • While the disclosure has been described in connection with certain implementations, it is to be understood that the disclosure is not to be limited to the disclosed implementations but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures as is permitted under the law.

Claims (20)

What is claimed is:
1. A method for controlling an autonomous vehicle based on an operational risk determined for the autonomous vehicle, the method comprising:
receiving input information indicating one or more objects detected within an environment in which the autonomous vehicle is located and one or more object predictions representing expected behaviors of the one or more objects;
evaluating a current scene synthesized for the autonomous vehicle based on the input information to determine whether the current scene represents a violation of one or more first rules used for determining the operational risk, the operational risk representing a probability that the autonomous vehicle will encounter an event;
evaluating the current scene against a previous scene synthesized for the autonomous vehicle to determine whether a degree of change between the current scene and the previous scene represents a violation of one or more second rules used for determining a risk underestimation;
responsive to determining that the current scene represents the violation of the one or more first rules and that the degree of change represents the violation of the one or more second rules, generating a signal used for controlling the autonomous vehicle; and
transmitting the signal to one or more control system modules of the autonomous vehicle to cause the one or more control system modules to control the autonomous vehicle by performing a remedial action to prevent the autonomous vehicle from encountering the event.
2. The method of claim 1, wherein the signal is a first signal, wherein the method further comprises:
responsive to determining that the current scene does not represent the violation of the one or more first rules or that the degree of change does not represent the violation of the one or more second rules, generating a second signal used for controlling the autonomous vehicle; and
transmitting the second signal to the one or more control system modules of the autonomous vehicle to cause the one or more control system modules to control the autonomous vehicle by maintaining a safe state.
3. The method of claim 1, wherein the one or more first rules correspond to static scene behaviors of the current scene and the one or more second rules correspond to abnormal dynamic scene behaviors of the current scene and previous abnormal dynamic scene behaviors of the previous scene.
4. The method of claim 3, wherein the method further comprises:
storing, within a buffer, data indicative of one or more of the current scene, the previous scene, the static scene behaviors, the abnormal dynamic scene behaviors, or the previous abnormal dynamic scene behaviors.
5. The method of claim 4, wherein the method further comprises:
determining that the previous abnormal dynamic scene behaviors are unrelated to the operational risk; and
responsive to determining that the previous abnormal dynamic scene behaviors are unrelated to the operational risk, removing, from the buffer, data indicative of the previous abnormal dynamic scene behaviors.
6. The method of claim 1, wherein the input information further indicates one or more actuator control constraints representing actuation limits for the autonomous vehicle, wherein the remedial action is initiated based on the one or more actuator control constraints.
7. A method for controlling an autonomous vehicle based on an operational risk determined for the autonomous vehicle, the method comprising:
receiving input information indicating one or more objects detected within an environment in which the autonomous vehicle is located and one or more object predictions representing expected behaviors of the one or more objects;
identifying a planned path representing an intended sequence of actions for the autonomous vehicle to travel within the environment in which the autonomous vehicle is located;
determining a first operational risk for the autonomous vehicle based on the input information and based on the planned path, the first operational risk representing a probability that the autonomous vehicle will encounter an event along the planned path;
determining a second operational risk for the autonomous vehicle based on the planned path and based on one or more perturbations applied to the planned path, the second operational risk representing a probability that the autonomous vehicle will encounter the event along the planned path based on the one or more perturbations applied to the planned path;
determining whether a normalized difference between the first operational risk and the second operational risk exceeds a sensitivity threshold;
responsive to determining that the normalized difference between the first operational risk and the second operational risk exceeds the sensitivity threshold, generating a signal used for controlling the autonomous vehicle; and
transmitting the signal to one or more control system modules of the autonomous vehicle to cause the one or more control system modules to control the autonomous vehicle by performing a remedial action to prevent the autonomous vehicle from encountering the event.
8. The method of claim 7, wherein the signal indicates that the first operational risk represents a risk underestimation for the autonomous vehicle.
9. The method of claim 7, wherein the signal is a first signal, wherein the method further comprises:
responsive to determining that the difference between the first operational risk and the second operational risk is below the sensitivity threshold, generating a second signal used for controlling the autonomous vehicle; and
transmitting the second signal to the one or more control system modules of the autonomous vehicle to cause the one or more control system modules to control the autonomous vehicle by maintaining a safe state associated with the planned path,
wherein the second signal indicates that the second operational risk represents an acceptable amount of change to the first operational risk.
10. The method of claim 7, wherein the method further comprises:
using a maximized sensitivity ratio to identify the one or more perturbations from a set of candidate perturbations, the maximized sensitivity ratio corresponding to a maximum acceptable risk increase normalized by a minimal perturbation magnitude.
11. The method of claim 7, wherein the method further comprises:
determining that first parameters for the one or more perturbations are discrete; and
responsive to determining that the first parameters are discrete, retrieving second parameters for normalizing the second operational risk from a lookup table.
12. The method of claim 7, wherein each of the one or more perturbations represents a temporal or spatial adjustment to one or more actions of the intended sequence of actions.
13. A system for controlling an autonomous vehicle based on an operational risk determined for the autonomous vehicle, the system comprising:
an operational risk monitor module that receives input information associated with an environment in which the autonomous vehicle is located from control system modules of the autonomous vehicle and indicates an action for performance using one or more of the control system modules based on an operational risk determined for the autonomous vehicle using the input information; and
a memory buffer allocated within a memory of a computing device running the operational risk monitor module, wherein the memory buffer stores the information received from the control system modules, information indicative of previous operational risk estimations determined using the operational risk monitor module, and information indicative of scenes previously synthesized for the planned path.
14. The system of claim 13, wherein the operational risk monitor module uses a heuristic monitor to indicate the action based on a violation of one or more first rules associated with a static scene behavior of a current scene synthesized for the autonomous vehicle and based on a violation of one or more second rules associated with an abnormal dynamic scene behavior of the current scene and with an abnormal dynamic scene behavior of a previous scene synthesized for the autonomous vehicle.
15. The system of claim 14, wherein the violation of the one or more first rules is determined by evaluating the current scene based on the input information, wherein the violation of the one or more second rules is determined by evaluating the current scene against the previous scene.
16. The system of claim 14, wherein the action is a remedial action where the current scene represents the violation of the one or more first rules and where the degree of change represents the violation of the one or more second rules.
17. The system of claim 13, wherein the operational risk monitor module uses a sensitivity monitor to indicate the action based on a normalized difference between a first operational risk and a second operational risk, wherein the first operational risk and the second operational risk both correspond to a planned path representing a previously determined intended sequence of actions for the autonomous vehicle to travel within the environment in which the autonomous vehicle is located.
18. The system of claim 17, wherein the first operational risk is determined based on the input information and based on the planned path, wherein the first operational risk represents a probability that the autonomous vehicle will encounter an event along the planned path, and
wherein the second operational risk is determined based on the planned path and based on one or more perturbations applied to the planned path, wherein the second operational risk represents a probability that the autonomous vehicle will encounter the event along the planned path based on the one or more perturbations applied to the planned path.
19. The system of claim 17, wherein the action is a remedial action where the normalized difference exceeds a sensitivity threshold.
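Claims 17-19 compare the operational risks of a planned path with and without perturbations and act when the normalized difference exceeds a sensitivity threshold. An illustrative sketch; normalizing by the perturbation magnitude is an assumption consistent with claim 10, not a claimed requirement:

```python
def sensitivity_monitor(first_risk: float,
                        second_risk: float,
                        perturbation_magnitude: float,
                        sensitivity_threshold: float):
    """Indicate a remedial action when the normalized risk difference is too large.

    first_risk:  probability of encountering the event along the planned path.
    second_risk: the same probability with the perturbations applied.
    perturbation_magnitude: assumed positive normalizer (see claim 10).
    """
    normalized_difference = abs(second_risk - first_risk) / perturbation_magnitude
    if normalized_difference > sensitivity_threshold:
        return "remedial_action"  # claim 19: difference exceeds the threshold
    return None  # acceptable change: maintain the safe state (cf. claim 9)
```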
20. The system of claim 13, wherein the input information received from the control system modules includes at least one of information indicative of one or more objects detected within the environment in which the autonomous vehicle is located, information indicative of one or more object predictions representing expected behaviors of the one or more objects, or information indicative of one or more planned paths.
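The claim 20 input information could be grouped in a hypothetical container such as the following, where every field name is an assumption and, per the claim, at least one field would be populated:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ControlSystemInputs:
    """Hypothetical grouping of the claim 20 input information."""
    detected_objects: Optional[List[object]] = None    # objects in the environment
    object_predictions: Optional[List[object]] = None  # expected object behaviors
    planned_paths: Optional[List[object]] = None       # one or more planned paths
```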
US17/587,280 2019-04-30 2022-01-28 Operational Risk Assessment for Autonomous Vehicle Control Abandoned US20220153264A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/587,280 US20220153264A1 (en) 2019-04-30 2022-01-28 Operational Risk Assessment for Autonomous Vehicle Control

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US16/399,901 US11198431B2 (en) 2019-04-30 2019-04-30 Operational risk assessment for autonomous vehicle control
US16/404,208 US11235761B2 (en) 2019-04-30 2019-05-06 Operational risk assessment for autonomous vehicle control
US17/587,280 US20220153264A1 (en) 2019-04-30 2022-01-28 Operational Risk Assessment for Autonomous Vehicle Control

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/404,208 Continuation US11235761B2 (en) 2019-04-30 2019-05-06 Operational risk assessment for autonomous vehicle control

Publications (1)

Publication Number Publication Date
US20220153264A1 2022-05-19

Family

ID=73016129

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/404,208 Active 2039-09-25 US11235761B2 (en) 2019-04-30 2019-05-06 Operational risk assessment for autonomous vehicle control
US17/587,280 Abandoned US20220153264A1 (en) 2019-04-30 2022-01-28 Operational Risk Assessment for Autonomous Vehicle Control

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/404,208 Active 2039-09-25 US11235761B2 (en) 2019-04-30 2019-05-06 Operational risk assessment for autonomous vehicle control

Country Status (1)

Country Link
US (2) US11235761B2 (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932462B (en) * 2017-05-27 2021-07-16 华为技术有限公司 Driving intention determining method and device
US11198431B2 (en) * 2019-04-30 2021-12-14 Retrospect Technology, LLC Operational risk assessment for autonomous vehicle control
US11300977B2 (en) * 2019-05-01 2022-04-12 Smartdrive Systems, Inc. Systems and methods for creating and using risk profiles for fleet management of a fleet of vehicles
GB2589245B (en) * 2019-06-04 2023-01-25 Motional Ad Llc Autonomous vehicle operation using linear temporal logic
KR20190104488A (en) * 2019-08-21 2019-09-10 엘지전자 주식회사 Artificial intelligence robot for managing movement of object using artificial intelligence and operating method thereof
JP7136761B2 (en) * 2019-11-12 2022-09-13 本田技研工業株式会社 Risk estimation device and vehicle control device
US11352023B2 (en) 2020-07-01 2022-06-07 May Mobility, Inc. Method and system for dynamically curating autonomous vehicle policies
JP7450481B2 (en) * 2020-07-14 2024-03-15 本田技研工業株式会社 Mobile object control device, mobile object, mobile object control method, and program
US11644835B2 (en) * 2020-07-29 2023-05-09 Toyota Research Institute, Inc. Game-theoretic planning for risk-aware interactive agents
US20220080962A1 (en) * 2020-09-14 2022-03-17 Motional Ad Llc Vehicle operation using a behavioral rule model
US11603095B2 (en) * 2020-10-30 2023-03-14 Zoox, Inc. Collision avoidance planning system
US11868137B2 (en) * 2020-11-12 2024-01-09 Honda Motor Co., Ltd. Systems and methods for path planning with latent state inference and graphical relationships
US20220161786A1 (en) * 2020-11-24 2022-05-26 Hyundai Motor Company System for evaluating risk values associated with object on road for vehicle and method for the same
US20220197236A1 (en) * 2020-12-18 2022-06-23 Rockwell Collins, Inc. Hierarchical high integrity automation system
WO2023102116A1 (en) * 2021-12-02 2023-06-08 May Mobility, Inc. Method and system for feasibility-based operation of an autonomous agent
EP4220581A1 (en) * 2022-01-28 2023-08-02 Zenseact AB Estimation of risk exposure for autonomous vehicles
WO2023154568A1 (en) 2022-02-14 2023-08-17 May Mobility, Inc. Method and system for conditional operation of an autonomous agent

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102004008894A1 (en) 2004-02-24 2005-09-08 Robert Bosch Gmbh Safety system for a means of transportation and related method
EP2095351B1 (en) 2006-10-13 2014-06-25 Continental Teves AG & Co. oHG System for determining objects
DE102014004110A1 (en) * 2014-03-21 2015-09-24 Wabco Gmbh Method for operating an autonomously operating driver safety or driver assistance system of a motor vehicle
US10599155B1 (en) 2014-05-20 2020-03-24 State Farm Mutual Automobile Insurance Company Autonomous vehicle operation feature monitoring and evaluation of effectiveness
WO2017127596A1 (en) 2016-01-22 2017-07-27 Russell David Wayne System and method for safe positive control electronic processing for autonomous vehicles
US10493936B1 (en) * 2016-01-22 2019-12-03 State Farm Mutual Automobile Insurance Company Detecting and responding to autonomous vehicle collisions
US10942038B2 (en) * 2016-02-15 2021-03-09 Allstate Insurance Company Real time risk assessment and operational changes with semi-autonomous vehicles

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120109610A1 (en) * 2009-03-05 2012-05-03 Massachusetts Institute Of Technology Methods and apparati for predicting and quantifying threat being experienced by a modeled system
US20110190972A1 (en) * 2010-02-02 2011-08-04 Gm Global Technology Operations, Inc. Grid unlock
US20170139411A1 (en) * 2015-11-16 2017-05-18 Polysync Technologies, Inc. Autonomous Vehicle Platform and Safety Architecture
US20180089911A1 (en) * 2016-09-23 2018-03-29 Kpit Technologies Limited Autonomous system validation
US20190325580A1 (en) * 2019-06-26 2019-10-24 Intel Corporation Surround camera system with seamless stitching for arbitrary viewpoint selection

Also Published As

Publication number Publication date
US11235761B2 (en) 2022-02-01
US20200346643A1 (en) 2020-11-05

Similar Documents

Publication Publication Date Title
US11235761B2 (en) Operational risk assessment for autonomous vehicle control
US11198431B2 (en) Operational risk assessment for autonomous vehicle control
US11040726B2 (en) Alarm system of autonomous driving vehicles (ADVs)
CN110997387B (en) Risk handling for vehicles with autonomous driving capability
US20210356962A1 (en) Planning autonomous motion
US10730531B1 (en) Machine-learning based vehicle motion control system
JP6418574B2 (en) Risk estimation device, risk estimation method, and computer program for risk estimation
US11553363B1 (en) Systems and methods for assessing vehicle data transmission capabilities
US11769403B2 (en) Systems and methods for fiber optic based vehicle-direction detection
US20210124355A1 (en) Approaches for encoding environmental information
US11816900B2 (en) Approaches for encoding environmental information
JP2022532941A (en) Devices and methods for processing vehicle signals to calculate behavioral risk measures
US20220198921A1 (en) Data collection and modeling systems and methods for autonomous vehicles
US20230356751A1 (en) Malicious event detection for autonomous vehicles
CN112446466A (en) Measuring confidence in deep neural networks
WO2022130875A1 (en) Risk management device, risk management method, and risk management system
US20230182754A1 (en) Determining an anomalous event from a scenario and an action of interest
EP3454269A1 (en) Planning autonomous motion
US20230196787A1 (en) Estimating object uncertainty using a pre-non-maximum suppression ensemble
US20240092375A1 (en) Autonomous vehicle sensor calibration algorithm evaluation
US20240028035A1 (en) Planning autonomous motion
US20240101130A1 (en) Maintenance of autonomous vehicle tests
US20230331252A1 (en) Autonomous vehicle risk evaluation
US20240059311A1 (en) Automated window closure
WO2024064286A1 (en) Microweather classification

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION