US10345811B2 - Method and apparatus for scenario generation and parametric sweeps for the development and evaluation of autonomous driving systems

Info

Publication number
US10345811B2
Authority
US
United States
Prior art keywords
scenario
parametric variation
complexity
driving
parametric
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US15/811,878
Other versions
US20190146492A1 (en)
Inventor
Matthew E. Phillips
Dylan T. Bergstedt
Nandan Thor
Jaehoon Choe
Michael J. Daily
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GM Global Technology Operations LLC
Original Assignee
GM Global Technology Operations LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GM Global Technology Operations LLC filed Critical GM Global Technology Operations LLC
Priority to US15/811,878 (US10345811B2)
Assigned to GM Global Technology Operations LLC (assignors: Bergstedt, Dylan T.; Daily, Michael J.; Thor, Nandan; Choe, Jaehoon)
Assigned to GM Global Technology Operations LLC (assignors: Phillips, Matthew E.; Bergstedt, Dylan T.; Daily, Michael J.; Thor, Nandan; Choe, Jaehoon)
Priority to CN201811336120.3A (CN109774724B)
Priority to DE102018128290.7A (DE102018128290A1)
Publication of US20190146492A1
Application granted
Publication of US10345811B2
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00: Teaching not covered by other main groups of this subclass
    • G09B19/16: Control of vehicles or other craft
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/0088: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots, characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00: Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02: Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion, electric
    • G05B13/04: Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion, electric, involving the use of models or simulators
    • G05B13/041: Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion, electric, involving the use of models or simulators in which a variable is automatically adjusted to optimise the performance
    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00: Registering or indicating the working of vehicles
    • G07C5/08: Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C5/0841: Registering performance data
    • G07C5/085: Registering performance data using electronic data carriers
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00: Teaching not covered by other main groups of this subclass
    • G09B19/16: Control of vehicles or other craft
    • G09B19/167: Control of land vehicles
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00: Simulators for teaching or training purposes
    • G09B9/02: Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B9/04: Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B17/00: Systems involving the use of models or simulators of said systems
    • G05B17/02: Systems involving the use of models or simulators of said systems, electric
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/16: Anti-collision systems

Definitions

  • The method is then operative to utilize the first of the parametric variations and to simulate the driving environment 320.
  • The current system and method utilizes a set of parametric variations in order to train an autonomous cognitive system and to gauge the performance of said system in a simulated environment.
  • The system is operative to create a highly complex, diverse set of scenarios that may be encountered while driving. Specific parameters are then swept in simulation, including: weather (rain, fog, and snow), traffic velocity, aggressiveness of traffic, presence of pedestrians/cyclists, reactiveness of other actors in the scene, sensor health/percept quality, and the lane keeping ability of each agent.
  • The parameter sweeps are run across a variety of situations that have previously proven challenging for autonomous vehicles, for example, a left turn at a busy intersection.
  • The first subtask is approaching the stop sign, the second subtask is stopping at the stop sign, the third subtask is making the left turn, and the fourth subtask is completing the turn and matching surrounding traffic speed. Each subtask is assigned an individual complexity grade for each complexity category.
  • The data acquired can be used to quantify the complexity of the scenario, evaluate the performance of a system, or teach a system about traffic behavior and realistic scenarios. This and other scenarios are simulated, as each scenario presents unique challenges that can be quantified by a complexity metric. These scenarios include rare and challenging situations that result in generally high complexity scores and that are uncommonly encountered during driver testing. Variables are then swept for each scenario.
  • The average preferred walking speed for adult males and females has been experimentally determined to be 1.2 m/s to 1.4 m/s (≈3 mph). The speed of pedestrians in testing may be swept from 1 to 3 m/s in order to cover a large range of walking speeds.
  • Average cycling speed is approximately 4.5 m/s (≈10 mph). Simulated bikes in the scenario may be assigned velocities ranging from 4.5 m/s to 8.5 m/s, for example, to display leisurely and fast cycling paces.
  • The parameter of weather may also be swept. Precipitation and visibility due to fog are manipulated in order to reproduce realistic weather conditions. The National Weather Service issues a fog advisory when visibility falls below 1/4 mile (≈402 meters). Visibility due to fog may be set from 402 meters down to 50 meters in order to demonstrate a wide range of possible fog conditions.
  • The severity of rain is altered by changing the density of rain drops per cubic meter (184-384) and altering the diameter of each drop (0.77 mm-2.03 mm). Simulated models may employ real-world physics where larger drops fall more quickly.
  • The snowflake diameter is changed between trials (2.5 mm-20 mm), with larger flakes indicating heavier snowfall. Speed and density of the snowflakes may remain constant or change with atmospheric conditions. These sweep ranges are collected in the sketch below.
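Collecting the numeric ranges above into one place, a sweep configuration might look like the following sketch; the intermediate step values within each range are assumptions, since the text gives only the endpoints.

```python
# Illustrative sweep ranges taken from the values above; step counts within
# each range are assumptions, as the text specifies only the endpoints.
PARAMETER_SWEEPS = {
    "pedestrian_speed_mps":  [1.0, 1.5, 2.0, 2.5, 3.0],   # walking speeds
    "cyclist_speed_mps":     [4.5, 5.5, 6.5, 7.5, 8.5],   # leisurely to fast
    "fog_visibility_m":      [402, 300, 200, 100, 50],    # advisory level downward
    "rain_drops_per_m3":     [184, 234, 284, 334, 384],   # rain density
    "rain_drop_diameter_mm": [0.77, 1.19, 1.61, 2.03],    # larger drops fall faster
    "snowflake_diameter_mm": [2.5, 8.3, 14.2, 20.0],      # larger flakes, heavier snow
}
```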

Abstract

The present application generally relates to methods and apparatus for evaluating driving performance under a plurality of driving scenarios and conditions. More specifically, the application teaches a method and apparatus for testing a driving scenario repetitively while altering a parametric variation, such as fog level, in order to evaluate driving system performance under changing conditions.

Description

BACKGROUND
The present application generally relates to vehicle control systems and autonomous vehicles. More specifically, the application teaches a method and apparatus for evaluating and quantifying the complexity of events, situations, and scenarios developed within simulation as a measure to assess, and subsequently train, a cognitive model of autonomous driving.
BACKGROUND INFORMATION
In general, an autonomous vehicle is a vehicle that is capable of monitoring external information through vehicle sensors, recognizing a road situation in response to the external information, and operating the vehicle without manipulation by a vehicle operator. Autonomous vehicle software is tested, evaluated and refined by running the software against various test scenarios to determine the performance of the software and the frequency of success and failure. It is desirable to expose the vehicle software to a variety of potentially challenging situations in order to analyze performance and train new systems on a large set of meaningful data, providing enhanced insight into the ability of these systems. It is desirable to perform verification and validation for the system in a simulated environment to ensure that, once deployed, the system will be less likely to fail in certain complex situations. The system may also be used to test the performance of other systems of autonomous control, for example automated braking systems, given that the necessary parameters are simulated correctly within the vehicle. Therefore, it is desirable to be able to simulate as many possible driving scenarios and conditions as possible in order to identify system weaknesses and to improve the system in light of those weaknesses.
The above information disclosed in this background section is only for enhancement of understanding of the background of the invention and therefore it may contain information that does not form the prior art that is already known in this country to a person of ordinary skill in the art.
SUMMARY
Embodiments according to the present disclosure provide a number of advantages. For example, embodiments according to the present disclosure may enable testing of autonomous vehicle software, subsystems and the like. This system may further be employed to test other control system software and is not limited to autonomous vehicles.
In accordance with an aspect of the present invention, an apparatus is provided, the apparatus comprising a sensor interface for generating sensor data for coupling to a vehicle control system, a control system interface for receiving control data from the vehicle control system, a memory for storing a first scenario wherein the first scenario is associated with a first parametric variation and a second parametric variation, and a simulator for simulating a first driving environment in response to the first scenario, the first parametric variation and the control data and assigning a first performance metric in response to the simulation of the first driving environment, the simulator further operative to simulate a second driving environment in response to the first scenario, the second parametric variation and the control data and assigning a second performance metric in response to the simulation of the second driving environment.
In accordance with another aspect of the present invention, a method is provided, the method comprising receiving a driving scenario, a first parametric variation and a second parametric variation, simulating the driving scenario using the first parametric variation, evaluating a first driver performance in response to the driving scenario, the first parametric variation and a first control data, assigning a first performance metric in response to the first driver performance, simulating the driving scenario using the second parametric variation, evaluating a second driver performance in response to the driving scenario, the second parametric variation and a second control data, and assigning a second performance metric in response to the second driver performance.
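The method above can be read as a simple loop over parametric variations. The following minimal sketch, with hypothetical `ParametricVariation`, `simulate`, and `score` stand-ins that are not part of the claims, illustrates simulating one scenario once per variation and assigning a performance metric to each run.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ParametricVariation:
    name: str         # e.g. "fog_visibility_50m" (illustrative)
    parameters: dict  # e.g. {"fog_visibility_m": 50}

def evaluate_scenario(
    scenario: dict,
    variations: List[ParametricVariation],
    simulate: Callable[[dict, dict], dict],  # runs the simulator, returns control data
    score: Callable[[dict], float],          # maps control data to a performance metric
) -> Dict[str, float]:
    """Simulate one driving scenario once per parametric variation and
    assign a performance metric to each simulated run."""
    metrics = {}
    for variation in variations:
        control_data = simulate(scenario, variation.parameters)
        metrics[variation.name] = score(control_data)
    return metrics
```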
The above advantage and other advantages and features of the present disclosure will be apparent from the following detailed description of the preferred embodiments when taken in connection with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
The above-mentioned and other features and advantages of this invention, and the manner of attaining them, will become more apparent and the invention will be better understood by reference to the following description of embodiments of the invention taken in conjunction with the accompanying drawings, wherein:
FIG. 1 is an exemplary left turn scenario according to an embodiment.
FIG. 2 is an exemplary apparatus for implementing the method for autonomous system performance metric generation and benchmarking according to an embodiment.
FIG. 3 is an exemplary method for scenario generation and parametric sweeps for the development & evaluation of autonomous driving systems according to an embodiment.
The exemplifications set out herein illustrate preferred embodiments of the invention, and such exemplifications are not to be construed as limiting the scope of the invention in any manner.
DETAILED DESCRIPTION
The following detailed description is merely exemplary in nature and is not intended to limit the disclosure or the application and uses thereof. Furthermore, there is no intention to be bound by any theory presented in the preceding background or the following detailed description. For example, the algorithms, software and systems of the present invention have particular application for use on a vehicle. However, as will be appreciated by those skilled in the art, the invention may have other applications.
Driving complexity is typically measured in relation to specific scenarios along with the rules of the road and goals of driving. For example, in a left turn scenario, the timing of the decision to execute the left turn at a T-junction is a critical moment with numerous contributing complexity factors which ultimately contribute to the resultant behaviors and outcomes. In another exemplary scenario, a driver or pedestrian in or near a stopped vehicle may wave traffic by if the car is blocking a portion of the road. Successful navigation of the stopped car and oncoming traffic requires the driver to observe, assess, and judge the complex factors and potential dangers based on other drivers' behavior while negotiating this complex situation.
In determining and quantifying driving complexity it is desirable to emphasize the development, parametric variations, and quantification of scenarios where current AI or deep-learning based autonomous driving systems have the poorest performance. Complexity can also be quantified theoretically and measured in a variety of ways. For example, certain behavioral measures allow researchers to quantify driving performance, such as vehicle distance to center of the lane, or following distance to other traffic. In addition, a human's performance on a driving task can be assessed with behavioral and neurophysiological metrics of engagement, performance, and factors contributing to lower performance. The behavioral measures include reaction time for decisions and behaviors including perceptual detection, discrimination, and time on task.
Typical driving failures result from human error and procedural failures. Neural measures using electroencephalograms and other non-invasive brain imaging techniques include cognitive workload, engagement and the cognitive state of the operator, such as fatigue, spatial attention etc., in the subtask processes. The driving control inputs can also be used to assess the performance given a target or ideal condition, decision, or path (e.g. motor tracking errors). The driver's prior experience and access to the experience and knowledge are also critical factors in decision making performance. The decision processes for drivers in highly complex situations are a specific emphasis for the design of the scenarios and the complexity and grading metrics of both human in loop (HIL) and autonomous driving examples from these scenarios. The goal is to initially train the cognitive model using the “best” HIL examples of the complex scenarios, and then to generate good and bad training data using an autonomous control system and grading the subsequent exemplars on a cluster.
In research concerning HIL driver behavior and traffic safety, complexity measures are often derived to quantify the difficulty of traffic situations and assess performance. This depends on a number of environmental factors, and metrics such as traffic density, driver agent behavior, occupancy and mean speed ground truths as generated by traffic control cameras, and overall configuration of traffic, roads, and percept qualia. In order to train the cognitive driving system, all of these variables must be manipulated in automated fashion in order to create a rich library of scenarios from which to generate accurate semantic information and generate novel behaviors that meet or exceed the capability of human drivers. The current system utilizes a sweep of these parameters to produce scenarios of varying complexity and provide rapid iterations and variations to scenarios that would be infeasible to replicate in real-world driving contexts.
Within a scenario, an ‘episode’ is defined as a discrete set of ‘events’ and is the top-level hierarchy of the cognitive system. The episode can be considered the basic unit of memory within the cognitive system that defines sequences of continuous phenomena describing an instance of some vehicular scenario, e.g. a left turn at a busy intersection. A complete episode comprises smaller sequences called ‘events,’ which are specific, stereotyped occurrences that may be shared across multiple episodes. In the example above, some events comprising the episode can include items such as the velocity of cross traffic, available gaps within the traffic, trajectory of other vehicles/pedestrians, or the current lane position of the self-vehicle. Such discretized phenomena are not necessarily unique to the circumstances of the left turn event, and could be present in other episodes; for example, pedestrians may be present in an episode describing a parking scenario.
Each event is defined by percepts, or observations of the environment derived from data provided by internal and external sensor systems. These percepts are collected as ‘tokens’ and consist of a package of data streamed from sensor systems in real time. These data are then processed and used to define events. Percepts include critical information about the world, such as lane positions, turn lanes, and other environmental conditions integral to the act of driving. Also integral are properties of agents within that world, such as object definition, allocentric velocity, heading, etc., and aspects of the self-vehicle, such as egocentric velocity, heading, etc. Tokens are streaming percepts that are assembled to define discrete units of scenario states, called events. Then, a collection of observed events is learned through real-world driving and simulation to generate end-to-end scenarios called episodes, which are stored as the major units of memory.
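One way to picture the token/event/episode hierarchy described above is as nested data structures. The sketch below is illustrative only; the field names are assumptions rather than definitions taken from the specification.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Token:
    """A streamed percept sample, e.g. a lane position or an agent's velocity."""
    timestamp: float
    source: str    # the sensor or 'self port' that produced the percept
    payload: dict  # raw percept values

@dataclass
class Event:
    """A stereotyped occurrence assembled from tokens; events may be shared
    across multiple episodes (e.g. a gap in cross traffic)."""
    label: str
    tokens: List[Token] = field(default_factory=list)

@dataclass
class Episode:
    """Top-level unit of memory: a discrete sequence of events describing one
    vehicular scenario, e.g. a left turn at a busy intersection."""
    label: str
    events: List[Event] = field(default_factory=list)
```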
It is desirable to learn more episodes than is feasible through passive collection during real-world driving, because the nature of the cognitive processor demands relevant, practical experience in order to more thoroughly learn and subsequently improve driving performance. In contrast to deterministic rule-based systems, cognitive processors must be exposed to as much real-world data as possible, including subtle variations of a given scenario that it may be required to evaluate in a deployed environment. Unfortunately, in situ exposure to develop a cognitive model in a physical vehicle is resource-intensive, time-consuming, and inconsistent; that is, desired variations to a scenario must be encountered by chance through unknown iterations of human-assisted driving, and/or painstakingly replicated in the physical world on a closed circuit. As a result, scenarios within realistic simulation have been produced that allow rapid iteration of various world states, such as road lane configuration, traffic scenarios, self-vehicle properties, and non-self agent behaviors for faster-than-real-time storage of episodes. This allows for a much richer episodic memory bank, and provides a more extensive library of scenarios from which the cognitive system can undergo machine learning and generate semantic relationships, critical for generative, non-rule-based agency.
During simulation, tokens may be generated in real-time via streaming of percepts through an interface. Vehicular and environmental data are collected per simulation step and streamed to an output socket for collection into the cognitive model, which then packages collected token data into events. Percepts may be collected from vehicle ‘self port’ data, and be tokenized on a per-vehicle basis, providing an allocentric position/velocity array of every agent within the scenario. Environmental states, such as road configurations, are reconstructed in the cognitive model through data collected from lane marker sensors, which define the edges of valid lanes and provide information about intersections or thoroughfare configurations that are learned as components of episodes by the cognitive system. Non-ground-truth devices may be tokenized on a per-device basis. The ‘sensor-world’ will then take the place of ground-truth self-port data, and be subject to realistic perturbations of the percept stream, such as sensor occlusion, malfunction, or signal attenuation due to environmental factors such as rain or fog. Delineation of individual events and episodes will at first be facilitated by the production of scenarios in simulation. With the rapid collection and automation of varying events through parameter sweeps of simulation variables (e.g. lane number), the cognitive system will eventually define the temporal edges of events through utilization of grammar algorithms and hierarchical clustering techniques.
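As a rough sketch of the streaming interface described above, the snippet below serializes per-step percepts to an output socket as newline-delimited JSON. The host, port, and message shape are assumptions made for illustration, not details from the specification.

```python
import json
import socket
from typing import Iterable

def stream_tokens(sim_steps: Iterable[dict], host: str = "localhost", port: int = 9000) -> None:
    """Stream per-simulation-step vehicle and environmental percepts to an
    output socket, where the cognitive model collects them into events."""
    with socket.create_connection((host, port)) as sock:
        for step, percepts in enumerate(sim_steps):
            token = {"step": step, "percepts": percepts}
            sock.sendall((json.dumps(token) + "\n").encode("utf-8"))
```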
Turning now to FIG. 1, an exemplary left turn scenario is shown. In order to generate an accurate and meaningful complexity metric, the scenario is divided into subtasks. The T junction road surface 110 is shown where a vehicle approaches from the lower road S1 and navigates a left turn across one lane of traffic. The introduction of a complexity metric allows a complexity value to be computed at the subtask level during an autonomous driving task. By taking data that is generated during simulation and extracting a measure of complexity, the autonomous vehicle creates a basis for scenario comparison and a basis for taking action based on previous situations with similar complexity scores. Attentional demand of the autonomous system can then be based on complexity. The complexity measure ultimately feeds into the cognitive model, along with grading scores, to guide decision making. In this exemplary embodiment, the evaluation is performed on a left-hand turn scenario. Within this scenario there are many iterations that can occur based on, for example, traffic density, number of pedestrians, weather conditions, etc. In order to measure complexity, the main task of making a left-hand turn is broken down into subtasks. In this exemplary embodiment, the main task of making a left turn is broken up into four subtasks S1, S2, S3 and S4. In order to allow for scalability to other scenarios, features of the subtasks are found to build a set of guidelines for breaking the data into subtasks. Subtask 1's S1 endpoint is determined by finding the time point where the car's velocity drops below a certain acceptable stopping speed. Subtask 2's S2 endpoint is then found when the car's velocity exceeds a certain acceleration threshold. Subtask 3's S3 endpoint is located by identifying when the x-coordinate of the car stops changing. It is assumed that the endpoint of one subtask is the beginning point of the next subtask. These features that determine the end of the respective subtasks should be scalable to most simple left-turn scenarios, although this will depend on the aggressiveness of the driver, familiarity with the road, and simplicity of the left turn task.
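The endpoint rules above translate directly into a segmentation routine. The sketch below assumes a trajectory of (time, x, y, velocity) samples and illustrative speed thresholds; it is a minimal reading of the rules, not the patent's implementation.

```python
def segment_left_turn(trajectory, stop_speed=0.5, go_speed=1.0):
    """Split a left-turn trajectory into subtasks S1-S4. `trajectory` is a
    list of (t, x, y, v) samples; thresholds are in m/s and illustrative."""
    n = len(trajectory)
    # S1 ends when velocity drops below an acceptable stopping speed.
    s1_end = next(i for i in range(n) if trajectory[i][3] < stop_speed)
    # S2 ends when the car accelerates back above a threshold velocity.
    s2_end = next(i for i in range(s1_end + 1, n) if trajectory[i][3] > go_speed)
    # S3 ends when the x-coordinate stops changing (the turn is complete).
    s3_end = next(i for i in range(s2_end + 1, n)
                  if abs(trajectory[i][1] - trajectory[i - 1][1]) < 1e-3)
    # The endpoint of each subtask is the starting point of the next.
    return {"S1": trajectory[:s1_end + 1],
            "S2": trajectory[s1_end:s2_end + 1],
            "S3": trajectory[s2_end:s3_end + 1],
            "S4": trajectory[s3_end:]}
```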
The left-turn task is broken into subtasks because complexity changes based on where in the task complexity is measured. By splitting the task into subtasks, it is possible to see not only how complexity changes among different parts of the task but also how complexity changes over time. By chunking the task into subtasks, complexity can be calculated within each subtask and among subtasks as a function of time. Inherently, complexity changes from subtask 1 S1 to subtask 2 S2 demonstrate a difference in complexity over time. However, within a subtask, features are generally the same, so measuring differences among subtasks gives a non-trivial temporal difference in complexity. Generally, subtasks are determined based on differences of features, which allows for a natural temporal complexity comparison between subtasks. Certainly, there may be some features that change within a subtask, but fundamental to how subtasks are broken down is a minimization of those feature changes. Depending on the application, it may be of interest to measure how complexity changes temporally throughout the entire task or exclusively in one subtask. Since subtasks are continuous (the endpoint of one is the starting point of the next), both large-scale (throughout a task) and small-scale (throughout a subtask) temporal complexity can be calculated, and we postulate that our future efforts will extrapolate these complexity measures to a continuous or discrete event-related time domain.
Specific features calculated from the data from the simulations are used to map to the complexity parameters. For example, the weather can play a very important role in determining perceptual complexity and, to a lesser degree, the speed of decision making. A direct way to measure the number of alternatives, and therefore estimate the complexity, is to measure the mean number of lanes in each subtask. Counting the number of lanes may be a direct measure of complexity in this left turn scenario as it indicates approaching an intersection, where there are a large number of alternatives. However, the number of lanes may not be as indicative of complexity in other situations: a multi-lane highway may have four lanes in each direction but may not be as complex as even a two-way intersection. The number of lanes may only be important at lower speeds, which means that the interaction between the speed feature and the number of lanes feature may need to be considered in the future. The fundamental idea behind using the number of lanes in this scenario is that it (1) indicates an intersection in this scenario and (2) indicates a choice of lanes to merge into. In this way, counting the number of lanes can be used to compute complexity. Another measure of the number of alternatives can be taken in a temporal dimension, as opposed to a spatial dimension. In that regard, the number of gaps in traffic can be used to determine alternatives. The idea behind measuring the number of gaps in traffic in a given subtask is that it allows for a measure of how many opportunities to make the left turn were presented. This information may be used in conjunction with the number of lanes data in order to give a more complete measure of alternatives. However, with an eye towards scalability, the number of alternatives may not always scale with the gaps in traffic in any scenario where driving across traffic is not necessary. In that case, another temporal alternatives feature must be found. It is important to measure the number of alternatives both spatially and temporally regardless of the driving situation.
Fundamentally, criticality is a subjective measurement and can be thought of as the expected value of the risk of not making a decision. That is, if risk is high in one subtask relative to the other subtasks, such as when going through an intersection, criticality is high in that subtask. In the specific case of the left turn, criticality is high when approaching the stop sign and when making the left turn. In this case, criticality is high when velocity is low, such as stopping at a stop sign and starting a turn. In that regard, criticality can be measured as the inverse of the velocity, in this specific scenario. However, criticality is the most scenario-specific complexity measure and in the case of highway driving, criticality may increase as speed increases or it may increase when slowing down to exit the highway. Thus, criticality will be very different in each situation and even within situations in each subtask.
A direct measure of perceptual complexity is weather. Perceptual complexity increases drastically in heavy snow or heavy fog conditions. Another measure of perceptual complexity is the number of objects that the sensors pick up. The idea behind measuring the number of objects seen by the car in a given subtask is that the more objects that the car has to interact with, the more complex the perception by that car has to be. This feature should be relatively scalable, although the weight may change based on the scenario. For example, the number of objects may be more important than the weather when crossing a busy intersection on a clear day but the number of objects may be less important when driving on a winding desolate road in snowy conditions.
Finally, the speed of the decision can be seen as the inverse of the total length of the subtask. The larger the amount of time that the driver is in a certain subtask, the slower the decision. This is tied to the velocity of the driver as well: the higher the velocity, the higher the speed of the decision, as the driver spends less time in the specific subtask. In terms of the complexity calculation for speed of decision, the number of seconds in each subtask is calculated. Velocity is inherent in this calculation, so it is not explicitly factored into the speed of the decision. This feature is scalable: the speed of the decision is fundamentally found by taking the inverse of the length of the subtask, so the longer the subtask, the slower the speed of decision, given that subtasks are partitioned correctly.
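Collecting the feature mappings discussed above (alternatives, criticality, perceptual complexity, and speed of decision), a per-subtask complexity score might be sketched as follows. The feature names and unit weights are assumptions; in practice the weights would be tuned against the human scores described next.

```python
from typing import Optional

def subtask_complexity(features: dict, weights: Optional[dict] = None) -> dict:
    """Compute the four complexity categories for one subtask of the left-turn
    scenario. `features` holds per-subtask measurements; weights default to 1."""
    w = weights or {"alternatives": 1.0, "criticality": 1.0,
                    "perceptual": 1.0, "decision_speed": 1.0}
    # Spatial + temporal alternatives: mean lane count and gaps in traffic.
    alternatives = features["mean_lane_count"] + features["traffic_gap_count"]
    # In this scenario, criticality is high at low velocity (inverse of velocity).
    criticality = 1.0 / max(features["mean_velocity_mps"], 1e-6)
    # Perceptual load: weather severity plus the number of sensed objects.
    perceptual = features["weather_severity"] + features["object_count"]
    # Speed of decision: inverse of the subtask's duration.
    decision_speed = 1.0 / max(features["duration_s"], 1e-6)
    raw = {"alternatives": alternatives, "criticality": criticality,
           "perceptual": perceptual, "decision_speed": decision_speed}
    return {category: w[category] * value for category, value in raw.items()}
```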
The next step is to compare the calculated complexity scores with the subjective complexity scores created by a human. Complexity scores created by humans include error rate probabilities and familiarity, which are not accounted for in the calculated complexity scores. It is desirable to initially match the calculated complexity scores with the human created complexity scores in order to establish a baseline. So in order to match the calculated and created complexity scores the different subtask complexities are weighted and adjusted in order to better match the human scores. This process of comparing the algorithm-computed complexity scores to subjective human complexity scores will continue to occur as data is generated, with the process iterating back on itself to continue to fine tune the algorithm. Many human created complexity scores are aggregated and normalized in order to generalize the algorithm.
In order to determine correlation among subtasks or complexity categories and human complexity scores, the data is analyzed from a different perspective. Within a given scenario, the total complexity among complexity categories for subtask 1 is calculated. In an exemplary embodiment, the total complexity value is then divided by the total algorithm complexity score among all of the subtasks and then multiplied by 100. This gives a scaled feature that shows how much subtask 1 contributes to overall complexity for scenario 1. After the scaled complexity contributions for each subtask are calculated, the scores are further scaled within each subtask. That is, all of the subtask scores are then divided by the highest score of any subtask among all scenarios, to scale the results within the given subtask. The calculated complexity results are then compared to the human complexity scores. Similarly to the subtask scores, all of the human complexity scores are divided by the largest human complexity score among the subtasks, which allows for comparison by similar scaling.
Subtasks with a negative correlation with human complexity scores, such as approaching the stop sign and slowing down, are expected to be simple subtasks which do not greatly contribute to the overall complexity. That is, the more a subtask contributes to overall complexity, the more complex that subtask is expected to be. For example, if slowing down when approaching the stop sign (subtask 1), which is the simplest subtask in making a left turn, contributes a lot to the overall complexity, the task itself was not very difficult relative to other tasks.
The same analysis is repeated among all complexity categories. Each complexity category's total score is found among all subtasks in a given scenario. This score is then divided by the total complexity score for the scenario and then multiplied by 100. This calculation computes the relative percentage that the complexity category contributes to the total complexity in a given scenario. This calculation is repeated for every complexity category.
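A compact sketch of the two scaling steps described above, percentage contributions to the scenario total followed by peak-based normalization, is given below. The input shapes are assumptions, and the peak used for the final scaling (the highest score across all scenarios, or the largest human score) is taken as given.

```python
def contribution_percentages(scores: dict) -> dict:
    """Scale each subtask's (or category's) total complexity to the percentage
    it contributes to the scenario's overall algorithm complexity score."""
    total = sum(scores.values())
    return {key: 100.0 * value / total for key, value in scores.items()}

def scale_by_peak(scores: dict, peak: float) -> dict:
    """Divide scores by the highest observed score (e.g. across all scenarios,
    or the largest human score) so both sets share a comparable scale."""
    return {key: value / peak for key, value in scores.items()}
```

The same two functions apply unchanged whether the aggregation is by subtask or by complexity category.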
Using distributed simulation, the complexity measure of a traffic scenario can be automated and iterated rapidly to maximize the amount of scenario data available to the cognitive system. Parameters within simulation, such as traffic density, can be manipulated as needed, and parameter sweeping can be utilized to rapidly generate traffic scenarios in a wide range of complexity scores. Many thousands of variations of complexity scenarios can then be run simultaneously on distributed cluster hardware to populate the episodic memory of the cognitive learning model to expose it to a rich array of episodes that will guide driving behavior.
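One way to realize the parameter sweep and distributed execution described above is a Cartesian product over the swept variables dispatched to a worker pool; on cluster hardware the local pool would be replaced by a job scheduler. `simulate_scenario` is a hypothetical stand-in for the simulator entry point.

```python
from itertools import product
from multiprocessing import Pool

def sweep_parameters(scenario: dict, grid: dict, simulate_scenario) -> list:
    """Enumerate every combination of swept parameter values for a scenario
    and simulate the combinations in parallel."""
    keys = list(grid)
    combos = [dict(zip(keys, values)) for values in product(*grid.values())]
    with Pool() as pool:
        results = pool.starmap(simulate_scenario, [(scenario, c) for c in combos])
    return list(zip(combos, results))
```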
Returning to the exemplary embodiment, FIG. 1 depicts four stages of a left turn scenario in which complexity measures change as the self-vehicle proceeds through the turn. In the S1 phase, the car is stationary and actions are limited to go/no go. In this stage, the criticality and perceptual difficulty of the task may be altered through addition or subtraction of simulation assets such as pedestrians or parked cars that represent critical elements that prevent action, or visual obstruction/obfuscation that creates additional complexity in proceeding to S2. These elements can be scripted in order to automatically generate many situational instances that greatly diversify the conditions of S1. Similarly, the complexity of S2 can be modified by altering the speed of cross traffic, density of traffic, or type of traffic, such as truck vehicle types that limit visual range, in automated fashion. S3 can be modified by the same, but can also contain automation that adjusts the aggressiveness of cross traffic or the consistency of cross traffic. Finally, the complexity of S4 may be modified by such factors as sudden braking, road type, merging behaviors of non-self-vehicles and the presence of “surprise” obstructions, such as road debris, lane visibility, etc. Parameters that affect global decision-making, such as the presence of additional lanes, special lanes, construction, or other likely road-going scenarios, are all additional elements that are suitable for automation. Other parameters difficult or dangerous to replicate in road-going conditions, such as sensor occlusion or failures, can also be modulated to various degrees safely within simulation. Through modification of these experimental variables, there is the potential to generate thousands of permutations of a given traffic episode that will provide rich training data for the cognitive system. This rapid iteration of traffic variables, and the parallelized, cluster-based simulation of both self- and non-self vehicle actions, is the key to furnishing the cognitive model with extensive training data beyond what is feasible through real-world driving alone. The benefits of this approach are complete parametric control over environmental and vehicle situations and traffic behavior. This facilitates the testing of edge cases safely while systematically modifying the model parameters to ensure that the cognitive architecture performs at and beyond the capabilities of the best human drivers.
Turning now to FIG. 2, an exemplary apparatus 200 for implementing the method for autonomous system performance metric generation and benchmarking is shown. The apparatus 200 is operative to simulate a driving scenario for evaluating and quantifying the performance of a driving system, including an autonomous driving system. The apparatus is used to quantify the cognitive cost and effort required by a driver, or driving system, to successfully complete the scenario. The apparatus comprises a simulator 220, a sensor interface 210, a vehicle control system 230, a control system interface 240, and a memory 250. The sensor interface 210 and the control system interface 240 are operative to interact with the vehicle control system 230. The apparatus may be implemented in hardware, software, or a combination of both.
The simulator 220 is operative to simulate the driving environment and scenario and to generate control signals for controlling the sensor interface 210. The sensor interface 210 is operative to generate sensor signals that are readable by the vehicle control system 230. The sensor signals interface with the vehicle control system 230 in such a way that it appears to the vehicle control system that it is operating in the actual environment. The control system interface 240 receives control signals generated by the vehicle control system 230 and translates these control signals into data used by the simulator 220 as feedback from the scenario. The scenarios are stored in the memory 250, which is accessed by the simulator. The simulator is further operative to store additional information and updates to the scenarios and complexity metrics in the memory 250.
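The closed loop of FIG. 2 might be sketched as follows; all class and method names here are hypothetical stand-ins, with trivial stubs in place of the real simulator and vehicle control system.

class Simulator:                           # stand-in driving simulator (220)
    def reset(self, scenario):
        self.t = 0
        return {"t": 0, "scenario": scenario}
    def step(self, actuation):
        self.t += 1
        return {"t": self.t, "last_actuation": actuation}
    def metrics(self):
        return {"steps": self.t}

class SensorInterface:                     # 210: simulator state -> sensor signals
    def encode(self, state):
        return {"camera": state, "speed_mps": 0.0}

class ControlInterface:                    # 240: control signals -> simulator feedback
    def decode(self, control_signals):
        return control_signals.get("steer", 0.0)

class VehicleControlSystem:                # 230: the driving system under test
    def step(self, sensor_signals):
        return {"steer": 0.1}

def run(scenario, steps=100):
    sim, sensors, controls, vcs = Simulator(), SensorInterface(), ControlInterface(), VehicleControlSystem()
    state = sim.reset(scenario)
    for _ in range(steps):
        control = vcs.step(sensors.encode(state))   # VCS sees signals as if from the real environment
        state = sim.step(controls.decode(control))  # control output is fed back into the simulation
    return sim.metrics()

print(run({"name": "left_turn"}))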
Turning now to FIG. 3, an exemplary method for scenario generation and parametric sweeps for the development and evaluation of autonomous driving systems 300 is shown. The proposed method and corresponding system are distinct from other simulation examples because of their ability to be applied to a multitude of potentially complex driving scenarios with realistic actor behavior. Previously, simulation has focused on testing one specific feature of an autonomous vehicle. The exemplary parameter sweep, focusing on situations that have proven challenging for autonomous vehicles in the past, allows for a higher degree of confidence with regard to the real-world readiness of systems that perform well across driving scenarios under all conditions of the parameter sweeps. By creating an in-depth parameter sweep technique that involves both actor behavior and physical characteristics across a diverse set of potentially complex scenarios, the system is operative to compare the performance of several autonomous systems and features. The system is further operative to generate many variations of highly complex, potentially dangerous examples that may be prohibitive for real-world testing, or that a human-assisted prototype AI may not encounter for millions of miles. By focusing on these complex situations and automating parameter sweeps involving the difficult situations, the system is able to quickly scale up the acquisition of relevant, useful data in much greater quantities.
The method is first operative to receive a driving scenario 310. The scenario may be received in response to a control signal generated by the simulator or may be received from a separate controller source. The scenario will have at least one set of parametric variations associated with it. This exemplary method facilitates the rapid generation of meaningful data while not expending resources on scenarios that are easily handled by the autonomous control system. The result is faster training with a reduced number of real-world failures in rare situations. The method and system enable system developers to test many features of their autonomous vehicle system across a wide range of conditions.
The method is then operative to utilize the first parametric variation and to simulate the driving environment 320. The current system and method utilize a set of parametric variations in order to train an autonomous cognitive system and to gauge the performance of that system in a simulated environment. The system is operative to create a highly complex, diverse set of scenarios that may be encountered while driving. Specific parameters are then swept in simulation, including: weather (rain, fog, and snow), traffic velocity, aggressiveness of traffic, presence of pedestrians/cyclists, reactiveness of other actors in the scene, sensor health/percept quality, and the lane-keeping ability of each agent. The parameter sweeps are run across a variety of situations that have previously proven challenging for autonomous vehicles, for example, a left turn at a busy intersection. By running parameter sweeps across the scenarios, a large amount of meaningful data regarding the behavior of other traffic and autonomous agents in high-complexity scenarios, along with the capabilities of the system being tested, can be acquired. These sweeps also allow the system to pinpoint specific weaknesses of any control system by recording the parameters and scenarios where the system does not perform optimally. The system allows autonomous systems to be vetted for real-world use or testing. It would take a great deal of real-world driving time to witness many of the complex situations that are covered in the parameter sweeps, resulting in a long delay and uncertainty about the abilities of the system in highly complex situations. The scenario set and parameter sweeps can greatly reduce the cost and time associated with training a new autonomous vehicle feature. The feature can be validated with much greater confidence that the system will perform in real-world testing. The quality and amount of useful data also have the potential to be greatly increased due to the thorough generation of training traffic conditions.
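A minimal sketch of defining such a sweep as a parameter grid is shown below; the parameter names follow the list above, but the particular values are illustrative assumptions.

from itertools import product

sweep = {
    "weather": ["clear", "rain", "fog", "snow"],
    "traffic_velocity_mps": [10, 15, 20],
    "traffic_aggressiveness": [0.2, 0.5, 0.8],
    "pedestrians_present": [False, True],
    "actor_reactiveness": [0.5, 1.0],
    "sensor_health": [1.0, 0.7],
    "lane_keeping_noise": [0.0, 0.3],
}

# The Cartesian product of all parameter values yields one variation per combination.
variations = [dict(zip(sweep, values)) for values in product(*sweep.values())]
print(len(variations))  # 4 * 3 * 3 * 2 * 2 * 2 * 2 = 576 variations of one scenario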
In order to fully account for different experimental conditions, it is desirable to perform parameter sweeps that create scenarios providing a wealth of resulting data. Several variables are manipulated in order to mimic realistic driving conditions ranging from simple to complex. Complexity may be defined by dividing it into five subcategories, which may include, but are not limited to, criticality, alternatives, perceptual complexity, decision speed, and surprise. By mapping outputs from the simulator to these five complexity categories, an accurate measure of complexity can be computed per scenario variation. In this exemplary embodiment, the left turn task is broken down into four distinct subtasks: the first subtask is approaching the stop sign, the second subtask is stopping at the stop sign, the third subtask is making the left turn, and the fourth subtask is completing the turn and matching surrounding traffic speed. Each subtask is assigned an individual complexity grade for each complexity category. The data acquired can be used to quantify the complexity of the scenario, evaluate the performance of a system, or teach a system about traffic behavior and realistic scenarios. This and other scenarios are simulated, as each scenario presents unique challenges that can be quantified by a complexity metric. These scenarios include rare and challenging situations that result in generally high complexity scores and that are uncommonly encountered during driver testing. Variables are then swept for each scenario.
Velocity can be swept by defining the recommended speed for each scenario. Realistic deviations from this speed can then be incorporated into the traffic cars in the scenario by setting the desired speed of each traffic car equal to the recommended speed of the scenario plus or minus a random offset. The trajectory of each actor within the scenario can also be manipulated to show several possible paths an actor may choose in a real driving situation. For example, an agent leaving a parking lot may have the option to turn left, turn right, or go straight at the intersection. The paths can be manipulated manually, or an automated script can be utilized to randomly select a trajectory. Pedestrian and cyclist velocities and trajectories can be manipulated in a similar fashion. The average preferred walking speed for adult males and females has been experimentally determined to be 1.2 m/s to 1.4 m/s (≈3 mph). The speed of pedestrians in testing may be swept from 1 to 3 m/s in order to cover a large range of walking speeds. Likewise, average cycling speed is approximately 4.5 m/s (≈10 mph). Simulated bikes in the scenario may be assigned velocities ranging from 4.5 m/s to 8.5 m/s, for example, to represent leisurely and fast cycling paces.
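The velocity and trajectory randomization just described might be sketched as follows; the speed ranges mirror the text, while the helper names and the ±2 m/s offset are assumptions.

import random

def traffic_car_speed(recommended_mps, rng):
    # Desired speed = the scenario's recommended speed plus or minus a random offset.
    return recommended_mps + rng.uniform(-2.0, 2.0)

def choose_trajectory(rng):
    # An agent leaving a parking lot may turn left, turn right, or go straight.
    return rng.choice(["left", "right", "straight"])

def pedestrian_speed(rng):
    return rng.uniform(1.0, 3.0)   # m/s, swept from 1 to 3 m/s

def cyclist_speed(rng):
    return rng.uniform(4.5, 8.5)   # m/s, leisurely to fast cycling paces

rng = random.Random(42)
print(traffic_car_speed(13.4, rng), choose_trajectory(rng),
      pedestrian_speed(rng), cyclist_speed(rng))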
The parameter of weather may also be swept. Precipitation and visibility due to fog are manipulated in order to reproduce realistic weather conditions. The National Weather Service issues a fog advisory when visibility falls below ¼ mile (≈402 meters). Visibility due to fog may be set from 402 meters down to 50 meters in order to demonstrate a wide range of possible fog conditions. The severity of rain is altered by changing the density of rain drops per cubic meter (184-384) and the diameter of each drop (0.77 mm-2.03 mm). Simulated models may employ real-world physics in which larger drops fall more quickly. The snowflake diameter is changed between trials (2.5 mm-20 mm), with larger flakes indicating heavier snowfall. The speed and density of the snowflakes may remain constant or change with atmospheric conditions. At high densities of rain, fog, and snow, the situation becomes very complex. The following distance and aggressiveness of other agents in the scenario are also manipulated. Less complex examples contain fewer pedestrians. More complex trials contain pedestrians that cross unexpectedly, for example, pedestrians traversing outside of crosswalks, or the presence of a condition or object that occludes humans or other critical entities from the sensors of the autonomous vehicle. Less complex examples contain only cyclists in designated bike lanes; more complex examples contain cyclists in lanes of traffic shared with vehicles. The acceptable following distance is manipulated for vehicles; in more complex examples, shorter distances are used to lessen the gaps accessible to the autonomous vehicle during the turn. Some traffic vehicles are given the ability to switch lanes in order to pass a slow vehicle in more complex trials. Another important parameter to be swept is actor density: the number of actors present in each trial is also altered. Initially fewer actors may be present, and additional actors may be added to the experiment in order to increase complexity.
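A sketch of sampling these weather parameters into a simulator configuration follows; the field and function names are hypothetical, while the numeric ranges are those given above.

from dataclasses import dataclass
import random

@dataclass
class WeatherConfig:
    fog_visibility_m: float        # 402 m (advisory threshold) down to 50 m
    rain_density_per_m3: int       # 184-384 drops per cubic meter
    rain_drop_diameter_mm: float   # 0.77-2.03 mm; larger drops fall more quickly
    snowflake_diameter_mm: float   # 2.5-20 mm; larger flakes mean heavier snowfall

def sample_weather(rng):
    return WeatherConfig(
        fog_visibility_m=rng.uniform(50.0, 402.0),
        rain_density_per_m3=rng.randint(184, 384),
        rain_drop_diameter_mm=rng.uniform(0.77, 2.03),
        snowflake_diameter_mm=rng.uniform(2.5, 20.0),
    )

print(sample_weather(random.Random(0)))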
The method is then operative to evaluate the performance of the driving system in response to the scenario with the parametric variation 330. The method then checks whether there is an additional parametric variation to simulate 330. If so, the method is operative to utilize the next parametric variation and to simulate the driving environment 320. If not, the method may then be operative to assign a performance metric to each of the tested parametric variations and, optionally, an overall performance metric 340. The method may then assign a complexity metric to each parametric variation within the scenario and, optionally, an overall complexity metric 350. Again, the driver may be a human driver, a partial-assist autonomous driving system, or a fully autonomous driving system.
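The overall method of FIG. 3 might be sketched as the following loop over parametric variations; the simulate, evaluate, and score_complexity helpers are hypothetical stubs standing in for the simulator and metric assignment of steps 320-350.

def simulate(scenario, variation):       # stand-in for the simulator (320)
    return {"scenario": scenario, "variation": variation}

def evaluate(result):                    # stand-in performance scoring (330)
    return 1.0

def score_complexity(result):            # stand-in complexity scoring (350)
    return 0.5

def run_method(scenario, variations):
    performance, complexity = {}, {}
    for i, variation in enumerate(variations):
        result = simulate(scenario, variation)     # 320: simulate with this variation
        performance[i] = evaluate(result)          # 330: evaluate driving performance
        complexity[i] = score_complexity(result)   # 350: per-variation complexity
    # 340/350 (optional): overall metrics across all tested variations
    overall_perf = sum(performance.values()) / len(performance)
    overall_cplx = sum(complexity.values()) / len(complexity)
    return performance, complexity, overall_perf, overall_cplx

print(run_method("left_turn", [{"fog_m": 402}, {"fog_m": 50}]))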

Claims (17)

What is claimed is:
1. An apparatus comprising:
a sensor interface for generating sensor data for coupling to a vehicle control system;
a control system interface for receiving control data from the vehicle control system;
a memory for storing a first scenario wherein the first scenario is associated with a first parametric variation and a second parametric variation; and
a simulator for simulating a first driving environment in response to the first scenario, the first parametric variation and the control data and assigning a first performance metric in response to the simulation of the first driving environment, the simulator further operative to simulate a second driving environment in response to the first scenario, the second parametric variation and the control data and assigning a second performance metric in response to the simulation of the second driving environment.
2. The apparatus of claim 1 wherein the first parametric variation and the second parametric variation relate to intensity of weather.
3. The apparatus of claim 1 wherein the memory is operative to store a second scenario associated with the first parametric variation and the second parametric variation.
4. The apparatus of claim 1 wherein the apparatus is implemented in software.
5. The apparatus of claim 1 wherein the apparatus is implemented in hardware.
6. The apparatus of claim 1 wherein the simulator is further operative to generate a success rate in response to the first performance metric and the second performance metric.
7. The apparatus of claim 1 wherein the vehicle control system is an autonomous vehicle control system.
8. A method comprising:
receiving a driving scenario, a first parametric variation and a second parametric variation;
simulating the driving scenario using the first parametric variation;
evaluating a first driver performance in response to the driving scenario, the first parametric variation and a first control data;
assigning a first performance metric in response to the first driver performance;
simulating the driving scenario using the second parametric variation;
evaluating a second driver performance in response to the driving scenario, the second parametric variation and a second control data; and
assigning a second performance metric in response to the second driver performance.
9. The method of claim 8 further comprising assigning an overall performance metric in response to the first performance metric and the second performance metric.
10. The method of claim 8 wherein the first parametric variation and the second parametric variation relate to intensity of weather.
11. The method of claim 8 wherein the first parametric variation and the second parametric variation relate to a velocity of a vehicle within the driving scenario.
12. The method of claim 8 wherein the first parametric variation and the second parametric variation relate to a number of vehicles within the driving scenario.
13. The method of claim 8 wherein the first parametric variation and the second parametric variation relate to velocity of a pedestrian within the driving scenario.
14. The method of claim 8 wherein the first parametric variation and the second parametric variation relate to a visibility within the driving scenario.
15. The method of claim 8 further comprising receiving a third parametric variation, simulating the driving scenario using the third parametric variation and assigning a third performance metric in response to a third driver performance.
16. The method of claim 15 further comprising assigning an overall performance metric in response to the first performance metric, the second performance metric and the third performance metric.
17. The method of claim 8 wherein the first parametric variation and the second parametric variation relate to sensor performance within the driving scenario.
US15/811,878 2017-11-14 2017-11-14 Method and apparatus for scenario generation and parametric sweeps for the development and evaluation of autonomous driving systems Active US10345811B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US15/811,878 US10345811B2 (en) 2017-11-14 2017-11-14 Method and apparatus for scenario generation and parametric sweeps for the development and evaluation of autonomous driving systems
CN201811336120.3A CN109774724B (en) 2017-11-14 2018-11-09 Method and apparatus for evaluating driving performance under a variety of driving scenarios and conditions
DE102018128290.7A DE102018128290A1 (en) 2017-11-14 2018-11-12 METHOD AND DEVICE FOR PRODUCING SCENARIOS AND PARAMETRIC SWEEPS FOR THE DEVELOPMENT AND EVALUATION OF AUTONOMOUS DRIVE SYSTEMS

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/811,878 US10345811B2 (en) 2017-11-14 2017-11-14 Method and apparatus for scenario generation and parametric sweeps for the development and evaluation of autonomous driving systems

Publications (2)

Publication Number Publication Date
US20190146492A1 US20190146492A1 (en) 2019-05-16
US10345811B2 true US10345811B2 (en) 2019-07-09

Family

ID=66335801

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/811,878 Active US10345811B2 (en) 2017-11-14 2017-11-14 Method and apparatus for scenario generation and parametric sweeps for the development and evaluation of autonomous driving systems

Country Status (3)

Country Link
US (1) US10345811B2 (en)
CN (1) CN109774724B (en)
DE (1) DE102018128290A1 (en)

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109902899B (en) * 2017-12-11 2020-03-10 百度在线网络技术(北京)有限公司 Information generation method and device
US11830365B1 (en) 2018-07-02 2023-11-28 Smartdrive Systems, Inc. Systems and methods for generating data describing physical surroundings of a vehicle
US10818102B1 (en) 2018-07-02 2020-10-27 Smartdrive Systems, Inc. Systems and methods for generating and providing timely vehicle event information
US12008922B2 (en) * 2018-07-02 2024-06-11 Smartdrive Systems, Inc. Systems and methods for comparing driving performance for simulated driving
US11341295B2 (en) * 2018-09-27 2022-05-24 Intel Corporation Methods, systems, and devices for efficient computation of simulation runs
CN110263709B (en) * 2019-06-19 2021-07-16 百度在线网络技术(北京)有限公司 Driving decision mining method and device
DE102019209725B4 (en) * 2019-07-03 2023-10-26 Zf Friedrichshafen Ag Method for adjusting means of a control device
US11443130B2 (en) 2019-08-30 2022-09-13 International Business Machines Corporation Making a failure scenario using adversarial reinforcement learning background
CN111122175B (en) * 2020-01-02 2022-02-25 阿波罗智能技术(北京)有限公司 Method and device for testing automatic driving system
DE102020204867A1 (en) 2020-04-17 2021-10-21 Volkswagen Aktiengesellschaft Method and system for operating a maneuver planning of at least one at least partially automated vehicle
US11628850B2 (en) * 2020-05-05 2023-04-18 Zoox, Inc. System for generating generalized simulation scenarios
JP7530999B2 (en) 2020-06-05 2024-08-08 ガティック エーアイ インコーポレイテッド Method and system for deterministic trajectory selection based on uncertainty estimation for autonomous agents
US11124204B1 (en) 2020-06-05 2021-09-21 Gatik Ai Inc. Method and system for data-driven and modular decision making and trajectory generation of an autonomous agent
JP7538892B2 (en) 2020-06-05 2024-08-22 ガティック エーアイ インコーポレイテッド Method and system for making context-aware decisions for autonomous agents - Patents.com
US11610504B2 (en) 2020-06-17 2023-03-21 Toyota Research Institute, Inc. Systems and methods for scenario marker infrastructure
CN111803065B (en) * 2020-06-23 2023-12-26 北方工业大学 Dangerous traffic scene identification method and system based on electroencephalogram data
CN111783226B (en) * 2020-06-29 2024-03-19 北京百度网讯科技有限公司 Method, device, electronic equipment and storage medium for generating automatic driving scene measurement parameters
KR20220060404A (en) * 2020-11-04 2022-05-11 현대자동차주식회사 Method and apparatus for generating test case for dynamic verification of autonomous driving system
US11699003B2 (en) * 2020-11-16 2023-07-11 Toyota Research Institute, Inc. Portable flexible agents for simulation
CN112525551B (en) * 2020-12-10 2023-08-29 北京百度网讯科技有限公司 Drive test method, device, equipment and storage medium for automatic driving vehicle
KR20220087240A (en) * 2020-12-17 2022-06-24 현대자동차주식회사 APPARATUS AND METHOD FOR SIMULATION OF Autonomous Vehicle
US20220197280A1 (en) * 2020-12-22 2022-06-23 Uatc, Llc Systems and Methods for Error Sourcing in Autonomous Vehicle Simulation
US11847385B2 (en) * 2020-12-30 2023-12-19 Beijing Voyager Technology Co., Ltd. Variable system for simulating operation of autonomous vehicles
CN112947080B (en) * 2021-02-04 2023-04-14 中国运载火箭技术研究院 Scene parameter transformation-based intelligent decision model performance evaluation system
CN113051685B (en) * 2021-03-26 2024-03-19 长安大学 Numerical control equipment health state evaluation method, system, equipment and storage medium
US12017686B1 (en) * 2021-08-11 2024-06-25 Waymo Llc Assessing surprise for autonomous vehicles
US20230056475A1 (en) * 2021-08-20 2023-02-23 Toyota Research Institute, Inc. System and method for a scenario-based event trigger
WO2023114396A1 (en) 2021-12-16 2023-06-22 Gatik Ai Inc. Method and system for addressing failure in an autonomous agent
US12037011B2 (en) 2021-12-16 2024-07-16 Gatik Ai Inc. Method and system for expanding the operational design domain of an autonomous agent
DE102022117619A1 (en) 2022-07-14 2024-01-25 Dr. Ing. H.C. F. Porsche Aktiengesellschaft Computer-implemented method for determining the error criticality of a braking device of a vehicle in the event of a sudden loss of braking force during a braking maneuver
US20240069505A1 (en) * 2022-08-31 2024-02-29 Gm Cruise Holdings Llc Simulating autonomous vehicle operations and outcomes for technical changes

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8204711B2 (en) * 2009-03-25 2012-06-19 GM Global Technology Operations LLC System and apparatus for managing test procedures within a hardware-in-the-loop simulation system
US9262787B2 (en) * 2013-10-18 2016-02-16 State Farm Mutual Automobile Insurance Company Assessing risk using vehicle environment information
AT513370B1 (en) * 2013-11-05 2015-11-15 Avl List Gmbh Virtual test optimization for driver assistance systems
US9165477B2 (en) * 2013-12-06 2015-10-20 Vehicle Data Science Corporation Systems and methods for building road models, driver models, and vehicle models and making predictions therefrom
DE102016205806A1 (en) * 2016-04-07 2017-10-12 Bayerische Motoren Werke Aktiengesellschaft Method for optimizing a function in a vehicle

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9187088B1 (en) * 2014-08-15 2015-11-17 Google Inc. Distribution decision trees
US10026237B1 (en) * 2015-08-28 2018-07-17 State Farm Mutual Automobile Insurance Company Shared vehicle usage, monitoring and feedback
US20170124781A1 (en) * 2015-11-04 2017-05-04 Zoox, Inc. Calibration for autonomous vehicle operation
US20170124476A1 (en) * 2015-11-04 2017-05-04 Zoox, Inc. Automated extraction of semantic information to enhance incremental mapping modifications for robotic vehicles
US20180196439A1 (en) * 2015-11-04 2018-07-12 Zoox, Inc. Adaptive autonomous vehicle planner logic
US20170169208A1 (en) * 2015-12-15 2017-06-15 Nagravision S.A. Methods and systems for validating an autonomous system that includes a dynamic-code module and a static-code module
US20180072313A1 (en) * 2016-09-13 2018-03-15 Here Global B.V. Method and apparatus for triggering vehicle sensors based on human accessory detection
US20180154899A1 (en) * 2016-12-02 2018-06-07 Starsky Robotics, Inc. Vehicle control system and method of use

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11644331B2 (en) 2020-02-28 2023-05-09 International Business Machines Corporation Probe data generating system for simulator
US11702101B2 (en) 2020-02-28 2023-07-18 International Business Machines Corporation Automatic scenario generator using a computer for autonomous driving
US11814080B2 (en) 2020-02-28 2023-11-14 International Business Machines Corporation Autonomous driving evaluation using data analysis
US20220129362A1 (en) * 2020-10-25 2022-04-28 Motional Ad Llc Metric back-propagation for subsystem performance evaluation
US11321211B1 (en) * 2020-10-25 2022-05-03 Motional Ad Llc Metric back-propagation for subsystem performance evaluation
US20220207209A1 (en) * 2020-12-30 2022-06-30 Beijing Voyager Technology Co., Ltd. Deterministic sampling of autonomous vehicle simulation variables at runtime

Also Published As

Publication number Publication date
US20190146492A1 (en) 2019-05-16
DE102018128290A1 (en) 2019-05-16
CN109774724A (en) 2019-05-21
CN109774724B (en) 2022-05-17

Similar Documents

Publication Publication Date Title
US10345811B2 (en) Method and apparatus for scenario generation and parametric sweeps for the development and evaluation of autonomous driving systems
EP3948794B1 (en) Systems and methods for generating synthetic sensor data via machine learning
Shirazi et al. Looking at intersections: a survey of intersection monitoring, behavior and safety analysis of recent studies
Young et al. Simulation of safety: A review of the state of the art in road safety simulation modelling
CN109782730B (en) Method and apparatus for autonomic system performance and rating
US20190146493A1 (en) Method And Apparatus For Autonomous System Performance And Benchmarking
Meyer-Delius et al. Probabilistic situation recognition for vehicular traffic scenarios
CN114077541A (en) Method and system for validating automatic control software for an autonomous vehicle
Guo et al. Lane change detection and prediction using real-world connected vehicle data
Yu et al. Dynamic driving environment complexity quantification method and its verification
Zec et al. Statistical sensor modelling for autonomous driving using autoregressive input-output HMMs
EP3722907A1 (en) Learning a scenario-based distribution of human driving behavior for realistic simulation model and deriving an error model of stationary and mobile sensors
Kusano et al. Collision avoidance testing of the waymo automated driving system
Platho et al. Traffic situation assessment by recognizing interrelated road users
Joo et al. A generalized driving risk assessment on high-speed highways using field theory
Ramakrishna et al. Risk-aware scene sampling for dynamic assurance of autonomous systems
St-Aubin Driver behaviour and road safety analysis using computer vision and applications in roundabout safety
Ding et al. Markov chain-based platoon recognition model in mixed traffic with human-driven and connected and autonomous vehicles
Schwalb Analysis of safety of the intended use (sotif)
Gregoriades Towards a user-centred road safety management method based on road traffic simulation
CN116842340A (en) Pedestrian track prediction method and equipment based on ATT-GRU model
Wang et al. Runtime unknown unsafe scenarios identification for SOTIF of autonomous vehicles
Caballero et al. Some statistical challenges in automated driving systems
Jazayeri Predicting Vehicle Trajectories at Intersections Using Advanced Machine Learning Techniques
Siebke et al. Realistic Traffic for Safety-Relevant Studies: A Comprehensive Calibration and Validation Strategy Using Dream for Virtual Assessment of Advanced Driver Assistance Systems (ADAS) and Highly Automated Driving (HAD)

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: GM GLOBAL TECHNOLOGY OPERATIONS LLC, MICHIGAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BERGSTEDT, DYLAN T.;THOR, NANDAN;CHOE, JAEHOON;AND OTHERS;SIGNING DATES FROM 20171002 TO 20171113;REEL/FRAME:047019/0038

AS Assignment

Owner name: GM GLOBAL TECHNOLOGY OPERATIONS LLC, MICHIGAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PHILLIPS, MATTHEW E.;BERGSTEDT, DYLAN T.;THOR, NANDAN;AND OTHERS;SIGNING DATES FROM 20171002 TO 20181021;REEL/FRAME:047444/0461

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4