CN114787894A - Perceptual error model - Google Patents

Perceptual error model

Info

Publication number
CN114787894A
Authority
CN
China
Prior art keywords
data
vehicle
error
determining
scene
Prior art date
Legal status
Pending
Application number
CN202080084729.5A
Other languages
Chinese (zh)
Inventor
S·A·莫达拉瓦拉萨
G·巴格希克
A·S·克雷戈
A·G·德克斯
R·利亚索夫
J·W·V·菲尔宾
A·G·赖格
A·C·雷什卡
M·温默斯霍夫
Current Assignee
Zoox Inc
Original Assignee
Zoox Inc
Priority date
Filing date
Publication date
Priority claimed from US 16/708,019 (US 11,734,473 B2)
Application filed by Zoox Inc filed Critical Zoox Inc
Publication of CN114787894A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3664Environments for testing or debugging software
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3684Test management for test design, e.g. generating new test cases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00Simulators for teaching or training purposes
    • G09B9/02Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B9/04Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles
    • G09B9/042Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of land vehicles providing simulation in a real vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3696Methods or tools to render software testable

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)

Abstract

Techniques for determining an error model based on vehicle data and ground truth data are discussed herein. To determine whether a complex system (which may not be amenable to direct inspection) is capable of safe operation, various operating regimes (scenarios) may be identified based on operational data. To provide safe operation of such a system, an error model may be determined that provides probabilities associated with perception data, and the vehicle may determine a trajectory based on the probabilities of errors associated with the perception data.

Description

Perceptual error model
Cross Reference to Related Applications
This patent application claims priority to U.S. Utility Patent Application Serial No. 16/708,019, filed on December 9, 2019. Application Serial No. 16/708,019 is fully incorporated herein by reference.
Background
An autonomous vehicle may use an autonomous vehicle controller to guide the autonomous vehicle through an environment. For example, an autonomous vehicle controller may use planning methods, apparatuses, and systems to determine a driving path and guide the autonomous vehicle through an environment containing dynamic objects (e.g., vehicles, pedestrians, animals, etc.) and static objects (e.g., buildings, signs, stalled vehicles, etc.). However, to ensure the safety of occupants, it is important to verify the safety of the controller.
Drawings
The detailed description is described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference symbols in different drawings indicates similar or identical items or features.
FIG. 1 illustrates generating vehicle performance data associated with a vehicle controller based on a parameterized scene.
FIG. 2 illustrates a computing device that generates scene data based at least in part on log data generated by a vehicle, where the scene data illustrates one or more variations of a scene.
Fig. 3 illustrates generating error model data based at least in part on vehicle data and ground truth data.
FIG. 4 illustrates perturbing a simulation using error model data by providing at least one of an error or an uncertainty associated with the simulation environment.
Fig. 5 illustrates a computing device that generates perceptual error model data based at least in part on log data generated by a vehicle and ground truth data.
Fig. 6 illustrates a computing device that generates simulation data based at least in part on parameterized scene data and generates security metric data based at least in part on the simulation data.
FIG. 7 illustrates a block diagram of an example system for implementing the techniques described herein.
Fig. 8 shows a flowchart of an example process for determining a safety metric associated with a vehicle controller, according to an example of the present disclosure.
FIG. 9 illustrates a flow chart of an example process for determining a statistical model associated with a subsystem of an autonomous vehicle.
FIG. 10 illustrates a plurality of regions of an environment associated with error probabilities.
Fig. 11 shows vehicle data and an environment represented by the vehicle data and a difference between the vehicle data and the environment.
FIG. 12 shows a flow diagram of an example process for determining an error model and determining simulated data of a disturbance.
Detailed Description
The techniques described herein are directed to various aspects of determining performance metrics of a system. In at least some examples described herein, such performance metrics may be determined, for example, using simulation in conjunction with other performance metric determinations. Simulations may be used to validate software (e.g., vehicle controllers) executed on vehicles (e.g., autonomous vehicles) and to collect safety metrics to ensure that the software is able to safely control such vehicles in various scenarios. In additional or alternative examples, simulations may be used to learn about the constraints of an autonomous vehicle that uses an autonomous controller. For example, simulations may be used to learn the operating space of the autonomous vehicle (e.g., the envelope within which the autonomous controller effectively controls the autonomous vehicle) in view of surface conditions, environmental noise, faulty components, and so forth. Simulations may also be used to generate feedback for improving the operation and design of the autonomous vehicle. For example, in some examples, simulations may be used to determine the amount of redundancy needed in an autonomous controller, or how to modify the behavior of the autonomous controller based on what is learned through the simulations. Further, in additional or alternative examples, simulations may be used to inform the hardware design of the autonomous vehicle, such as optimizing the placement of sensors on the autonomous vehicle.
In creating a simulation environment for testing and verification, the environment may be specifically enumerated with various specific examples. Each instantiation of such an environment may be unique and defined. Manually enumerating all possible scenarios may require an excessive amount of time, and if not every possible scenario is constructed, various scenarios may go untested. Scene parameters may be used to parameterize the features and/or attributes of objects within a scene and to provide variations of the scene.
For example, one or more vehicles may traverse an environment and generate log data associated with the environment. The log data may include sensor data captured by one or more sensors of the vehicle, perception data indicative of objects identified by one or more on-board systems of the vehicle (or identified at a post-processing stage), prediction data indicative of the intent of objects (whether determined during or after recording), and/or status data indicative of diagnostic information, trajectory information, and other information generated by the vehicle. The vehicle may transmit the log data, via a network, to a database that stores the log data and/or to a computing device that analyzes the log data.
The computing device may determine, based on the log data, various scenes, frequencies of the various scenes, and regions of the environment associated with the various scenes. In some cases, the computing device may group similar scenes represented in the log data. For example, scenes may be grouped together based on environmental parameters (e.g., day, night, precipitation, vehicle position/speed, object position/speed, road segments, etc.) using, for example, k-means clustering and/or by evaluating weighted distances (e.g., Euclidean distances) between the parameters. Clustering similar scenes can reduce the amount of computing resources required to test an autonomous controller, by simulating the autonomous controller in unique scenes rather than in nearly identical scenes (which would produce redundant simulation data/results). As may be appreciated, an autonomous controller may be expected to perform similarly (and/or may have been shown to perform similarly) in similar scenarios.
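By way of illustration only, the following Python sketch shows one way similar log-derived scenes might be grouped with k-means over weighted environmental parameters, as described above; the feature encoding, the weights, and the use of scikit-learn are assumptions for illustration and not part of the described system.

```python
# Illustrative sketch: grouping similar log-derived scenes so that only one
# representative per cluster needs to be simulated. Feature names, weights,
# and the use of scikit-learn are assumptions, not the patented method.
import numpy as np
from sklearn.cluster import KMeans

# Each row encodes one scene extracted from log data:
# [is_night, precipitation_in, vehicle_speed_mps, object_speed_mps, object_distance_m]
scene_features = np.array([
    [0, 0.0, 15.0, 1.2, 30.0],
    [0, 0.0, 14.5, 1.1, 32.0],
    [1, 2.5, 10.0, 0.8, 20.0],
    [1, 2.4,  9.5, 0.9, 21.0],
])

# Weight features so that, e.g., lighting and precipitation dominate the
# (Euclidean) distance used for clustering.
weights = np.array([5.0, 2.0, 0.1, 1.0, 0.05])
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(scene_features * weights)

# Simulate one representative scene per cluster instead of every logged scene.
representatives = [scene_features[labels == k][0] for k in set(labels)]
print(labels, representatives)
```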
For example, the computing device may determine a rate at which pedestrians appear at a crosswalk based on the number of pedestrians represented in the log data. In some cases, the computing device may determine a probability of detecting a pedestrian at the crosswalk based on that rate and on a period of time over which the autonomous vehicle operates. Based on the log data, the computing device may determine a scene and identify, based on the scene, scene parameters that may be used in a simulation.
In some cases, the simulation may be used to test and verify the response of the autonomous vehicle controller to a defective (and/or faulty) sensor of the vehicle and/or to defective (and/or faulty) processing of the sensor data. In such an example, the computing device may be configured to introduce inconsistencies into the scene parameters of an object. For example, an error model may indicate an error and/or a percentage of error associated with a scene parameter. The scenario may incorporate the error and/or error percentage into the simulated scenario and simulate the response of the autonomous vehicle controller. Such errors may be represented by, but are not limited to, a look-up table determined based on a statistical aggregation using ground truth data, a function (e.g., an error based on an input parameter), or any other model that maps a parameter to a particular error. In at least some examples, such an error model can map a particular error to a probability/frequency of occurrence.
By way of example and not limitation, the error model may indicate that a scene parameter, such as a velocity associated with an object in the simulated environment, is associated with a percentage of error. For example, the object may travel at a speed of 10 meters per second in the simulated scene, and the error percentage may be 20%, resulting in a speed range between 8 meters per second and 12 meters per second. In some cases, a velocity range may be associated with a probability distribution that indicates that certain portions of the range have a higher probability of occurrence than other portions of the range (e.g., a probability of 15% at 8 and 12 meters per second, 30% at 9 and 11 meters per second, and 10% at 10 meters per second).
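By way of illustration only, the following Python sketch applies the error model from the example above by sampling a perturbed object speed from the stated discrete distribution; the use of NumPy and the variable names are illustrative assumptions.

```python
# Sketch: perturbing a simulated object's speed with an error model, using the
# discrete distribution from the example above (an illustrative assumption).
import numpy as np

nominal_speed_mps = 10.0
error_fraction = 0.20  # +/- 20% error range -> 8 to 12 m/s

candidate_speeds = np.array([8.0, 9.0, 10.0, 11.0, 12.0])
probabilities = np.array([0.15, 0.30, 0.10, 0.30, 0.15])  # sums to 1.0

rng = np.random.default_rng(seed=0)
perturbed_speed = rng.choice(candidate_speeds, p=probabilities)
print(f"simulating object at {perturbed_speed} m/s "
      f"(nominal {nominal_speed_mps} m/s, +/-{error_fraction:.0%})")
```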
Based on the error model and/or the scene, a parameterized scene may be generated. The parameterized scene may provide a set of variations of the scene. Thus, instantiating an autonomous vehicle controller in a parameterized scene and simulating the parameterized scene may effectively cover a wide variety of scenes without manually enumerating these variants. Further, based at least in part on executing the parameterized scenario, the simulation data may indicate how the autonomous vehicle controller responds (or will respond) to the parameterized scenario and determine a successful outcome or an unsuccessful outcome based at least in part on the simulation data.
Aggregating simulation data related to the parameterized scene may provide a security metric associated with the parameterized scene. For example, the simulation data may indicate a success rate and/or a failure rate of the autonomous vehicle controller and a parameterized scenario. In some cases, reaching or exceeding the success rate may indicate successful verification of the autonomous vehicle controller, which may then be downloaded by (or otherwise communicated to) the vehicle for further vehicle control and operation.
For example, a parameterized scene may be associated with a result. The simulation data may indicate whether the response of the autonomous vehicle controller is consistent or inconsistent with the result. By way of example and not limitation, a parameterized scene may represent a simulated environment including a vehicle controlled by the autonomous vehicle controller that travels at a speed and performs a stopping action in front of an object ahead of the vehicle. The speed may be associated with a scene parameter indicating a range of vehicle speeds. The parameterized scene may be simulated based at least in part on the speed range, and simulation data indicating the distance between the vehicle and the object when the vehicle completes the stopping action may be generated. The parameterized scene may be associated with a result indicating that the distance between the vehicle and the object meets or exceeds a distance threshold. Based on the simulation data and the scene parameters, the success rate may represent a comparison of the number of times the distance between the vehicle and the object meets or exceeds the distance threshold when the vehicle completes the stopping action with the total number of times the vehicle completes the stopping action.
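By way of illustration only, the following Python sketch shows how simulation results for the stopping scene above might be aggregated into a success-rate safety metric; the data structure, field names, and values are hypothetical and not part of the described system.

```python
# Sketch: aggregating simulation results for the stopping scene into a
# success-rate safety metric. Field names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class SimulationResult:
    vehicle_speed_mps: float
    stop_distance_to_object_m: float  # gap remaining when the stop completes

DISTANCE_THRESHOLD_M = 5.0

results = [
    SimulationResult(8.0, 9.2),
    SimulationResult(10.0, 6.1),
    SimulationResult(12.0, 4.3),   # failure: stopped closer than the threshold
]

successes = sum(r.stop_distance_to_object_m >= DISTANCE_THRESHOLD_M for r in results)
success_rate = successes / len(results)
print(f"success rate: {success_rate:.0%} over {len(results)} variations")
```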
The techniques described herein provide various computational efficiencies. For example, using the techniques described herein, a computing device requires fewer computing resources and may generate multiple simulated scenes faster than is possible with conventional techniques. Conventional techniques are not scalable. For example, generating a unique set of simulated environments (as many as are needed for training, testing, and/or verifying a system, e.g., one or more components of an artificial intelligence stack, on an autonomous vehicle before deploying the autonomous vehicle in a corresponding new real environment) may take an excessive amount of time, thereby limiting the ability to train, test, and/or verify such a system before it enters a real scenario and/or environment. The techniques described herein are unconventional in that they utilize sensor data collected from a real environment and supplement that data with additional data to generate a substantially accurate simulated environment (e.g., relative to the corresponding real environment) more efficiently than conventional techniques allow. Moreover, by varying different aspects of a scene, the techniques described herein can generate many simulation scenarios in less time and with fewer computing resources than conventional techniques.
Further, the techniques described herein are directed to improvements in safety. That is, the simulated environments produced by the generation techniques described herein may be used to test, train, and validate systems on autonomous vehicles to ensure that such systems are able to operate the autonomous vehicles safely when deployed in real environments. For example, they may be used to test, train, and validate a planning system and/or a prediction system of an autonomous vehicle controller, which the autonomous vehicle may use to navigate along a trajectory in a real environment. Thus, the training, testing, and verification enabled by the techniques described herein may provide an opportunity to ensure that an autonomous vehicle can operate safely in a real-world environment, improving safety and collision-free navigation.
FIG. 1 shows an example 100 of generating vehicle performance data associated with a vehicle controller based on a parameterized scene. To generate a scene, input data 102 may be used. The input data 102 may include vehicle data 104 and/or additional contextual data 106. The vehicle data 104 may include log data captured by a vehicle traveling through an environment. As described above, the log data may be used to identify scenarios for simulating the autonomous controller. For purposes of illustration, the vehicle may be an autonomous vehicle configured to operate according to the Level 5 classification issued by the U.S. National Highway Traffic Safety Administration, which describes a vehicle capable of performing all safety-critical functions for the entire trip, with the driver (or occupant) not expected to control the vehicle at any time. In such an example, the vehicle may be unoccupied, as it may be configured to control all functions from start to stop, including all parking functions. This is merely one example, and the systems and methods described herein may be incorporated into any ground-borne, airborne, or waterborne vehicle, ranging from vehicles that need to be manually controlled by a driver at all times to those that are partially or fully autonomously controlled.
The vehicle may include a computing device that includes a perception engine and/or a planner and that performs operations such as detecting, identifying, segmenting, classifying, and/or tracking objects from sensor data collected from the environment. For example, objects such as pedestrians, bicycles/bicyclists, motorcycles/motorcyclists, buses, streetcars, trucks, animals, and/or the like may be present in the environment.
As the vehicle traverses the environment, the sensors may capture sensor data related to the environment. For example, some sensor data may be associated with an object (e.g., a vehicle, a rider, and/or a pedestrian). In some cases, the sensor data may be associated with other objects, including but not limited to buildings, pavements, signs, obstacles, and the like. Thus, in some cases, sensor data may be associated with dynamic objects and/or static objects. As described above, a dynamic object may be an object associated with motion (e.g., a vehicle, a motorcycle, a rider, a pedestrian, an animal, etc.) or an object capable of moving within an environment (e.g., a parked vehicle, a standing pedestrian, etc.). As described above, a static object may be an object associated with an environment, such as a building/structure, a road surface, a road marking, a sign, an obstacle, a tree, a sidewalk, and so forth. In some cases, the vehicle computing device may determine information about objects in the environment, such as bounding boxes, classifications, segmentation information, and so forth.
The vehicle computing device may use the sensor data to generate a trajectory of the vehicle. In some cases, the vehicle computing device may also determine pose data associated with the vehicle location. For example, the vehicle computing device may use the sensor data to determine position data, coordinate data, and/or orientation data of the vehicle in the environment. In some cases, the pose data may include x-y-z coordinates and/or may include pitch, roll, and yaw data associated with the vehicle.
The vehicle computing device may generate vehicle data 104 that may include the data described above. For example, the vehicle data 104 may include sensor data, perception data, planning data, vehicle state data, speed data, intent data, and/or other data generated by the vehicle computing device. In some cases, the sensor data may include data captured by sensors such as time-of-flight sensors, location sensors (e.g., GPS, compass, etc.), inertial sensors (e.g., inertial measurement units (IMUs), accelerometers, magnetometers, gyroscopes, etc.), lidar sensors, radar sensors, sonar sensors, infrared sensors, cameras (e.g., RGB, IR, intensity, depth, etc.), microphone sensors, environmental sensors (e.g., temperature sensors, humidity sensors, light sensors, pressure sensors, etc.), ultrasonic transducers, wheel encoders, and the like. The sensor data may be data captured by such sensors, such as time-of-flight data, location data, lidar data, radar data, sonar data, image data, audio data, and so forth. Such log data may also include intermediate outputs of any one or more systems or subsystems of the vehicle, including, but not limited to, messages indicating object detections, object trajectories, predictions of future object locations, multiple trajectories generated in response to such detections, control signals passed to the one or more systems or subsystems used to execute commands, and so forth. In some cases, the vehicle data 104 may include temporal data associated with the other data generated by the vehicle computing device.
In some cases, the input data 102 may be used to generate a scene. The input data 102 may include the vehicle data 104 and/or the additional contextual data 106. By way of example and not limitation, the additional contextual data 106 may include data such as incident reports from third-party sources. The third-party sources may include law enforcement agencies, departments of motor vehicles, and/or safety administrations that may issue and/or store activity and/or incident reports. For example, a report may include a description of the type of activity (e.g., a traffic hazard, such as debris on the road, local flooding, etc.), a location, and/or other details of the activity. By way of example and not limitation, a report may describe that a driver, while operating a vehicle at a speed of 15 meters per second, struck a fallen branch on the road. The report may be used to generate a similar scenario that may be used for simulation.
In some cases, the additional contextual data 106 may include captured sensor data (e.g., image data). By way of example and not limitation, a driver of a vehicle may use a camera to capture image data as the driver operates the vehicle. In some cases, the image data may capture activity such as an event. By way of example and not limitation, a driver may use a dashboard camera (e.g., a camera mounted on the dashboard inside the vehicle) to capture image data as the driver operates the vehicle. As the driver operates the vehicle, an animal may run across the road and the driver may immediately brake to slow the vehicle. The dashboard camera may capture image data of the animal running across the road and of the vehicle decelerating. The image data may be used to generate a scene in which an animal runs across the road. As described above, probabilities associated with scenarios may be determined, and scenarios may be identified for simulation based on their probabilities meeting or exceeding a probability threshold. By way of example and not limitation, a probability threshold of 0.001% may be used; when the likelihood of encountering a scene is less than 0.001%, scenes with higher probabilities may be prioritized for simulation, and safety metrics associated with those higher-probability scenes may be determined.
Input data 102, such as the vehicle data 104 and/or the additional contextual data 106, may be used by a scene editor component 108 to generate an initial scene 110. For example, the input data 102 can be input to the scene editor component 108, which can generate a synthetic environment representing at least a portion of the input data 102 in the synthetic environment. Examples of generating scenes, such as the initial scene 110, and of data generated by a vehicle that may be included in the vehicle data 104, may be found, for example, in U.S. patent application No. 16/392094 entitled "Scene Editor and Simulator," filed on April 23, 2019, which is incorporated herein by reference in its entirety.
The scene editor component 108 can be configured to scan the input data 102 to identify one or more scenes represented in the input data 102. By way of example and not limitation, the scene editor component 108 can determine that a portion of the input data 102 represents a pedestrian crossing a street without the right of way (e.g., not at a crosswalk, or at an intersection without a walk indication, etc.). The scene editor component 108 can identify this as a scene and can tag (and/or classify) the scene as, for example, a jaywalking scene. For example, the scene editor component 108 can generate the initial scene 110 using rules that define actions. By way of example and not limitation, a rule may define a pedestrian crossing a street in an area not associated with a crosswalk as a jaywalker. In some cases, the scene editor component 108 can receive tag data from a user of the scene editor component 108 to associate portions of the input data 102 with tags to generate the initial scene 110.
In some cases, the scene editor component 108 can scan other portions of the input data 102, identify similar scenes, and tag those similar scenes with the same jaywalking label. In some cases, the scene editor component 108 can identify scenes that do not correspond to (or are excluded from) existing tags and generate new tags for those scenes. In some cases, the scene editor component 108 can generate a scene library and store the scene library in a database within the scene editor component 108. By way of example and not limitation, the scene library may include crosswalk scenes, merge scenes, lane-change scenes, and the like.
In at least some examples, such an initial scenario 110 can be specified manually. For example, one or more users may specify certain scenarios to be tested to ensure that the vehicle is able to operate safely while performing such scenarios, even though the scenarios have never (or rarely) been encountered before.
The parameter component 112 can determine scene parameters associated with the initial scene 110 identified by the scene editor component 108. By way of example and not limitation, the parameter component 112 may analyze a jaywalking scene and determine the scene parameters associated with the jaywalking scene, including a location of the pedestrian, a pose of the pedestrian, a size of the pedestrian, a speed of the pedestrian, a trajectory of the pedestrian, a distance between the vehicle and the pedestrian, a speed of the vehicle, a road width, and the like.
In some cases, parameter component 112 may determine a range or set of values associated with a scene parameter. For example, the parameter component 112 may determine a classification associated with an object (e.g., a pedestrian) represented in the initial scene 110 and determine other objects in the input data 102 having the same classification. The parameter component 112 may then determine a range of values associated with the scene parameters represented by the initial scene 110. By way of example and not limitation, the scene parameter may indicate that the pedestrian may have a speed in the range of 0.5-1.5 meters per second.
In some cases, parameter component 112 may determine a probability associated with a scene parameter. By way of example and not limitation, the parameter component 112 can associate a probability distribution, such as a Gaussian distribution (also referred to as a normal distribution), with a scene parameter. In some cases, parameter component 112 may determine probabilities associated with the scene parameters based on input data 102. As described above, the parameter component 112 can determine a classification associated with an object represented in the input data 102 and determine other objects of the same classification in the input data 102 and/or other log data. Parameter component 112 can then determine a probability distribution of scene parameters associated with the object represented by input data 102 and/or other log data.
By way of example and not limitation, the parameter component 112 may determine that 30% of pedestrians walk at speeds below 0.3 meters per second, 30% walk at speeds above 1.2 meters per second, and 40% walk at speeds of 0.3 to 1.2 meters per second. The parameter component 112 may use this distribution as the probability that a pedestrian in a jaywalking scene will walk at a particular speed. As an additional example and not by way of limitation, the parameter component 112 may determine a jaywalking scenario probability that may indicate, for example, that a vehicle traversing the environment will encounter a jaywalker 5% of the time while traversing the environment. In some cases, during simulation of the autonomous vehicle controller, the scene probability may be used to include the scene at a rate associated with that probability.
In some cases, the parameter component 112 may receive supplemental data 114 to incorporate into the distribution. By way of example and not limitation, the parameter component 112 may determine a scene parameter indicating that a pedestrian may be at a distance in the range of 30-60 meters from the vehicle while the vehicle travels at a speed of 15 meters per second, which alternatively represents a time-to-collision of 2-4 seconds. The supplemental data 114 (e.g., regulations or guidelines) may indicate that the vehicle must handle a scenario with a time-to-collision of 1.5 seconds, which may serve as a lower bound (also referred to as a parameter threshold). The parameter component 112 can incorporate the supplemental data 114 and determine the scene parameter as having a time-to-collision of 1.5-4 seconds (although any time range can be specified). In some cases, the parameter component 112 may use the probability distributions discussed above to determine (using interpolation and/or extrapolation techniques) probabilities associated with the supplemental data 114.
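By way of illustration only, the following Python sketch shows one way a log-derived time-to-collision range and its probabilities might be widened to include a regulatory lower bound and extrapolated, as described above; the values, the linear extrapolation, and the renormalization are illustrative assumptions.

```python
# Sketch: widening a log-derived parameter range with a regulatory lower bound
# (the "parameter threshold") and extrapolating a probability for the added
# value from the log-derived distribution. Values and names are illustrative.
import numpy as np

# Log-derived time-to-collision (TTC) values and their observed probabilities.
ttc_s = np.array([2.0, 3.0, 4.0])
probability = np.array([0.2, 0.5, 0.3])

regulatory_min_ttc_s = 1.5  # guideline: scenario must be handled at 1.5 s TTC
extended_ttc_s = np.concatenate(([regulatory_min_ttc_s], ttc_s))

# Linearly extrapolate a probability for the added TTC value (one possible
# choice), then renormalize the distribution.
slope = (probability[1] - probability[0]) / (ttc_s[1] - ttc_s[0])
extrapolated_p = max(probability[0] + slope * (regulatory_min_ttc_s - ttc_s[0]), 0.0)
extended_probability = np.concatenate(([extrapolated_p], probability))
extended_probability /= extended_probability.sum()

print(list(zip(extended_ttc_s, np.round(extended_probability, 3))))
```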
The error model component 116 may determine an error model that may indicate an error associated with a scene parameter. For example, a perception error model may produce perception errors associated with perception parameters of a simulated object, a prediction error model may produce prediction errors associated with prediction parameters of a simulated object, and so on. In some cases, as the vehicle traverses the environment, the sensors may capture erroneous sensor data and/or the vehicle's computing device may misprocess the sensor data, which may result in perception errors. Testing and modeling perception errors may help indicate the operating margins of the vehicle as they relate to potential perception errors. For example, a scene parameter such as a perception parameter may indicate a size of an object or a range of positions of the object in the environment. The error model component 116 may use a perception error model to indicate potential errors associated with the size of the object, which may result in the perception data of the vehicle indicating that the object is larger or smaller than the actual object in the environment.
For example, the error model component 116 may determine a perception error model by comparing the input data 102 with ground truth data. In some cases, the ground truth data may be manually labeled and/or determined from other, validated machine learning components. For example, the input data 102 may include sensor data and/or perception data generated by the vehicle. The error model component 116 can compare the input data 102 with ground truth data that indicates the actual parameters of objects in the environment. By comparing the input data 102 with the ground truth data, the error model component 116 can determine the perception error. By way of example and not limitation, the input data 102 may indicate a pedestrian height of 1.8 meters, while the ground truth data indicates a pedestrian height of 1.75 meters; thus, the perception error model may indicate a perception error of approximately 3% (e.g., ((1.8 - 1.75) / 1.8) × 100 ≈ 2.8).
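By way of illustration only, the following Python sketch compares logged perception outputs against ground truth to accumulate per-classification relative errors of the kind described above; the data values and field names are hypothetical.

```python
# Sketch: deriving per-classification perception-error distributions by
# comparing logged perception outputs against ground truth. Values are
# illustrative; the relative-error formula mirrors the example above.
from collections import defaultdict

# (classification, perceived_value, ground_truth_value) for one attribute,
# e.g., object height in meters.
observations = [
    ("pedestrian", 1.80, 1.75),
    ("pedestrian", 1.62, 1.70),
    ("vehicle",    4.70, 4.55),
]

errors_by_class = defaultdict(list)
for cls, perceived, truth in observations:
    relative_error = (perceived - truth) / perceived  # ~0.028 for the first row
    errors_by_class[cls].append(relative_error)

# The per-class error lists can then be binned into probability distributions
# (error distributions) used to perturb simulated objects of each class.
for cls, errs in errors_by_class.items():
    print(cls, [f"{e:+.1%}" for e in errs])
```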
In some cases, the error model component 116 may determine a classification associated with an object represented in the input data 102 and determine other objects of the same classification in the input data 102. The error model component 116 can then determine a probability distribution (also referred to as an error distribution) associated with the error range for that classification and associate the probability distribution with the corresponding object within the initial scene 110. By way of example and not limitation, the error model component 116 may determine that objects with a pedestrian classification have a perception error of 4%-6% and that objects with a vehicle classification have a perception error of 3%-5%. In some cases, the error model component 116 may determine a probability distribution indicating that objects, e.g., those larger than a threshold size, are more or less likely to have errors (e.g., in classification).
In some cases, the parameter component 112 may use the region data 118 to determine a set of environmental regions that are compatible with the scene and the scene parameters. By way of example and not limitation, the scenario may indicate that the vehicle has a speed of 15 meters per second, that the pedestrian has a speed of 1.5 meters per second, and that the distance between the vehicle and the pedestrian is 30 meters. The parameter component 112 can determine, based on the region data 118, regions of the environment that conform to the scene. By way of example and not limitation, the parameter component 112 may exclude school zones because the vehicle speed in the scene may exceed the speed limit associated with a school zone, and thus the scene would not be valid in a school zone. However, such a vehicle speed (e.g., 15 meters per second) may be reasonable on a county road adjacent to farmland, and thus such a region may be included in the set of regions.
In some cases, the parameter component 112 may store region data 118 that includes segments of the drivable area of the environment. Techniques for identifying segments and similar segments of a drivable surface, and for segment classification and/or stereotyping, can be found, for example, in U.S. patent application No. 16/370696 entitled "Extension of Autonomous Driving Functionality to New Regions," filed on March 19, 2019, and U.S. patent application No. 16/376842 entitled "Simulating Autonomous Driving Using Map Data and Driving Data," filed on April 5, 2019, which are incorporated herein by reference in their entirety.
For example, the region data 118 of the environment may be parsed into segments, and similar segments may be identified. In some cases, a segment may include a junction segment, such as an intersection, a junction, or the like, or a connecting road segment having a length and/or a width, such as a road between intersections. By way of example and not limitation, all two-lane road segments with speed limits within 10 miles per hour of one another may be associated with the same stereotype. In some cases, data may be associated with each individual segment. For example, a junction segment may include a junction type, e.g., a merge, a "T", a roundabout, etc.; a number of roads meeting at the junction; the relative positions of those roads, e.g., the angles between the roads meeting at the junction; information about traffic control signals at the junction; and/or other features. The data associated with a connecting road segment may include a number of lanes, a width of those lanes, a direction of travel in each lane, an identification of parking lanes, a speed limit on the road segment, and/or other features.
In some examples, the segments of the drivable surface may be grouped according to segment classification or segment typing. By way of example and not limitation, some or all of the junction segments that meet a certain metric or attribute range may be grouped together (e.g., using k-means, by evaluating weighted distances (e.g., Euclidean) between segment parameters, or by otherwise clustering such segments based on the segment parameters).
Scenarios involving the same or similar stereotypes may be used to verify the functionality of the autonomous vehicle controller. For example, the autonomous vehicle may be expected to perform (and/or may have been shown to perform) the same way in the same or similar stereotypes. In some examples, the use of stereotypes may reduce the number of comparisons to be made. For example, by identifying similar regions, the number of simulated scenes needed to provide useful information is reduced. The techniques described herein may reduce computational complexity, memory requirements, and processing time by selecting the particular scenarios that provide useful information for verification and testing.
The parametric scene component 120 can generate a parameterized scene 122 using data determined by the parameter component 112 (e.g., the initial scene 110, scene parameters, region sets, and/or error model data). For example, the initial scene 110 may indicate a scene such as a lane-change scene, a right-turn scene, a left-turn scene, an emergency-stop scene, and the like. The scene parameters may indicate a speed associated with a vehicle controlled by the autonomous vehicle controller, a pose of the vehicle, a distance between the vehicle and an object, and the like. In some cases, the scene parameters may indicate an object, a location associated with the object, a velocity associated with the object, and/or the like. Further, an error model (e.g., a perception error model, a prediction error model, etc.) may indicate an error associated with a scene parameter and provide a range of values and/or probabilities associated with the scene parameter. By way of example and not limitation, a scene parameter such as vehicle speed may be associated with a speed range (e.g., 8-12 meters per second). As described above, the speed range may be associated with a probability distribution that indicates the probability of the speed taking particular values within the range.
The set of regions may indicate a portion of an environment that may be used to place objects in the simulated environment. For example, the initial scene 110 may indicate a scene that includes a two-way multi-lane driving surface associated with a speed limit of 35 miles per hour. Based on the initial scenario 110, the set of regions may exclude regions that do not include a bi-directional, multi-lane driving surface associated with a speed limit of 35 miles per hour, such as a parking lot. The parameterized scene 122 may be used to cover changes provided by scene parameters, error models, region data 118, and the like.
By way of example and not limitation, the scene parameters may include the vehicle traversing the environment at 10, 11, or 12 meters per second (or any speed) as it approaches the intersection. The set of regions may include uncontrolled intersections, intersections with four-way stops, and intersections with traffic lights. Further, the perception error model may indicate a perception error of 1.34%, which may be provided by a perception metric determined for the perception system under test. Thus, the parameterized scene 122 may allow for a total of 9 different scenes by varying the scene parameters and regions (e.g., 3 speeds × 3 regions = 9 scenes/permutations). Further, when the simulation component 124 simulates the parameterized scene, the simulation component 124 can use the perception error model to introduce perception errors into the perception data determined by the vehicle. As can be appreciated, this is merely one example, and the parameterized scene 122 may include more or fewer permutations as well as different types of scene parameters, regions, and/or perception errors.
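By way of illustration only, the following Python sketch enumerates the nine permutations described above (three speeds across three region stereotypes), each tagged with the perception error to be injected; the dictionary structure and names are illustrative assumptions.

```python
# Sketch: expanding a parameterized scene into concrete variations
# (3 speeds x 3 region stereotypes = 9 permutations), each tagged with the
# perception error to inject. Values mirror the example above; names are
# hypothetical.
import itertools

speeds_mps = [10.0, 11.0, 12.0]
regions = ["uncontrolled_intersection", "four_way_stop", "signalized_intersection"]
perception_error = 0.0134  # 1.34% from the perception error model

variations = [
    {"speed_mps": s, "region": r, "perception_error": perception_error}
    for s, r in itertools.product(speeds_mps, regions)
]
assert len(variations) == 9
```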
The simulation component 124 can execute the parameterized scene 122 as a set of simulation instructions and generate simulation data 126. For example, the simulation component 124 can instantiate a vehicle controller in a simulation scenario. In some cases, the simulation component 124 can execute multiple simulation scenarios simultaneously and/or in parallel. In addition, the simulation component 124 can determine the results of the parameterized scene 122. For example, the simulation component 124 can execute variants of the parameterized scene 122 for simulation for testing and verification. The simulation component 124 may generate simulation data 126 indicating how the autonomous vehicle controller is performing (e.g., responding), and may compare the simulation data 126 to predetermined results and/or determine whether any predetermined rules/assertions are violated/triggered.
In some cases, the variants used for simulation may be selected based on spacing across the range of a scene parameter. By way of example and not limitation, a scene parameter may be associated with a speed of the vehicle. Further, the scene parameter may be associated with a range of values for that speed. The variants used for the simulation may be selected at spaced intervals to increase coverage of the range of values (e.g., selecting the 25th percentile, the 50th percentile, the 75th percentile, etc.). In some cases, the variants may be selected randomly and/or may be selected randomly within a standard deviation of the range of values.
In some cases, the predetermined rules/assertions may be based on the parameterized scenario 122 (e.g., traffic rules for crosswalks may be enabled based on a crosswalk scenario, or traffic rules for crossing lane markings may be disabled for a stalled-vehicle scenario). In some cases, the simulation component 124 can dynamically enable and disable rules/assertions as the simulation progresses. For example, when a simulated object approaches a school zone, rules/assertions related to the school zone may be enabled, and they may be disabled when the simulated object leaves the school zone. In some cases, the rules/assertions may include comfort metrics that relate to, for example, how quickly an object in a given simulated scene may accelerate. In at least some examples, the rules can include, for example, adherence to rules of the road, leaving a safety buffer between objects, and so forth.
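By way of illustration only, the following Python sketch shows one way rules/assertions might be dynamically enabled and disabled based on the simulation state, using the school-zone example above; the rule structure, the speed threshold, and the state fields are illustrative assumptions.

```python
# Sketch: rules that are only evaluated while they are active for the current
# simulation state, e.g., a school-zone speed rule. Structure is illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    is_active: Callable[[dict], bool]    # depends on the current simulation state
    is_violated: Callable[[dict], bool]

school_zone_rule = Rule(
    name="school_zone_speed",
    is_active=lambda state: state["in_school_zone"],
    is_violated=lambda state: state["speed_mps"] > 11.0,  # assumed ~25 mph limit
)

def check_rules(rules, state):
    """Return the names of active rules that are violated in this state."""
    return [r.name for r in rules if r.is_active(state) and r.is_violated(state)]

print(check_rules([school_zone_rule], {"in_school_zone": True, "speed_mps": 13.0}))
```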
The simulation component 124 can determine that the autonomous vehicle controller succeeded based at least in part on determining that the autonomous vehicle controller's execution is consistent with a predetermined result (i.e., the autonomous vehicle controller did everything it should have done) and/or determining that no rule was violated and no assertion was triggered. Based at least in part on determining that the performance of the autonomous vehicle controller is inconsistent with the predetermined outcome (i.e., the autonomous vehicle controller did not do everything it should have done) and/or determining that a rule was violated or an assertion was triggered, the simulation component 124 may determine that the autonomous vehicle controller failed. Thus, based at least in part on executing the parameterized scenario 122, the simulation data 126 may indicate how the autonomous vehicle controller responds to each variant of the parameterized scenario 122, as described above, and a successful outcome or an unsuccessful outcome may be determined based at least in part on the simulation data 126.
The analysis component 128 can be configured to determine a degree of success or failure. By way of example and not limitation, a rule may indicate that a vehicle controlled by the autonomous vehicle controller must stop within a threshold distance of an object. The simulation data 126 may indicate that, in a first variation of the parameterized scene 122, the simulated vehicle stopped more than 5 meters before reaching the threshold distance. In a second variation of the parameterized scene 122, the simulation data 126 may indicate that the simulated vehicle stopped more than 10 meters before reaching the threshold distance. The analysis component 128 can indicate that the simulated vehicle performed more successfully in the second variation than in the first variation. For example, the analysis component 128 can determine an ordered list (e.g., ordered according to relative levels of success) that includes the simulated vehicle and the associated variants of the parameterized scene 122. These variations may also be used to determine the limits of the various components of the system being modeled.
The analysis component 128 can determine additional variants of the parameterized scene 122 based on the simulation data 126. For example, the simulation data 126 output by the simulation component 124 may indicate the variants of the parameterized scenario 122 associated with success or failure (which may be expressed as a continuum of likelihoods). The analysis component 128 can determine additional variants based on the variants associated with a failure. By way of example and not limitation, a variant of the parameterized scene 122 associated with a failure may represent the vehicle traveling on the driving surface at a speed of 15 meters per second and an animal crossing the driving surface in front of the vehicle at a distance of 20 meters. The analysis component 128 can determine additional variants of the scene to determine additional simulation data 126 for analysis. By way of example and not limitation, the analysis component 128 can determine additional variants that include the vehicle traveling at 10 meters per second, 12.5 meters per second, 17.5 meters per second, 20 meters per second, and so forth. In addition, the analysis component 128 can determine additional variants that include the animal crossing the driving surface at distances of 15 meters, 17.5 meters, 22.5 meters, 25 meters, and so forth. The additional variants may be input into the simulation component 124 to generate additional simulation data. Such additional variants may be determined, for example, based on perturbations of the scene parameters of a scene operating in the simulation.
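By way of illustration only, the following Python sketch generates additional variants around a failing variant by perturbing its scene parameters, reproducing the speeds and distances in the example above; the dictionary structure and step sizes are illustrative assumptions.

```python
# Sketch: generating additional variations around a failing variation by
# perturbing its scene parameters (vehicle speed and animal crossing distance).
failing_variant = {"vehicle_speed_mps": 15.0, "crossing_distance_m": 20.0}

speed_perturbations = [-5.0, -2.5, 2.5, 5.0]
distance_perturbations = [-5.0, -2.5, 2.5, 5.0]

additional_variants = (
    [{**failing_variant,
      "vehicle_speed_mps": failing_variant["vehicle_speed_mps"] + d}
     for d in speed_perturbations] +
    [{**failing_variant,
      "crossing_distance_m": failing_variant["crossing_distance_m"] + d}
     for d in distance_perturbations]
)
# Yields speeds of 10, 12.5, 17.5, and 20 m/s and distances of 15, 17.5, 22.5,
# and 25 m, matching the example above; these feed back into the simulation.
```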
In some cases, the analysis component 128 may determine additional variants of the scene by disabling scene parameters. For example, a parameterized scene may include a first scene parameter associated with a velocity of an object and a second scene parameter associated with a position of the object. The parameterized scene 122 may include a first range of values associated with the velocity and a second range of values associated with the position. In some cases, after simulating the parameterized scene 122, the simulation data 126 may indicate that some variants of the parameterized scene 122 resulted in successful outcomes and some variants resulted in failed outcomes. The analysis component 128 can then determine to disable the first scene parameter (e.g., set a fixed value associated with the first scene parameter) and vary the parameterized scene 122 based on the second scene parameter. By disabling one of the scene parameters, the analysis component 128 may determine whether a scene parameter and/or a value of the scene parameter is associated with a successful outcome or a failed outcome. Such parameters may be disabled based on a likelihood of failure, randomly, or otherwise. By way of example and not limitation, simulation data 126 resulting from disabling all of the scene parameters may indicate that a problem exists with a planning component of the autonomous vehicle.
In some cases, the analysis component 128 can be used to perform a sensitivity analysis. For example, the analysis component 128 can disable a scene parameter and, based on the simulation data 126 generated by the simulation component 124, determine how disabling the scene parameter affects the simulation data 126 (e.g., increases the success rate, decreases the success rate, or has little effect on the success rate). In some cases, the analysis component 128 may disable the scene parameters individually to determine how disabling each scene parameter affects the simulation data 126. The analysis component 128 can collect statistical data that indicates how the various scene parameters affect the simulation data 126 over the course of many simulations. In some cases, the analysis component 128 may be configured to disable a set of scene parameters (e.g., disable a nighttime environment parameter and disable a wet-conditions environment parameter). As described above, the analysis component may collect statistics indicating how the set of scene parameters affects the simulation data 126. The statistical data may be used to determine whether a scene parameter increases or decreases the likelihood of a successful simulation outcome, and may be used to identify a subsystem of the autonomous vehicle associated with a scene parameter that increases or decreases the success rate in the simulation data 126.
In some cases, the analysis component 128 can adjust the magnitude by which a scene parameter is varied. By way of example and not limitation, a scene parameter may indicate a wet environmental condition (e.g., a rain condition). The scene parameter may be adjusted over a range (e.g., one-quarter inch of rainfall, one inch of rainfall, etc.). The analysis component 128 may adjust the magnitude of the scene parameter and perform a sensitivity analysis based on that magnitude to determine a threshold associated with the scene parameter that may result in a successful or unsuccessful result of the simulation. In some cases, the threshold may be determined using a binary search algorithm, a particle filtering algorithm, and/or a Monte Carlo method, although other suitable algorithms are also contemplated.
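By way of illustration only, the following Python sketch uses a binary search to estimate the parameter magnitude (here, rainfall depth) at which the simulation outcome flips from success to failure; the simulate() callback is hypothetical and is assumed to be monotonic in the parameter.

```python
# Sketch: locating the parameter magnitude at which the outcome flips from
# success to failure using a binary search over, e.g., rainfall depth.
def find_failure_threshold(simulate, low, high, tolerance=0.01):
    """simulate(x) returns True on success; assumes success at `low` and
    failure at `high`, with a single transition in between."""
    while high - low > tolerance:
        mid = (low + high) / 2.0
        if simulate(mid):
            low = mid      # still succeeds: threshold is above mid
        else:
            high = mid     # fails: threshold is at or below mid
    return (low + high) / 2.0

# Example with a stand-in simulator that fails above 0.6 inches of rainfall.
threshold = find_failure_threshold(lambda rain: rain <= 0.6, low=0.0, high=2.0)
print(f"estimated failure threshold: {threshold:.2f} in of rainfall")
```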
The vehicle performance component 130 may determine the vehicle performance data 132 based on the simulation data 126 (and/or based on additional simulation data from additional variants determined by the analysis component 128) and a fault type. In some cases, the vehicle performance data 132 may indicate the performance of the vehicle in the environment. By way of example and not limitation, the vehicle performance data 132 may indicate that a vehicle traveling at a speed of 15 meters per second has a stopping distance of 15 meters. In some cases, the vehicle performance data may indicate a safety metric. By way of example and not limitation, the vehicle performance data 132 may indicate an event (e.g., a fault) and a cause of the event. In at least some examples, such indications may be binary (failure or no failure), coarse (failure levels, e.g., "critical," "non-critical," and "pass"), or continuous (e.g., representing a probability of failure), although any other indication is contemplated. For example, data 134(1) may indicate a safety level for event type 1 and cause type 1, and similarly for data 134(2)-134(4). In some cases, cause type 1 and cause type 2 may indicate a fault, such as a vehicle fault or an object (e.g., a cyclist) fault. The vehicle performance data 132 may then indicate a safety metric associated with the parameterized scene 122. In some cases, the vehicle performance component 130 may use a target metric and compare the vehicle performance data 132 with the target metric to determine whether the safety metric meets or exceeds the target metric. In some cases, the target metric may be based on standards and/or regulations associated with autonomous vehicles.
In some cases, the vehicle performance data 132 may be input into the filter component 136 to determine filtered data 138 based on the vehicle performance data 132. For example, the filter component 136 can be employed to determine filtered data 138 that identifies regions that have not reached a coverage threshold. By way of example and not limitation, the initial scenario 110 may indicate a two-way, multi-lane driving surface associated with a speed limit of 35 miles per hour, together with the region data 118. Based on the initial scene 110 and the region data 118, the parameter component 112 may identify five regions that may be used to simulate the environment of the initial scene 110. After simulating the parameterized scene 122, the vehicle performance data 132 may indicate that the simulation data 126 is associated with three of the five regions. For example, the simulation component 124 can simulate a scene and generate simulation data 126 based on executing the scene in three of the five regions identified by the parameter component 112. The filter component 136 can determine, based on one or more filters, filtered data 138 indicating that the remaining two of the five regions do not satisfy a coverage threshold (e.g., a minimum number of simulations associated with a region).
In some cases, the filter component 136 may determine filtered data 138 indicative of the occurrence of an event based on the vehicle performance data 132. By way of example and not limitation, the simulation data 126 may include the occurrence of an event, such as an emergency stop, a tire leak, an animal crossing a driving surface, and the like. The filter component 136 can determine filtered data 138 indicative of the occurrence of an emergency stop based on one or more filters. Further, the filtered data 138 may include portions of the simulation data 126 associated with the occurrence of an emergency stop, such as a stopping distance associated with the emergency stop.
Fig. 2 shows an example 200 of a vehicle 202, similar to the vehicle described with reference to fig. 1, that may generate vehicle data 104 and transmit the vehicle data 104 to a computing device 204. As described above, the scene editor component 108 can be configured to scan the input data 102 (e.g., the vehicle data 104 and/or the additional contextual data 106) and identify one or more scenes represented in the input data 102. By way of non-limiting example, such a scenario may be determined based on, for example, clustering of parameterized log data (e.g., using k-means, etc.). In some cases, the scene editor component 108 can use the scene definition data 206 to identify one or more scenes represented in the input data 102. For example, the scene definition data 206 may identify features associated with a scene type. By way of example and not limitation, the scene definition data 206 may identify a jaywalking scene that includes features such as a pedestrian crossing a portion of a driving surface not associated with a crosswalk. The scene editor component 108 can scan the input data 102 to identify portions of the input data that include the feature of a pedestrian crossing a portion of the driving surface not associated with a crosswalk to determine a jaywalking scene. In at least some examples, such scenarios may further be manually entered and/or derived from third-party data (e.g., police reports, publicly available video clips, etc.).
Further, as described above, parameter component 112 may determine scene parameters that may indicate a value or range of values associated with parameters of objects in a scene. The parameter component 112 may generate the scene data 208 as shown in fig. 2.
The scene data 208 may indicate a basic scene that includes a vehicle 210 traversing along a driving surface and an object 212 (which may be a different vehicle) traversing in the same direction as the vehicle 210 in the vicinity of the vehicle 210. The vehicle 210 and the object 212 may be approaching an intersection with a crosswalk. The parameter component 112 may determine the distance between the vehicle 210 and the object 212 as a scene parameter having a range of distances. Thus, scene S1 may represent a first scene of a set of scenes having a first distance between the vehicle 210 and the object 212, scene S2 may represent a second scene of the set of scenes having a second distance between the vehicle 210 and the object 212, and scene SN may represent an Nth scene of the set of scenes having an Nth distance between the vehicle 210 and the object 212. Examples of additional types of parameters (also referred to as attributes) may be found, for example, in U.S. patent application No. 16/363,541 entitled "Pedestrian Prediction Based on Attributes," filed on March 25, 2019, the contents of which are incorporated herein by reference in their entirety.
For example, the scene parameters may include, but are not limited to, a speed of the object 212, an acceleration of the object 212, an x-position of the object 212 (e.g., a global position, a local position, and/or a position relative to any other reference frame), a y-position of the object 212 (e.g., a global position, a local position, and/or a position relative to any other reference frame), a bounding box associated with the object 212 (e.g., a range (length, width, and/or height), a yaw, a pitch, a roll, etc.), a lighting state (e.g., a brake light, a turn signal, a hazard light, a headlight, a backup light, etc.), a wheel orientation of the object 212, a map element (e.g., a distance between the object 212 and a stop light, a stop sign, a speed bump, an intersection, a yield sign, etc.), a classification of the object 212 (e.g., a vehicle, a car, a truck, a bicycle, a motorcycle, a pedestrian, an animal, etc.), object characteristics (e.g., whether the object is changing lanes, whether the object 212 is a double-parked vehicle, etc.), proximity to one or more objects (in any coordinate system), lane type (e.g., lane direction, parking lane), road markings (e.g., indicating whether overtaking or lane changing is permitted, etc.), object density, etc.
As described above, the parameter component 112 may determine a range of values associated with the scene parameter represented by the vehicle data 104 and/or other input data. Thus, each of the example scene parameters identified above, as well as other scene parameters, may be associated with a set of values or a range of values that may be used to generate a set of scenes, where the scenes in the set of scenes differ by one or more scene parameter values.
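As an illustration of how a set of scenes may be generated from scene parameter ranges, the following sketch expands a small set of assumed parameter ranges into scene variants (S1 through SN); the parameter names and values are examples only.

```python
# A minimal sketch of expanding scene parameter ranges into a set of scene
# variants (S_1 ... S_N). Parameter names and step sizes are assumptions.
from itertools import product

scene_parameters = {
    "object_distance_m": [10.0, 20.0, 30.0],   # distance between vehicle and object
    "object_speed_mps": [5.0, 10.0, 15.0],
    "lighting": ["day", "night"],
}

def expand_scenes(base_scene: dict, parameters: dict) -> list[dict]:
    """Cartesian product of parameter values -> one scene per combination."""
    names = list(parameters)
    scenes = []
    for values in product(*(parameters[n] for n in names)):
        scene = dict(base_scene)
        scene.update(zip(names, values))
        scenes.append(scene)
    return scenes

variants = expand_scenes({"map": "two_way_multilane"}, scene_parameters)
print(len(variants))   # 3 * 3 * 2 = 18 scene variants
```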
Fig. 3 illustrates an example 300 of generating error model data based at least in part on vehicle data and ground truth data. As shown in fig. 3, the vehicle 202 may generate the vehicle data 104 and transmit the vehicle data 104 to the error model component 116. As described above, the error model component 116 can determine an error model that can indicate an error associated with a scene parameter. For example, the vehicle data 104 may be data associated with subsystems of the vehicle 202, such as perception systems, planning systems, tracker systems (also referred to as tracking systems), prediction systems, and the like. By way of example and not limitation, the vehicle data 104 may be associated with a perception system, and the vehicle data 104 may include a bounding box associated with an object detected by the vehicle 202 in the environment.
The error model component 116 can receive ground truth data 302 that can be manually labeled and/or determined from other validated machine learning components. By way of example and not limitation, the ground truth data 302 may include a validated bounding box associated with an object in the environment. By comparing the bounding box of the vehicle data 104 with the bounding box of the ground truth data 302, the error model component 116 can determine errors associated with subsystems of the vehicle 202. In some cases, the vehicle data 104 may include one or more characteristics (also referred to as parameters) associated with the detected entity and/or the environment in which the entity is located. In some examples, the features associated with the entity may include, but are not limited to, an x-position (global position), a y-position (global position), a z-position (global position), an orientation, an entity type (e.g., classification), a speed of the entity, a range (size) of the entity, and the like. The characteristics associated with the environment may include, but are not limited to, the presence of another entity in the environment, the status of another entity in the environment, a time of day, a day of the week, a season, weather conditions, indications of darkness/light, and the like. Thus, the error may be associated with other characteristics (e.g., environmental parameters).
The error model component 116 may process the plurality of vehicle data 104 and the plurality of ground truth data 302 to determine error model data 304. The error model data 304 may include errors calculated by the error model component 116, which may be represented as errors 306(1)-(3). Additionally, the error model component 116 can determine probabilities associated with the errors 306(1)-(3), represented as probabilities 308(1)-(3), which can be associated with environmental parameters to produce the error models 310(1)-(3). By way of example and not limitation, the vehicle data 104 may include a bounding box associated with an object at a distance of 50 meters from the vehicle 202 in an environment that includes rainfall. The ground truth data 302 may provide a validated bounding box associated with the object. The error model component 116 may determine error model data 304 that indicates an error associated with the perception system of the vehicle 202. The distance of 50 meters and the rainfall may be used as environmental parameters to determine which of the error models 310(1)-(3) to use. Once the error model is identified, the identified error model may provide an error 306(1)-(3) based on the probabilities 308(1)-(3), wherein errors 306(1)-(3) associated with higher probabilities 308(1)-(3) are more likely to be selected than errors 306(1)-(3) associated with lower probabilities 308(1)-(3).
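The following sketch illustrates, under assumed environmental-parameter buckets and probabilities, how an error model might be selected based on environmental parameters (e.g., distance and rainfall) and how an error might be sampled in proportion to its probability. None of the numbers are taken from the error model data 304.

```python
# Hedged sketch: pick an error model keyed by environmental parameters
# (e.g., range bucket and weather) and sample an error weighted by its
# probability. The bucketing scheme and numbers are illustrative only.
import random

# Each error model maps candidate errors (here, additive size errors in
# meters) to probabilities that sum to 1.
error_models = {
    ("0-30m", "clear"): {0.0: 0.8, 0.2: 0.15, 0.5: 0.05},
    ("30-60m", "rain"): {0.0: 0.5, 0.5: 0.3, 1.0: 0.2},
}

def select_error_model(distance_m: float, weather: str) -> dict:
    bucket = "0-30m" if distance_m < 30 else "30-60m"
    return error_models[(bucket, weather)]

def sample_error(model: dict, rng=random) -> float:
    errors, probs = zip(*model.items())
    return rng.choices(errors, weights=probs, k=1)[0]

model = select_error_model(50.0, "rain")
print(sample_error(model))   # e.g., 0.5 is drawn with probability 0.3
```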
FIG. 4 illustrates an example 400 of perturbing a simulation using error model data by providing at least one of an error or an uncertainty associated with a simulation environment. As described above, the error model data may include an error model that associates the error 306 with the probability 308, and the error model may be associated with an environmental parameter. The simulation component 124 can use the error model data 304 to inject errors, which can result in perturbed, parameterized scenes to perturb the simulation. Based on the injected error, the simulation data may indicate how the autonomous controller responds to the injected error.
For example, the simulation component 124 can perturb the simulation by continuously injecting errors into the simulation. By way of example, and not limitation, example 402 illustrates a bounding box 404 associated with the object at time t0. The bounding box 404 may represent detection of an object by a vehicle that includes errors, such as errors in the size of the bounding box and/or the position of the bounding box. At time t1, the simulation component 124 can employ a bounding box 406 that represents the object and includes a different error. For example, at each simulation time (e.g., t0, t1, or t2), the simulation component 124 can use a different error 306 based on the probabilities 308 associated with the errors 306. At time t2, the simulation component 124 can employ a bounding box 408 that represents the object and includes a different error.
In some cases, the simulation component 124 can perturb the simulation by injecting an uncertainty associated with a bounding box representing an object in the environment. By way of example, and not limitation, example 410 illustrates a bounding box 412 associated with the object at time t0. The bounding box may include an uncertainty of 5%, which may indicate an uncertainty of 5% in the size and/or position of the object. Furthermore, the uncertainty may persist at times t1 and t2, rather than injecting different errors at the different simulation times as shown in example 402.
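The following sketch contrasts the two perturbation styles described above for examples 402 and 410: sampling a fresh error at each simulation time versus applying a single persistent uncertainty. The bounding box representation and the error values are assumptions for illustration.

```python
# Illustrative sketch of the two perturbation styles described above: a new
# sampled error at each simulation time step (example 402) versus a single
# persistent uncertainty (example 410). The box format is an assumption.
import random

def perturb_per_step(box, error_model, steps=3, rng=random):
    """Sample a fresh size error at each time step t0, t1, t2, ..."""
    x, y, length, width = box
    out = []
    for _ in range(steps):
        errs, probs = zip(*error_model.items())
        e = rng.choices(errs, weights=probs, k=1)[0]
        out.append((x, y, length + e, width + e))
    return out

def perturb_with_uncertainty(box, uncertainty=0.05, steps=3):
    """Apply one fixed relative uncertainty that persists across steps."""
    x, y, length, width = box
    return [(x, y, length * (1 + uncertainty), width * (1 + uncertainty))] * steps

error_model = {0.0: 0.7, 0.3: 0.2, 0.6: 0.1}
print(perturb_per_step((10.0, 2.0, 4.5, 2.0), error_model))
print(perturb_with_uncertainty((10.0, 2.0, 4.5, 2.0)))
```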
Fig. 5 shows an example 500 of the vehicle 202 generating the vehicle data 104 and transmitting the vehicle data 104 to the computing device 204. As described above, the error model component 116 may determine a perceptual error model, which may indicate an error associated with the scene parameter. As described above, the vehicle data 104 may include sensor data generated by sensors of the vehicle 202 and/or perception data generated by perception systems of the vehicle 202. The perceptual error model may be determined by comparing the vehicle data 104 with the ground truth data 302. The ground truth data 302 may be manually tagged and may be associated with an environment and may represent known results. Accordingly, deviations in the vehicle data 104 from the ground truth data 302 may be identified as errors in the sensor system and/or the perception system of the vehicle 202. By way of example and not limitation, the perception system may identify the object as a bicyclist, where the ground truth data 302 indicates that the object is a pedestrian. As another example and not by way of limitation, the sensor system may generate sensor data representing an object as having a width of 2 meters, where the ground truth data 302 indicates that the object has a width of 1.75 meters.
As described above, the error model component 116 can determine a classification associated with an object represented in the vehicle data 104 and determine other objects in the vehicle data 104 and/or other log data that have the same classification. The error model component 116 can then determine a probability distribution associated with a series of errors associated with the object. Based on the comparison and range of errors, the error model component 116 can determine the perceptual error model data 502.
As shown in fig. 5, environment 504 may include objects 506(1)-(3) represented as bounding boxes generated by the perception system. Perceptual error model data 502 may indicate the scene parameters as 508(1)-(3) and the errors associated with the scene parameters as 510(1)-(3). As shown in fig. 5, the error associated with scene parameter 508(1) may be visualized in environment 504 as a larger bounding box 512 indicating uncertainty about the size of the corresponding object.
FIG. 6 illustrates an example 600 of the computing device 204 generating the simulation data 126 and determining the vehicle performance data 132. The parameterized scene component 120 may determine a parameterized scene based on scene parameters, a set of regions, and a perceptual error model. For example, the scene parameters may indicate objects in the parameterized scene, locations associated with the objects, velocities associated with the objects, and so forth. Further, a scene parameter may indicate a range of values and/or probabilities associated with the scene parameter. The set of regions may indicate portions of the environment that may be used to place objects in the simulated environment. Further, the perceptual error model may indicate an error associated with a scene parameter. As detailed herein, these may be combined to create a parameterized scene that may cover the variations provided by the scene parameters, the set of regions, and/or the perceptual error model.
The simulation component 124 can use the parameterized scene to simulate variants of the parameterized scene. For example, the simulation component 124 can execute variants of parameterized scenarios to use in simulation for testing and verification. The simulation component 124 may generate simulation data 126 indicating how the autonomous vehicle controller is performing (e.g., responding), and may compare the simulation data 126 to predetermined results and/or determine whether any predetermined rules/assertions are violated/triggered.
As shown in FIG. 6, simulation data 126 may indicate the number of simulations (e.g., simulation 1, simulation 2, etc.) and the results of the simulations (e.g., result 1, result 2). For example, as described above, the results may indicate a pass or fail based on a rule/assertion being violated/triggered. Further, the simulation data 126 may indicate a probability of encountering a scene. By way of example and not limitation, the simulation component 124 can simulate a scene including a pedestrian crossing a road. The input data may indicate that the vehicle encounters a pedestrian crossing the road at a rate of 1 minute for every 1 hour of travel. This can be used to determine the probability of encountering a particular simulation associated with a variant of the parameterized scene. In some cases, the simulation component 124 can identify variants of the parameterized scene with low probability and perform simulations corresponding to those variants. This may allow testing and validation of autonomous vehicle controllers in more unique situations.
Further, the simulation component 124 can identify variants of the parameterized scene based on the results for additional simulation. By way of example and not limitation, the result of the simulation may be a fault in which the scene parameter is associated with a vehicle speed of 15 meters per second. The simulation component 124 can identify speeds approaching 15 meters per second to determine a threshold at which the simulation will pass, which can further help develop a safer vehicle controller.
Based on the simulation data 126, the vehicle performance component 130 may generate vehicle performance data 132. As described above, for example, for event type 1 and cause type 1, data 134(1) may indicate a safety level, and similarly for data 134(2)-134(4). In some cases, the event type may indicate that a cost has reached or exceeded a cost threshold, although other event types are also contemplated. For example, the cost may include, but is not limited to, a reference cost, an obstacle cost, a lateral cost, a longitudinal cost, and the like.
The reference cost may comprise a cost associated with a difference between a point on the reference trajectory (also referred to as a reference point) and a corresponding point on the target trajectory (also referred to as a point or a target point), whereby the difference represents one or more differences in yaw, lateral offset, velocity, acceleration, curvature rate, and the like. In some examples, reducing the weight associated with the reference cost may reduce the penalty associated with a target track located a distance away from the reference track, which may provide a smoother transition resulting in safer and/or more comfortable vehicle operation.
In some examples, the obstacle cost may include a cost associated with a distance between a point on the reference or target trajectory and a point associated with an obstacle in the environment. For example, the points associated with the obstacle may correspond to points on the boundary of the drivable area, or may correspond to points associated with obstacles in the environment. In some examples, obstacles in the environment may include, but are not limited to, static objects (e.g., buildings, curbs, sidewalks, lane markings, road signs, traffic lights, trees, etc.) or dynamic objects (e.g., vehicles, bikers, pedestrians, animals, etc.). In some examples, the dynamic object may also be referred to as a proxy. In some examples, a static object or a dynamic object may be generally referred to as an object or an obstacle.
In some examples, the lateral cost may refer to a cost associated with a steering input of the vehicle, such as a maximum steering input relative to vehicle speed. In some examples, the longitudinal cost may refer to a cost associated with a speed and/or acceleration (e.g., maximum braking and/or acceleration) of the vehicle. Such costs may be used to ensure that the vehicle is operating within the feasible limits and/or comfort limits of the passengers being transported.
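As an illustration of how the cost terms described above might be combined, the following sketch computes a weighted sum of reference, obstacle, lateral, and longitudinal costs; the individual cost functions and weights are simplified placeholders rather than the controller's actual formulation.

```python
# A simplified sketch of combining the cost terms described above into a
# single trajectory cost; the weights and the individual cost functions are
# placeholder assumptions, not the controller's actual formulation.
def reference_cost(target_pt, reference_pt):
    # Penalize lateral offset between target and reference points.
    return abs(target_pt["lateral_offset"] - reference_pt["lateral_offset"])

def obstacle_cost(obstacle_distance_m, min_clearance_m=1.0):
    # Penalize proximity to the nearest obstacle / drivable-area boundary.
    return max(0.0, min_clearance_m - obstacle_distance_m)

def lateral_cost(steering_input, max_steering):
    return (steering_input / max_steering) ** 2

def longitudinal_cost(acceleration, max_acceleration):
    return (acceleration / max_acceleration) ** 2

def total_cost(target_pt, reference_pt, obstacle_distance_m,
               steering_input, acceleration,
               weights=(1.0, 10.0, 0.5, 0.5)):
    w_ref, w_obs, w_lat, w_lon = weights
    return (w_ref * reference_cost(target_pt, reference_pt)
            + w_obs * obstacle_cost(obstacle_distance_m)
            + w_lat * lateral_cost(steering_input, max_steering=0.5)
            + w_lon * longitudinal_cost(acceleration, max_acceleration=3.0))

print(total_cost({"lateral_offset": 0.4}, {"lateral_offset": 0.0},
                 obstacle_distance_m=0.8, steering_input=0.1, acceleration=1.0))
```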
In some cases, cause type 1 and cause type 2 may indicate a fault, such as a vehicle fault or an object (e.g., a cyclist) fault. The vehicle performance component 130 may use predetermined rules/assertions to determine faults. By way of example and not limitation, a rule may indicate that when a vehicle is impacted by an object behind the vehicle, the fault may be associated with the object. In some cases, additional rules may be used, such as a rule indicating that the vehicle must be traveling forward through the environment when impacted by the object from behind. In some cases, a cause type (e.g., cause type 1 and/or cause type 2) may be associated with a component of the autonomous vehicle controller. By way of non-limiting example, such causes may include perception systems, prediction systems, planning systems, network delays, torque/acceleration failures, and/or failures of any other component or subcomponent of the vehicle.
As described above, the analysis component can determine to disable a scene parameter (e.g., set a fixed value associated with the scene parameter) and change other scene parameters based on the simulation data 126. By isolating the scene parameters, the analysis component can determine the scene parameters associated with success or failure results. The vehicle performance data 132 may then indicate a safety metric associated with the parameterized scenario. Further, the analysis component can perform a sensitivity analysis to determine a cause of the fault. For example, the analysis component may individually disable scene parameters to isolate one or more scene parameters, determine how disabling the scene parameters affects the response of the autonomous vehicle, capture statistics associated with disabling the one or more scene parameters, and capture the resulting success or failure outcomes. The statistics may indicate how a scene parameter in the set of scene parameters affects the outcome, may be used to determine which scene parameters increase or decrease the likelihood of a successful simulation, and may be used to identify the subsystems of the autonomous vehicle associated with the scene parameters that increase or decrease success in the simulation data.
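The following is a hedged sketch of such a sensitivity analysis: each scene parameter is held at a fixed value in turn while the remaining parameters vary, and the resulting pass rate is recorded. The run_simulation function is a hypothetical stand-in for the simulation component.

```python
# Hedged sketch of the sensitivity analysis described above: fix ("disable")
# one scene parameter at a time, re-run the simulations, and record how the
# pass rate changes. `run_simulation` is a hypothetical stand-in.
import random

def run_simulation(scene: dict, rng=random) -> bool:
    # Placeholder: a real simulator would execute the scene and return
    # a success/failure outcome for the autonomous vehicle controller.
    return rng.random() > 0.2

def sensitivity_analysis(base_scene, parameter_ranges, fixed_values, runs=50):
    """For each parameter, hold it at a fixed value while varying the rest."""
    stats = {}
    for name, fixed in fixed_values.items():
        passes = 0
        for _ in range(runs):
            scene = dict(base_scene)
            for p, values in parameter_ranges.items():
                scene[p] = fixed if p == name else random.choice(values)
            passes += run_simulation(scene)
        stats[name] = passes / runs
    return stats   # pass rate with each parameter isolated

ranges = {"speed_mps": [10, 15, 20], "distance_m": [10, 30, 50]}
print(sensitivity_analysis({}, ranges, {"speed_mps": 15, "distance_m": 30}))
```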
Fig. 7 illustrates a block diagram of an example system 700 for implementing the techniques discussed herein. In at least one example, the system 700 may include a vehicle 202. In the example 700 shown, the vehicle 202 is an autonomous vehicle; however, the vehicle 202 may be any other type of vehicle (e.g., a driver-controlled vehicle that may indicate whether it is safe to perform various maneuvers).
Vehicle 202 may include a computing device 702, one or more sensor systems 704, one or more emitters 706, one or more communication connections 708 (also referred to as communication devices and/or modems), at least one direct connection 710 (e.g., for physically coupling with vehicle 202 to exchange data and/or provide power), and one or more drive systems 712. One or more sensor systems 704 may be configured to capture sensor data associated with an environment.
The sensor systems 704 may include time-of-flight sensors, location sensors (e.g., GPS, compass, etc.), inertial sensors (e.g., Inertial Measurement Units (IMU), accelerometers, magnetometers, gyroscopes, etc.), lidar sensors, radar sensors, sonar sensors, infrared sensors, cameras (e.g., RGB, IR, intensity, depth, etc.), microphone sensors, environmental sensors (e.g., temperature sensors, humidity sensors, light sensors, pressure sensors, etc.), ultrasonic transducers, wheel encoders, etc. The sensor system 704 may include multiple instances of each of these or other types of sensors. For example, the time-of-flight sensors may include individual time-of-flight sensors located at corners, front, rear, sides, and/or top of the vehicle 202. As another example, the camera sensors may include multiple cameras disposed at different locations around the exterior and/or interior of the vehicle 202. The sensor system 704 may provide input to the computing device 702.
The vehicle 202 may also include one or more emitters 706 for emitting light and/or sound. The one or more emitters 706 in this example include interior audible and visual emitters that communicate with the occupants of the vehicle 202. By way of example and not limitation, the interior emitters may include speakers, lights, signs, display screens, touch screens, haptic emitters (e.g., vibration and/or force feedback), mechanical actuators (e.g., seat belt tensioners, seat positioners, headrest positioners, etc.), and the like. The one or more emitters 706 in this example further include exterior emitters. By way of example and not limitation, the exterior emitters in this example include lights for signaling a direction of travel or other indicators of vehicle action (e.g., indicator lights, signs, light arrays, etc.), and one or more audible emitters (e.g., speakers, speaker arrays, horns, etc.) for audible communication with pedestrians or other nearby vehicles, one or more of which may include acoustic beam steering technology.
The vehicle 202 may also include one or more communication connections 708 that enable communication between the vehicle 202 and one or more other local or remote computing devices (e.g., remote teleoperation computing devices) or remote services. For example, the communication connection 708 may facilitate communication with other local computing devices on the vehicle 202 and/or the drive system 712. Moreover, the communication connection 708 may allow the vehicle 202 to communicate with other nearby computing devices (e.g., other nearby vehicles, traffic signals, etc.).
The communication connection 708 may include a physical and/or logical interface for connecting the computing device 702 to another computing device or to one or more external networks 714 (e.g., the internet). For example, the communication connection 708 may enable Wi-Fi based communications, such as via frequencies defined by the IEEE 802.11 standard, short-range wireless frequencies such as Bluetooth, cellular communications (e.g., 2G, 3G, 4G LTE, 5G, etc.), satellite communications, dedicated short-range communications (DSRC), or any suitable wired or wireless communication protocol that enables the respective computing device to interact with other computing devices. In at least some examples, the communication connection 708 can include one or more modems as described in detail above.
In at least one example, the vehicle 202 may include one or more drive systems 712. In some examples, the vehicle 202 may have a single drive system 712. In at least one example, if the vehicle 202 has multiple drive systems 712, the individual drive systems 712 may be positioned at opposite ends (e.g., front and rear, etc.) of the vehicle 202. In at least one example, drive system 712 may include one or more sensor systems 704 to detect a condition of drive system 712 and/or an environment surrounding vehicle 202. By way of example and not limitation, the sensor system 704 may include one or more wheel encoders (e.g., rotary encoders) to sense rotation of wheels of the drive system, inertial sensors (e.g., inertial measurement units, accelerometers, gyroscopes, magnetometers, etc.) to measure orientation and acceleration of the drive system, cameras or other image sensors, ultrasonic sensors for acoustically detecting objects around the drive system, lidar sensors, radar sensors, and the like. Some sensors, such as wheel encoders, may be unique to the drive system 712. In some cases, the sensor system 704 on the drive system 712 may overlay or supplement a corresponding system (e.g., the sensor system 704) of the vehicle 202.
The drive system 712 may include a number of vehicle systems, including a high voltage battery, an electric motor to drive the vehicle, an inverter to convert direct current from the battery to alternating current for use by other vehicle systems, a steering system, including a steering motor and steering rack (which may be electric), a braking system including hydraulic or electric actuators, a suspension system including hydraulic and/or pneumatic components, a stability control system for distributing braking force to mitigate traction loss and maintain control, an HVAC system, lighting (e.g., lighting for headlights/taillights, etc. that illuminate the environment outside the vehicle), and one or more other systems (e.g., a cooling system, a security system, an onboard charging system, other electrical components such as DC/DC converters, high voltage connectors, high voltage cables, a charging system, a charging port, etc.). Additionally, the drive system 712 may include a drive system controller that may receive and pre-process data from the sensor system 704 and control the operation of various vehicle systems. In some examples, the drive system controller may include one or more processors and memory communicatively coupled with the one or more processors. The memory may store one or more modules to perform various functions of the drive system 712. In addition, the drive systems 712 further include one or more communication connections that enable the respective drive systems to communicate with one or more other local or remote computing devices.
Computing device 702 can include one or more processors 716 and memory 718 communicatively coupled to the one or more processors 716. In the illustrated example, the memory 718 of the computing device 702 stores a positioning component 720, a perception component 722, a prediction component 724, a planning component 726, and one or more system controllers 728. Although shown as residing in memory 718 for illustrative purposes, it is contemplated that positioning component 720, perception component 722, prediction component 724, planning component 726, and one or more system controllers 728 can additionally or alternatively be accessible by computing device 702 (e.g., stored at different components of vehicle 202) and/or accessible by vehicle 202 (e.g., stored remotely).
In the memory 718 of the computing device 702, the positioning component 720 may include functionality to receive data from the sensor system 704 to determine the location of the vehicle 202. For example, the positioning component 720 can include and/or request/receive a three-dimensional map of an environment, and can continuously determine a location of an autonomous vehicle within the map. In some cases, the positioning component 720 may receive time-of-flight data, image data, lidar data, radar data, sonar data, IMU data, GPS data, wheel encoder data, any combination thereof, or the like, using SLAM (simultaneous localization and mapping) or CLAMS (calibration, localization and mapping, simultaneously) to accurately determine the location of the autonomous vehicle. In some cases, the positioning component 720 may provide data to various components of the vehicle 202 to determine an initial position of the autonomous vehicle to generate a trajectory, as discussed herein.
The perception component 722 may include functionality to perform object detection, segmentation, and/or classification. In some examples, sensing component 722 may provide processed sensor data that indicates the presence of an entity proximate to vehicle 202 and/or indicates a classification of the entity as an entity type (e.g., automobile, pedestrian, bicyclist, building, tree, road surface, roadside, sidewalk, unknown, etc.). In additional and/or alternative examples, the sensing component 722 can provide processed sensor data that is indicative of one or more characteristics (also referred to as parameters) associated with the detected entity and/or the environment in which the entity is located. In some examples, the features associated with the entity may include, but are not limited to, an x-position (global position), a y-position (global position), a z-position (global position), an orientation, an entity type (e.g., classification), a speed of the entity, a range (size) of the entity, and the like. The characteristics associated with the environment may include, but are not limited to, the presence of another entity in the environment, the status of another entity in the environment, a time of day, a day of the week, a season, weather conditions, geographic location, indications of darkness/light, and the like.
The perception component 722 may include functionality to store perception data generated by the perception component 722. In some cases, the perception component 722 may determine a trajectory that corresponds to an object that has been classified as an object type. For illustration purposes only, one or more images of the environment may be captured by the sensor system 704 and processed by the perception component 722. The sensor system 704 may capture images of an environment that includes an object, such as a pedestrian. The pedestrian may be at a first position at time T and at a second position at time T + t (e.g., movement during a span of time t after time T). In other words, the pedestrian may move from the first position to the second position within this time span. For example, such motion may be recorded as stored perception data associated with the object.
In some examples, the stored perception data may include fused perception data captured by the vehicle. The fused perceptual data may include a fusion or other combination of sensor data from the sensor system 704, such as an image sensor, a lidar sensor, a radar sensor, a time-of-flight sensor, a sonar sensor, a global positioning system sensor, an internal sensor, and/or any combination of these. The stored perception data may additionally or alternatively include classification data that includes semantic classifications of objects (e.g., pedestrians, vehicles, buildings, roads, etc.) represented in the sensor data. The stored perception data may additionally or alternatively include trajectory data (a collection of historical positions, orientations, sensor features, etc. associated with the object over time) corresponding to motion of the object classified as a dynamic object in the environment. The trajectory data may include a plurality of trajectories of a plurality of different objects over time. When the object is stationary (e.g., standing still) or moving (e.g., walking, running, etc.), the trajectory data may be mined to identify images of certain types of objects (e.g., pedestrians, animals, etc.). In this example, the computing device determines a trajectory corresponding to a pedestrian.
The prediction component 724 may generate one or more probability maps representing the predicted probabilities of the likely locations of one or more objects in the environment. For example, the prediction component 724 may generate one or more probability maps for vehicles, pedestrians, animals, etc. within a threshold distance from the vehicle 202. In some cases, prediction component 724 may measure trajectories of objects and generate discretized predicted probability maps, heat maps, probability distributions, discretized probability distributions, and/or trajectories for the objects based on observed and predicted behavior. In some cases, one or more probability maps may represent the intent of one or more objects in the environment.
The planning component 726 may determine a path for the vehicle 202 to follow to traverse the environment. For example, the planning component 726 may determine various routes and paths at various levels of detail. In some cases, the planning component 726 may determine a route to travel from a first location (e.g., a current location) to a second location (e.g., a target location). For purposes of this discussion, a route may be a sequence of waypoints for travel between two locations. By way of non-limiting example, waypoints include streets, intersections, Global Positioning System (GPS) coordinates, and the like. Further, the planning component 726 may generate instructions for guiding the autonomous vehicle along at least a portion of a route from the first location to the second location. In at least one example, the planning component 726 may determine how to direct the autonomous vehicle from a first waypoint in the sequence of waypoints to a second waypoint in the sequence of waypoints. In some examples, the instruction may be a path or a portion of a path. In some examples, multiple paths may be generated substantially simultaneously (i.e., within technical tolerances) according to a receding horizon technique. One of the multiple paths within a receding data horizon having the highest confidence level may be selected to operate the vehicle.
In other examples, the planning component 726 may alternatively or additionally use data from the perception component 722 to determine a path to be followed by the vehicle 202 to traverse the environment. For example, the planning component 726 may receive data from the perception component 722 regarding objects associated with the environment. Using this data, the planning component 726 may determine a route to travel from a first location (e.g., a current location) to a second location (e.g., a destination location) to avoid objects in the environment. In at least some examples, such a planning component 726 can determine that no such collision-free path exists, and in turn provide a path that safely stops the vehicle 202, thereby avoiding all collisions and/or otherwise mitigating damage.
In at least one example, computing device 702 may include one or more system controllers 728, which may be configured to control steering, propulsion, braking, safety, transmitters, communications, and other systems of vehicle 202. These system controllers 728, which may be in communication with and/or controlled by corresponding systems of the drive system 712 and/or other components of the vehicle 202, may be configured to operate according to the path provided from the planning component 726.
The vehicle 202 may be connected to the computing device 204 via the network 714 and may include one or more processors 730 and memory 732 communicatively coupled with the one or more processors 730. In at least one example, the one or more processors 730 may be similar to the processor 716 and the memory 732 may be similar to the memory 718. In the illustrated example, the memory 732 of the computing device 204 stores the scene editor component 108, the parameter component 112, the error model component 116, the parameterized scene component 120, the simulation component 124, the analysis component 128, and the vehicle performance component 130. Although shown as residing in memory 732 for purposes of illustration, it is contemplated that the scene editor component 108, the parameter component 112, the error model component 116, the parameterized scene component 120, the simulation component 124, the analysis component 128, and the vehicle performance component 130 can additionally or alternatively be accessible by the computing device 204 (e.g., stored in a different component of the computing device 204) and/or accessible by the computing device 204 (e.g., stored remotely). The scene editor component 108, the parameter component 112, the error model component 116, the parameterized scene component 120, the simulation component 124, the analysis component 128, and the vehicle performance component 130 can be substantially similar to the scene editor component 108, the parameter component 112, the error model component 116, the parameterized scene component 120, the simulation component 124, the analysis component 128, and the vehicle performance component 130 of fig. 1.
Processor 716 of computing device 702 and processor 730 of computing device 204 may be any suitable processors capable of executing instructions to process data and perform operations as described herein. By way of example, and not limitation, processors 716 and 730 may include one or more Central Processing Units (CPUs), Graphics Processing Units (GPUs), or any other device or portion of a device that processes electronic data to convert that electronic data into other electronic data that may be stored in registers and/or memory. In some examples, integrated circuits (e.g., ASICs, etc.), gate arrays (e.g., FPGAs, etc.), and other hardware devices may also be considered processors, so long as they are configured to implement the coded instructions.
Memory 718 of computing device 702 and memory 732 of computing device 204 are examples of non-transitory computer-readable media. Memories 718 and 732 may store an operating system and one or more software applications, instructions, programs, and/or data to implement the methods described herein and the functions attributed to the various systems. In various embodiments, memories 718 and 732 may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), non-volatile/flash-type memory, or any other type of memory capable of storing information. The architectures, systems, and individual elements described herein may include many other logical, procedural, and physical components, of which those shown in the figures are merely examples associated with the discussion herein.
In some cases, aspects of some or all of the components discussed herein may include any model, algorithm, and/or machine learning algorithm. For example, in some cases, the components in memories 718 and 732 may be implemented as neural networks.
FIG. 8 shows an example process 800 for determining a safety metric associated with a vehicle controller. Some or all of process 800 may be performed by one or more components in fig. 1-7, as described herein. For example, some or all of process 800 may be performed by computing device 204 and/or computing device 702.
In operation 802 of the example process 800, the process 800 may include receiving log data associated with operating an autonomous vehicle in an environment. In some cases, the log data may be generated by a vehicle that captures at least sensor data of the environment.
In operation 804 of the example process 800, the process 800 may include determining a set of scenes based on the log data (or other data), one scene of the set of scenes including scene parameters associated with an aspect of the environment. In some cases, the computing device may group similar scenes represented in the log data. For example, weighted distances (e.g., Euclidean distances) between parameters of the environment and/or k-means clustering may be used to group the scenes together. Furthermore, a scene parameter may represent an environmental parameter, such as a nighttime environment parameter or a wet-conditions environment parameter. In some cases, the scene parameters may be associated with a vehicle or an object (e.g., pose, speed, etc.).
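As one possible illustration of grouping similar scenes, the following sketch groups scenes whose weighted Euclidean distance in parameter space falls below a threshold; the features, weights, and threshold are assumptions, and k-means or another clustering algorithm could be used instead.

```python
# A minimal sketch, assuming scenes are summarized as numeric parameter
# vectors: group log-data scenes whose weighted Euclidean distance falls
# under a threshold. Feature names, weights, and the threshold are
# illustrative; k-means or another clustering algorithm could be used instead.
import math

def weighted_distance(a, b, weights):
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

def group_scenes(scenes, weights, threshold):
    groups = []   # each group holds indices of similar scenes
    for i, scene in enumerate(scenes):
        for group in groups:
            if weighted_distance(scene, scenes[group[0]], weights) < threshold:
                group.append(i)
                break
        else:
            groups.append([i])
    return groups

# Scenes described by (vehicle speed m/s, object distance m, object speed m/s).
scenes = [(15.0, 30.0, 1.5), (14.5, 28.0, 1.4), (5.0, 10.0, 0.0)]
print(group_scenes(scenes, weights=(1.0, 0.1, 2.0), threshold=3.0))
```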
At operation 806 of the example process 800, the process 800 may include determining an error model associated with a subsystem of the autonomous vehicle. The error model component may compare the vehicle data (e.g., log data) to the ground truth data to determine differences between the vehicle data and the ground truth data. In some cases, the vehicle data may represent estimated values associated with objects in the environment, such as estimated positions, estimated orientations, estimated ranges, and so forth, and the ground truth data may represent actual positions, actual orientations, or actual ranges of objects. Based on the difference, the error model component may determine an error associated with a subsystem (e.g., perception system, tracking system, prediction system, etc.) of the vehicle.
At operation 808 of the example process 800, the process 800 may include determining a parameterized scene based on the scene parameters and the error model. These can be combined to create a parameterized scene that can cover the variations provided by the scene parameters and/or the error model. In some cases, scene parameters may be randomly selected and combined to create the parameterized scene. In some cases, scene parameters may be combined based on a probability of coincidence. By way of example and not limitation, the log data may indicate that 5% of the driving experience includes encountering a pedestrian, and the parameterized scene component may include a pedestrian as a scene parameter in 5% of the parameterized scenes generated by the parameterized scene component. In some cases, the parameterized scene component may validate parameterized scenes to reduce improbable or impossible combinations of scene parameters. By way of example and not limitation, vehicles would not be placed in a lake and pedestrians would not travel at a speed of 30 meters per second. By way of non-limiting example, such a parameterized scene may include ranges of distances, speeds, lighting conditions, weather conditions, etc. of vehicles and people crossing roads on a particular defined road, having a Gaussian distribution (or other distribution) of various errors over perception models, prediction models, etc. based at least in part on the scene parameters.
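The following sketch illustrates, under assumed rates and bounds, how scene parameters might be sampled in proportion to their observed frequency (e.g., a pedestrian in roughly 5% of scenes) and how improbable or impossible combinations might be rejected during validation.

```python
# Illustrative sketch: include a scene element (e.g., a pedestrian) with the
# rate observed in the log data, then reject implausible parameter
# combinations. Rates, bounds, and validation rules are assumptions.
import random

def sample_parameterized_scene(rng=random):
    scene = {
        "vehicle_speed_mps": rng.uniform(0.0, 20.0),
        "has_pedestrian": rng.random() < 0.05,   # ~5% of driving experience
    }
    if scene["has_pedestrian"]:
        scene["pedestrian_speed_mps"] = rng.uniform(0.0, 3.0)
    return scene

def is_valid(scene):
    # Reject improbable/impossible combinations of scene parameters.
    if scene.get("pedestrian_speed_mps", 0.0) > 12.0:   # faster than any sprinter
        return False
    return True

scenes = [s for s in (sample_parameterized_scene() for _ in range(1000)) if is_valid(s)]
print(sum(s["has_pedestrian"] for s in scenes) / len(scenes))   # roughly 0.05
```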
In operation 810 of the example process 800, the process 800 may include perturbing the parameterized scene by modifying at least one of the parameterized scene, a scene parameter, or a component of the simulated vehicle based at least in part on the error. In some cases, uncertainty may be associated with the scene parameters. By way of example and not limitation, the location of an object may be associated with an uncertainty of 5%, resulting in the autonomous controller traversing the environment while accounting for the uncertainty of 5%. In some cases, when the simulator performs a simulation, the simulator may determine from the error model the errors to be incorporated into the simulation.
In operation 812 of the example process 800, the process 800 may include instantiating the simulated vehicle in the perturbed parameterized scene. The simulator may use a simulated vehicle that may be associated with the autonomous controller and traverse the simulated environment with the autonomous controller. Instantiating the autonomous vehicle controller in a parameterized scene and simulating the parameterized scene can effectively cover a wide variety of scenes without manually enumerating the variants. Additionally, based at least in part on executing the parameterized scenario, the simulation data may instruct the autonomous vehicle controller how to respond to the parameterized scenario and determine a successful outcome or an unsuccessful outcome based at least in part on the simulation data.
In operation 814 of the example process 800, the process may include receiving simulation data indicating how the simulated vehicle responds to the perturbed parameterized scene. After the simulation, the results may indicate a pass (e.g., a successful result), a failure, and/or a degree of success or failure associated with the vehicle controller.
At operation 816 of the example process 800, the process may include determining a safety metric associated with the vehicle controller based on the simulation data. For example, each simulation may result in successful or unsuccessful results. Further, as described above, the vehicle performance component may determine vehicle performance data based on the simulation data, which may be indicative of the performance of the vehicle in the environment. Based on the sensitivity analysis, the vehicle performance data may indicate a scenario where the result of the simulation was unsuccessful, a reason for the simulation being unsuccessful, and/or a boundary of a scenario parameter indicating a value of the scenario parameter for which the simulation result was successful. Thus, the safety metric may indicate the pass/fail rate of the vehicle controller in various simulation scenarios.
FIG. 9 illustrates a flow chart of an example process for determining a statistical model associated with a subsystem of an autonomous vehicle. Some or all of process 900 may be performed by one or more components in fig. 1-7, as described herein. For example, some or all of process 900 may be performed by computing device 204 and/or computing device 702.
At operation 902 of the example process 900, the process 900 may include receiving vehicle data (or other data) associated with a subsystem of an autonomous vehicle. The vehicle data may include log data captured by vehicles traveling in the environment. In some cases, the vehicle data may include control data (e.g., data used to control systems such as steering, braking, etc.) and/or sensor data (e.g., lidar data, radar data, etc.).
At operation 904 of the example process 900, the process 900 may include determining output data associated with the subsystem based on the vehicle data. By way of example and not limitation, the subsystem may be a perception system and the output data may be a bounding box associated with an object in the environment.
In operation 906 of the example process 900, the process 900 may include receiving ground truth data associated with the subsystem. In some cases, ground truth data may be manually flagged and/or determined from other validated machine learning components. By way of example and not limitation, the ground truth data may include verified bounding boxes associated with objects in the environment.
In operation 908 of the example process 900, the process 900 may include determining a difference between the first portion of the output data and the second portion of the ground truth data, the difference representing an error associated with the subsystem. As described above, the output data may include a bounding box associated with an object in the environment, as detected by a perception system of the vehicle, and the ground truth data may include a validated bounding box associated with the object. The difference between the two bounding boxes may indicate an error associated with the perception system of the vehicle. By way of example and not limitation, the bounding box of the output data may be larger than the bounding box of the verification, indicating that the perception system detects an object larger than the object in the environment.
In operation 910 of the example process 900, the process 900 may include determining a statistical model associated with the subsystem based on the differences.
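As a simplified illustration of operations 904 through 910, the following sketch computes differences between subsystem output (bounding-box extents) and ground truth extents and summarizes them as a basic statistical model; a full implementation might instead fit a distribution or retain a histogram.

```python
# A hedged sketch of operations 904-910: compare the subsystem's output
# (here, bounding-box extents) against ground truth and summarize the
# differences as a simple statistical model (mean / standard deviation).
import statistics

def bounding_box_errors(output_extents_m, ground_truth_extents_m):
    """Per-object difference between detected and verified extents."""
    return [o - g for o, g in zip(output_extents_m, ground_truth_extents_m)]

def statistical_model(errors):
    return {
        "mean_error_m": statistics.fmean(errors),
        "stdev_m": statistics.stdev(errors) if len(errors) > 1 else 0.0,
    }

detected = [2.0, 4.6, 1.9, 5.1]        # extents from the perception subsystem
verified = [1.75, 4.5, 2.0, 4.9]       # ground truth extents
print(statistical_model(bounding_box_errors(detected, verified)))
```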
FIG. 10 shows an example 1000 of multiple regions of an environment associated with error probabilities. As described above, the error model may indicate one or more parameters associated with the environment, and the error model may indicate a probability and/or error distribution associated with the one or more parameters. The error may represent a difference between data associated with the vehicle-generated environment and the actual environment (or ground truth data representing the environment). For example, the error may indicate a difference in size, position, and/or velocity of the object in the environment, or whether the object is detected in the environment. In some cases, the error may indicate a difference between the classification of the object generated by the vehicle (e.g., pedestrian, bicyclist, vehicle, sign, static object, dynamic object, etc.) and the actual classification of the object. In some cases, the error may indicate a difference between the predicted trajectory of the object and the actual trajectory of the object.
Example 1000 illustrates probabilities associated with errors and how factors such as a distance between a sensor and a region in an environment or a type of object in the environment affect such probabilities.
By way of example, and not limitation, the region 1002 may be associated with a portion of the environment within line of sight of the vehicle 210. Based at least in part on, for example, the distance between region 1002 and vehicle 210, the error model may indicate a probability 1004(1) associated with error 1006 (1). For example, a relatively close distance between region 1002 and vehicle 210 may reduce the probability of error due to an increase in sensor data accuracy, precision, and/or density as compared to sensor data of environmental regions that are relatively distant from vehicle 210.
Furthermore, environmental conditions may affect the probability of error. By way of example and not limitation, a probability 1004(1) (e.g., a false positive error) of an error 1006(1) associated with an object perceived by the vehicle 210 but not present in the environment may be lower under ideal conditions (e.g., good lighting, clear weather, etc.) than a probability 1004(1) associated with a false positive error under ambient conditions of darkness, rain, snow, etc.
As another example and without limitation, region 1008 may be associated with a portion of the environment within a line of sight of vehicle 210 and with object 212. The error model may indicate a probability 1004(2) associated with an error 1006(2) based at least in part on, for example, a distance between the region 1008 and the vehicle 210 and/or a classification associated with the object 212. By way of example and not limitation, vehicle 210 may determine a size associated with object 212 based on the perception data. The error model may indicate that the probability of an error 1006(2) in which the actual size of object 212 is twice as large as the size indicated by the perception data is lower than the probability of an error 1006(2) in which the actual size of object 212 is only slightly larger than the size indicated by the perception data.
As another example and not by way of limitation, region 1010 may be associated with a portion of the environment that is occluded or partially occluded by object 212. Based at least in part on, for example, the amount by which region 1010 is occluded, the error model may indicate a probability 1004(3) associated with error 1006(3). By way of example and not limitation, the probability 1004(3) of error 1006(3), associated with an object not perceived by vehicle 210 but actually present in the environment (e.g., a false negative error), may be higher than the probability 1004(1) of a false negative error 1006(1) at region 1002, due to the occlusion or partial occlusion of region 1010.
As another example and without limitation, region 1012 may be associated with a portion of the environment within the line of sight of vehicle 210 and with object 1014. Based at least in part on, for example, the distance between region 1012 and vehicle 210 and/or the classification associated with object 1014, the error model may indicate a probability 1004(4) associated with error 1006 (4). By way of example and not limitation, vehicle 210 may determine a pose associated with object 1014 based on the perception data. The error model may indicate a probability 1004(4) that the actual pose of the object 1014 is different from the error 1006(4) of the pose indicated by the perceptual data. In some cases, probability 1004(4) of error 1006(4) associated with region 1012 may be higher than probability 1004(2) of error 1006(2) due to the increased distance between region 1012 and vehicle 210 compared to the distance between region 1008 and vehicle 210. Further, errors 1006(4) and 1006(2) may change based on the classification associated with objects 212 and 1014. That is, the error model may be adjusted based on classification, distance, environmental factors, and the like.
Error models indicative of probabilities 1004(1), 1004(2), 1004(3), and 1004(4) and errors 1006(1), 1006(2), 1006(3), and 1006(4) may be generated based at least in part on vehicle data 104 generated by vehicle 202. For example, the vehicle 202 may generate the vehicle data 104 associated with the environment and time. The vehicle data 104 may include sensor data, sensory data, predictive data, and the like. As described above, the error model component 116 may receive the vehicle data 104 and the ground truth data 302 to determine differences between the vehicle data 104 and the ground truth data 302. The difference may indicate an error in the vehicle data 104. As described above, the error may be associated with a difference in the classification of the object, the size of the object, the position of the object, the velocity of the object, and/or the trajectory of the object in the environment. In some cases, the difference may be associated with vehicle data 104 indicating that the object is present in the environment while the object is not actually present (e.g., a false positive error), and in some cases, the difference may be associated with vehicle data 104 indicating that the object is not present in the environment (e.g., omitted) while the object is actually present (e.g., a false negative error).
In some cases, the vehicle data 104 can include data associated with multiple environments, and the error model component 116 can determine a frequency (also referred to as a frequency of occurrence) associated with one or more errors. By way of example and not limitation, based on the difference between the vehicle data 104 and the ground truth data 302 and the frequency of the difference, the error model may indicate that the probability that the vehicle data 104 represents a curb height 2 centimeters above the actual curb in the environment is higher than the probability that the vehicle data 104 represents a curb height 1 meter above the actual curb, and the difference and probability may be represented as a distribution in the error model.
In at least some examples, various environmental parameters such as, but not limited to, vehicle speed, location, object location, speed, object classification, weather, time of day, etc., may be used in conjunction with the error to cluster the measurements into various categories (e.g., using k-means, decision trees, other suitable clustering algorithms, etc.). The measurements in the various clusters can then be aggregated to determine an error distribution associated with the clusters. Conversely, when driving, the vehicle may select the cluster with the closest environmental parameter to the current observed, and then use the associated error model.
In some cases, a machine learning model may be used to determine the distribution fit. By way of example and not limitation, the difference between the vehicle data 104 and the ground truth data 302 may represent discrete points. The machine learning model may determine the distribution based on the discrete points by fitting one or more distributions to the discrete points. In some cases, the distribution can be a discrete distribution (e.g., a logarithmic distribution, a poisson distribution, etc.), a bernoulli distribution, a continuous distribution (e.g., a generalized normal distribution, a gaussian distribution, etc.), a mixed discrete/continuous distribution, a joint distribution, and/or the like. Of course, in some examples, in addition to/instead of determining the best fit of the distribution, a histogram (or the like) of the measurement data associated with a particular cluster may be stored.
In some cases, the vehicle data 104 may be clustered using a clustering algorithm. By way of example and not limitation, the vehicle data 104 may be clustered based on a classification associated with the object, a time of day, an amount of visible light detected by the vehicle 202, weather conditions (e.g., sunlight, cloudy days, precipitation, fog, etc.), a speed of the vehicle 202, a pose of the vehicle 202, and/or an environmental area. Clustering algorithms may include, for example, class specific clustering, k-means algorithms, expectation maximization algorithms, multivariate normal distributions, and the like.
The clustered vehicle data 104 can be used to determine a distribution associated with a vehicle data cluster of the clustered vehicle data 104. By way of example and not limitation, class-specific clustering may include clustering objects (e.g., pedestrians, vehicles, etc.) represented in the vehicle data 104 by classification, and the distribution associated with a pedestrian cluster may be used as a pedestrian error model while the distribution associated with a vehicle cluster may be used as a vehicle error model.
FIG. 11 shows an example 1100 of vehicle data, the environments represented by the vehicle data, and the differences between the vehicle data and the environments. In general, example 1100 illustrates four cases: a true positive case (which may still include errors), a false negative error, a false positive error, and a true negative case (no error), each of which is discussed below and throughout this disclosure.
Example 1102 shows vehicle data 1104(1) associated with environment 1106(1). As shown in example 1102, the vehicle data 1104(1) may include perception data 1108, and the perception data 1108 may be associated with an object (e.g., object 1110) perceived by the vehicle in environment 1106(1). However, as shown in environment 1106(1), the perception data 1108 may misrepresent attribute values associated with the object 1110, such as the location, classification, or size of the object 1110. Thus, while this may be considered a true positive scenario, differences may nonetheless arise between the vehicle data 1104(1) and the environment 1106(1).
Example 1112 shows vehicle data 1104(2) associated with environment 1106 (2). As shown in example 1112, the vehicle data 1104(2) may indicate that the vehicle does not perceive any objects in the environment but that the object 1114 is indeed present in the environment 1106 (2). This can be considered a false negative scenario.
Example 1116 shows vehicle data 1104(3) associated with environment 1106 (3). As shown in example 1116, vehicle data 1104(3) may include perception data 1118, and perception data 1118 may be associated with an object that is perceived by the vehicle but not present, as shown in environment 1106 (3). This may be considered a false positive scenario.
Example 1120 shows vehicle data 1104(4) associated with environment 1106 (4). In example 1120, vehicle data 1104(4) does not represent any objects, and likewise, environment 1106(4) does not include any objects. This can therefore be considered a true negative scenario.
For false positive errors, the error model may be determined by distance, environmental factors, etc. For example, a false positive error may indicate a probability that the perception system determines that an object is present in the environmental region when the object is not present in the environmental region.
For false negative errors, the error model may be adjusted by distance, environmental factors, attributes of the object, and the like. A false negative error may indicate, for example, a probability that the perception system determines that an object is not present in a region of the environment when the object is indeed present in that region. As described above, false negative errors may be based at least in part on a classification of the real object, a size of the real object, a distance between the sensor capturing the data and the region of the environment associated with the object, an amount of occlusion of the real object, and/or the like.
As described above, the error model may include distributions corresponding to examples 1102, 1112, 1116, and/or 1120 (e.g., true positive, false negative, false positive, and/or true negative scenes). The autonomous vehicle controller can traverse a simulated environment, and the simulation component 124 can determine a probability of a true negative, a false positive, a false negative, and/or a true positive associated with an area of the simulated environment based at least in part on the error model. The simulation component 124 can then determine simulated data of the disturbance by creating a false positive scenario, a false negative scenario, and/or a true positive scenario to disturb the simulated environment and determine a response of the simulated vehicle controller (also referred to as the simulated autonomous vehicle controller) to the false positive scenario, the false negative scenario, and/or the true positive scenario.
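One way the sampling of false positive, false negative, true positive, and true negative outcomes from region probabilities could be sketched is shown below; the probability values, dictionary keys, and function name are placeholders for illustration, not values from the disclosure.

```python
# A minimal sketch of how a simulation component might sample perception
# outcomes for a region of the simulated environment from an error model.
import random

def sample_outcome(region_error_model, object_present, rng=random):
    """Return one of 'true_positive', 'false_negative', 'false_positive', 'true_negative'."""
    if object_present:
        p_fn = region_error_model["false_negative"]
        return "false_negative" if rng.random() < p_fn else "true_positive"
    p_fp = region_error_model["false_positive"]
    return "false_positive" if rng.random() < p_fp else "true_negative"

error_model = {"false_negative": 0.05, "false_positive": 0.01}  # placeholder probabilities
outcome = sample_outcome(error_model, object_present=True)
print(outcome)  # usually 'true_positive', occasionally 'false_negative'
```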
By way of example and not limitation, the simulation component 124 can select an error model based at least in part on a distance between the simulated vehicle controller and the simulated object and/or any additional environmental parameters discussed herein. For example, when the simulated object is represented at a first time as 5 meters from the simulated vehicle controller, the simulation component 124 can determine an error model that represents the error encountered at that distance. The simulation may represent a simulated object at a second distance (e.g., 10 meters) from the simulated vehicle controller at a second time after the first time. The simulation component 124 can select an error model based on such distance and can determine scene data representative of additional perturbations of the simulated objects in the simulated environment at the second time. Thus, the simulation component 124 can employ the error model to continuously (or intermittently) perturb the simulation environment as the simulation executes.
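A distance-dependent selection of an error model during simulation could be sketched as follows; the distance buckets and standard deviations are illustrative assumptions only, and the perturbation is applied each time the simulated object's distance changes.

```python
# A sketch of selecting a distance-dependent error model each simulation tick
# and perturbing the simulated object's reported position accordingly.
import random

DISTANCE_BUCKETS = [          # (max_distance_m, position_error_std_m) - assumed values
    (7.5, 0.05),
    (15.0, 0.15),
    (float("inf"), 0.40),
]

def position_error_std(distance_m):
    """Pick the error model (here a Gaussian std) for the current distance."""
    for max_d, std in DISTANCE_BUCKETS:
        if distance_m <= max_d:
            return std

def perturb_position(true_position_m, distance_m, rng=random):
    std = position_error_std(distance_m)
    return true_position_m + rng.gauss(0.0, std)

# At 5 m the perturbation is small; at 10 m a coarser error model is used.
print(perturb_position(5.0, distance_m=5.0))
print(perturb_position(10.0, distance_m=10.0))
```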
In some cases, the error distribution of the error model may be based at least in part on an elapsed simulation time. By way of example and not limitation, the simulated vehicle controller may perceive the simulated object at a first time and again at a second time. The error model may indicate that a probability of error associated with the perception of the simulated object at the first time is higher than a probability of error associated with the perception of the simulated object at the second time. In some examples, the first time may represent a time at which the simulated vehicle controller first detected the simulated object. That is, the error model may change based at least in part on a duration for which the simulated object has been tracked by the simulated vehicle controller.
In some cases, the error model may be used during operation of the autonomous vehicle in a non-simulated environment. By way of example and not limitation, an autonomous vehicle may traverse an environment and capture sensor data of the environment. The perception system of the autonomous vehicle may generate perception data indicative of, for example, objects in the environment. The vehicle may include an error model component and may use the error model component to determine a probability of error associated with the perception data. For example, the autonomous vehicle may perceive an object in the environment, such as a parking curb of a parking lot. The autonomous vehicle may receive an error probability associated with a location of the parking curb. By way of example and not limitation, the error model may indicate that the probability of the parking curb being within 0.1 meters, in a lateral direction, of the location of the parking curb as perceived by the autonomous vehicle is 5%. As described above, the probability may be based on features such as an x-position (global position), a y-position (global position), a z-position (global position), an orientation, an object type (e.g., classification), a speed of the object, an extent (size) of the object, a presence of another object in the environment, a state of another object in the environment, a time of day, a day of the week, a season, a weather condition, an indication of darkness/light, and the like.
By taking the probability of a perception error into account, the planning system of the autonomous vehicle may traverse the environment more safely. In some cases, the planning system may use a probability threshold and determine trajectories that avoid the locations of objects whose associated probability meets or exceeds the probability threshold. By way of example and not limitation, the planning system may use a 1% probability threshold and avoid locations associated with objects having a probability of 1% or greater. In some cases, the planning system may use a probability threshold associated with the classification of the object. By way of example and not limitation, the planning system may have a lower probability threshold associated with a pedestrian classification than the probability threshold associated with debris on the driving surface.
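The per-classification thresholding described above could be sketched as follows; the threshold values, field names, and object records are hypothetical and chosen only for illustration.

```python
# A hedged sketch: per-classification probability thresholds decide which
# perceived objects the planner must route around.
CLASS_THRESHOLDS = {"pedestrian": 0.001, "vehicle": 0.005, "debris": 0.01}  # assumed values

def objects_to_avoid(perceived_objects):
    """Keep objects whose existence probability meets or exceeds the class threshold."""
    avoid = []
    for obj in perceived_objects:
        threshold = CLASS_THRESHOLDS.get(obj["classification"], 0.01)
        if obj["probability"] >= threshold:
            avoid.append(obj)
    return avoid

perceived = [
    {"id": 1, "classification": "pedestrian", "probability": 0.002},
    {"id": 2, "classification": "debris", "probability": 0.004},
]
print([o["id"] for o in objects_to_avoid(perceived)])  # pedestrian kept, debris dropped
```

This reflects the idea that a lower threshold is applied to safety-critical classifications such as pedestrians than to debris on the driving surface.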
FIG. 12 shows a flow diagram of an example process for determining an error model and determining simulated data of a disturbance.
In operation 1202 of the example process 1200, the process 1200 may include receiving vehicle data indicative of a perceived state of an object. In some cases, the vehicle data may include log data captured by vehicles traveling in an environment. In some cases, the vehicle data may include control data (e.g., data for control systems such as steering, braking, etc.), sensor data (e.g., lidar data, radar data, etc.), and/or outputs of vehicle subsystems (e.g., perception data such as classification data, bounding boxes, an x-position (global position), a y-position (global position), a z-position (global position), an orientation, an entity type (e.g., classification), a speed of the entity, and a range of the entity; prediction data; etc.).
At operation 1204 of the example process 1200, the process 1200 may include receiving ground truth data associated with the object based at least in part on the vehicle data. In some cases, ground truth data may be manually flagged and/or determined from other validated machine learning components. By way of example and not limitation, ground truth data may include verified bounding boxes associated with objects in the environment.
In operation 1206 of the example process 1200, the process 1200 may include determining a difference between the vehicle data and the ground truth data. If there is no discrepancy, process 1200 may return to operation 1202. If a difference does exist, the process may proceed to operation 1208.
In operation 1208 of the example process 1200, the process 1200 may include determining an error based at least in part on the vehicle data and the ground truth data. As described above, the difference between the vehicle data and the ground truth data may represent an error in the vehicle data. The error may be associated with a probability, and a distribution of the error and the probability may be included in the error model.
In operation 1210 of the example process 1200, the process 1200 may include determining a plurality of parameters based at least in part on the vehicle data. As described above, the vehicle data may include data such as control data (e.g., data for control systems such as steering, braking, etc.), sensor data (e.g., lidar data, radar data, etc.), and/or outputs of vehicle subsystems (e.g., perception data such as classification data, bounding boxes, an x-position (global position), a y-position (global position), a z-position (global position), an orientation, an entity type (e.g., classification), a speed of the entity, and a range of the entity; prediction data; etc.). The plurality of parameters may include, but is not limited to, the presence of another entity in the environment, the state of another entity in the environment, a time of day, a day of the week, a season, a weather condition, an indication of darkness/light, and the like.
In operation 1212 of the example process 1200, the process 1200 may include clustering at least a portion of the vehicle data based at least in part on the plurality of parameters and the error. For example, clustering may be performed based on a classification of objects represented in the vehicle data, a time of day, an amount of visible light detected by the vehicle, weather conditions, a speed of the vehicle, a pose of the vehicle, and/or an environmental region. The distribution of errors and probabilities may be associated with a cluster of the clustered vehicle data.
In operation 1214 of the example process 1200, the process 1200 may include determining an error model based at least in part on a portion of the vehicle data. As described above, a machine learning model may be used to determine a distribution fit, fitting a distribution to the errors and probabilities represented in the vehicle data. In some cases, a fit metric may be used to evaluate how well the distribution fits the errors and probabilities. By way of example and not limitation, a negative log-likelihood function or log-likelihood function may be used to determine the fit metric. By way of example and not limitation, a distribution fit to non-clustered vehicle data may yield a negative log-likelihood value that is higher than a distribution fit to vehicle data clustered using multivariate expectation maximization, where a higher negative log-likelihood value indicates a poorer fit of the distribution.
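The negative log-likelihood comparison described above could be sketched as follows, using synthetic error samples as stand-ins for clustered and non-clustered vehicle data; the distributions, sample parameters, and cluster split are assumptions for illustration.

```python
# A sketch of the fit metric: the summed negative log-likelihood of per-cluster
# fits is compared against a single fit over all (unclustered) errors; a lower
# value indicates a better fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
errors_near = rng.normal(0.02, 0.01, 300)   # e.g. errors for nearby objects
errors_far = rng.normal(0.20, 0.05, 300)    # e.g. errors for distant objects
all_errors = np.concatenate([errors_near, errors_far])

def nll(samples):
    """Negative log-likelihood of a Gaussian fit to the samples."""
    loc, scale = stats.norm.fit(samples)
    return -np.sum(stats.norm.logpdf(samples, loc, scale))

unclustered_nll = nll(all_errors)
clustered_nll = nll(errors_near) + nll(errors_far)
print(f"unclustered NLL: {unclustered_nll:.1f}, clustered NLL: {clustered_nll:.1f}")
# The clustered fit typically yields the lower (better) negative log-likelihood.
```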
In some examples, operation 1214 may include determining an error model associated with false positives at the location for an "empty" area of the environment. In some examples, operation 1214 may include determining an error model associated with false negatives at the location for an "occupied" area of the environment. For example, operation 1214 may include determining an error model based on distance, environmental factors, and the like.
Example clauses
A: A system, comprising: one or more processors; and one or more computer-readable media storing computer-executable instructions that, when executed, cause the system to perform operations comprising: receiving log data associated with operating an autonomous vehicle in an environment; determining a set of scenes based at least in part on the log data, a scene of the set of scenes including scene parameters associated with an aspect of the environment; determining an error model associated with a subsystem of the autonomous vehicle; determining a parameterized scene based at least in part on the scene parameters and the error model; perturbing the parameterized scene, as a perturbed parameterized scene, by adding an error to at least one of the scene parameters or a component of a simulated vehicle to be instantiated in the perturbed parameterized scene, the simulated vehicle being controlled by a vehicle controller; instantiating the simulated vehicle in the perturbed parameterized scene; receiving simulation data indicating how the simulated vehicle responds to the perturbed parameterized scene; and determining a safety metric associated with the vehicle controller based at least in part on the simulation data.
B: The system of paragraph A, wherein determining the set of scenes comprises: clustering the log data to determine a first set of clusters, wherein an individual cluster of the first set of clusters is associated with a single scene; determining a probability associated with the individual cluster based at least in part on the first set of clusters; and determining a second set of clusters based at least in part on a probability threshold and the first set of clusters.
C: The system of paragraph A, wherein determining the error model comprises: receiving ground truth data associated with an environment; determining an error based at least in part on comparing the ground truth data to the log data; and determining an error distribution based at least in part on the error; wherein the error model comprises the error distribution.
D: The system of paragraph A, wherein the parameterized scene is a first parameterized scene, the perturbed parameterized scene is a first perturbed parameterized scene, and the simulation data is first simulation data, the operations further comprising: determining, based on the first simulation data, a second parameterized scene including at least one of a first subset of the scene parameters or a second subset of the error model; perturbing the second parameterized scene as a second perturbed parameterized scene; instantiating the simulated vehicle in the second perturbed parameterized scene; receiving second simulation data; and updating the safety metric based at least in part on the second simulation data.
E: A method, comprising: determining a scene comprising scene parameters describing a portion of an environment; receiving an error model associated with a subsystem of a vehicle; determining a parameterized scene based at least in part on the scene, the scene parameters, and the error model; perturbing the parameterized scene as a perturbed parameterized scene; receiving simulation data indicating how the subsystem of the vehicle responds to the perturbed parameterized scene; and determining a safety metric associated with the subsystem of the vehicle based at least in part on the simulation data.
F: The method of paragraph E, wherein the scene parameters are associated with at least one of an object size, an object speed, an object pose, an object density, a vehicle speed, or a vehicle trajectory.
G: The method of paragraph E, wherein determining the scene comprises: receiving log data associated with an autonomous vehicle; clustering the log data to determine a first set of clusters, wherein an individual cluster of the first set of clusters is associated with the scene; determining a probability associated with the individual cluster based at least in part on the first set of clusters; and determining that the probability meets or exceeds a probability threshold.
H: The method of paragraph E, wherein the error model is determined based at least in part on: receiving ground truth data associated with an environment; determining an error based at least in part on comparing the ground truth data to log data associated with the vehicle; and determining an error distribution based at least in part on the error; wherein the error model comprises the error distribution.
I: The method of paragraph E, wherein the parameterized scene is a first parameterized scene, the perturbed parameterized scene is a first perturbed parameterized scene, and the simulation data is first simulation data, the method further comprising: determining, based on the first simulation data, a second parameterized scene that includes at least one of a first subset of the scene parameters or a second subset of the error model; perturbing the second parameterized scene; receiving second simulation data; and updating the safety metric based at least in part on the second simulation data.
J: the method of paragraph I, further comprising: disabling at least a first portion of one of the scene parameters or the error model; and associating the second simulation data with at least a second portion of the one of the error model or the scene parameter that is not disabled.
K: The method of paragraph E, wherein the safety metric indicates a probability of reaching or exceeding a cost threshold.
L: the method of paragraph E, wherein the portion is a first portion, the method further comprising: receiving map data, wherein a second portion of the map data is associated with a first portion of the environment; and determining that the second portion of the map data is associated with a scene associated with a probability of meeting or exceeding a threshold probability associated with a scene parameter.
M: A non-transitory computer-readable medium storing instructions executable by a processor, wherein the instructions, when executed, cause the processor to perform operations comprising: determining a scene comprising scene parameters describing a portion of an environment; receiving an error model associated with a vehicle subsystem or determining one or more terms of an error model associated with a vehicle subsystem; determining a parameterized scene based at least in part on the scene, the scene parameters, and the error model; perturbing the parameterized scene as a perturbed parameterized scene; receiving simulation data indicating how the vehicle subsystem responds to the perturbed parameterized scene; and determining a safety metric associated with a subsystem of the vehicle based at least in part on the simulation data.
N: The non-transitory computer-readable medium of paragraph M, wherein the scene parameters are associated with at least one of an object size, an object speed, an object pose, an object density, a vehicle speed, or a vehicle trajectory.
O: The non-transitory computer-readable medium of paragraph M, wherein determining the scene comprises: receiving log data associated with an autonomous vehicle; clustering the log data to determine a first set of clusters, wherein an individual cluster of the first set of clusters is associated with the scene; determining a probability associated with the individual cluster based at least in part on the first set of clusters; and determining that the probability meets or exceeds a probability threshold.
P: The non-transitory computer-readable medium of paragraph M, wherein the error model is determined based at least in part on: receiving ground truth data associated with an environment; determining an error based at least in part on comparing the ground truth data to log data associated with the vehicle; and determining an error distribution based at least in part on the error; wherein the error model comprises the error distribution.
Q: The non-transitory computer-readable medium of paragraph M, wherein the parameterized scene is a first parameterized scene, the perturbed parameterized scene is a first perturbed parameterized scene, and the simulation data is first simulation data, the operations further comprising: determining, based on the first simulation data, a second parameterized scene that includes at least one of a first subset of the scene parameters or a second subset of the error model; perturbing the second parameterized scene; receiving second simulation data; and updating the safety metric based at least in part on the second simulation data.
R: the non-transitory computer-readable medium of paragraph Q, the operations further comprising: disabling at least a first portion of one of the scene parameters or the error model; and associating the second simulation data with at least a second portion of the one of the error model or the scene parameter that is not disabled.
S: The non-transitory computer-readable medium of paragraph M, wherein the safety metric indicates a probability of reaching or exceeding a cost threshold.
T: the non-transitory computer readable medium of paragraph M, wherein the error model is associated with one or more of a perception system of the vehicle, a prediction system of the vehicle, or a planning system of the vehicle.
U: a system, comprising: one or more processors; one or more computer-readable media storing computer-executable instructions that, when executed, cause a system to perform operations comprising: receiving vehicle data; inputting at least a first portion of the vehicle data into a subsystem of the autonomous vehicle, the subsystem associated with at least one of a perception system, a planning system, a tracking system, or a prediction system; determining an environmental parameter based at least in part on the second portion of the vehicle data; receiving an estimate from a subsystem; receiving ground truth data associated with a subsystem; determining a difference between the estimate and the ground truth data, the difference representing an error associated with the subsystem; and determining a statistical model associated with the subsystem indicative of a probability of error based at least in part on the difference, the probability associated with the environmental parameter.
V: the system of paragraph U, wherein the vehicle data comprises sensor data from sensors on the autonomous vehicle, wherein the environmental parameter comprises one or more of a speed or a weather condition of the autonomous vehicle, and wherein the subsystem is a perception subsystem, the estimate is one or more of an estimated position, an estimated orientation, or an estimated range of the object represented in the vehicle data, and the ground truth data represents an actual position, an actual orientation, or an actual range of the object.
W: the system of paragraph U, wherein determining the statistical model comprises: determining a first frequency associated with the environmental parameter and a second frequency associated with the difference based at least in part on the vehicle data; and determining a probability based at least in part on the first frequency and the second frequency.
X: the system of paragraph U, the operations further comprising: determining a simulated environmental parameter based at least in part on the simulated vehicle data; determining that the simulated environmental parameters correspond to the environmental parameters; determining a simulated estimate based at least in part on the simulated vehicle data and the subsystem; and perturbing the simulated estimate based at least in part on the probability by altering a portion of the corresponding simulated scene based at least in part on the error.
Y: a method, comprising: receiving data relating to a vehicle; determining an environmental parameter based at least in part on the first portion of data; determining output data associated with the vehicle system based at least in part on the second portion of data; receiving ground truth data associated with the system and the data; determining a difference between the output data and the ground truth data, the difference representing an error associated with the system; and determining a statistical model associated with the system indicative of a probability of error based at least in part on the difference, the probability associated with the environmental parameter.
Z: the method of paragraph Y, wherein determining the statistical model comprises: a frequency associated with the error is determined based at least in part on the data.
AA: the method of paragraph Y, wherein the environmental parameter comprises one or more of a speed of the vehicle, a weather condition, a geographic location of the vehicle, or a time of day.
AB: The method of paragraph Y, further comprising: generating a simulation; determining that a simulated environmental parameter of the simulation corresponds to the environmental parameter; inputting simulation data into the system; receiving a simulated output from the system; and perturbing the simulation based at least in part on the probability and the error.
AC: The method of paragraph Y, wherein the system is a perception system, the output data includes a first bounding box associated with the object, the ground truth data includes a second bounding box associated with the object, and wherein determining the difference includes determining a difference between at least one of: a first extent of the first bounding box and a second extent of the second bounding box; or a first pose of the first bounding box and a second pose of the second bounding box.
AD: the method of paragraph Y, wherein the system is a tracker system, the output data includes planned trajectory data for the vehicle, the ground truth data includes measured trajectories for the vehicle, and wherein determining the difference includes determining a difference between the planned trajectory data and the measured trajectories.
AE: the method of paragraph Y, wherein the system is associated with a prediction system, the data comprises a predicted trajectory of an object in the environment, the ground truth data comprises an observed trajectory of the object, and wherein determining the difference comprises determining a difference between the predicted trajectory and the observed trajectory.
AF: the method of paragraph Y, wherein the data is first data, the environmental parameter is a first environmental parameter, the difference is a first difference, the error is a first error, and the probability is a first probability, the method further comprising: receiving second data associated with a vehicle system; determining a second environmental parameter based at least in part on the second data; determining a second difference between the third portion of the output data and the fourth portion of the ground truth data, the second difference representing a second error associated with the system; and updating a statistical model associated with the system, the statistical model indicating a second probability of the second error, the second probability associated with the second environmental parameter.
AG: a non-transitory computer-readable medium storing instructions executable by a processor, wherein the instructions, when executed, cause the processor to perform operations comprising: receiving data; determining an environmental parameter based at least in part on the data; determining output data based at least in part on the data and the vehicle system; receiving ground truth data associated with the system and the data; determining a difference between a first portion of the output data and a second portion of the ground truth data, the difference representing an error associated with the system; determining a statistical model associated with the system indicative of the probability of error based at least in part on the difference; and associating the probability with the environmental parameter.
AH: the non-transitory computer readable medium of paragraph AG, wherein determining the statistical model comprises: a frequency associated with the difference is determined based at least in part on the data.
AI: the non-transitory computer readable medium of paragraph AG, wherein the environmental parameter includes one or more of vehicle speed, weather conditions, or time of day.
AJ: The non-transitory computer-readable medium of paragraph AG, the operations further comprising: generating a simulation comprising a simulated vehicle; receiving simulation data; determining that a simulated environmental parameter corresponds to the environmental parameter; inputting at least a portion of the simulation data into the system; receiving simulated output data from the system; and altering the simulated output data based at least in part on the probability and the error.
AK: The non-transitory computer-readable medium of paragraph AG, wherein the system is a perception system, the data includes a first bounding box associated with the object, the ground truth data includes a second bounding box associated with the object, and wherein determining the difference includes determining a difference between at least one of: a first extent of the first bounding box and a second extent of the second bounding box; or a first pose of the first bounding box and a second pose of the second bounding box.
AL: the non-transitory computer readable medium of paragraph AG, wherein the system is a tracker system, the data includes planned trajectory data of the vehicle, the ground truth data includes measured trajectories of the vehicle, and wherein determining the difference includes determining a difference between the planned trajectory data and the measured trajectories.
AM: the non-transitory computer readable medium of paragraph AG, wherein the system is associated with a prediction system, the data comprises a predicted trajectory of an object in the environment, the ground truth data comprises an observed trajectory of the object, and wherein determining the difference comprises determining a difference between the predicted trajectory and the observed trajectory.
AN: the non-transitory computer-readable medium of paragraph AG, wherein the data is first data, the environmental parameter is a first environmental parameter, the discrepancy is a first discrepancy, the error is a first error, and the probability is a first probability, the operations further comprising: receiving second data associated with a system of a vehicle; determining a second environmental parameter based at least in part on the second data; determining a second difference between the third portion of the output data and the fourth portion of the ground truth data, the second difference representing a second error associated with the system; and updating a statistical model associated with the system, the statistical model indicating a second probability of the second error, the second probability associated with the second environmental parameter.
AO: a system, comprising: one or more processors; one or more non-transitory computer-readable media storing computer-executable instructions that, when executed, cause a system to perform operations comprising: receiving vehicle data from a vehicle, the vehicle data being associated with a state of an object; receiving ground truth data associated with an object; determining an error based at least in part on the vehicle data and the ground truth data; determining a plurality of parameters based at least in part on the vehicle data; clustering at least a portion of the vehicle data into a plurality of clusters based at least in part on the plurality of parameters and the error; and determining an error model based at least in part on a portion of the vehicle data associated with a cluster of the plurality of clusters.
AP: the system of paragraph AO wherein the vehicle data is based at least in part on sensor data from sensors associated with the vehicle.
AQ: the system of paragraph AO, wherein the plurality of parameters are associated with at least two or more of weather conditions, a first time of day, a second time of year, a distance to the object, a classification of the object, a size of the object, a speed of the object, a location of the object, or an orientation of the object.
AR: the system of paragraph AO wherein the error is a first error, the operations further comprising: receiving perception data; determining a second error associated with the perception data based at least in part on the perception data and the error model; and controlling the vehicle based at least in part on the perception data and the second error.
AS: a method, comprising: receiving vehicle data from a vehicle, the vehicle data associated with a state of an object; receiving ground truth data associated with the object based at least in part on the vehicle data; determining an error based at least in part on the vehicle data and the ground truth data; determining a parameter based at least in part on the vehicle data; clustering a portion of the vehicle data into a plurality of clusters based at least in part on the parameters and the errors; and determining an error model based at least in part on a portion of the vehicle data associated with a cluster of the plurality of clusters.
AT: the method of paragraph AS, wherein the state of the object comprises at least one of a size of the object, a location of the object, an orientation of the object, a speed of the object, or a position of the object.
AU: The method of paragraph AS, wherein the error model includes an error distribution, the method further comprising: determining a frequency of occurrence associated with the error based at least in part on the vehicle data and the ground truth data; and determining the error distribution based at least in part on the frequency of occurrence.
AV: the method of paragraph AS, further comprising: determining classification data identifying a classification of the object; determining object data identifying object parameters of the object; and determining an error distribution associated with at least one of a first cluster of the plurality of clusters or a second cluster of the plurality of clusters, the first cluster associated with a classification of the object, the second cluster associated with an object parameter of the object; wherein the error model includes an error distribution associated with at least one of a true positive error or a false positive error.
AW: The method of paragraph AS, further comprising: receiving simulation data associated with a simulated vehicle controller in a simulated environment; determining simulated data of the disturbance based at least in part on the error model and the simulation data; sending the simulated data of the disturbance to the simulated vehicle controller in the simulated environment; and determining, based at least in part on the simulated data of the disturbance, a response indicating how the simulated vehicle controller responds to the simulated data of the disturbance.
AX: the method of paragraph AW, wherein: the error is a first error; the simulation data includes a classification associated with a simulated object represented in the simulation environment; and the simulated data of the disturbance indicates a second error associated with at least one of a position of the object, an orientation of the object, a range of the object, or a velocity of the object.
AY: The method of paragraph AS, further comprising: determining a first error distribution associated with a false negative error at a first time; determining a second error distribution associated with a true positive error at a second time after the first time; and determining a third error distribution associated with a false positive error at a third time after the second time; wherein the error model includes the first error distribution, the second error distribution, and the third error distribution.
AZ: the method of paragraph AS, further comprising: determining a cost associated with fitting the vehicle data to the error model; and determining an error model based at least in part on the cost.
BA: one or more non-transitory computer-readable media storing instructions executable by one or more processors, wherein the instructions, when executed, cause the one or more processors to perform operations comprising: receiving vehicle data from a vehicle, the vehicle data being associated with a state of an object; receiving ground truth data associated with an object; determining an error based at least in part on the vehicle data and the ground truth data; determining a parameter based at least in part on the vehicle data; clustering a portion of the vehicle data into a plurality of clusters based at least in part on the parameter; and determining an error model based at least in part on a portion of the vehicle data associated with a cluster of the plurality of clusters.
BB: the one or more non-transitory computer-readable media of paragraph BA, wherein the state of the object comprises at least one of a size of the object, a location of the object, an orientation of the object, a speed of the object, or a position of the object.
BC: The one or more non-transitory computer-readable media of paragraph BA, wherein the error model comprises an error distribution, the operations further comprising: determining a frequency of occurrence associated with the error based at least in part on the vehicle data and the ground truth data; and determining the error distribution based at least in part on the frequency of occurrence.
BD: the one or more non-transitory computer-readable media of paragraph BA, wherein the error model comprises an error distribution, the operations further comprising: determining classification data identifying a classification of the object; determining object data identifying object parameters of the object; and determining an error distribution associated with at least one of a first cluster of the plurality of clusters or a second cluster of the plurality of clusters, the first cluster associated with a classification of the object, the second cluster associated with an object parameter of the object; wherein the error model includes an error distribution associated with at least one of a true positive error or a false positive error.
BE: The one or more non-transitory computer-readable media of paragraph BA, wherein the error model is one of a plurality of error models, the operations further comprising: receiving simulation data associated with a simulated vehicle controller in a simulated environment; determining simulated data of the disturbance based at least in part on the error model and the simulation data; sending the simulated data of the disturbance to the simulated vehicle controller in the simulated environment; and determining, based at least in part on the simulated data of the disturbance, a response indicating how the simulated vehicle controller responds to the simulated data of the disturbance.
BF: one or more non-transitory computer-readable media of paragraph BE, wherein: the error is a first error; the simulation data includes a classification associated with a simulated object represented in the simulation environment; the simulated data of the disturbance indicates a second error associated with at least one of a position of the object, an orientation of the object, a range of the object, or a velocity of the object.
BG: The one or more non-transitory computer-readable media of paragraph BA, the operations further comprising: determining a first error distribution associated with a false negative error at a first time; determining a second error distribution associated with a true positive error at a second time after the first time; and determining a third error distribution associated with a false positive error at a third time after the second time; wherein the error model includes the first error distribution, the second error distribution, and the third error distribution.
BH: the one or more non-transitory computer-readable media of paragraph BA, the operations further comprising: determining a cost associated with fitting the vehicle data to the error model; and determining an error model based at least in part on the cost.
While the above example clauses are described with respect to one particular implementation, it should be understood that, in the context of this document, the contents of the example clauses may also be implemented via a method, a device, a system, a computer-readable medium, and/or another implementation. Further, any of examples A-BH may be implemented alone or in combination with any other one or more of examples A-BH.
Conclusion
While one or more examples of the technology described herein have been described, various modifications, additions, permutations and equivalents thereof are included within the scope of the technology described herein.
In the description of the examples, reference is made to the accompanying drawings, which form a part hereof and which show, by way of illustration, specific examples of the claimed subject matter. It is to be understood that other examples may be used and that changes or variations, such as structural changes, may be made. Such examples, changes, or variations do not necessarily depart from the scope of the intended claimed subject matter. Although the steps herein may be presented in a particular order, in some cases the order may be changed so that certain inputs are provided at different times or in a different order without changing the functionality of the systems and methods described. The disclosed procedures may also be performed in a different order. In addition, the various computations herein need not be performed in the order disclosed, and other examples using alternative orderings of the computations may be readily implemented. In addition to being reordered, a computation may also be decomposed into sub-computations with the same result.

Claims (15)

1. A method, comprising:
receiving vehicle data from a vehicle, the vehicle data associated with a state of an object;
receiving ground truth data associated with the object based at least in part on the vehicle data;
determining an error based at least in part on the vehicle data and the ground truth data;
determining a parameter based at least in part on the vehicle data;
clustering a portion of the vehicle data into a plurality of clusters based at least in part on the parameter and the error; and
determining an error model based at least in part on a portion of the vehicle data associated with one of the plurality of clusters.
2. The method of claim 1, wherein the vehicle data is based at least in part on sensor data from a sensor associated with the vehicle.
3. The method of claim 1 or 2, wherein the parameter is associated with at least one of a weather condition, a first time of day, a second time of year, a distance to the object, a classification of the object, a size of the object, a speed of the object, a location of the object, or an orientation of the object.
4. The method of any of claims 1-3, wherein the error is a first error, the method further comprising:
receiving perception data;
determining a second error associated with the perception data based at least in part on the perception data and the error model; and
controlling the vehicle based at least in part on the perception data and the second error.
5. The method of any of claims 1-4, wherein the state of the object comprises at least one of a size of the object, a location of the object, an orientation of the object, a speed of the object, or a position of the object.
6. The method of any of claims 1 to 5, wherein the error model comprises an error distribution, the method further comprising:
determining a frequency of occurrence associated with the error based at least in part on the vehicle data and the ground truth data; and
determining the error distribution based at least in part on the frequency of occurrence.
7. The method of any of claims 1 to 6, further comprising:
determining classification data identifying a classification of the object;
determining object data identifying object parameters of the object; and
determining an error distribution associated with at least one of a first cluster of the plurality of clusters or a second cluster of the plurality of clusters, the first cluster associated with a classification of the object, the second cluster associated with an object parameter of the object;
wherein the error model includes an error distribution associated with at least one of a true positive error or a false positive error.
8. The method of any of claims 1 to 7, further comprising:
receiving simulation data associated with a simulated vehicle controller in a simulated environment;
determining simulated data of the disturbance based at least in part on the error model and the simulated data;
transmitting simulated data of the disturbance to a simulated vehicle controller in the simulated environment; and
determining a response based at least in part on the simulated data of the disturbance, the response indicating how the simulated vehicle controller responds to the simulated data of the disturbance.
9. The method of any one of claims 1 to 8, wherein:
the error is a first error;
the simulation data includes a classification associated with a simulated object represented in the simulation environment; and
the simulated data of the disturbance indicates a second error associated with at least one of a position of the object, an orientation of the object, a range of the object, or a velocity of the object.
10. The method of any of claims 1 to 9, further comprising:
determining a first error distribution associated with false negative errors at a first time;
determining a second error distribution associated with a true positive error at a second time after the first time; and
determining a third error distribution associated with false positive errors at a third time after the second time;
wherein the error model includes the first error distribution, the second error distribution, and the third error distribution.
11. The method of any of claims 1 to 10, further comprising:
determining a cost associated with fitting the vehicle data to the error model; and
determining the error model based at least in part on the cost.
12. A computer program product comprising coded instructions that, when run on a computer, carry out the method of any one of claims 1 to 11.
13. A system, comprising:
one or more processors; and
one or more non-transitory computer-readable media storing instructions executable by one or more processors, wherein the instructions, when executed, cause the system to perform operations comprising:
receiving vehicle data from a vehicle, the vehicle data associated with a state of an object;
receiving ground truth data associated with the object;
determining an error based at least in part on the vehicle data and the ground truth data;
determining a parameter based at least in part on the vehicle data;
clustering a portion of the vehicle data into a plurality of clusters based at least in part on the parameter; and
determining an error model based at least in part on a portion of the vehicle data associated with one of the plurality of clusters.
14. The system of claim 13, wherein the error model comprises an error distribution, the operations further comprising:
determining a frequency of occurrence associated with the error based at least in part on the vehicle data and the ground truth data; and
determining the error distribution based at least in part on the frequency of occurrence.
15. The system of claim 13 or 14, wherein the operations further comprise:
determining classification data identifying a classification of the object;
determining object data identifying object parameters of the object; and
determining an error distribution associated with at least one of a first cluster of the plurality of clusters or a second cluster of the plurality of clusters, the first cluster associated with a classification of the object, the second cluster associated with an object parameter of the object;
wherein the error model includes an error distribution associated with at least one of a true positive error or a false positive error.
CN202080084729.5A 2019-12-09 2020-11-30 Perceptual error model Pending CN114787894A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US16/708,019 US11734473B2 (en) 2019-09-27 2019-12-09 Perception error models
US16/708,019 2019-12-09
PCT/US2020/062602 WO2021118822A1 (en) 2019-12-09 2020-11-30 Perception error models

Publications (1)

Publication Number Publication Date
CN114787894A true CN114787894A (en) 2022-07-22

Family

ID=76330731

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080084729.5A Pending CN114787894A (en) 2019-12-09 2020-11-30 Perceptual error model

Country Status (4)

Country Link
EP (1) EP4073782A4 (en)
JP (1) JP2023504506A (en)
CN (1) CN114787894A (en)
WO (1) WO2021118822A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7160190B2 (en) * 2019-05-16 2022-10-25 日本電信電話株式会社 Abnormality detection device, method, system and program
DE102022120754A1 (en) * 2022-08-17 2024-02-22 Dspace Gmbh Method and system for running a virtual test

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10346564B2 (en) * 2016-03-30 2019-07-09 Toyota Jidosha Kabushiki Kaisha Dynamic virtual object generation for testing autonomous vehicles in simulated driving scenarios
US10481044B2 (en) * 2017-05-18 2019-11-19 TuSimple Perception simulation for improved autonomous vehicle control
US10877476B2 (en) * 2017-11-30 2020-12-29 Tusimple, Inc. Autonomous vehicle simulation system for analyzing motion planners
US20190179979A1 (en) * 2017-12-13 2019-06-13 Uber Technologies, Inc. Simulated Sensor Testing
US10169678B1 (en) * 2017-12-21 2019-01-01 Luminar Technologies, Inc. Object identification and labeling tool for training autonomous vehicle controllers

Also Published As

Publication number Publication date
JP2023504506A (en) 2023-02-03
WO2021118822A1 (en) 2021-06-17
EP4073782A4 (en) 2024-01-17
EP4073782A1 (en) 2022-10-19

Similar Documents

Publication Publication Date Title
US11734473B2 (en) Perception error models
US11625513B2 (en) Safety analysis framework
US11351995B2 (en) Error modeling framework
US11574089B2 (en) Synthetic scenario generator based on attributes
US11568100B2 (en) Synthetic scenario simulator based on events
US11150660B1 (en) Scenario editor and simulator
US20210339741A1 (en) Constraining vehicle operation based on uncertainty in perception and/or prediction
US11458991B2 (en) Systems and methods for optimizing trajectory planner based on human driving behaviors
US11648939B2 (en) Collision monitoring using system data
US11628850B2 (en) System for generating generalized simulation scenarios
US11697412B2 (en) Collision monitoring using statistic models
CN114430722A (en) Security analysis framework
US11526721B1 (en) Synthetic scenario generator using distance-biased confidences for sensor data
CN114077541A (en) Method and system for validating automatic control software for an autonomous vehicle
US11577741B1 (en) Systems and methods for testing collision avoidance systems
US11415997B1 (en) Autonomous driving simulations based on virtual simulation log data
WO2020264276A1 (en) Synthetic scenario generator based on attributes
US20230150549A1 (en) Hybrid log simulated driving
CN114787894A (en) Perceptual error model
CN116917827A (en) Proxy conversion in driving simulation
US20220266859A1 (en) Simulated agents based on driving log data
US11814070B1 (en) Simulated driving error models
US20240096232A1 (en) Safety framework with calibration error injection
US11952001B1 (en) Autonomous vehicle safety system validation
US11938966B1 (en) Vehicle perception system validation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination