CN114430722A - Safety analysis framework - Google Patents

Safety analysis framework

Info

Publication number
CN114430722A
Authority
CN
China
Prior art keywords
scene
data
vehicle
determining
error
Prior art date
Legal status
Pending
Application number
CN202080066048.6A
Other languages
Chinese (zh)
Inventor
G·巴格希克
A·S·克雷戈
A·G·德克斯
R·利亚索夫
J·W·V·菲尔宾
M·温默斯霍夫
A·C·雷什卡
A·G·赖格
S·A·莫达拉瓦拉萨
Current Assignee
Zoox Inc
Original Assignee
Zoox Inc
Priority date
Filing date
Publication date
Priority claimed from US 16/586,853 (external-priority patent US11351995B2)
Priority claimed from US 16/586,838 (external-priority patent US11625513B2)
Application filed by Zoox Inc filed Critical Zoox Inc
Publication of CN114430722A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3684Test management for test design, e.g. generating new test cases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3696Methods or tools to render software testable
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • G08G1/161Decentralised systems, e.g. inter-vehicle communication
    • G08G1/163Decentralised systems, e.g. inter-vehicle communication involving continuous checking
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • G08G1/165Anti-collision systems for passive traffic, e.g. including static obstacles, trees
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • G08G1/166Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • General Engineering & Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)

Abstract

Techniques for determining a safety metric associated with a vehicle controller are discussed herein. To determine whether a complex system (which may be otherwise unverifiable) is capable of operating safely, various operating states (scenarios) may be identified based on the operational data and associated with the scene parameters to be adjusted. To verify the safe operation of such systems, a scene may be identified for examination. Error metrics for subsystems of the system may be quantified. These error metrics, in addition to random errors of other systems/subsystems, may be introduced into the scene. Scene parameters may also be perturbed. Any of a large number of such perturbations may be instantiated in a simulation to test, for example, the vehicle controller. A safety metric associated with the vehicle controller, and the cause of any failures, may be determined based on the simulation.

Description

Safety analysis framework
RELATED APPLICATIONS
This patent application claims priority to U.S. utility patent application Serial No. 16/586,838, entitled "safety analysis framework" and filed September 27, 2019, and to U.S. utility patent application Serial No. 16/586,853, entitled "error modeling framework" and filed September 27, 2019. The entire disclosures of application Serial Nos. 16/586,838 and 16/586,853 are incorporated herein by reference.
Background
Autonomous vehicles may use autonomous vehicle controllers to guide the autonomous vehicle through the environment. For example, an autonomous vehicle controller may use planning methods, apparatuses, and systems to determine a drive path and guide the autonomous vehicle through an environment that contains dynamic objects (e.g., vehicles, pedestrians, animals, etc.) and static objects (e.g., buildings, signs, stopped vehicles, etc.). However, to ensure the safety of occupants, it is important to verify the safety of the controller.
Drawings
The detailed description is described with reference to the accompanying drawings. In the drawings, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference symbols in different drawings indicates similar or identical items or features.
FIG. 1 illustrates generating vehicle performance data associated with a vehicle controller based on a parameterized scenario.
FIG. 2 illustrates a computing device that generates scene data based at least in part on log data generated by a vehicle, wherein the scene data depicts one or more changes to a scene.
Fig. 3 illustrates generating error model data based at least in part on vehicle data and ground truth data.
FIG. 4 illustrates perturbing a simulation using error model data by providing at least one of an error or uncertainty associated with the simulation environment.
FIG. 5 illustrates a computing device that generates perceptual error model data based at least in part on log data generated by a vehicle and ground truth data.
Fig. 6 illustrates a computing device that generates simulation data based at least in part on parameterized scene data and generates security metric data based at least in part on the simulation data.
FIG. 7 depicts a block diagram of an example system for implementing techniques described herein.
Fig. 8 depicts a flowchart of an example process for determining a safety metric associated with a vehicle controller, according to an example of the present disclosure.
FIG. 9 depicts a flowchart of an example process for determining a statistical model associated with a subsystem of an autonomous vehicle.
Detailed Description
The technology described herein relates to determining various aspects of performance metrics of a system. In at least some examples described herein, such performance metrics may be determined using simulations in conjunction with other performance metric determinations, for example. Simulations may be used to verify software (e.g., vehicle controllers) executing on vehicles (e.g., autonomous vehicles), and collect safety metrics to ensure that the software can safely control these vehicles in various scenarios. In additional or alternative examples, simulations may be used to learn constraints of an autonomous vehicle using an autonomous controller. For example, the simulation may be used to understand the operating space of the autonomous vehicle (e.g., the envelope in which the autonomous controller effectively controls the autonomous vehicle) in view of surface conditions, environmental noise, faulty components, etc. The simulation may also be used to generate feedback to improve the operation and design of the autonomous vehicle. For example, in some examples, simulations may be used to determine the amount of redundancy needed in an autonomous driving controller, or how to modify the behavior of an autonomous driving controller based on what is learned through the simulation. Further, in additional or alternative examples, the simulation may be used to inform the hardware design of the autonomous vehicle, such as optimizing the placement of sensors on the autonomous vehicle.
When creating a simulation environment to perform testing and verification, it is possible to specifically enumerate the environment with various specific examples. Each instantiation of such an environment may be unique and defined. Manually enumerating all possible scenarios may require an excessive amount of time, and various scenarios may not be tested if not every possible scenario is constructed. Scene parameters may be used to parameterize properties and/or attributes of objects within a scene and provide changes to the scene.
For example, a vehicle or vehicles may traverse an environment and generate log data associated with the environment. The log data may include: sensor data captured by one or more sensors of the vehicle, sensory data indicative of an object identified by one or more systems on the vehicle (or generated during a post-processing stage), predictive data indicative of the intent of the object (whether generated during recording or subsequently), and/or status data indicative of diagnostic information, trajectory information, and other information generated by the vehicle. The vehicle may transmit the log data to a database storing the log data and/or a computing device analyzing the log data via a network.
The computing device may determine, based on the log data, various scenes, the frequencies of the various scenes, and regions in the environment associated with the various scenes. In some instances, the computing device may group similar scenes represented in the log data. For example, scenes may be grouped together using, for example, k-means clustering and/or by evaluating weighted distances (e.g., Euclidean) between environmental parameters (e.g., daytime, nighttime, precipitation, vehicle position/speed, object position/speed, road segment, etc.). Clustering similar scenes in this way may reduce the amount of computing resources required to simulate an autonomous vehicle controller in an environment, by simulating the controller in one unique scene per group rather than in many nearly identical scenes that would produce redundant simulation data/results. As can be appreciated, it may be desirable for an autonomous vehicle controller to perform similarly (and/or to have provable performance) in similar scenes.
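By way of illustration only (this sketch is not part of the patent disclosure), similar log-derived scenes could be grouped with k-means over weighted feature vectors, roughly as described above. The feature names, weights, and cluster count below are assumptions chosen for the example:

```python
# Minimal sketch: grouping similar log-derived scenes so that only one
# representative per cluster needs to be simulated. Feature names, weights,
# and k are illustrative, not taken from the patent.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row: [is_night, precipitation_mm, ego_speed_mps, object_speed_mps, object_distance_m]
scenes = np.array([
    [0, 0.0, 15.0, 1.4, 30.0],
    [0, 0.0, 14.5, 1.5, 32.0],
    [1, 2.5,  8.0, 1.2, 20.0],
    [1, 2.0,  8.5, 1.1, 22.0],
    [0, 0.0, 25.0, 0.0, 60.0],
])

# A weighted Euclidean distance is approximated by standardizing the features
# and multiplying by per-feature weights before clustering.
weights = np.array([2.0, 1.0, 1.0, 1.0, 0.5])
features = StandardScaler().fit_transform(scenes) * weights

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

# Pick one representative scene per cluster for simulation.
representatives = {int(label): int(np.where(kmeans.labels_ == label)[0][0])
                   for label in set(kmeans.labels_)}
print(representatives)
```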
For example, the computing device may determine the rate of pedestrian presence at a pedestrian crossing based on the number of pedestrians represented in the log data. In some instances, the computing device may determine a probability that a pedestrian is detected at the crosswalk based on the velocity and a period of time during which the autonomous vehicle is operated. Based on the log data, the computing device may determine a scenario and identify scenario parameters that may be used for simulation based on the scenario.
In some instances, the simulation may be used to test and verify the response of an autonomous vehicle controller to defective (and/or malfunctioning) sensors of the vehicle and/or defective (and/or malfunctioning) handling of sensor data. In such an example, the computing device may be configured to introduce inconsistencies in the scene parameters of the object. For example, the error model may indicate an error and/or a percentage of error associated with the scene parameter. The scenario may incorporate errors and/or error percentages into the simulated scenario and simulate the response of the autonomous vehicle controller. Such errors may be represented by, but are not limited to, a look-up table determined based on statistical aggregation using real data, a function (e.g., based on the error of an input parameter), or any other model that maps a parameter to a particular error. In at least some examples, such an error model may map a particular error with a probability/frequency of occurrence.
For example, but not limiting of, the error model may indicate that a scene parameter, such as a velocity associated with an object in the simulated environment, is associated with a percentage of error. For example, the object may be traveling at 10 meters per second in the simulated scene and the error percentage may be 20%, which results in a speed range between 8 and 12 meters per second. In some instances, a speed range may be associated with a probability distribution that indicates that some portions of the range have a higher probability of occurrence than other portions of the range (e.g., 8 and 12 meters per second are associated with a 15% probability, 9 and 11 meters per second are associated with a 30% probability, and 10 meters per second are associated with a 10% probability).
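A minimal sketch of how an error model expressed as a discrete probability distribution might be sampled to perturb the object speed in the example above; the probabilities mirror the text, and the NumPy sampling call is simply one convenient way to draw from such a distribution, not the patent's implementation:

```python
# Draw a perturbed object speed from the 8-12 m/s range described above.
import numpy as np

speeds = np.array([8.0, 9.0, 10.0, 11.0, 12.0])            # nominal 10 m/s, +/-20%
probabilities = np.array([0.15, 0.30, 0.10, 0.30, 0.15])   # sums to 1.0, per the text

rng = np.random.default_rng(seed=0)
perturbed_speed = rng.choice(speeds, p=probabilities)
print(f"simulated object speed: {perturbed_speed} m/s")
```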
Based on the error model and/or the scene, a parameterized scene may be generated. The parameterized scene may provide a set of changes to the scene. Thus, instantiating the autonomous vehicle controller in a parameterized scene and simulating the parameterized scene may effectively cover a wide range of changes in the scene without the need to manually enumerate the changes. Additionally, based at least in part on executing the parameterized scenario, the simulation data may indicate how the autonomous vehicle controller responds to (or will respond to) the parameterized scenario, and determine a successful result or an unsuccessful result based at least in part on the simulation data.
Aggregating simulation data related to the parameterized scene may provide a security metric associated with the parameterized scene. For example, the simulation data may indicate a success rate and/or a failure rate of the autonomous vehicle controller and the parameterized scenario. In some instances, meeting or exceeding the success rate may indicate successful verification of the autonomous vehicle controller, which may then be downloaded by (or otherwise communicated to) the vehicle for further vehicle control and operation.
For example, a parameterized scene may be associated with an expected result. The simulation data may indicate whether the autonomous vehicle controller responded consistently or inconsistently with that result. For example, but not limiting of, a parameterized scene may represent a simulated environment that includes a vehicle controlled by the autonomous vehicle controller that is traveling at a speed and performs a stopping action in front of an object ahead of the vehicle. The speed may be associated with a scene parameter indicating a speed range of the vehicle. The parameterized scene may be simulated based at least in part on the speed range, and simulation data indicating the distance between the vehicle and the object when the vehicle completes the stopping action may be generated. The parameterized scene may be associated with a result indicating that the distance between the vehicle and the object meets or exceeds a distance threshold. Based on the simulation data and the scene parameters, the success rate may indicate the number of times the distance between the vehicle and the object meets or exceeds the distance threshold when the vehicle completes the stopping action, as compared to the total number of times the vehicle completes the stopping action.
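As an illustrative sketch (not taken from the disclosure), the success rate for the stopping-action example could be aggregated as follows; the threshold and the sample distances are made up:

```python
# A variation passes if the remaining distance to the object, when the vehicle
# completes its stopping action, meets or exceeds the distance threshold.
DISTANCE_THRESHOLD_M = 5.0

stop_distances_m = [7.2, 5.1, 4.3, 9.8, 2.9, 6.0]   # one entry per executed variation

successes = sum(1 for d in stop_distances_m if d >= DISTANCE_THRESHOLD_M)
success_rate = successes / len(stop_distances_m)
print(f"success rate: {success_rate:.2f} ({successes}/{len(stop_distances_m)})")
```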
The techniques described herein provide various computational efficiencies. For example, using the techniques described herein, a computing device requires fewer computing resources and may generate multiple simulated scenes faster than is possible with conventional techniques. Conventional techniques are not scalable. For example, generating a unique set of simulated environments (the number required for training, testing, and/or verifying a system, e.g., one or more components of an AI stack, on an autonomous vehicle before deploying the autonomous vehicle in a corresponding new real environment) may take an inordinate amount of time, thereby limiting the ability to train, test, and/or verify such a system before entering a real scenario and/or environment. The techniques described herein are unconventional in that they utilize sensor data collected from an actual environment and supplement that data with additional data to generate a substantially accurate simulated environment (e.g., relative to the corresponding actual environment) more efficiently than conventional techniques. Moreover, the techniques described herein, e.g., varying aspects of a scene, can generate many scalable simulation scenarios in less time and with fewer computing resources than conventional techniques.
Further, the technology described herein relates to improvements in safety. That is, the simulated environments produced by the generation techniques described herein may be used to test, train, and validate systems on an autonomous vehicle to ensure that the autonomous vehicle operates safely when those systems are deployed in an actual environment. That is, the simulated environments produced by the generation techniques described herein may be used to test, train, and validate a planner system and/or a prediction system of an autonomous vehicle controller, which may be used by an autonomous vehicle to navigate along a trajectory in an actual environment. Thus, such training, testing, and validation enabled by the techniques described herein may provide an opportunity to ensure that the autonomous vehicle can operate safely in a real-world environment. Thus, the techniques described herein improve safety and impact navigation.
FIG. 1 shows an example 100 of generating vehicle performance data associated with a vehicle controller based on a parameterized scene. To generate a scene, input data 102 may be used. The input data 102 may include vehicle data 104 and/or additional contextual data 106. The vehicle data 104 may include log data captured by vehicles traveling through the environment. As described above, the log data may be used to identify scenes for simulating an autonomous vehicle controller. For purposes of illustration, the vehicle may be an autonomous vehicle configured to operate according to a Level 5 classification issued by the U.S. National Highway Traffic Safety Administration, which describes a vehicle capable of performing all safety-critical functions for the entire trip, with the driver (or occupant) not being expected to control the vehicle at any time. In such an example, the vehicle may be unoccupied, since it may be configured to control all functions from start to stop, including all parking functions. This is merely one example, and the systems and methods described herein may be incorporated into any ground-borne, airborne, or waterborne vehicle, including vehicles ranging from those that must be manually controlled by a driver at all times to those that are partially or fully autonomously controlled.
The vehicle may include a computing device that includes a perception engine and/or a planner and performs operations such as detecting, identifying, segmenting, classifying, and/or tracking objects based on sensor data collected from the environment. For example, objects such as pedestrians, bicycles/bicyclists, motorcycles/motorcyclists, buses, trams, trucks, animals, and/or the like may be present in the environment.
As the vehicle travels through the environment, the sensors may capture sensor data associated with the environment. For example, some sensor data may be associated with an object (e.g., a vehicle, a rider, and/or a pedestrian). In some instances, the sensor data may be associated with other objects including, but not limited to, buildings, road surfaces, signs, obstacles, and the like. Thus, in some instances, sensor data may be associated with dynamic objects and/or static objects. As described above, a dynamic object may be an object associated with motion (e.g., a vehicle, a motorcycle, a rider, a pedestrian, an animal, etc.) or an object capable of motion (e.g., a parked vehicle, a standing pedestrian, etc.) within an environment. As described above, a static object may be an object associated with an environment, such as a building/structure, a road surface, a road marking, a sign, an obstacle, a tree, a sidewalk, and the like. In some instances, the vehicle computing device may determine information about objects in the environment, such as bounding boxes, classifications, segmentation information, and so forth.
The vehicle computing device may use the sensor data to generate a trajectory for the vehicle. In some instances, the vehicle computing device may also determine pose data associated with the position of the vehicle. For example, the vehicle computing device may use the sensor data to determine position data, coordinate data, and/or orientation data of the vehicle in the environment. In some instances, the pose data may include x-y-z coordinates and/or may include pitch, roll, and yaw data associated with the vehicle.
The vehicle computing device may generate vehicle data 104 that may include the data described above. For example, the vehicle data 104 may include sensor data, sensory data, planning data, vehicle state data, speed data, intent data, and/or other data generated by a vehicle computing device. In some examples, the sensor data may include data captured by sensors, such as time-of-flight sensors, position sensors (e.g., GPS, compass, etc.), inertial sensors (e.g., Inertial Measurement Unit (IMU), accelerometer, magnetometer, gyroscope, etc.), lidar sensors, radar sensors, sonar sensors, infrared sensors, cameras (e.g., RGB, IR, intensity, depth, etc.), microphone sensors, environmental sensors (e.g., temperature sensors, humidity sensors, light sensors, pressure sensors, etc.), ultrasonic transducers, wheel encoders, and so forth. The sensor data may be data captured by such sensors, such as time-of-flight data, location data, lidar data, radar data, sonar data, image data, audio data, and so forth. Such log data may also include intermediate outputs of any one or more systems or subsystems of the vehicle, including, but not limited to, messages indicative of object detections, object traces, predictions of future object locations, multiple trajectories generated in response to such detections, control signals transmitted to one or more systems or subsystems for carrying out commands, and so forth. In some instances, the vehicle data 104 may include temporal data associated with other data generated by the vehicle computing device.
In some instances, the input data 102 may be used to generate a scene. The input data 102 may include vehicle data 104 and/or additional contextual data 106. For example, but not limiting of, the additional contextual data 106 may include data such as incident reports from third-party sources. The third-party sources may include law enforcement agencies, departments of motor vehicles, and/or safety administrations that can issue and/or store activity reports and/or incident reports. For example, a report may include a type of activity (e.g., a traffic hazard such as debris on a road, localized flooding, etc.), a location, and/or a description of the activity. For example, but not limiting of, a report may describe that a driver, while operating a vehicle at a speed of 15 meters per second, struck a tree branch that had fallen onto the road. The report may be used to generate a similar scene that may be used for simulation.
In some instances, the additional contextual data 106 may include captured sensor data (e.g., image data). For example, but not limiting of, a driver of a vehicle may use a camera to capture image data while the driver operates the vehicle. In some instances, the image data may capture an activity such as an accident. For example, and without limitation, a driver may use a dashboard camera (e.g., a camera mounted on the interior dashboard of the vehicle) to capture image data while the driver operates the vehicle. As the driver operates the vehicle, an animal may run across the road and the driver may immediately brake to slow the vehicle. The dashboard camera may capture image data of the animal running across the road and of the vehicle decelerating. The image data may be used to generate a scene in which an animal runs across the road. As described above, a probability associated with a scene may be determined, and a scene whose probability meets or exceeds a probability threshold may be identified for simulation. For example, but not limiting of, where the likelihood of encountering a scene is less than 0.001%, a probability threshold of 0.001% may be used, and scenes with higher probabilities may then be prioritized for simulation and for determining the safety metrics associated with those scenes.
The scene editor component 108 can use the input data 102, such as the vehicle data 104 and/or the additional contextual data 106, to generate an initial scene 110. For example, the input data 102 can be input into the scene editor component 108, and the scene editor component 108 can generate a synthetic environment that represents at least a portion of the input data 102 in the synthetic environment. Examples of generating scenes such as the initial scene 110, and of data generated by vehicles that may be included in the vehicle data 104, can be found, for example, in U.S. patent application No. 16/392,094, entitled "scene editor and simulator" and filed April 23, 2019, the entire contents of which are incorporated herein by reference.
The scene editor component 108 can be configured to scan the input data 102 to identify one or more scenes represented in the input data 102. For example, and without limitation, the scene editor component 108 can determine that a portion of the input data 102 represents a pedestrian crossing a street without the right of way (e.g., not at a crosswalk, at an intersection without a walk indication, etc.). The scene editor component 108 can identify this as a scene (e.g., a jaywalking parameter) and tag (and/or classify) the scene as, for example, a jaywalking scene. For example, the scene editor component 108 can generate the initial scene 110 using rules that define actions. For example, but not limiting of, a rule may define that a pedestrian crossing a road in an area not associated with a crosswalk is jaywalking. In some instances, the scene editor component 108 can receive annotation data from a user of the scene editor component 108 to associate portions of the input data 102 with annotations to generate the initial scene 110.
In some instances, the scene editor component 108 can scan other portions of the input data 102, identify similar scenes, and tag the similar scenes with the same jaywalking tag. In some instances, the scene editor component 108 can identify scenes that do not correspond to (or are excluded from) existing annotations and generate new annotations for those scenes. In some instances, the scene editor component 108 can generate a scene library and store the scene library in a database within the scene editor component 108. For example, but not limiting of, the scene library may include a crosswalk scene, a merge scene, a lane-change scene, and the like.
In at least some examples, such an initial scenario 110 can be specified manually. For example, one or more users may specify certain scenarios to be tested to ensure that the vehicle is able to operate safely while executing those scenarios, even though the scenarios were never (or rarely) encountered previously.
The parameter component 112 can determine scene parameters associated with the initial scene 110 identified by the scene editor component 108. For example, and without limitation, the parameter component 112 may analyze the jaywalking scene and determine scene parameters associated with the jaywalking scene, including a position of the pedestrian, a pose of the pedestrian, a size of the pedestrian, a speed of the pedestrian, a trajectory of the pedestrian, a distance between the vehicle and the pedestrian, a speed of the vehicle, a width of the road, and the like.
In some instances, parameter component 112 may determine a range or set of values associated with a scene parameter. For example, the parameter component 112 may determine a classification associated with an object (e.g., a pedestrian) represented in the initial scene 110 and determine other objects in the input data 102 having the same classification. The parameter component 112 may then determine a range of values associated with the scene parameters represented by the initial scene 110. For example, but not limiting of, the scene parameter may indicate that the pedestrian may have a speed in the range of 0.5-5 meters per second.
In some instances, the parameter component 112 may determine a probability associated with a scene parameter. For example, and without limitation, the parameter component 112 can associate a probability distribution, such as a Gaussian distribution (also referred to as a normal distribution), with a scene parameter. In some instances, parameter component 112 may determine probabilities associated with scene parameters based on input data 102. As described above, parameter component 112 can determine a classification associated with an object represented in input data 102 and determine other objects having the same classification in input data 102 and/or other log data. Parameter component 112 can then determine a probability distribution of scene parameters associated with the object represented by input data 102 and/or other log data.
For example, and without limitation, the parameter component 112 may determine that 30% of pedestrians walk at speeds below 0.3 meters per second, 30% walk at speeds above 1.2 meters per second, and 40% walk at speeds of 0.3 to 1.2 meters per second. The parameter component 112 can use this distribution as the probability that a pedestrian in a jaywalking scene will walk at a particular speed. As another example and without limitation, the parameter component 112 may determine a jaywalking scene probability of 1%, which may indicate that a vehicle traversing the environment will encounter a jaywalking pedestrian 1% of the time. In some instances, during simulation of an autonomous vehicle controller, the scene probability may be used to include the scene at a rate associated with that probability.
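For illustration, the bucketed speed distribution described above might be derived from logged pedestrian speeds as sketched below; the logged values are invented so that the bucket fractions match the 30%/40%/30% split in the text:

```python
# Derive an empirical pedestrian-speed distribution from log data.
import numpy as np

logged_pedestrian_speeds = np.array([0.2, 0.25, 1.4, 1.6, 0.5, 0.8, 1.0, 0.9, 1.3, 0.1])

buckets = {
    "below_0.3":  float(np.mean(logged_pedestrian_speeds < 0.3)),
    "0.3_to_1.2": float(np.mean((logged_pedestrian_speeds >= 0.3) & (logged_pedestrian_speeds <= 1.2))),
    "above_1.2":  float(np.mean(logged_pedestrian_speeds > 1.2)),
}
print(buckets)  # fractions of pedestrians observed in each speed bucket
```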
In some instances, the parameter component 112 may receive supplemental data 114 to incorporate into the distribution. For example, and without limitation, the parameter component 112 may determine a scene parameter indicating that a pedestrian may be at a distance in the range of 30-60 meters from the vehicle while the vehicle travels at a speed of 15 meters per second, or, expressed alternatively, a time-to-collision of 2-4 seconds. Supplemental data 114 (e.g., regulations or guidelines) may indicate that the vehicle must handle a scene with a 1.5-second time-to-collision, which may serve as a lower limit (also referred to as a parameter threshold). The parameter component 112 may incorporate the supplemental data 114 and determine the scene parameter as having a time-to-collision of 1.5-4 seconds (although any time range may be specified). In some instances, the parameter component 112 can use the probability distributions described above to determine (using interpolation and/or extrapolation techniques) a probability associated with the supplemental data 114.
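A small sketch of folding the regulatory lower bound into the observed range, using the time-to-collision numbers from the text (the variable names are illustrative):

```python
# Widen an observed parameter range with a regulatory lower bound.
observed_ttc_range_s = (2.0, 4.0)     # derived from log data (30-60 m at 15 m/s)
regulatory_ttc_floor_s = 1.5          # supplemental data: must handle a 1.5 s time-to-collision

parameter_range_s = (min(observed_ttc_range_s[0], regulatory_ttc_floor_s),
                     observed_ttc_range_s[1])
print(parameter_range_s)              # (1.5, 4.0)
```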
The error model component 116 may determine an error model that may indicate an error associated with the scene parameter. For example, the perceptual error model may produce perceptual errors associated with perceptual parameters of the simulated object, and the predictive error model may produce predictive errors associated with predictive parameters of the simulated object. In some instances, sensors may generate erroneous sensor data as the vehicle traverses the environment, and/or computing devices of the vehicle may incorrectly process the sensor data, which may result in perception errors. Testing and modeling the perception error may help indicate the operating margin of the vehicle as it relates to potential perception errors. For example, a scene parameter such as a perception parameter may indicate a size of an object or a range of positions of the object in the environment. The error model component 116 may use a perceptual error model to indicate potential errors associated with the size of the object, which may result in perceptual data of the vehicle indicating that the object is larger or smaller than the actual object in the environment.
The error model component 116 can determine, for example, a perceptual error model by comparing the input data 102 with ground truth data. In some instances, the ground truth data may be manually labeled and/or determined from other, validated machine learning components. For example, the input data 102 may include sensor data and/or perception data generated by the vehicle. The error model component 116 may compare the input data 102 with ground truth data, which may indicate the actual parameters of objects in the environment. By comparing the input data 102 with the ground truth data, the error model component 116 can determine the perception error. For example, but not limiting of, the input data 102 may represent a pedestrian height of 1.8 meters while the ground truth data indicates a pedestrian height of 1.75 meters; the perceptual error model may therefore indicate a perception error of approximately 3% (e.g., [(1.8 - 1.75)/1.8] × 100).
In some instances, the error model component 116 may determine a classification associated with an object represented in the input data 102 and determine other objects in the input data 102 that have the same classification. The error model component 116 can then determine a probability distribution (also referred to as an error distribution) associated with the error range of the object and associate the probability distribution with the object within the initial scene 110. For example, and without limitation, the error model component 116 may determine that objects with a pedestrian classification have a perception error of 4% -6% and objects with a vehicle classification have a perception error of 3% -5%. In some instances, the error model component 116 can determine a probability distribution indicating that objects, e.g., larger than a threshold size, are more likely or less likely to have errors (e.g., in classification).
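By way of illustration (not the patent's implementation), a per-classification perception-error distribution could be built by comparing logged perception output against ground truth, following the height example above; the sample records are invented:

```python
# Build a per-classification perception-error distribution.
from collections import defaultdict
import statistics

# (classification, perceived_extent_m, ground_truth_extent_m)
records = [
    ("pedestrian", 1.80, 1.75),
    ("pedestrian", 1.62, 1.70),
    ("vehicle",    4.60, 4.45),
    ("vehicle",    4.20, 4.35),
]

errors_by_class = defaultdict(list)
for cls, perceived, truth in records:
    # Percentage error relative to the perceived value, as in the text's example.
    errors_by_class[cls].append(abs(perceived - truth) / perceived * 100.0)

for cls, errors in errors_by_class.items():
    print(cls, f"mean error {statistics.mean(errors):.1f}%",
          f"stdev {statistics.stdev(errors):.1f}%")
```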
In some instances, the parameter component 112 may use the region data 118 to determine a set of regions in the environment that are compatible with the scene and scene parameters. For example, but not limiting of, the scene may indicate that the speed of the vehicle is 15 meters per second, the speed of the pedestrian is 1.5 meters per second, and the distance between the vehicle and the pedestrian is 30 meters. The parameter component 112 can determine the region based on region data 118 in the environment that conforms to the scene. For example, but not limiting of, parameter component 112 may exclude school zones because the speed of the vehicle may exceed the speed limit associated with a school zone in the scenario, and thus the scenario would be ineffective in a school zone. However, such vehicle speeds (e.g., 15m/s) may be reasonable on county highways adjacent to the farmland, and thus such areas may be considered to be within a group of areas.
In some examples, the parameter component 112 may store zone data 118 that includes drivable-area segments of the environment. Techniques for identifying drivable surface segments and similar segments, as well as segment classifications and/or stereotypes, can be found, for example, in U.S. patent application No. 16/370,696, entitled "extending autopilot functionality to new areas" and filed March 29, 2019, and U.S. patent application No. 16/376,842, entitled "simulating autopilot using map data and driving data" and filed April 5, 2019, which are incorporated herein by reference in their entirety.
For example, the region data 118 of the environment may be parsed into segments, and similar segments may be identified. In some examples, a segment may include: an intersection segment, such as an intersection, a junction, etc., or a connecting road segment, such as the length and/or width of a road between intersections. For example, but not limiting of, all two-lane road segments with speed limits within a 10 mph range may be associated with the same stereotype. In some instances, data may be associated with each of the individual segments. For example, an intersection segment may include: an intersection type, such as a junction, a "T" intersection, a roundabout, etc.; the number of roads that meet at the intersection; the relative positions of those roads, such as the angles between the roads that meet at the intersection; information related to traffic control signals at the intersection; and/or other features. The data associated with a connecting road segment may include: the number of lanes, the width of those lanes, the direction of travel in each lane, the identification of parking lanes, the speed limit on the road segment, and/or other features.
In some examples, drivable surface segments may be grouped according to segment classification or segment type. For example, and without limitation, some or all intersection sections that meet some metric or attribute range may be grouped together (e.g., using k-means, evaluating weighted distances (e.g., euclidean) between section parameters, or clustering the sections based on section parameters).
The functionality of the autonomous vehicle controller may be verified using scenes that include the same or similar stereotypes. For example, it may be desirable for an autonomous vehicle to perform the same (and/or to have provable performance) in the same or similar stereotypes. In some examples, using stereotypes may reduce the number of comparisons to be made. For example, by identifying similar regions, the number of simulated scenes that may provide useful information is reduced. The techniques described herein may reduce computational complexity, memory requirements, and processing time by optimizing for the particular scenes that provide useful information for verification and testing.
The parameterized scene component 120 may generate a parameterized scene 122 using data determined by the parameter component 112 (e.g., the initial scene 110, scene parameters, a set of region and/or error model data). For example, the initial scene 110 may indicate a scene such as a lane change scene, a right turn scene, a left turn scene, an emergency stop scene, and so forth. The scene parameters may indicate a speed associated with the vehicle controlled by the autonomous vehicle controller, a pose of the vehicle, a distance between the vehicle and the object, and the like. In some instances, the scene parameter may indicate an object, a location associated with the object, a velocity associated with the object, and/or the like. Further, an error model (e.g., a perceptual error model, a predictive error model, etc.) may indicate an error associated with the scene parameter and provide a range of values and/or probabilities associated with the scene parameter. For example, but not limiting of, a scene parameter such as vehicle speed may be associated with a speed range such as 8-12 meters per second. As described above, the speed range may be associated with a probability distribution that indicates the probability that the speed is within the occurring speed range.
The set of regions may indicate portions of the environment that may be used to place objects in the simulated environment. For example, the initial scene 110 may indicate a scene that includes a bi-directional, multi-lane driving surface associated with a speed limit of 35 miles per hour. Based on the initial scenario 110, the set of regions may exclude regions that do not include a bi-directional, multi-lane driving surface associated with a speed limit of 35 miles per hour, such as a parking lot. The parameterized scene 122 may be used to cover variations provided by scene parameters, error models, region data 118, and the like.
For example, but not limiting of, the scene parameters may include the vehicle moving through the environment at 10, 11, or 12 meters per second (or any speed) as it approaches an intersection. The set of regions may include an uncontrolled intersection, an intersection with a four-way stop, and an intersection with traffic lights. Additionally, the perceptual error model may indicate a perception error of 1.34%, which may be provided by a perception metric determined for the perception system being tested. Thus, the parameterized scene 122 may allow for a total of 9 different scenes by varying the scene parameters and regions (e.g., 3 speeds × 3 regions = 9 scenes/permutations). Additionally, when the simulation component 124 simulates the parameterized scene, the simulation component 124 can use the perceptual error model to introduce perception errors associated with the perception data determined by the vehicle. As can be appreciated, this is merely one example, and the parameterized scene 122 may include more or fewer permutations and different types of scene parameters, regions, and/or perception errors.
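A minimal sketch of expanding the parameterized scene into its nine permutations (3 speeds × 3 region stereotypes), with the perception-error figure carried alongside each permutation; the region labels are assumptions, and itertools.product is simply standard Python, not the patent's mechanism:

```python
# Expand a parameterized scene into concrete permutations.
import itertools

speeds_mps = [10.0, 11.0, 12.0]
regions = ["uncontrolled_intersection", "four_way_stop", "signalized_intersection"]
perception_error_pct = 1.34   # applied during simulation, does not multiply the count

permutations = [
    {"speed_mps": s, "region": r, "perception_error_pct": perception_error_pct}
    for s, r in itertools.product(speeds_mps, regions)
]
print(len(permutations))  # 9
```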
The simulation component 124 can execute the parameterized scene 122 as a set of simulation instructions and generate simulation data 126. For example, the simulation component 124 can instantiate a vehicle controller in a simulation scenario. In some instances, the simulation component 124 can execute multiple simulation scenarios simultaneously and/or in parallel. Additionally, the simulation component 124 can determine the results of the parameterized scene 122. For example, the simulation component 124 can execute changes of the parameterized scene 122 for simulation of testing and verification. The simulation component 124 can generate simulation data 126 indicating how the autonomous vehicle controller is performing (e.g., responding), and can compare the simulation data 126 to predetermined results and/or determine whether any predetermined rules/assertions are violated/triggered.
In some instances, the variations used for the simulation may be selected based on generalized intervals of the scene parameters. For example, but not limiting of, a scene parameter may be associated with a speed of the vehicle. Additionally, the scene parameter may be associated with a range of speed values for the vehicle. The variations used for the simulation may be selected based on generalized intervals to increase coverage of the value range (e.g., selecting the 25th percentile, the 50th percentile, the 75th percentile, etc.). In some instances, the variations may be selected randomly and/or selected randomly within a standard deviation of the value range.
In some instances, the predetermined rules/assertions may be based on parameterized scenarios 122 (e.g., traffic rules for crosswalks may be enabled based on crosswalk scenarios, or traffic rules for crossing lane markers may be disabled for stopped vehicle scenarios). In some instances, the simulation component 124 can dynamically enable and disable rules/assertions as the simulation progresses. For example, rules/assertions related to school zones may be enabled when a simulated object is proximate to a school zone, and may be disabled when the simulated object leaves the school zone. In some instances, the rules/assertions may include comfort metrics that relate to, for example, how quickly an object may accelerate given a simulated scenario. In at least some examples, the rules may include, for example, compliance with road rules, leaving a safe buffer between objects, and so forth.
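As an illustrative sketch of dynamically enabling and disabling an assertion (here, the school-zone example), with an assumed speed limit and a made-up tick structure:

```python
# Evaluate a rule only while the simulated vehicle is inside a school zone.
SCHOOL_ZONE_SPEED_LIMIT_MPS = 11.0   # roughly 25 mph, assumed for illustration

def school_zone_rule(tick):
    """Return True if the rule is satisfied (or disabled) for this simulation tick."""
    if not tick["in_school_zone"]:
        return True                   # rule disabled outside the zone
    return tick["ego_speed_mps"] <= SCHOOL_ZONE_SPEED_LIMIT_MPS

ticks = [
    {"in_school_zone": False, "ego_speed_mps": 15.0},
    {"in_school_zone": True,  "ego_speed_mps": 12.0},   # violation
    {"in_school_zone": True,  "ego_speed_mps": 10.0},
    {"in_school_zone": False, "ego_speed_mps": 14.0},
]

violations = [i for i, tick in enumerate(ticks) if not school_zone_rule(tick)]
print(violations)  # [1]
```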
The simulation component 124 can determine that the autonomous vehicle controller succeeded based at least in part on determining that the autonomous vehicle controller performed consistently with the predetermined result (i.e., the autonomous vehicle controller did everything it should have done) and/or determining that no rule was violated and no assertion was triggered. The simulation component 124 can determine that the autonomous vehicle controller failed based at least in part on determining that the autonomous vehicle controller performed inconsistently with the predetermined result (i.e., the autonomous vehicle controller did something it should not have done) and/or determining that a rule was violated or an assertion was triggered. Thus, based at least in part on executing the parameterized scene 122, the simulation data 126 may indicate how the autonomous vehicle controller responds to each variation of the parameterized scene 122, as described above, and a successful or unsuccessful result may be determined based at least in part on the simulation data 126.
The analysis component 128 can be configured to determine a degree of success or failure. For example, but not limiting of, a rule may indicate that a vehicle controlled by the autonomous vehicle controller must stop within a threshold distance of an object. The simulation data 126 may indicate that, in a first variation of the parameterized scene 122, the simulated vehicle stopped at a location that is more than 5 meters from the threshold distance. In a second variation of the parameterized scene 122, the simulation data 126 may indicate that the simulated vehicle stopped at a location that is more than 10 meters from the threshold distance. The analysis component 128 can indicate that the simulated vehicle performed more successfully in the second variation than in the first variation. For example, the analysis component 128 can determine an ordered list (e.g., ordered according to a relative success ratio) that includes the simulated vehicle and the associated variations of the parameterized scene 122. Such variations may also be used to determine the limits of various components of the system being simulated.
The analysis component 128 can determine additional variations of the parameterized scene 122 based on the simulation data 126. For example, the simulation data 126 output by the simulation component 124 may indicate which variations of the parameterized scene 122 are associated with success or failure (which may be represented as a continuous likelihood). The analysis component 128 can determine additional variations based on the variations associated with failure. For example, and without limitation, a variation of the parameterized scene 122 associated with failure may represent a vehicle traveling on the driving surface at a speed of 15 meters per second and an animal crossing the driving surface at a distance of 20 meters in front of the vehicle. The analysis component 128 can determine additional variations of the scene to determine additional simulation data 126 for analysis. For example, and without limitation, the analysis component 128 may determine additional variations that include the vehicle traveling at speeds of 10 meters per second, 12.5 meters per second, 17.5 meters per second, 20 meters per second, and so on. Additionally, the analysis component 128 can determine additional variations that include the animal crossing the driving surface at distances of 15 meters, 17.5 meters, 22.5 meters, and 25 meters. The additional variations may be input into the simulation component 124 to generate additional simulation data. These additional variations may be determined based on, for example, perturbations of the scene parameters of the scene being run in the simulation.
In some instances, the analysis component 128 can determine additional changes to the scene by disabling scene parameters. For example, the parametric scene may include a first scene parameter associated with a velocity of the object and a second scene parameter associated with a position of the object. The parameterized scene 122 may include a first range of values associated with velocity and a second range of values associated with position. In some instances, after simulating the parameterized scene 122, the simulation data 126 may indicate that some changes to the parameterized scene 122 produce successful results and some changes produce failed results. The analysis component 128 can then determine to disable the first scene parameter (e.g., set a fixed value associated with the first scene parameter) and change the parameterized scene 122 based on the second scene parameter. By disabling one of the scenario parameters, the analysis component 128 may determine whether the scenario parameter and/or a value of the scenario parameter is associated with a success result or a failure result. These parameters may be disabled randomly or otherwise based on the likelihood of failure. For example, but not limiting of, simulation data 126 may indicate a problem with the planning component of the autonomous vehicle as a result of disabling all of the scene parameters.
In some instances, the analysis component 128 can be used to perform sensitivity analysis. For example, the analysis component 128 can disable scene parameters and determine how disabling the scene parameters affects the simulation data 126 based on the simulation data 126 generated by the simulation component 124 (e.g., increasing success rate, decreasing success rate, with minimal impact on success rate). In some instances, the analysis component 128 may individually disable the scene parameters to determine how disabling each scene parameter affects the simulation data 126. The analysis component 128 can collect statistical data that indicates how individual scene parameters affect the simulation data 126 over many simulation processes. In some instances, the analysis component 128 may be configured to disable a set of scene parameters (e.g., disable a night environment parameter and disable a humid condition environment parameter). As described above, the analysis component may collect statistical data that indicates how sets of scene parameters affect the simulation data 126. The statistical data may be used to determine whether a scene parameter that increases or decreases the likelihood of a result is a successful simulation and may be used to identify a subsystem in the autonomous vehicle that is associated with the scene parameter when the success rate of the simulation data 126 is increased or decreased.
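A toy sketch of the sensitivity analysis described above: each scene parameter is disabled in turn (held at a fixed value) while the others still vary, and the change in success rate is recorded. The run_simulations function below is a stand-in with an invented failure condition, not the simulation component itself:

```python
import itertools

def run_simulations(parameter_ranges):
    """Toy stand-in for the simulation component: returns a success rate."""
    outcomes = []
    for values in itertools.product(*parameter_ranges.values()):
        scene = dict(zip(parameter_ranges.keys(), values))
        # Invented failure condition: the controller struggles at night on wet roads.
        outcomes.append(not (scene["is_night"] and scene["wet_road"]))
    return sum(outcomes) / len(outcomes)

def sensitivity_analysis(parameter_ranges, fixed_values):
    baseline = run_simulations(parameter_ranges)               # all parameters vary
    impact = {}
    for name in parameter_ranges:
        frozen = dict(parameter_ranges)
        frozen[name] = [fixed_values[name]]                    # disable this parameter
        impact[name] = run_simulations(frozen) - baseline      # change in success rate
    return impact

ranges = {"is_night": [0, 1], "wet_road": [0, 1], "ego_speed_mps": [10.0, 15.0]}
fixed = {"is_night": 0, "wet_road": 0, "ego_speed_mps": 10.0}
print(sensitivity_analysis(ranges, fixed))
```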
In some instances, the analysis component 128 can adjust the magnitude by which a scene parameter is varied. For example, but not limiting of, a scene parameter may indicate a humid environmental condition (e.g., a rain condition). The scene parameter may be adjusted over a range (e.g., one-quarter inch of rainfall, one inch of rainfall, etc.). The analysis component 128 may adjust the magnitude of the scene parameter and perform a sensitivity analysis based on that magnitude to determine a threshold associated with the scene parameter that may result in a successful or unsuccessful outcome of the simulation. In some examples, the threshold may be determined using a binary search algorithm, a particle filtering algorithm, and/or a Monte Carlo method, although other suitable algorithms are also contemplated.
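For illustration, a binary search for the rainfall magnitude at which simulations flip from success to failure might look like the sketch below; it assumes the pass/fail behavior is monotonic in the parameter, and the hidden threshold inside simulate_passes is invented:

```python
def simulate_passes(rainfall_inches):
    """Toy stand-in for the simulation component."""
    return rainfall_inches < 0.62      # hidden failure threshold (assumed)

def find_failure_threshold(low, high, tolerance=0.01):
    """Binary search for the largest magnitude that still passes."""
    while high - low > tolerance:
        mid = (low + high) / 2.0
        if simulate_passes(mid):
            low = mid                  # still succeeds; threshold is above mid
        else:
            high = mid                 # fails; threshold is below mid
    return low

print(f"approximate rainfall threshold: {find_failure_threshold(0.0, 2.0):.2f} in")
```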
The vehicle performance component 130 may determine the vehicle performance data 132 based on the simulation data 126 (and/or based on additional simulation data for the additional variations from the analysis component 128) and the failure type. In some examples, the vehicle performance data 132 may indicate how the vehicle performs in the environment. For example, but not limiting of, the vehicle performance data 132 may indicate that a vehicle traveling at a speed of 15 meters per second has a stopping distance of 15 meters. In some examples, the vehicle performance data may indicate a safety metric. For example, but not limiting of, the vehicle performance data 132 may indicate an event (e.g., a failure) and a cause of the event. In at least some examples, such indications may be binary (failed or not), coarse (failure levels, e.g., "severe," "not severe," and "pass"), or continuous (e.g., representing a failure probability), although any other indication is contemplated. For example, for event type 1 and cause type 1, data 134(1) may indicate a safety level, and similarly for data 134(2)-134(4). In some examples, cause type 1 and cause type 2 may indicate a fault, such as a fault of the vehicle or a fault of an object (e.g., a rider). The vehicle performance data 132 may then indicate a safety metric associated with the parameterized scene 122. In some examples, the vehicle performance component 130 may use a target metric and compare the vehicle performance data 132 to the target metric to determine whether the safety metric meets or exceeds the target metric. In some instances, the target metric may be based on standards and/or regulations associated with autonomous vehicles.
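A small sketch of tabulating safety metrics by event type and cause type (the data 134(1)-134(4) entries) and checking them against a target metric; the labels, levels, and target value are illustrative:

```python
# Tabulate safety metrics per (event type, cause type) and flag misses.
safety_metrics = {
    ("event_type_1", "cause_type_1"): {"failure_rate": 0.002, "level": "pass"},
    ("event_type_1", "cause_type_2"): {"failure_rate": 0.010, "level": "not severe"},
    ("event_type_2", "cause_type_1"): {"failure_rate": 0.000, "level": "pass"},
    ("event_type_2", "cause_type_2"): {"failure_rate": 0.030, "level": "severe"},
}

TARGET_FAILURE_RATE = 0.005   # assumed target metric (e.g., from a standard or regulation)

misses_target = {key: m for key, m in safety_metrics.items()
                 if m["failure_rate"] > TARGET_FAILURE_RATE}
print(misses_target)          # entries that do not meet the target metric
```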
In some examples, the vehicle performance data 132 may be input into the filter component 134 to determine filtered data 136 based on the vehicle performance data 132. For example, the filter component 134 can be used to determine filtered data 136 that identifies regions that do not meet a coverage threshold. For example, but not limiting of, the initial scene 110 may indicate a bi-directional, multi-lane driving surface associated with a speed limit of 35 miles per hour, along with the zone data 118. Based on the initial scene 110 and the region data 118, the parameter component 112 may identify five regions in the environment that may be used to simulate the initial scene 110. After the parameterized scene 122 is simulated, the vehicle performance data 132 may indicate that the simulation data 126 is associated with three of the five regions. For example, the simulation component 124 can simulate the scene and generate the simulation data 126 based on executing the scene in three of the five regions identified by the parameter component 112. The filter component 134 can determine, based on one or more filters, filtered data 136 indicating that the remaining two of the five regions do not satisfy a coverage threshold (e.g., a minimum number of simulations associated with a region).
In some examples, the filter component 134 may determine filtered data 136 indicative of the occurrence of an event based on the vehicle performance data 132. For example, but not limiting of, the simulation data 126 may include the occurrence of events such as emergency stops, tire leaks, animals crossing driving surfaces, and the like. The filter component 134 can determine filtered data 136 indicating the occurrence of an emergency stop based on one or more filters. In addition, the filtered data 136 may include portions of the simulation data 126 associated with the occurrence of an emergency stop, such as a stopping distance associated with the emergency stop.
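As an illustrative sketch of the filtering step, combining the coverage-threshold check and the event filter described above; field names and the threshold are assumptions:

```python
# Flag regions that fall short of a coverage threshold and pull out results
# associated with a particular event (here, emergency stops).
from collections import Counter

MIN_SIMULATIONS_PER_REGION = 1   # coverage threshold, assumed

results = [
    {"region": "region_1", "event": "emergency_stop", "stop_distance_m": 12.0},
    {"region": "region_2", "event": None,             "stop_distance_m": None},
    {"region": "region_3", "event": "emergency_stop", "stop_distance_m": 9.5},
]
identified_regions = {"region_1", "region_2", "region_3", "region_4", "region_5"}

counts = Counter(r["region"] for r in results)
uncovered = {region for region in identified_regions
             if counts[region] < MIN_SIMULATIONS_PER_REGION}
emergency_stop_results = [r for r in results if r["event"] == "emergency_stop"]

print(sorted(uncovered))          # ['region_4', 'region_5']
print(emergency_stop_results)     # includes the associated stopping distances
```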
FIG. 2 shows an example 200 of a vehicle 202, which may be similar to the vehicle described with reference to FIG. 1, generating the vehicle data 104 and transmitting the vehicle data 104 to a computing device 204. As described above, the scene editor component 108 can be configured to scan the input data 102 (e.g., the vehicle data 104 and/or the additional contextual data 106) and identify one or more scenes represented in the input data 102. As a non-limiting example, such scenes may be determined based on, for example, clustering parameterizations of the log data (e.g., using k-means, etc.). In some instances, the scene editor component 108 can use scene definition data 206 to identify one or more scenes represented in the input data 102. For example, the scene definition data 206 may identify features associated with a scene type. For example, but not limiting of, the scene definition data 206 may identify a jaywalking scene as including features such as a pedestrian traversing a portion of a driving surface that is not associated with a crosswalk. The scene editor component 108 can scan the input data 102 to identify portions of the input data that include the feature of a pedestrian traversing a portion of a driving surface not associated with a crosswalk, to determine a jaywalking scene. In at least some examples, such scenes may also be manually entered and/or derived from third-party data (e.g., police reports, publicly available video clips, etc.).
Additionally, as described above, the parameter component 112 may determine a scene parameter, which may indicate a value or range of values associated with a parameter of an object in a scene. As shown in FIG. 2, parameter component 112 may generate scene data 208.
The scene data 208 may indicate a basic scene that includes a vehicle 210 traveling along a driving surface and an object 212 (which may be a different vehicle) traveling in the same direction as the vehicle 210 in the vicinity of the vehicle 210. The vehicle 210 and the object 212 may approach an intersection with a crosswalk. The parameter component 112 may determine a scene parameter indicative of a distance between the vehicle 210 and the object 212 as having a range of distances. Thus, scene S1 may represent a first scene of a set of scenes having a first distance between the vehicle 210 and the object 212, scene S2 may represent a second scene of the set of scenes having a second distance between the vehicle 210 and the object 212, and scene SN may represent an Nth scene of the set of scenes having an Nth distance between the vehicle 210 and the object 212. Examples of additional types of parameters (also referred to as attributes) may be found in U.S. patent application No. 16/363,541, filed March 25, 2019, entitled "Attribute-Based Pedestrian Prediction," which is incorporated by reference herein in its entirety.
For example, the scene parameters may include, but are not limited to: a speed of the object 212, an acceleration of the object 212, an x-position of the object 212 (e.g., a global position, a local position, and/or a position relative to any other reference frame), a y-position of the object 212 (e.g., a global position, a local position, and/or a position relative to any other reference frame), a bounding box associated with the object 212 (e.g., extent (length, width, and/or height), yaw, pitch, roll, etc.), lighting states (e.g., brake light, blinker, hazard light, headlamp, backup light, etc.), a wheel orientation of the object 212, map elements (e.g., a distance between the object 212 and a stop light, stop sign, speed bump, intersection, yield sign, etc.), a classification of the object 212 (e.g., vehicle, car, truck, bicycle, motorcycle, pedestrian, animal, etc.), object characteristics (e.g., whether the object 212 is changing lanes, whether the object 212 is a double-parked vehicle, etc.), proximity to one or more objects (in any coordinate system), a lane type (e.g., direction of a lane, parking lane), road markings (e.g., indicating whether passing or lane changes are permitted, etc.), an object density, and the like.
As described above, parameter component 112 may determine a range of values associated with a scene parameter represented by vehicle data 104 and/or other input data. Thus, each of the above-identified example scene parameters, as well as other scene parameters, may be associated with a set of values or range of values that may be used to generate a set of scenes, where the scenes in the set of scenes differ by one or more values of the scene parameter.
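As an illustration only, the following Python sketch shows how a set of scenes might be enumerated by sweeping assumed parameter ranges so that the scenes differ by one or more parameter values; the dictionary keys and value ranges are hypothetical and are not the actual interface of the parameter component 112.

```python
import itertools
from typing import Dict, List

# Illustrative ranges for two scene parameters: the distance between the
# vehicle and the object (meters) and the object's speed (meters per second).
parameter_ranges: Dict[str, List[float]] = {
    "object_distance_m": [5.0, 10.0, 15.0, 20.0],
    "object_speed_mps": [1.0, 2.0, 3.0],
}

# The Cartesian product of the value sets yields scenes S1..SN, each differing
# from the others by one or more scene-parameter values.
scene_set = [
    dict(zip(parameter_ranges.keys(), values))
    for values in itertools.product(*parameter_ranges.values())
]

print(len(scene_set))   # 12 scenes
print(scene_set[0])     # {'object_distance_m': 5.0, 'object_speed_mps': 1.0}
```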
FIG. 3 shows an example 300 of generating error model data based at least in part on vehicle data and real data. As shown in FIG. 3, the vehicle 202 may generate the vehicle data 104 and send the vehicle data 104 to the error model component 116. As described above, the error model component 116 can determine an error model that can indicate an error associated with a scene parameter. For example, the vehicle data 104 may be data associated with subsystems of the vehicle 202, such as perception systems, planning systems, tracking systems (also referred to as tracker systems), prediction systems, and the like. For example, but not limiting of, the vehicle data 104 may be associated with a perception system, and the vehicle data 104 may include a bounding box associated with an object in the environment detected by the vehicle 202.
The error model component 116 can receive the real data 302, which real data 302 can be manually labeled and/or determined from other validated machine learning components. For example, but not limiting of, the real data 302 may include verified bounding boxes associated with objects in the environment. By comparing the bounding box of the vehicle data 104 to the bounding box of the real data 302, the error model component 116 can determine errors associated with subsystems of the vehicle 202. In some instances, the vehicle data 104 may include one or more characteristics (also referred to as parameters) associated with the detected entity and/or the environment in which the entity is located. In some examples, characteristics associated with an entity may include, but are not limited to: x-position (global position), y-position (global position), z-position (global position), direction, entity type (e.g., classification), speed of the entity, scope (size) of the entity, etc. Characteristics associated with an environment may include, but are not limited to: presence of another entity in the environment, status of another entity in the environment, time of day, day of week, season, weather conditions, dark/light indications, etc. Thus, the error may be associated with other characteristics (e.g., environmental parameters).
The error model component 116 may process the plurality of vehicle data 104 and the plurality of real data 302 to determine error model data 304. The error model data 304 may include errors calculated by the error model component 116, which may be represented as errors 306(1)-(3). Additionally, the error model component 116 can determine probabilities associated with the errors 306(1)-(3), represented as probabilities 308(1)-(3), which probabilities 308(1)-(3) can be associated with environmental parameters to yield the error models 310(1)-(3). For example, but not limiting of, the vehicle data 104 may include a bounding box associated with an object at a distance of 50 meters from the vehicle 202 in an environment including rainfall. The real data 302 may provide a verified bounding box associated with the object. The error model component 116 may determine error model data 304 that captures the errors associated with the perception system of the vehicle 202. The distance of 50 meters and the rainfall may be used as environmental parameters to determine which of the error models 310(1)-(3) to use. Once an error model is identified, the error models 310(1)-(3) may provide the errors 306(1)-(3) based on the probabilities 308(1)-(3), wherein the errors 306(1)-(3) associated with higher probabilities 308(1)-(3) are more likely to be selected than the errors 306(1)-(3) associated with lower probabilities 308(1)-(3).
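The probability-weighted selection of an error, conditioned on environmental parameters such as object distance and weather, can be sketched as follows. The `ErrorModel` class and the bucketed environment keys are assumptions made for illustration and do not reflect the actual structure of the error model data 304.

```python
import random
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class ErrorModel:
    """Errors and their probabilities for one set of environmental parameters."""
    errors: List[float]          # e.g., bounding-box size error in meters
    probabilities: List[float]   # should sum to 1.0

    def sample(self) -> float:
        # Errors with higher probability are more likely to be drawn.
        return random.choices(self.errors, weights=self.probabilities, k=1)[0]


# Error models keyed by (distance bucket, weather) environmental parameters.
error_models: Dict[Tuple[str, str], ErrorModel] = {
    ("40-60m", "rain"): ErrorModel(errors=[0.1, 0.3, 0.6], probabilities=[0.6, 0.3, 0.1]),
    ("40-60m", "clear"): ErrorModel(errors=[0.05, 0.1, 0.2], probabilities=[0.7, 0.2, 0.1]),
}

# An object detected at 50 meters in rainfall selects the matching error model.
model = error_models[("40-60m", "rain")]
print(model.sample())
```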
FIG. 4 illustrates an example 400 of perturbing a simulation using error model data by providing at least one of an error or uncertainty associated with a simulation environment. As described above, the error model data may include an error model that associates the error 306 with the probability 308, and the error model may be associated with an environmental parameter. The simulation component 124 can use the error model data 304 to inject errors that can produce a perturbed parametric scene for perturbation simulation. Based on the injected error, the simulation data may indicate how the autonomous driving controller responds to the injected error.
For example, the simulation component 124 can perturb the simulation by continuously injecting errors into the simulation. For example, but not limiting of, example 402 depicts, at time t0, a bounding box 404 associated with an object. The bounding box 404 may represent the detection of the object by a vehicle that includes an error, such as an error in the size of the bounding box and/or an error in the position of the bounding box. At time t1, the simulation component 124 can employ a bounding box 406 that represents the object and includes a different error. For example, at each simulation time (e.g., t0, t1, or t2), the simulation component 124 can use a different error 306 based on the probabilities 308 associated with the errors 306. At time t2, the simulation component 124 can employ a bounding box 408 that represents the object and includes yet another error.
In some instances, the simulation component 124 may perturb the simulation by injecting an uncertainty associated with the bounding box representing the object in the environment. For example, but not limiting of, example 410 depicts, at time t0, a bounding box 412 associated with the object. The bounding box may include an uncertainty of 5%, which may indicate that the size and/or position of the object is uncertain by an amount of 5%. In addition, the uncertainty may persist with the object over times t1 and t2, rather than injecting different errors at different simulation times as shown in example 402.
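The two perturbation strategies described above (drawing a new error at each simulation time versus a single uncertainty that persists with the object) might be sketched as follows; the function names and the bounding-box extent values are illustrative assumptions.

```python
import random
from typing import List


def perturb_per_timestep(true_extent: float, model_errors: List[float],
                         model_probs: List[float], num_steps: int) -> List[float]:
    """Draw a new error at every simulation time (t0, t1, t2, ...)."""
    return [
        true_extent + random.choices(model_errors, weights=model_probs, k=1)[0]
        for _ in range(num_steps)
    ]


def perturb_with_persistent_uncertainty(true_extent: float, uncertainty: float,
                                        num_steps: int) -> List[float]:
    """Apply a single uncertainty (e.g., 5%) that persists with the object."""
    scale = 1.0 + random.uniform(-uncertainty, uncertainty)
    return [true_extent * scale for _ in range(num_steps)]


# A bounding-box extent of 4.5 m perturbed over three simulation times.
print(perturb_per_timestep(4.5, [0.1, 0.3], [0.8, 0.2], num_steps=3))
print(perturb_with_persistent_uncertainty(4.5, uncertainty=0.05, num_steps=3))
```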
Fig. 5 shows an example 500 of a vehicle 202 generating vehicle data 104 and transmitting the vehicle data 104 to a computing device 204. As described above, the error model component 116 may determine a perceptual error model, which may be indicative of an error associated with the scene parameter. As described above, the vehicle data 104 may include sensor data generated by sensors of the vehicle 202 and/or perception data generated by perception systems of the vehicle 202. The perception error model may be determined by comparing the vehicle data 104 with the real data 302. The real data 302 may be manually labeled and may be associated with an environment and may represent known results. Accordingly, deviations in the vehicle data 104 from the real data 302 may be identified as errors in the sensor system and/or perception system of the vehicle 202. For example, but not limiting of, the perception system may identify the object as a rider, where the real data 302 indicates that the object is a pedestrian. As another example and without limitation, the sensor system may generate sensor data representing an object having a width of 2 meters, where the real data 302 indicates that the object has a width of 1.75 meters.
As described above, the error model component 116 can determine a classification associated with an object represented in the vehicle data 104 and determine other objects in the vehicle data 104 and/or other log data having the same classification. The error model component 116 can then determine a probability distribution associated with an error range associated with the object. Based on the comparison and the error range, the error model component 116 can determine the perceptual error model data 502.
As shown in FIG. 5, environment 504 may include objects 506(1)-(3), which objects 506(1)-(3) are represented as bounding boxes generated by the perception system. The perceptual error model data 502 may indicate scene parameters 508(1)-(3) and errors 510(1)-(3) associated with the scene parameters. As shown in fig. 5, the error associated with the scene parameter 508(1) may be visualized in the environment 504 as a larger bounding box 512, which larger bounding box 512 indicates uncertainty regarding the size of the associated object.
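A simplified sketch of deriving a per-classification error distribution by comparing perception outputs with verified data follows; the observation tuples and the Gaussian (mean, standard deviation) summary are assumptions for illustration, not the actual form of the perceptual error model data 502.

```python
import statistics
from collections import defaultdict
from typing import Dict, List, Tuple

# (classification, detected_extent_m, verified_extent_m) triples, as might be
# extracted from vehicle log data and manually labeled real data.
observations: List[Tuple[str, float, float]] = [
    ("pedestrian", 0.9, 0.8),
    ("pedestrian", 0.7, 0.8),
    ("vehicle", 4.7, 4.5),
    ("vehicle", 4.4, 4.5),
]

errors_by_class: Dict[str, List[float]] = defaultdict(list)
for classification, detected, verified in observations:
    errors_by_class[classification].append(detected - verified)

# A simple Gaussian summary (mean, standard deviation) per classification can
# serve as an error distribution for objects of that class.
perception_error_model = {
    cls: (statistics.mean(errs), statistics.pstdev(errs))
    for cls, errs in errors_by_class.items()
}
print(perception_error_model)
```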
FIG. 6 shows an example 600 of the computing device 204 generating the simulation data 126 and determining the vehicle performance data 132. The parameterized scene component 120 can determine a parameterized scene based on scene parameters, a set of regions, and a perceptual error model. For example, the scene parameters may indicate objects in the parameterized scene, locations associated with the objects, velocities associated with the objects, and so forth. Additionally, a scene parameter may indicate a range of values and/or a range of probabilities associated with the scene parameter. The set of regions may indicate portions of the environment that may be used to place objects in the simulated environment. Further, the perceptual error model may indicate an error associated with a scene parameter. As described in detail herein, these may be combined to create a parameterized scene that may cover the variations provided by the scene parameters, the set of regions, and/or the perceptual error model.
The parameterized scene may be used by the simulation component 124 to simulate changes in the parameterized scene. For example, the simulation component 124 can perform a change of a parameterized scene for simulation of testing and verification. The simulation component 124 can generate simulation data 126 indicating how the autonomous vehicle controller is performing (e.g., responding), and can compare the simulation data 126 to predetermined results and/or determine whether any predetermined rules/assertions are violated/triggered.
As shown in FIG. 6, simulation data 126 may indicate a plurality of simulations (e.g., simulation 1, simulation 2, etc.) and simulation results (e.g., result 1, result 2). For example, as described above, the result may indicate a pass or a fail based on a rule/assertion that is violated/triggered. Additionally, the simulation data 126 may indicate a probability of encountering the scene. For example, and without limitation, the simulation component 124 can simulate a scene that includes a jaywalking pedestrian. The input data may indicate that the vehicle encounters a jaywalking pedestrian at a rate of 1 minute for every 1 hour of travel. This rate can be used to determine the probability of encountering a particular simulation associated with a variation of the parameterized scene. In some instances, the simulation component 124 can identify variations of the parameterized scene that have a low probability and perform simulations corresponding to those variations. This may allow autonomous vehicle controllers to be tested and verified in more unique situations.
Additionally, the simulation component 124 can identify changes to the parameterized scene based on the results to perform additional simulations. For example, but not limiting of, the result of the simulation may be a failure in which the scene parameter is associated with a vehicle speed of 15 meters per second. The simulation component 124 can identify speeds approaching 15 meters per second to determine a threshold at which the simulation will pass, which can further assist in developing a safer vehicle controller.
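One way such a pass/fail boundary could be located is a bisection between a known-passing and a known-failing parameter value, as sketched below; the `find_pass_threshold` helper and the toy pass criterion are hypothetical and stand in for the actual simulation runs.

```python
from typing import Callable


def find_pass_threshold(simulate: Callable[[float], bool],
                        failing_speed: float,
                        passing_speed: float,
                        tolerance: float = 0.1) -> float:
    """Bisect between a known-passing and a known-failing vehicle speed.

    simulate(speed) returns True for a passing simulation result.
    """
    low, high = passing_speed, failing_speed
    while high - low > tolerance:
        mid = (low + high) / 2.0
        if simulate(mid):
            low = mid
        else:
            high = mid
    return low


# Toy stand-in for the simulator: the scenario passes below 12.3 m/s.
result = find_pass_threshold(lambda speed: speed < 12.3,
                             failing_speed=15.0, passing_speed=5.0)
print(round(result, 2))  # approximately 12.3
```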
Based on the simulation data 126, the vehicle performance component 130 may generate vehicle performance data 132. As described above, for example, for event type 1 and cause type 1, the data 134(1) may indicate a safety level, and similarly for the data 134(2)-134(N). In some instances, the event type may indicate that a cost has met or exceeded a cost threshold, although other event types are also contemplated. For example, the cost may include, but is not limited to, a reference cost, an obstacle cost, a lateral cost, a longitudinal cost, and the like.
The reference cost may include a cost associated with a difference between a point on the reference trajectory (also referred to as a reference point) and a corresponding point on the target trajectory (also referred to as a point or a target point), where the difference represents one or more of a yaw, a lateral offset, a velocity, an acceleration, a curvature rate, or the like. In some instances, reducing the weight associated with the reference cost may reduce the penalty associated with a target trajectory that is a distance from the reference trajectory, which may provide smoother transitions, resulting in safer and/or more comfortable vehicle operation.
In some examples, the obstacle cost may include a cost associated with a distance between a point on the reference or target trajectory and a point associated with an obstacle in the environment. For example, the point associated with the obstacle may correspond to a point on the boundary of the drivable area, or may correspond to a point associated with an obstacle in the environment. In some examples, obstacles in the environment may include, but are not limited to: static objects (e.g., buildings, curbs, sidewalks, lane markers, posts, traffic lights, trees, etc.), or dynamic objects (e.g., vehicles, riders, pedestrians, animals, etc.). In some examples, the dynamic object may also be referred to as a proxy. In some examples, a static object or a dynamic object may be generally referred to as an object or an obstacle.
In some examples, the lateral cost may refer to a cost associated with a steering input to the vehicle, such as a maximum steering input relative to a speed of the vehicle. In some examples, the longitudinal cost may refer to a cost associated with a speed and/or acceleration (e.g., maximum braking and/or acceleration) of the vehicle. Such costs may be used to ensure that the vehicle is operating within feasible and/or comfort limits for the passengers being transported.
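A schematic combination of these cost terms might look like the following; the weights, the inverse-distance obstacle term, and the `TrajectoryPoint` fields are illustrative assumptions rather than the actual cost formulation used by the planner.

```python
from dataclasses import dataclass


@dataclass
class TrajectoryPoint:
    x: float
    y: float
    yaw: float
    speed: float


def weighted_trajectory_cost(reference: TrajectoryPoint, target: TrajectoryPoint,
                             obstacle_distance: float, steering_input: float,
                             acceleration: float,
                             w_ref: float = 1.0, w_obs: float = 2.0,
                             w_lat: float = 0.5, w_lon: float = 0.5) -> float:
    """Combine reference, obstacle, lateral, and longitudinal cost terms."""
    # Reference cost: difference between a reference point and a target point.
    reference_cost = (
        abs(reference.x - target.x)
        + abs(reference.y - target.y)
        + abs(reference.yaw - target.yaw)
        + abs(reference.speed - target.speed)
    )
    # Obstacle cost: larger clearance from the obstacle yields a smaller cost.
    obstacle_cost = 1.0 / max(obstacle_distance, 0.1)
    lateral_cost = abs(steering_input)
    longitudinal_cost = abs(acceleration)
    return (w_ref * reference_cost + w_obs * obstacle_cost
            + w_lat * lateral_cost + w_lon * longitudinal_cost)


cost = weighted_trajectory_cost(
    TrajectoryPoint(0.0, 0.0, 0.0, 10.0),
    TrajectoryPoint(0.2, 0.1, 0.02, 9.5),
    obstacle_distance=3.0, steering_input=0.1, acceleration=0.5,
)
print(cost > 5.0)  # compare against a cost threshold to flag an event
```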
In some examples, cause type 1 and cause type 2 may indicate a fault, such as a fault of a vehicle or a fault of an object (e.g., a rider). The vehicle performance component 130 may use predetermined rules/assertions to determine faults. For example, but not limiting of, a rule may indicate that when a vehicle is impacted by an object behind the vehicle, a fault may be associated with the object. In some instances, additional rules may be used, such as indicating that the vehicle must traverse the environment in a forward direction when the vehicle is impacted by a rear object. In some instances, a cause type (e.g., cause type 1 and/or cause type 2) may be associated with a component of an autonomous vehicle controller. As non-limiting examples, such reasons may include: sensing systems, predictive systems, planner systems, network delays, torque/acceleration failures, and/or failures of any other component or subcomponent of the vehicle.
As described above, the analysis component can determine to disable a scene parameter (e.g., set a fixed value associated with the scene parameter) based on the simulation data 126 and vary the other scene parameters. By isolating scene parameters, the analysis component can determine the scene parameters associated with successful or unsuccessful results. The vehicle performance data 132 may then indicate safety metrics associated with the parameterized scene. Additionally, the analysis component can perform a sensitivity analysis to determine a cause of a failure. For example, the analysis component may disable scene parameters individually to isolate one or more scene parameters, determine how disabling the scene parameters affects the response of the autonomous vehicle, capture statistics associated with disabling the one or more scene parameters, and record the outcome as a success or a failure. The statistical data may indicate how sets of scene parameters affect the outcome, may be used to determine whether a scene parameter increases or decreases the likelihood of a successful simulation, and may be used to identify the subsystem of the autonomous vehicle associated with a scene parameter when the success rate in the simulation data increases or decreases.
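A rough sketch of such a sensitivity analysis, in which all but one scene parameter is disabled (fixed) and the pass rate is recorded per varied parameter, is shown below; the toy simulator and parameter names are hypothetical.

```python
import random
from typing import Callable, Dict, List


def sensitivity_analysis(scene_parameters: Dict[str, List[float]],
                         fixed_values: Dict[str, float],
                         simulate: Callable[[Dict[str, float]], bool],
                         runs_per_parameter: int = 50) -> Dict[str, float]:
    """Fix (disable) all parameters but one and record the pass rate.

    A lower pass rate when a parameter is allowed to vary suggests that the
    parameter (and the subsystem it exercises) drives the failures.
    """
    pass_rates: Dict[str, float] = {}
    for name, values in scene_parameters.items():
        passes = 0
        for _ in range(runs_per_parameter):
            scene = dict(fixed_values)           # all other parameters disabled
            scene[name] = random.choice(values)  # only this parameter varies
            passes += simulate(scene)
        pass_rates[name] = passes / runs_per_parameter
    return pass_rates


# Toy simulator: the scenario fails when the object is close and moving fast.
def toy_simulate(scene: Dict[str, float]) -> bool:
    return not (scene["object_distance_m"] < 10 and scene["object_speed_mps"] > 2)


rates = sensitivity_analysis(
    {"object_distance_m": [5.0, 15.0, 25.0], "object_speed_mps": [1.0, 2.5, 4.0]},
    fixed_values={"object_distance_m": 5.0, "object_speed_mps": 3.0},
    simulate=toy_simulate,
)
print(rates)  # varying speed yields the lower pass rate in this toy setup
```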
Fig. 7 depicts a block diagram of an example system 700 for implementing techniques discussed herein. In at least one example, the system 700 may include a vehicle 202. In the illustrated example 700, the vehicle 202 is an autonomous vehicle; however, the vehicle 202 may be any other type of vehicle (e.g., a driver-controlled vehicle that may provide an indication as to whether it is safe to perform various operations).
The vehicle 202 may include: a computing device 702, one or more sensor systems 704, one or more transmitters 706, one or more communication connections 708 (also referred to as communication devices and/or modems), at least one direct connection 710 (e.g., for physically coupling with the vehicle 202 to exchange data and/or supply power), and one or more drive systems 712. One or more sensor systems 704 may be configured to capture sensor data associated with an environment.
The sensor system 704 may include: time-of-flight sensors, position sensors (e.g., GPS, compass, etc.), inertial sensors (e.g., Inertial Measurement Unit (IMU), accelerometer, magnetometer, gyroscope, etc.), lidar sensors, radar sensors, sonar sensors, infrared sensors, cameras (e.g., RGB, IR, intensity, depth, etc.), microphone sensors, environmental sensors (e.g., temperature sensors, humidity sensors, light sensors, pressure sensors, etc.), ultrasonic transducers, wheel encoders, etc. The sensor system 704 may include multiple instances of each of these or other types of sensors. For example, the time-of-flight sensors may include individual time-of-flight sensors located at the corners, front, rear, sides, and/or top of the vehicle 202. As another example, the camera sensors may include a plurality of cameras arranged at various locations around the exterior and/or interior of the vehicle 202. The sensor system 704 may provide input to the computing device 702.
The vehicle 202 may also include one or more emitters 706 for emitting light and/or sound. In this example, the one or more transmitters 706 include internal audio and visual transmitters that communicate with the occupants of the vehicle 202. For example, but not limiting of, the internal transmitters may include: speakers, lights, signs, display screens, touch screens, tactile emitters (e.g., vibration and/or force feedback), mechanical actuators (e.g., seat belt tensioners, seat positioners, headrest positioners, etc.), and the like. In this example, the one or more transmitters 706 also include an external transmitter. For example, and without limitation, in this example, the external transmitters include lights or other indicators of vehicle motion (e.g., indicator lights, signs, arrays of lights, etc.) to indicate direction of travel, and one or more audio transmitters (e.g., speakers, speaker arrays, horns, etc.) to audibly communicate with pedestrians or other nearby vehicles, one or more of which may include acoustic beam steering technology.
The vehicle 202 may also include one or more communication connections 708 that enable communication between the vehicle 202 and one or more other local or remote computing devices (e.g., remotely operated computing devices) or remote services. For example, the communication connection 708 may facilitate communication with other local computing devices on the vehicle 202 and/or the drive system 712. Moreover, the communication connection 708 may allow the vehicle 202 to communicate with other nearby computing devices (e.g., other nearby vehicles, traffic signals, etc.).
Communication connection(s) 708 may include a physical and/or logical interface for connecting computing device 702 to another computing device or to one or more external networks 714 (e.g., the internet). For example, the communication connection 708 may enable Wi-Fi based communication, such as via frequencies defined by the IEEE 802.11 standard, short-range wireless frequencies, such as bluetooth, cellular communication (e.g., 2G, 3G, 4G LTE, 5G, etc.), satellite communication, dedicated short-range communication (DSRC), or any suitable wired or wireless communication protocol that enables the respective computing device to interface with other computing devices. In at least some examples, the communication connection 708 can include one or more modems as described in detail above.
In at least one example, the vehicle 202 may include one or more drive systems 712. In some examples, the vehicle 202 may have a single drive system 712. In at least one example, if the vehicle 202 has multiple drive systems 712, the individual drive systems 712 may be positioned on opposite ends (e.g., front and rear, etc.) of the vehicle 202. In at least one example, the drive system 712 may include one or more sensor systems 704 to detect a condition of the drive system 712 and/or the surrounding environment of the vehicle 202. For example, but not limiting of, the sensor system 704 may include: one or more wheel encoders (e.g., rotary encoders) for sensing rotation of wheels of the drive system, inertial sensors (e.g., inertial measurement units, accelerometers, gyroscopes, magnetometers, etc.) for measuring direction and acceleration of the drive system, cameras or other image sensors, ultrasonic sensors for acoustically detecting objects in the environment surrounding the drive system, lidar sensors, radar sensors, and the like. Some sensors, such as wheel encoders, may be unique to the drive system 712. In some cases, the sensor system 704 on the drive system 712 may overlap with or supplement a corresponding system of the vehicle 202 (e.g., the sensor system 704).
The drive system 712 may include a number of vehicle systems, including a high voltage battery, an engine for propelling the vehicle, inverters for converting direct current from a battery to alternating current for use by other vehicle systems, steering systems including a steering motor and a steering rack (which may be electric), braking systems including hydraulic or electric actuators, suspension systems including hydraulic and/or pneumatic components, stability control systems for distributing braking force to mitigate traction loss and maintain control, HVAC systems, lighting devices (e.g., lighting devices such as headlights/taillights that illuminate the exterior surroundings of the vehicle), and one or more other systems (e.g., refrigeration systems, security systems, onboard charging systems, other electrical components such as DC/DC converters, high voltage connectors, high voltage cables, charging systems, charging ports, etc.). Additionally, the drive system 712 may include a drive system controller that may receive and pre-process data from the sensor system 704 and control the operation of various vehicle systems. In some examples, the drive system controller may include one or more processors and memory communicatively coupled with the one or more processors. The memory may store one or more modules for performing various functions of the drive system 712. In addition, the drive systems 712 also include one or more communication connections that enable the respective drive systems to communicate with one or more other local or remote computing devices.
Computing device 702 can include one or more processors 716 and memory 718 communicatively coupled to the one or more processors 716. In the illustrated example, the memory 718 of the computing device 702 stores a positioning component 720, a perception component 722, a prediction component 724, a planning component 726, and one or more system controllers 728. Although depicted as residing in memory 718 for purposes of illustration, it is contemplated that positioning component 720, perception component 722, prediction component 724, planning component 726, and one or more system controllers 728 can additionally or alternatively be accessible by computing device 702 (e.g., stored in a different component of vehicle 202 and/or accessible by the vehicle 202 (e.g., stored remotely)).
In the memory 718 of the computing device 702, the positioning component 720 may include functionality for receiving data from the sensor system 704 to determine the location of the vehicle 202. For example, the positioning component 720 can include and/or request/receive a three-dimensional map of the environment, and can continuously determine the location of the autonomous vehicle within the map. In some examples, the positioning component 720 may receive time-of-flight data, image data, lidar data, radar data, sonar data, IMU data, GPS data, wheel encoder data, any combination thereof, or the like, and may use SLAM (simultaneous localization and mapping) or CLAMS (calibration, localization and mapping, simultaneously) to accurately determine the location of the autonomous vehicle. In some examples, the positioning component 720 may provide data to various components of the vehicle 202 to determine an initial position of the autonomous vehicle for generating a trajectory, as discussed herein.
The perception component 722 may include functionality for performing object detection, segmentation, and/or classification. In some examples, the perception component 722 can provide processed sensor data that indicates the presence of an entity proximate the vehicle 202 and/or a classification of the entity as an entity type (e.g., car, pedestrian, rider, building, tree, road surface, curb, sidewalk, unknown, etc.). In additional and/or alternative examples, the sensing component 722 can provide processed sensor data that is indicative of one or more characteristics (also referred to as parameters) associated with the detected entity and/or the environment in which the entity is located. In some examples, characteristics associated with an entity may include, but are not limited to: x-position (global position), y-position (global position), z-position (global position), direction, entity type (e.g., classification), speed of the entity, scope (size) of the entity, etc. Characteristics associated with an environment may include, but are not limited to: presence of another entity in the environment, status of another entity in the environment, time of day, day of week, season, weather conditions, geographic location, dark/light indication, etc.
The perception component 722 may include functionality for storing perception data generated by the perception component 722. In some instances, the perception component 722 may determine a trace corresponding to an object that has been classified as an object type. For illustrative purposes only, the perception component 722, using the sensor system 704, may capture one or more images of the environment. The sensor system 704 may capture an image of an environment including an object, such as a pedestrian. The pedestrian may be at a first position at a time T and at a second position at time T + t (e.g., movement during a span of time t after time T). In other words, the pedestrian may move from the first position to the second position during this time span. Such movement may be recorded, for example, as stored perception data associated with the object.
In some examples, the stored perception data may include fused perception data captured by the vehicle. The fused perceptual data may include a fusion or other combination of sensor data from sensor systems 704, such as image sensors, lidar sensors, radar sensors, time-of-flight sensors, sonar sensors, global positioning system sensors, internal sensors, and/or any combination of these, for example. The stored perception data may additionally or alternatively include classification data that includes semantic classifications of objects (e.g., pedestrians, vehicles, buildings, roads, etc.) represented in the sensor data. The stored perception data may additionally or alternatively include trace data (a collection of historical locations, orientations, sensor features, etc. associated with the object over time) corresponding to the movement of objects classified as dynamic objects through the environment. The trace data may include a plurality of traces of a plurality of different objects over time. When an object is stationary (e.g., standing still) or moving (e.g., walking, running, etc.), this trace data may be mined to identify images of particular types of objects (e.g., pedestrians, animals, etc.). In this example, the computing device determines a track corresponding to the pedestrian.
The prediction component 724 may generate one or more probability maps representing predicted probabilities of possible locations of one or more objects in the environment. For example, the prediction component 724 may generate one or more probability maps for vehicles, pedestrians, animals, etc. within a threshold distance from the vehicle 202. In some instances, the prediction component 724 may measure the trace of the object and generate a discretized predicted probability map, heat map, probability distribution, discretized probability distribution, and/or trajectory for the object based on the observed and predicted behaviors. In some instances, the one or more probability maps may represent an intent of one or more objects in the environment.
The planning component 726 may determine a path for the vehicle 202 to follow through the environment. For example, the planning component 726 may determine various routes and paths at various levels of detail. In some instances, the planning component 726 may determine a route for traveling from a first location (e.g., a current location) to a second location (e.g., a target location). For purposes of this discussion, a route may be a sequence of waypoints traveled between two locations. By way of non-limiting example, the waypoints include streets, intersections, Global Positioning System (GPS) coordinates, and the like. Further, the planning component 726 can generate instructions for guiding the autonomous vehicle along at least a portion of the route from the first location to the second location. In at least one example, the planning component 726 may determine how to direct the autonomous vehicle from a first waypoint in the sequence of waypoints to a second waypoint in the sequence of waypoints. In some examples, the instruction may be a path or a portion of a path. In some examples, multiple paths may be generated substantially simultaneously (i.e., within technical tolerances) in accordance with a receding horizon technique. A single path of the multiple paths in the receding horizon having the highest confidence level may be selected to operate the vehicle.
In other examples, the planning component 726 may alternatively or additionally use data from the perception component 722 to determine a path that the vehicle 202 follows through the environment. For example, the planning component 726 may receive data from the perception component 722 regarding objects associated with the environment. Using this data, the planning component 726 may determine a route to travel from a first location (e.g., a current location) to a second location (e.g., a target location) to avoid objects in the environment. In at least some examples, such a planning component 726 may determine that there is no such collision-free path, and in turn provide a path to safely stop the vehicle 202 to avoid all collisions and/or mitigate damage.
In at least one example, the computing device 702 may include one or more system controllers 728, which may be configured to control steering, propulsion, braking, safety, transmitters, communications, and other systems of the vehicle 202. These system controllers 728 may be in communication with and/or control other components of the respective systems of the drive system 712 and/or the vehicle 202, which may be configured to operate according to the path provided from the planning component 726.
The vehicle 202 may be connected to the computing device 204 via the network 714, and the computing device 204 may include one or more processors 730 and memory 732 communicatively coupled with the one or more processors 730. In at least one example, the one or more processors 730 may be similar to the processor 716 and the memory 732 may be similar to the memory 718. In the illustrated example, the memory 732 of the computing device 204 stores the scene editor component 108, the parameter component 112, the error model component 116, the parameterized scene component 120, the simulation component 124, the analysis component 128, and the vehicle performance component 130. Although depicted as residing in the memory 732 for purposes of illustration, it is contemplated that the scene editor component 108, the parameter component 112, the error model component 116, the parameterized scene component 120, the simulation component 124, the analysis component 128, and the vehicle performance component 130 can additionally or alternatively be accessible by the computing device 204 (e.g., stored in a different component of the computing device 204 and/or accessible by the computing device 204 (e.g., stored remotely)). The scene editor component 108, the parameter component 112, the error model component 116, the parameterized scene component 120, the simulation component 124, the analysis component 128, and the vehicle performance component 130 may be substantially similar to the scene editor component 108, the parameter component 112, the error model component 116, the parameterized scene component 120, the simulation component 124, the analysis component 128, and the vehicle performance component 130 of FIG. 1.
Processor 716 of computing device 702 and processor 730 of computing device 204 may be any suitable processors capable of executing instructions to process data and perform the operations described herein. For example, and without limitation, processors 716 and 730 may include one or more Central Processing Units (CPUs), Graphics Processing Units (GPUs), or any other device or portion of a device that processes electronic data to convert that electronic data into other electronic data that may be stored in registers and/or memory. In some examples, integrated circuits (e.g., ASICs, etc.), gate arrays (e.g., FPGAs, etc.), and other hardware devices may also be considered processors, so long as they are configured to implement the coded instructions.
Memory 718 of computing device 702 and memory 732 of computing device 204 are examples of non-transitory computer-readable media. Memories 718 and 732 may store an operating system and one or more software applications, instructions, programs, and/or data to implement the methods described herein and the functions attributed to the respective systems. In various embodiments, memories 718 and 732 may be implemented using any suitable memory technology, such as Static Random Access Memory (SRAM), Synchronous Dynamic RAM (SDRAM), non-volatile/flash-type memory, or any other type of memory capable of storing information. The architecture, system, and individual elements described herein may include many other logical, procedural, and physical components, of which those shown in the figures are merely examples relevant to the discussion herein.
In some examples, aspects of some or all of the components discussed herein may include any model, algorithm, and/or machine learning algorithm. For example, in some instances, the components in memories 718 and 732 may be implemented as neural networks.
FIG. 8 depicts an example process 800 for determining a safety metric associated with a vehicle controller. Some or all of process 800 may be performed by one or more components in fig. 1-7, as described herein. For example, some or all of process 800 may be performed by computing device 204 and/or computing device 702.
At operation 802 of the example process 800, the process 800 may include receiving log data associated with operating an autonomous vehicle in an environment. In some instances, the log data may be generated by a vehicle that captures at least sensor data of the environment.
At operation 804 of the example process 800, the process 800 may include determining a set of scenes based on the log data (or other data), a scene in the set of scenes including scene parameters associated with an aspect of the environment. In some instances, the computing device may group similar scenes represented in the log data. For example, scenes may be grouped together using, for example, k-means clustering and/or by evaluating weighted distances (e.g., Euclidean distances) between parameters of the environment. Further, a scene parameter may represent an environmental parameter, such as a nighttime environmental parameter or a wet-conditions environmental parameter. In some instances, the scene parameters may be associated with a vehicle or an object (e.g., pose, speed, etc.).
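As an illustration only (assuming scikit-learn and NumPy are available), grouping logged scenes with k-means and deriving per-cluster encounter probabilities might be sketched as follows; the feature encoding of each logged scene is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row encodes a logged scene by environmental parameters, e.g.
# [vehicle_speed_mps, num_pedestrians, is_night, is_wet].
scene_features = np.array([
    [12.0, 0, 0, 0],
    [11.5, 0, 0, 0],
    [4.0, 2, 1, 0],
    [3.5, 3, 1, 0],
    [8.0, 1, 0, 1],
    [7.5, 1, 0, 1],
])

# Group similar logged scenes; each cluster can be treated as one scene type.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scene_features)

# The relative size of a cluster gives the probability of encountering it,
# which can be compared against a probability threshold.
unique, counts = np.unique(labels, return_counts=True)
probabilities = counts / counts.sum()
print(dict(zip(unique.tolist(), probabilities.tolist())))
```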
At operation 806 of the example process 800, the process 800 may include determining an error model associated with a subsystem of the autonomous vehicle. The error model component can compare vehicle data (e.g., log data) to the real data to determine differences between the vehicle data and the real data. In some instances, the vehicle data may represent estimated values associated with objects in the environment, such as estimated positions, estimated directions, estimated ranges, etc., while the real data may represent actual positions, actual directions, or actual ranges of the objects. Based on the difference, the error model component may determine an error associated with a subsystem (e.g., perception system, tracking system, prediction system, etc.) of the vehicle.
At operation 808 of the example process 800, the process 800 may include determining a parameterized scene based on the scene parameters and the error model. These may be combined to create a parameterized scene that may cover the variations provided by the scene parameters and/or the error model. In some instances, scene parameters may be randomly selected and combined to create a parameterized scene. In some instances, the scene parameters may be combined based on the probability of their occurring together. For example, but not limiting of, the log data may indicate that 5% of the driving experience includes a pedestrian encounter, and the parameterized scene component may include the pedestrian as a scene parameter in 5% of the parameterized scenes generated by the parameterized scene component. In some instances, the parameterized scene component may validate the parameterized scene to reduce impossible or inappropriate combinations of scene parameters. For example, but not limiting of, a vehicle will not be placed in a lake and a pedestrian will not be walking at a speed of 30 meters per second. As a non-limiting example, such a parameterized scene may include ranges of distances, speeds, lighting conditions, weather conditions, etc. of the vehicle and of a jaywalking pedestrian on a particularly defined roadway, with various Gaussian distributions (or other distributions) of errors on the perception models, prediction models, etc. based at least in part on the scene parameters.
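A sketch of combining scene parameters according to assumed co-occurrence probabilities, while rejecting implausible values such as a 30 meters per second pedestrian, is shown below; the probabilities, bounds, and parameter names are illustrative only.

```python
import random
from typing import Dict

# Probability that a parameter appears in a generated parameterized scene,
# hypothetically derived from log data (e.g., pedestrians in 5% of driving).
inclusion_probabilities = {"pedestrian": 0.05, "rain": 0.20}

MAX_PEDESTRIAN_SPEED_MPS = 3.0  # validation bound; 30 m/s would be rejected


def generate_parameterized_scene(rng: random.Random) -> Dict[str, float]:
    scene: Dict[str, float] = {"vehicle_speed_mps": rng.uniform(5.0, 15.0)}
    if rng.random() < inclusion_probabilities["pedestrian"]:
        # Clamp to a plausible walking speed so invalid combinations are not emitted.
        scene["pedestrian_speed_mps"] = min(rng.uniform(0.5, 5.0),
                                            MAX_PEDESTRIAN_SPEED_MPS)
    if rng.random() < inclusion_probabilities["rain"]:
        scene["rain"] = 1.0
    return scene


rng = random.Random(0)
scenes = [generate_parameterized_scene(rng) for _ in range(1000)]
with_pedestrian = sum("pedestrian_speed_mps" in s for s in scenes)
print(with_pedestrian / len(scenes))  # roughly 0.05
```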
At operation 810 of the example process 800, the process 800 may include perturbing the parameterized scene by modifying at least one of the parameterized scene, a scene parameter, or a component of the simulated vehicle based at least in part on the error. In some instances, an uncertainty may be associated with a scene parameter. For example, but not limiting of, the location of the object may be associated with an uncertainty of 5%, such that the autonomous controller traverses the environment while accounting for the 5% uncertainty. In some examples, when the simulator performs a simulation, the simulator may determine from the error model the error to be incorporated into the simulation.
At operation 812 of the example process 800, the process 800 may include instantiating a simulated vehicle in the disturbance parameterized scene. The simulator may use a simulated vehicle that may be associated with the autonomous driving controller and traverse the autonomous driving controller through the simulated environment. Instantiating the autonomous vehicle controller in a parameterized scene and simulating the parameterized scene can effectively cover a wide range of variations of the scene without the need to manually enumerate the variations. Additionally, based at least in part on executing the parameterized scenario, the simulation data may indicate how the autonomous vehicle controller responds to the parameterized scenario, and determine a successful result or an unsuccessful result based at least in part on the simulation data.
At operation 814 of the example process 800, the process may include receiving simulation data indicating how the simulated vehicle responds to the perturbed parametric scene. After the simulation, the results may indicate a pass (e.g., a success result), a failure, and/or a degree of success or failure associated with the vehicle controller.
At operation 816 of the example process 800, the process may include determining a safety metric associated with the vehicle controller based on the simulation data. For example, each simulation may produce a successful or unsuccessful result. Additionally, as described above, the vehicle performance component may determine vehicle performance data that may indicate, based on the simulation data, how the vehicle performed in the environment. Based on the sensitivity analysis, the vehicle performance data may indicate a scenario in which the simulation result is unsuccessful, a reason for the unsuccessful result, and/or a boundary of a scene parameter indicating the values of the scene parameter for which the simulation result is successful. Thus, the safety metric may indicate the pass/fail rate of the vehicle controller across various simulation scenarios.
FIG. 9 depicts a flowchart of an example process for determining a statistical model associated with a subsystem of an autonomous vehicle. Some or all of process 900 may be performed by one or more components in fig. 1-7, as described herein. For example, some or all of process 900 may be performed by computing device 204 and/or computing device 702.
At operation 902 of the example process 900, the process 900 may include receiving vehicle data (or other data) associated with a subsystem of an autonomous vehicle. The vehicle data may include log data captured by vehicles traveling through the environment. In some examples, the vehicle data may include control data (e.g., data for control systems such as steering, braking, etc.) and/or sensor data (e.g., lidar data, radar data, etc.).
At operation 904 of the example process 900, the process 900 may include determining output data associated with the subsystem based on the vehicle data. For example, but not limiting of, the subsystem may be a perception system and the output data may be a bounding box associated with an object in the environment.
At operation 906 of the example process 900, the process 900 may include receiving real data associated with a subsystem. In some instances, the real data may be manually labeled and/or determined from other validated machine learning components. For example, but not limiting of, the real data may include verified bounding boxes associated with objects in the environment.
At operation 908 of the example process 900, the process 900 may include determining a difference between the first portion of the output data and the second portion of the real data, the difference representing an error associated with the subsystem. As described above, the output data may include a bounding box associated with an object in the environment detected by the perception system of the vehicle, and the real data may include a verified bounding box associated with the object. The difference between the two bounding boxes may indicate an error associated with the perception system of the vehicle. For example, but not limiting of, the bounding box of the output data may be larger than the verified bounding box, indicating that the perception system detects an object larger than the object in the environment.
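A minimal sketch of computing such a difference between an output bounding box and a verified bounding box follows; the `BoundingBox` fields and the error keys are assumptions made for illustration.

```python
from dataclasses import dataclass


@dataclass
class BoundingBox:
    x: float       # center position (meters)
    y: float
    yaw: float     # orientation (radians)
    length: float  # extent (meters)
    width: float


def subsystem_error(output: BoundingBox, ground_truth: BoundingBox) -> dict:
    """Difference between a perception output box and the verified box."""
    return {
        "position_error_m": ((output.x - ground_truth.x) ** 2
                             + (output.y - ground_truth.y) ** 2) ** 0.5,
        "yaw_error_rad": output.yaw - ground_truth.yaw,
        "length_error_m": output.length - ground_truth.length,
        "width_error_m": output.width - ground_truth.width,
    }


detected = BoundingBox(x=10.2, y=4.9, yaw=0.02, length=2.0, width=0.9)
verified = BoundingBox(x=10.0, y=5.0, yaw=0.0, length=1.75, width=0.8)
print(subsystem_error(detected, verified))
# A positive length error indicates the perception system detected the object
# as larger than it actually is.
```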
At operation 910 of the example process 900, the process 900 may include determining a statistical model associated with the subsystem based on the difference. For example,
exemplary clauses
A: a system, comprising: one or more processors; and one or more computer-readable media storing computer-executable instructions that, when executed, cause the system to perform operations comprising: receiving log data associated with operating an autonomous vehicle in an environment; determining a set of scenes based at least in part on the log data, a scene of the set of scenes comprising scene parameters associated with an aspect of the environment; determining an error model associated with a subsystem of the autonomous vehicle; determining a parameterized scene based at least in part on the scene parameters and the error model; perturbing the parameterized scene by adding an error to at least one of the scene parameters or a component of a simulated vehicle to be instantiated in the perturbed parameterized scene, the simulated vehicle being controlled by a vehicle controller; instantiating the simulated vehicle in the perturbed parameterized scene; receiving simulation data indicating how the simulated vehicle responds to the perturbed parameterized scene; and determining a safety metric associated with the vehicle controller based at least in part on the simulation data.
B: the system of paragraph a, wherein determining the set of scenes comprises: clustering the log data to determine a first set of clusters, wherein individual clusters in the first set of clusters are associated with individual scenes; determining a probability associated with the individual cluster based at least in part on the first set of clusters; and determining a second set of clusters based at least in part on a probability threshold and the first set of clusters.
C: the system of paragraph a, wherein determining the error model comprises: receiving real data associated with the environment; determining an error based at least in part on comparing the real data to the log data; and determining an error profile based at least in part on the error; wherein the error model comprises the error distribution.
D: the system of paragraph a, wherein the parameterized scene is a first parameterized scene, the perturbed parameterized scene is a first perturbed parameterized scene, and the simulation data is first simulation data, the operations further comprising: determining a second parameterized scene based on the first simulation data, the second parameterized scene including at least one of a first subset of the scene parameters or a second subset of the error model; perturbing the second parameterized scene into a second perturbed parameterized scene; instantiating the simulated vehicle in the second perturbed parameterized scene; receiving second simulation data; and updating the safety metric based at least in part on the second simulation data.
E: a method, comprising: determining a scene comprising scene parameters describing a portion of an environment; receiving an error model associated with a subsystem of a vehicle; determining a parameterized scene based at least in part on the scene, the scene parameters, and the error model; perturbing the parameterized scene into a perturbed parameterized scene; receiving simulation data indicating how the subsystem of the vehicle responds to the perturbed parameterized scene; and determining a safety metric associated with the subsystem of the vehicle based at least in part on the simulation data.
F: the method of paragraph E, wherein the scene parameters are associated with at least one of object size, object speed, object pose, object density, vehicle speed, vehicle trajectory.
G: the method of paragraph E, wherein determining the scene comprises: receiving log data associated with an autonomous vehicle; clustering the log data to determine a first set of clusters, wherein individual clusters in the first set of clusters are associated with the scene; determining a probability associated with the individual cluster based at least in part on the first set of clusters; and determining that the probability meets or exceeds a probability threshold.
H: the method of paragraph E, wherein the error model is determined based at least in part on: receiving real data associated with the environment; determining an error based at least in part on comparing the real data to log data associated with a vehicle; and determining an error profile based at least in part on the error; wherein the error model comprises the error distribution.
I: the method of paragraph E, wherein the parameterized scene is a first parameterized scene, the perturbed parameterized scene is a first perturbed parameterized scene, and the simulation data is first simulation data, the method further comprising: determining a second parameterized scene based on the first simulation data, the second parameterized scene including at least one of a first subset of the scene parameters or a second subset of the error model; perturbing the second parameterized scene; receiving second simulation data; and updating the safety metric based at least in part on the second simulation data.
J: the method of paragraph I, further comprising: disabling at least a first portion of one of the scene parameters or the error model; and associating the second simulation data with at least a second portion of the non-disabled one of the scene parameter or the error model.
K: the method of paragraph E, wherein the safety metric indicates a probability of meeting or exceeding a cost threshold.
L: the method of paragraph E, wherein the portion is a first portion, the method further comprising: receiving map data, wherein a second portion of the map data is associated with the first portion of the environment; and determining that the second portion of the map data is associated with a scene associated with a probability that meets or exceeds a threshold probability associated with the scene parameter.
M: a non-transitory computer-readable medium storing instructions executable by a processor, wherein the instructions, when executed, cause the processor to perform operations comprising: determining a scene comprising scene parameters describing a portion of an environment; one or more of receiving or determining an error model associated with a subsystem of a vehicle; determining a parameterized scene based at least in part on the scene, the scene parameters, and the error model; perturbing the parameterized scene into a perturbed parameterized scene; receiving simulation data indicating how the subsystem of the vehicle responds to the perturbed parameterized scene; and determining a safety metric associated with the subsystem of the vehicle based at least in part on the simulation data.
N: the non-transitory computer readable medium of paragraph M, wherein the scene parameters are associated with at least one of object size, object speed, object pose, object density, vehicle speed, vehicle trajectory.
O: the non-transitory computer-readable medium of paragraph M, wherein determining the scene comprises: receiving log data associated with an autonomous vehicle; clustering the log data to determine a first set of clusters, wherein individual clusters in the first set of clusters are associated with the scene; determining a probability associated with the individual cluster based at least in part on the first set of clusters; and determining that the probability meets or exceeds a probability threshold.
P: the non-transitory computer-readable medium of paragraph M, wherein the error model is determined based at least in part on: receiving real data associated with the environment; determining an error based at least in part on comparing the real data to log data associated with a vehicle; and determining an error profile based at least in part on the error; wherein the error model comprises the error distribution.
Q: the non-transitory computer-readable medium of paragraph M, wherein the parameterized scene is a first parameterized scene, the perturbed parameterized scene is a first perturbed parameterized scene, and the simulation data is first simulation data, the operations further comprising: determining, based on the first simulation data, a second parameterized scene including at least one of a first subset of the scene parameters or a second subset of the error model; perturbing the second parameterized scene; receiving second simulation data; and updating the safety metric based at least in part on the second simulation data.
R: the non-transitory computer-readable medium of paragraph Q, the operations further comprising: disabling at least a first portion of one of the scene parameters or the error model; and associating the second simulation data with at least a second portion of the non-disabled one of the scene parameter or the error model.
S: the non-transitory computer-readable medium of paragraph M, wherein the safety metric indicates a probability of meeting or exceeding a cost threshold.
T: the non-transitory computer readable medium of paragraph M, wherein the error model is associated with one or more of a perception system of the vehicle, a prediction system of the vehicle, or a planner system of the vehicle.
U: a system, comprising: one or more processors; and one or more computer-readable media storing computer-executable instructions that, when executed, cause a system to perform operations comprising: receiving vehicle data; inputting at least a first portion of the vehicle data into a subsystem of an autonomous vehicle, the subsystem associated with at least one of a perception system, a planning system, a tracking system, or a prediction system; determining an environmental parameter based at least in part on a second portion of the vehicle data; receiving an estimate from the subsystem; receiving real data associated with the subsystem; determining a difference between the estimate and the real data, the difference representing an error associated with the subsystem; and determining a statistical model associated with the subsystem based at least in part on the difference, the statistical model indicating a probability of the error, the probability associated with the environmental parameter.
V: The system of paragraph U, wherein the vehicle data includes sensor data from sensors on the autonomous vehicle, wherein the environmental parameters include one or more of a speed of the autonomous vehicle or a weather condition, and wherein the subsystem is a perception subsystem, the estimate is one or more of an estimated position, an estimated orientation, or an estimated extent of an object represented in the vehicle data, and the real data represents an actual position, an actual orientation, or an actual extent of the object.
W: The system of paragraph U, wherein determining the statistical model comprises: determining, based at least in part on the vehicle data, a first frequency associated with the environmental parameter and a second frequency associated with the difference; and determining the probability based at least in part on the first frequency and the second frequency.
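Read as relative frequencies, paragraph W reduces to a single ratio; a hypothetical numeric example:

# Out of all logged frames, 2,500 were recorded under the environmental parameter of interest
# (first frequency), and 150 of those frames exhibited the difference/error (second frequency).
first_frequency = 2500
second_frequency = 150
probability = second_frequency / first_frequency  # 0.06, i.e. a 6% error probability in that condition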
X: The system of paragraph U, the operations further comprising: determining a simulated environmental parameter based at least in part on simulated vehicle data; determining that the simulated environmental parameter corresponds to the environmental parameter; determining a simulated estimate based at least in part on the simulated vehicle data and the subsystem; and perturbing the simulated estimate by altering a portion of a corresponding simulated scene based at least in part on the error and the probability.
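A sketch of the perturbation in paragraph X: when a simulated environmental parameter matches one observed in the logs, the corresponding error is injected into the simulated estimate with the modeled probability. The function and dictionary shapes are assumptions carried over from the earlier sketches.

import random

def perturb_estimate(simulated_estimate, simulated_env, statistical_model, error_model):
    """Apply a modeled error to a simulated subsystem estimate with the modeled probability."""
    error_probability = statistical_model.get(simulated_env, 0.0)
    if random.random() < error_probability:
        mean, std = error_model[simulated_env]
        return simulated_estimate + random.gauss(mean, std)
    return simulated_estimate

# Example: a simulated range estimate of 25.0 m in "rain", where errors occur 40% of the time
# with a standard deviation of 0.8 m (both values hypothetical).
perturbed = perturb_estimate(25.0, "rain", {"rain": 0.4}, {"rain": (0.0, 0.8)})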
Y: A method, comprising: receiving data associated with a vehicle; determining an environmental parameter based at least in part on a first portion of the data; determining output data associated with a system of the vehicle based at least in part on a second portion of the data; receiving real data associated with the system and the data; determining a difference between the output data and the real data, the difference representing an error associated with the system; and determining a statistical model associated with the system based at least in part on the difference, the statistical model indicating a probability of the error, the probability associated with the environmental parameter.
Z: the method of paragraph Y, wherein determining the statistical model comprises: determining a frequency associated with the error based at least in part on the data.
AA: the method of paragraph Y, wherein the environmental parameter includes one or more of a speed of the vehicle, a weather condition, a geographic location of the vehicle, or a time of day.
AB: The method of paragraph Y, further comprising: generating a simulation; determining that a simulated environmental parameter corresponds to the environmental parameter; inputting simulation data into the system; receiving a simulated output from the system; and perturbing the simulation based at least in part on the probability and the error.
AC: the method of paragraph Y, wherein the system is a perception system, the output data comprises a first bounding box associated with an object, the real data comprises a second bounding box associated with the object, and wherein determining the difference comprises determining a difference between at least one of: a first extent of the first bounding box and a second extent of the second bounding box; or a first pose of the first bounding box and a second pose of the second bounding box.
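A sketch of the bounding-box comparison in paragraph AC, using a simplified two-dimensional box with an extent (length, width) and a yaw angle standing in for the pose; the dataclass and field names are introduced here only for illustration.

import math
from dataclasses import dataclass

@dataclass
class Box2D:
    length: float
    width: float
    yaw: float  # pose, in radians

def box_differences(estimated: Box2D, ground_truth: Box2D) -> dict:
    """Differences between a perception bounding box and the ground-truth bounding box."""
    yaw_delta = estimated.yaw - ground_truth.yaw
    return {
        "extent_difference": (estimated.length - ground_truth.length,
                              estimated.width - ground_truth.width),
        # wrap the pose difference into [-pi, pi)
        "pose_difference": math.atan2(math.sin(yaw_delta), math.cos(yaw_delta)),
    }

difference = box_differences(Box2D(4.6, 1.9, 0.05), Box2D(4.5, 1.8, 0.0))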
AD: The method of paragraph Y, wherein the system is a tracker system, the output data includes planned trajectory data of the vehicle, the real data includes a measured trajectory of the vehicle, and wherein determining the difference includes determining a difference between the planned trajectory data and the measured trajectory.
AE: The method of paragraph Y, wherein the system is associated with a prediction system, the output data comprises a predicted trajectory of an object in the environment, the real data comprises an observed trajectory of the object, and wherein determining the difference comprises determining a difference between the predicted trajectory and the observed trajectory.
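Paragraphs AD and AE both amount to comparing two trajectories point by point (planned versus measured for the tracker system, predicted versus observed for the prediction system). One common choice of difference is the average displacement error sketched below; the sample points are hypothetical.

import math

def average_displacement_error(trajectory_a, trajectory_b):
    """Mean Euclidean distance between corresponding (x, y) points of two equal-length trajectories."""
    assert len(trajectory_a) == len(trajectory_b)
    return sum(math.dist(a, b) for a, b in zip(trajectory_a, trajectory_b)) / len(trajectory_a)

planned  = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.1)]
measured = [(0.0, 0.1), (1.1, 0.0), (2.0, 0.3)]
difference = average_displacement_error(planned, measured)  # feeds the tracker's error model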
AF: The method of paragraph Y, wherein the data is first data, the environmental parameter is a first environmental parameter, the difference is a first difference, the error is a first error, and the probability is a first probability, the method further comprising: receiving second data associated with the system of the vehicle; determining a second environmental parameter based at least in part on the second data; determining a second difference between a third portion of the output data and a fourth portion of the real data, the second difference representing a second error associated with the system; and updating the statistical model associated with the system, the statistical model indicating a second probability of the second error, the second probability associated with the second environmental parameter.
AG: a non-transitory computer-readable medium storing instructions executable by a processor, wherein the instructions, when executed, cause the processor to perform operations comprising: receiving data; determining an environmental parameter based at least in part on the data; determining output data based at least in part on the data and a system of the vehicle; receiving real data associated with the system and the data; determining a difference between a first portion of the output data and a second portion of the real data, the difference representing an error associated with the system; determining a statistical model associated with the system based at least in part on the difference, the statistical model indicating a probability of the error; and associating the probability with the environmental parameter.
AH: The non-transitory computer-readable medium of paragraph AG, wherein determining the statistical model comprises: determining a frequency associated with the difference based at least in part on the data.
AI: The non-transitory computer-readable medium of paragraph AG, wherein the environmental parameters include one or more of a speed of the vehicle, weather conditions, or a time of day.
AJ: The non-transitory computer-readable medium of paragraph AG, the operations further comprising: generating a simulation comprising a simulated vehicle; receiving simulation data; determining that a simulated environmental parameter corresponds to the environmental parameter; inputting at least a portion of the simulation data into the system; receiving simulated output data from the system; and altering the simulated output data based at least in part on the probability and the error.
AK: The non-transitory computer-readable medium of paragraph AG, wherein the system is a perception system, the output data includes a first bounding box associated with an object, the real data includes a second bounding box associated with the object, and wherein determining the difference includes determining a difference between at least one of: a first extent of the first bounding box and a second extent of the second bounding box; or a first pose of the first bounding box and a second pose of the second bounding box.
AL: The non-transitory computer-readable medium of paragraph AG, wherein the system is a tracker system, the output data includes planned trajectory data of the vehicle, the real data includes a measured trajectory of the vehicle, and wherein determining the difference includes determining a difference between the planned trajectory data and the measured trajectory.
AM: The non-transitory computer-readable medium of paragraph AG, wherein the system is associated with a prediction system, the output data comprises a predicted trajectory of an object in the environment, the real data comprises an observed trajectory of the object, and wherein determining the difference comprises determining a difference between the predicted trajectory and the observed trajectory.
AN: The non-transitory computer-readable medium of paragraph AG, wherein the data is first data, the environmental parameter is a first environmental parameter, the difference is a first difference, the error is a first error, and the probability is a first probability, the operations further comprising: receiving second data associated with the system of the vehicle; determining a second environmental parameter based at least in part on the second data; determining a second difference between a third portion of the output data and a fourth portion of the real data, the second difference representing a second error associated with the system; and updating the statistical model associated with the system, the statistical model indicating a second probability of the second error, the second probability associated with the second environmental parameter.
Although the exemplary clauses described above are described with respect to one particular embodiment, it should be understood that the contents of the exemplary clauses, in the context of this document, may also be implemented via a method, an apparatus, a system, a computer-readable medium, and/or another embodiment. Additionally, any of example A through example AN may be implemented alone or in combination with any other one or more of example A through example AN.
Conclusion
While one or more examples of the techniques described herein have been described, various modifications, additions, permutations and equivalents thereof are also included within the scope of the techniques described herein.
In the description of the examples, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific examples of the claimed subject matter. It is to be understood that other examples may be used and that modifications or changes, such as structural changes, may be made. Such examples, modifications, or variations do not necessarily depart from the scope of the subject matter as claimed. Although the steps herein may be presented in a certain order, in some cases, the order may be altered such that certain inputs are provided at different times or in a different order without altering the functionality of the systems and methods described. The disclosed procedures may also be performed in a different order. In addition, the various computations herein need not be performed in the order disclosed, and other examples using alternative orders of computation may be readily implemented. In addition to being reordered, a computation can be decomposed into sub-computations with the same result.

Claims (15)

1. A method, comprising:
determining a scene comprising scene parameters describing a portion of an environment;
receiving an error model associated with a subsystem of a vehicle;
determining a parameterized scene based at least in part on the scene, the scene parameters, and the error model;
perturbing the parameterized scene into a perturbed parameterized scene;
receiving simulation data indicating how the subsystem of the vehicle responds to the perturbed parameterized scene; and
determining a safety metric associated with the subsystem of the vehicle based at least in part on the simulation data.
2. The method of claim 1, wherein the scene parameter is associated with at least one of an object size, an object speed, an object pose, an object density, a vehicle speed, or a vehicle trajectory.
3. The method of claim 1 or 2, wherein determining the scene comprises:
receiving log data associated with an autonomous vehicle;
clustering the log data to determine a first set of clusters, wherein individual clusters in the first set of clusters are associated with the scene;
determining a probability associated with the individual cluster based at least in part on the first set of clusters; and
determining that the probability meets or exceeds a probability threshold.
4. The method of any of claims 1 to 3, wherein the error model is determined based at least in part on:
receiving real data associated with the environment;
determining an error based at least in part on comparing the real data to log data associated with a vehicle; and
determining an error distribution based at least in part on the error;
wherein the error model comprises the error distribution.
5. The method of any of claims 1 to 4, wherein the parameterized scene is a first parameterized scene, the perturbed parameterized scene is a first perturbed parameterized scene, and the simulation data is first simulation data, the method further comprising:
determining a second parameterized scene based on the first simulation data, the second parameterized scene including at least one of a first subset of the scene parameters or a second subset of the error model;
perturbing the second parameterized scene;
receiving second simulation data; and
updating the safety metric based at least in part on the second simulation data.
6. The method of claim 5, further comprising:
disabling at least a first portion of one of the scene parameters or the error model; and
associating the second simulation data with at least a second portion of the non-disabled one of the scene parameters or the error model.
7. The method of any of claims 1-6, wherein the safety metric indicates a probability of meeting or exceeding a cost threshold.
8. The method of any of claims 1-7, wherein the portion is a first portion, the method further comprising:
receiving map data, wherein a second portion of the map data is associated with the first portion of the environment; and
determining that the second portion of the map data is associated with a scene associated with a probability that meets or exceeds a threshold probability associated with the scene parameter.
9. A computer program product comprising coded instructions which, when run on a computer, implement the method of any one of claims 1 to 8.
10. A system, comprising:
one or more processors; and
one or more non-transitory computer-readable media storing instructions that, when executed, cause the one or more processors to perform operations comprising:
determining a scene comprising scene parameters describing a portion of an environment;
one or more of receiving or determining an error model associated with a subsystem of a vehicle;
determining a parameterized scene based at least in part on the scene, the scene parameters, and the error model;
perturbing the parameterized scene into a perturbed parameterized scene;
receiving simulation data indicating how the subsystem of the vehicle responds to the perturbed parameterized scene; and
determining a safety metric associated with the subsystem of the vehicle based at least in part on the simulation data.
11. The system of claim 10, wherein at least one of:
the scene parameter is associated with at least one of an object size, an object speed, an object pose, an object density, a vehicle speed, or a vehicle trajectory;
the safety metric indicates a probability of meeting or exceeding a cost threshold; or
the error model is associated with one or more of a perception system of the vehicle, a prediction system of the vehicle, or a planner system of the vehicle.
12. The system of claim 10 or 11, wherein determining the scene comprises:
receiving log data associated with an autonomous vehicle;
clustering the log data to determine a first set of clusters, wherein individual clusters in the first set of clusters are associated with the scene;
determining a probability associated with the individual cluster based at least in part on the first set of clusters; and
determining that the probability meets or exceeds a probability threshold.
13. The system of any of claims 10 to 12, wherein the error model is determined based at least in part on:
receiving real data associated with the environment;
determining an error based at least in part on comparing the real data to log data associated with a vehicle; and
determining an error distribution based at least in part on the error;
wherein the error model comprises the error distribution.
14. The system of any of claims 10 to 13, wherein the parameterized scene is a first parameterized scene, the perturbed parameterized scene is a first perturbed parameterized scene, and the simulation data is first simulation data, the operations further comprising:
determining a second parameterized scene based on the first simulation data, the second parameterized scene including at least one of a first subset of the scene parameters or a second subset of the error model;
perturbing the second parameterized scene;
receiving second simulation data; and
updating the safety metric based at least in part on the second simulation data.
15. The system of claim 14, the operations further comprising:
disabling at least a first portion of one of the scene parameters or the error model; and
associating the second simulation data with at least a second portion of the non-disabled one of the scene parameters or the error model.
CN202080066048.6A 2019-09-27 2020-09-17 Security analysis framework Pending CN114430722A (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US16/586,838 2019-09-27
US16/586,853 US11351995B2 (en) 2019-09-27 2019-09-27 Error modeling framework
US16/586,853 2019-09-27
US16/586,838 US11625513B2 (en) 2019-09-27 2019-09-27 Safety analysis framework
PCT/US2020/051271 WO2021061488A1 (en) 2019-09-27 2020-09-17 Safety analysis framework

Publications (1)

Publication Number Publication Date
CN114430722A true CN114430722A (en) 2022-05-03

Family

ID=75166344

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080066048.6A Pending CN114430722A (en) 2019-09-27 2020-09-17 Security analysis framework

Country Status (4)

Country Link
EP (1) EP4034439A4 (en)
JP (1) JP2022550058A (en)
CN (1) CN114430722A (en)
WO (1) WO2021061488A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116680517A (en) * 2023-07-27 2023-09-01 北京赛目科技股份有限公司 Method and device for determining failure probability in automatic driving simulation test

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115731695A (en) * 2021-08-31 2023-03-03 北京图森未来科技有限公司 Scene security level determination method, device, equipment and storage medium
US11955001B2 (en) 2021-09-27 2024-04-09 GridMatrix, Inc. Traffic near miss collision detection
WO2023049453A1 (en) * 2021-09-27 2023-03-30 GridMatrix Inc. Traffic monitoring, analysis, and prediction
CN114104000B (en) * 2021-12-16 2024-04-12 智己汽车科技有限公司 Dangerous scene evaluation and processing system, method and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10373259B1 (en) * 2014-05-20 2019-08-06 State Farm Mutual Automobile Insurance Company Fully autonomous vehicle insurance pricing
DE102014215980A1 (en) * 2014-08-12 2016-02-18 Volkswagen Aktiengesellschaft Motor vehicle with cooperative autonomous driving mode
US10496766B2 (en) 2015-11-05 2019-12-03 Zoox, Inc. Simulation system and methods for autonomous vehicles
US9734455B2 (en) * 2015-11-04 2017-08-15 Zoox, Inc. Automated extraction of semantic information to enhance incremental mapping modifications for robotic vehicles
US10346564B2 (en) 2016-03-30 2019-07-09 Toyota Jidosha Kabushiki Kaisha Dynamic virtual object generation for testing autonomous vehicles in simulated driving scenarios
DE102016009762A1 (en) * 2016-08-11 2018-02-15 Trw Automotive Gmbh Control system and control method for determining a likelihood of a lane change of a preceding motor vehicle
US10481044B2 (en) 2017-05-18 2019-11-19 TuSimple Perception simulation for improved autonomous vehicle control
US10725470B2 (en) * 2017-06-13 2020-07-28 GM Global Technology Operations LLC Autonomous vehicle driving systems and methods for critical conditions

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116680517A (en) * 2023-07-27 2023-09-01 北京赛目科技股份有限公司 Method and device for determining failure probability in automatic driving simulation test
CN116680517B (en) * 2023-07-27 2023-09-29 北京赛目科技股份有限公司 Method and device for determining failure probability in automatic driving simulation test

Also Published As

Publication number Publication date
WO2021061488A1 (en) 2021-04-01
JP2022550058A (en) 2022-11-30
EP4034439A4 (en) 2023-11-01
EP4034439A1 (en) 2022-08-03

Similar Documents

Publication Publication Date Title
US11625513B2 (en) Safety analysis framework
US11734473B2 (en) Perception error models
US11351995B2 (en) Error modeling framework
US11574089B2 (en) Synthetic scenario generator based on attributes
US11568100B2 (en) Synthetic scenario simulator based on events
US11150660B1 (en) Scenario editor and simulator
US20210339741A1 (en) Constraining vehicle operation based on uncertainty in perception and/or prediction
US11755020B2 (en) Responsive vehicle control
WO2020097011A2 (en) Vehicle trajectory modification for following
CN114430722A (en) Security analysis framework
US11648939B2 (en) Collision monitoring using system data
US11628850B2 (en) System for generating generalized simulation scenarios
US11526721B1 (en) Synthetic scenario generator using distance-biased confidences for sensor data
US11697412B2 (en) Collision monitoring using statistic models
US11415997B1 (en) Autonomous driving simulations based on virtual simulation log data
US20230150549A1 (en) Hybrid log simulated driving
WO2020264276A1 (en) Synthetic scenario generator based on attributes
US20220269836A1 (en) Agent conversions in driving simulations
CN114787894A (en) Perceptual error model
US20220266859A1 (en) Simulated agents based on driving log data
US11814059B1 (en) Simulating autonomous driving using map data and driving data
US20240096232A1 (en) Safety framework with calibration error injection
US11814070B1 (en) Simulated driving error models
US11952001B1 (en) Autonomous vehicle safety system validation
US11938966B1 (en) Vehicle perception system validation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination