CN114730521A - Collision monitoring using statistical models - Google Patents

Collision monitoring using statistical models

Info

Publication number
CN114730521A
CN114730521A (application number CN202080078816.XA)
Authority
CN
China
Prior art keywords
vehicle
determining
estimated location
probability
additional
Prior art date
Legal status
Pending
Application number
CN202080078816.XA
Other languages
Chinese (zh)
Inventor
A·S·克雷戈
A·加塞姆扎德霍什格鲁迪
S·A·莫达拉瓦拉萨
A·C·雷什卡
S·雷兹万贝巴哈尼
L·秦
Current Assignee
Zoox Inc
Original Assignee
Zoox Inc
Priority date
Filing date
Publication date
Priority claimed from US 16/682,971 (US11697412B2)
Priority claimed from US 16/683,005 (US11648939B2)
Application filed by Zoox Inc
Publication of CN114730521A

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • G08G1/166Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units, or advanced driver assistance systems for ensuring comfort, stability and safety or drive control systems for propelling or retarding the vehicle
    • B60W30/08Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095Predicting travel path or likelihood of collision
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units, or advanced driver assistance systems for ensuring comfort, stability and safety or drive control systems for propelling or retarding the vehicle
    • B60W30/08Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095Predicting travel path or likelihood of collision
    • B60W30/0953Predicting travel path or likelihood of collision the prediction being responsive to vehicle dynamic parameters
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units, or advanced driver assistance systems for ensuring comfort, stability and safety or drive control systems for propelling or retarding the vehicle
    • B60W30/08Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095Predicting travel path or likelihood of collision
    • B60W30/0956Predicting travel path or likelihood of collision the prediction being responsive to traffic or environmental parameters
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • B60W40/04Traffic conditions
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001Planning or execution of driving tasks
    • B60W60/0015Planning or execution of driving tasks specially adapted for safety
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001Planning or execution of driving tasks
    • B60W60/0027Planning or execution of driving tasks using trajectory prediction for other traffic participants
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/93Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles

Abstract

Techniques and methods for performing collision monitoring using an error model. For example, a vehicle may generate sensor data using one or more sensors. The vehicle may then analyze the sensor data using a system to determine a parameter associated with the vehicle and a parameter associated with another object. Further, the vehicle may process the parameter associated with the vehicle using an error model associated with the system to determine a distribution of estimated locations associated with the vehicle. The vehicle may also process the parameter associated with the object using the error model to determine a distribution of estimated locations associated with the object. Using the distributions of estimated locations, the vehicle can determine a probability of collision between the vehicle and the object.

Description

Collision monitoring using statistical models
Cross Reference to Related Applications
The present application claims priority from U.S. patent application No. 16/682,971, entitled "COLLISION MONITORING USING STATISTICAL MODELS," filed on November 13, 2019, and U.S. patent application No. 16/683,005, entitled "COLLISION MONITORING USING SYSTEM DATA," filed on November 13, 2019, the entire contents of which are incorporated herein by reference.
Background
Autonomous vehicles may use autonomous vehicle controllers to guide the autonomous vehicle through the environment. For example, an autonomous vehicle controller may use planning methods, apparatus, and systems to determine a travel path and guide an autonomous vehicle through an environment that includes dynamic objects (e.g., vehicles, pedestrians, animals, etc.) and static objects (e.g., buildings, signs, stopped vehicles, etc.). The autonomous vehicle controller may take into account the predicted behavior of the dynamic object as the vehicle navigates through the environment.
Drawings
The following detailed description is set forth with reference to the accompanying drawings. In the drawings, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference symbols in different drawings indicates similar or identical items or features.
FIG. 1 is a diagram of an environment including a vehicle performing collision monitoring using an error model and/or system data, according to an embodiment of the present disclosure.
FIG. 2 is a diagram of an example of a vehicle analyzing sensor data using an error model to determine an estimated location associated with an object, according to an embodiment of the present disclosure.
FIG. 3 is a diagram of another example of a vehicle analyzing sensor data using an error model to determine an estimated location associated with an object, according to an embodiment of the present disclosure.
FIG. 4 is a diagram of an example of a vehicle analyzing sensor data and system data to determine an estimated location associated with an object, according to an embodiment of the present disclosure.
FIG. 5 is a diagram of another example of a vehicle analyzing sensor data and system data to determine an estimated location associated with an object, according to an embodiment of the present disclosure.
FIG. 6 shows an example graph illustrating a vehicle determining a probability of collision according to an embodiment of the disclosure.
FIG. 7 illustrates generating error model data based at least in part on vehicle data and ground truth data, according to an embodiment of the disclosure.
FIG. 8 illustrates one or more computing devices that generate perception error model data based at least in part on recorded data and ground truth data generated by one or more vehicles, in accordance with an embodiment of the disclosure.
FIG. 9 illustrates generating uncertainty data based at least in part on vehicle data and ground truth data, in accordance with an embodiment of the present disclosure.
FIG. 10 depicts a block diagram of an example system for implementing the techniques described herein, according to an embodiment of the disclosure.
FIG. 11 depicts an example process for performing collision monitoring using an error model in accordance with an embodiment of the present disclosure.
FIG. 12 depicts an example process for determining an estimated location associated with an object using an error model in accordance with an embodiment of the present disclosure.
FIGS. 13A-13B depict an example process for performing collision monitoring using uncertainty in accordance with an embodiment of the present disclosure.
FIG. 14 depicts an example process for determining an estimated location associated with an object using uncertainty in accordance with an embodiment of the present disclosure.
Detailed Description
As described above, the autonomous vehicle may use the controller to guide the autonomous vehicle through the environment. For example, the controller may use the planning methods, apparatus, and systems to determine a travel path and guide an autonomous vehicle through an environment that includes dynamic objects (e.g., vehicles, pedestrians, animals, etc.) and/or static objects (e.g., buildings, signs, stopped vehicles, etc.). To ensure the safety of occupants and objects, the autonomous vehicle controller may employ a safety factor when operating in the environment. However, in at least some examples, such systems and controllers may comprise complex systems that cannot be fully inspected. Despite this, there may be no method for determining the errors or uncertainties associated with such systems and subsystems, which may be necessary to inform the safe operation of such vehicles in the environment.
Accordingly, the present disclosure is directed to techniques for performing collision monitoring using error models and/or system data by determining such error and/or uncertainty models for complex systems and subsystems. For example, the autonomous vehicle may use an error model and/or system uncertainty to determine estimated locations of both the autonomous vehicle and one or more objects at a later time. In some cases, the estimated locations may include a distribution of probable locations associated with the autonomous vehicle and the one or more objects. The autonomous vehicle may then determine a probability of collision between the autonomous vehicle and the one or more objects using the estimated locations. Based at least in part on the collision probability, the autonomous vehicle may perform one or more actions. In at least some examples, such a probability may be determined based on a determination made in accordance with any of the techniques described in detail herein.
In more detail, the autonomous vehicle may traverse the environment and generate sensor data using one or more sensors. In some cases, the sensor data may include data captured by sensors such as time-of-flight sensors, position sensors (e.g., GPS, compass, etc.), inertial sensors (e.g., Inertial Measurement Unit (IMU), accelerometer, magnetometer, gyroscope, etc.), lidar sensors, radar sensors, sonar sensors, infrared sensors, cameras (e.g., RGB, IR, intensity, depth, etc.), microphone sensors, environmental sensors (e.g., temperature sensors, humidity sensors, light sensors, pressure sensors, etc.), ultrasonic transducers, wheel encoders, etc. The autonomous vehicle may then analyze the sensor data using one or more components (e.g., one or more systems) while navigating through the environment.
For example, one or more of the components of the autonomous vehicle may use the sensor data to generate a trajectory of the autonomous vehicle. In some cases, one or more components may also use the sensor data to determine pose data associated with the position of the autonomous vehicle. For example, one or more components may use sensor data to determine position data, coordinate data, and/or orientation data of the vehicle in the environment. In some cases, the pose data may include x-y-z coordinates and/or may include pitch (pitch), roll (roll), and yaw (yaw) data associated with the vehicle.
Further, one or more components of the autonomous vehicle may use the sensor data to perform operations such as detecting, identifying, segmenting, classifying, and/or tracking objects within the environment. For example, objects such as pedestrians, bicycles/bicyclists, motorcycles/motorcyclists, buses, trams, trucks, animals, and/or the like may be present in the environment. The one or more components may use the sensor data to determine a current location of an object and an estimated location of the object at a future time (e.g., one second in the future, five seconds in the future, etc.).
The autonomous vehicle may then determine a probability of collision between the autonomous vehicle and the object using the trajectory of the autonomous vehicle and the estimated location of the object. For example, the autonomous vehicle may determine whether the estimated location of the object at the future time intersects the location of the autonomous vehicle along the trajectory at the future time. To improve safety, autonomous vehicles may use distance and/or time buffering when making decisions. For example, the autonomous vehicle may determine that a high probability of collision exists when the location of the object at the future time is within a threshold distance (e.g., a distance buffer) from the location of the autonomous vehicle.
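By way of a non-limiting illustration, the following sketch shows such a distance-buffer check; the buffer value, coordinate format, and function names are assumptions used only for illustration and are not taken from the disclosure.

```python
import math

# Minimal sketch of a distance-buffer check; the buffer value is assumed.
DISTANCE_BUFFER_M = 3.0

def high_collision_probability(vehicle_xy, object_xy):
    # A high probability of collision may be flagged when the object's future
    # location falls within the distance buffer of the vehicle's future location.
    return math.dist(vehicle_xy, object_xy) <= DISTANCE_BUFFER_M

print(high_collision_probability((0.0, 0.0), (2.0, 1.5)))  # True: within 3.0 m
```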
Further, the autonomous vehicle may use an error model associated with the component and/or an uncertainty associated with an output of the component to determine a probability of collision. The error model associated with the component may represent one or more errors and/or error percentages associated with the output of the component. For example, a perception error model may produce perception errors associated with perception parameters (e.g., outputs) of the perception component, a prediction error model may produce prediction errors associated with prediction parameters (e.g., outputs) from the prediction component, and so on. In some cases, the error may be represented by, but is not limited to, a look-up table determined based at least in part on statistical aggregation using ground truth data, a function (e.g., based on the error of the input parameters), or any other model or data structure that maps an output to a particular error. In at least some examples, such an error model may map a particular error to a probability/frequency of occurrence. As will be described, in some examples, such error models may be determined for certain categories of data (e.g., different error models of a perception system for detections within a first range of distances and detections within a second range of distances, based on the velocity of the vehicle, the object, etc.).
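As a non-limiting illustration, the following sketch shows one way such a look-up-table error model could be organized, with separate error distributions for different categories of detections; the class name, bin edges, error values, and probabilities are assumptions rather than values from the disclosure.

```python
from bisect import bisect_right

class LookupErrorModel:
    """Maps a component output category to a distribution of errors."""

    def __init__(self, bin_edges, error_distributions):
        # bin_edges: sorted upper edges of the categories (e.g., detection range in meters).
        # error_distributions: one {error_value: probability} mapping per category.
        self.bin_edges = bin_edges
        self.error_distributions = error_distributions

    def errors_for(self, category_value):
        """Return the {error: probability} mapping for the bin containing category_value."""
        index = min(bisect_right(self.bin_edges, category_value),
                    len(self.error_distributions) - 1)
        return self.error_distributions[index]

# Example: a perception speed-error model with larger errors for farther detections.
perception_speed_errors = LookupErrorModel(
    bin_edges=[20.0, 50.0],                    # detection-range bins (m)
    error_distributions=[
        {-0.5: 0.1, 0.0: 0.8, 0.5: 0.1},       # near detections (m/s error)
        {-1.0: 0.2, 0.0: 0.6, 1.0: 0.2},       # mid-range detections
        {-2.0: 0.25, 0.0: 0.5, 2.0: 0.25},     # far detections
    ],
)
print(perception_speed_errors.errors_for(35.0))  # {-1.0: 0.2, 0.0: 0.6, 1.0: 0.2}
```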
In some cases, the error model may include a static error model. In other cases, the error model may include a dynamic error model that is updated by the autonomous vehicle and/or one or more computing devices. For example, the one or more computing devices may continuously receive vehicle data from the autonomous vehicle. The one or more computing devices may then use the vehicle data as well as the ground truth data to update an error model, as described in more detail below. After updating the error model, the one or more computing devices may send the updated error model to the autonomous vehicle.
A component may analyze the sensor data and, based at least in part on the analysis, generate an output that may be representative of one or more parameters. The error model may then indicate the error percentage associated with an output of a component of the vehicle, such as a speed associated with the object. For example, the component may determine that the velocity of the object in the environment is 10 meters per second. Using the error model, the autonomous vehicle may determine a percentage of error of X% (e.g., 20%), resulting in a speed range of +/-X% (e.g., a range between 8 meters per second and 12 meters per second with a 20% error percentage). In some cases, the velocity range may be associated with a probability distribution, e.g., a Gaussian distribution, that indicates that some portions of the range have a higher probability of occurrence than other portions of the range. In some examples, the probability distribution may be grouped into a plurality of discrete probabilities. For example, 8 meters per second and 12 meters per second may be associated with a 5% probability, 9 meters per second and 11 meters per second may be associated with a 20% probability, and 10 meters per second may be associated with a 45% probability.
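The example above can be sketched as follows; the 5%/20%/45% weights mirror the example in the text, while the function name and the choice to return a discrete mapping are assumptions.

```python
def speed_distribution(measured_speed, error_fraction):
    """Spread a measured speed into discrete (speed, probability) pairs.

    Mirrors the 10 m/s, 20% example above; a real error model would supply the
    weights from ground-truth statistics. The listed probabilities sum to 0.95,
    matching the values given in the text; the remaining mass would fall on
    other speeds within the range.
    """
    delta = measured_speed * error_fraction
    return {
        measured_speed - delta: 0.05,
        measured_speed - delta / 2: 0.20,
        measured_speed: 0.45,
        measured_speed + delta / 2: 0.20,
        measured_speed + delta: 0.05,
    }

print(speed_distribution(10.0, 0.20))
# {8.0: 0.05, 9.0: 0.2, 10.0: 0.45, 11.0: 0.2, 12.0: 0.05}
```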
To use the error model, the autonomous vehicle may determine an estimated location of the object at a future time based at least in part on the output from the component and the error model associated with the component. The estimated location may correspond to a probability distribution, e.g., a Gaussian distribution, of locations. In some cases, the autonomous vehicle determines the estimated location of the object by initially determining a respective probability distribution associated with each of the components and/or parameters. The autonomous vehicle may then use the probability distributions for all components and/or parameters to determine an estimated location. For example, the autonomous vehicle may aggregate or combine the probability distributions for all components and/or parameters to determine an estimated location. Aggregating and/or combining the probability distributions may include multiplying the probability distributions, summing the probability distributions, and/or applying one or more other formulas to the probability distributions.
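One way to aggregate per-parameter distributions into a distribution of estimated locations is by sampling each parameter and propagating a simple motion model, as in the following sketch; the constant-velocity model, sample counts, and parameter values are illustrative assumptions.

```python
import math
import random

def sample_discrete(distribution):
    values, weights = zip(*distribution.items())
    return random.choices(values, weights=weights, k=1)[0]

def estimated_location_samples(position_dist, speed_dist, heading_dist,
                               horizon_s, num_samples=10_000):
    """Return sampled (x, y) locations at horizon_s seconds in the future."""
    samples = []
    for _ in range(num_samples):
        x, y = sample_discrete(position_dist)
        speed = sample_discrete(speed_dist)
        heading = sample_discrete(heading_dist)
        samples.append((x + speed * math.cos(heading) * horizon_s,
                        y + speed * math.sin(heading) * horizon_s))
    return samples

# Example with made-up parameter distributions for one object.
locations = estimated_location_samples(
    position_dist={(0.0, 0.0): 0.6, (0.2, 0.0): 0.4},
    speed_dist={9.0: 0.2, 10.0: 0.6, 11.0: 0.2},
    heading_dist={0.0: 0.7, 0.05: 0.3},
    horizon_s=1.0,
)
```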
Additionally or alternatively, in some cases, the autonomous vehicle may first use the output from the components to determine an initial estimated location associated with the object. The autonomous vehicle then uses an error model to determine a total error for each of the outputs. The autonomous vehicle may determine the total error by aggregating and/or combining the errors from each of the error models of the components. Next, the autonomous vehicle may determine an estimated position using the total error and the initial estimated position. In this case, the estimated position may include a distribution of possible positions around the initial estimated position.
For a first example, an autonomous vehicle may use one or more components to analyze sensor data to determine a parameter associated with an object. The parameters may include, but are not limited to, the type of object, the current location of the object, the speed of the object, the direction of travel of the object, and the like. Using the error model, the autonomous vehicle may then determine a probability distribution associated with the type of object, a probability distribution associated with the current location of the object, a probability distribution associated with the speed of the object, a probability distribution associated with the direction of travel of the object, and so forth. The autonomous vehicle may then use the probability distribution of the parameters to determine an estimated location of the object at a future time. In this first example, each of the estimated locations (or any one or more) may be represented as a probability distribution of locations.
For a second example, the autonomous vehicle may use one or more components to analyze the sensor data to again determine the parameter associated with the object. The autonomous vehicle may then use these parameters to determine an initial estimated location of the object at a future time. Further, in some examples, the autonomous vehicle may use an error model associated with the parameters to determine a total error and/or a percentage of error associated with determining the initial estimated location of the object. The autonomous vehicle may then determine an estimated position of the object using the initial estimated position and the total error and/or percentage of error. Again, in this second example, each of the estimated locations (or any one or more) may be represented as a probability distribution of locations.
In any of the examples above, the autonomous vehicle may use a similar process to determine an estimated location of one or more other objects located within the environment. Further, the autonomous vehicle may use a similar process to determine an estimated location of the autonomous vehicle at a future time. For example, an autonomous vehicle may analyze sensor data using one or more components to determine parameters associated with the autonomous vehicle. The parameters may include, but are not limited to, a location of the autonomous vehicle, a speed of the autonomous vehicle, a direction of travel of the autonomous vehicle, and the like. The autonomous vehicle may then employ an error model associated with the parameter to determine an estimated location of the autonomous vehicle at a future time. In some examples, each of the estimated locations (or any one or more) may correspond to a probability distribution of the location of the autonomous vehicle at a future time.
In addition to or instead of using an error model to determine an estimated location of the autonomous vehicle and/or the object, the autonomous vehicle may use system data (e.g., an uncertainty model) associated with the components and/or the output to determine the estimated location. The uncertainty model associated with a parameter may correspond to a distribution indicating how much the output should be trusted and/or a measure of how correct the system believes the output to be. For example, if the component analyzes the sensor data multiple times to determine the location of the object, the component will output a low uncertainty if the outputs include a small distribution of values (e.g., within a first range) around the location indicated by the ground truth data. Further, if the outputs include a large distribution of values around the location indicated by the ground truth data (e.g., within a second range that is greater than the first range), then the component will output a large uncertainty. The autonomous vehicle may use an uncertainty model to determine an estimated location of the object at a future time.
For a first example, an autonomous vehicle may use one or more components to analyze sensor data to again determine parameters associated with an object. The autonomous vehicle may then determine an uncertainty model associated with determining the type of the object, an uncertainty model associated with determining the current location of the object, an uncertainty model associated with determining the velocity of the object, an uncertainty model associated with determining the direction of travel of the object, and so forth. The autonomous vehicle may then determine an estimated location of the object at a future time using an uncertainty model associated with the parameter. In this first example, the estimated location may correspond to a probability distribution of locations.
For a second example, the autonomous vehicle may use one or more components to analyze the sensor data to again determine the parameter associated with the object. The autonomous vehicle may then use these parameters to determine an initial estimated location of the object at a future time. Further, the autonomous vehicle may determine an estimated position of the object at a future time using an uncertainty model associated with the component determination parameters and the estimated position. Again, in this second example, the estimated location may correspond to a probability distribution of locations.
In any of the examples above, the autonomous vehicle may use a similar process to determine an estimated location of one or more other objects located within the environment. Further, the autonomous vehicle may use a similar process to determine an estimated location of the autonomous vehicle at a future time. For example, an autonomous vehicle may use one or more components to analyze sensor data to determine parameters associated with the autonomous vehicle. The parameters may include, but are not limited to, a location of the autonomous vehicle, a speed of the autonomous vehicle, a direction of travel of the autonomous vehicle, etc. (e.g., any and/or all of these parameters may be derived from an output trajectory of the planning system). The autonomous vehicle may then determine an estimated location of the autonomous vehicle at a future time using an uncertainty model associated with the determined parameters. In some examples, the estimated location may correspond to a probability distribution of a location of the autonomous vehicle at a future time.
In some cases, the autonomous vehicle may then determine the probability of collision using the estimated location of the autonomous vehicle and the estimated location of the object. For example, the probability of collision between the autonomous vehicle and the object may be calculated using a geometric overlap region between the estimated location of the autonomous vehicle (e.g., a probability distribution of locations) and the estimated location of the object (e.g., a probability distribution of locations). In some cases, if multiple objects are located in the environment, the autonomous vehicle may use the determined collision probability for each of the objects to determine a total collision probability associated with the autonomous vehicle. For example, the total collision probability may include a sum of collision probabilities for each of the objects.
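A minimal Monte Carlo sketch of this geometric-overlap computation follows; modeling each estimated location as an axis-aligned 2-D Gaussian, the combined footprint radius, and the specific parameter values are all assumptions.

```python
import math
import random

def collision_probability(vehicle_mean, vehicle_std, object_mean, object_std,
                          footprint_radius_m=2.0, num_samples=20_000):
    """Estimate the probability that sampled vehicle and object locations overlap."""
    hits = 0
    for _ in range(num_samples):
        vx = random.gauss(vehicle_mean[0], vehicle_std[0])
        vy = random.gauss(vehicle_mean[1], vehicle_std[1])
        ox = random.gauss(object_mean[0], object_std[0])
        oy = random.gauss(object_mean[1], object_std[1])
        if math.hypot(vx - ox, vy - oy) <= footprint_radius_m:
            hits += 1
    return hits / num_samples

# Total collision probability across objects, summed as described in the text.
p_object_1 = collision_probability((0.0, 0.0), (0.5, 0.5), (1.5, 0.5), (0.7, 0.7))
p_object_2 = collision_probability((0.0, 0.0), (0.5, 0.5), (12.0, 6.0), (0.7, 0.7))
total_collision_probability = p_object_1 + p_object_2
```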
The autonomous vehicle may then determine whether the probability of collision is equal to or greater than a threshold (e.g., 0.5%, 1%, 5%, and/or some other threshold percentage). In some cases, if the probability of collision is less than the threshold, the autonomous vehicle may continue to navigate along the current route of the autonomous vehicle. However, in some cases, if the autonomous vehicle determines that the probability of collision is equal to or greater than the threshold, the autonomous vehicle may take one or more actions. For example, the autonomous vehicle may change the speed of the autonomous vehicle (e.g., decelerate), change the route of the autonomous vehicle, park in a safe location, and the like.
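The threshold check and action selection just described might look like the following; the threshold value and action names are placeholders.

```python
COLLISION_PROBABILITY_THRESHOLD = 0.01  # e.g., 1%; the value is assumed

def choose_action(total_collision_probability):
    if total_collision_probability < COLLISION_PROBABILITY_THRESHOLD:
        return "continue_current_trajectory"
    # Otherwise take a mitigating action: slow down, change the route, or stop.
    return "decelerate"

print(choose_action(0.002))  # continue_current_trajectory
print(choose_action(0.08))   # decelerate
```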
Additionally or alternatively, in some cases, the autonomous vehicle may determine an overall uncertainty associated with navigating the autonomous vehicle based at least in part on an uncertainty model for determining an estimated location of the autonomous vehicle and an uncertainty model for determining an estimated location of one or more objects. The autonomous vehicle may then generate different routes and perform a similar process to determine the overall uncertainty associated with the different routes. Further, the autonomous vehicle may select the route that includes the lowest uncertainty.
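A sketch of selecting among candidate routes by overall uncertainty is shown below; the additive aggregation, route names, and uncertainty values are assumptions.

```python
def overall_uncertainty(vehicle_location_uncertainty, object_location_uncertainties):
    # One simple aggregation: sum the vehicle's and the objects' location uncertainties.
    return vehicle_location_uncertainty + sum(object_location_uncertainties)

candidate_routes = {
    "keep_lane": overall_uncertainty(0.4, [0.8, 0.3]),
    "lane_change_left": overall_uncertainty(0.6, [0.5, 0.2]),
}
selected_route = min(candidate_routes, key=candidate_routes.get)  # "lane_change_left"
```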
In some cases, the autonomous vehicle and/or one or more computing devices use input data (e.g., recorded data and/or simulated data) to generate an error model and/or an uncertainty model. For example, the autonomous vehicle and/or one or more computing devices may compare the input data to ground truth data. In some cases, ground truth data may be manually labeled and/or determined from other validated machine learning components. For example, the input data may include sensor data and/or output data generated by components of the autonomous vehicle. The autonomous vehicle and/or one or more computing devices may compare the input data to ground truth data that may be indicative of actual parameters of objects in the environment. By comparing the input data to ground truth data, the autonomous vehicle and/or one or more computing devices may determine errors and/or uncertainties associated with components and/or parameters and generate a corresponding error model using the errors and/or a corresponding uncertainty model using the uncertainties.
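The comparison of component outputs against ground truth can be sketched as follows, producing a discrete error distribution for one parameter; the quantization resolution, variable names, and sample values are illustrative assumptions.

```python
from collections import Counter

def build_error_model(logged_outputs, ground_truth, resolution=0.5):
    """Return a {quantized_error: probability} mapping for one parameter."""
    errors = [round((output - truth) / resolution) * resolution
              for output, truth in zip(logged_outputs, ground_truth)]
    counts = Counter(errors)
    total = len(errors)
    return {error: count / total for error, count in sorted(counts.items())}

# Example: perceived object speeds (m/s) versus hand-labeled ground truth.
perceived_speeds = [10.2, 9.6, 10.9, 10.1, 8.8, 10.4]
actual_speeds = [10.0, 10.0, 10.0, 10.0, 10.0, 10.0]
speed_error_distribution = build_error_model(perceived_speeds, actual_speeds)
```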
In some cases, the autonomous vehicle and/or one or more computing devices may determine an uncertainty associated with the component. For example, an autonomous vehicle and/or one or more computing devices may input data into a component multiple times in order to receive multiple outputs (e.g., parameters) from the component. The autonomous vehicle and/or one or more computing devices may then analyze the output to determine a distribution associated with the output. Using the distribution, the autonomous vehicle and/or one or more computing devices may determine the uncertainty. For example, if a large distribution exists, the autonomous vehicle and/or one or more computing devices may determine that a large uncertainty exists. However, if there is a small distribution, the autonomous vehicle and/or one or more computing devices may determine that there is a small uncertainty.
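The following sketch estimates a component's uncertainty from the spread of its outputs when the same input is processed multiple times; the stand-in noisy component is an assumption included only to make the example runnable.

```python
import random
from statistics import mean, pstdev

def estimate_uncertainty(component, component_input, num_runs=100):
    outputs = [component(component_input) for _ in range(num_runs)]
    # A small spread corresponds to a small uncertainty, a large spread to a large one.
    return mean(outputs), pstdev(outputs)

def noisy_speed_estimator(_sensor_data):
    return 10.0 + random.gauss(0.0, 0.3)  # stand-in for a perception output (m/s)

estimated_speed, speed_uncertainty = estimate_uncertainty(noisy_speed_estimator, None)
```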
The techniques described herein may be implemented in a variety of ways. Example embodiments are provided below with reference to the following figures. Although discussed in the context of an autonomous vehicle, the methods, apparatus, and systems described herein may be applied to various systems (e.g., sensor systems or robotic platforms) and are not limited to autonomous vehicles. In another example, the techniques may be used in an aeronautical or nautical context, or in any system that uses machine vision (e.g., in a system that uses image data). Further, the techniques described herein may be used with real data (e.g., captured using one or more sensors), simulated data (e.g., generated by a simulator), or any combination of the two.
FIG. 1 is a diagram of an environment 100 including a vehicle 102 performing collision monitoring using an error model and/or system data, according to an embodiment of the present disclosure. For example, the vehicle 102 may navigate along a trajectory 104 within the environment 100. While navigating, the vehicle 102 may generate sensor data 106 using one or more sensors of the vehicle 102 and analyze the sensor data 106 using one or more components 108 (e.g., one or more systems) of the vehicle 102. The one or more components 108 can include, but are not limited to, a positioning component, a perception component, a prediction component, a planning component, and the like. Based at least in part on the analysis, the vehicle 102 may identify at least a first object 110 and a second object 112 located within the environment 100.
Further, the vehicle 102 may analyze the sensor data 106 using the one or more components 108 to determine an estimated location 114 associated with the vehicle 102, an estimated location 116 associated with the first object 110, and an estimated location 118 associated with the second object 112 at a future time. In some cases, the estimated location 114 may include a probability distribution of locations associated with the vehicle 102, the estimated location 116 may include a probability distribution of locations associated with the first object 110, and/or the estimated location 118 may include a probability distribution of locations associated with the second object 112.
For example, the estimated locations 114 may include an estimated location 120(1) associated with the vehicle 102, a first region 120(2) (e.g., a first boundary) of estimated locations associated with a first probability, a second region 120(3) (e.g., a second boundary) of estimated locations associated with a second probability, and a third region 120(4) (e.g., a third boundary) of estimated locations associated with a third probability. In some cases, the first probability is greater than the second probability and the second probability is greater than the third probability. For example, the vehicle 102 may determine that the probability that the vehicle 102 will be located within the first region 120(2) of the estimated location is higher than the probability that it will be located within the second region 120(3) of the possible location. Additionally, the vehicle 102 may determine that the probability that the vehicle 102 will be located within the second region 120(3) of the estimated location is higher than the probability that the vehicle will be located within the third region 120(4) of the estimated location.
It should be noted that although the example of FIG. 1 shows only three separate regions of estimated locations, in other examples there may be any number of regions of estimated locations. Further, regions farther away from the estimated location 120(1) may include a lower probability than regions closer to the estimated location 120(1). The same may apply to each of the estimated locations of the object 110 and the object 112.
Additionally, the estimated location 116 may include an estimated location 122(1) associated with the first object 110, a first region 122(2) (e.g., a first boundary) of the estimated location associated with the first probability, a second region 122(3) (e.g., a second boundary) of the estimated location associated with the second probability, and a third region 122(4) (e.g., a third boundary) of the estimated location associated with the third probability. In some cases, the first probability is greater than the second probability and the second probability is greater than the third probability. For example, the vehicle 102 may determine that the probability that the first object 110 will be located within the first region 122(2) of the estimated location is higher than the probability that it will be located within the second region 122(3) of the possible location. Further, the vehicle 102 may determine that the probability that the first object 110 will be located within the second region 122(3) of the estimated location is higher than the probability that it will be located within the third region 122(4) of the estimated location.
Further, the estimated location 118 may include an estimated location 124(1) associated with the second object 112, a first region 124(2) (e.g., a first boundary) of the estimated location associated with the first probability, a second region 124(3) (e.g., a second boundary) of the estimated location associated with the second probability, and a third region 124(4) (e.g., a third boundary) of the estimated location associated with the third probability. In some cases, the first probability is greater than the second probability and the second probability is greater than the third probability. For example, the vehicle 102 may determine that the probability that the second object 112 will be located within the first region 124(2) of the estimated location is higher than the probability that it will be located within the second region 124(3) of the possible location. Further, the vehicle 102 may determine that the probability that the second object 112 will be located within the second region 124(3) of the estimated location is higher than the probability that it will be located within the third region 124(4) of the estimated location.
In some cases, the vehicle 102 may determine the estimated locations 114-118 using one or more error models 126 associated with the one or more components 108. For example, for the first object 110, the vehicle 102 may analyze the sensor data 106 using the one or more components 108 to determine one or more parameters 128 associated with the first object 110. The one or more parameters 128 may include, but are not limited to, the type of the first object 110, the current location of the first object 110 (and/or the distance to the first object 110), the velocity of the first object 110, and the like. Using one or more error models 126, the vehicle 102 may then determine the estimated location 116 of the first object 110.
For a first example, the vehicle 102 may determine a probability distribution associated with the type of the first object 110 using the first error model 126, determine a probability distribution associated with the current location of the first object 110 using the second error model 126, determine a probability distribution associated with the velocity of the first object 110 using the third error model 126, and so on. For example, for the velocity of the first object 110, the vehicle 102 may determine that the velocity of the first object 110 is 1 meter per second. The vehicle 102 may then use the third error model 126 to determine that the error percentage may be X% (e.g., 20%), resulting in a speed range (e.g., at 20%, the speed is between 0.8 meters per second and 1.2 meters per second). In some cases, the error model 126 may further indicate that some portions of the range have a higher probability of occurrence than other portions of the range. For example, 0.8 meters per second and 1.2 meters per second may be associated with a 5% probability, 0.9 meters per second and 1.1 meters per second may be associated with a 20% probability, and 1 meter per second may be associated with a 45% probability. The vehicle 102 may use a similar process to determine the probability distributions of one or more other parameters 128.
The vehicle 102 may then use the probability distributions of the parameters 128 to determine the estimated location 116 of the first object 110. Further, the vehicle 102 may use a similar process to determine the parameters 128 of the vehicle 102, determine a probability distribution associated with the parameters 128 of the vehicle 102, and use the probability distribution to determine the estimated location 114. Further, the vehicle 102 may use a similar process to determine the parameters 128 of the second object 112, determine a probability distribution associated with the parameters 128 of the second object 112, and use the probability distribution to determine the estimated location 118.
For a second example, the vehicle 102 may use the parameters 128 of the first object 110 in order to determine the estimated location 122(1) of the first object 110. The vehicle 102 may then use the error model 126 associated with the parameter 128 used to determine the estimated location 122(1) in order to determine the total error of the parameter 128. Using the total error and the estimated position 122(1), the vehicle 102 may determine the estimated position 116 of the first object 110. Further, the vehicle 102 may use a similar process to determine the estimated location 114 of the vehicle 102 and the estimated location 118 of the second object 112.
In addition to or instead of using one or more error models 126 to determine the estimated locations 114-118, in other examples, the vehicle 102 may use one or more uncertainty models 130 associated with the one or more components 108 and/or the parameters 128. For example, the output from the one or more components 108 may include one or more uncertainty models 130 associated with determining the parameters 128. For example, the vehicle 102 may determine a first uncertainty model 130 associated with determining the type of the first object 110, a second uncertainty model 130 associated with determining the current location of the first object 110, a third uncertainty model 130 associated with determining the velocity of the first object 110, and so on. The vehicle 102 may then determine the estimated location 116 of the first object 110 using the parameters 128 and the uncertainty models 130.
For the first example, the vehicle 102 may determine a probability distribution associated with the type of the first object 110 using the first uncertainty model 130, determine a probability distribution associated with the current location of the first object 110 using the second uncertainty model 130, determine a probability distribution associated with the velocity of the first object 110 using the third uncertainty model 130, and so on. For example, using the velocity of the first object 110, the vehicle 102 may determine that the velocity of the first object 110 is 1 meter per second. The vehicle 102 may then determine that the uncertainty in the speed of the first object is 20%, and thus, the certainty is 80%. Thus, the vehicle 102 may determine a speed range between 0.8 meters per second and 1.2 meters per second. In some cases, the vehicle 102 may further determine that some portions of the range have a higher probability of occurrence than other portions of the range. For example, 0.8 meters per second and 1.2 meters per second may be associated with a 5% probability, 0.9 meters per second and 1.1 meters per second may be associated with a 20% probability, and 1 meter per second may be associated with a 45% probability. The vehicle 102 may use a similar process to determine the probability distribution of one or more other parameters 128.
The vehicle 102 may then use the probability distributions of the parameters 128 to determine the estimated location 116 of the first object 110. Additionally, the vehicle 102 may use a similar process to determine the parameters 128 of the vehicle 102, determine a probability distribution associated with the parameters 128 of the vehicle 102, and use the probability distribution to determine the estimated location 114. Further, the vehicle 102 may use a similar process to determine the parameters 128 of the second object 112, determine a probability distribution associated with the parameters 128 of the second object 112, and use the probability distribution to determine the estimated location 118.
For a second example, the vehicle 102 may use the parameters 128 of the first object 110 in order to determine the estimated location 122(1) of the first object 110. The vehicle 102 may then use one or more uncertainty models 130 associated with the parameters 128 in order to determine the overall uncertainty associated with the estimated location 122 (1). Using the total uncertainty, the vehicle 102 may determine an estimated location 116 of the first object 110. Further, the vehicle 102 may use a similar process to determine the estimated location 114 of the vehicle 102 and the estimated location 118 of the second object 112.
In either example, after determining the estimated locations 114-118, the vehicle 102 may use the estimated locations 114-118 to determine the collision probability. For example, the vehicle 102 may determine a probability of collision between the vehicle 102 and the first object 110. In some cases, the vehicle 102 may determine the probability of collision using at least a region of geometric overlap between the estimated location 114 of the vehicle 102 and the estimated location 116 of the first object 110.
More specifically, the estimated location 114 of the vehicle 102 may be a Gaussian distribution with parameters μ_v, σ_v (which may be represented by N(μ_v, σ_v²)). Further, the estimated location 116 of the first object 110 may be a Gaussian distribution with parameters μ_o, σ_o (which may be represented by N(μ_o, σ_o²)). The probability of overlap between the estimated location 114 and the estimated location 116 may then be translated to P[x = 0], where x ∈ N(μ_v - μ_o, σ_v² + σ_o²). This may represent a one-dimensional problem associated with determining the probability of overlap.
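In code, this one-dimensional overlap can be sketched as follows; because P[x = 0] is zero for a continuous distribution, the sketch evaluates the probability that the difference x falls within a combined half-extent around zero, where the half-extent is an assumption.

```python
import math

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def overlap_probability_1d(mu_v, sigma_v, mu_o, sigma_o, half_extent=1.0):
    # The difference of the two Gaussian locations is N(mu_v - mu_o, sigma_v^2 + sigma_o^2).
    mu_x = mu_v - mu_o
    sigma_x = math.sqrt(sigma_v ** 2 + sigma_o ** 2)
    return normal_cdf(half_extent, mu_x, sigma_x) - normal_cdf(-half_extent, mu_x, sigma_x)

print(overlap_probability_1d(mu_v=0.0, sigma_v=0.5, mu_o=2.0, sigma_o=0.7))
```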
In some cases, the vehicle 102 may perform a similar process to expand the one-dimensional problem into a two-dimensional problem. Further, the vehicle 102 may perform a similar process in order to determine a probability of collision between the vehicle 102 and the second object 112. In some cases, the vehicle 102 may then use the probability of collision between the vehicle 102 and the first object 110 and the probability of collision between the vehicle 102 and the second object 112 to determine an overall probability of collision. However, in the example of fig. 1, the probability of collision between the vehicle 102 and the second object 112 may be zero due to the absence of geometric overlap between the estimated location 114 and the estimated location 118.
The vehicle 102 may then determine whether the probability of collision is equal to or greater than a threshold. Based at least in part on determining that the probability of collision is less than the threshold, the vehicle 102 may continue to navigate along the trajectory 104. However, based at least in part on determining that the probability of collision is equal to or greater than the threshold, the vehicle 102 may take one or more actions. The one or more actions may include, but are not limited to, navigating along a new trajectory, changing speed (e.g., slowing down), stopping, etc.
It should be noted that, in some examples, the vehicle 102 may perform a similar process in order to determine a probability of collision between the object 110 and the object 112. The vehicle 102 may then perform one or more actions based at least in part on the collision probability. For example, if the vehicle 102 determines that the probability of collision between the object 110 and the object 112 is equal to or greater than a threshold, the vehicle 102 may stop.
FIG. 2 is a diagram of an example of the vehicle 102 analyzing the sensor data 106 using one or more error models 126 to determine an estimated location associated with an object, according to an embodiment of the present disclosure. For example, one or more sensor systems 202 of the vehicle 102 may generate the sensor data 106. The sensor data 106 may then be analyzed by one or more components 108 of the vehicle 102. In the example of fig. 2, the one or more components 108 can include a positioning component 204, a perception component 206, a planning component 208, and a prediction component 210. However, in other examples, the vehicle 102 may not include one or more of the positioning component 204, the perception component 206, the planning component 208, or the prediction component 210. Additionally or alternatively, in some examples, the vehicle 102 may include one or more additional components.
One or more of the components 204-210 can then analyze the sensor data 106 and generate outputs 212-218 based at least in part on the analysis. In some cases, the outputs 212-218 may include parameters associated with the vehicle 102 and/or the object. For a first example, the output 212 from the positioning component 204 may indicate the location of the vehicle 102. For a second example, the output 214 from the perception component 206 may include detection, segmentation, classification, etc., associated with the object. For a third example, the output 216 from the planning component 208 may include a path for the vehicle 102 to travel within the environment.
It should be noted that although not shown in the example of FIG. 2, one or more of the components 204-210 may use the outputs 212-218 from one or more of the other components 204-210 to generate the outputs 212-218. For example, the planning component 208 may use the output 212 from the positioning component 204 to generate the output 216. For another example, the planning component 208 may use the output 214 from the perception component 206 to generate the output 216. In addition to or instead of using the outputs 212-218 from one or more other components 204-210, the components 204-210 may use the probability distributions 220-226 described below.
The one or more error components 228 may be configured to process the outputs 212-218 using the one or more error models 126 to generate the probability distributions 220-226 associated with the outputs 212-218. In some cases, the one or more error components 228 may be included within the components 204-210. For example, the positioning component 204 can analyze the sensor data 106 and generate both the output 212 and the probability distribution 220 associated with the output 212 based at least in part on the analysis. For another example, the perception component 206 can analyze the sensor data 106 and generate both the output 214 and the probability distribution 222 associated with the output 214 based at least in part on the analysis.
Probability distributions 220-226 may be associated with outputs 212-218, respectively. For example, one or more error components 228 can process the output 212 using one or more error models 126 associated with the positioning component 204 to generate the probability distribution 220. For example, if the output 212 indicates a location of the vehicle 102, the probability distribution 220 may represent an estimated location of the vehicle 102 based on the determined location and one or more errors represented by the one or more error models 126 for the positioning component 204. Further, error component 228 can process output 214 using one or more error models 126 associated with perception component 206 to generate probability distribution 222. For example, if the output 214 indicates a velocity of the object, the probability distribution 222 may represent a possible velocity of the object based on the determined velocity and one or more errors represented by the one or more error models 126 for the perception component 206.
The estimation component 230 may be configured to process one or more of the probability distributions 220-226 and/or the sensor data 106 (not shown for clarity) to generate an estimated location 232 associated with the vehicle 102 and/or the object. As discussed herein, the estimated location 232 may include a probability distribution, e.g., a Gaussian distribution, of locations.
FIG. 3 is a diagram of another example of the vehicle 102 analyzing the sensor data 106 using one or more error models 126 to determine an estimated location associated with an object, according to an embodiment of the present disclosure. In the example of FIG. 3, the estimation component 230 may analyze one or more of the outputs 212-218 from one or more of the components 204-210 to determine an estimated location 302 associated with the vehicle 102 and/or the object. The one or more error components 228 may then determine an estimated location 304 of the vehicle 102 and/or the object using the one or more error models 126 and the estimated location 302.
For example, the one or more error components 228 may use the one or more error models 126 to determine one or more total errors and/or a total error percentage associated with the one or more outputs 212-218 of the one or more components 204-210 used to determine the estimated location 302. The one or more error components 228 may then use the one or more total errors and/or the total error percentage to generate the estimated location 304. As discussed herein, the estimated location 304 may include a probability distribution, e.g., a Gaussian distribution, of locations.
FIG. 4 is an illustration of an example of the vehicle 102 analyzing the sensor data 106 using one or more uncertainty models 130 to determine an estimated location associated with an object, in accordance with an embodiment of the disclosure. For example, the one or more uncertainty components 402 may be configured to process the outputs 212-218 using the one or more uncertainty models 130 to generate probability distributions 404-410 associated with the outputs 212-218. In some cases, the one or more uncertainty components 402 may be included within the components 204-210. For example, the positioning component 204 can analyze the sensor data 106 and generate both the output 212 and a probability distribution 404 associated with the output 212 based at least in part on the analysis. For another example, the perception component 206 may analyze the sensor data 106 and, based at least in part on the analysis, generate both the output 214 and the probability distribution 406 associated with the output 214.
It should be noted that although not shown in the example of FIG. 4, one or more of the components 204-210 may use the outputs 212-218 from one or more of the other components 204-210 to generate the outputs 212-218. For example, the planning component 208 may use the output 212 from the positioning component 204 to generate the output 216. For another example, the planning component 208 may use the output 214 from the perception component 206 to generate the output 216. In addition to or instead of using the outputs 212-218 from one or more other components 204-210, the components 204-210 may use the probability distributions 404-410.
Probability distributions 404-410 can be associated with outputs 212-218, respectively. For example, one or more uncertainty components 402 can process the output 212 using one or more uncertainty models 130 associated with the positioning component 204 to generate a probability distribution 404. For example, if the output 212 indicates a location of the vehicle 102, the probability distribution 404 may represent an estimated location of the vehicle 102 that is based at least in part on the determined location and the one or more uncertainty models 130 for the positioning component 204. Further, one or more uncertainty components 402 can process output 214 using one or more uncertainty models 130 associated with perception component 206 to generate probability distribution 406. For example, if output 214 indicates a velocity of the object, probability distribution 406 may represent a possible velocity of the object based on the determined velocity and one or more uncertainty models 130 for perception component 206.
The estimation component 230 may be configured to process one or more of the probability distributions 404-410 and/or the sensor data 106 (not shown for clarity) to generate an estimated location 412 associated with the vehicle 102 and/or the object. As discussed herein, the estimated location 412 may include a probability distribution, e.g., a Gaussian distribution, of the location.
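A minimal sketch of how an uncertainty component might turn a single component output into one of the probability distributions 404-410 is shown below. The dictionary of per-output standard deviations stands in for the one or more uncertainty models 130; the output names and values are hypothetical.

import numpy as np

# Hypothetical uncertainty models: a standard deviation per output type,
# as might be learned offline from logged data.
UNCERTAINTY_MODELS = {
    "localization.position_m": 0.15,
    "perception.object_speed_mps": 0.30,
}

def output_distribution(output_name, value, n_samples=5_000):
    # Sample a probability distribution around a component output,
    # analogous to generating probability distribution 404 or 406.
    sigma = UNCERTAINTY_MODELS[output_name]
    return np.random.normal(loc=value, scale=sigma, size=n_samples)

speed_distribution = output_distribution("perception.object_speed_mps", 4.2)
print(speed_distribution.mean(), speed_distribution.std())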
FIG. 5 is an illustration of another example of the vehicle 102 analyzing the sensor data 106 using one or more uncertainty models 130 to determine an estimated location associated with an object, in accordance with an embodiment of the disclosure. In the example of FIG. 5, the estimation component 230 may analyze one or more of the outputs 212-218 from one or more of the components 204-210 to determine an estimated location 302 associated with the vehicle 102 and/or the object. The one or more uncertainty components 402 can then determine an estimated location 502 of the vehicle 102 and/or the object using the one or more uncertainty models 130 and the estimated location 302.
For example, the one or more uncertainty components 402 can use the one or more uncertainty models 130 of the components 204-210 to determine a total uncertainty associated with the one or more outputs 212-218 of the one or more components 204-210 that are used to determine the estimated location 302. The one or more uncertainty components 402 can then use the total uncertainty to generate the estimated location 502. As discussed herein, the estimated location 502 may include a probability distribution, e.g., a Gaussian distribution, of the location.
FIG. 6 shows an example graph 600 illustrating the determination of the probability of collision by the vehicle 102 over a period of time, in accordance with an embodiment of the disclosure. As shown, the graph 600 represents probability 602 along the y-axis and time 604 along the x-axis. In the example of FIG. 6, the vehicle 102 may determine a collision probability at time 606(1). For example, at time 606(1), the vehicle 102 may determine the probability of collision for three future times: time 606(2), time 606(3), and time 606(4). In some cases, the probability of collision is associated with the vehicle 102 and a single object. In other cases, the probability of collision is associated with the vehicle 102 and more than one object.
As shown, the vehicle 102 may determine that there is a first probability of collision 608(1) at time 606(2), a second probability of collision 608(2) at time 606(3), and no probability of collision at time 606(4). The first probability of collision 608(1) may be associated with a low risk, the second probability of collision 608(2) may be associated with a high risk, and since there is no probability of collision at time 606(4), there is no risk of collision at time 606(4). In some cases, the first probability of collision 608(1) may be low risk based at least in part on the first probability of collision 608(1) being below a threshold probability. Further, the second probability of collision 608(2) may be high risk based at least in part on the second probability of collision 608(2) being equal to or greater than the threshold probability.
Although the example of FIG. 6 illustrates determining the probability of collision at discrete times, in some cases, the vehicle 102 may continuously determine the probability of collision.
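The following sketch illustrates the kind of thresholding described for FIG. 6: each future time receives a risk label depending on how its collision probability compares to a threshold. The threshold value and the time labels are assumptions chosen for illustration.

COLLISION_THRESHOLD = 0.4  # assumed value; the disclosure does not fix a specific threshold

def classify_risk(collision_probabilities):
    # Map each future time to a risk label: zero probability means no risk,
    # below the threshold means low risk, at or above it means high risk.
    labels = {}
    for time_label, probability in collision_probabilities.items():
        if probability == 0.0:
            labels[time_label] = "no risk"
        elif probability < COLLISION_THRESHOLD:
            labels[time_label] = "low risk"
        else:
            labels[time_label] = "high risk"
    return labels

# For example, probabilities computed at time 606(1) for times 606(2)-606(4).
print(classify_risk({"606(2)": 0.12, "606(3)": 0.55, "606(4)": 0.0}))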
FIG. 7 illustrates an example 700 of generating error model data based at least in part on vehicle data and ground truth data, in accordance with an embodiment of the disclosure. As depicted in FIG. 7, one or more vehicles 702 may generate vehicle data 704 and transmit the vehicle data 704 to an error model component 706. As discussed herein, the error model component 706 can determine the error model 126, which can indicate an error associated with a parameter. For example, the vehicle data 704 can be data associated with one or more components of the vehicle 702, such as the perception component 206, the planning component 208, the positioning component 204, the estimation component 230, and the like. By way of example and not limitation, the vehicle data 704 may be associated with the perception component 206, and the vehicle data 704 may include bounding boxes associated with one or more objects detected by the vehicle 702 in the environment.
The error model component 706 can receive ground truth data 708 that can be manually labeled and/or determined from other validated machine learning components. By way of example and not limitation, ground truth data 708 may include verified bounding boxes associated with objects in the environment. By comparing the bounding box of the vehicle data 704 to the bounding box of the ground truth data 708, the error model component 706 can determine errors associated with one or more systems (e.g., components) of the vehicle 702. Such errors may include, for example, differences between ground truth and output, percentage differences, error rates, and the like. In some cases, the vehicle data 704 may include one or more characteristics (also referred to as parameters) associated with the detected entity and/or the environment in which the entity is located. In some examples, the characteristics associated with the entity may include, but are not limited to, an x-position (global position), a y-position (global position), a z-position (global position), a bearing, an entity type (e.g., classification), a velocity of the entity, a range (size) of the entity, and so forth. Characteristics associated with the environment may include, but are not limited to, the presence of another entity in the environment, the status of another entity in the environment, a time of day, a day of the week, a season, weather conditions, dark/light indications, and the like. Thus, the error may be associated with other characteristics (e.g., environmental parameters). In at least some examples, such error models (e.g., different models for different combinations of classification, distance, velocity, etc.) may be determined for various parameter groupings. In at least some examples, such parameters may further include environmental information such as, but not limited to, number of objects, time of day, time of year, weather conditions, and the like.
The error model component 706 can process the plurality of vehicle data 704 and the plurality of ground truth data 708 to determine error model data 710. The error model data 710 can include errors calculated by the error model component 706, represented as errors 712(1)-(3). Additionally, the error model component 706 can determine probabilities associated with the errors 712(1)-(3), represented as probabilities 714(1)-(3), which can be associated with environmental parameters for generating the error models 716(1)-(3) (which can represent the error model 126). By way of example and not limitation, the vehicle data 704 may include bounding boxes associated with objects at a distance of 50 meters from the one or more vehicles 702 in an environment that includes rainfall. The ground truth data 708 may provide verified bounding boxes associated with the objects. The error model component 706 can determine error model data 710 that includes the errors associated with the perception systems of the one or more vehicles 702. The 50-meter distance and the rainfall may be used as environmental parameters to determine which of the error models 716(1)-(3) to use. Once the error model is identified, the error model 716(1)-(3) may provide the errors 712(1)-(3) based on the probabilities 714(1)-(3), wherein the errors 712(1)-(3) associated with higher probabilities 714(1)-(3) are more likely to be selected than the errors 712(1)-(3) associated with lower probabilities 714(1)-(3).
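A possible realization of this flow, grouping observed-versus-ground-truth errors by environmental parameters and then sampling an error in proportion to its probability, is sketched below in Python. The record format, the rounding of errors into bins, and the example values are assumptions made for illustration only.

import random
from collections import Counter, defaultdict

def build_error_models(records):
    # Each record pairs an observed value (vehicle data 704), a verified value
    # (ground truth data 708), and the environmental parameters under which it
    # was collected. Returns, per parameter grouping, (error, probability) pairs.
    grouped = defaultdict(list)
    for observed, truth, env_params in records:
        grouped[env_params].append(round(observed - truth, 2))
    models = {}
    for env_params, errors in grouped.items():
        counts = Counter(errors)
        total = sum(counts.values())
        models[env_params] = [(error, count / total) for error, count in counts.items()]
    return models

def sample_error(model):
    # Errors associated with higher probabilities are more likely to be selected.
    errors, probabilities = zip(*model)
    return random.choices(errors, weights=probabilities, k=1)[0]

# Hypothetical bounding-box extent errors at roughly 50 meters in rain and in clear weather.
records = [(2.1, 2.0, ("50m", "rain")), (2.4, 2.0, ("50m", "rain")),
           (2.1, 2.0, ("50m", "rain")), (2.0, 2.0, ("50m", "clear"))]
models = build_error_models(records)
print(sample_error(models[("50m", "rain")]))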
FIG. 8 shows an example 800 of one or more vehicles 702 generating vehicle data 704 and transmitting the vehicle data 704 to one or more computing devices 802, according to an embodiment of the disclosure. As described above, the error model component 706 can determine a perception error model that can indicate an error associated with a parameter. As described above, the vehicle data 704 may include sensor data generated by one or more sensors of the vehicle 702 and/or perception data generated by one or more perception systems of the vehicle 702. The perception error model may be determined by comparing the vehicle data 704 to the ground truth data 708. The ground truth data 708 may be manually labeled, may be associated with the environment, and may represent known results. Accordingly, deviations of the vehicle data 704 from the ground truth data 708 may be identified as errors in the sensor systems and/or the perception systems of the one or more vehicles 702. By way of example and not limitation, the perception system may identify an object as a bicyclist, where the ground truth data 708 indicates that the object is a pedestrian. By way of another example and not limitation, the sensor system may generate sensor data representing an object as having a width of 2 meters, where the ground truth data 708 indicates that the object has a width of 3 meters.
As described above, the error model component 706 can determine a classification associated with an object represented in the vehicle data 704 and determine other objects in the vehicle data 704 and/or other recorded data that have the same classification. The error model component 706 can then determine a probability distribution associated with an error range associated with the object. Based on the comparison and the error range, the error model component 706 can determine the estimated location 502.
As depicted in FIG. 8, the environment 804 may include objects 806(1)-(3) represented as bounding boxes generated by the perception system. The perception error model data 808 may indicate the environmental parameters as 810(1)-(3) and indicate the errors associated with the environmental parameters as 812(1)-(3).
FIG. 9 illustrates an example 900 of generating uncertainty data based at least in part on vehicle data and ground truth data, in accordance with an embodiment of the disclosure. As depicted in FIG. 9, one or more vehicles 702 can generate vehicle data 704 and transmit the vehicle data 704 to an uncertainty model component 902. The uncertainty model component 902 can determine an uncertainty associated with the component that determines a parameter, as discussed herein. For example, the vehicle data 704 can be data associated with one or more components of the vehicle 702, such as the perception component 206, the planning component 208, the positioning component 204, the prediction component 210, and the like. By way of example and not limitation, the vehicle data 704 may be associated with the perception component 206, and the vehicle data 704 may include bounding boxes associated with objects detected in the environment by the one or more vehicles 702.
The uncertainty model component 902 can receive the ground truth data 708, which can be manually labeled and/or determined from other validated machine learning components. By way of example and not limitation, the ground truth data 708 may include verified bounding boxes associated with objects in the environment. By comparing the vehicle data 704 to the ground truth data 708, the uncertainty model component 902 can determine a consistency with which one or more systems (e.g., components) of the vehicle 702 determine the ground truth. For example, the consistency may indicate the percentage of the time that the parameter represented by the vehicle data 704 is the same as the parameter represented by the ground truth data 708.
The uncertainty model component 902 can then use the consistency to generate uncertainty data 904 associated with the component and/or with the parameter determined by the component. For example, if the consistency indicates a low percentage, the uncertainty data 904 may indicate a high uncertainty. However, if the consistency indicates a high percentage, the uncertainty data 904 may indicate a low uncertainty.
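One simple way such a consistency percentage and its mapping to uncertainty could be computed is sketched below; the tolerance parameter and the inverse mapping from consistency to uncertainty are illustrative assumptions.

def consistency(vehicle_values, ground_truth_values, tolerance=0.0):
    # Fraction of outputs that match ground truth (within a tolerance).
    matches = sum(abs(v - g) <= tolerance
                  for v, g in zip(vehicle_values, ground_truth_values))
    return matches / len(vehicle_values)

def uncertainty_from_consistency(c):
    # Low agreement implies high uncertainty; high agreement implies low uncertainty.
    return 1.0 - c

c = consistency([1, 0, 1, 1], [1, 1, 1, 1])  # e.g., agreement of a classification output
print(c, uncertainty_from_consistency(c))     # 0.75 0.25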
In more detail, the uncertainty model component 902 can identify one or more types of uncertainty. The types of uncertainty can include, but are not limited to, epistemic uncertainty, aleatoric uncertainty (e.g., data-dependent, task-dependent, etc.), and the like. Epistemic uncertainty may be associated with what the component does not know about the data it generates (e.g., a lack of knowledge that could in principle be reduced with more data). Aleatoric uncertainty may be associated with noise or randomness inherent in the data that the component cannot explain away. The uncertainty model component 902 can then use the identified one or more uncertainties to generate the one or more uncertainty models 130.
In some cases, the uncertainty model component 902 can input data into the component multiple times, where one or more nodes of the component change each time the data is input, which results in different outputs from the component. This may result in a range of outputs from the component. In some cases, the component may further output a mean and/or a variance of the outputs. The uncertainty model component 902 can then use the distributions associated with the range, the mean, and/or the variance of the outputs to generate the one or more uncertainty models 130 for the component and/or for the type of output (e.g., the parameter).
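The repeated-pass procedure can be sketched as follows: the same input is run through a component whose internal nodes vary between passes (for example, with dropout left active), and the spread of the outputs is summarized. The toy component and the number of passes are placeholders, not the disclosure's implementation.

import numpy as np

def stochastic_output_statistics(component, inputs, n_passes=50):
    # Run the same input repeatedly through a component whose nodes change
    # between passes, then summarize the resulting range of outputs.
    outputs = np.array([component(inputs) for _ in range(n_passes)])
    return outputs.mean(), outputs.var(), (outputs.min(), outputs.max())

# Toy stand-in for a component with stochastic nodes.
rng = np.random.default_rng(0)
def toy_component(x):
    return 2.0 * x + rng.normal(0.0, 0.1)

mean, variance, value_range = stochastic_output_statistics(toy_component, 3.0)
print(mean, variance, value_range)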
FIG. 10 depicts a block diagram of an example system 1000 for implementing the techniques discussed herein. In at least one example, the system 1000 may include the vehicle 102. In the illustrated example 1000, the vehicle 102 is an autonomous vehicle; however, the vehicle 102 may be any other type of vehicle (e.g., a driver-controlled vehicle that may provide an indication of whether it is safe to perform various maneuvers).
The vehicle 102 may include one or more computing devices 1002, one or more sensor systems 202, one or more emitters 1004, one or more communication connections 1006 (also referred to as communication devices and/or modems), at least one direct connection 1008 (e.g., for physically coupling with the vehicle 102 to exchange data and/or provide power), and one or more drive systems 1010. The one or more sensor systems 202 may be configured to acquire sensor data 106 associated with an environment.
The one or more sensor systems 202 may include time-of-flight sensors, position sensors (e.g., GPS, compass, etc.), inertial sensors (e.g., Inertial Measurement Unit (IMU), accelerometer, magnetometer, gyroscope, etc.), lidar sensors, radar sensors, sonar sensors, infrared sensors, cameras (e.g., RGB, IR, intensity, depth, etc.), microphone sensors, environmental sensors (e.g., temperature sensors, humidity sensors, light sensors, pressure sensors, etc.), ultrasonic transducers, wheel encoders, etc. One or more sensor systems 202 may include multiple instances of each of these or other types of sensors. For example, the time-of-flight sensors may include individual time-of-flight sensors located at the corners, front, rear, sides, and/or top of the vehicle 102. As another example, the camera sensor may include multiple cameras disposed at different locations around the exterior and/or interior of the vehicle 102. One or more sensor systems 202 can provide input to one or more computing devices 1002.
The vehicle 102 may also include one or more emitters 1004 for emitting light and/or sound. The one or more emitters 1004 in this example include internal audio and visual emitters for communicating with passengers of the vehicle 102. By way of example and not limitation, the internal emitters may include speakers, lights, signs, display screens, touch screens, tactile emitters (e.g., vibration and/or force feedback), mechanical actuators (e.g., seat belt tensioners, seat positioners, headrest positioners, etc.), and the like. The one or more emitters 1004 in this example also include external emitters. By way of example and not limitation, in this example, the external emitters include lights that signal a direction of travel or other indicators of vehicle action (e.g., indicator lights, signs, groups of lights, etc.), and one or more audio emitters (e.g., speakers, groups of speakers, horns, etc.) to communicate acoustically with pedestrians or other nearby vehicles, one or more of which may include beam steering technology.
Vehicle 102 may also include one or more communication connections 1006 that enable communication between vehicle 102 and one or more other local or remote computing devices (e.g., remote long-range computing devices) or remote services. For example, one or more communication connections 1006 may facilitate communication with one or more other local computing devices on the vehicle 102 and/or one or more drive systems 1010. Further, one or more communication connections 1006 may allow vehicle 102 to communicate with one or more other nearby computing devices (e.g., other nearby vehicles, traffic signals, etc.).
One or more communication connections 1006 may include a physical and/or logical interface for connecting the one or more computing devices 1002 to another computing device or to one or more external networks 1012 (e.g., the Internet). For example, the one or more communication connections 1006 may enable Wi-Fi-based communications, such as via frequencies defined by the IEEE 802.11 standards, short-range wireless frequencies (e.g., Bluetooth), mobile communications (e.g., 2G, 3G, 4G LTE, 5G, etc.), satellite communications, dedicated short-range communications (DSRC), or any suitable wired or wireless communication protocol that enables the respective computing device to interface with one or more other computing devices. In at least some examples, the one or more communication connections 1006 can include one or more modems, as described in detail above.
In at least one example, the vehicle 102 may include one or more drive systems 1010. In some examples, the vehicle 102 may have a single drive system 1010. In at least one example, if vehicle 102 has multiple drive systems 1010, each drive system 1010 may be positioned at opposite ends (e.g., front and rear, etc.) of vehicle 102. In at least one example, one or more drive systems 1010 may include one or more sensor systems 202 to detect conditions of one or more drive systems 1010 and/or the environment surrounding vehicle 102. By way of example and not limitation, the one or more sensor systems 202 may include one or more wheel encoders (e.g., rotary encoders) for sensing rotation of wheels of the drive system, inertial sensors (e.g., inertial measurement units, accelerometers, gyroscopes, magnetometers, etc.) for measuring orientation and acceleration of the drive system, cameras or other image sensors, ultrasonic sensors for acoustically detecting objects in the environment surrounding the drive system, lidar sensors, radar sensors, and the like. Some sensors, such as wheel encoders, may be unique to one or more drive systems 1010. In some cases, one or more sensor systems 202 on one or more drive systems 1010 may overlap or supplement corresponding systems (e.g., one or more sensor systems 202) of vehicle 102.
The one or more drive systems 1010 may include a number of vehicle systems, including a high voltage battery, a motor to drive the vehicle, a converter to convert direct current from the battery into alternating current for use by other vehicle systems, a steering system including a steering motor and a steering rack (which may be electric), a braking system including hydraulic or electric actuators, a suspension system including hydraulic and/or pneumatic components, a stability control system to distribute braking power to mitigate loss of traction and maintain control, an HVAC system, lighting (e.g., headlights/taillights to illuminate the surroundings outside of the vehicle), and one or more other systems (e.g., a cooling system, a safety system, an onboard charging system, other electrical components such as a DC/DC converter, a high voltage connector, a high voltage cable, a charging system, a battery system, a vehicle charging port, etc.). Additionally, the one or more drive systems 1010 may include a drive system controller that may receive and pre-process data from the one or more sensor systems 202 and control the operation of the various vehicle systems. In some examples, the drive system controller may include one or more processors and memory communicatively coupled with the one or more processors. The memory may store one or more modules to perform various functions of the one or more drive systems 1010. Additionally, the one or more drive systems 1010 also include one or more communication connections that enable the respective drive system to communicate with one or more other local or remote computing devices.
The one or more computing devices 1002 can include one or more processors 1014 and a memory 1016 communicatively coupled with the one or more processors 1014. In the illustrated example, the memory 1016 of the one or more computing devices 1002 can store the positioning component 204, the perception component 206, the prediction component 210, the estimation component 230, the planning component 208, the one or more error components 228, the one or more uncertainty components 402, and the one or more sensor systems 202. Although described as residing in the memory 1016 for purposes of illustration, it is contemplated that the positioning component 204, the perception component 206, the prediction component 210, the estimation component 230, the planning component 208, the one or more error components 228, the one or more uncertainty components 402, and the one or more system controllers 1018 can additionally or alternatively be accessible by the one or more computing devices 1002 (e.g., stored in a different component of the vehicle 102 and/or accessible by the vehicle 102 (e.g., stored remotely)).
In the memory 1016 of the one or more computing devices 1002, the positioning component 204 may include functionality to receive data from the one or more sensor systems 202 to determine the location of the vehicle 102. For example, the positioning component 204 can include and/or request/receive a three-dimensional map of the environment and can continuously determine a location of the autonomous vehicle within the map. In some cases, the positioning component 204 can receive time-of-flight data, image data, lidar data, radar data, sonar data, IMU data, GPS data, wheel encoder data, any combination thereof, or the like, and can use SLAM (simultaneous localization and mapping) or CLAMS (calibration, localization and mapping, simultaneously) to accurately determine the location of the autonomous vehicle. In some cases, the positioning component 204 may provide data to various components of the vehicle 102 to determine an initial position of the autonomous vehicle for generating a trajectory, as discussed herein.
The perception component 206 may include functionality to perform object detection, segmentation, and/or classification. In some examples, the perception component 206 may provide processed sensor data indicating the presence of entities proximate to the vehicle 102 and/or classifying entities as entity types (e.g., cars, pedestrians, bicycle riders, buildings, trees, road surfaces, curbs, sidewalks, unknown, etc.). In additional and/or alternative examples, the perception component 206 can provide processed sensor data that is indicative of one or more characteristics (also referred to as parameters) associated with the detected entity and/or the environment in which the entity is located. In some examples, the characteristics associated with the entity may include, but are not limited to, an x-position (global position), a y-position (global position), a z-position (global position), a bearing, an entity type (e.g., classification), a velocity of the entity, a range (size) of the entity, and so forth. Characteristics associated with an environment may include, but are not limited to, the presence of another entity in the environment, the status of another entity in the environment, a time of day, a day of the week, a season, weather conditions, geographic location, dark/light indications, and the like.
The perception component 206 can include functionality to store perception data generated by the perception component 206. In some cases, the perception component 206 may determine a tracking corresponding to an object that has been classified as an object type. For illustrative purposes only, the perception component 206 may acquire one or more images of the environment using the one or more sensor systems 202. The one or more sensor systems 202 may acquire images of an environment that includes an object, such as a pedestrian. The pedestrian may be at a first position at time T and at a second position at time T + t (e.g., movement during a span of time t after time T). In other words, the pedestrian may move from the first position to the second position within this time span. Such motion may, for example, be recorded as stored perception data associated with the object.
In some examples, the stored perception data may include fused perception data acquired by the vehicle. The fused perception data may include a fusion or other combination of sensor data from the one or more sensor systems 202, such as image sensors, lidar sensors, radar sensors, time-of-flight sensors, sonar sensors, global positioning system sensors, internal sensors, and/or any combination of these sensors. The stored perception data may additionally or alternatively include classification data that includes semantic classifications of objects (e.g., pedestrians, vehicles, buildings, roads, etc.) represented in the sensor data. The stored perception data may additionally or alternatively include tracking data (a set of historical locations, orientations, sensor features, etc. associated with the object over time) corresponding to movement of objects in the environment that are classified as dynamic objects. The tracking data may include the tracking of a plurality of different objects over time. The tracking data may be mined to identify images of certain types of objects (e.g., pedestrians, animals, etc.), whether the object is stationary (e.g., standing still) or moving (e.g., walking, running, etc.). In this example, the computing device determines a tracking corresponding to the pedestrian.
The prediction component 210 can generate one or more probability maps representing predicted probabilities of estimated positions of one or more objects in the environment. For example, the prediction component 210 may generate one or more probability maps for vehicles, pedestrians, animals, etc. within a threshold distance from the vehicle 102. In some cases, the prediction component 210 can measure the tracking of the object and generate a discretized prediction probability map, a heat map, a probability distribution, a discretized probability distribution, and/or a trajectory of the object based on the observed and predicted behavior. In some cases, the one or more probability maps may represent the intent of one or more objects in the environment.
The planning component 208 may determine a path for the vehicle 102 to follow to traverse the environment. For example, the planning component 208 may determine various routes and paths at various levels of detail. In some cases, the planning component 208 may determine a route to travel from a first location (e.g., a current location) to a second location (e.g., a target location). For purposes of discussion, a route may be a sequence of waypoints for traveling between the two locations. By way of non-limiting example, waypoints include streets, intersections, coordinates of a Global Positioning System (GPS), and the like. Further, the planning component 208 may generate instructions for guiding the vehicle 102 along at least a portion of the route from the first location to the second location. In at least one example, the planning component 208 can determine how to guide the vehicle 102 from a first waypoint in the sequence of waypoints to a second waypoint in the sequence of waypoints. In some examples, the instruction may be a path or a portion of a path. In some examples, multiple paths may be generated substantially simultaneously (i.e., within technical tolerances) in accordance with a receding horizon technique. One of the multiple paths within the receding horizon having the highest confidence level may be selected to operate the vehicle.
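As a small illustration of the selection criterion described above, the sketch below picks the candidate path with the highest confidence from a set generated within the same horizon; the waypoint names and confidence values are hypothetical.

def select_path(candidate_paths):
    # Each candidate is a (path, confidence) pair; return the path whose
    # confidence level is highest.
    best_path, _ = max(candidate_paths, key=lambda pair: pair[1])
    return best_path

candidates = [(["wp1", "wp2a", "wp3"], 0.72),
              (["wp1", "wp2b", "wp3"], 0.91)]
print(select_path(candidates))  # selects the 0.91-confidence path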
In other examples, the planning component 208 may alternatively or additionally use data from the perception component 206 and/or the prediction component 210 to determine a path that the vehicle 102 is to follow to traverse the environment. For example, the planning component 208 and/or the prediction component 210 may receive data from the perception component 206 regarding objects associated with the environment. Using this data, the planning component 208 may determine a route to travel from a first location (e.g., a current location) to a second location (e.g., a target location) to avoid objects in the environment. In at least some examples, the planning component 208 may determine that no such collision-free path exists, and in turn may provide a path that brings the vehicle 102 to a safe stop, thereby avoiding all collisions and/or otherwise mitigating damage.
In at least one example, the one or more computing devices 1002 can include one or more system controllers 1018, which can be configured to control steering, propulsion, braking, safety, emitters, communications, and other systems of the vehicle 102. These system controllers 1018 may communicate with and/or control corresponding systems of the one or more drive systems 1010 and/or other components of the vehicle 102, which may be configured to operate in accordance with a path provided from the planning component 208.
The vehicle 102 may be connected to the one or more computing devices 802 through the one or more networks 1012 and may include one or more processors 1020 and memory 1022 communicatively coupled with the one or more processors 1020. In at least one instance, the one or more processors 1020 can be similar to the one or more processors 1014, and the memory 1022 can be similar to the memory 1016. In the illustrated example, the memory 1022 of the one or more computing devices 802 can store the vehicle data 704, the ground truth data 708, and the error model component 706. Although described as residing in the memory 1022 for purposes of illustration, it is contemplated that the vehicle data 704, the ground truth data 708, and/or the error model component 706 can additionally or alternatively be accessible by the one or more computing devices 802 (e.g., stored in a different component of the one or more computing devices 802 and/or accessible by the one or more computing devices 802 (e.g., stored remotely)).
The one or more processors 1014 of the one or more computing devices 1002 and the one or more processors 1020 of the one or more computing devices 802 may be any suitable processors capable of executing instructions to process data and perform operations as described herein. By way of example, and not limitation, the one or more processors 1014 and 1020 may include one or more Central Processing Units (CPUs), Graphics Processing Units (GPUs), or any other device or portion of a device that processes electronic data to convert that electronic data into other electronic data that may be stored in registers and/or memory. In some examples, integrated circuits (e.g., ASICs, etc.), gate arrays (e.g., FPGAs, etc.), and other hardware devices may also be considered processors, so long as they are configured to implement the coded instructions.
Memory 1016 of the one or more computing devices 1002 and memory 1022 of the one or more computing devices 802 are examples of non-transitory computer-readable media. Memories 1016 and 1022 may store an operating system and one or more software applications, instructions, programs, and/or data to implement the methods described herein and the functions attributed to the various systems. In various embodiments, memories 1016 and 1022 may be implemented using any suitable memory technology, such as static random access memory (SRAM), synchronous dynamic RAM (SDRAM), non-volatile/flash-type memory, or any other type of memory capable of storing information. The architectures, systems, and individual elements described herein may include many other logical, procedural, and physical components, of which those shown in the figures are merely examples associated with the discussion herein.
In some cases, aspects of some or all of the components discussed herein may include any model, algorithm, and/or machine learning algorithm. For example, in some cases, the components in memories 1016 and 1022 may be implemented as neural networks.
FIGS. 11-14 illustrate example processes according to embodiments of the disclosure. The processes are illustrated as logical flow diagrams, in which each operation represents a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes.
FIG. 11 depicts an example process 1100 for performing collision monitoring using an error model, in accordance with an embodiment of the present disclosure. At operation 1102, the process 1100 may include receiving sensor data generated by one or more sensors. For example, the vehicle 102 may navigate along a path from a first location to a second location. While navigating, the vehicle 102 may generate sensor data using one or more sensors of the vehicle 102.
At operation 1104, the process 1100 may include determining, using at least a first system of the vehicle, at least a parameter associated with the vehicle based at least in part on the first portion of the sensor data. For example, the vehicle 102 may analyze a first portion of the sensor data using one or more systems. The one or more systems may include, but are not limited to, a positioning system, a perception system, a planning system, a prediction system, and the like. Based at least in part on the analysis, the vehicle 102 may determine a parameter associated with the vehicle 102. The parameters may include, but are not limited to, a location of the vehicle 102, a speed of the vehicle 102, a direction of travel of the vehicle 102, and the like.
At operation 1106, the process 1100 may include determining an estimated location associated with the vehicle based at least in part on a parameter associated with the vehicle and a first error model associated with the first system. For example, the vehicle 102 may process at least parameters associated with the vehicle 102 using a first error model. As discussed herein, the first error model may represent an error and/or a percentage of error associated with the output of the first system. Based at least in part on the processing, the vehicle 102 may determine an estimated location associated with the vehicle 102 at a later time. As also discussed herein, the estimated location may correspond to a probability distribution of the location.
At operation 1108, the process 1100 may include determining, using at least a second system of the vehicle, at least a parameter associated with the object based at least in part on a second portion of the sensor data. For example, the vehicle 102 may analyze the sensor data and identify the object based at least in part on the analysis. The vehicle 102 may then analyze the second portion of the sensor data using one or more systems. Based at least in part on the analysis, the vehicle 102 may determine a parameter associated with the object. The parameters may include, but are not limited to, the type of object, the location of the object, the speed of the object, the direction of travel of the object, and the like.
At operation 1110, the process 1100 may include determining an estimated location associated with the object based at least in part on the parameter associated with the object and a second error model associated with a second system. For example, the vehicle 102 may process at least the parameter associated with the object using the second error model. As discussed herein, the second error model may represent an error and/or a percentage of error associated with the output of the second system. Based at least in part on the processing, the vehicle 102 may determine an estimated location associated with the object at a later time. As also discussed herein, the estimated location may correspond to a probability distribution of the location.
At operation 1112, the process 1100 may include determining a collision probability based at least in part on the estimated location associated with the vehicle and the estimated location associated with the object. For example, the vehicle 102 may analyze the estimated location associated with the vehicle 102 and the estimated location associated with the object to determine a probability of collision. In some cases, the probability of collision may be based at least in part on an amount of overlap between an estimated location associated with the vehicle 102 and an estimated location associated with the object.
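A Monte Carlo sketch of how such an overlap could be turned into a collision probability is given below: both estimated locations are represented by samples from their probability distributions, and the probability is approximated as the fraction of paired samples that fall within an assumed collision radius. The radius, the sample counts, and the example distributions are illustrative assumptions.

import numpy as np

def collision_probability(vehicle_samples, object_samples, collision_radius=1.5):
    # Approximate the collision probability as the fraction of paired samples
    # from the two estimated-location distributions that are closer than the
    # assumed collision radius.
    n = min(len(vehicle_samples), len(object_samples))
    distances = np.linalg.norm(vehicle_samples[:n] - object_samples[:n], axis=1)
    return float(np.mean(distances < collision_radius))

rng = np.random.default_rng(1)
vehicle_samples = rng.normal([10.0, 0.0], [0.5, 0.5], size=(10_000, 2))
object_samples = rng.normal([11.0, 0.5], [0.8, 0.8], size=(10_000, 2))
print(collision_probability(vehicle_samples, object_samples))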
At operation 1114, the process 1100 may include determining whether the probability of collision is equal to or greater than a threshold. For example, the vehicle 102 may compare the collision probability to a threshold value to determine whether the collision probability is equal to or greater than the threshold value.
If it is determined at operation 1114 that the probability of collision is not equal to or greater than the threshold, at operation 1116, the process 1100 may include continuing to navigate the vehicle along the path. For example, if the vehicle 102 determines that the probability of collision is less than the threshold, the vehicle 102 may continue to navigate along the path.
However, if it is determined at operation 1114 that the probability of collision is equal to or greater than the threshold, at operation 1118, the process 1100 may include causing the vehicle to perform one or more actions. For example, if the vehicle 102 determines that the probability of collision is equal to or greater than a threshold, the vehicle 102 may perform one or more actions. The one or more actions may include, but are not limited to, changing a path of the vehicle 102, changing a speed of the vehicle 102, parking the vehicle 102, and the like.
FIG. 12 depicts an example process 1200 for determining an estimated location associated with an object using an error model in accordance with an embodiment of the present disclosure. At operation 1202, the process 1200 may include receiving sensor data generated by one or more sensors. For example, the vehicle 102 may navigate along a path from a first location to a second location. While navigating, the vehicle 102 may generate sensor data using one or more sensors of the vehicle 102.
At operation 1204, the process 1200 may include determining, using one or more systems of the vehicle, a first parameter associated with the object based at least in part on the sensor data. For example, the vehicle 102 may use one or more systems to analyze the sensor data. The one or more systems may include, but are not limited to, a positioning system, a perception system, a planning system, a prediction system, and the like. Based at least in part on the analysis, the vehicle 102 may determine a first parameter associated with an object (e.g., the vehicle or another object). The first parameter may include, but is not limited to, a position of the object, a speed of the object, a direction of travel of the object, and the like.
At operation 1206, the process 1200 may include determining a first probability distribution associated with the first parameter based at least in part on the first error model. For example, the vehicle 102 may process at least a first parameter using a first error model. As discussed herein, the first error model may represent an error and/or a percentage of error associated with the first parameter. Based at least in part on the processing, the vehicle 102 may determine a first probability distribution associated with the first parameter.
At operation 1208, the process 1200 may include determining, using one or more systems of the vehicle, a second parameter associated with the object based at least in part on at least one of the sensor data or the first probability distribution. For example, the vehicle 102 may analyze at least one of the sensor data or the first probability distribution using one or more systems. In some cases, the vehicle 102 analyzes the first probability distribution when the second parameter is determined using the first parameter. Based at least in part on the analysis, the vehicle 102 may determine a second parameter associated with the object (e.g., the vehicle or another object). The second parameter may include, but is not limited to, a position of the object, a speed of the object, a direction of travel of the object, an estimated position of the object at a future time, and the like.
At operation 1210, the process 1200 may include determining a second probability distribution associated with a second parameter based at least in part on a second error model. For example, the vehicle 102 may process at least the second parameter using a second error model. As discussed herein, the second error model may represent an error and/or a percentage of error associated with the second parameter. Based at least in part on the processing, the vehicle 102 may determine a second probability distribution associated with the second parameter.
At operation 1212, the process 1200 may include determining an estimated location associated with the object based at least in part on at least one of the first probability distribution or the second probability distribution. For example, the vehicle 102 may determine the estimated location based at least in part on the first probability distribution and/or the second probability distribution. In some cases, if the first parameter and the second parameter are independent, e.g., the first parameter indicates a current location of the object and the second parameter indicates a speed of the object, the vehicle 102 may determine the estimated location using both the first probability distribution and the second probability distribution. In some cases, the vehicle 102 may determine the estimated location using the second probability distribution if the second parameter is determined using the first parameter, e.g., if the second parameter indicates an estimated location of the object at a future time, and the estimated location is determined using the first parameter indicating a speed of the object.
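The independent-parameter case can be sketched as follows: every sample from the position distribution is propagated with a sample from the speed distribution along an assumed heading, yielding a distribution over the estimated location at the future time. The heading, time step, and distribution parameters are illustrative assumptions.

import numpy as np

def propagate(position_samples, speed_samples, heading_rad, dt):
    # Combine two independent distributions: propagate each position sample
    # with a speed sample along the heading for dt seconds.
    displacement = speed_samples[:, None] * dt * np.array(
        [np.cos(heading_rad), np.sin(heading_rad)])
    return position_samples + displacement

rng = np.random.default_rng(2)
position_samples = rng.normal([5.0, 2.0], 0.3, size=(10_000, 2))
speed_samples = rng.normal(4.0, 0.5, size=10_000)
future_positions = propagate(position_samples, speed_samples, heading_rad=0.0, dt=1.0)
print(future_positions.mean(axis=0), future_positions.std(axis=0))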
FIGS. 13A-13B depict an example process 1300 for performing collision monitoring using an uncertainty model, in accordance with an embodiment of the present disclosure. At operation 1302, the process 1300 may include receiving sensor data generated by one or more sensors. For example, the vehicle 102 may navigate along a path from a first location to a second location. While navigating, the vehicle 102 may generate sensor data using one or more sensors of the vehicle 102.
At operation 1304, the process 1300 may include determining, using at least a first system of the vehicle, at least a parameter associated with the vehicle based at least in part on the first portion of the sensor data. For example, the vehicle 102 may use one or more systems to analyze a first portion of the sensor data. The one or more systems may include, but are not limited to, a positioning system, a perception system, a planning system, a prediction system, and the like. Based at least in part on the analysis, the vehicle 102 may determine a parameter associated with the vehicle 102. The parameters may include, but are not limited to, a location of the vehicle 102, a speed of the vehicle 102, a direction of travel of the vehicle 102, and the like.
At operation 1306, the process 1300 may include determining a first uncertainty model associated with the first system determining parameters associated with the vehicle. For example, the vehicle 102 may determine a first uncertainty model. In some cases, the vehicle 102 determines the first uncertainty model by receiving the first uncertainty model from the first system. In some cases, the vehicle 102 determines the first uncertainty model using uncertainty data indicative of an uncertainty associated with the first system determining the first parameter.
At operation 1308, the process 1300 may include determining an estimated location associated with the vehicle based at least in part on the parameters associated with the vehicle and the first uncertainty model. For example, the vehicle 102 may process at least parameters associated with the vehicle 102 using a first uncertainty model. Based at least in part on the processing, the vehicle 102 may determine an estimated location associated with the vehicle 102 at a later time. As also discussed herein, the estimated location may correspond to a probability distribution of the location.
At operation 1310, the process 1300 may include determining, using at least a second system of the vehicle, at least a parameter associated with the object based at least in part on a second portion of the sensor data. For example, the vehicle 102 may analyze the sensor data and identify the object based at least in part on the analysis. The vehicle 102 may then analyze the second portion of the sensor data using one or more systems. Based at least in part on the analysis, the vehicle 102 may determine a parameter associated with the object. The parameters may include, but are not limited to, the type of object, the location of the object, the speed of the object, the direction of travel of the object, and the like.
At operation 1312, the process 1300 may include determining a second uncertainty model associated with the second system determining parameters associated with the object. For example, the vehicle 102 may determine a second uncertainty model. In some cases, the vehicle 102 determines the second uncertainty model by receiving the second uncertainty model from the second system. In some cases, the vehicle 102 determines a second uncertainty model using uncertainty data indicative of an uncertainty associated with the second system determining the second parameter.
At operation 1314, the process 1300 may include determining an estimated location associated with the object based at least in part on the parameters associated with the object and the second uncertainty model. For example, the vehicle 102 may process at least the parameter associated with the object using the second uncertainty model. Based at least in part on the processing, the vehicle 102 may determine an estimated location associated with the object at a later time. As also discussed herein, the estimated location may correspond to a probability distribution of the location.
At operation 1316, the process 1300 may include determining a probability of collision based at least in part on the estimated location associated with the vehicle and the estimated location associated with the object. For example, the vehicle 102 may analyze the estimated location associated with the vehicle 102 and the estimated location associated with the object to determine a probability of collision. In some cases, the probability of collision may be based at least in part on an amount of overlap between an estimated location associated with the vehicle 102 and an estimated location associated with the object.
At operation 1318, the process 1300 may include determining whether the collision probability is equal to or greater than a threshold. For example, the vehicle 102 may compare the collision probability to a threshold to determine whether the collision probability is equal to or greater than the threshold.
If it is determined at operation 1318 that the probability of collision is not equal to or greater than the threshold, at operation 1320, the process 1300 may include continuing to navigate the vehicle along the path. For example, if the vehicle 102 determines that the probability of collision is less than the threshold, the vehicle 102 may continue to navigate along the path.
However, if it is determined at operation 1318 that the probability of collision is equal to or greater than the threshold, at operation 1322, the process 1300 may include causing the vehicle to perform one or more actions. For example, if the vehicle 102 determines that the probability of collision is equal to or greater than a threshold, the vehicle 102 may perform one or more actions. The one or more actions may include, but are not limited to, changing a path of the vehicle 102, changing a speed of the vehicle 102, parking the vehicle 102, and the like.
It should be noted that, in some examples, the vehicle 102 may perform operations 1304-1314 using multiple possible routes associated with the vehicle 102. In such an example, the vehicle 102 may select the route that includes the lowest uncertainty and/or the lowest probability of collision.
FIG. 14 depicts an example process 1400 for determining an estimated location associated with an object using an uncertainty model in accordance with an embodiment of the present disclosure. At operation 1402, the process 1400 may include receiving sensor data generated by one or more sensors. For example, the vehicle 102 may navigate along a path from a first location to a second location. While navigating, the vehicle 102 may generate sensor data using one or more sensors of the vehicle 102.
At operation 1404, the process 1400 may include determining, using one or more systems of the vehicle, a first parameter associated with the object based at least in part on the sensor data. For example, the vehicle 102 may use one or more systems to analyze the sensor data. The one or more systems may include, but are not limited to, a positioning system, a perception system, a planning system, a prediction system, and the like. Based at least in part on the analysis, the vehicle 102 may determine a first parameter associated with an object (e.g., the vehicle or another object). The first parameter may include, but is not limited to, a position of the object, a speed of the object, a direction of travel of the object, and the like.
At operation 1406, the process 1400 may include determining a first probability distribution associated with the first parameter based at least in part on the first uncertainty model. For example, the vehicle 102 may process at least the first parameter using a first uncertainty model. Based at least in part on the processing, the vehicle 102 may determine a first probability distribution associated with the first parameter.
At operation 1408, the process 1400 may include determining, using one or more systems of the vehicle, a second parameter associated with the object based at least in part on at least one of the sensor data or the first probability distribution. For example, the vehicle 102 may analyze at least one of the sensor data or the first probability distribution using one or more systems. In some cases, the vehicle 102 may analyze the first probability distribution when the second parameter is determined using the first parameter. Based at least in part on the analysis, the vehicle 102 may determine a second parameter associated with the object (e.g., the vehicle or another object). The second parameter may include, but is not limited to, a position of the object, a speed of the object, a direction of travel of the object, an estimated position of the object at a future time, and the like.
At operation 1410, the process 1400 may include determining a second probability distribution associated with a second parameter based at least in part on a second uncertainty model. For example, the vehicle 102 may process at least the second parameter using a second uncertainty model. Based at least in part on the processing, the vehicle 102 may determine a second probability distribution associated with the second parameter.
At operation 1412, the process 1400 may include determining an estimated location associated with the object based at least in part on at least one of the first probability distribution or the second probability distribution. For example, the vehicle 102 may determine the estimated location based at least in part on the first probability distribution and/or the second probability distribution. In some cases, if the first parameter and the second parameter are independent, e.g., the first parameter indicates a current location of the object and the second parameter indicates a speed of the object, the vehicle 102 may determine the estimated location using both the first probability distribution and the second probability distribution. In some cases, the vehicle 102 may determine the estimated location using the second probability distribution if the second parameter is determined using the first parameter, e.g., if the second parameter indicates an estimated location of the object at a future time that is determined using the first parameter indicating a speed of the object.
Conclusion
While one or more examples of the technology described herein have been described, various modifications, additions, permutations and equivalents thereof are included within the scope of the technology described herein.
In the description of the examples, reference is made to the accompanying drawings, which form a part hereof and which show, by way of illustration, specific examples of the claimed subject matter. It is to be understood that other examples may be used and that changes or substitutions, e.g., structural changes, may be made. Such examples, changes, or substitutions do not necessarily depart from the scope of the intended claimed subject matter. Although the steps herein may be presented in a particular order, in some cases the order may be changed such that certain inputs are provided at different times or in a different order without changing the functionality of the systems and methods described. The disclosed procedures may also be performed in a different order. Moreover, the various computations herein need not be performed in the order disclosed, and other examples using alternative orderings of the computations may be readily implemented. In addition to being reordered, a computation may also be decomposed into sub-computations that achieve the same result.
Example clauses
A: an autonomous vehicle comprising: one or more sensors; one or more processors; and one or more computer-readable media storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: acquiring sensor data from one or more sensors; determining an estimated location of the autonomous vehicle at a future time based at least in part on the first portion of the sensor data; determining an estimated location of the object at a future time based at least in part on the system of the autonomous vehicle and the second portion of the sensor data; determining a distribution of estimated locations associated with the object based at least in part on an error model and the estimated locations of the object, the error model representing a probability of error associated with the system; determining a probability of collision between the autonomous vehicle and the object based at least in part on a distribution of the estimated location of the autonomous vehicle and the estimated location associated with the object; and causing the autonomous vehicle to perform one or more actions based at least in part on the collision probability.
B: the autonomous vehicle as recited in paragraph a, the operations further comprising receiving an error model from the one or more computing devices, the error model generated using at least sensor data generated by the one or more vehicles.
C: the autonomous vehicle as recited in paragraphs a or B, the operations further comprising: determining a distribution of estimated locations associated with the autonomous vehicle based at least in part on the additional error model and the estimated location of the vehicle; and wherein determining the probability of collision between the autonomous vehicle and the object comprises at least: determining an amount of overlap between a distribution of estimated locations associated with the autonomous vehicle and a distribution of estimated locations associated with the object; and determining a collision probability based at least in part on the amount of overlap.
D: the autonomous vehicle recited in any of paragraphs a through C, wherein: further determining an estimated location of the object at a future time based at least in part on an additional system of the autonomous vehicle; and further determining a distribution of the estimated locations based at least in part on an additional error model, the additional error model representing an error distribution associated with an additional system.
E: a method, comprising: receiving sensor data from one or more sensors of a vehicle; determining an estimated location associated with the vehicle at a time based at least in part on the first portion of the sensor data; determining a parameter associated with the object based at least in part on a system of the vehicle and the second portion of the sensor data; determining an estimated location associated with the object at the time based at least in part on an error model and the parameter associated with the object, the error model representing a probability of error associated with the system; and causing the vehicle to perform one or more actions based at least in part on the estimated location associated with the vehicle and the estimated location associated with the object.
F: a method as recited in paragraph E, further comprising receiving an error model from the one or more computing devices, the error model generated using at least sensor data generated by the one or more vehicles.
G: a method as recited in paragraphs E or F, wherein the parameter comprises at least one of: an object type associated with the object; a location of an object within the environment; the speed of the object; or the direction of travel of an object within the environment.
H: a method as recited in any of paragraphs E through G, wherein determining the estimated location associated with the vehicle at the time comprises at least: determining a parameter associated with the vehicle based at least in part on an additional system of the vehicle and the first portion of the sensor data; and determining the estimated location associated with the vehicle at the time based at least in part on an additional error model and the parameter associated with the vehicle, the additional error model representing a probability of error associated with the additional system.
I: the method as recited in any of paragraphs E to H, further comprising: determining an additional estimated location associated with the object at the time based at least in part on the parameter; and wherein determining the estimated location associated with the object at the time comprises: determining the estimated location associated with the object at the time based at least in part on the error model and the additional estimated location associated with the object.
J: the method as recited in any of paragraphs E through I, wherein determining the estimated location associated with the object at the time comprises: a distribution of estimated locations associated with the object over time is determined based at least in part on the error model and the parameters associated with the object.
K: the method as recited in any of paragraphs E through J, wherein determining the estimated location associated with the vehicle comprises at least: determining a parameter associated with the vehicle based at least in part on the additional system of the vehicle and the first portion of the sensor data; and determining a distribution of estimated locations associated with the vehicle at the time based at least in part on an additional error model and parameters associated with the vehicle, the additional error model representing a probability of error associated with an additional system.
L: the method as recited in any of paragraphs E through K, further comprising: determining an amount of overlap between a distribution of estimated positions associated with the vehicle and a distribution of estimated positions associated with the object; and determining a collision probability based at least in part on the amount of overlap, and wherein causing the vehicle to perform the one or more actions is based at least in part on the collision probability.
M: a method as recited in any of paragraphs E through L, further comprising selecting an error model based at least in part on the parameters.
N: the method as recited in any of paragraphs E to M, further comprising: determining an additional parameter associated with the object based at least in part on an additional system of the vehicle and the second portion of the sensor data; and determining an output associated with the object based at least in part on an additional error model and the additional parameter associated with the object, the additional error model representing an error probability associated with the additional system; and wherein determining the parameter associated with the object comprises: determining the parameter associated with the object based at least in part on the system of the vehicle and the output.
O: a method as recited in any of paragraphs E through N, wherein the system is a perception system and the additional system is a prediction system.
P: the method as recited in any of paragraphs E through O, further comprising: determining, based at least in part on the first portion of sensor data, an additional estimated location associated with the vehicle at an additional time later than the time; determining an additional parameter associated with the object based at least in part on the system of the vehicle and a second portion of the sensor data; determining additional estimated locations associated with the object at additional times based at least in part on the error model and additional parameters associated with the object; and causing the vehicle to perform one or more actions based at least in part on the additional estimated location associated with the vehicle and the additional estimated location associated with the object.
Q: the method as recited in any of paragraphs E to P, further comprising: determining a probability of collision based at least in part on the estimated location associated with the vehicle and the estimated location associated with the object; and wherein causing the vehicle to perform the one or more actions comprises: causing the vehicle to at least one of change a rate of travel or change a course based at least in part on the probability of collision.
R: one or more non-transitory computer-readable media storing instructions that, when executed by one or more processors, cause one or more computing devices to perform operations comprising: receiving sensor data generated by a sensor associated with a vehicle; determining an estimated location associated with the object at the time based at least in part on a portion of the sensor data; determining an error model from a plurality of error models based at least in part on the estimated location; determining a distribution of estimated locations associated with the object based at least in part on the error model and the estimated locations; and determining one or more actions for navigating the vehicle based at least in part on the distribution of estimated locations.
S: one or more non-transitory computer-readable media as recited in paragraph R, the operations further comprising: determining a parameter associated with the vehicle based at least in part on the portion of the sensor data; and determining the estimated location based at least in part on the parameter, and wherein the error model is associated with the parameter.
T: one or more non-transitory computer-readable media as recited in any of paragraphs R or S, wherein determining the error model is further based, at least in part, on one or more of: a classification of the object, a speed of the object, a size of the object, a number of objects in the environment, a weather condition in the environment, a time of day, or a time of year.
U: an autonomous vehicle comprising: one or more sensors; one or more processors; and one or more computer-readable media storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: acquiring sensor data generated by one or more sensors; determining an estimated location of the autonomous vehicle based at least in part on the first portion of the sensor data; determining an estimated location of the object based at least in part on the second portion of the sensor data; determining an uncertainty model associated with the estimated location of the object; determining a distribution of estimated locations associated with the object based at least in part on the uncertainty model and the estimated location of the object; determining a probability of collision between the autonomous vehicle and the object based at least in part on a probability of the estimated location associated with the vehicle and the estimated location associated with the object; and causing the autonomous vehicle to perform one or more actions based at least in part on the collision probability.
V: the autonomous vehicle as recited in paragraph U, the operations further comprising: determining an additional uncertainty model associated with the additional system determining the estimated location of the autonomous vehicle; and determining a probability of the estimated location associated with the autonomous vehicle based at least in part on the additional uncertainty model and the estimated location of the autonomous vehicle, and wherein determining the probability of collision between the autonomous vehicle and the object comprises at least: determining an amount of overlap between the probability of the estimated location associated with the autonomous vehicle and the probability of the estimated location associated with the object; and determining a collision probability based at least in part on the amount of overlap.
W: an autonomous vehicle as recited in paragraphs U or V, wherein: further determining an estimated location of the object based at least in part on an additional system of the autonomous vehicle; the operations further comprise determining an additional uncertainty model associated with the additional system determining the estimated location of the object; and further determining a probability of the estimated location based at least in part on the additional uncertainty model.
X: a method, comprising: receiving sensor data from one or more sensors of a vehicle; determining an estimated location associated with the vehicle based at least in part on the first portion of the sensor data; determining a parameter associated with the object based at least in part on a system of the vehicle and a second portion of the sensor data; determining an uncertainty model associated with the system determining the parameter associated with the object; determining an estimated location associated with the object based at least in part on the parameter associated with the object and the uncertainty model; and causing the vehicle to perform one or more actions based at least in part on the estimated location associated with the vehicle and the estimated location associated with the object.
Y: a method as recited in paragraph X, further comprising receiving an uncertainty model from the one or more computing devices, the uncertainty model generated based at least in part on sensor data generated by the one or more vehicles.
Z: a method as recited in any of paragraphs X or Y, wherein determining the parameter associated with the object comprises determining, based at least in part on the system and the second portion of sensor data, at least one of: an object type associated with the object; a location of an object within the environment; the speed of the object; or the direction of travel of an object within the environment.
AA: the method as recited in any of paragraphs X through Z, wherein determining the estimated location associated with the vehicle comprises at least: determining a parameter associated with the vehicle based at least in part on the additional system of the vehicle and the first portion of the sensor data; determining an additional uncertainty model associated with the additional system determining parameters associated with the vehicle; and determining an estimated location associated with the vehicle based at least in part on the parameters associated with the vehicle and the additional uncertainty model.
AB: the method as recited in any of paragraphs X to AA, further comprising: determining an additional estimated location associated with the object based at least in part on the parameter, and wherein determining the estimated location associated with the object comprises determining the estimated location associated with the object based at least in part on the additional estimated location associated with the object and the uncertainty model.
AC: a method as recited in any of paragraphs X through AB, wherein determining the estimated location associated with the object comprises determining a distribution of the estimated location associated with the object based, at least in part, on a parameter associated with the object and an uncertainty model.
AD: a method as recited in any of paragraphs X through AC, wherein determining the estimated location associated with the vehicle comprises at least: determining a parameter associated with the vehicle based at least in part on the additional system of the vehicle and the first portion of the sensor data; determining an additional uncertainty model associated with the additional system determining parameters associated with the vehicle; and determining a distribution of estimated locations associated with the vehicle based at least in part on the parameters associated with the vehicle and the additional uncertainty model.
AE: a method as recited in any of paragraphs X through AD, further comprising: determining an amount of overlap between a distribution of estimated positions associated with the vehicle and a distribution of estimated positions associated with the object; and determining a collision probability based at least in part on the amount of overlap, and wherein the vehicle is caused to perform one or more actions based at least in part on the collision probability.
AF: a method as recited in any of paragraphs X through AE, further comprising: determining an additional parameter associated with the object based at least in part on an additional system of the vehicle and a third portion of the sensor data; and determining an additional uncertainty model associated with the additional system determining additional parameters associated with the object; and wherein determining the estimated location associated with the object is further based at least in part on the additional parameters and the additional uncertainty model.
AG: a method as recited in any of paragraphs X through AF, further comprising: determining an additional parameter associated with the object based at least in part on an additional system of the vehicle and the second portion of the sensor data; determining an additional uncertainty model associated with the additional system determining the additional parameter associated with the object; and determining an output associated with the object based at least in part on the additional parameter associated with the object and the additional uncertainty model, and wherein determining the parameter associated with the object comprises determining the parameter associated with the object based at least in part on the system of the vehicle and the output.
AH: a method as recited in any of paragraphs X to AG, further comprising: determining a parameter associated with the additional object based at least in part on the system of the vehicle and the third portion of the sensor data; determining an additional uncertainty model associated with the system determining parameters associated with the additional object; determining an estimated location associated with the additional object based at least in part on the parameters associated with the additional object and the additional uncertainty model; and wherein causing the vehicle to perform the one or more actions is further based at least in part on the estimated location associated with the additional object.
AI: a method as recited in any of paragraphs X through AH, further comprising: determining a probability of collision based at least in part on the estimated location associated with the vehicle and the estimated location associated with the object, and wherein causing the vehicle to perform the one or more actions comprises causing the vehicle to at least one of change a rate of travel or change a course based at least in part on the probability of collision.
AJ: one or more non-transitory computer-readable media storing instructions that, when executed by one or more processors, cause one or more computing devices to perform operations comprising: receiving sensor data generated by a sensor associated with a vehicle; determining an estimated location associated with the object based at least in part on a portion of the sensor data; determining an uncertainty model from a plurality of uncertainty models based at least in part on the estimated location; determining a distribution of estimated locations associated with the object based at least in part on the uncertainty model and the estimated locations; and determining one or more actions for navigating the vehicle based at least in part on the distribution of estimated locations.
AK: one or more non-transitory computer-readable media as recited in paragraph AJ, the operations further comprising: determining a parameter associated with the vehicle based at least in part on the portion of the sensor data; and determining the estimated location based at least in part on the parameter, and wherein the uncertainty model is associated with the parameter.
AL: one or more non-transitory computer-readable media as recited in any of paragraphs AJ or AK, the operations further comprising: determining an estimated location associated with the vehicle based at least in part on an additional portion of the sensor data; determining an additional uncertainty model from the plurality of uncertainty models based at least in part on the estimated location associated with the vehicle; and determining a distribution of estimated locations associated with the vehicle based at least in part on the additional uncertainty model and the estimated location associated with the vehicle, and wherein determining the one or more actions is further based at least in part on the distribution of estimated locations associated with the vehicle.
AM: one or more non-transitory computer-readable media as recited in any of paragraphs AJ to AL, the operations further comprising: determining a probability of collision based at least in part on the distribution of estimated locations associated with the vehicle and the distribution of estimated locations associated with the object, and wherein the one or more actions are determined based at least in part on the probability of collision.
AN: one or more non-transitory computer-readable media as recited in any of paragraphs AJ through AM, wherein determining the uncertainty model is further based, at least in part, on one or more of: a classification of the object, a speed of the object, a size of the object, a number of objects in the environment, a weather condition in the environment, a time of day, or a time of year.
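Several of the clauses above (e.g., C, L, and AE) determine a collision probability from the amount of overlap between a distribution of estimated locations associated with the vehicle and a distribution of estimated locations associated with the object. The sketch below is one minimal, hypothetical way to approximate such an overlap, assuming one-dimensional Gaussian distributions evaluated on a shared grid; the clauses do not prescribe this particular overlap metric, and all names and thresholds here are made up for illustration.

```python
import numpy as np

def gaussian_pdf(x: np.ndarray, mean: float, std: float) -> np.ndarray:
    """Density of a one-dimensional Gaussian evaluated at the grid points x."""
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))

def overlap_probability(vehicle_mean: float, vehicle_std: float,
                        object_mean: float, object_std: float,
                        resolution: float = 0.05) -> float:
    """Approximate the amount of overlap between two location distributions by
    integrating the pointwise minimum of their densities; the result lies in [0, 1]."""
    lo = min(vehicle_mean - 5 * vehicle_std, object_mean - 5 * object_std)
    hi = max(vehicle_mean + 5 * vehicle_std, object_mean + 5 * object_std)
    x = np.arange(lo, hi, resolution)
    overlap = np.minimum(gaussian_pdf(x, vehicle_mean, vehicle_std),
                         gaussian_pdf(x, object_mean, object_std))
    return float(overlap.sum() * resolution)

# Example: vehicle predicted at 12.0 m +/- 0.4 m, object predicted at 13.0 m +/- 0.8 m.
p = overlap_probability(12.0, 0.4, 13.0, 0.8)
if p > 0.2:  # hypothetical threshold for taking an action
    print(f"overlap {p:.2f}: slow the vehicle or change its course")
```

Distributions over two-dimensional locations, or over sampled trajectories, could be handled analogously by integrating over the corresponding space.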

Claims (15)

1. An autonomous vehicle comprising:
one or more sensors;
one or more processors; and
one or more computer-readable media storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
obtaining sensor data from the one or more sensors;
determining an estimated location of the autonomous vehicle at a future time based at least in part on the first portion of the sensor data;
determining an estimated location of an object at the future time based at least in part on the autonomous vehicle's system and the second portion of sensor data;
determining a distribution of estimated locations associated with the object based at least in part on an error model and the estimated location of the object, the error model representing a probability of error associated with the system;
determining a probability of collision between the autonomous vehicle and the object based at least in part on the estimated location of the autonomous vehicle and the distribution of estimated locations associated with the object; and
causing the autonomous vehicle to perform one or more actions based at least in part on the collision probability.
2. The autonomous vehicle of claim 1, the operations further comprising receiving the error model from one or more computing devices, the error model generated using at least sensor data generated by one or more vehicles.
3. The autonomous vehicle of claim 1 or 2, the operations further comprising:
determining a distribution of estimated locations associated with the autonomous vehicle based at least in part on an additional error model and the estimated location of the vehicle;
and wherein determining the probability of collision between the autonomous vehicle and the object comprises at least:
determining an amount of overlap between the distribution of estimated locations associated with the autonomous vehicle and the distribution of estimated locations associated with the object; and
determining the collision probability based at least in part on the amount of overlap.
4. The autonomous vehicle of any of claims 1-3, wherein:
further determining the estimated location of the object at the future time based at least in part on an additional system of the autonomous vehicle; and
further determining a distribution of the estimated locations based at least in part on an additional error model representing an error distribution associated with the additional system.
5. A method, comprising:
receiving sensor data from one or more sensors of a vehicle;
determining an estimated location associated with the vehicle at a time based at least in part on the first portion of the sensor data;
determining a parameter associated with an object based at least in part on a system of the vehicle and a second portion of the sensor data;
determining an estimated location associated with the object at the time based at least in part on an error model and the parameter associated with the object, the error model representing a probability of error associated with the system; and
causing the vehicle to perform one or more actions based at least in part on the estimated location associated with the vehicle and the estimated location associated with the object.
6. The method of claim 5, further comprising receiving the error model from one or more computing devices, the error model generated using at least sensor data generated by one or more vehicles.
7. The method of claim 5 or 6, wherein the parameter comprises at least one of:
an object type associated with the object;
a location of the object within an environment;
a speed of the object; or
A direction of travel of the object within the environment.
8. The method of any of claims 5-7, wherein determining the estimated location associated with the vehicle at the time comprises at least:
determining a parameter associated with the vehicle based at least in part on an additional system of the vehicle and the first portion of the sensor data; and
determining the estimated location associated with the vehicle at the time based at least in part on an additional error model and the parameter associated with the vehicle, the additional error model representing a probability of error associated with the additional system.
9. The method according to any one of claims 5-8, further comprising:
determining an additional estimated location associated with the object at the time based at least in part on the parameter;
and wherein determining the estimated location associated with the object at the time comprises: determining the estimated location associated with the object at the time based at least in part on the error model and the additional estimated location associated with the object.
10. The method of any of claims 5-9, wherein determining the estimated location associated with the object at the time comprises: determining a distribution of estimated locations associated with the object at the time based at least in part on the error model and the parameter associated with the object.
11. The method of any of claims 5-10, further comprising selecting the error model based at least in part on the parameter.
12. The method according to any one of claims 5-11, further comprising:
determining an additional parameter associated with the object based at least in part on an additional system of the vehicle and the second portion of the sensor data; and
determining an output associated with the object based at least in part on an additional error model and the additional parameter associated with the object, the additional error model representing an error probability associated with the additional system;
and wherein determining the parameter associated with the object comprises: determining the parameter associated with the object based at least in part on a system of the vehicle and the output.
13. The method according to any one of claims 5-11, further comprising:
determining, based at least in part on the first portion of the sensor data, an additional estimated location associated with the vehicle at an additional time later than the time;
determining an additional parameter associated with the object based at least in part on a system of the vehicle and a second portion of the sensor data;
determining an additional estimated location associated with the object at the additional time based at least in part on the error model and the additional parameter associated with the object; and
causing the vehicle to perform one or more actions based at least in part on the additional estimated location associated with the vehicle and the additional estimated location associated with the object.
14. The method according to any one of claims 5-13, further comprising:
determining a probability of collision based at least in part on the estimated location associated with the vehicle and the estimated location associated with the object;
and wherein causing the vehicle to perform the one or more actions comprises: causing the vehicle to at least one of change a rate of travel or change a course based at least in part on the collision probability.
15. One or more non-transitory computer-readable media storing instructions that, when executed, cause one or more processors to perform the method of any one of claims 5-14.
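Claims 5 and 11 above, like clauses R and T, involve selecting an error model from a plurality of error models based on a determined parameter and applying it to obtain a distribution of estimated locations. The sketch below is one hypothetical realization with made-up model keys, error values, and thresholds; a deployed system would derive such error models from logged vehicle data rather than hard-coding them, and nothing here should be read as the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class ErrorModel:
    mean_error_m: float  # systematic offset of the system's output, in meters
    std_error_m: float   # spread of the error, in meters

# Hypothetical error models keyed by (object classification, speed bucket).
ERROR_MODELS = {
    ("pedestrian", "slow"): ErrorModel(0.0, 0.2),
    ("pedestrian", "fast"): ErrorModel(0.1, 0.4),
    ("vehicle", "slow"): ErrorModel(0.0, 0.3),
    ("vehicle", "fast"): ErrorModel(0.2, 0.6),
}

def select_error_model(classification: str, speed_mps: float) -> ErrorModel:
    """Select the error model associated with the determined parameters."""
    bucket = "fast" if speed_mps > 2.0 else "slow"
    return ERROR_MODELS.get((classification, bucket), ErrorModel(0.0, 0.5))

def estimated_location_distribution(measured_position_m: float,
                                    model: ErrorModel) -> tuple[float, float]:
    """Apply the error model to a measured position, returning the mean and
    standard deviation of the resulting distribution of estimated locations."""
    return measured_position_m + model.mean_error_m, model.std_error_m

mean, std = estimated_location_distribution(18.5, select_error_model("vehicle", 6.0))
print(f"estimated object location: {mean:.1f} m +/- {std:.1f} m")
```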
CN202080078816.XA 2019-11-13 2020-11-12 Collision monitoring using statistical models Pending CN114730521A (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US16/683,005 2019-11-13
US16/682,971 2019-11-13
US16/682,971 US11697412B2 (en) 2019-11-13 2019-11-13 Collision monitoring using statistic models
US16/683,005 US11648939B2 (en) 2019-11-13 2019-11-13 Collision monitoring using system data
PCT/US2020/060197 WO2021097070A1 (en) 2019-11-13 2020-11-12 Collision monitoring using statistic models

Publications (1)

Publication Number Publication Date
CN114730521A true CN114730521A (en) 2022-07-08

Family

ID=75912837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080078816.XA Pending CN114730521A (en) 2019-11-13 2020-11-12 Collision monitoring using statistical models

Country Status (4)

Country Link
EP (1) EP4059003A4 (en)
JP (1) JP2023502598A (en)
CN (1) CN114730521A (en)
WO (1) WO2021097070A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2612632B (en) * 2021-11-08 2024-04-03 Jaguar Land Rover Ltd Control system for a vehicle and method thereof

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8509982B2 (en) * 2010-10-05 2013-08-13 Google Inc. Zone driving
US8457827B1 (en) * 2012-03-15 2013-06-04 Google Inc. Modifying behavior of autonomous vehicle based on predicted behavior of other vehicles
US9656667B2 (en) * 2014-01-29 2017-05-23 Continental Automotive Systems, Inc. Method for minimizing automatic braking intrusion based on collision confidence
WO2015155833A1 (en) * 2014-04-08 2015-10-15 三菱電機株式会社 Collision prevention device
DE112014006561T5 (en) * 2014-04-10 2017-02-16 Mitsubishi Electric Corporation Routenvorausberechunungseinrichtung
JP6409680B2 (en) * 2015-05-29 2018-10-24 株式会社デンソー Driving support device and driving support method
JP6481520B2 (en) * 2015-06-05 2019-03-13 トヨタ自動車株式会社 Vehicle collision avoidance support device
US10496766B2 (en) * 2015-11-05 2019-12-03 Zoox, Inc. Simulation system and methods for autonomous vehicles
US10029682B2 (en) * 2016-01-22 2018-07-24 Toyota Motor Engineering & Manufacturing North America, Inc. Surrounding vehicle classification and path prediction
US10699305B2 (en) * 2016-11-21 2020-06-30 Nio Usa, Inc. Smart refill assistant for electric vehicles

Also Published As

Publication number Publication date
EP4059003A1 (en) 2022-09-21
EP4059003A4 (en) 2023-11-22
JP2023502598A (en) 2023-01-25
WO2021097070A1 (en) 2021-05-20

Similar Documents

Publication Publication Date Title
CN112204634B (en) Drive envelope determination
US20210339741A1 (en) Constraining vehicle operation based on uncertainty in perception and/or prediction
US10762396B2 (en) Multiple stage image based object detection and recognition
US11697412B2 (en) Collision monitoring using statistic models
US11648939B2 (en) Collision monitoring using system data
CN112789481A (en) Trajectory prediction for top-down scenarios
JP2022539245A (en) Top-down scene prediction based on action data
CN114845913A (en) Top-down scene prediction based on object motion
CN112752950A (en) Modifying map elements associated with map data
CN112839853A (en) Responsive vehicle control
CN114072841A (en) Depth refinement from images
JP2023547988A (en) Collision avoidance planning system
CN117813230A (en) Active prediction based on object trajectories
CN114730521A (en) Collision monitoring using statistical models
CN113544538A (en) Identifying radar reflections using velocity and position information
WO2023009794A1 (en) Three-dimensional object detection based on image data
CN117545674A (en) Technique for identifying curbs
CN117651880A (en) Radar data analysis and hidden object detection
KR20230033551A (en) Navigation with drivable area detection
US20230033177A1 (en) Three-dimensional point clouds based on images and depth data
US20230095410A1 (en) System for detecting objects in an environment
US11915436B1 (en) System for aligning sensor data with maps comprising covariances
US11780471B1 (en) System for determining a state of an object using thermal data
US20230419830A1 (en) Determining right of way
CN117813228A (en) Determining occupancy using unoccluded sensor emissions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination