US20200202706A1 - Message Broadcasting for Vehicles - Google Patents
- Publication number
- US20200202706A1 (U.S. application Ser. No. 16/439,956)
- Authority
- US
- United States
- Prior art keywords
- vehicle
- motion plan
- transmitting
- motion
- processing device
- Prior art date
- Legal status
- Abandoned
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/161—Decentralised systems, e.g. inter-vehicle communication
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0137—Measuring and analyzing of parameters relative to traffic conditions for specific applications
- G08G1/0141—Measuring and analyzing of parameters relative to traffic conditions for specific applications for traffic information dissemination
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units, or advanced driver assistance systems for ensuring comfort, stability and safety or drive control systems for propelling or retarding the vehicle
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B60W30/095—Predicting travel path or likelihood of collision
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/0097—Predicting future conditions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
- B60W60/001—Planning or execution of driving tasks
- B60W60/0015—Planning or execution of driving tasks specially adapted for safety
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
- B60W60/001—Planning or execution of driving tasks
- B60W60/0027—Planning or execution of driving tasks using trajectory prediction for other traffic participants
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0276—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
- G05D1/0285—Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using signals transmitted via a public communication network, e.g. GSM network
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0287—Control of position or course in two dimensions specially adapted to land vehicles involving a plurality of land vehicles, e.g. fleet or convoy travelling
- G05D1/0289—Control of position or course in two dimensions specially adapted to land vehicles involving a plurality of land vehicles, e.g. fleet or convoy travelling with means for avoiding collisions between vehicles
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/0104—Measuring and analyzing of parameters relative to traffic conditions
- G08G1/0108—Measuring and analyzing of parameters relative to traffic conditions based on the source of data
- G08G1/0112—Measuring and analyzing of parameters relative to traffic conditions based on the source of data from the vehicle, e.g. floating car data [FCD]
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/161—Decentralised systems, e.g. inter-vehicle communication
- G08G1/162—Decentralised systems, e.g. inter-vehicle communication event-triggered
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/164—Centralised systems, e.g. external to vehicles
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/166—Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/06—Selective distribution of broadcast services, e.g. multimedia broadcast multicast service [MBMS]; Services to user groups; One-way selective calling services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/30—Services specially adapted for particular environments, situations or purposes
- H04W4/40—Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2420/00—Indexing codes relating to the type of sensors based on the principle of their operation
- B60W2420/40—Photo or light sensitive means, e.g. infrared sensors
- B60W2420/403—Image sensing, e.g. optical camera
-
- B60W2420/408—
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2554/00—Input parameters relating to objects
- B60W2554/40—Dynamic objects, e.g. animals, windblown objects
- B60W2554/404—Characteristics
- B60W2554/4045—Intention, e.g. lane change or imminent movement
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2556/00—Input parameters relating to data
- B60W2556/45—External transmission of data to or from the vehicle
- B60W2556/50—External transmission of data to or from the vehicle for navigation systems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2556/00—Input parameters relating to data
- B60W2556/45—External transmission of data to or from the vehicle
- B60W2556/65—Data transmitted between vehicles
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2756/00—Output or target parameters relating to data
- B60W2756/10—Involving external transmission of data to or from the vehicle
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/025—Services making use of location information using location based information parameters
- H04W4/027—Services making use of location information using location based information parameters using movement velocity, acceleration information
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/029—Location-based management or tracking services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/90—Services for handling of emergency or hazardous situations, e.g. earthquake and tsunami warning systems [ETWS]
Definitions
- Various aspects include methods enabling a vehicle, such as an autonomous vehicle, a semi-autonomous vehicle, etc., to broadcast motion plans to surrounding vehicles, such as autonomous vehicles, semi-autonomous vehicles, and/or driver-operated vehicles.
- Various aspects include methods for using motion plans received from one or more surrounding vehicles.
- Motion plans may include a vehicle's trajectory and one or more descriptors associated with the vehicle and/or the vehicle owner and/or operator.
- Motion plans may be used at least in part to control a vehicle.
- Controlling the vehicle based at least in part on the motion plan may include determining an expected region of interest for the vehicle based at least in part on the motion plan, and applying a detection algorithm to received sensor data at the expected region of interest to detect the transmitting vehicle in the received sensor data based at least in part on the sensor perceptible attribute.
- The method may further include selecting the detection algorithm based at least in part on the received motion plan.
- Controlling the vehicle based at least in part on the motion plan may include correlating vehicle detection sensor data with the transmitting vehicle based at least in part on the sensor perceptible attribute.
- Controlling the vehicle based at least in part on the motion plan may include determining a position of the transmitting vehicle based at least in part on the vehicle location attribute, determining whether a comparison between a position of the vehicle and the position of the transmitting vehicle indicates an error, and triggering a recalculation of the position of the vehicle in response to determining that the comparison indicates an error.
- Controlling the vehicle based at least in part on the motion plan may include determining whether the motion plan is unsafe, and sending a safety warning to the transmitting vehicle in response to determining that the motion plan is unsafe.
- FIG. 3 is a block diagram illustrating components of an example system on chip for use in a vehicle that may be configured to broadcast, receive, and/or otherwise use intentions and/or motion plans in accordance with various embodiments.
- FIG. 4 is a process flow diagram illustrating a method of broadcasting an intention message according to various embodiments.
- FIG. 6 is a process flow diagram illustrating a method of using a broadcast motion plan in sensor perception operations according to various embodiments.
- A problem in autonomous driving, selecting an optimal driving action in an uncertain environment with uncertain agents, can be modeled as a partially-observable Markov decision process (POMDP).
- Exact solutions to POMDPs are computationally intractable, so much research is devoted to finding ways to approximate the problem sufficiently that online, real-time solutions are possible.
- Autonomous vehicles can simulate possible actions in order to determine the range of expected outcomes of each action. Each action may carry some reward, risk, or penalty, and evaluating each possible action in the search space allows the autonomous vehicle to select the action with the greatest reward and/or least likely penalty.
- Motion plans indicate an expected course of action and may be distinguished from requests to take an action (or other communications requiring confirmation) that may be exchanged between vehicles.
- Sharing a motion plan by broadcasting the intentions of autonomous (or semi-autonomous) vehicles to surrounding vehicles may provide a much greater benefit within the POMDP framework, and may reduce the POMDP to a Markov decision process (MDP) that requires far less computational resources to solve.
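To illustrate the computational difference the passage describes, the sketch below selects a driving action in a simple, fully observable MDP, which becomes possible once surrounding vehicles' intentions are known rather than hidden. The states, actions, rewards, and transition probabilities are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch: once surrounding vehicles broadcast their motion plans,
# their behavior is no longer a hidden variable, so the ego vehicle can
# plan over an MDP. Rewards and transitions below are hypothetical.

def select_action(state, actions, transition, reward, values, gamma=0.9):
    """Pick the action maximizing one-step expected reward plus the
    discounted value of the successor states."""
    def q(a):
        return sum(p * (reward(state, a, s2) + gamma * values.get(s2, 0.0))
                   for s2, p in transition(state, a).items())
    return max(actions, key=q)

# Toy model: ego may "keep_lane" or "change_lane"; a broadcast motion plan
# tells us a neighboring vehicle will occupy the target lane.
def transition(state, action):
    if action == "change_lane" and state["neighbor_in_target_lane"]:
        return {"collision_risk": 0.8, "clear": 0.2}
    return {"clear": 1.0}

def reward(state, action, next_state):
    if next_state == "collision_risk":
        return -100.0
    return 1.0 if action == "change_lane" else 0.5

state = {"neighbor_in_target_lane": True}
best = select_action(state, ["keep_lane", "change_lane"],
                     transition, reward, values={})
# With the neighbor's intention known, the risky lane change is avoided.
```

Because the broadcast removes the uncertainty about the neighbor's behavior, the expected value of each action can be computed directly instead of maintaining a belief distribution over hidden intentions.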
- The sensors may further include other types of object detection and ranging sensors, such as radar 132 , lidar 138 , IR sensors, and ultrasonic sensors.
- The sensors may further include tire pressure sensors 114 , 120 , humidity sensors, temperature sensors, satellite geopositioning sensors 108 , accelerometers, vibration sensors, gyroscopes, gravimeters, impact sensors 130 , force meters, stress meters, strain sensors, fluid sensors, chemical sensors, gas content analyzers, pH sensors, radiation sensors, Geiger counters, neutron detectors, biological material sensors, microphones 124 , 134 , occupancy sensors 112 , 116 , 118 , 126 , 128 , proximity sensors, and other sensors.
- The sensor fusion and RWM management layer 212 may receive data and outputs produced by the radar perception layer 202 , camera perception layer 204 , map fusion and arbitration layer 208 , and route planning layer 210 , and use some or all of such inputs to estimate or refine the location and state of the vehicle 100 in relation to the road, other vehicles on the road, and other objects within a vicinity of the vehicle 100 .
- The sensor fusion and RWM management layer 212 may combine imagery data from the camera perception layer 204 with arbitrated map location information from the map fusion and arbitration layer 208 to refine the determined position of the vehicle within a lane of traffic.
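One common way such a refinement can be done, sketched below, is inverse-variance weighting of two independent estimates of the same lateral position. This is an illustrative technique, not the patent's specified algorithm; the numbers are hypothetical.

```python
# Illustrative sketch (not the patent's algorithm): fuse a coarse map-based
# lane-position estimate with a finer camera-based lane-offset measurement
# using inverse-variance weighting.

def fuse_estimates(mean_a, var_a, mean_b, var_b):
    """Combine two independent Gaussian estimates of the same quantity;
    the lower-variance input dominates the fused result."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    mean = (w_a * mean_a + w_b * mean_b) / (w_a + w_b)
    var = 1.0 / (w_a + w_b)
    return mean, var

# Map says the vehicle is 0.4 m left of lane center (coarse, var 0.25 m^2);
# camera perception measures 0.1 m left of center (fine, var 0.01 m^2).
lateral, lateral_var = fuse_estimates(0.4, 0.25, 0.1, 0.01)
# The fused estimate sits close to the camera value, with reduced variance.
```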
- FIG. 3 illustrates an example system-on-chip (SOC) architecture of a processing device SOC 300 suitable for implementing various embodiments in vehicles.
- The processing device SOC 300 may include a number of heterogeneous processors, such as a digital signal processor (DSP) 303 , a modem processor 304 , an image and object recognition processor 306 , a mobile display processor 307 , an applications processor 308 , and a resource and power management (RPM) processor 317 .
- The system components and resources 316 , analog and custom circuitry 314 , and/or CAM 305 may include circuitry to interface with peripheral devices, such as cameras 122 , 136 , radar 132 , lidar 138 , electronic displays, wireless communication devices, external memory chips, etc.
- The processors 303 , 304 , 306 , 307 , 308 may be interconnected to one or more memory elements 312 , system components and resources 316 , analog and custom circuitry 314 , CAM 305 , and RPM processor 317 via an interconnection/bus module 324 , which may include an array of reconfigurable logic gates and/or implement a bus architecture (e.g., CoreConnect, AMBA, etc.). Communications may be provided by advanced interconnects, such as high-performance networks-on-chip (NoCs).
- A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some blocks or methods may be performed by circuitry that is specific to a given function.
Abstract
Description
- This application claims the benefit of priority to U.S. Provisional Application No. 62/782,573, entitled “Intention Broadcasting for Autonomous Driving” filed Dec. 20, 2018, the entire contents of which are hereby incorporated by reference for all purposes.
- Automobiles and trucks are becoming more intelligent as the industry moves towards deploying autonomous and semi-autonomous vehicles. Autonomous and semi-autonomous vehicles can detect information about their location and surroundings (for example, using radar, lidar, GPS, odometers, accelerometers, cameras, and other sensors), and include control systems that interpret sensory information to identify hazards and determine navigation paths to follow. Autonomous and semi-autonomous vehicles include control systems to operate with limited or no control from an occupant or other operator of the automobile.
- Various aspects include methods enabling a vehicle, such as an autonomous vehicle, a semi-autonomous vehicle, etc., to broadcast motion plans to surrounding vehicles, such as autonomous vehicles, semi-autonomous vehicles, and/or driver-operated vehicles. Various aspects include methods for using motion plans received from one or more surrounding vehicles. In various embodiments, motion plans may include a vehicle's trajectory and one or more descriptors associated with the vehicle and/or the vehicle owner and/or operator. In various embodiments, motion plans may be used at least in part to control a vehicle.
- Various aspects include methods of controlling a vehicle that may include receiving an intention message including a motion plan for a vehicle transmitting the motion plan (the “transmitting vehicle”), wherein the motion plan comprises a trajectory of the transmitting vehicle and one or more vehicle descriptors associated with the transmitting vehicle, parsing the intention message to identify the motion plan for the transmitting vehicle, and controlling the vehicle based at least in part on the motion plan. In some aspects, the one or more vehicle descriptors may include a sensor perceptible attribute.
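A receiving-side sketch of such an intention message is shown below. The patent does not specify a wire format, so the JSON encoding, field names, and descriptor keys here are hypothetical illustrations of a trajectory plus vehicle descriptors.

```python
# Hypothetical wire format for an intention message; the encoding and all
# field names are illustrative assumptions, not defined by the patent.
import json
from dataclasses import dataclass

@dataclass
class MotionPlan:
    trajectory: list   # [(t, x, y), ...] timestamped waypoints
    descriptors: dict  # e.g. sensor-perceptible attributes, capabilities

def parse_intention_message(raw: bytes) -> MotionPlan:
    """Extract the transmitting vehicle's motion plan from a broadcast."""
    msg = json.loads(raw.decode("utf-8"))
    return MotionPlan(trajectory=[tuple(p) for p in msg["trajectory"]],
                      descriptors=msg["descriptors"])

# Example broadcast payload from a transmitting vehicle.
raw = json.dumps({
    "trajectory": [[0.0, 10.0, 2.0], [1.0, 25.0, 2.0]],
    "descriptors": {"color": "red", "max_decel_mps2": 8.0},
}).encode("utf-8")
plan = parse_intention_message(raw)
```

The same structure would be serialized on the transmitting side when generating the broadcast, so one schema can serve both directions.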
- In some aspects, controlling the vehicle based at least in part on the motion plan may include determining an expected region of interest for the vehicle based at least in part on the motion plan, and applying a detection algorithm to received sensor data at the expected region of interest to detect the transmitting vehicle in the received sensor data based at least in part on the sensor perceptible attribute. In some aspects, the method may further include selecting the detection algorithm based at least in part on the received motion plan.
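The region-of-interest step described above could be sketched as follows. The linear interpolation, box margin, and use of color as the sensor-perceptible attribute are illustrative assumptions rather than details from the patent.

```python
# Sketch: derive an expected region of interest (ROI) from the received
# motion plan, then run detection only inside that ROI, matching on a
# sensor-perceptible attribute (here, a color label).

def expected_roi(trajectory, t, margin=2.0):
    """Axis-aligned box around the plan's predicted position at time t,
    found by linear interpolation between the bracketing waypoints."""
    for (t0, x0, y0), (t1, x1, y1) in zip(trajectory, trajectory[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            x, y = x0 + a * (x1 - x0), y0 + a * (y1 - y0)
            return (x - margin, y - margin, x + margin, y + margin)
    raise ValueError("t outside planned horizon")

def detect_in_roi(detections, roi, color):
    """Return detections inside the ROI whose attribute matches."""
    xmin, ymin, xmax, ymax = roi
    return [d for d in detections
            if xmin <= d["x"] <= xmax and ymin <= d["y"] <= ymax
            and d["color"] == color]

trajectory = [(0.0, 10.0, 2.0), (1.0, 25.0, 2.0)]
roi = expected_roi(trajectory, t=0.5)
hits = detect_in_roi([{"x": 17.4, "y": 2.1, "color": "red"},
                      {"x": 40.0, "y": 2.0, "color": "red"}], roi, "red")
```

Restricting detection to the expected ROI is what lets the receiving vehicle confirm the transmitting vehicle cheaply instead of searching the full sensor frame.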
- In some aspects, controlling the vehicle based at least in part on the motion plan may include correlating vehicle detection sensor data with the transmitting vehicle based at least in part on the sensor perceptible attribute.
- In some aspects, the one or more vehicle descriptors may include a vehicle physical capability. In some aspects, controlling the vehicle based at least in part on the motion plan may include setting a behavior prediction for the transmitting vehicle based at least in part on the motion plan. Some aspects may further include determining whether a behavior of the transmitting vehicle conforms to the behavior prediction, and updating the behavior prediction based at least in part on the vehicle physical capability in response to determining that the behavior of the transmitting vehicle does not conform to the behavior prediction.
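A one-dimensional sketch of that conformance check is below. The tolerance, the stopping-envelope fallback, and the choice of maximum deceleration as the capability are illustrative assumptions about how a capability-bounded prediction might look.

```python
# Sketch: use the broadcast plan as the behavior prediction; if observed
# behavior diverges from it, fall back to a prediction bounded by the
# vehicle's advertised physical capability (here, maximum deceleration).

def predict_position(plan_pos, observed_pos, speed, max_decel, dt, tol=1.0):
    """Trust the plan while the observation stays within tol meters of it;
    otherwise predict from the observation, assuming the vehicle may be
    braking as hard as its capability allows."""
    if abs(observed_pos - plan_pos) <= tol:
        return plan_pos + speed * dt  # plan-conformant prediction
    # Non-conformant: bound travel by full braking from current speed.
    travel = max(speed * dt - 0.5 * max_decel * dt * dt, 0.0)
    return observed_pos + travel

# Conformant case: observation within 1 m of the planned position.
p1 = predict_position(plan_pos=20.0, observed_pos=20.4, speed=10.0,
                      max_decel=8.0, dt=1.0)
# Non-conformant case: observation 5 m off the plan triggers the fallback.
p2 = predict_position(plan_pos=20.0, observed_pos=25.0, speed=10.0,
                      max_decel=8.0, dt=1.0)
```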
- In some aspects, the one or more vehicle descriptors may include a vehicle location attribute. In some aspects, controlling the vehicle based at least in part on the motion plan may include determining a position of the transmitting vehicle based at least in part on the vehicle location attribute, determining whether a comparison between a position of the vehicle and the position of the transmitting vehicle indicates an error, and triggering a recalculation of the position of the vehicle in response to determining that the comparison indicates an error.
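The localization cross-check could look like the sketch below: compare where the transmitting vehicle reports itself to be against where the receiving vehicle's sensors perceive it, and trigger a recalculation when the discrepancy is large. The 3 m threshold is an illustrative assumption.

```python
# Sketch: a persistent mismatch between a neighbor's self-reported
# position (vehicle location attribute) and its sensor-perceived position
# may indicate an error in the ego vehicle's own localization.
import math

def localization_check(reported_xy, perceived_xy, threshold_m=3.0):
    """Return True if a position recalculation should be triggered."""
    dx = reported_xy[0] - perceived_xy[0]
    dy = reported_xy[1] - perceived_xy[1]
    return math.hypot(dx, dy) > threshold_m

needs_recalc = localization_check((100.0, 50.0), (104.0, 53.0))  # 5 m apart
ok = localization_check((100.0, 50.0), (101.0, 50.5))            # ~1.1 m
```

In practice such a check would likely require the discrepancy to persist across several messages before discarding the ego position estimate, since either side's report could be momentarily wrong.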
- In some aspects, controlling the vehicle based at least in part on the motion plan may include determining whether the motion plan is unsafe, and sending a safety warning to the transmitting vehicle in response to determining the motion plan is unsafe.
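One plausible form of the safety determination, sketched below, is a minimum-separation check between the received plan and the ego vehicle's own plan at shared timestamps. The separation distance and the common time base are illustrative assumptions; the patent does not define the unsafe-plan test.

```python
# Sketch: flag a received motion plan as unsafe if, at any shared
# timestamp, it comes within a minimum separation of our own plan.
import math

def plan_unsafe(own_plan, other_plan, min_sep=2.5):
    """Both plans are [(t, x, y), ...] sampled at the same timestamps."""
    for (t1, x1, y1), (t2, x2, y2) in zip(own_plan, other_plan):
        assert t1 == t2, "plans must share a time base"
        if math.hypot(x1 - x2, y1 - y2) < min_sep:
            return True
    return False

own = [(0.0, 0.0, 0.0), (1.0, 10.0, 0.0)]
other = [(0.0, 5.0, 3.0), (1.0, 10.0, 1.0)]  # converges to within 1 m
unsafe = plan_unsafe(own, other)
# A True result would trigger sending the safety warning described above.
```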
- Various aspects for broadcasting a message from a vehicle may include determining a motion plan for the vehicle, wherein the motion plan comprises a trajectory of the vehicle and one or more vehicle descriptors of the vehicle, generating an intention message based at least in part on the determined motion plan, and broadcasting the intention message from the vehicle. In some aspects, the one or more vehicle descriptors may include a sensor perceptible attribute, a vehicle physical capability, or a vehicle location attribute.
- Further aspects include a vehicle including a processor configured with processor-executable instructions to perform operations of any of the methods summarized above. Further aspects include a non-transitory processor-readable storage medium having stored thereon processor-executable software instructions configured to cause a processor to perform operations of any of the methods summarized above. Further aspects include a processing device for use in a vehicle and configured to perform operations of any of the methods summarized above.
- The accompanying drawings, which are incorporated herein and constitute part of this specification, illustrate exemplary embodiments, and together with the general description given above and the detailed description given below, serve to explain the features of the various embodiments.
- FIGS. 1A and 1B are component block diagrams illustrating a vehicle suitable for implementing various embodiments.
- FIG. 1C is a component block diagram illustrating components of a vehicle suitable for implementing various embodiments.
- FIG. 2A is a component block diagram illustrating components of an example vehicle management system according to various embodiments.
- FIG. 2B is a component block diagram illustrating components of another example vehicle management system according to various embodiments.
- FIG. 3 is a block diagram illustrating components of an example system on chip for use in a vehicle that may be configured to broadcast, receive, and/or otherwise use intentions and/or motion plans in accordance with various embodiments.
- FIG. 4 is a process flow diagram illustrating a method of broadcasting an intention message according to various embodiments.
- FIG. 5 is a process flow diagram illustrating a method of extracting a motion plan from a broadcast intention message according to various embodiments.
- FIG. 6 is a process flow diagram illustrating a method of using a broadcast motion plan in sensor perception operations according to various embodiments.
- FIG. 7 is a process flow diagram illustrating a method of using a broadcast motion plan in sensor fusion operations according to various embodiments.
- FIG. 8A is a process flow diagram illustrating a method of using a broadcast motion plan in behavior prediction operations according to various embodiments.
- FIG. 8B is a process flow diagram illustrating a method of using a broadcast motion plan in behavior prediction operations according to various embodiments.
- FIG. 9 is a process flow diagram illustrating a method of using a broadcast motion plan in position localization operations according to various embodiments.
- FIG. 10 is a process flow diagram illustrating a method of using a broadcast motion plan to share the burden of safety operations between vehicles according to various embodiments.
- Various aspects will be described in detail with reference to the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. References made to particular examples and embodiments are for illustrative purposes and are not intended to limit the scope of the various aspects or the claims.
- The surface transportation industry has increasingly looked to leverage the growing capabilities of cellular and wireless communication technologies through the adoption of Intelligent Transportation Systems (ITS) technologies to increase intercommunication and safety for both driver-operated vehicles and autonomous vehicles. The cellular vehicle-to-everything (C-V2X) protocol defined by the 3rd Generation Partnership Project (3GPP) supports ITS technologies and serves as the foundation for vehicles to communicate directly with the communication devices around them.
- C-V2X defines two transmission modes that, together, provide a 360° non-line-of-sight awareness and a higher level of predictability for enhanced road safety and autonomous driving. A first transmission mode includes direct C-V2X, which includes vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-pedestrian (V2P), and that provides enhanced communication range and reliability in the dedicated ITS 5.9 gigahertz (GHz) spectrum that is independent of a cellular network. A second transmission mode includes vehicle-to-network communications (V2N) in mobile broadband systems and technologies, such as third generation wireless mobile communication technologies (3G) (e.g., global system for mobile communications (GSM) evolution (EDGE) systems, code division multiple access (CDMA) 2000 systems, etc.), fourth generation wireless mobile communication technologies (4G) (e.g., long term evolution (LTE) systems, LTE-Advanced systems, mobile Worldwide Interoperability for Microwave Access (mobile WiMAX) systems, etc.), fifth generation wireless mobile communication technologies (5G) (e.g., 5G New Radio (5G NR) systems, etc.), etc.
- The term “system-on-chip” (SOC) is used herein to refer to a set of interconnected electronic circuits typically, but not exclusively, including one or more processors, a memory, and a communication interface. The SOC may include a variety of different types of processors and processor cores, such as a general purpose processor, a central processing unit (CPU), a digital signal processor (DSP), a graphics processing unit (GPU), an accelerated processing unit (APU), a sub-system processor, an auxiliary processor, a single-core processor, and a multicore processor. The SOC may further embody other hardware and hardware combinations, such as a field programmable gate array (FPGA), a configuration and status register (CSR), an application-specific integrated circuit (ASIC), other programmable logic device, discrete gate logic, transistor logic, registers, performance monitoring hardware, watchdog hardware, counters, and time references. SOCs may be integrated circuits (ICs) configured such that the components of the ICs reside on the same substrate, such as a single piece of semiconductor material (e.g., silicon, etc.).
- Various embodiments include methods, vehicles, vehicle management systems, and processing devices configured to implement the methods for broadcasting, receiving, and/or otherwise using intentions and/or motion plans during operation of vehicles, such as autonomous vehicles, semi-autonomous vehicles, driver-operated vehicles, etc.
- Autonomous and semi-autonomous vehicles, such as cars and trucks, are becoming a reality on city streets. Autonomous and semi-autonomous vehicles typically include a plurality of sensors, including cameras, radar, and lidar, that collect information about the environment surrounding the vehicle. For example, such collected information may enable the vehicle to recognize the roadway, identify objects to avoid, and track the movement and future position of other vehicles to enable partial or fully autonomous navigation. Similarly, non-autonomous vehicles, such as vehicles that cannot operate in an autonomous driving or semi-autonomous driving mode, may also include a plurality of sensors, including cameras, radar, and lidar, that collect information about the environment surrounding the vehicle. For example, such collected information may enable the vehicle to recognize the roadway, identify objects to avoid, and track the movement and future position of other vehicles to provide warnings to a driver of the vehicle.
- A problem in autonomous driving of selecting an optimal driving action in an uncertain environment with uncertain agents can be modeled as a partially-observable Markov decision process (POMDP). In general, solutions to POMDPs are computationally intractable, so much research is devoted to finding ways to sufficiently approximate the problem such that online and real-time solutions are possible. Autonomous vehicles can simulate possible actions in order to determine the range of expected outcomes based on each action. Each action may carry some reward, risk, or penalty, and testing each possible action in the search space can allow the autonomous vehicle to select the action with the greatest reward and/or least likely penalty.
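The simulate-and-score selection described above can be sketched minimally as follows; the action names and scoring function are illustrative assumptions, not part of the described embodiments:

```python
def select_action(candidate_actions, expected_reward):
    """Choose the candidate action with the greatest expected reward, a toy
    stand-in for the simulate-and-score search described in the text.

    `expected_reward` maps an action to a scalar that combines reward and
    penalty; this scoring scheme is an illustrative assumption.
    """
    return max(candidate_actions, key=expected_reward)
```

In a full POMDP solver the score would be an expectation over uncertain world states; the point of sharing motion plans, as described below, is to shrink that uncertainty so simple maximization over known outcomes becomes tractable.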
- Autonomous driving stacks are predominantly designed as independent, standalone systems. In other words, an autonomous vehicle is tasked with inferring its own belief about the state of the world and its evolution without help from the other agents in the environment. Equipped only with its onboard sensors and computing power, the autonomous vehicle's belief of the world can be uncertain and there are infinitely many possibilities for how the world may evolve in time. Thus, the autonomous vehicle is required to search very large spaces of possibilities to decide what action to take next.
- Incorporating C-V2X (connected vehicles sharing information) with autonomous driving stacks can significantly reduce the dimensionality of the POMDP by increasing the certainty of important pieces of information. Various embodiments provide for broadcasting an intention message from an autonomous (or semi-autonomous) vehicle comprising a motion plan for that vehicle. A motion plan may be an indication of that vehicle's current position and an indication of how that vehicle expects its position to change over time. For example, a motion plan may indicate a position of a vehicle and a trajectory the vehicle will follow for a period of time. In various embodiments, the indication of how that vehicle expects its position to change over time may be an affirmative indication of a decision already made by that vehicle. In that manner, motion plans indicate an expected course of action and may be distinguished from requests to take an action (or other type of confirmation requiring communications) that may be exchanged between vehicles. The sharing of a motion plan via broadcasting intentions of autonomous (or semi-autonomous) vehicles to surrounding vehicles may provide a much greater benefit within the POMDP and may reduce the POMDP to a Markov decision process (MDP) that may require far less computational resources to solve.
- Various embodiments enable autonomous (or semi-autonomous) vehicles to broadcast their respective intentions and/or motion plans to surrounding vehicles, such as autonomous vehicles, semi-autonomous vehicles, non-autonomous vehicles, etc. There may be advantages to the communication of intentions and/or motion plan information throughout the entire autonomous vehicle software stack.
- In various embodiments, a motion plan may be a trajectory. An autonomous (or semi-autonomous) vehicle may know its current position and the direction and/or next action the autonomous (or semi-autonomous) vehicle may take to arrive at a next position at a next time. For example, the autonomous vehicle may have a trajectory to get to a next point from a current point. In various embodiments, a motion plan may include a trajectory and one or more vehicle descriptors associated with the vehicle and/or the vehicle owner and/or operator. For example, vehicle descriptors may include sensor perceptible attributes of the vehicle, such as vehicle color, vehicle license plate number, vehicle size, etc. As a further example, vehicle descriptors may include vehicle physical capabilities, such as vehicle type, vehicle turning radius, vehicle top speed, vehicle maximum acceleration, etc. As a still further example, vehicle descriptors may include vehicle location attributes, such as the vehicle's latitude and longitude, the vehicle's distance from and/or orientation to a known landmark in a coordinate plane, etc. Sharing the motion plan with other vehicles, such as other autonomous vehicles, other semi-autonomous vehicles, other non-autonomous vehicles, etc., may provide benefits to the other vehicles receiving the motion plan and/or the vehicle transmitting the motion plan.
- In various embodiments, a vehicle, such as an autonomous vehicle, a semi-autonomous vehicle, a non-autonomous vehicle, etc., may receive an intention message including at least a motion plan. In various embodiments, the motion plan may be shared among one or more (e.g., all) of the components throughout the autonomous vehicle stack and/or may be shared among one or more (e.g., all) other components of the vehicle. Sharing that motion plan may enable a transmitting vehicle to share its position and how that position is expected to evolve over time. Sharing a motion plan may enable an autonomous or semi-autonomous vehicle receiving the motion plan to determine how the motions of the vehicle transmitting the motion plan will affect the vehicle receiving the motion plan.
- In various embodiments, an intention message may include an identifier of the vehicle broadcasting the intention message, the current position of the vehicle broadcasting the intention message, a motion plan of the vehicle broadcasting the intention message, and/or other data related to the vehicle broadcasting the intention message.
- In various embodiments, a motion plan may include a vehicle's position and an indication of how the position is expected to change over time. In some embodiments, a motion plan may include a vehicle's trajectory. In some embodiments, a motion plan may include a vehicle's trajectory and one or more vehicle descriptors associated with the vehicle and/or the vehicle owner and/or operator. In some embodiments, a motion plan may include an indication of an expected next position of the reporting vehicle at a certain time. In some embodiments, a motion plan may include a vehicle's motion vector. In some embodiments, a motion plan may include an indication of the coordinate plane used for determining the vehicle's indicated position. In various embodiments, the motion plan may describe features of the vehicle transmitting the motion plan, such as its size, orientation, color, vehicle type, etc. In various embodiments, the motion plan may indicate the speed of the vehicle transmitting the motion plan, orientation of the vehicle transmitting the motion plan, acceleration of the vehicle transmitting the motion plan, or any other state information of the vehicle transmitting the motion plan. In various embodiments, the motion plan may indicate future actions (or intentions) of the vehicle transmitting the motion plan, such as "turning on left blinker in five seconds", "turning right in two seconds", "braking in one hundred feet", or any other type actions or intentions relevant to driving.
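The intention-message contents enumerated above might be represented as follows; the class and field names are hypothetical, since the embodiments describe the information carried rather than a wire format:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class MotionPlan:
    """Sketch of a motion plan: current position plus how it is expected to evolve."""
    position: Tuple[float, float]              # current position in a stated coordinate plane
    trajectory: List[Tuple[float, float]]      # expected positions over the plan horizon
    speed_mps: float = 0.0
    heading_deg: float = 0.0
    descriptors: dict = field(default_factory=dict)   # e.g. {"color": "red", "turning_radius_m": 5.5}
    intentions: List[str] = field(default_factory=list)  # e.g. ["braking in one hundred feet"]

@dataclass
class IntentionMessage:
    vehicle_id: str        # identifier of the broadcasting vehicle
    motion_plan: MotionPlan

def parse_intention_message(msg: IntentionMessage):
    """Extract the fields a receiving vehicle management system would route to its layers."""
    return msg.vehicle_id, msg.motion_plan.position, msg.motion_plan
```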
- In various embodiments, intention messages may be broadcast from an autonomous (or semi-autonomous) vehicle, such as by C-V2X transmission modes. In various embodiments, intention messages may be broadcast periodically, such as at a set time interval, in response to a change in intention of a broadcasting vehicle, etc.
- In various embodiments, a vehicle, such as an autonomous vehicle, a semi-autonomous vehicle, a non-autonomous vehicle, etc., may receive a broadcast intention message and may parse the received intention message to determine the broadcasting vehicle's identifier, the current position of the broadcasting vehicle, the motion plan, and/or any other information indicated within the intention message. In various embodiments, broadcast intention messages may be received and parsed regardless of the operating mode of the vehicle. For example, broadcast intention messages may be received by an autonomous or semi-autonomous vehicle being actively controlled by a driver at a given time. In various embodiments, the broadcasting vehicle's identifier, the current position of the broadcasting vehicle, the motion plan, and/or any other information indicated within the intention message may be provided to various hardware and software components of the receiving vehicle. For example, the broadcasting vehicle's identifier, the current position of the broadcasting vehicle, the motion plan, and/or any other information indicated within the intention message may be stored in one or more memory locations on the receiving vehicle, may be sent to one or more layers of an autonomous vehicle management system, may be sent to one or more layers of a vehicle management system, may be sent to a vehicle safety and crash avoidance system, etc. In various embodiments, motion plans may be used by one or more layers of the vehicle management system to augment various decision making and/or autonomous driving operations. As examples, a received motion plan may be used by the vehicle management system in: sensor fusion processing; behavior prediction; behavioral planning; motion planning; position localization; and/or sharing the burden of safety operations between vehicles.
- In various embodiments, a received motion plan broadcast from another vehicle may be used in sensor perception operations of a vehicle, such as an autonomous vehicle, a semi-autonomous vehicle, a non-autonomous vehicle, etc. Sensor perception may include operations to control where a sensor looks to confirm a detection of an object, such as another vehicle. In various embodiments, a motion plan for another vehicle may enable a region of interest for perceiving that vehicle in sensor data for the receiving vehicle to be narrowed to the perceptual space defined by the motion plan broadcasted by the autonomous vehicle. Sensor perceptual data on where the system would expect that car to be may confirm or deny whether the vehicle receiving the motion plan (e.g., an autonomous vehicle, a semi-autonomous vehicle, a non-autonomous vehicle, etc.) is handling detection of other vehicles correctly. In perception systems, vehicles must be picked out of potentially terabytes of data coming into the vehicle from many different cameras, many different radars, and many other different sensors. The received data from these sensors needs to be processed by the perception layers in fractions of seconds to identify objects in the data and get that processed sensor data to other layers in the vehicle management system. Being able to focus on a specific region of interest in the data in the space around the vehicle because an object, such as the motion plan broadcasting vehicle, is expected in that region of interest may increase the speed of detection of that vehicle when compared with analyzing the data as a whole. Additionally, when a motion plan indicates vehicle descriptors that may be sensor perceptible attributes, being able to focus the sensor toward a specific sensor perceptible attribute, such as vehicle color, vehicle size, etc., may increase the speed of detection of that vehicle when compared with analyzing the data as a whole.
In various embodiments, the motion plan may be used to determine the region of interest to apply to the raw sensor data to perceive a vehicle in that raw sensor data. In some embodiments, the motion plan may be used to select and/or modify the algorithm used to perceive vehicles in the raw sensor data. For example, a different algorithm may be used when the motion plan broadcasting vehicle is expected to be head-on to the receiving vehicle than when the motion plan broadcasting vehicle is expected to be perpendicular to the receiving vehicle. As a further example, a different detection threshold may be used depending on whether a motion plan indicates the broadcasting vehicle is expected to be in a given space. As a specific example, without a motion plan a receiving vehicle may only report detections that pass with a 90% confidence, while with a motion plan the receiving vehicle may report detections that pass a 50% confidence in a region of interest associated with the motion plan. In some embodiments, the motion plan may be used to confirm whether or not a specific vehicle is perceived in the raw sensor data. For example, the detection of a sensor perceptible attribute of a vehicle included in a motion plan as a vehicle descriptor (e.g., the vehicle's color, vehicle's license plate number, etc.) in the raw sensor data may confirm that the vehicle transmitting the motion plan was the actual vehicle perceived in a region of interest by another vehicle in the vicinity.
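The threshold example above (reporting detections at 90% confidence in general, but at 50% within a region of interest derived from a motion plan) can be sketched as follows; representing the region of interest as an axis-aligned box is an illustrative assumption:

```python
def report_detection(confidence, position, roi=None,
                     base_threshold=0.9, roi_threshold=0.5):
    """Decide whether to report a detection.

    The 0.9 / 0.5 thresholds mirror the illustrative figures in the text.
    `roi` is a hypothetical (xmin, ymin, xmax, ymax) box derived from a
    received motion plan, or None when no motion plan is available.
    """
    if roi is not None:
        xmin, ymin, xmax, ymax = roi
        x, y = position
        if xmin <= x <= xmax and ymin <= y <= ymax:
            # A vehicle is expected here, so a lower detection threshold applies.
            return confidence >= roi_threshold
    return confidence >= base_threshold
```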
- In various embodiments, a received motion plan broadcast from another vehicle may be used in sensor fusion operations of a vehicle, such as an autonomous vehicle, a semi-autonomous vehicle, a non-autonomous vehicle, etc. Sensor fusion operations may be operations to combine and associate sensor data and associate that sensor data with a tracked object. A motion plan may improve the performance of data association and tracking operations of a sensor fusion layer of a vehicle management system. The vehicle management system may use the motion plan to determine how to fuse all the raw detections of other vehicles in an environment together. For example, based on the motion plan broadcast for a vehicle, the receiving vehicle may be enabled to determine a radar detection, a LIDAR detection, and a camera detection are actually all the same vehicle because the detections all correlate to the received motion plan for that vehicle rather than initially treating the three detections as separate vehicles. As a specific example, different sensors of the receiving vehicle positively identifying one or more sensor perceptible attributes of a vehicle included in a motion plan may confirm that the sensors are detecting the same vehicle that sent the motion plan. Additionally, the motion plan may enable the receiving vehicle to determine that those vehicle detections will evolve together. The ability to compare vehicle detections to motion plans may enable outlier measurements to be discarded. For example, a tracked vehicle's motion plan indicating it intends to stay in a current lane may enable a detection not corresponding to that lane to be associated with a new object rather than the previously tracked vehicle. The presence of a motion plan may reduce uncertainty in the sensor fusion operations. 
The noisy detections from the perception layer may be compared with the underlying trajectories and/or vehicle descriptors in a motion plan to give an improved certainty to sensor fusion operations.
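The data-association step described above might be approximated by simple distance gating against the position the motion plan predicts for the broadcasting vehicle; the gate radius and sensor naming are illustrative assumptions:

```python
import math

def associate_with_motion_plan(detections, expected_position, gate_radius_m=2.0):
    """Split detections into those consistent with a received motion plan and outliers.

    `detections` maps a sensor name to an (x, y) position estimate. A detection
    within `gate_radius_m` of where the motion plan says the vehicle should be is
    fused with that tracked vehicle; others are treated as separate objects.
    The gate radius is an illustrative tuning parameter, not from the text.
    """
    fused, outliers = {}, {}
    for sensor, (x, y) in detections.items():
        dist = math.hypot(x - expected_position[0], y - expected_position[1])
        (fused if dist <= gate_radius_m else outliers)[sensor] = (x, y)
    return fused, outliers
```

In this sketch a radar, lidar, and camera detection that all fall inside the gate are fused as one tracked vehicle, as in the example above, while a detection outside the gate is kept as a candidate new object rather than corrupting the track.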
- In various embodiments, a received motion plan broadcast from another vehicle may be used in behavior prediction, behavioral planning, and/or motion planning operations of a vehicle, such as an autonomous vehicle, a semi-autonomous vehicle, a non-autonomous vehicle, etc. The broadcast of a motion plan by a reporting vehicle may enable a vehicle receiving that motion plan to predict the behavior of that vehicle with a definable certainty. The benefit of a motion plan and a position may be that the motion plan may be treated as the predicted behavior of that vehicle. This certainty in behavior prediction may reduce the dimensionality of the POMDP by knowing the behavioral/trajectory prediction exactly for surrounding cars that broadcast their respective motion plans. Sharing the motion plan may result in behavioral predictions with small or no uncertainty. Additionally, knowing the intended motion of a vehicle may eliminate the need to estimate that vehicle's motion, thereby reducing the computational resources associated with behavior prediction.
- In various embodiments, receiving a motion plan broadcast by a vehicle reduces the behavioral planning search to a smaller space of possibilities. Given the perfect (or near perfect) knowledge of the state and actions of other vehicles provided by those vehicles' broadcast motion plans, finding optimal actions for the receiving vehicle collapses from a POMDP to an MDP analysis in which there are a host of solutions suitable for online and real-time operations. As the uncertain number of infinite actions of the broadcasting vehicle are reduced to a finite number of intended actions by receiving the motion plan for that broadcasting vehicle, the behavioral planning layer of the vehicle stack may develop high level driving goals for the receiving vehicle. Additionally, a motion plan including vehicle descriptors that reflect vehicle physical capabilities of the vehicle transmitting the motion plan may reduce the space of possibilities for behavioral planning searches performed by a vehicle management system of a receiving vehicle by limiting the possible behaviors of the transmitting vehicle to be assessed by a vehicle behavior model to those within the transmitting vehicle's capabilities. For example, a type of vehicle indicated in a motion plan may be used by a receiving vehicle management system to constrain a maximum speed or acceleration of the vehicle in a vehicle behavior model based on the maximum speed or acceleration associated with that vehicle type. As another example, a turning radius of the vehicle indicated in a motion plan may be used by a receiving vehicle management system to constrain the potential turning paths modeled for the vehicle transmitting the motion plan to within the indicated turning radius.
- In various embodiments, a vehicle management system receiving a motion plan broadcast by a vehicle including vehicle descriptors that reflect vehicle physical capabilities of the transmitting vehicle may use that information to reduce the space of possibilities for behavioral planning searches within a vehicle behavior model, such as after observing the vehicle deviating from a behavior prediction. In response to determining that a vehicle's observed behavior does not conform to a behavior prediction made by the vehicle management system, the behavior prediction for that vehicle may be updated based at least in part on vehicle capabilities determined from the received motion plan. For example, the vehicle management system may use the other vehicle's physical capabilities to collapse the possible future behaviors of the other vehicle determined by a vehicle behavior model from a POMDP to an MDP analysis in which there are a host of solutions suitable for online and real-time operations. For example, a type of vehicle indicated in a received motion plan may be used by the receiving vehicle management system to constrain a maximum speed or acceleration of the other vehicle used in updating the behavior prediction made by the vehicle behavior model based on the maximum speed or acceleration associated with the type of the other vehicle. As another example, a turning radius of the other vehicle indicated in a motion plan may be used by the receiving vehicle management system to constrain the vehicle behavior model for potential turning paths of the vehicle transmitting the motion plan to within the indicated turning radius and use the vehicle behavior model updated in this manner to generate an updated other vehicle behavior prediction.
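Constraining a vehicle behavior model with capability descriptors, as described above, can be sketched as pruning candidate behaviors to those the transmitting vehicle is physically capable of; the descriptor key below is a hypothetical name:

```python
def constrain_behavior_prediction(candidate_speeds, descriptors):
    """Prune candidate future speeds for another vehicle using capability
    descriptors (e.g. top speed) carried in its motion plan.

    The `descriptors` key is hypothetical; the text only says such attributes
    (vehicle type, top speed, maximum acceleration, turning radius) may be
    included. An absent descriptor leaves the candidate set unconstrained.
    """
    top_speed = descriptors.get("top_speed_mps", float("inf"))
    return [s for s in candidate_speeds if s <= top_speed]
```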
- In various embodiments, a behavioral planning layer may provide a high-level driving goal to a motion planning layer. The motion planning layer may be responsible for actually planning the trajectory to execute that high-level maneuver and the motion planning layer may be responsible for ensuring the safety of executing that trajectory. Receiving a motion plan broadcast by another vehicle in the environment may enable the motion planning layer to perform fewer collision checks as a result of a prediction of the motion of that broadcasting vehicle being known with certainty. Collision checking is often a bottleneck of fast motion planning, so fewer checks may greatly speed up the overall autonomous driving algorithm. In various embodiments, a behavioral planning layer may provide high level driving goals and/or behavior models for other vehicles to a vehicle safety and crash avoidance system. The vehicle safety and crash avoidance system may use the high level driving goals and/or behavior models for other vehicles to perform safety checks, such as collision checks, etc., while the vehicle is driving to avoid crashes.
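With the broadcasting vehicle's trajectory known with certainty, collision checking reduces to comparing two time-aligned trajectories instead of checking against many sampled motion hypotheses; the clearance value below is an illustrative assumption:

```python
import math

def needs_collision_check(own_trajectory, other_trajectory, clearance_m=3.0):
    """Check the receiving vehicle's planned trajectory against another
    vehicle's broadcast trajectory, point by point in time.

    Each trajectory is a list of (x, y) positions at matching time steps;
    `clearance_m` is an illustrative minimum separation, not from the text.
    """
    for (ox, oy), (tx, ty) in zip(own_trajectory, other_trajectory):
        if math.hypot(ox - tx, oy - ty) < clearance_m:
            return True  # the two plans come too close at the same time step
    return False
```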
- In various embodiments, a received motion plan broadcast from another vehicle may be used in position localization operations of a vehicle, such as an autonomous vehicle, semi-autonomous vehicle, non-autonomous vehicle, etc. In various embodiments, a localization layer of a vehicle management system may leverage the observations of the broadcasting vehicle to improve the position estimate of the receiving vehicle. In some embodiments, the localization layer (or positioning layer) may utilize the sensor fusion output describing the state of other vehicles and compare it to where these other vehicles are expected to be based on their broadcasted intentions in their respective motion plans. This may be similar to how map fusion is often done to augment positioning by comparing observation of lanes and landmarks to their expected positions based on an a priori known map. In this manner, the own vehicle localization process may leverage the observations of other vehicles using their respective motion plans. Thus, the receiving vehicle may better localize itself based on the broadcast motion plans of surrounding autonomous vehicles.
- In various embodiments, localization may leverage a high-quality map in a global and/or local coordinate plane including known positions of landmarks in that map, such as lane markers, road signs, etc. In various embodiments, intention messages and/or motion plans broadcast by vehicles may indicate those vehicles' distance from, and/or orientation to, a known landmark in a coordinate plane, such as a local and/or global coordinate plane. For example, a motion plan may indicate the transmitting vehicle's distance from, and/or orientation to, a known landmark in a coordinate plane, such as a local and/or global coordinate plane, such as in a vehicle location attribute type vehicle descriptor. A vehicle, such as an autonomous vehicle, semi-autonomous vehicle, non-autonomous vehicle, etc., receiving the indications from the broadcasting vehicles may compare its observations of those broadcasting vehicles and their position relative to the known landmark to the distance from the known landmark in the intention message and/or motion plan to assist in the localization process performed on the receiving vehicle. For example, the receiving vehicle may compare its own observations to those in the received motion plans to determine whether there is an error or offset between its observations and those in the motion plans. An error or offset between its observations and those in the received motion plans may indicate to the receiving vehicle that it has localized its respective current position incorrectly. In response, the receiving vehicle may trigger a recalculation of its position. In various embodiments, the receiving vehicle may convert the observations in an intention message and/or motion plan from one coordinate plane to another coordinate plane. For example, the vehicle may convert the observations of a broadcasting vehicle from a global coordinate plane (e.g., latitude & longitude) to a local (e.g., street map-centric) coordinate plane.
As a specific example, two different autonomous vehicles may both broadcast their respective motion plans and those motion plans may be received by the receiving vehicle. Observations of a landmark in those two motion plans may match, but may be different from the observation of the receiving vehicle of that same landmark. The agreement of two observations in the different motion plans and their difference from the receiving vehicle's observations may indicate the receiving vehicle localized its position wrong. In response, the receiving vehicle may recalculate its position.
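The two-vehicle landmark-consistency check in the example above can be sketched as follows; the scalar landmark distances, tolerance, and function name are illustrative assumptions:

```python
def localization_offset(own_obs, plan_obs_list, tolerance_m=0.5):
    """Flag a likely localization error on the receiving vehicle.

    `own_obs` is the receiving vehicle's observed distance to a landmark;
    `plan_obs_list` holds the distances to the same landmark reported in
    received motion plans. If the reported values agree with each other but
    disagree with the local observation, the receiving vehicle (rather than
    the broadcasters) has probably localized itself incorrectly and should
    recalculate its position. The tolerance is an illustrative assumption.
    """
    if len(plan_obs_list) < 2:
        return False  # no consensus to compare against
    spread = max(plan_obs_list) - min(plan_obs_list)
    consensus = sum(plan_obs_list) / len(plan_obs_list)
    return spread <= tolerance_m and abs(own_obs - consensus) > tolerance_m
```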
- In various embodiments, a received motion plan broadcast from another vehicle may be used to share the burden of safety operations between vehicles, such as autonomous vehicles, semi-autonomous vehicles, etc. In various embodiments, broadcasting motion plans also allows for extensions like sharing the burden of safety between the equipped vehicles. A vehicle can verify that received motion plans are safe within its own belief of the environment, and issue warnings back to the broadcasting vehicle if safety appears compromised by the received motion plan. For example, the receiving vehicle may detect an object that will cause a motion plan of a broadcasting vehicle to be unsafe even though the broadcasting vehicle did not yet sense or otherwise observe that object. The receiving vehicle may determine the motion plan is unsafe and may indicate a warning to the broadcasting vehicle. The warning may include the observation of the object causing the motion plan to be unsafe.
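The safety verification and warning described above might be sketched as follows; the object naming, clearance value, and warning payload are illustrative assumptions:

```python
import math

def verify_received_plan(plan_trajectory, detected_objects, clearance_m=2.0):
    """Verify a received motion plan against the receiving vehicle's own
    belief of the environment.

    `plan_trajectory` is the broadcaster's planned (x, y) positions;
    `detected_objects` maps object identifiers to (x, y) positions the
    receiving vehicle has sensed (possibly objects the broadcaster has not).
    Returns a warning payload to send back to the broadcaster if the plan
    passes too close to any detected object, else None.
    """
    for obj_id, (x, y) in detected_objects.items():
        for (px, py) in plan_trajectory:
            if math.hypot(px - x, py - y) < clearance_m:
                # Include the offending observation so the broadcaster can replan.
                return {"warning": "unsafe_motion_plan",
                        "object": obj_id,
                        "position": (x, y)}
    return None
```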
- Another advantage of broadcasting intentions is that nothing is impacted in the vehicle management system if no other vehicles in the vicinity broadcast intention messages. If no intention messages are received, the vehicle management system may revert to making the vehicle responsible for the full inference of the state of the world.
- Various embodiments may be implemented within a variety of vehicles, an example vehicle 100 of which is illustrated in FIGS. 1A and 1B. With reference to FIGS. 1A and 1B, a vehicle 100 may include a control unit 140 and a plurality of sensors 102-138, including satellite geopositioning system receivers 108, occupancy sensors, tire pressure sensors, cameras, microphones, impact sensors 130, radar 132, and lidar 138. The plurality of sensors 102-138, disposed in or on the vehicle, may be used for various purposes, such as autonomous and semi-autonomous navigation and control, crash avoidance, position determination, etc., as well as to provide sensor data regarding objects and people in or on the vehicle 100. The sensors 102-138 may include one or more of a wide variety of sensors capable of detecting a variety of information useful for navigation and collision avoidance. Each of the sensors 102-138 may be in wired or wireless communication with a control unit 140, as well as with each other. In particular, the sensors may include one or more cameras, radar 132, lidar 138, IR sensors, and ultrasonic sensors. The sensors may further include tire pressure sensors, satellite geopositioning sensors 108, accelerometers, vibration sensors, gyroscopes, gravimeters, impact sensors 130, force meters, stress meters, strain sensors, fluid sensors, chemical sensors, gas content analyzers, pH sensors, radiation sensors, Geiger counters, neutron detectors, biological material sensors, microphones, and occupancy sensors.
- The vehicle control unit 140 may be configured with processor-executable instructions to perform various embodiments using information received from various sensors, particularly the cameras. The control unit 140 may supplement the processing of camera images using distance and relative position (e.g., relative bearing angle) that may be obtained from radar 132 and/or lidar 138 sensors. The control unit 140 may further be configured to control steering, braking and speed of the vehicle 100 when operating in an autonomous or semi-autonomous mode using information regarding other vehicles determined using various embodiments. -
FIG. 1C is a component block diagram illustrating a system 150 of components and support systems suitable for implementing various embodiments. With reference to FIGS. 1A, 1B, and 1C, a vehicle 100 may include a control unit 140, which may include various circuits and devices used to control the operation of the vehicle 100. In the example illustrated in FIG. 1C, the control unit 140 includes a processor 164, memory 166, an input module 168, an output module 170 and a radio module 172. The control unit 140 may be coupled to and configured to control drive control components 154, navigation components 156, and one or more sensors 158 of the vehicle 100. - As used herein, the terms "component," "system," "unit," "module," and the like include a computer-related entity, such as, but not limited to, hardware, firmware, a combination of hardware and software, software, or software in execution, which are configured to perform particular operations or functions. For example, a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a communication device and the communication device may be referred to as a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one processor or core and/or distributed between two or more processors or cores. In addition, these components may execute from various non-transitory computer readable media having various instructions and/or data structures stored thereon. Components may communicate by way of local and/or remote processes, function or procedure calls, electronic signals, data packets, memory read/writes, and other known computer, processor, and/or process related communication methodologies.
- The
control unit 140 may include a processor 164 that may be configured with processor-executable instructions to control maneuvering, navigation, and/or other operations of the vehicle 100, including operations of various embodiments. The processor 164 may be coupled to the memory 166. The control unit 140 may include the input module 168, the output module 170, and the radio module 172. - The
radio module 172 may be configured for wireless communication. Theradio module 172 may exchange signals 182 (e.g., command signals for controlling maneuvering, signals from navigation facilities, etc.) with anetwork transceiver 180, and may provide thesignals 182 to theprocessor 164 and/or thenavigation unit 156. In some embodiments, theradio module 172 may enable thevehicle 100 to communicate with awireless communication device 190 through awireless communication link 192. Thewireless communication link 192 may be a bidirectional or unidirectional communication link, and may use one or more communication protocols. - The
input module 168 may receive sensor data from one ormore vehicle sensors 158 as well as electronic signals from other components, including thedrive control components 154 and thenavigation components 156. Theoutput module 170 may be used to communicate with or activate various components of thevehicle 100, including thedrive control components 154, thenavigation components 156, and the sensor(s) 158. - The
control unit 140 may be coupled to thedrive control components 154 to control physical elements of thevehicle 100 related to maneuvering and navigation of the vehicle, such as the engine, motors, throttles, steering elements, flight control elements, braking or deceleration elements, and the like. Thedrive control components 154 may also include components that control other devices of the vehicle, including environmental controls (e.g., air conditioning and heating), external and/or interior lighting, interior and/or exterior informational displays (which may include a display screen or other devices to display information), safety devices (e.g., haptic devices, audible alarms, etc.), and other similar devices. - The
control unit 140 may be coupled to the navigation components 156, and may receive data from the navigation components 156 and be configured to use such data to determine the present position and orientation of the vehicle 100, as well as an appropriate course toward a destination. In various embodiments, the navigation components 156 may include or be coupled to a global navigation satellite system (GNSS) receiver system (e.g., one or more Global Positioning System (GPS) receivers) enabling the vehicle 100 to determine its current position using GNSS signals. Alternatively, or in addition, the navigation components 156 may include radio navigation receivers for receiving navigation beacons or other signals from radio nodes, such as Wi-Fi access points, cellular network sites, radio stations, remote computing devices, other vehicles, etc. Through control of the drive control elements 154, the processor 164 may control the vehicle 100 to navigate and maneuver. The processor 164 and/or the navigation components 156 may be configured to communicate with a server 184 on a network 186 (e.g., the Internet) using a wireless connection 182 with a cellular data network 180 to receive commands to control maneuvering, receive data useful in navigation, provide real-time position reports, and assess other data. - The control unit 140 may be coupled to one or
more sensors 158. The sensor(s) 158 may include the sensors 102-138 as described, and may be configured to provide a variety of data to the processor 164. - While the
control unit 140 is described as including separate components, in some embodiments some or all of the components (e.g., theprocessor 164, thememory 166, theinput module 168, theoutput module 170, and the radio module 172) may be integrated in a single device or module, such as a system-on-chip (SOC) processing device. Such an SOC processing device may be configured for use in vehicles and be configured, such as with processor-executable instructions executing in theprocessor 164, to perform operations of various embodiments when installed into a vehicle. -
FIG. 2A illustrates an example of subsystems, computational elements, computing devices or units within a vehicle management system 200, which may be utilized within a vehicle 100. With reference to FIGS. 1A-2A, in some embodiments, the various computational elements, computing devices or units within the vehicle management system 200 may be implemented within a system of interconnected computing devices (i.e., subsystems) that communicate data and commands to each other (e.g., as indicated by the arrows in FIG. 2A). In other embodiments, the various computational elements, computing devices or units within the vehicle management system 200 may be implemented within a single computing device, such as separate threads, processes, algorithms or computational elements. Therefore, each subsystem/computational element illustrated in FIG. 2A is also generally referred to herein as a “layer” within a computational “stack” that constitutes the vehicle management system 200. However, the use of the terms “layer” and “stack” in describing various embodiments is not intended to imply or require that the corresponding functionality is implemented within a single autonomous (or semi-autonomous) vehicle management system computing device, although that is a potential implementation embodiment. Rather, the use of the term “layer” is intended to encompass subsystems with independent processors, computational elements (e.g., threads, algorithms, subroutines, etc.) running in one or more computing devices, and combinations of subsystems and computational elements. - In various embodiments, the vehicle
management system stack 200 may include aradar perception layer 202, acamera perception layer 204, apositioning engine layer 206, a map fusion andarbitration layer 208, aroute planning layer 210, sensor fusion and road world model (RWM)management layer 212, motion planning andcontrol layer 214, and behavioral planning andprediction layer 216. The layers 202-216 are merely examples of some layers in one example configuration of the vehiclemanagement system stack 200. In other configurations consistent with various embodiments, other layers may be included, such as additional layers for other perception sensors (e.g., LIDAR perception layer, etc.), additional layers for planning and/or control, additional layers for modeling, etc., and/or certain of the layers 202-216 may be excluded from the vehiclemanagement system stack 200. Each of the layers 202-216 may exchange data, computational results and commands as illustrated by the arrows inFIG. 2A . Further, the vehiclemanagement system stack 200 may receive and process data from sensors (e.g., radar, lidar, cameras, inertial measurement units (IMU) etc.), navigation systems (e.g., GPS receivers, IMUs, etc.), vehicle networks (e.g., Controller Area Network (CAN) bus), and databases in memory (e.g., digital map data). The vehiclemanagement system stack 200 may output vehicle control commands or signals to the drive by wire (DBW) system/control unit 220, which is a system, subsystem or computing device that interfaces directly with vehicle steering, throttle and brake controls. The configuration of the vehiclemanagement system stack 200 and DBW system/control unit 220 illustrated inFIG. 2A is merely an example configuration and other configurations of a vehicle management system and other vehicle components may be used in the various embodiments. As an example, the configuration of the vehiclemanagement system stack 200 and DBW system/control unit 220 illustrated inFIG. 
2A may be used in a vehicle configured for autonomous or semi-autonomous operation while a different configuration may be used in a non-autonomous vehicle. - The
radar perception layer 202 may receive data from one or more detection and ranging sensors, such as radar (e.g., 132) and/or lidar (e.g., 138), and process the data to recognize and determine locations of other vehicles and objects within a vicinity of thevehicle 100. Theradar perception layer 202 may include use of neural network processing and artificial intelligence methods to recognize objects and vehicles, and pass such information on to the sensor fusion andRWM management layer 212. - The
camera perception layer 204 may receive data from one or more cameras, such as cameras (e.g., 122, 136), and process the data to recognize and determine locations of other vehicles and objects within a vicinity of thevehicle 100. Thecamera perception layer 204 may include use of neural network processing and artificial intelligence methods to recognize objects and vehicles, and pass such information on to the sensor fusion andRWM management layer 212. - The
positioning engine layer 206 may receive data from various sensors and process the data to determine a position of the vehicle 100. The various sensors may include, but are not limited to, a GPS sensor, an IMU, and/or other sensors connected via a CAN bus. The positioning engine layer 206 may also utilize inputs from one or more cameras, such as cameras (e.g., 122, 136) and/or any other available sensor, such as radars, LIDARs, etc. - The map fusion and
arbitration layer 208 may access data within a high definition (HD) map database and receive output received from thepositioning engine layer 206 and process the data to further determine the position of thevehicle 100 within the map, such as location within a lane of traffic, position within a street map, etc. The HD map database may be stored in a memory (e.g., memory 166). For example, the map fusion andarbitration layer 208 may convert latitude and longitude information from GPS into locations within a surface map of roads contained in the HD map database. GPS position fixes include errors, so the map fusion andarbitration layer 208 may function to determine a best guess location of the vehicle within a roadway based upon an arbitration between the GPS coordinates and the HD map data. For example, while GPS coordinates may place the vehicle near the middle of a two-lane road in the HD map, the map fusion andarbitration layer 208 may determine from the direction of travel that the vehicle is most likely aligned with the travel lane consistent with the direction of travel. The map fusion andarbitration layer 208 may pass map-based location information to the sensor fusion andRWM management layer 212. - The
route planning layer 210 may utilize the HD map, as well as inputs from an operator or dispatcher to plan a route to be followed by thevehicle 100 to a particular destination. Theroute planning layer 210 may pass map-based location information to the sensor fusion andRWM management layer 212. However, the use of a prior map by other layers, such as the sensor fusion andRWM management layer 212, etc., is not required. For example, other stacks may operate and/or control the vehicle based on perceptual data alone without a provided map, constructing lanes, boundaries, and the notion of a local map as perceptual data is received. - The sensor fusion and
RWM management layer 212 may receive data and outputs produced by theradar perception layer 202,camera perception layer 204, map fusion andarbitration layer 208, androute planning layer 210, and use some or all of such inputs to estimate or refine the location and state of thevehicle 100 in relation to the road, other vehicles on the road, and other objects within a vicinity of thevehicle 100. For example, the sensor fusion andRWM management layer 212 may combine imagery data from thecamera perception layer 204 with arbitrated map location information from the map fusion andarbitration layer 208 to refine the determined position of the vehicle within a lane of traffic. As another example, the sensor fusion andRWM management layer 212 may combine object recognition and imagery data from thecamera perception layer 204 with object detection and ranging data from theradar perception layer 202 to determine and refine the relative position of other vehicles and objects in the vicinity of the vehicle. As another example, the sensor fusion andRWM management layer 212 may receive information from vehicle-to-vehicle (V2V) communications (such as via the CAN bus) regarding other vehicle positions and directions of travel, and combine that information with information from theradar perception layer 202 and thecamera perception layer 204 to refine the locations and motions of other vehicles. The sensor fusion andRWM management layer 212 may output refined location and state information of thevehicle 100, as well as refined location and state information of other vehicles and objects in the vicinity of the vehicle, to the motion planning andcontrol layer 214 and/or the behavior planning andprediction layer 216. - As a further example, the sensor fusion and
RWM management layer 212 may use dynamic traffic control instructions directing thevehicle 100 to change speed, lane, direction of travel, or other navigational element(s), and combine that information with other received information to determine refined location and state information. The sensor fusion andRWM management layer 212 may output the refined location and state information of thevehicle 100, as well as refined location and state information of other vehicles and objects in the vicinity of thevehicle 100, to the motion planning andcontrol layer 214, the behavior planning andprediction layer 216 and/or devices remote from thevehicle 100, such as a data server, other vehicles, etc., via wireless communications, such as through C-V2X connections, other wireless connections, etc. - As a still further example, the sensor fusion and
RWM management layer 212 may monitor perception data from various sensors, such as perception data from aradar perception layer 202,camera perception layer 204, other perception layer, etc., and/or data from one or more sensors themselves to analyze conditions in the vehicle sensor data. The sensor fusion andRWM management layer 212 may be configured to detect conditions in the sensor data, such as sensor measurements being at, above, or below a threshold, certain types of sensor measurements occurring, etc., and may output the sensor data as part of the refined location and state information of thevehicle 100 provided to the behavior planning andprediction layer 216 and/or devices remote from thevehicle 100, such as a data server, other vehicles, etc., via wireless communications, such as through C-V2X connections, other wireless connections, etc. - The refined location and state information may include vehicle descriptors associated with the vehicle and the vehicle owner and/or operator, such as: vehicle specifications (e.g., size, weight, color, on board sensor types, etc.); vehicle position, speed, acceleration, direction of travel, attitude, orientation, destination, fuel/power level(s), and other state information; vehicle emergency status (e.g., is the vehicle an emergency vehicle or private individual in an emergency); vehicle restrictions (e.g., heavy/wide load, turning restrictions, high occupancy vehicle (HOV) authorization, etc.); capabilities (e.g., all-wheel drive, four-wheel drive, snow tires, chains, connection types supported, on board sensor operating statuses, on board sensor resolution levels, etc.) 
of the vehicle; equipment problems (e.g., low tire pressure, weak brakes, sensor outages, etc.); owner/operator travel preferences (e.g., preferred lane, roads, routes, and/or destinations, preference to avoid tolls or highways, preference for the fastest route, etc.); permissions to provide sensor data to a data agency server (e.g., 184); and/or owner/operator identification information.
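A refined-state record carrying descriptors like those listed above might be assembled as follows; all field names here are illustrative stand-ins, not a format defined by this disclosure.

```python
def build_refined_state(position, kinematics, descriptors):
    """Assemble refined location/state information with the kinds of
    vehicle descriptors enumerated above (hypothetical field names)."""
    return {
        'position': position,        # e.g. {'lat': ..., 'lon': ...}
        **kinematics,                # speed, heading, acceleration, ...
        'descriptors': descriptors,  # specs, capabilities, problems, ...
    }

state = build_refined_state(
    position={'lat': 37.33, 'lon': -121.89},
    kinematics={'speed_mps': 20.0, 'heading_deg': 90.0, 'accel_mps2': 0.0},
    descriptors={
        'vehicle_type': 'sedan',
        'color': 'blue',
        'emergency': False,
        'restrictions': [],
        'capabilities': {'all_wheel_drive': True},
        'equipment_problems': ['low_tire_pressure'],
    },
)
```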
- The behavioral planning and
prediction layer 216 of the autonomousvehicle system stack 200 may use the refined location and state information of thevehicle 100 and location and state information of other vehicles and objects output from the sensor fusion andRWM management layer 212 to predict future behaviors of other vehicles and/or objects. For example, the behavioral planning andprediction layer 216 may use such information to predict future relative positions of other vehicles in the vicinity of the vehicle based on own vehicle position and velocity and other vehicle positions and velocity. Such predictions may take into account information from the HD map and route planning to anticipate changes in relative vehicle positions as host and other vehicles follow the roadway. The behavioral planning andprediction layer 216 may output other vehicle and object behavior and location predictions to the motion planning andcontrol layer 214. Additionally, the behavior planning andprediction layer 216 may use object behavior in combination with location predictions to plan and generate control signals for controlling the motion of thevehicle 100. For example, based on route planning information, refined location in the roadway information, and relative locations and motions of other vehicles, the behavior planning andprediction layer 216 may determine that thevehicle 100 needs to change lanes and accelerate, such as to maintain or achieve minimum spacing from other vehicles, and/or prepare for a turn or exit. As a result, the behavior planning andprediction layer 216 may calculate or otherwise determine a steering angle for the wheels and a change to the throttle setting to be commanded to the motion planning andcontrol layer 214 and DBW system/control unit 220 along with such various parameters necessary to effectuate such a lane change and acceleration. One such parameter may be a computed steering wheel command angle. - The motion planning and
control layer 214 may receive data and information outputs from the sensor fusion andRWM management layer 212 and other vehicle and object behavior as well as location predictions from the behavior planning andprediction layer 216, and use this information to plan and generate control signals for controlling the motion of thevehicle 100 and to verify that such control signals meet safety requirements for thevehicle 100. For example, based on route planning information, refined location in the roadway information, and relative locations and motions of other vehicles, the motion planning andcontrol layer 214 may verify and pass various control commands or instructions to the DBW system/control unit 220. - The DBW system/
control unit 220 may receive the commands or instructions from the motion planning andcontrol layer 214 and translate such information into mechanical control signals for controlling wheel angle, brake and throttle of thevehicle 100. For example, DBW system/control unit 220 may respond to the computed steering wheel command angle by sending corresponding control signals to the steering wheel controller. - In various embodiments, the vehicle
management system stack 200 may include functionality that performs safety checks or oversight of various commands, planning or other decisions of various layers that could impact vehicle and occupant safety. Such safety check or oversight functionality may be implemented within a dedicated layer or distributed among various layers and included as part of the functionality. In some embodiments, a variety of safety parameters may be stored in memory and the safety checks or oversight functionality may compare a determined value (e.g., relative spacing to a nearby vehicle, distance from the roadway centerline, etc.) to corresponding safety parameter(s), and issue a warning or command if the safety parameter is or will be violated. For example, a safety or oversight function in the behavior planning and prediction layer 216 (or in a separate layer) may determine the current or future separation distance between another vehicle and the vehicle 100 (e.g., based on the world model refined by the sensor fusion and RWM management layer 212), compare that separation distance to a safe separation distance parameter stored in memory, and issue instructions to the motion planning and control layer 214 to speed up, slow down or turn if the current or predicted separation distance violates the safe separation distance parameter. As another example, safety or oversight functionality in the motion planning and control layer 214 (or a separate layer) may compare a determined or commanded steering wheel command angle to a safe wheel angle limit or parameter, and issue an override command and/or alarm in response to the commanded angle exceeding the safe wheel angle limit. - Some safety parameters stored in memory may be static (i.e., unchanging over time), such as maximum vehicle speed.
Other safety parameters stored in memory may be dynamic in that the parameters are determined or updated continuously or periodically based on vehicle state information and/or environmental conditions. Non-limiting examples of safety parameters include maximum safe speed, maximum brake pressure, maximum acceleration, and the safe wheel angle limit, all of which may be a function of roadway and weather conditions.
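A minimal sketch of such a separation-distance safety check with a dynamic safe-gap parameter: the two-second headway rule and the road/weather scaling factor are assumed policies for illustration, not values from this disclosure.

```python
def check_separation(own_speed_mps, gap_m, road_factor=1.0):
    """Compare the current gap to a nearby vehicle against a dynamic
    safe-separation parameter (two-second headway scaled by a
    road/weather factor), and issue a command on violation."""
    safe_gap_m = 2.0 * own_speed_mps * road_factor  # dynamic safety parameter
    if gap_m < safe_gap_m:
        return {'violation': True, 'command': 'slow_down',
                'safe_gap_m': safe_gap_m}
    return {'violation': False, 'safe_gap_m': safe_gap_m}

# At 25 m/s, a 40 m gap violates the 50 m dynamic safe-separation
# parameter; in poor weather (road_factor > 1) even 60 m may violate it.
result = check_separation(own_speed_mps=25.0, gap_m=40.0)
```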
-
FIG. 2B illustrates an example of subsystems, computational elements, computing devices or units within a vehicle management system 250, which may be utilized within a vehicle 100. With reference to FIGS. 1A-2B, in some embodiments, the layers of the vehicle management system stack 250 may be similar to those described with reference to FIG. 2A, and the vehicle management system stack 250 may operate similarly to the vehicle management system stack 200, except that the vehicle management system stack 250 may pass various data or instructions to a vehicle safety and crash avoidance system 252 rather than to the DBW system/control unit 220. For example, the configuration of the vehicle management system stack 250 and the vehicle safety and crash avoidance system 252 illustrated in FIG. 2B may be used in a non-autonomous vehicle. - In various embodiments, the behavioral planning and
prediction layer 216 and/or sensor fusion andRWM management layer 212 may output data to the vehicle safety andcrash avoidance system 252. For example, the sensor fusion andRWM management layer 212 may output sensor data as part of refined location and state information of thevehicle 100 provided to the vehicle safety andcrash avoidance system 252. The vehicle safety andcrash avoidance system 252 may use the refined location and state information of thevehicle 100 to make safety determinations relative to thevehicle 100 and/or occupants of thevehicle 100. As another example, the behavioral planning andprediction layer 216 may output behavior models and/or predictions related to the motion of other vehicles to the vehicle safety andcrash avoidance system 252. The vehicle safety andcrash avoidance system 252 may use the behavior models and/or predictions related to the motion of other vehicles to make safety determinations relative to thevehicle 100 and/or occupants of thevehicle 100. - In various embodiments, the vehicle safety and
crash avoidance system 252 may include functionality that performs safety checks or oversight of various commands, planning, or other decisions of various layers, as well as human driver actions, that could impact vehicle and occupant safety. In some embodiments, a variety of safety parameters may be stored in memory and the vehicle safety and crash avoidance system 252 may compare a determined value (e.g., relative spacing to a nearby vehicle, distance from the roadway centerline, etc.) to corresponding safety parameter(s), and issue a warning or command if the safety parameter is or will be violated. For example, the vehicle safety and crash avoidance system 252 may determine the current or future separation distance between another vehicle and the vehicle 100 (e.g., based on the world model refined by the sensor fusion and RWM management layer 212), compare that separation distance to a safe separation distance parameter stored in memory, and issue instructions to a driver to speed up, slow down or turn if the current or predicted separation distance violates the safe separation distance parameter. As another example, the vehicle safety and crash avoidance system 252 may compare a human driver's change in steering wheel angle to a safe wheel angle limit or parameter, and issue an override command and/or alarm in response to the steering wheel angle exceeding the safe wheel angle limit. -
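The steering-angle oversight described above can be sketched as a simple clamp-and-alarm check; the return fields and the clamping behavior are illustrative assumptions, not the disclosed mechanism.

```python
def oversee_steering(commanded_deg, safe_limit_deg):
    """Compare a commanded (or driver-applied) steering wheel angle to a
    stored safe wheel angle limit; on violation, clamp the angle and
    raise an override/alarm."""
    if abs(commanded_deg) > safe_limit_deg:
        clamped = max(-safe_limit_deg, min(safe_limit_deg, commanded_deg))
        return {'override': True, 'angle_deg': clamped, 'alarm': True}
    return {'override': False, 'angle_deg': commanded_deg, 'alarm': False}
```

For example, a 45-degree command against a 30-degree limit is clamped to 30 degrees with the override flag and alarm set.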
FIG. 3 illustrates an example system-on-chip (SOC) architecture of a processing device SOC 300 suitable for implementing various embodiments in vehicles. With reference to FIGS. 1A-3, the processing device SOC 300 may include a number of heterogeneous processors, such as a digital signal processor (DSP) 303, a modem processor 304, an image and object recognition processor 306, a mobile display processor 307, an applications processor 308, and a resource and power management (RPM) processor 317. The processing device SOC 300 may also include one or more coprocessors 310 (e.g., a vector co-processor) connected to one or more of the heterogeneous processors. For example, the processing device SOC 300 may include a processor that executes a first type of operating system (e.g., FreeBSD, LINUX, OS X, etc.) and a processor that executes a second type of operating system (e.g., Microsoft Windows). In some embodiments, the applications processor 308 may be the main processor of the SOC 300, such as a central processing unit (CPU), microprocessor unit (MPU), arithmetic logic unit (ALU), etc. The graphics processor 306 may be a graphics processing unit (GPU). - The
processing device SOC 300 may include analog circuitry andcustom circuitry 314 for managing sensor data, analog-to-digital conversions, wireless data transmissions, and for performing other specialized operations, such as processing encoded audio and video signals for rendering in a web browser. Theprocessing device SOC 300 may further include system components andresources 316, such as voltage regulators, oscillators, phase-locked loops, peripheral bridges, data controllers, memory controllers, system controllers, access ports, timers, and other similar components used to support the processors and software clients (e.g., a web browser) running on a computing device. - The
processing device SOC 300 may also include specialized circuitry for camera actuation and management (CAM) 305 that includes, provides, controls and/or manages the operations of one or more cameras 122, 136 (e.g., a primary camera, webcam, 3D camera, etc.), the video display data from camera firmware, image processing, video preprocessing, video front-end (VFE), in-line JPEG, high definition video codec, etc. The CAM 305 may be an independent processing unit and/or include an independent or internal clock. - In some embodiments, the image and object
recognition processor 306 may be configured with processor-executable instructions and/or specialized hardware configured to perform image processing and object recognition analyses involved in various embodiments. For example, the image and objectrecognition processor 306 may be configured to perform the operations of processing images received from cameras (e.g., 122, 136) via theCAM 305 to recognize and/or identify other vehicles, and otherwise perform functions of thecamera perception layer 204 as described. In some embodiments, theprocessor 306 may be configured to process radar or lidar data and perform functions of theradar perception layer 202 as described. - The system components and
resources 316, analog and custom circuitry 314, and/or CAM 305 may include circuitry to interface with peripheral devices, such as cameras 122, 136, radar 132, lidar 138, electronic displays, wireless communication devices, external memory chips, etc. The processors 303, 304, 306, 307, 308 may be interconnected to one or more memory elements 312, system components and resources 316, analog and custom circuitry 314, CAM 305, and the RPM processor 317 via an interconnection/bus module 324, which may include an array of reconfigurable logic gates and/or implement a bus architecture (e.g., CoreConnect, AMBA, etc.). Communications may be provided by advanced interconnects, such as high-performance networks-on-chip (NoCs). - The
processing device SOC 300 may further include an input/output module (not illustrated) for communicating with resources external to the SOC, such as a clock 318 and a voltage regulator 320. Resources external to the SOC (e.g., the clock 318, the voltage regulator 320) may be shared by two or more of the internal SOC processors/cores (e.g., the DSP 303, the modem processor 304, the graphics processor 306, the applications processor 308, etc.). - In some embodiments, the
processing device SOC 300 may be included in a control unit (e.g., 140) for use in a vehicle (e.g., 100). The control unit may include communication links for communication with a telephone network (e.g., 180), the Internet, and/or a network server (e.g., 184) as described. - The
processing device SOC 300 may also include additional hardware and/or software components that are suitable for collecting sensor data from sensors, including motion sensors (e.g., accelerometers and gyroscopes of an IMU), user interface elements (e.g., input buttons, touch screen display, etc.), microphone arrays, sensors for monitoring physical conditions (e.g., location, direction, motion, orientation, vibration, pressure, etc.), cameras, compasses, GPS receivers, communications circuitry (e.g., Bluetooth®, WLAN, WiFi, etc.), and other well-known components of modern electronic devices. -
FIG. 4 illustrates a method 400 of broadcasting an intention message according to various embodiments. With reference to FIGS. 1A-4, the method 400 may be implemented in a processor (e.g., 164), a processing device (e.g., 300), and/or a control unit (e.g., 140) (variously referred to as a “processor”) of a vehicle (e.g., 100). In some embodiments, the method 400 may be performed by one or more layers within a vehicle management system stack, such as a vehicle management stack 200, a vehicle management stack 250, etc. In other embodiments, the method 400 may be performed by a processor independently from, but in conjunction with, a vehicle control system stack, such as a vehicle management stack 200, a vehicle management stack 250, etc. For example, the method 400 may be implemented as a stand-alone software module or within dedicated hardware that monitors data and commands from/within the vehicle management system stack (e.g., vehicle management stack 200, vehicle management stack 250, etc.). - In
block 402, the processor may determine a motion plan for the vehicle. In various embodiments, a motion plan may include a position and an indication of how the position is expected to change over time. In some embodiments, a motion plan may include a trajectory. An autonomous or semi-autonomous vehicle may know its current position and the direction and/or next action it may take to arrive at a next position at a next time. For example, the autonomous vehicle may have a trajectory to get to a next point from a current point. In various embodiments, a motion plan may include a trajectory and one or more vehicle descriptors associated with the vehicle and/or the vehicle owner and/or operator. For example, vehicle descriptors may include sensor-perceptible attributes of the vehicle, such as vehicle color, vehicle license plate number, vehicle size, etc. As a further example, vehicle descriptors may include vehicle physical capabilities, such as vehicle type, vehicle turning radius, vehicle top speed, vehicle maximum acceleration, etc. As a still further example, vehicle descriptors may include vehicle location attributes, such as the vehicle's latitude and longitude, the vehicle's distance from, and/or orientation to, a known landmark in a coordinate plane, etc. In some embodiments, a motion plan may include an indication of an expected next position at a certain time. In some embodiments, a motion plan may include a motion vector. In some embodiments, a motion plan may include an indication of the coordinate plane used for determining the indicated position. In various embodiments, the motion plan may describe features of the vehicle transmitting the motion plan, such as its size, orientation, color, vehicle type, etc.
In various embodiments, the motion plan may indicate the speed of the vehicle transmitting the motion plan, the orientation of the vehicle transmitting the motion plan, the acceleration of the vehicle transmitting the motion plan, or any other state information of the vehicle transmitting the motion plan. In various embodiments, the motion plan may indicate future actions (or intentions) of the vehicle, such as “turning on left blinker in five seconds”, “turning right in two seconds”, “braking in one hundred feet”, or any other type of actions or intentions relevant to driving. - In
block 404, the processor may generate an intention message based at least in part on the determined motion plan. In various embodiments, an intention message may include an identifier of the vehicle broadcasting the intention message, the current position of the vehicle broadcasting the intention message, the motion plan of the vehicle broadcasting the intention message, and/or other data related to the vehicle broadcasting the intention message. - In
block 406, the processor may broadcast the intention message. In various embodiments, intention messages may be broadcast from an autonomous vehicle, such as by C-V2X transmission modes. In various embodiments, intention messages may be broadcast periodically, such as at a set time interval, in response to a change in intention of a broadcasting vehicle, etc. -
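The determination, generation, and broadcast operations of blocks 402-406 can be pictured with the following sketch. This is an illustrative Python sketch, not the claimed implementation: the field names, the JSON encoding, the 100 ms default interval, and the `send_fn` callback are all assumptions, and a real vehicle would transmit over C-V2X rather than through a Python callable.

```python
import json
import time

def build_intention_message(vehicle_id, position, motion_plan):
    """Assemble an intention message: the broadcasting vehicle's
    identifier, its current position, and its motion plan (block 404).
    Field names are illustrative only."""
    return json.dumps({
        "vehicle_id": vehicle_id,
        "position": position,        # e.g. (latitude, longitude)
        "motion_plan": motion_plan,  # trajectory plus vehicle descriptors
        "timestamp": time.time(),
    })

class IntentionBroadcaster:
    """Broadcast periodically (at a set time interval) and immediately
    when the vehicle's intention changes, per block 406."""

    def __init__(self, send_fn, interval_s=0.1):
        self.send_fn = send_fn        # stand-in for a C-V2X transmit call
        self.interval_s = interval_s
        self._last_sent = float("-inf")
        self._last_plan = None

    def maybe_broadcast(self, vehicle_id, position, motion_plan, now_s):
        interval_elapsed = (now_s - self._last_sent) >= self.interval_s
        plan_changed = motion_plan != self._last_plan
        if interval_elapsed or plan_changed:
            self.send_fn(build_intention_message(vehicle_id, position, motion_plan))
            self._last_sent = now_s
            self._last_plan = motion_plan
            return True
        return False
```

A `motion_plan` here could be a dict holding a trajectory (a list of (x, y, time) waypoints) and a descriptors sub-dict (color, license plate number, turning radius, top speed, and so on), mirroring the vehicle descriptors described above.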
FIG. 5 illustrates a method 500 of extracting a motion plan from a broadcast intention message according to various embodiments. With reference to FIGS. 1A-5, the method 500 may be implemented in a processor (e.g., 164), a processing device (e.g., 300), and/or a control unit (e.g., 104) (variously referred to as a “processor”) of a vehicle (e.g., 100). In some embodiments, the method 500 may be performed by one or more layers within a vehicle management system stack, such as a vehicle management stack 200, a vehicle management stack 250, etc. In other embodiments, the method 500 may be performed by a processor independently from, but in conjunction with, a vehicle control system stack, such as a vehicle management stack 200, a vehicle management stack 250, etc. For example, the method 500 may be implemented as a stand-alone software module or within dedicated hardware that monitors data and commands from/within the vehicle management system stack (e.g., vehicle management stack 200, 250). In various embodiments, the operations of the method 500 may be performed in conjunction with the operations of method 400 (FIG. 4). - In
block 502, the processor may receive an intention message. In various embodiments, broadcast intention messages may be received by any vehicle within transmission range of the vehicles broadcasting such intention messages. In various embodiments, the intention message may be received via C-V2X transmissions from a broadcasting vehicle. - In
block 504, the processor may parse the intention message to identify the autonomous (or semi-autonomous) vehicle transmitting the intention message (the “transmitting vehicle”) and a motion plan for that transmitting vehicle. In various embodiments, a vehicle, such as an autonomous vehicle, a semi-autonomous vehicle, a non-autonomous vehicle, etc., may parse the received intention message to determine the broadcasting vehicle's identifier, the current position indicated in the intention message, the motion plan, and/or any other information indicated within the intention message. In various embodiments, the motion plan may include a trajectory and one or more vehicle descriptors associated with the vehicle and/or the vehicle owner and/or operator. For example, vehicle descriptors may include sensor perceptible attributes of the vehicle, such as vehicle color, vehicle license plate number, vehicle size, etc. As a further example, vehicle descriptors may include vehicle physical capabilities, such as vehicle type, vehicle turning radius, vehicle top speed, vehicle maximum acceleration, etc. As a still further example, vehicle descriptors may include vehicle location attributes, such as the vehicle's latitude and longitude, the vehicle's distance from, and/or orientation to, a known landmark in a coordinate plane, etc. - In
block 506, the processor may send the motion plan for the transmitting vehicle. In various embodiments, the broadcasting vehicle's identifier, the current position indicated in the intention message, the motion plan, and/or any other information indicated within the intention message may be provided to various hardware and software components of the receiving vehicle. For example, the broadcasting vehicle's identifier, the current position indicated in the intention message, the motion plan, and/or any other information indicated within the intention message may be stored in one or more memory locations on the receiving vehicle, may be sent to one or more layers of a vehicle management system stack, etc. - In
block 508, the processor may control the vehicle based at least in part on the motion plan. For example, using the broadcasting vehicle's identifier, the current position indicated in the intention message, the motion plan, and/or any other information indicated within the intention message, the vehicle management system stack may control the operations of the vehicle. In various embodiments, motion plans may be used by one or more various layers of the vehicle management system stack to augment various decision making and/or autonomous driving operations. As examples, a received motion plan may be used by the vehicle management system stack in: sensor fusion processing; behavior prediction; behavioral planning; motion planning; position localization; and/or sharing the burden of safety operations between autonomous vehicles. As specific examples, the motion plan may be used to control the operations of the vehicle in various embodiment methods described herein (e.g., method 400 (FIG. 4), method 500 (FIG. 5), method 600 (FIG. 6), method 700 (FIG. 7), method 800 (FIG. 8A), method 850 (FIG. 8B), method 900 (FIG. 9), and/or method 1000 (FIG. 10)). -
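The receive-parse-distribute flow of blocks 502-506 can be sketched as follows. This is an illustrative Python sketch under stated assumptions: a JSON payload and these field names are invented for illustration, and the "layer queues" are plain lists standing in for the vehicle management system stack layers that would actually consume the plan.

```python
import json

def parse_intention_message(raw):
    """Parse a received intention message (block 504) into the
    transmitting vehicle's identifier, current position, and motion
    plan. Returns None for a malformed message rather than raising."""
    try:
        msg = json.loads(raw)
        return {
            "vehicle_id": msg["vehicle_id"],
            "position": tuple(msg["position"]),
            "motion_plan": msg["motion_plan"],
        }
    except (ValueError, KeyError, TypeError):
        return None

def dispatch_motion_plan(parsed, plan_store, layer_queues):
    """Distribute the parsed plan (block 506): cache it by vehicle id
    and hand it to each interested stack layer."""
    plan_store[parsed["vehicle_id"]] = parsed["motion_plan"]
    for queue in layer_queues:
        queue.append(parsed)
```

The cached `plan_store` is what later operations (sensor fusion, behavior prediction, localization) would look plans up in by the tracked vehicle's identifier.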
FIG. 6 illustrates a method 600 of using a broadcast motion plan in sensor perception operations according to various embodiments. With reference to FIGS. 1A-6, the method 600 may be implemented in a processor (e.g., 164), a processing device (e.g., 300), and/or a control unit (e.g., 104) (variously referred to as a “processor”) of a vehicle (e.g., 100). In some embodiments, the method 600 may be performed by one or more layers within a vehicle management system stack, such as a vehicle management stack 200, a vehicle management stack 250, etc. For example, some or all of the operations of the method 600 may be performed as part of perception functions implemented within the camera perception layer 204 and/or radar perception layer 202. In other embodiments, the method 600 may be performed by a processor independently from, but in conjunction with, the vehicle management system stack, such as the vehicle management stack 200, the vehicle management stack 250, etc. For example, the method 600 may be implemented as a stand-alone software module or within dedicated hardware that monitors data and commands from/within the vehicle management system stack (e.g., vehicle management system stack 200, 250). In various embodiments, the operations of the method 600 may be performed in conjunction with the operations of method 400 (FIG. 4) and/or method 500 (FIG. 5). In various embodiments, the operations of the method 600 may be example operations to control a vehicle based at least in part on a motion plan. - In
block 602, the processor may receive a motion plan. A motion plan may be received in various manners, such as in a message received from another component, retrieving the motion plan from a memory location (e.g., a cache, etc.) used for storing motion plans, in response to a request for a motion plan, etc. In various embodiments, a motion plan may include a trajectory and one or more vehicle descriptors associated with the vehicle and/or the vehicle owner and/or operator. For example, vehicle descriptors may include sensor perceptible attributes of the vehicle, such as vehicle color, vehicle license plate number, vehicle size, etc. As a further example, vehicle descriptors may include vehicle physical capabilities, such as vehicle type, vehicle turning radius, vehicle top speed, vehicle maximum acceleration, etc. As a still further example, vehicle descriptors may include vehicle location attributes, such as the vehicle's latitude and longitude, the vehicle's distance from, and/or orientation to, a known landmark in a coordinate plane, etc. - In
block 604, the processor may determine an expected region of interest for a vehicle based at least in part on the received motion plan. Focusing on a specific region of interest in the space around the vehicle, because an object such as the motion plan broadcasting vehicle is expected in that region, may increase the speed of detection of that vehicle compared with analyzing the sensor data as a whole. In various embodiments, the motion plan may be used to determine the region of interest to apply to the raw sensor data to perceive a vehicle in that raw sensor data. - In
optional block 606, the processor may select a detection algorithm based at least in part on the received motion plan. In some embodiments, the motion plan may be used to select and/or modify the algorithm used to perceive vehicles in the raw sensor data. For example, a different algorithm may be used when the motion plan broadcasting vehicle is expected to be head on to the receiving vehicle than when the motion plan broadcasting vehicle is expected to be perpendicular to the receiving vehicle. As a further example, a different detection threshold may be used depending on whether a motion plan indicates the broadcasting vehicle is expected to be in a given space. As a specific example, without a motion plan a receiving vehicle may only report detections that pass a 90% confidence threshold, while with a motion plan the receiving vehicle may report detections that pass a 50% confidence threshold in a region of interest associated with the motion plan. Block 606 may be optional, as the detection algorithm may not change in some embodiments. - In
block 608, the processor may receive sensor data. The sensor data may be raw sensor data received from one or more sensors, such as cameras, LIDARs, radars, etc. - In
block 610, the processor may apply the detection algorithm to the sensor data at the expected region of interest to detect the vehicle in the sensor data. Sensor perception may include operations to control where a sensor looks in the region of interest to confirm a detection of an object, such as another vehicle. A motion plan that includes vehicle descriptors indicating sensor perceptible attributes may enable a vehicle management system to focus one or more sensors on a specific sensor perceptible attribute, such as cueing an image sensor to look for a particular vehicle color, vehicle size, etc. This may increase the speed of detection by sensors and the vehicle management system of that other vehicle compared to analyzing the entirety of sensor data as a whole. In some embodiments, the motion plan may be used by the vehicle management system to confirm whether or not a specific vehicle is perceived in the raw sensor data. For example, the detection in raw sensor data of a sensor perceptible attribute of a vehicle that was included in a motion plan as a vehicle descriptor (e.g., the vehicle's color, vehicle's license plate number, etc.) may confirm that the vehicle transmitting the motion plan is the vehicle perceived by vehicle sensors in a region of interest. - In
block 612, the processor may send the vehicle detection sensor data. For example, the vehicle detection sensor data may be sent to the sensor fusion and RWM management layer 212. -
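Blocks 604-610, determining the region of interest, lowering the detection threshold where the motion plan says a vehicle should be (90% baseline versus 50% when primed, as in the example above), and matching perceptible attributes, can be pictured in one sketch. The box margin, the detection dict layout, and the attribute names are assumptions for illustration, not the claimed algorithm.

```python
def region_of_interest(expected_xy, margin_m=5.0):
    """Axis-aligned box around where the motion plan says the vehicle
    should be (block 604); the margin is an assumed tolerance."""
    x, y = expected_xy
    return (x - margin_m, y - margin_m, x + margin_m, y + margin_m)

def detection_threshold(in_plan_roi, base=0.90, primed=0.50):
    """Lower the confidence required inside a region of interest
    associated with a received motion plan (optional block 606)."""
    return primed if in_plan_roi else base

def detect_vehicle(detections, roi, descriptors):
    """Apply detection at the region of interest (block 610): keep
    detections that fall inside the box, clear the (possibly lowered)
    confidence threshold, and match the sensor perceptible attributes
    carried in the motion plan's vehicle descriptors."""
    x0, y0, x1, y1 = roi
    matched = []
    for d in detections:
        in_box = x0 <= d["x"] <= x1 and y0 <= d["y"] <= y1
        confident = d["confidence"] >= detection_threshold(in_box)
        attrs_ok = all(d.get(k) == v for k, v in descriptors.items())
        if in_box and confident and attrs_ok:
            matched.append(d)
    return matched
```

A detection of, say, the expected color inside the expected box at only 60% confidence would be reported here, whereas the same detection far from any plan-derived region would be suppressed by the 90% baseline.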
FIG. 7 illustrates a method 700 of using a broadcast motion plan in sensor fusion operations according to various embodiments. With reference to FIGS. 1A-7, the method 700 may be implemented in a processor (e.g., 164), a processing device (e.g., 300), and/or a control unit (e.g., 104) (variously referred to as a “processor”) of a vehicle (e.g., 100). In some embodiments, the method 700 may be performed by one or more layers within a vehicle management system stack, such as a vehicle management system stack 200, a vehicle management system stack 250, etc. For example, some or all of the operations of the method 700 may be performed as part of sensor fusion functions implemented within the sensor fusion and RWM management layer 212. In other embodiments, the method 700 may be performed by a processor independently from, but in conjunction with, the vehicle management system stack, such as the vehicle management system stack 200, the vehicle management system stack 250, etc. For example, the method 700 may be implemented as a stand-alone software module or within dedicated hardware that monitors data and commands from/within the vehicle management system stack (e.g., vehicle management system stack 200, 250). In various embodiments, the operations of the method 700 may be performed in conjunction with the operations of method 400 (FIG. 4), method 500 (FIG. 5), and/or method 600 (FIG. 6). In various embodiments, the operations of method 700 may be example operations to control a vehicle based at least in part on a motion plan. - In
block 702, the processor may receive a motion plan. A motion plan may be received in various manners, such as in a message received from another component, retrieving the motion plan from a memory location (e.g., a cache, etc.) used for storing motion plans, in response to a request for a motion plan, etc. In various embodiments, a motion plan may include a trajectory and one or more vehicle descriptors associated with the vehicle and/or the vehicle owner and/or operator. For example, vehicle descriptors may include sensor perceptible attributes of the vehicle, such as vehicle color, vehicle license plate number, vehicle size, etc. As a further example, vehicle descriptors may include vehicle physical capabilities, such as vehicle type, vehicle turning radius, vehicle top speed, vehicle maximum acceleration, etc. As a still further example, vehicle descriptors may include vehicle location attributes, such as the vehicle's latitude & longitude, the vehicle's distance from and/or orientation to a known landmark in a coordinate plane, etc. - In
block 704, the processor may receive vehicle detection sensor data. For example, the processor may receive vehicle detection sensor data from the camera perception layer 204 and/or radar perception layer 202. - In
block 706, the processor may correlate vehicle detection sensor data with an associated vehicle based at least in part on the received motion plan for that vehicle. In various embodiments, correlating vehicle detection sensor data with an associated vehicle based at least in part on the received motion plan for that vehicle may include operations to combine sensor data and associate that sensor data with a tracked object. A motion plan may improve the performance of data association and tracking operations of a sensor fusion layer of a vehicle management system. The vehicle management system may use the motion plan to determine how to fuse all the raw detections of other vehicles in an environment together. For example, based on the motion plan broadcast for a vehicle, the receiving vehicle may be enabled to determine that a radar detection, a LIDAR detection, and a camera detection are actually all the same vehicle, because the detections all correlate to the motion plan for that vehicle, rather than considering the three detections to be separate vehicles. As a specific example, different sensors of the receiving vehicle positively identifying one or more sensor perceptible attributes of a vehicle included in a received motion plan may confirm that the sensors are detecting the vehicle that transmitted the motion plan. Additionally, the motion plan may enable the receiving vehicle to determine that those vehicle detections will evolve together. The ability to compare vehicle detections to motion plans may enable outlier measurements to be discarded. For example, a tracked vehicle's motion plan indicating it intends to stay in a current lane may enable a detection not corresponding to that lane to be associated with a new object rather than the previously tracked vehicle. The presence of a motion plan may reduce uncertainty in the sensor fusion operations. 
The noisy detections from the perception layer may be compared with the underlying trajectories in a motion plan to give an improved certainty to sensor fusion operations. - In
block 708, the processor may send combined vehicle detection sensor data. For example, the processor may send the combined vehicle detection sensor data to a behavioral planning and prediction layer 216, vehicle safety and crash avoidance system 252, and/or motion planning and control layer 214. -
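The data-association step of block 706, deciding that a radar, a LIDAR, and a camera detection are the same tracked vehicle because they all lie near the position the motion plan predicts, while an off-plan detection is split out as a new object, can be sketched with a simple distance gate. The gate distance and detection dict layout are assumed tuning choices, not part of the disclosure.

```python
import math

def associate_with_plan(detections, predicted_xy, gate_m=3.0):
    """Fuse per-sensor detections (block 706): detections within the
    gate of the plan-predicted position are grouped as the same tracked
    vehicle; the rest are outliers, candidates for new objects."""
    px, py = predicted_xy
    same_vehicle, outliers = [], []
    for d in detections:
        if math.hypot(d["x"] - px, d["y"] - py) <= gate_m:
            same_vehicle.append(d)
        else:
            outliers.append(d)
    return same_vehicle, outliers
```

In the lane-keeping example above, a detection in a different lane would fall outside the gate around the plan's in-lane prediction and be treated as a new object rather than the previously tracked vehicle.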
FIG. 8A illustrates a method 800 of using a broadcast motion plan in behavior prediction operations according to various embodiments. With reference to FIGS. 1A-8A, the method 800 may be implemented in a processor (e.g., 164), a processing device (e.g., 300), and/or a control unit (e.g., 104) (variously referred to as a “processor”) of a vehicle (e.g., 100). In some embodiments, the method 800 may be performed by one or more layers within a vehicle management system stack, such as a vehicle management system stack 200, a vehicle management system stack 250, etc. For example, some or all of the operations of the method 800 may be performed as part of behavioral prediction functions implemented within the behavioral planning and prediction layer 216. In other embodiments, the method 800 may be performed by a processor independently from, but in conjunction with, the vehicle management system stack, such as the vehicle management system stack 200, the vehicle management system stack 250, etc. For example, the method 800 may be implemented as a stand-alone software module or within dedicated hardware that monitors data and commands from/within the vehicle management system stack (e.g., vehicle management system stack 200, 250). In various embodiments, the operations of the method 800 may be performed in conjunction with the operations of method 400 (FIG. 4), method 500 (FIG. 5), method 600 (FIG. 6), and/or method 700 (FIG. 7). In various embodiments, the operations of method 800 may be example operations to control a vehicle based at least in part on a motion plan. - In
block 802, the processor may identify a tracked vehicle. For example, the processor may select a next tracked vehicle in a list of tracked vehicles currently perceived in the environment based on data from the sensor fusion and RWM management layer 212. - In
determination block 804, the processor may determine whether the motion plan has been received for the tracked vehicle. For example, the processor may compare the tracked vehicle identifier to the identifiers of motion plans stored in a memory to determine whether a motion plan has been received for the tracked vehicle. In some embodiments, when a motion plan is received, the motion plan may include vehicle descriptors that reflect vehicle physical capabilities of the vehicle transmitting the motion plan. - In response to determining that the motion plan has not been received (i.e., determination block 804=“No”), the processor may determine a behavior prediction in
block 808. In this manner, even when no intention messages are received, the processor may revert to making the vehicle fully responsible for full inference of the state of the world by determining a behavior prediction. In a similar manner as described with reference to blocks 804 and 808 of the method 800 (FIG. 8A), the other embodiment methods described herein (e.g., method 400 (FIG. 4), method 500 (FIG. 5), method 600 (FIG. 6), method 700 (FIG. 7), method 900 (FIG. 9), and/or method 1000 (FIG. 10)) may optionally include operations to conditionally take actions (and/or make determinations) based on one or more available motion plans when such broadcast motion plans are available and to default to other actions (and/or other determinations) when no such broadcast motion plans are received. In various embodiments, for example, when no intention messages (or motion plans) are received, the processor may revert to making the vehicle fully responsible for full inference of the state of the world to enable taking actions (and/or making determinations). - In response to determining that the motion plan has been received (i.e., determination block 804=“Yes”), the processor may set a behavior prediction based on the received motion plan in
block 806. The presence of a motion plan broadcast by a vehicle may enable a vehicle receiving that motion plan to predict the behavior of that vehicle with a known certainty. The benefit of a motion plan and a position may be that the motion plan may be treated as the predicted behavior of that vehicle. This certainty in behavior prediction may reduce the dimensionality of the POMDP by knowing behavioral/trajectory prediction exactly for surrounding cars that broadcast their respective motion plans. Sharing the motion plan may result in behavioral predictions with no uncertainty. Additionally, knowing the intended motion of a vehicle may eliminate the need to estimate that vehicle's motion, thereby reducing the computational resources associated with behavior prediction. Additionally, a motion plan including vehicle descriptors that reflect vehicle physical capabilities of the vehicle transmitting the motion plan may reduce the space of possibilities for behavioral planning searches within a vehicle behavior model by limiting the possible behaviors of the transmitting vehicle to those within that vehicle's capabilities. For example, a type of vehicle indicated in a motion plan may be used to constrain a maximum speed or acceleration of the vehicle based on the maximum speed or acceleration associated with that vehicle type. As another example, a turning radius of the vehicle indicated in a motion plan may be used to constrain the potential turning paths of the vehicle transmitting the motion plan to within the indicated turning radius. - In
optional block 810, the processor may indicate the behavior prediction as having a high certainty. For example, the certainty may be high as the broadcasting vehicle affirmatively indicated its intention in its motion plan and the behavior prediction was set to that indicated intention. - In various embodiments, behavior predictions based on the motion plan of a broadcasting vehicle may be used for behavioral planning and/or motion planning with a high degree of certainty. In various embodiments, a behavioral planning layer may provide a high-level driving goal to a motion planning layer. The motion planning layer may be responsible for actually planning the trajectory to execute that high-level maneuver and the motion planning layer may be responsible for ensuring the safety of executing that trajectory. The presence of a motion plan broadcast by another vehicle in the environment may enable the motion planning layer to perform fewer collision checks as a result of prediction of the motion of that broadcasting vehicle being known with certainty. Collision checking is often a bottleneck of fast motion planning, so fewer checks may greatly speed up the overall autonomous driving algorithm.
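The branch of blocks 804-810, using a received motion plan directly as a high-certainty behavior prediction and falling back to full inference otherwise, together with the capability-based constraint on hypotheses, can be sketched as follows. The dict shapes, the `infer_fn` fallback, and the `top_speed_mps` descriptor key are assumptions for illustration.

```python
def predict_behavior(tracked_id, motion_plans, infer_fn):
    """Blocks 804-810 as a sketch: if a motion plan was received for the
    tracked vehicle, treat the plan itself as the behavior prediction
    and mark it high certainty; otherwise fall back to full inference
    of the state of the world."""
    plan = motion_plans.get(tracked_id)
    if plan is not None:
        return {"trajectory": plan["trajectory"], "certainty": "high"}
    return {"trajectory": infer_fn(tracked_id), "certainty": "estimated"}

def clamp_to_capabilities(candidate_speed_mps, descriptors):
    """Constrain a hypothesized speed using the vehicle physical
    capabilities carried in the motion plan's vehicle descriptors,
    shrinking the space of behaviors the planner must search."""
    return min(candidate_speed_mps, descriptors.get("top_speed_mps", float("inf")))
```

The high-certainty branch is what lets downstream motion planning skip collision checks against hypothetical trajectories the broadcasting vehicle has already ruled out.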
-
FIG. 8B illustrates a method 850 of using a broadcast motion plan in behavior prediction operations according to various embodiments. With reference to FIGS. 1A-8B, the method 850 may be implemented in a processor (e.g., 164), a processing device (e.g., 300), and/or a control unit (e.g., 104) (variously referred to as a “processor”) of a vehicle (e.g., 100). In some embodiments, the method 850 may be performed by one or more layers within a vehicle management system stack, such as a vehicle management system stack 200, a vehicle management system stack 250, etc. For example, some or all of the operations of the method 850 may be performed as part of behavioral prediction functions implemented within the behavioral planning and prediction layer 216. In other embodiments, the method 850 may be performed by a processor independently from, but in conjunction with, the vehicle management system stack, such as the vehicle management system stack 200, the vehicle management system stack 250, etc. For example, the method 850 may be implemented as a stand-alone software module or within dedicated hardware that monitors data and commands from/within the vehicle management system stack (e.g., vehicle management system stack 200, 250). In various embodiments, the operations of the method 850 may be performed in conjunction with the operations of method 400 (FIG. 4), method 500 (FIG. 5), method 600 (FIG. 6), and/or method 700 (FIG. 7). In various embodiments, the operations of the method 850 may be example operations to control a vehicle based at least in part on a received motion plan. - In
FIG. 8A ) described above. - In
determination block 852, the processor may determine whether the behavior of the tracked other vehicle conformed with the behavior prediction by the behavioral planning and prediction layer 216. For example, the processor may compare sensor data associated with the tracked vehicle to the behavior prediction by the behavioral planning and prediction layer to determine whether the tracked vehicle is at an expected location per the behavior prediction. The sensor data indicating the tracked vehicle is located where expected may indicate that the behavior of the tracked vehicle conformed to the behavior prediction. The sensor data indicating the tracked vehicle is not located where expected may indicate that the behavior of the tracked vehicle did not conform to the behavior prediction. - In response to determining that the behavior conformed to the behavior prediction (i.e., determination block 852=“Yes”), the processor may continue to monitor the behavior of the tracked vehicle and continuously or periodically determine whether the behavior conformed to the behavior prediction in
determination block 852. In this manner, the processor may repeatedly check for non-conforming behavior of the tracked vehicle. - In response to determining that the behavior did not conform to the behavior prediction (i.e., determination block 852=“No”), the processor may update the behavior prediction or the vehicle behavior model based on information in the received motion plan in
block 854. In various embodiments, receiving a motion plan broadcast by a vehicle including vehicle descriptors that reflect vehicle physical capabilities of the vehicle transmitting the motion plan may reduce the space of possibilities for behavioral planning searches by or within a vehicle behavior model after a vehicle deviates from a behavior prediction. In response to determining that a vehicle's behavior does not conform to a behavior prediction, the behavior prediction or the vehicle behavior model for that vehicle may be updated based at least in part on information in the received motion plan. For example, vehicle physical capabilities identified in the received motion plan may be used by or within a vehicle behavior model to collapse the possible future behaviors of the vehicle from a POMDP to an MDP analysis in which there are a host of solutions suitable for online and real-time operations. For example, a type of vehicle indicated in a motion plan may be used to constrain a maximum speed or acceleration of the vehicle used in a vehicle behavior model that is then used to update the behavior prediction based on the maximum speed or acceleration associated with that vehicle type. As another example, a turning radius of the vehicle indicated in a motion plan may be used in a vehicle behavior model to constrain the potential turning paths of the vehicle transmitting the motion plan to within the indicated turning radius used in updating the behavior prediction. -
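The conformance check of determination block 852 and the capability-constrained update of block 854 can be pictured with this sketch. The position tolerance, the descriptor key, and the reachable-region shape (a disc bounded by top speed; turning radius could bound it further) are assumptions for illustration.

```python
import math

def conforms(observed_xy, predicted_xy, tolerance_m=2.0):
    """Determination block 852 as a sketch: is the tracked vehicle
    located where the behavior prediction expected it?"""
    dx = observed_xy[0] - predicted_xy[0]
    dy = observed_xy[1] - predicted_xy[1]
    return math.hypot(dx, dy) <= tolerance_m

def reachable_region(observed_xy, dt_s, descriptors):
    """Block 854 as a sketch: after a deviation, bound where the vehicle
    can plausibly be next using the physical capabilities reported in
    its motion plan, collapsing the hypothesis space the behavior model
    must search."""
    top_speed = descriptors.get("top_speed_mps", float("inf"))
    return {"center": observed_xy, "radius_m": top_speed * dt_s}
```

Bounding the hypothesis space this way is one concrete sense in which the capability descriptors can move the prediction problem toward the tractable MDP-style analysis described above.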
FIG. 9 illustrates a method 900 of using a broadcast motion plan in position localization operations according to various embodiments. With reference to FIGS. 1A-9, the method 900 may be implemented in a processor (e.g., 164), a processing device (e.g., 300), and/or a control unit (e.g., 104) (variously referred to as a “processor”) of a vehicle (e.g., 100). In some embodiments, the method 900 may be performed by one or more layers within a vehicle management system stack, such as a vehicle management system stack 200, a vehicle management system stack 250, etc. For example, some or all of the operations of the method 900 may be performed as part of localization functions implemented within the map fusion and arbitration layer 208. In other embodiments, the method 900 may be performed by a processor independently from, but in conjunction with, the vehicle management system stack, such as the vehicle management system stack 200, the vehicle management system stack 250, etc. For example, the method 900 may be implemented as a stand-alone software module or within dedicated hardware that monitors data and commands from/within the vehicle management system stack (e.g., vehicle management system stack 200, 250). In various embodiments, the operations of the method 900 may be performed in conjunction with the operations of method 400 (FIG. 4), method 500 (FIG. 5), method 600 (FIG. 6), method 700 (FIG. 7), method 800 (FIG. 8A), and/or method 850 (FIG. 8B). In various embodiments, the operations of method 900 may be example operations to control a vehicle based at least in part on a motion plan. - In
block 902, the processor may receive one or more motion plans. Motion plans may be received in various manners, such as in messages received from another component, retrieving the motion plans from a memory location (e.g., a cache, etc.) used for storing motion plans, in response to a request for motion plans, etc. In various embodiments, a motion plan may include a trajectory and one or more vehicle descriptors associated with the vehicle and/or the vehicle owner and/or operator as described. - In
block 904, the processor may determine positions of other vehicles based on their respective motion plans. In various embodiments, intention messages and/or motion plans broadcast by vehicles may indicate those vehicles' distances from, and/or orientations to, a known landmark in a coordinate plane, such as a local and/or global coordinate plane. For example, a motion plan may indicate the transmitting vehicle's distance from and/or orientation to a known landmark in a coordinate plane, such as a local and/or global coordinate plane, in a vehicle location attribute type vehicle descriptor. The processor may determine the positions and/or relative distances from a known landmark for the other vehicles broadcasting motion plans to determine positions of other vehicles based on their respective motion plans. - In
determination block 906, the processor may determine whether a comparison between the vehicle's own position and other vehicle positions indicates an error. A vehicle, such as an autonomous vehicle, a semi-autonomous vehicle, a non-autonomous vehicle, etc., receiving the indications from the broadcasting vehicles may compare its observations of those broadcasting vehicles and their position relative to the known landmark to the distance from the known landmark in the intention message and/or motion plan to assist in localization of the receiving vehicle. For example, the receiving vehicle may compare its own observations to those in the motion plans to determine whether there is an error or offset between its observations and those in the motion plans. An error or offset between its observations and those in the motion plans may indicate to the receiving vehicle that it has localized its respective current position incorrectly. As a specific example, two different autonomous vehicles may both broadcast their respective motion plans and those motion plans may be received by the receiving vehicle. Observations of a landmark in those two motion plans may match, but may be different from the observation of the receiving vehicle of that same landmark. The agreement of two observations in the different motion plans and their difference from the receiving vehicle's observations may indicate the receiving vehicle localized its position incorrectly. - In response to no error being indicated (i.e., determination block 906=“No”), the processor may continue to receive motion plans in
block 902. - In response to an error being indicated by the comparison between the vehicle's own position and other vehicle positions (i.e., determination block 906=“Yes”), the processor may trigger a recalculation of the vehicle's own position in
block 908. -
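As an illustrative sketch only (not a claimed embodiment), the comparison in determination block 906 and the trigger in block 908 might be implemented along the following lines, assuming simplified 2D landmark offsets; all function names, parameters, and thresholds here are hypothetical:

```python
import math

def check_localization(own_obs, received_plans, agree_tol=0.5, error_tol=1.0):
    """Cross-check this vehicle's landmark observation against broadcast plans.

    own_obs: (x, y) offset of a known landmark as observed by this vehicle.
    received_plans: (x, y) offsets of the same landmark reported in other
    vehicles' motion plans. Returns True when relocalization should be
    triggered: at least two received plans agree with each other but both
    disagree with our own observation (blocks 906-908).
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    for i in range(len(received_plans)):
        for j in range(i + 1, len(received_plans)):
            a, b = received_plans[i], received_plans[j]
            # Two independent plans agree on the landmark position...
            if dist(a, b) <= agree_tol:
                midpoint = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
                # ...but both differ from our own observation, suggesting
                # that the error is on our side.
                if dist(own_obs, midpoint) > error_tol:
                    return True  # trigger recalculation of own position
    return False
```

When the observations in two received plans disagree with each other, no conclusion is drawn, mirroring the description above: only agreement among other vehicles that conflicts with the receiving vehicle's own observation indicates a localization error.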
FIG. 10 illustrates a method 1000 of using a broadcast motion plan to share the burden of safety operations according to various embodiments. With reference to FIGS. 1A-10, the method 1000 may be implemented in a processor (e.g., 164), a processing device (e.g., 300), and/or a control unit (e.g., 104) (variously referred to as a “processor”) of a vehicle (e.g., 100). In some embodiments, the method 1000 may be performed by one or more layers within a vehicle management system stack, such as a vehicle management system stack 200, a vehicle management system stack 250, etc. For example, some or all of the operations of the method 1000 may be performed as part of safety functions implemented within the motion planning and control layer 214. In other embodiments, the method 1000 may be performed by a processor independently from, but in conjunction with, the vehicle management system stack, such as the vehicle management system stack 200, the vehicle management system stack 250, etc. For example, the method 1000 may be implemented as a stand-alone software module or within dedicated hardware that monitors data and commands from/within the vehicle management system stack (e.g., the vehicle management system stack 200, the vehicle management system stack 250, etc.). In some embodiments, some or all of the operations of the method 1000 may be performed as part of safety functions implemented within the vehicle safety and crash avoidance system 252. In various embodiments, the operations of the method 1000 may be performed in conjunction with the operations of method 400 (FIG. 4), method 500 (FIG. 5), method 600 (FIG. 6), method 700 (FIG. 7), method 800 (FIG. 8), and/or method 900 (FIG. 9). In various embodiments, the operations of the method 1000 may be example operations to control a vehicle based at least in part on a received motion plan. - In
block 1002, the processor may receive a motion plan. A motion plan may be received in various ways, such as in a message received from another component, by retrieving the motion plan from a memory location (e.g., a cache, etc.) used for storing motion plans, in response to a request for a motion plan, etc. In various embodiments, a motion plan may include a trajectory and one or more vehicle descriptors associated with the vehicle and/or the vehicle owner and/or operator as described. - In
determination block 1006, the processor may determine whether the motion plan is unsafe. For example, the receiving vehicle may detect an object that will cause a motion plan of a broadcasting vehicle to be unsafe even though the broadcasting vehicle has not yet sensed or otherwise observed that object. In response, the receiving vehicle may determine that the motion plan is unsafe. - In response to determining that the motion plan is safe (i.e.,
determination block 1006=“No”), the processor may take no action in block 1010. - In response to determining that the motion plan is unsafe (i.e.,
determination block 1006=“Yes”), the processor may send a safety warning to the other vehicle in block 1008. The warning may include the observation of the object causing the motion plan to be unsafe. - Various embodiments illustrated and described are provided merely as examples to illustrate various features of the claims. However, features shown and described with respect to any given embodiment are not necessarily limited to the associated embodiment and may be used or combined with other embodiments that are shown and described. Further, the claims are not intended to be limited by any one example embodiment.
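As an illustrative sketch only (not a claimed embodiment), the unsafe-plan check in determination block 1006 and the warning in block 1008 might be implemented along the following lines, assuming simplified 2D point representations of trajectories and detected objects; all names, message fields, and the safety radius here are hypothetical:

```python
import math

def motion_plan_unsafe(trajectory, detected_objects, safety_radius=2.0):
    """Check a received motion plan against locally detected objects
    (determination block 1006). trajectory: (x, y) waypoints from the
    broadcasting vehicle's plan; detected_objects: (x, y) positions observed
    by the receiving vehicle. Returns the first conflicting observation, or
    None if the plan appears safe."""
    for waypoint in trajectory:
        for obj in detected_objects:
            if math.hypot(waypoint[0] - obj[0], waypoint[1] - obj[1]) < safety_radius:
                return obj  # observation to include in the warning (block 1008)
    return None

def handle_received_plan(trajectory, detected_objects, send_warning):
    """Blocks 1002-1010: warn the broadcaster if its plan looks unsafe."""
    conflict = motion_plan_unsafe(trajectory, detected_objects)
    if conflict is not None:
        # Block 1008: the warning carries the conflicting observation so the
        # broadcasting vehicle can replan around the unobserved object.
        send_warning({"type": "safety_warning", "observed_object": conflict})
    # Otherwise take no action (block 1010).
```

Including the conflicting observation in the warning, as the description above notes, lets the broadcasting vehicle react to an object it has not yet sensed itself.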
- The foregoing method descriptions and the process flow diagrams are provided merely as illustrative examples and are not intended to require or imply that the blocks of the various embodiments must be performed in the order presented. As will be appreciated by one of skill in the art, the blocks in the foregoing embodiments may be performed in any order. Words such as “thereafter,” “then,” “next,” etc. are not intended to limit the order of the blocks; these words are simply used to guide the reader through the description of the methods. Further, any reference to claim elements in the singular, for example, using the articles “a,” “an” or “the” is not to be construed as limiting the element to the singular.
- The various illustrative logical blocks, modules, circuits, and algorithm blocks described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and blocks have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such embodiment decisions should not be interpreted as causing a departure from the scope of various embodiments.
- The hardware used to implement the various illustrative logics, logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of communication devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some blocks or methods may be performed by circuitry that is specific to a given function.
- In various embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable medium or non-transitory processor-readable medium. The operations of a method or algorithm disclosed herein may be embodied in a processor-executable software module, which may reside on a non-transitory computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor. By way of example but not limitation, such non-transitory computer-readable or processor-readable media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.
- The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present embodiments. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the scope of the embodiments. Thus, various embodiments are not intended to be limited to the embodiments shown herein but are to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein.
Claims (26)
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/439,956 US20200202706A1 (en) | 2018-12-20 | 2019-06-13 | Message Broadcasting for Vehicles |
CN201980082284.4A CN113228129B (en) | 2018-12-20 | 2019-10-29 | Message broadcast for vehicles |
PCT/US2019/058460 WO2020131223A2 (en) | 2018-12-20 | 2019-10-29 | Message broadcasting for vehicles |
EP19805830.7A EP3899905A2 (en) | 2018-12-20 | 2019-10-29 | Message broadcasting for vehicles |
US18/340,294 US20230334983A1 (en) | 2018-12-20 | 2023-06-23 | Message broadcasting for vehicles |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201862782573P | 2018-12-20 | 2018-12-20 | |
US16/439,956 US20200202706A1 (en) | 2018-12-20 | 2019-06-13 | Message Broadcasting for Vehicles |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/340,294 Continuation US20230334983A1 (en) | 2018-12-20 | 2023-06-23 | Message broadcasting for vehicles |
Publications (1)
Publication Number | Publication Date |
---|---|
US20200202706A1 true US20200202706A1 (en) | 2020-06-25 |
Family
ID=71097788
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/439,956 Abandoned US20200202706A1 (en) | 2018-12-20 | 2019-06-13 | Message Broadcasting for Vehicles |
US18/340,294 Pending US20230334983A1 (en) | 2018-12-20 | 2023-06-23 | Message broadcasting for vehicles |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/340,294 Pending US20230334983A1 (en) | 2018-12-20 | 2023-06-23 | Message broadcasting for vehicles |
Country Status (4)
Country | Link |
---|---|
US (2) | US20200202706A1 (en) |
EP (1) | EP3899905A2 (en) |
CN (1) | CN113228129B (en) |
WO (1) | WO2020131223A2 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112750298B (en) * | 2020-12-17 | 2022-10-28 | 华路易云科技有限公司 | Truck formation dynamic resource allocation method based on SMDP and DRL |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6861957B2 (en) * | 1999-01-12 | 2005-03-01 | Toyota Jidosha Kabushiki Kaisha | Positional data utilizing inter-vehicle communication method and traveling control apparatus |
US20170031361A1 (en) * | 2015-07-31 | 2017-02-02 | Ford Global Technologies, Llc | Vehicle trajectory determination |
US9864064B2 (en) * | 2012-06-27 | 2018-01-09 | Mitsubishi Electric Corporation | Positioning device |
US20180056998A1 (en) * | 2016-08-29 | 2018-03-01 | Mitsubishi Electric Research Laboratories, Inc. | System and Method for Multi-Vehicle Path Planning Technical Field |
US20190005812A1 (en) * | 2017-06-28 | 2019-01-03 | Zendrive, Inc. | Method and system for determining traffic-related characteristics |
US20190035275A1 (en) * | 2017-07-28 | 2019-01-31 | Toyota Motor Engineering & Manufacturing North America, Inc. | Autonomous operation capability configuration for a vehicle |
US20190266498A1 (en) * | 2018-02-28 | 2019-08-29 | Cisco Technology, Inc. | Behavioral models for vehicles |
US20200097841A1 (en) * | 2018-09-21 | 2020-03-26 | Renovo Motors, Inc. | Systems and methods for processing vehicle data |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3714258B2 (en) * | 2002-02-01 | 2005-11-09 | 日産自動車株式会社 | Recommended operation amount generator for vehicles |
US20070162550A1 (en) * | 2006-01-06 | 2007-07-12 | Outland Research, Llc | Vehicle-to-vehicle instant messaging with locative addressing |
DE102009035072A1 (en) * | 2009-07-28 | 2011-02-10 | Bayerische Motoren Werke Aktiengesellschaft | Motor vehicle collision warning system has a transceiver working with a transponder at the object in the path, e.g. a pedestrian, to determine its position and predict its future position |
US10609335B2 (en) * | 2012-03-23 | 2020-03-31 | Magna Electronics Inc. | Vehicle vision system with accelerated object confirmation |
US20130278441A1 (en) * | 2012-04-24 | 2013-10-24 | Zetta Research and Development, LLC - ForC Series | Vehicle proxying |
US9300423B2 (en) * | 2012-04-24 | 2016-03-29 | Zetta Research and Development LLC—ForC Series | Device for synchronizing a time base for V2V communictation |
US20150161894A1 (en) * | 2013-12-05 | 2015-06-11 | Elwha Llc | Systems and methods for reporting characteristics of automatic-driving software |
KR20170023085A (en) * | 2014-06-18 | 2017-03-02 | 센시티 시스템즈 아이엔씨. | Application framework for interactive light sensor networks |
JP6149846B2 (en) * | 2014-11-14 | 2017-06-21 | トヨタ自動車株式会社 | Warning device |
US9711050B2 (en) * | 2015-06-05 | 2017-07-18 | Bao Tran | Smart vehicle |
US9922565B2 (en) * | 2015-07-20 | 2018-03-20 | Dura Operating Llc | Sensor fusion of camera and V2V data for vehicles |
DE102015215929A1 (en) * | 2015-08-20 | 2017-02-23 | Volkswagen Aktiengesellschaft | Apparatus, methods and computer program for providing information about a probable driving intention |
DE102015220481A1 (en) * | 2015-10-21 | 2017-05-11 | Volkswagen Aktiengesellschaft | Method and device in a traffic unit for the cooperative tuning of driving maneuvers of at least two motor vehicles |
DE102016205140A1 (en) * | 2015-11-04 | 2017-05-04 | Volkswagen Aktiengesellschaft | Method and control systems for determining a traffic gap between two vehicles for a lane change for a vehicle |
US11067996B2 (en) * | 2016-09-08 | 2021-07-20 | Siemens Industry Software Inc. | Event-driven region of interest management |
US20180208195A1 (en) * | 2017-01-20 | 2018-07-26 | Pcms Holdings, Inc. | Collaborative risk controller for vehicles using v2v |
EP3364393A1 (en) * | 2017-02-20 | 2018-08-22 | Continental Automotive GmbH | A method and apparatus for safe operation of a vehicle |
CN107284452B (en) * | 2017-07-18 | 2018-04-10 | 吉林大学 | Merge the hybrid vehicle future operating mode forecasting system of intelligent communication information |
US10098014B1 (en) * | 2018-01-31 | 2018-10-09 | Toyota Jidosha Kabushiki Kaisha | Beam alignment using shared driving intention for vehicular mmWave communication |
- 2019-06-13 US US16/439,956 patent/US20200202706A1/en not_active Abandoned
- 2019-10-29 WO PCT/US2019/058460 patent/WO2020131223A2/en unknown
- 2019-10-29 EP EP19805830.7A patent/EP3899905A2/en active Pending
- 2019-10-29 CN CN201980082284.4A patent/CN113228129B/en active Active
- 2023-06-23 US US18/340,294 patent/US20230334983A1/en active Pending
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11214268B2 (en) * | 2018-12-28 | 2022-01-04 | Intel Corporation | Methods and apparatus for unsupervised multimodal anomaly detection for autonomous vehicles |
US20210345075A1 (en) * | 2018-12-29 | 2021-11-04 | Intel Corporation | Content centric dynamic ad hoc networking |
US11157784B2 (en) * | 2019-05-08 | 2021-10-26 | GM Global Technology Operations LLC | Explainable learning system and methods for autonomous driving |
US20210005085A1 (en) * | 2019-07-03 | 2021-01-07 | Cavh Llc | Localized artificial intelligence for intelligent road infrastructure |
US20210245758A1 (en) * | 2020-02-11 | 2021-08-12 | Ford Global Technologies, Llc | V2x traffic maneuver handshaking between traffic participants |
US11847919B2 (en) * | 2020-05-19 | 2023-12-19 | Toyota Motor North America, Inc. | Control of transport en route |
US20210366289A1 (en) * | 2020-05-19 | 2021-11-25 | Toyota Motor North America, Inc. | Control of transport en route |
US20220073103A1 (en) * | 2020-09-08 | 2022-03-10 | Electronics And Telecommunications Research Institute | Metacognition-based autonomous driving correction device and method |
WO2022108744A1 (en) * | 2020-11-23 | 2022-05-27 | Argo AI, LLC | On-board feedback system for autonomous vehicles |
US20220164245A1 (en) * | 2020-11-23 | 2022-05-26 | Argo AI, LLC | On-board feedback system for autonomous vehicles |
US20220221574A1 (en) * | 2021-01-11 | 2022-07-14 | Movon Corporation | Camera and radar sensor system and error compensation method thereof |
US20220388501A1 (en) * | 2021-08-20 | 2022-12-08 | Beijing Baidu Netcom Science Technology Co., Ltd. | Method for automated parking, device, and storage medium |
US11951976B2 (en) * | 2021-08-20 | 2024-04-09 | Beijing Baidu Netcom Science Technology Co., Ltd. | Method for automated parking, device, and storage medium |
US20240005785A1 (en) * | 2022-06-30 | 2024-01-04 | Kodiak Robotics, Inc. | System and method for identifying a vehicle subject to an emergency alert |
WO2024039634A1 (en) * | 2022-08-15 | 2024-02-22 | Robotic Research Opco, Llc | Region focusing radar systems and methods |
Also Published As
Publication number | Publication date |
---|---|
EP3899905A2 (en) | 2021-10-27 |
WO2020131223A2 (en) | 2020-06-25 |
CN113228129A (en) | 2021-08-06 |
US20230334983A1 (en) | 2023-10-19 |
CN113228129B (en) | 2023-05-02 |
WO2020131223A3 (en) | 2020-07-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20230334983A1 (en) | Message broadcasting for vehicles | |
US11807247B2 (en) | Methods and systems for managing interactions between vehicles with varying levels of autonomy | |
US11966228B2 (en) | Safety procedure analysis for obstacle avoidance in autonomous vehicles | |
US11495131B2 (en) | Vehicle to vehicle safety messaging congestion control for platooning vehicles | |
US20200189591A1 (en) | Steering Command Limiting For Safe Autonomous Automobile Operation | |
US11325524B2 (en) | Collaborative vehicle headlight directing | |
US20230020040A1 (en) | Batch control for autonomous vehicles | |
US11834071B2 (en) | System to achieve algorithm safety in heterogeneous compute platform | |
US20220230537A1 (en) | Vehicle-to-Everything (V2X) Misbehavior Detection Using a Local Dynamic Map Data Model | |
US11743700B2 (en) | Evaluating vehicle-to-everything (V2X) information | |
US20230130814A1 (en) | Yield scenario encoding for autonomous systems | |
WO2021253374A1 (en) | V2X Message For Platooning | |
US20220258739A1 (en) | Method and System for Generating a Confidence Value in a Position Overlap Check Using Vehicle Threshold Models | |
EP4282173A1 (en) | Vehicle-to-everything (v2x) misbehavior detection using a local dynamic map data model | |
Shao | Self-driving car technology: When will the robots hit the road? |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
AS | Assignment
Owner name: QUALCOMM INCORPORATED, CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHAVES, STEPHEN MARC;MELLINGER, DANIEL WARREN, III;MARTIN, PAUL DANIEL;AND OTHERS;SIGNING DATES FROM 20190806 TO 20190819;REEL/FRAME:050098/0267
STPP | Information on status: patent application and granting procedure in general
Free format text: NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general
Free format text: NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general
Free format text: FINAL REJECTION COUNTED, NOT YET MAILED
STPP | Information on status: patent application and granting procedure in general
Free format text: FINAL REJECTION MAILED
STPP | Information on status: patent application and granting procedure in general
Free format text: ADVISORY ACTION MAILED
STPP | Information on status: patent application and granting procedure in general
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general
Free format text: NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general
Free format text: FINAL REJECTION MAILED
STPP | Information on status: patent application and granting procedure in general
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general
Free format text: FINAL REJECTION MAILED
STCB | Information on status: application discontinuation
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION