CN117148828A - Method, system and medium for operating autonomous vehicles - Google Patents
- Publication number: CN117148828A
- Application number: CN202211610915.5A
- Authority
- CN
- China
- Prior art keywords
- vehicle
- trajectory
- condition
- autonomous vehicle
- criteria
- Prior art date
- Legal status: Withdrawn
Abstract
Methods, systems, and media for operating an autonomous vehicle are provided. An example method may include: obtaining a training data set associated with an autonomous vehicle action, the training data set comprising trajectory data for a plurality of examples of the autonomous vehicle action and labels for each of the plurality of examples; generating a decision tree based on the trajectory data and the labels of the training dataset; determining a vehicle trajectory criterion based on the decision tree; and communicating the vehicle trajectory criteria to at least one autonomous vehicle, wherein the at least one autonomous vehicle uses the vehicle trajectory criteria to select a vehicle trajectory for the at least one autonomous vehicle.
Description
Technical Field
The present invention relates to operating autonomous vehicles.
Background
Certain vehicle trajectory criteria, such as pedestrian clearance and maximum velocity, can readily (e.g., manually) be converted into formulas of Boolean logic and temporal logic that a user can interpret and explain. It is desirable to express all vehicle trajectory criteria in a logic that is expressive enough to capture them, yet simple enough to be interpreted and explained.
Disclosure of Invention
A method for operating an autonomous vehicle, comprising: obtaining a training data set associated with an autonomous vehicle action, the training data set comprising trajectory data for a plurality of examples of the autonomous vehicle action and labels for each of the plurality of examples; generating a decision tree based on the trajectory data and the labels of the training dataset; determining a vehicle trajectory criterion based on the decision tree; and communicating the vehicle trajectory criteria to at least one autonomous vehicle, wherein the at least one autonomous vehicle uses the vehicle trajectory criteria to select a vehicle trajectory for the at least one autonomous vehicle.
A system for operating an autonomous vehicle, comprising: at least one processor, and at least one non-transitory storage medium storing instructions that, when executed by the at least one processor, cause the at least one processor to perform the above operations.
At least one non-transitory storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform the above operations.
Drawings
FIG. 1 is an example environment in which a vehicle including one or more components of an autonomous system may be implemented;
FIG. 2 is a diagram of one or more systems of a vehicle including an autonomous system;
FIG. 3 is a diagram of components of one or more devices and/or one or more systems of FIGS. 1 and 2;
FIG. 4A is a diagram of certain components of an autonomous system;
FIG. 4B is a diagram of an implementation of a neural network;
FIGS. 4C and 4D are diagrams illustrating an example operation of a CNN;
FIG. 5A is a block diagram illustrating an example environment in which an inference system infers vehicle trajectory criteria;
FIG. 5B is a flow chart illustrating an example decision tree to infer vehicle trajectory criteria;
FIG. 6 is a flow diagram illustrating an example of a routine implemented by one or more processors to infer vehicle trajectory criteria; and
FIG. 7 is a flow chart illustrating an example of a routine implemented by one or more processors to select a vehicle track based on vehicle track criteria.
Detailed Description
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, that the embodiments described in this disclosure may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring aspects of the present disclosure.
In the drawings, for ease of description, specific arrangements or sequences of illustrative elements (such as those representing systems, devices, modules, blocks of instructions, and/or data elements, etc.) are illustrated. However, those of skill in the art will understand that the specific order or arrangement of elements illustrated in the drawings is not intended to require a specific order or sequence of processing, or separation of processes, unless explicitly described. Furthermore, the inclusion of a schematic element in a figure is not intended to mean that such element is required in all embodiments, nor that the feature represented by such element may not be included in or combined with other elements in some embodiments, unless explicitly described.
Furthermore, in the drawings, connecting elements (such as solid or dashed lines or arrows, etc.) are used to illustrate a connection, relationship, or association between or among two or more other schematic elements; the absence of any such connecting element is not intended to mean that no connection, relationship, or association can exist. In other words, some connections, relationships, or associations between elements are not illustrated in the drawings so as not to obscure the present disclosure. Further, for ease of illustration, a single connecting element may be used to represent multiple connections, relationships, or associations between elements. For example, if a connecting element represents communication of signals, data, or instructions (e.g., "software instructions"), those skilled in the art will understand that such element may represent one or more signal paths (e.g., buses) that may be required to effect the communication.
Although the terms "first," "second," and/or "third," etc. may be used to describe various elements, these elements should not be limited by these terms. The terms "first," "second," and/or "third" are used merely to distinguish one element from another element. For example, a first contact may be referred to as a second contact, and similarly, a second contact may be referred to as a first contact, without departing from the scope of the described embodiments. Both the first contact and the second contact are contacts, but they are not the same contact.
The terminology used in the description of the various embodiments described herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the specification of the various embodiments described and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, and may be used interchangeably with "one or more than one" or "at least one," unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises," "comprising," "includes," "including" and/or "having," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the terms "communication" and "communicating" refer to at least one of the receipt, transmission, and/or provision of information (or information represented by, for example, data, signals, messages, instructions, and/or commands, etc.). For one unit (e.g., a device, system, component of a device or system, and/or a combination thereof, etc.) to communicate with another unit, this means that the one unit is capable of directly or indirectly receiving information from and/or sending (e.g., transmitting) information to the other unit. This may refer to a direct or indirect connection that is wired and/or wireless in nature. In addition, two units may communicate with each other even though the transmitted information may be modified, processed, relayed and/or routed between the first unit and the second unit. For example, a first unit may communicate with a second unit even if the first unit passively receives information and does not actively transmit information to the second unit. As another example, if at least one intervening unit (e.g., a third unit located between the first unit and the second unit) processes information received from the first unit and transmits the processed information to the second unit, the first unit may communicate with the second unit. In some embodiments, a message may refer to a network packet (e.g., a data packet, etc.) that includes data.
As used herein, the term "if" is optionally construed to mean "when", "upon", "in response to determining", and/or "in response to detecting", etc., depending on the context. Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" is optionally construed to mean "upon determining", "in response to determining", "upon detecting [the stated condition or event]", and/or "in response to detecting [the stated condition or event]", etc., depending on the context. Furthermore, as used herein, the terms "has", "have", "having", and the like are intended to be open-ended terms. Furthermore, unless explicitly stated otherwise, the phrase "based on" is intended to mean "based, at least in part, on".
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the various embodiments described. It will be apparent, however, to one of ordinary skill in the art that the various embodiments described may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
General overview
The autonomous vehicle may plan a vehicle trajectory and execute control instructions to operate the autonomous vehicle to follow the vehicle trajectory. For example, the autonomous vehicle may determine whether the vehicle trajectory complies with vehicle trajectory criteria (e.g., a set of conditions for safety, comfort, and/or legal requirements (such as speed limits, etc.) for operation of the autonomous vehicle). The vehicle trajectory criteria underlying good driving behavior may not be well defined, or may be difficult to formulate (e.g., because they arise from complex interactions, such as those captured by machine learning networks). As an example, it may be difficult to derive analytically the combination of turn timing, forward acceleration, initial approach, curvature, etc. that results in a comfortable ride (e.g., as experienced by a user of an autonomous vehicle).
In some cases, certain vehicle trajectory criteria (such as pedestrian clearance and maximum velocity, etc.) may be easily (e.g., manually) converted into formulas of Boolean logic and temporal logic that may be interpreted and explained by a user. In other cases, certain other vehicle trajectory criteria (such as those distinguishing specific good (safe) behavior from bad (unsafe) behavior, etc.) may be learned from data. In some cases, machine learning techniques may provide classifiers for such vehicle trajectory criteria that a user is unable to interpret. However, it may be desirable to express all vehicle trajectory criteria in a logic that is expressive enough to capture them, yet simple enough to be interpreted and explained.
Accordingly, in some aspects and/or embodiments, the systems, methods, and computer program products described herein include and/or implement an inference system to infer vehicle trajectory criteria. As a non-limiting example, the inference system may infer vehicle trajectory criteria based on training data. In some cases, vehicle trajectory criteria may be inferred for a particular autonomous vehicle action. In some cases, the training data may be based on positive examples and negative examples collected from traffic data and/or simulation. In some cases, the inference system may determine the vehicle trajectory criteria from a decision tree generated based on the training data. In some cases, the vehicle trajectory criteria may be interpretable and explainable.
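As a concrete illustration of what such a training data set might look like, the following Python sketch shows one possible representation of labeled trajectory traces. The class, field, and signal names (Trace, signals, acceptable, etc.) are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Trace:
    """One example of an autonomous vehicle action (e.g., one recorded or simulated maneuver)."""
    # Per-timestep trajectory signals, e.g. speed, acceleration, pedestrian clearance.
    signals: Dict[str, List[float]]
    # Label for the example: True = positive (acceptable), False = negative (unacceptable).
    acceptable: bool

# A tiny training set mixing a positive and a negative example of the same action.
training_set = [
    Trace(signals={"speed": [8.0, 7.5, 6.9],
                   "pedestrian_clearance": [4.2, 3.9, 3.7]}, acceptable=True),
    Trace(signals={"speed": [12.1, 12.4, 12.8],
                   "pedestrian_clearance": [1.1, 0.9, 0.8]}, acceptable=False),
]
```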
In some cases, to generate the decision tree, the inference system may split the traces of the training data using conditions (e.g., formulas) at each node. For example, the inference system may split the traces by selecting a first set of traces that satisfy a first condition and associating the first set with a first child node. The inference system may associate a second set of traces that do not satisfy the first condition with a second child node. The inference system can evaluate conditions related to vehicle trajectory, velocity, distance to an object, acceleration, and the like.
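Continuing the sketch above (and reusing the assumed Trace type and training_set), the following shows how a node-level condition might partition a set of traces into two child sets. The Condition wrapper and the example threshold are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Condition:
    """A named predicate over a trace's signals (e.g., an atomic temporal-logic condition)."""
    name: str
    test: Callable[[Dict[str, List[float]]], bool]

def split(traces: List["Trace"], condition: Condition) -> Tuple[List["Trace"], List["Trace"]]:
    """Partition traces into (satisfying, not satisfying) sets for one decision-tree node."""
    satisfied = [t for t in traces if condition.test(t.signals)]
    unsatisfied = [t for t in traces if not condition.test(t.signals)]
    return satisfied, unsatisfied

# Example condition: the minimum pedestrian clearance along the trace exceeds 2 m.
clearance_gt_2m = Condition(
    name="min(pedestrian_clearance) > 2.0",
    test=lambda s: min(s["pedestrian_clearance"]) > 2.0,
)
first_child_set, second_child_set = split(training_set, clearance_gt_2m)
```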
In some cases, the inference system may select the first condition by ranking a plurality of conditions using an impurity-reduction measure. The impurity-reduction measure may quantify the quality of the split produced by a condition. The inference system may then select the highest-ranking condition as the first condition. For example, the impurity-reduction measure may be an information gain measure, a Gini gain measure, a misclassification gain measure, or the like. In general, a good split results in sets of traces that are "pure" (i.e., contain objects of the same class).
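A minimal illustration of this ranking, continuing the Python sketch above and using the Gini gain as the example impurity-reduction measure (the function names are assumptions; information gain or misclassification gain could be substituted):

```python
from typing import List

def gini(traces: List["Trace"]) -> float:
    """Gini impurity of a labeled set: 0.0 when the set is pure, maximal at a 50/50 mix."""
    if not traces:
        return 0.0
    p = sum(t.acceptable for t in traces) / len(traces)
    return 2.0 * p * (1.0 - p)

def gini_gain(traces: List["Trace"], condition: "Condition") -> float:
    """Impurity reduction obtained by splitting the traces on the given condition."""
    yes, no = split(traces, condition)
    weighted_child_impurity = (len(yes) * gini(yes) + len(no) * gini(no)) / len(traces)
    return gini(traces) - weighted_child_impurity

def best_condition(traces: List["Trace"], candidates: List["Condition"]) -> "Condition":
    """Rank the candidate conditions by impurity reduction and return the highest-ranking one."""
    return max(candidates, key=lambda c: gini_gain(traces, c))
```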
In some cases, the inference system may recursively split successive child nodes until a continuation condition is no longer satisfied. The continuation condition may indicate that a set of traces is not "pure" (i.e., contains a mix of objects of different classes). For example, the objects may be traces and the classes may be the labels of the traces. In some cases, the inference system may determine that the continuation condition is satisfied when the traces of the set include traces labeled with a first label (e.g., positive examples of the autonomous vehicle action) and traces labeled with a second label (e.g., negative examples of the autonomous vehicle action). In some cases, when the traces of a set include only traces (or at least a threshold percentage of traces) labeled with the first label, the inference system may determine that the continuation condition is not satisfied. In response to the continuation condition not being satisfied, the inference system may determine that the decision tree is complete. In some cases, the inference system may use enhanced (e.g., boosted) decision tree generation to generate the decision tree.
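Continuing the sketch, a recursive construction along these lines might look as follows. The purity threshold, the guard against degenerate splits, and all names are illustrative assumptions rather than the patent's algorithm.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    traces: List["Trace"]
    condition: Optional["Condition"] = None    # condition used to split this node (None = leaf)
    yes_child: Optional["Node"] = None         # child holding traces that satisfy the condition
    no_child: Optional["Node"] = None          # child holding traces that do not

def continuation_condition(traces: List["Trace"], purity_threshold: float = 0.95) -> bool:
    """True while the set still mixes labels, i.e. neither label reaches the purity threshold."""
    if not traces:
        return False
    positive_fraction = sum(t.acceptable for t in traces) / len(traces)
    return (1.0 - purity_threshold) < positive_fraction < purity_threshold

def build_tree(traces: List["Trace"], candidates: List["Condition"]) -> Node:
    node = Node(traces=traces)
    if not candidates or not continuation_condition(traces):
        return node                            # pure enough: this branch of the tree is complete
    best = best_condition(traces, candidates)
    yes, no = split(traces, best)
    if not yes or not no:
        return node                            # degenerate split: stop rather than recurse forever
    node.condition = best
    node.yes_child = build_tree(yes, candidates)
    node.no_child = build_tree(no, candidates)
    return node
```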
In some cases, the inference system may determine the vehicle trajectory criteria by traversing a complete decision tree. For example, the inference system may traverse the positive path of the decision tree and retrieve conditions for nodes along the positive path. In some cases, the vehicle trajectory criteria may be constructed using Signal Temporal Logic (STL) with conditions retrieved from nodes along the path. Thus, the vehicle trajectory criteria may define logic that is sufficiently expressive for autonomous vehicle applications, but simple enough to make learning computationally feasible.
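Continuing the sketch, one way to read interpretable criteria off the completed tree is to collect the conditions along each path that ends in an "acceptable" leaf; how those conditions are then wrapped in Signal Temporal Logic operators is an application-specific choice not prescribed here.

```python
from typing import List

def positive_path_criteria(node: "Node", path: List[str], out: List[List[str]]) -> None:
    """Collect, for each leaf containing only acceptable traces, the conditions on its path."""
    if node.condition is None:                               # leaf node
        if node.traces and all(t.acceptable for t in node.traces):
            out.append(list(path))                           # one candidate trajectory criterion
        return
    positive_path_criteria(node.yes_child, path + [node.condition.name], out)
    positive_path_criteria(node.no_child, path + [f"not ({node.condition.name})"], out)

tree = build_tree(training_set, candidates=[clearance_gt_2m])
criteria: List[List[str]] = []
positive_path_criteria(tree, [], criteria)
# For the toy data above, criteria == [["min(pedestrian_clearance) > 2.0"]], which could be
# read as an STL-style requirement that the clearance condition hold over the trajectory.
```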
In some cases, the inference system may provide the vehicle trajectory criteria in the same format as traffic regulation constraints. Thus, the present methods and systems may provide a unified framework to represent both traffic regulations and vehicle trajectory criteria that represent good vehicle behavior. In some cases, the vehicle trajectory criteria are interpretable; thus, one can understand how the vehicle trajectory criteria are being applied. In some cases, the vehicle trajectory criteria are generated using training data that includes examples of good and bad behavior; thus, the vehicle trajectory criteria may be learned automatically from data. In some cases, satisfaction of the vehicle trajectory criteria is quantifiable. Thus, in operation, the planning system may select a vehicle trajectory that violates no vehicle trajectory criteria, or that violates only lower-priority vehicle trajectory criteria (e.g., a first vehicle trajectory criterion may not be violated (for safety) while a second vehicle trajectory criterion may be violated (for comfort)).
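As an illustration of how a planner might use such criteria at run time (a hedged sketch, not the patent's planning system), hard safety criteria can be treated as inviolable while comfort criteria only rank the remaining candidates:

```python
from typing import Dict, List, Optional, Sequence

Signals = Dict[str, List[float]]   # candidate trajectory, in the same per-signal form as a Trace

def select_trajectory(candidates: Sequence[Signals],
                      safety_criteria: Sequence["Condition"],
                      comfort_criteria: Sequence["Condition"]) -> Optional[Signals]:
    """Pick a candidate that violates no safety criterion and the fewest comfort criteria."""
    safe = [c for c in candidates if all(crit.test(c) for crit in safety_criteria)]
    if not safe:
        return None   # no candidate satisfies the safety criteria; defer to a fallback behavior
    return min(safe, key=lambda c: sum(not crit.test(c) for crit in comfort_criteria))
```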
With the implementation of the systems, methods, and computer program products described herein, an autonomous vehicle or AV system may select a vehicle trajectory that meets a vehicle trajectory criterion inferred from examples marked as acceptable or unacceptable. Thus, the system of the present disclosure may perform autonomous vehicle actions according to vehicle trajectory criteria corresponding to vehicle actions marked as acceptable.
Referring now to FIG. 1, an example environment 100 is illustrated in which a vehicle that includes an autonomous system and a vehicle that does not include an autonomous system operate in the example environment 100. As illustrated, environment 100 includes vehicles 102a-102n, objects 104a-104n, routes 106a-106n, areas 108, vehicle-to-infrastructure (V2I) devices 110, a network 112, a remote Autonomous Vehicle (AV) system 114, a queue management system 116, and a V2I system 118. The vehicles 102a-102n, the vehicle-to-infrastructure (V2I) device 110, the network 112, the Autonomous Vehicle (AV) system 114, the queue management system 116, and the V2I system 118 are interconnected via wired connections, wireless connections, or a combination of wired or wireless connections (e.g., establishing a connection for communication, etc.). In some embodiments, the objects 104a-104n are interconnected with at least one of the vehicles 102a-102n, the vehicle-to-infrastructure (V2I) device 110, the network 112, the Autonomous Vehicle (AV) system 114, the queue management system 116, and the V2I system 118 via a wired connection, a wireless connection, or a combination of wired or wireless connections.
The vehicles 102a-102n (individually referred to as vehicles 102 and collectively referred to as vehicles 102) include at least one device configured to transport cargo and/or personnel. In some embodiments, the vehicle 102 is configured to communicate with the V2I device 110, the remote AV system 114, the queue management system 116, and/or the V2I system 118 via the network 112. In some embodiments, the vehicle 102 comprises a car, bus, truck, train, or the like. In some embodiments, the vehicle 102 is the same as or similar to the vehicle 200 (see fig. 2) described herein. In some embodiments, vehicles 200 in a group of vehicles 200 are associated with an autonomous queue manager. In some embodiments, the vehicles 102 travel along respective routes 106a-106n (individually referred to as routes 106 and collectively referred to as routes 106), as described herein. In some embodiments, one or more vehicles 102 include an autonomous system (e.g., the same or similar to autonomous system 202).
The objects 104a-104n (individually referred to as objects 104 and collectively referred to as objects 104) include, for example, at least one vehicle, at least one pedestrian, at least one rider, and/or at least one structure (e.g., building, sign, hydrant, etc.), and the like. Each object 104 is stationary (e.g., at a fixed location and for a period of time) or moves (e.g., has a velocity and is associated with at least one trajectory). In some embodiments, the object 104 is associated with a respective location in the region 108.
Routes 106a-106n (individually referred to as a route 106 and collectively referred to as routes 106) are each associated with (e.g., define) a sequence of actions (also referred to as a trajectory) connecting states along which the AV can navigate. Each route 106 begins at an initial state (e.g., a state corresponding to a first spatiotemporal location and/or speed, etc.) and ends at a final target state (e.g., a state corresponding to a second spatiotemporal location different from the first spatiotemporal location) or target region (e.g., a subspace of acceptable states (e.g., end states)). In some embodiments, the first state includes one or more locations at which one or more individuals are to be picked up by the AV, and the second state or region includes one or more locations at which the one or more individuals picked up by the AV are to be dropped off. In some embodiments, the route 106 includes a plurality of acceptable state sequences (e.g., a plurality of spatiotemporal location sequences) associated with (e.g., defining) a plurality of trajectories. In an example, the route 106 includes only high-level actions or imprecise state locations, such as a series of connecting roads indicating changes of direction at roadway intersections, and the like. Additionally or alternatively, the route 106 may include more precise actions or states, such as specific target lanes or precise locations within a lane region, and target speeds at those locations, etc. In an example, the route 106 includes a plurality of precise state sequences along at least one high-level action with a limited look-ahead horizon to an intermediate target, where the combination of successive iterations of the limited-horizon state sequences cumulatively corresponds to a plurality of trajectories that collectively form a high-level route that terminates at the final target state or region.
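As a purely illustrative sketch (the type and field names are assumptions, not the patent's data model), a route of this kind might be represented as follows:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class State:
    """A spatiotemporal state along a route (simplified here to planar position, time, speed)."""
    x: float
    y: float
    t: float
    speed: float

@dataclass
class Route:
    """A route from an initial (pick-up) state toward a goal state or region."""
    initial_state: State
    goal_region: Tuple[float, float, float]   # e.g. (x, y, radius) of the drop-off area
    # One or more acceptable state sequences (trajectories) realizing the route.
    trajectories: List[List[State]]
```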
The area 108 includes a physical area (e.g., a geographic area) that the vehicle 102 may navigate. In an example, the area 108 includes at least one state (e.g., a country, a province, an individual state of a plurality of states included in a country, etc.), at least a portion of a state, at least one city, at least a portion of a city, etc. In some embodiments, the area 108 includes at least one named thoroughfare (referred to herein as a "road"), such as a highway, an interstate highway, a parkway, a city street, or the like. Additionally or alternatively, in some examples, the area 108 includes at least one unnamed road, such as a roadway, a section of a parking lot, a section of an open space and/or undeveloped area, a dirt path, and the like. In some embodiments, the roadway includes at least one lane (e.g., a portion of the roadway through which the vehicle 102 may traverse). In an example, the road includes at least one lane associated with (e.g., identified based on) at least one lane marker.
A Vehicle-to-infrastructure (V2I) device 110 (sometimes referred to as a Vehicle-to-Everything (V2X) device) includes at least one device configured to communicate with the Vehicle 102 and/or the V2I system 118. In some embodiments, V2I device 110 is configured to communicate with vehicle 102, remote AV system 114, queue management system 116, and/or V2I system 118 via network 112. In some embodiments, V2I device 110 includes a Radio Frequency Identification (RFID) device, a sign, a camera (e.g., a two-dimensional (2D) and/or three-dimensional (3D) camera), a lane marker, a street light, a parking meter, and the like. In some embodiments, the V2I device 110 is configured to communicate directly with the vehicle 102. Additionally or alternatively, in some embodiments, the V2I device 110 is configured to communicate with the vehicle 102, the remote AV system 114, and/or the queue management system 116 via the V2I system 118. In some embodiments, V2I device 110 is configured to communicate with V2I system 118 via network 112.
Network 112 includes one or more wired and/or wireless networks. In an example, the network 112 includes a cellular network (e.g., a Long Term Evolution (LTE) network, a third generation (3G) network, a fourth generation (4G) network, a fifth generation (5G) network, a Code Division Multiple Access (CDMA) network, etc.), a Public Land Mobile Network (PLMN), a Local Area Network (LAN), a Wide Area Network (WAN), a Metropolitan Area Network (MAN), a telephone network (e.g., a Public Switched Telephone Network (PSTN)), a private network, an ad hoc network, an intranet, the internet, a fiber-optic based network, a cloud computing network, etc., and/or a combination of some or all of these networks, etc.
The remote AV system 114 includes at least one device configured to communicate with the vehicle 102, the V2I device 110, the network 112, the queue management system 116, and/or the V2I system 118 via the network 112. In an example, the remote AV system 114 includes a server, a group of servers, and/or other similar devices. In some embodiments, the remote AV system 114 is co-located with the queue management system 116. In some embodiments, the remote AV system 114 participates in the installation of some or all of the components of the vehicle (including autonomous systems, autonomous vehicle computing, and/or software implemented by autonomous vehicle computing, etc.). In some embodiments, the remote AV system 114 maintains (e.g., updates and/or replaces) these components and/or software over the life of the vehicle.
The queue management system 116 includes at least one device configured to communicate with the vehicle 102, the V2I device 110, the remote AV system 114, and/or the V2I system 118. In an example, the queue management system 116 includes a server, a server group, and/or other similar devices. In some embodiments, the queue management system 116 is associated with a carpool company (e.g., an organization for controlling operation of multiple vehicles (e.g., vehicles that include autonomous systems and/or vehicles that do not include autonomous systems), etc.).
In some embodiments, the V2I system 118 includes at least one device configured to communicate with the vehicle 102, the V2I device 110, the remote AV system 114, and/or the queue management system 116 via the network 112. In some examples, the V2I system 118 is configured to communicate with the V2I device 110 via a connection other than the network 112. In some embodiments, V2I system 118 includes a server, a server farm, and/or other similar devices. In some embodiments, the V2I system 118 is associated with a municipality or private institution (e.g., a private institution for maintaining the V2I device 110, etc.).
The number and arrangement of elements illustrated in fig. 1 are provided as examples. There may be additional elements, fewer elements, different elements, and/or differently arranged elements than those illustrated in fig. 1. Additionally or alternatively, at least one element of environment 100 may perform one or more functions described as being performed by at least one different element of fig. 1. Additionally or alternatively, at least one set of elements of environment 100 may perform one or more functions described as being performed by at least one different set of elements of environment 100.
Referring now to fig. 2, a vehicle 200 includes an autonomous system 202, a powertrain control system 204, a steering control system 206, and a braking system 208. In some embodiments, the vehicle 200 is the same as or similar to the vehicle 102 (see fig. 1). In some embodiments, the vehicle 200 has autonomous capability (e.g., implements at least one function, feature, and/or device, etc., that enables the vehicle 200 to operate partially or fully without human intervention), including, but not limited to, a fully autonomous vehicle (e.g., a vehicle that does not rely on human intervention) and/or a highly autonomous vehicle (e.g., a vehicle that does not rely on human intervention in certain cases), etc. For a detailed description of fully autonomous vehicles and highly autonomous vehicles, reference may be made to SAE International's standard J3016: Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems, which is incorporated by reference in its entirety. In some embodiments, the vehicle 200 is associated with an autonomous queue manager and/or a carpooling company.
The autonomous system 202 includes a sensor suite that includes one or more devices such as a camera 202a, a LiDAR sensor 202b, a radar sensor 202c, and a microphone 202d. In some embodiments, the autonomous system 202 may include more or fewer devices and/or different devices (e.g., ultrasonic sensors, inertial sensors, GPS receivers (discussed below), and/or odometry sensors for generating data associated with an indication of the distance that the vehicle 200 has traveled, etc.). In some embodiments, the autonomous system 202 uses one or more devices included in the autonomous system 202 to generate data associated with the environment 100 described herein. The data generated by the one or more devices of the autonomous system 202 may be used by one or more systems described herein to observe the environment (e.g., environment 100) in which the vehicle 200 is located. In some embodiments, the autonomous system 202 includes a communication device 202e, an autonomous vehicle calculation 202f, and a safety controller 202g.
The camera 202a includes at least one device configured to communicate with the communication device 202e, the autonomous vehicle calculation 202f, and/or the safety controller 202g via a bus (e.g., the same or similar to the bus 302 of fig. 3). The camera 202a includes at least one camera (e.g., a digital camera using a light sensor such as a Charge Coupled Device (CCD), thermal camera, infrared (IR) camera, event camera, etc.) to capture images including physical objects (e.g., cars, buses, curbs, and/or people, etc.). In some embodiments, camera 202a generates camera data as output. In some examples, camera 202a generates camera data including image data associated with the image. In this example, the image data may specify at least one parameter corresponding to the image (e.g., image characteristics such as exposure, brightness, etc., and/or an image timestamp, etc.). In such examples, the image may be in a format (e.g., RAW, JPEG, and/or PNG, etc.). In some embodiments, the camera 202a includes a plurality of independent cameras configured (e.g., positioned) on the vehicle to capture images for stereoscopic (stereo vision) purposes. In some examples, camera 202a includes a plurality of cameras that generate and transmit image data to autonomous vehicle computing 202f and/or a queue management system (e.g., a queue management system that is the same as or similar to queue management system 116 of fig. 1). In such an example, the autonomous vehicle calculation 202f determines a depth to one or more objects in the field of view of at least two cameras of the plurality of cameras based on image data from the at least two cameras. In some embodiments, camera 202a is configured to capture images of objects within a distance (e.g., up to 100 meters and/or up to 1 kilometer, etc.) relative to camera 202 a. Thus, the camera 202a includes features such as sensors and lenses that are optimized for sensing objects at one or more distances relative to the camera 202 a.
In an embodiment, camera 202a includes at least one camera configured to capture one or more images associated with one or more traffic lights, street signs, and/or other physical objects that provide visual navigation information. In some embodiments, the camera 202a generates traffic light data (TLD data) associated with one or more images. In some examples, the camera 202a generates TLD data associated with one or more images including formats (e.g., RAW, JPEG, and/or PNG, etc.). In some embodiments, the camera 202a that generates TLD data differs from other systems described herein that include cameras in that: the camera 202a may include one or more cameras having a wide field of view (e.g., wide angle lens, fisheye lens, and/or lens having a viewing angle of about 120 degrees or greater, etc.) to generate images related to as many physical objects as possible.
Light detection and ranging (LiDAR) sensor 202b includes at least one device configured to communicate with communication device 202e, autonomous vehicle calculation 202f, and/or safety controller 202g via a bus (e.g., the same or similar bus as bus 302 of fig. 3). LiDAR sensor 202b includes a system configured to emit light from a light emitter (e.g., a laser emitter). Light emitted by the LiDAR sensor 202b includes light outside the visible spectrum (e.g., infrared light, etc.). In some embodiments, during operation, light emitted by the LiDAR sensor 202b encounters a physical object (e.g., a vehicle) and is reflected back to the LiDAR sensor 202b. In some embodiments, the light emitted by LiDAR sensor 202b does not penetrate the physical object that the light encounters. LiDAR sensor 202b also includes at least one light detector that detects light emitted from the light emitter after the light encounters a physical object. In some embodiments, at least one data processing system associated with the LiDAR sensor 202b generates an image (e.g., a point cloud and/or a combined point cloud, etc.) representative of objects included in the field of view of the LiDAR sensor 202b. In some examples, at least one data processing system associated with the LiDAR sensor 202b generates images representing boundaries of the physical object and/or surfaces (e.g., topology of surfaces) of the physical object, etc. In such an example, the image is used to determine the boundary of a physical object in the field of view of the LiDAR sensor 202b.
The radio detection and ranging (radar) sensor 202c includes at least one device configured to communicate with the communication device 202e, the autonomous vehicle calculation 202f, and/or the safety controller 202g via a bus (e.g., the same or similar bus as the bus 302 of fig. 3). The radar sensor 202c includes a system configured to emit (pulsed or continuous) radio waves. The radio waves emitted by the radar sensor 202c include radio waves within a predetermined frequency spectrum. In some embodiments, during operation, radio waves emitted by the radar sensor 202c encounter a physical object and are reflected back to the radar sensor 202c. In some embodiments, the radio waves emitted by the radar sensor 202c are not reflected by some objects. In some embodiments, at least one data processing system associated with radar sensor 202c generates signals representative of objects included in the field of view of radar sensor 202c. For example, at least one data processing system associated with radar sensor 202c generates images representing boundaries of physical objects and/or surfaces (e.g., topology of surfaces) of physical objects, etc. In some examples, the image is used to determine boundaries of physical objects in the field of view of radar sensor 202c.
Microphone 202d includes at least one device configured to communicate with communication device 202e, autonomous vehicle computing 202f, and/or safety controller 202g via a bus (e.g., the same or similar bus as bus 302 of fig. 3). Microphone 202d includes one or more microphones (e.g., array microphones and/or external microphones, etc.) that capture an audio signal and generate data associated with (e.g., representative of) the audio signal. In some examples, microphone 202d includes transducer means and/or the like. In some embodiments, one or more systems described herein may receive data generated by microphone 202d and determine a position (e.g., distance, etc.) of an object relative to vehicle 200 based on an audio signal associated with the data.
The communication device 202e includes at least one device configured to communicate with a camera 202a, a LiDAR sensor 202b, a radar sensor 202c, a microphone 202d, an autonomous vehicle calculation 202f, a security controller 202g, and/or a drive-by-wire (DBW) system 202 h. For example, communication device 202e may include the same or similar devices as communication interface 314 of fig. 3. In some embodiments, the communication device 202e comprises a vehicle-to-vehicle (V2V) communication device (e.g., a device for enabling wireless communication of data between vehicles).
The autonomous vehicle calculation 202f includes at least one device configured to communicate with the camera 202a, the LiDAR sensor 202b, the radar sensor 202c, the microphone 202d, the communication device 202e, the security controller 202g, and/or the DBW system 202 h. In some examples, the autonomous vehicle computing 202f includes devices such as client devices, mobile devices (e.g., cellular phones and/or tablet computers, etc.), and/or servers (e.g., computing devices including one or more central processing units and/or graphics processing units, etc.), among others. In some embodiments, the autonomous vehicle calculation 202f is the same as or similar to the autonomous vehicle calculation 400 described herein. Additionally or alternatively, in some embodiments, the autonomous vehicle computing 202f is configured to communicate with an autonomous vehicle system (e.g., an autonomous vehicle system that is the same as or similar to the remote AV system 114 of fig. 1), a queue management system (e.g., a queue management system that is the same as or similar to the queue management system 116 of fig. 1), a V2I device (e.g., a V2I device that is the same as or similar to the V2I device 110 of fig. 1), and/or a V2I system (e.g., a V2I system that is the same as or similar to the V2I system 118 of fig. 1).
The safety controller 202g includes at least one device configured to communicate with the camera 202a, the LiDAR sensor 202b, the radar sensor 202c, the microphone 202d, the communication device 202e, the autonomous vehicle calculation 202f, and/or the DBW system 202 h. In some examples, the safety controller 202g includes one or more controllers (electrical and/or electromechanical controllers, etc.) configured to generate and/or transmit control signals to operate one or more devices of the vehicle 200 (e.g., the powertrain control system 204, the steering control system 206, and/or the braking system 208, etc.). In some embodiments, the safety controller 202g is configured to generate control signals that override (e.g., override) control signals generated and/or transmitted by the autonomous vehicle calculation 202 f.
The DBW system 202h includes at least one device configured to communicate with the communication device 202e and/or the autonomous vehicle calculation 202 f. In some examples, the DBW system 202h includes one or more controllers (e.g., electrical and/or electromechanical controllers, etc.) configured to generate and/or transmit control signals to operate one or more devices of the vehicle 200 (e.g., the powertrain control system 204, the steering control system 206, and/or the braking system 208, etc.). Additionally or alternatively, one or more controllers of the DBW system 202h are configured to generate and/or transmit control signals to operate at least one different device of the vehicle 200 (e.g., turn signal lights, headlights, door locks, and/or windshield wipers, etc.).
The powertrain control system 204 includes at least one device configured to communicate with the DBW system 202 h. In some examples, the powertrain control system 204 includes at least one controller and/or actuator, etc. In some embodiments, the powertrain control system 204 receives control signals from the DBW system 202h, and the powertrain control system 204 causes the vehicle 200 to begin moving forward, stop moving forward, begin moving backward, stop moving backward, accelerate in a direction, decelerate in a direction, make a left turn, make a right turn, and/or the like. In an example, the powertrain control system 204 increases, maintains the same, or decreases the energy (e.g., fuel and/or electricity, etc.) provided to the motor of the vehicle, thereby rotating or not rotating at least one wheel of the vehicle 200.
The steering control system 206 includes at least one device configured to rotate one or more wheels of the vehicle 200. In some examples, the steering control system 206 includes at least one controller and/or actuator, etc. In some embodiments, steering control system 206 rotates the two front wheels and/or the two rear wheels of vehicle 200 to the left or right to turn vehicle 200 to the left or right.
The braking system 208 includes at least one device configured to actuate one or more brakes to slow and/or hold the vehicle 200 stationary. In some examples, the braking system 208 includes at least one controller and/or actuator configured to cause one or more calipers associated with one or more wheels of the vehicle 200 to close on a respective rotor of the vehicle 200. Additionally or alternatively, in some examples, the braking system 208 includes an Automatic Emergency Braking (AEB) system and/or a regenerative braking system, or the like.
In some embodiments, the vehicle 200 includes at least one platform sensor (not explicitly illustrated) for measuring or inferring a property of the state or condition of the vehicle 200. In some examples, the vehicle 200 includes platform sensors such as a Global Positioning System (GPS) receiver, an Inertial Measurement Unit (IMU), a wheel speed sensor, a wheel brake pressure sensor, a wheel torque sensor, an engine torque sensor, and/or a steering angle sensor, among others.
Referring now to fig. 3, a schematic diagram of an apparatus 300 is illustrated. As illustrated, the apparatus 300 includes a bus 302, a processor 304, a memory 306, a storage component 308, an input interface 310, an output interface 312, and a communication interface 314. In some embodiments, the apparatus 300 corresponds to: at least one device of the vehicle 102 (e.g., at least one device of a system of the vehicle 102); and/or one or more devices of the network 112 (e.g., one or more devices of a system of the network 112). In some embodiments, one or more devices of the vehicle 102 (e.g., one or more devices of the system of the vehicle 102) and/or one or more devices of the network 112 (e.g., one or more devices of the system of the network 112) include at least one apparatus 300 and/or at least one component of the apparatus 300.
Bus 302 includes components that permit communication between the components of the apparatus 300. In some examples, the processor 304 includes a processor (e.g., a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), and/or an Accelerated Processing Unit (APU), etc.), a microprocessor, a Digital Signal Processor (DSP), and/or any processing component that may be programmed to perform at least one function (e.g., a Field Programmable Gate Array (FPGA) and/or an Application Specific Integrated Circuit (ASIC), etc.). Memory 306 includes Random Access Memory (RAM), Read Only Memory (ROM), and/or another type of dynamic and/or static storage device (e.g., flash memory, magnetic memory, and/or optical memory, etc.) that stores data and/or instructions for use by the processor 304.
The storage component 308 stores data and/or software related to operation and use of the apparatus 300. In some examples, storage component 308 includes a hard disk (e.g., magnetic disk, optical disk, magneto-optical disk, and/or solid state disk, etc.), a Compact Disk (CD), a Digital Versatile Disk (DVD), a floppy disk, a magnetic cassette tape, a magnetic tape, a CD-ROM, RAM, PROM, EPROM, FLASH-EPROM, NV-RAM, and/or another type of computer-readable medium, and a corresponding drive.
Input interface 310 includes components that permit device 300 to receive information, such as via user input (e.g., a touch screen display, keyboard, keypad, mouse, buttons, switches, microphone, and/or camera, etc.). Additionally or alternatively, in some embodiments, the input interface 310 includes sensors (e.g., global Positioning System (GPS) receivers, accelerometers, gyroscopes, and/or actuators, etc.) for sensing information. Output interface 312 includes components (e.g., a display, a speaker, and/or one or more Light Emitting Diodes (LEDs), etc.) for providing output information from device 300.
In some embodiments, the communication interface 314 includes transceiver-like components (e.g., a transceiver and/or separate receivers and transmitters, etc.) that permit the device 300 to communicate with other devices via a wired connection, a wireless connection, or a combination of a wired connection and a wireless connection. In some examples, the communication interface 314 permits the device 300 to receive information from and/or provide information to another device. In some examples, the communication interface 314 includes an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a Radio Frequency (RF) interface, a Universal Serial Bus (USB) interface, a wireless interface, and/or a cellular network interface, etc.
In some embodiments, the apparatus 300 performs one or more of the processes described herein. The apparatus 300 performs these processes based on the processor 304 executing software instructions stored by a computer readable medium, such as the memory 306 and/or the storage component 308. A computer-readable medium (e.g., a non-transitory computer-readable medium) is defined herein as a non-transitory memory device. Non-transitory memory devices include storage space located within a single physical storage device or distributed across multiple physical storage devices.
In some embodiments, the software instructions are read into memory 306 and/or storage component 308 from another computer-readable medium or from another device via communication interface 314. The software instructions stored in memory 306 and/or storage component 308, when executed, cause processor 304 to perform one or more of the processes described herein. Additionally or alternatively, hardwired circuitry is used in place of or in combination with software instructions to perform one or more processes described herein. Thus, unless explicitly stated otherwise, the embodiments described herein are not limited to any specific combination of hardware circuitry and software.
Memory 306 and/or storage component 308 includes a data store or at least one data structure (e.g., database, etc.). The apparatus 300 is capable of receiving information from, storing information in, communicating information to, or searching information stored in a data store or at least one data structure in the memory 306 or storage component 308. In some examples, the information includes network data, input data, output data, or any combination thereof.
In some embodiments, apparatus 300 is configured to execute software instructions stored in memory 306 and/or a memory of another apparatus (e.g., another apparatus that is the same as or similar to apparatus 300). As used herein, the term "module" refers to at least one instruction stored in memory 306 and/or a memory of another device that, when executed by processor 304 and/or a processor of another device (e.g., another device that is the same as or similar to device 300), causes device 300 (e.g., at least one component of device 300) to perform one or more processes described herein. In some embodiments, the modules are implemented in software, firmware, hardware, and/or the like.
The number and arrangement of components illustrated in fig. 3 are provided as examples. In some embodiments, apparatus 300 may include additional components, fewer components, different components, or differently arranged components than those illustrated in fig. 3. Additionally or alternatively, a set of components (e.g., one or more components) of the apparatus 300 may perform one or more functions described as being performed by another component or set of components of the apparatus 300.
Referring now to fig. 4A, an example block diagram of an autonomous vehicle computation 400 (sometimes referred to as an "AV stack") is illustrated. As illustrated, the autonomous vehicle computation 400 includes a perception system 402 (sometimes referred to as a perception module), a planning system 404 (sometimes referred to as a planning module), a positioning system 406 (sometimes referred to as a positioning module), a control system 408 (sometimes referred to as a control module), and a database 410. In some embodiments, the perception system 402, the planning system 404, the positioning system 406, the control system 408, and the database 410 are included in and/or implemented in an automated navigation system of a vehicle (e.g., the autonomous vehicle calculation 202f of the vehicle 200). Additionally or alternatively, in some embodiments, the perception system 402, the planning system 404, the positioning system 406, the control system 408, and the database 410 are included in one or more independent systems (e.g., one or more systems that are the same as or similar to the autonomous vehicle computation 400, etc.). In some examples, the perception system 402, the planning system 404, the positioning system 406, the control system 408, and the database 410 are included in one or more independent systems located in the vehicle and/or in at least one remote system as described herein. In some embodiments, any and/or all of the systems included in the autonomous vehicle computation 400 are implemented in software (e.g., software instructions stored in memory), computer hardware (e.g., by microprocessors, microcontrollers, Application Specific Integrated Circuits (ASICs), and/or Field Programmable Gate Arrays (FPGAs), etc.), or a combination of computer software and computer hardware. It will also be appreciated that, in some embodiments, the autonomous vehicle computation 400 is configured to communicate with a remote system (e.g., an autonomous vehicle system that is the same as or similar to the remote AV system 114, a queue management system that is the same as or similar to the queue management system 116, and/or a V2I system that is the same as or similar to the V2I system 118, etc.).
In some embodiments, the perception system 402 receives data associated with at least one physical object in the environment (e.g., data used by the perception system 402 to detect the at least one physical object) and classifies the at least one physical object. In some examples, perception system 402 receives image data captured by at least one camera (e.g., camera 202 a) that is associated with (e.g., represents) one or more physical objects within a field of view of the at least one camera. In such examples, the perception system 402 classifies at least one physical object based on one or more groupings of physical objects (e.g., bicycles, vehicles, traffic signs, and/or pedestrians, etc.). In some embodiments, based on the classification of the physical object by the perception system 402, the perception system 402 transmits data associated with the classification of the physical object to the planning system 404.
In some embodiments, planning system 404 receives data associated with a destination and generates data associated with at least one route (e.g., route 106) along which a vehicle (e.g., vehicle 102) may travel toward the destination. In some embodiments, the planning system 404 receives data (e.g., the data associated with the classification of the physical object described above) from the perception system 402 periodically or continuously, and the planning system 404 updates at least one trajectory or generates at least one different trajectory based on the data generated by the perception system 402. In some embodiments, planning system 404 receives data associated with an updated position of a vehicle (e.g., vehicle 102) from positioning system 406, and planning system 404 updates at least one track or generates at least one different track based on the data generated by positioning system 406.
In some embodiments, the positioning system 406 receives data associated with (e.g., representative of) a location of a vehicle (e.g., the vehicle 102) in an area. In some examples, the positioning system 406 receives LiDAR data associated with at least one point cloud generated by at least one LiDAR sensor (e.g., LiDAR sensor 202b). In some examples, the positioning system 406 receives data associated with at least one point cloud from a plurality of LiDAR sensors, and the positioning system 406 generates a combined point cloud based on each point cloud. In these examples, the positioning system 406 compares the at least one point cloud or combined point cloud to a two-dimensional (2D) and/or three-dimensional (3D) map of the area stored in the database 410. The positioning system 406 then determines the location of the vehicle in the area based on the comparison of the at least one point cloud or combined point cloud to the map. In some embodiments, the map includes a combined point cloud for the area generated prior to navigation of the vehicle. In some embodiments, the map includes, but is not limited to, a high-precision map of roadway geometry, a map describing road network connection properties, a map describing roadway physical properties (such as traffic speed, traffic flow, number of vehicle and bicycle traffic lanes, lane width, lane traffic direction, or the type and location of lane markings, or combinations thereof, etc.), and a map describing the spatial locations of roadway features (such as crosswalks, traffic signs, or various other types of travel signals, etc.). In some embodiments, the map is generated in real time based on data received by the perception system.
In another example, the positioning system 406 receives Global Navigation Satellite System (GNSS) data generated by a Global Positioning System (GPS) receiver. In some examples, positioning system 406 receives GNSS data associated with a location of a vehicle in an area, and positioning system 406 determines a latitude and longitude of the vehicle in the area. In such examples, the positioning system 406 determines the location of the vehicle in the area based on the latitude and longitude of the vehicle. In some embodiments, the positioning system 406 generates data associated with the position of the vehicle. In some examples, based on the positioning system 406 determining the location of the vehicle, the positioning system 406 generates data associated with the location of the vehicle. In such examples, the data associated with the location of the vehicle includes data associated with one or more semantic properties corresponding to the location of the vehicle.
In some embodiments, control system 408 receives data associated with at least one trajectory from planning system 404, and control system 408 controls the operation of the vehicle. In some examples, the control system 408 receives data associated with at least one trajectory from the planning system 404, and the control system 408 controls operation of the vehicle by generating and transmitting control signals to operate a powertrain control system (e.g., the DBW system 202h and/or the powertrain control system 204, etc.), a steering control system (e.g., the steering control system 206), and/or a braking system (e.g., the braking system 208). In an example, where the trajectory includes a left turn, the control system 408 transmits a control signal to cause the steering control system 206 to adjust the steering angle of the vehicle 200, thereby causing the vehicle 200 to turn left. Additionally or alternatively, the control system 408 generates and transmits control signals to cause other devices of the vehicle 200 (e.g., headlights, turn signal lights, door locks, and/or windshield wipers, etc.) to change state.
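For purposes of illustration only, the following Python sketch shows one way a control system could translate a trajectory heading and speed target into steering, throttle, and brake commands. The class name, function name, gains, and steering limits are illustrative assumptions and are not part of the systems described above.

```python
from dataclasses import dataclass

@dataclass
class ControlSignal:
    steering_angle_deg: float  # positive values steer left (assumed convention)
    throttle: float            # 0.0 .. 1.0
    brake: float               # 0.0 .. 1.0

def control_signal_for_waypoint(current_heading_deg: float,
                                target_heading_deg: float,
                                current_speed: float,
                                target_speed: float) -> ControlSignal:
    """Proportional-control sketch: steer toward the trajectory heading and
    adjust speed toward the trajectory's target speed (assumed gains)."""
    heading_error = target_heading_deg - current_heading_deg
    steering = max(-35.0, min(35.0, 0.5 * heading_error))  # clamp to assumed steering limits
    speed_error = target_speed - current_speed
    throttle = max(0.0, min(1.0, 0.1 * speed_error))
    brake = max(0.0, min(1.0, -0.1 * speed_error))
    return ControlSignal(steering, throttle, brake)
```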
In some embodiments, the perception system 402, the planning system 404, the localization system 406, and/or the control system 408 implement at least one machine learning model (e.g., at least one multi-layer perceptron (MLP), at least one Convolutional Neural Network (CNN), at least one Recurrent Neural Network (RNN), at least one automatic encoder and/or at least one transformer, etc.). In some examples, the perception system 402, the planning system 404, the positioning system 406, and/or the control system 408 implement at least one machine learning model alone or in combination with one or more of the above systems. In some examples, the perception system 402, the planning system 404, the positioning system 406, and/or the control system 408 implement at least one machine learning model as part of a pipeline (e.g., a pipeline for identifying one or more objects located in an environment, etc.). Examples of implementations of the machine learning model are included below with respect to fig. 4B-4D.
Database 410 stores data transmitted to, received from, and/or updated by the perception system 402, planning system 404, positioning system 406, and/or control system 408. In some examples, database 410 includes a storage component (e.g., the same as or similar to storage component 308 of fig. 3) for storing data and/or software related to the operation of, and use of, at least one system of the autonomous vehicle computation 400. In some embodiments, database 410 stores data associated with 2D and/or 3D maps of at least one region. In some examples, database 410 stores data associated with 2D and/or 3D maps of a portion of a city, portions of multiple cities, counties, states, and/or countries, etc. In such examples, a vehicle (e.g., the same as or similar to vehicle 102 and/or vehicle 200) may drive along one or more drivable regions (e.g., single-lane roads, multi-lane roads, highways, remote roads, and/or off-road roads, etc.) and cause at least one LiDAR sensor (e.g., the same as or similar to LiDAR sensor 202b) to generate data associated with an image representative of objects included in a field of view of the at least one LiDAR sensor.
In some embodiments, database 410 may be implemented across multiple devices. In some examples, database 410 is included in a vehicle (e.g., the same or similar to vehicle 102 and/or vehicle 200), an autonomous vehicle system (e.g., the same or similar to remote AV system 114), a queue management system (e.g., the same or similar to queue management system 116 of fig. 1), and/or a V2I system (e.g., the same or similar to V2I system 118 of fig. 1), etc.
Referring now to FIG. 4B, a diagram of an implementation of a machine learning model is illustrated. More specifically, a diagram of an implementation of a convolutional neural network (CNN) 420 is illustrated. For purposes of illustration, the following description of CNN 420 will be with respect to an implementation of CNN 420 by the perception system 402. However, it will be appreciated that in some examples, CNN 420 (e.g., one or more components of CNN 420) is implemented by other systems (such as the planning system 404, the positioning system 406, and/or the control system 408, etc.) other than the perception system 402 or in addition to the perception system 402. Although CNN 420 includes certain features as described herein, these features are provided for illustrative purposes and are not intended to limit the present disclosure.
CNN 420 includes a plurality of convolution layers including a first convolution layer 422, a second convolution layer 424, and a convolution layer 426. In some embodiments, CNN 420 includes a sub-sampling layer 428 (sometimes referred to as a pooling layer). In some embodiments, the sub-sampling layer 428 and/or other sub-sampling layers have a dimension (i.e., an amount of nodes) that is smaller than the dimension of an upstream layer. By virtue of the sub-sampling layer 428 having a dimension smaller than that of the upstream layer, CNN 420 consolidates the amount of data associated with the initial input and/or the output of the upstream layer, thereby reducing the amount of computation required by CNN 420 to perform downstream convolution operations. Additionally or alternatively, CNN 420 consolidates the amount of data associated with the initial input by virtue of the sub-sampling layer 428 being associated with (e.g., configured to perform) at least one sub-sampling function (as described below with respect to figs. 4C and 4D).
Based on the perception system 402 providing respective inputs and/or outputs associated with each of the first convolution layer 422, the second convolution layer 424, and the convolution layer 426 to generate respective outputs, the perception system 402 performs convolution operations. In some examples, the perception system 402 implements the CNN420 based on the perception system 402 providing data as input to a first convolution layer 422, a second convolution layer 424, and a convolution layer 426. In such examples, based on the perception system 402 receiving data from one or more different systems (e.g., one or more systems of a vehicle that is the same or similar to the vehicle 102, a remote AV system that is the same or similar to the remote AV system 114, a queue management system that is the same or similar to the queue management system 116, and/or a V2I system that is the same or similar to the V2I system 118, etc.), the perception system 402 provides data as input to the first convolution layer 422, the second convolution layer 424, and the convolution layer 426. The following detailed description of the convolution operation is included with respect to fig. 4C.
In some embodiments, the perception system 402 provides data associated with an input (referred to as an initial input) to a first convolution layer 422, and the perception system 402 generates data associated with an output using the first convolution layer 422. In some embodiments, the perception system 402 provides as input the output generated by the convolutional layers to the different convolutional layers. For example, the perception system 402 provides the output of the first convolution layer 422 as an input to the sub-sampling layer 428, the second convolution layer 424, and/or the convolution layer 426. In such examples, the first convolution layer 422 is referred to as an upstream layer and the sub-sampling layer 428, the second convolution layer 424, and/or the convolution layer 426 are referred to as downstream layers. Similarly, in some embodiments, the perception system 402 provides the output of the sub-sampling layer 428 to the second convolution layer 424 and/or the convolution layer 426, and in this example, the sub-sampling layer 428 will be referred to as an upstream layer and the second convolution layer 424 and/or the convolution layer 426 will be referred to as a downstream layer.
In some embodiments, the perception system 402 processes data associated with the input provided to the CNN 420 before the perception system 402 provides the input to the CNN 420. For example, based on the sensor data (e.g., image data, LiDAR data, radar data, etc.) being normalized by the perception system 402, the perception system 402 processes the data associated with the input provided to the CNN 420.
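As a brief illustration of this preprocessing step, the sketch below normalizes an image to the [0, 1] range before it is provided to the CNN. The exact normalization used by the perception system 402 is not specified here; min-max scaling is only an assumption.

```python
import numpy as np

def normalize_image(image: np.ndarray) -> np.ndarray:
    """Min-max scale pixel values to [0, 1] (assumed preprocessing step)."""
    image = image.astype(np.float32)
    lo, hi = image.min(), image.max()
    if hi == lo:
        return np.zeros_like(image)
    return (image - lo) / (hi - lo)
```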
In some embodiments, CNN 420 generates an output based on the perception system 402 performing convolution operations associated with the respective convolution layers. In some examples, CNN 420 generates an output based on the perception system 402 performing the convolution operations associated with the various convolution layers and the initial input. In some embodiments, the perception system 402 generates an output and provides the output to the fully connected layer 430. In some examples, the perception system 402 provides the output of the convolution layer 426 to the fully connected layer 430, where the fully connected layer 430 includes data associated with a plurality of feature values referred to as F1, F2, ..., FN. In this example, the output of convolution layer 426 includes data associated with a plurality of output feature values representing predictions.
In some embodiments, based on the perception system 402 identifying the feature value associated with the highest likelihood as the correct prediction of the plurality of predictions, the perception system 402 identifies the prediction from the plurality of predictions. For example, where the fully connected layer 430 includes feature values F1, F2, ..., FN, and F1 is the largest feature value, the perception system 402 identifies the prediction associated with F1 as the correct prediction of the plurality of predictions. In some embodiments, the perception system 402 trains CNN 420 to generate predictions. In some examples, based on the perception system 402 providing training data associated with the predictions to CNN 420, the perception system 402 trains CNN 420 to generate the predictions.
Referring now to figs. 4C and 4D, diagrams illustrating example operation of CNN 440 by the perception system 402 are illustrated. In some embodiments, CNN 440 (e.g., one or more components of CNN 440) is the same as or similar to CNN 420 (e.g., one or more components of CNN 420) (see fig. 4B).
At step 450, the perception system 402 provides data associated with an image as input to CNN 440. For example, as illustrated, the perception system 402 provides data associated with an image to CNN 440, where the image is a grayscale image represented as values stored in a two-dimensional (2D) array. In some embodiments, the data associated with the image may include data associated with a color image represented as values stored in a three-dimensional (3D) array. Additionally or alternatively, the data associated with the image may include data associated with an infrared image and/or a radar image, or the like.
At step 455, CNN 440 performs a first convolution function. For example, based on CNN 440 providing values representing the image as input to one or more neurons (not explicitly illustrated) included in first convolution layer 442, CNN 440 performs the first convolution function. In this example, the values representing the image may correspond to the values of a region of the image (sometimes referred to as a receptive field). In some embodiments, each neuron is associated with a filter (not explicitly illustrated). The filter (sometimes referred to as a kernel) may be represented as an array of values corresponding in size to the values provided as input to the neuron. In one example, the filter may be configured to identify edges (e.g., horizontal lines, vertical lines, and/or straight lines, etc.). In successive convolution layers, the filters associated with neurons may be configured to identify successively more complex patterns (e.g., arcs and/or objects, etc.).
In some embodiments, CNN 440 performs the first convolution function based on CNN 440 multiplying the values provided as input to each of the one or more neurons included in first convolution layer 442 with the values of the filter corresponding to each of the one or more neurons. For example, CNN 440 may multiply the values provided as input to each of the one or more neurons included in first convolution layer 442 by the values of the filter corresponding to each of the one or more neurons to generate a single value or an array of values as an output. In some embodiments, the collective outputs of the neurons of the first convolution layer 442 are referred to as a convolution output. In some embodiments, where the individual neurons have the same filter, the convolution output is referred to as a feature map.
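For purposes of illustration only, the following NumPy sketch shows the multiply-and-sum performed over each receptive field by a convolution function such as the one described above. The filter values, unit stride, and absence of padding are illustrative assumptions.

```python
import numpy as np

def conv2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide the filter over the image; each output value is the sum of the
    element-wise products over one receptive field (no padding, stride 1)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=np.float32)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Example: a vertical-edge filter, of the kind an early convolution layer might use.
edge_kernel = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]], dtype=np.float32)
```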
In some embodiments, CNN 440 provides the output of each neuron of first convolutional layer 442 to neurons of a downstream layer. For clarity, the upstream layer may be a layer that transfers data to a different layer (referred to as a downstream layer). For example, CNN 440 may provide the output of each neuron of first convolutional layer 442 to a corresponding neuron of the sub-sampling layer. In an example, CNN 440 provides the output of each neuron of first convolutional layer 442 to a corresponding neuron of first sub-sampling layer 444. In some embodiments, CNN 440 adds bias values to the set of all values provided to the various neurons of the downstream layer. For example, CNN 440 adds bias values to the set of all values provided to the individual neurons of first sub-sampling layer 444. In such an example, CNN 440 determines the final value to be provided to each neuron of first sub-sampling layer 444 based on the set of all values provided to each neuron and the activation function associated with each neuron of first sub-sampling layer 444.
At step 460, CNN 440 performs a first sub-sampling function. For example, CNN 440 may perform the first sub-sampling function based on CNN 440 providing the values output by the first convolution layer 442 to corresponding neurons of the first sub-sampling layer 444. In some embodiments, CNN 440 performs the first sub-sampling function based on an aggregation function. In an example, CNN 440 performs the first sub-sampling function based on CNN 440 determining the largest input among the values provided to a given neuron (referred to as a max-pooling function). In another example, CNN 440 performs the first sub-sampling function based on CNN 440 determining the average input among the values provided to a given neuron (referred to as an average-pooling function). In some embodiments, based on CNN 440 providing values to the various neurons of the first sub-sampling layer 444, CNN 440 generates an output, sometimes referred to as a sub-sampled convolution output.
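The max-pooling variant of the sub-sampling function can be sketched as follows; the 2x2 pooling window is an assumed size.

```python
import numpy as np

def max_pool2d(feature_map: np.ndarray, size: int = 2) -> np.ndarray:
    """Keep the largest value in each size x size window (max-pooling)."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size          # drop ragged edges for simplicity
    blocks = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))
```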
At step 465, CNN 440 performs a second convolution function. In some embodiments, CNN 440 performs the second convolution function in a manner similar to how CNN 440 performs the first convolution function described above. In some embodiments, CNN 440 performs the second convolution function based on CNN 440 providing the values output by the first sub-sampling layer 444 as input to one or more neurons (not explicitly illustrated) included in second convolution layer 446. In some embodiments, as described above, each neuron of the second convolution layer 446 is associated with a filter. As described above, the filter(s) associated with the second convolution layer 446 may be configured to identify more complex patterns than the filter associated with the first convolution layer 442.
In some embodiments, CNN 440 performs a second convolution function based on CNN 440 multiplying the value provided as input to each of the one or more neurons included in second convolution layer 446 with the value of the filter corresponding to each of the one or more neurons. For example, CNN 440 may multiply the value provided as an input to each of the one or more neurons included in second convolution layer 446 with the value of the filter corresponding to each of the one or more neurons to generate a single value or array of values as an output.
In some embodiments, CNN 440 provides the output of each neuron of second convolution layer 446 to neurons of a downstream layer. For example, CNN 440 may provide the output of each neuron of second convolution layer 446 to a corresponding neuron of a sub-sampling layer. In an example, CNN 440 provides the output of each neuron of second convolution layer 446 to a corresponding neuron of second sub-sampling layer 448. In some embodiments, CNN 440 adds bias values to the set of all values provided to the various neurons of the downstream layer. For example, CNN 440 adds bias values to the set of all values provided to the individual neurons of second sub-sampling layer 448. In such an example, CNN 440 determines the final value provided to each neuron of second sub-sampling layer 448 based on the set of all values provided to each neuron and the activation function associated with each neuron of second sub-sampling layer 448.
At step 470, CNN 440 performs a second sub-sampling function. For example, CNN 440 may perform the second sub-sampling function based on CNN 440 providing the values output by second convolution layer 446 to corresponding neurons of second sub-sampling layer 448. In some embodiments, CNN 440 performs the second sub-sampling function based on CNN 440 using an aggregation function. In an example, as described above, CNN 440 performs the second sub-sampling function based on CNN 440 determining the maximum or average input among the values provided to a given neuron. In some embodiments, CNN 440 generates an output based on CNN 440 providing values to the individual neurons of second sub-sampling layer 448.
At step 475, CNN 440 provides the output of each neuron of second sub-sampling layer 448 to fully connected layer 449. For example, CNN 440 provides the output of each neuron of second sub-sampling layer 448 to fully connected layer 449 such that fully connected layer 449 generates an output. In some embodiments, the fully connected layer 449 is configured to generate an output associated with a prediction (sometimes referred to as a classification). The prediction may include an indication that the image provided as input to CNN 440 includes an object and/or a set of objects, etc. In some embodiments, the perception system 402 performs one or more operations and/or provides data associated with the prediction to the different systems described herein.
Inference system
The autonomous vehicle plans a vehicle trajectory and executes control instructions to operate the autonomous vehicle to follow the vehicle trajectory. For example, the autonomous vehicle may determine whether the vehicle trajectory complies with vehicle trajectory criteria (e.g., a set of conditions for safety, comfort, and/or legal requirements (such as speed limits, etc.) for operation of the autonomous vehicle, etc.).
FIG. 5A is a block diagram illustrating an example environment 500A in which an inference system 502 infers vehicle trajectory criteria. The environment 500A includes a remote AV system 114 and an autonomous vehicle (such as the vehicle 102a, etc.) in communication. In particular, the perception system 402 tracks (e.g., monitors and obtains) track data, stores the track data, and provides the track data to the remote AV system 114. The remote AV system 114 stores the track data in the track data structure 506. Inference system 502 retrieves specific trajectory data and infers vehicle trajectory criteria. Inference system 502 may then store the vehicle trajectory criteria in rules data structure 504 and/or provide the vehicle trajectory criteria to planning system 404 of vehicle 102a. The planning system 404 may use vehicle trajectory criteria to select a vehicle trajectory to plan and navigate the vehicle 102a.
In some cases, the perception system 402 may track (e.g., monitor and obtain) trajectory data as the autonomous vehicle 102 maneuvers in the environment. For example, the perception system 402 may track trajectory data continuously (e.g., for each unit of time or distance traveled, for each trip between places, and for each autonomous vehicle action, etc.). In some cases, the trajectory data may include vehicle state over time and/or environmental relationships over time. In some cases, the vehicle state over time may include location, velocity or speed, acceleration, jerk (change in acceleration over time), heading (e.g., orientation), etc., indexed to a timestamp or location. The environmental relationships over time may be distances to other objects (e.g., other vehicles and pedestrians, etc.) or distances to environmental constraints (e.g., lane edges or curbs, stop signs, and stop lights, etc.), indexed to a timestamp or location. In this manner, trajectory data may be tracked and reported to inform the inference system 502.
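One possible in-memory representation of such trajectory data, with vehicle state and environmental relationships indexed to timestamps and segments labeled by autonomous vehicle action, is sketched below. The field names and structure are illustrative assumptions and not the data model of the systems described herein.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class VehicleState:
    timestamp: float
    position: Tuple[float, float]  # (x, y) in map coordinates
    speed: float                   # m/s
    acceleration: float            # m/s^2
    jerk: float                    # m/s^3, change in acceleration over time
    heading_deg: float             # orientation

@dataclass
class EnvironmentRelation:
    timestamp: float
    distances_to_objects: Dict[str, float]      # e.g., {"pedestrian_3": 12.4}
    distances_to_constraints: Dict[str, float]  # e.g., {"lane_edge": 0.6}

@dataclass
class TrajectorySegment:
    action_label: str                      # e.g., "lane_change"
    annotation: Optional[str] = None       # e.g., "acceptable" / "uncomfortable"
    states: List[VehicleState] = field(default_factory=list)
    relations: List[EnvironmentRelation] = field(default_factory=list)
```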
In some cases, the perception system 402 may associate trajectory data with a particular autonomous vehicle action. For example, the perception system 402 may associate trajectory data for a trip with actions taken during the trip. In some cases, the perception system 402 may segment trajectory data for a trip (or some sub-portion thereof) and label the segments according to the autonomous vehicle actions taken during the trip (or sub-portion of the trip). As an example, planning system 404 may determine a particular autonomous vehicle action (e.g., maintaining a route, changing lanes, turning right/left onto a different street, stopping at a stop sign, stopping at a stop light, and yielding to pedestrians, etc.), and the perception system 402 may tag the corresponding trajectory data with the autonomous vehicle action taken. In this way, the trajectory data may catalog different autonomous vehicle actions and correlate those actions with the trajectory data generated by taking the autonomous vehicle actions.
In some cases, the perception system 402 may obtain the vehicle state over time from the positioning system 406. In some cases, the perception system 402 may obtain the environmental relationships over time based on output from sensors of the perception system 402 (e.g., camera 202a, LiDAR sensor 202b, and/or radar sensor 202c) and/or based on the vehicle state relative to a map of the environment.
In some cases, the perception system 402 may receive user input (referred to herein as "annotations") representing aspects of a particular segment or vehicle action. For example, the user input may represent whether the segment or vehicle action is acceptable or unacceptable, comfortable, dangerous to feel, etc. (typically the "user experience" of the segment or vehicle action). As an example, the perception system 402 may be connected (e.g., via wired or wireless communication) to a user device, and the user device may provide a user interface for a user to enter annotations related to segmentation or vehicle actions. In some cases, the user device may be part of the autonomous vehicle 102a or a personal computing device of the user (e.g., a mobile device, a laptop, a computer, etc.). The perception system 402 may then tag the segment or vehicle action with annotations. In some cases, the annotation may be part of the track data or separate from the track data. In this way, the perception system 402 may obtain and store a user experience of the segment and/or vehicle action.
In some cases, the perception system 402 may store the (segmented and/or annotated) trajectory data and report it to the remote AV system 114. For example, the perception system 402 may report trajectory data continuously (e.g., per set time period or distance traveled), for each trip between locations, for each autonomous vehicle action, and so on. In this manner, the perception system 402 may offload trajectory data and conserve the memory configured on the autonomous vehicle 102a.
In some cases, the remote AV system 114 stores the track data in the track data structure 506. For example, upon receiving track data from an autonomous vehicle including autonomous vehicle 102a, the remote AV system 114 may store the track data in the track data structure 506. The remote AV system 114 may store the track data as records in the track data structure 506. For example, the track data structure 506 may be a relational data structure (e.g., a database of records) or a non-relational data structure (e.g., a data lake of records). In some cases, each record may include the reported trajectory data. In some cases, each record may include trajectory data for a particular autonomous vehicle. In some cases, each record may include trajectory data for a segment or vehicle action.
In some cases, the remote AV system 114 stores the track data from the simulation in the track data structure 506. In some such cases, the trajectory data may include simulated vehicle states over time and/or simulated environment relationships over time for a particular simulation of an autonomous vehicle in the simulated environment. For example, the remote AV system 114 (or simulation system) may perform a simulation of vehicle motion in a simulation environment to test and/or confirm any changes in the software and/or hardware of the autonomous vehicle. In this case, the remote AV system 114 may obtain the track data from the simulation (e.g., from the simulation system) and store the track data in the track data structure 506. The remote AV system 114 can tag segments of track data according to the simulated vehicle action (e.g., based on a timestamp and/or location in the simulation). In some cases, the trajectory data from the simulation may be stored with a tag that indicates that the trajectory data is from the simulation.
In some cases, the remote AV system 114 (or a simulation system) can determine whether the trajectory data meets at least one safety threshold. The at least one safety threshold may include a distance threshold to a simulated object or the simulated environment (e.g., a minimum distance relative to the simulated object, etc.), a collision threshold with a simulated object or the simulated environment (e.g., whether a collision will occur or is likely to occur, etc.), and so forth. When the location of the simulated vehicle is within the at least one distance threshold of any simulated object or the simulated environment, the remote AV system 114 may determine that the at least one safety threshold is not met. If the remote AV system 114 determines that the trajectory data meets all of the at least one safety threshold, the remote AV system 114 may determine an annotation for the trajectory data that indicates that the trajectory data is acceptable (e.g., not dangerous). If the remote AV system 114 determines that the trajectory data fails to meet any one of the at least one safety threshold, the remote AV system 114 may determine an annotation for the trajectory data that indicates that the trajectory data is not acceptable (e.g., dangerous).
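A simplified sketch of this safety check is shown below: the trace is annotated as unacceptable if a collision is detected or if any recorded distance falls below a distance threshold. The threshold value and field names are assumptions for illustration.

```python
from typing import List

def annotate_simulated_trace(min_distances_m: List[float],
                             collision_detected: bool,
                             distance_threshold_m: float = 2.0) -> str:
    """Return an annotation for simulated trajectory data based on safety thresholds."""
    if collision_detected or any(d < distance_threshold_m for d in min_distances_m):
        return "unacceptable"  # e.g., dangerous
    return "acceptable"        # e.g., not dangerous
```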
In some cases, the inference system 502 retrieves specific trajectory data and infers vehicle trajectory criteria. For example, the inference system 502 can receive instructions to determine vehicle trajectory criteria for an autonomous vehicle action (e.g., maintaining a route, changing lanes, turning right/left onto a different street, stopping at a stop sign, stopping at a stop light, and yielding to pedestrians, etc.). As an example, the inference system 502 can receive the instructions from a user device associated with a user (e.g., an engineer or administrator) of the inference system 502. The inference system 502 may then retrieve specific trajectory data from the trajectory data structure 506 according to the instructions. For example, the inference system 502 can determine whether any records contain trajectory data for the autonomous vehicle action and retrieve the records containing trajectory data for the autonomous vehicle action. To determine that a record contains trajectory data for the autonomous vehicle action, the inference system 502 can determine whether the record has a labeled segment (e.g., at least one segment) that matches the autonomous vehicle action. If the record has a matching labeled segment, the inference system 502 can return the matching record. If the record does not have a matching labeled segment, the inference system 502 may not return the record.
In some cases, the inference system 502 can determine whether the retrieved records meet a generation threshold. The generation threshold may be a preset number of records required in order to generate a decision tree. For example, the preset number may be 100 records, 1000 records, and the like. If the inference system 502 determines that the retrieved records meet the generation threshold, the inference system 502 may continue to generate a decision tree. If the inference system 502 determines that the retrieved records do not meet the generation threshold, the inference system 502 can refrain from generating a decision tree and notify the user device (e.g., the user) that the track data structure 506 has fewer records than the generation threshold.
In some cases, the retrieved record may be a training dataset associated with the autonomous vehicle action. In this case, the training data set may include trajectory data for multiple examples of autonomous vehicle actions and labels for each of the multiple examples. For example, each segment in the retrieved record corresponding to an autonomous vehicle action may be an example, and the label of the segment may be the label of the example. In this way, the retrieved records may form an example training dataset.
In some cases, to infer the vehicle trajectory criteria, the inference system 502 may generate a decision tree based on the trajectory data and labels of the training data set, and determine the vehicle trajectory criteria based on the decision tree. For example, the inference system 502 can use a condition (e.g., a formula) at each node to split the traces of the training data. Each trace may correspond to a particular segment of the training data, and each trace may have a label. The inference system 502 can split the traces by selecting a first set of traces based on the condition and associating the first set with a first child node from the root node. The inference system 502 can associate a second set of traces that do not satisfy the condition with a second child node from the root node. The inference system 502 may evaluate a condition related to the values of a parameter of a trace. Parameters of a trace may include components of the vehicle state over time and/or the environmental relationships over time, such as the vehicle trajectory (e.g., the shape of location versus time), velocity, acceleration, jerk, and distance to objects, among others. The values of a parameter may be the data from the trajectory data corresponding to the parameter (e.g., values of components of the vehicle state over time and/or the environmental relationships over time).
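A minimal sketch of this node split is shown below, with a condition represented as a callable over a trace; the trace representation (a dictionary of parameter value lists) is an assumption for illustration.

```python
from typing import Callable, Dict, List, Tuple

Trace = Dict[str, List[float]]  # parameter name -> values over time (assumed representation)

def split_traces(traces: List[Trace],
                 condition: Callable[[Trace], bool]) -> Tuple[List[Trace], List[Trace]]:
    """Traces that satisfy the condition go to the first child node; the rest go to the second."""
    satisfied = [t for t in traces if condition(t)]
    not_satisfied = [t for t in traces if not condition(t)]
    return satisfied, not_satisfied

# Example condition on one parameter of the trace: speed stays below 15 m/s.
speed_condition = lambda trace: all(v < 15.0 for v in trace["speed"])
```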
In some cases, each condition may include at least one condition operator and at least one variable to evaluate one or more parameters. Thus, conditions may be associated with a set of parameters. For example, for each parameter or combination of parameters, a first condition may be associated with a location, a second condition may be associated with a velocity, a third condition may be associated with an acceleration, and a fourth condition may be associated with a heading, and so on.
In some cases, the at least one condition operator may be one or a combination of "greater than," "greater than or equal to," "less than," "less than or equal to," "and," "or," "nand," "nor," etc. The at least one variable may be a threshold and/or a sampling range. The threshold is a real number against which a parameter is evaluated according to the at least one condition operator. The sampling range may be a range representing a portion of the trace (e.g., a time or distance from a beginning to an end, or points in between). Thus, each condition may be an arbitrarily complex condition that evaluates one or more parameters of a trace to determine whether the condition is true for a particular trace. In some cases, a condition may be limited to only one parameter (e.g., to evaluate a single parameter of the trace, which is referred to as a threshold condition or a range condition).
In some cases, the conditions may be of different types. For example, a first type of condition may determine whether any value of a parameter within a sampling range causes the condition to evaluate to true (or not true). The second type of condition may determine whether all values of the parameter within the sampling range are such that the condition evaluates to true (or not).
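The two condition types can be sketched as a single threshold/range condition with an "any" or "all" quantifier over the sampling range, as shown below. The representation and the operator set are illustrative assumptions.

```python
import operator
from dataclasses import dataclass
from typing import Dict, List

OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt, "<=": operator.le}

@dataclass
class RangeCondition:
    parameter: str           # e.g., "speed" or "distance_to_object"
    op: str                  # key into OPS
    threshold: float         # real number the parameter is evaluated against
    start: int               # sampling range start (index into the trace)
    end: int                 # sampling range end
    quantifier: str = "all"  # "all": every sample must satisfy; "any": at least one sample

    def evaluate(self, trace: Dict[str, List[float]]) -> bool:
        values = trace[self.parameter][self.start:self.end]
        checks = [OPS[self.op](v, self.threshold) for v in values]
        return all(checks) if self.quantifier == "all" else any(checks)
```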
In some cases, the inference system 502 can adjust the at least one variable and/or the at least one operator of a condition. For example, the inference system 502 can adjust the at least one variable and/or the at least one operator of a condition based on the set of traces associated with a node. In some cases, the inference system 502 can adjust the at least one variable by adjusting the threshold and/or the sampling range (e.g., one or both of the endpoints of the range, including or excluding the endpoints). In some cases, the inference system 502 can adjust the at least one operator to a different condition operator (e.g., adjust "greater than" to "less than," etc.). In some cases, the inference system 502 can adjust the at least one variable and/or the at least one operator based on the set of traces associated with a node to maximize the splitting of the labels of the example set for the condition. As an example, for a condition, the inference system 502 can generate multiple sets of variables and operators for the condition (e.g., generate multiple particular conditions for the one or more parameters associated with the condition), determine a score (e.g., an impurity reduction metric) for each set of variables and operators, and select the highest (or lowest) scoring set of variables and operators as the adjusted at least one variable and/or the adjusted at least one operator for the condition.
In some cases, the decision tree includes at least two levels. In some cases, the decision tree includes two levels, three levels, four levels, and so on. At least one condition may be associated with a node at each level (e.g., each node of a level may be associated with a condition). Inference system 502 can categorize the trajectory data into different groups at each level according to at least one condition at each level and the trace associated with each node. Inference system 502 can associate different groups with child nodes of each node at the next level.
In general, each successive node (e.g., the child nodes of a node (referred to as a parent node)) may correspond to a level of the decision tree. In some cases, the root node may be in a first level of the decision tree, and the child nodes of the root node may be in a second level of the decision tree along with any nodes of the same generation (e.g., nodes in the same level that may share the same parent node, or that have different parent nodes but share an ancestor node (e.g., a grandparent node, etc.)).
In some cases, inference system 502 can use predetermined conditions at each node to generate a decision tree. In some cases, inference system 502 can generate a decision tree for a predetermined number of nodes (e.g., root node, first child node, second child node, etc.). In some cases, inference system 502 can generate a decision tree for a predetermined number of nodes using predetermined conditions at each node. For example, the predetermined condition may be defined by the user (e.g., via user input on the user device) based on domain knowledge of the vehicle action being represented by the decision tree. The predetermined number of nodes may be defined by the user (e.g., via user input on the user device) to ensure that the decision tree is shallow (e.g., has no more than the predetermined number of nodes).
In some cases, the inference system 502 can select the condition for a node by ranking a plurality of conditions using an impurity reduction metric. The impurity reduction metric may quantify the quality of the split for that condition. The inference system 502 can then select the highest-ranked condition as the condition for the node. In successive child nodes, the inference system 502 can exclude any previously used conditions from the ranking, or modify them so as not to have the same at least one variable and/or the same at least one operator. For example, the impurity reduction metric may be an information gain measure, a Gini gain measure, a misclassification gain measure, and the like. In general, a good split results in sets of traces that are "pure" (i.e., contain objects of the same class).
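For purposes of illustration, the sketch below computes one such impurity reduction metric, a Gini gain, for a candidate split; binary labels (1 for positive examples, 0 for negative examples) are an assumption.

```python
from typing import List

def gini(labels: List[int]) -> float:
    """Gini impurity of a set of binary labels (0 = pure set)."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 1.0 - p * p - (1.0 - p) * (1.0 - p)

def gini_gain(parent: List[int], left: List[int], right: List[int]) -> float:
    """Impurity reduction achieved by splitting the parent set into two child sets."""
    n = len(parent)
    weighted_children = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
    return gini(parent) - weighted_children
```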
In some cases, the inference system 502 may recursively split successive child nodes until a continuation condition is not met. The continuation condition may indicate that a set of traces is not "pure" (i.e., contains a mix of objects of different classes). For example, the objects may be traces and the classes may be the labels of the traces. In some cases, the inference system 502 may determine that the continuation condition is satisfied when the traces in a group include traces labeled with a first label (e.g., positive examples of the autonomous vehicle action, such as acceptable, comfortable, and not dangerous, etc.) and traces labeled with a second label (e.g., negative examples of the autonomous vehicle action, such as not acceptable, uncomfortable, and dangerous, etc.). In some cases, the inference system 502 may determine that the continuation condition is not satisfied when the traces of a group include only traces (or a threshold percentage of traces) labeled with the first label. In response to the continuation condition not being satisfied, the inference system 502 may determine that the decision tree is complete. In some cases, the inference system 502 can use boosted decision tree generation to generate decision trees.
In some cases, inference system 502 may determine the sequence of connected nodes as the positive path of the decision tree. The sequence of connected nodes may start at the root node and follow branches to child nodes that group traces evaluated as true for the condition of the node. In this way, inference system 502 can use conditions related to the values of parameters to split training data for autonomous vehicle actions, grouping subsets of traces in a manner that increases the concentration of similar tags (e.g., first tags) at each successive node.
In some cases, inference system 502 can determine vehicle trajectory criteria by traversing a complete decision tree. For example, inference system 502 can traverse the positive path of the decision tree and retrieve conditions for nodes along the positive path. In some cases, the vehicle trajectory criteria may be constructed using Signal Temporal Logic (STL) with conditions retrieved from nodes along the path. Thus, the vehicle trajectory criteria may define logic that is sufficiently expressive for autonomous vehicle applications, but simple enough to make learning computationally feasible.
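The sketch below illustrates this traversal: the conditions along the positive path are collected and conjoined so that a trajectory meets the vehicle trajectory criteria only if every condition holds. The node structure and the use of plain callables in place of STL formulas are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

Trace = Dict[str, List[float]]
Condition = Callable[[Trace], bool]  # evaluates one condition over a trace (assumed form)

@dataclass
class Node:
    condition: Optional[Condition]       # None at a leaf
    true_child: Optional["Node"] = None  # child holding traces that satisfy the condition
    false_child: Optional["Node"] = None

def vehicle_trajectory_criteria(root: Node) -> Condition:
    """Walk the positive path from the root and conjoin the node conditions."""
    conditions: List[Condition] = []
    node = root
    while node is not None and node.condition is not None:
        conditions.append(node.condition)
        node = node.true_child
    return lambda trace: all(cond(trace) for cond in conditions)
```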
FIG. 5B is a diagram illustrating an example decision tree 500B used to infer vehicle trajectory criteria. Decision tree 500B begins at root node 510 associated with training dataset 508. The training data set 508 includes multiple examples of trajectory data, including a positive example 508A of trajectory data (represented by a corresponding label) and a negative example 508B of trajectory data (represented by a corresponding label). Inference system 502 can then generate decision tree 500B, as discussed herein.
For example, the inference system 502 can determine a first condition at the root node 510, from a set of conditions, that satisfies a branching condition. Where the first condition is predetermined, the inference system 502 can adjust at least one condition operator and/or at least one variable of the first condition to increase the impurity reduction measure for the condition. In this case, the branching condition may be a threshold impurity reduction measure. In this manner, the inference system 502 can leverage domain knowledge input by a user to reduce computation and/or reduce computation time. As an example, if the autonomous vehicle action is stopping, the first condition may be predetermined by the user to evaluate deceleration (e.g., negative acceleration) or separation distance to an object (e.g., a stop sign or other object).
Where the first condition is not predetermined, the inference system 502 can rank the set of conditions according to the respective impurity reduction measures of the set of conditions. In this case, the inference system 502 may select the highest-ranking condition, and when the first condition has the highest impurity reduction measure, the inference system may determine that the branching condition is satisfied. In some cases, the inference system 502 can adjust the conditions to increase their respective impurity reduction measures prior to ranking the set of conditions. In this way, without domain knowledge, the inference system 502 can consider all combinations of parameters (e.g., alone or in different sets of combinations) to efficiently (e.g., with fewer computations) reduce impurity.
Inference system 502 then branches the decision tree to first node 512. Inference system 502 associates root node 510 with a first condition and associates a first subset of traces in the trace set that satisfy the first condition with first node 512. In some cases, inference system 502 can also associate a remaining subset of traces in the trace set that do not satisfy the first condition with a different first node 514. In some cases, this operation may not be performed (e.g., to reduce computation or memory consumption).
Inference system 502 can then determine whether a continuation condition is met. As discussed herein, the inference system 502 may determine that a continuation condition is satisfied when the set of traces associated with the node includes traces having different tags (e.g., a first tag and a second tag). In some cases, inference system 502 may determine that a continuation condition is not satisfied when the set of traces associated with the node includes traces of the same label (e.g., the first label) or traces of the same label having a threshold percentage.
Then, in response to determining that the continuation condition is satisfied, the inference system 502 may recursively: determine a second condition that satisfies the branching condition, branch the decision tree to the second node 516, associate the first node 512 with the second condition, and associate a second subset of the first subset of traces that satisfies the second condition with the second node 516. In the case of decision tree 500B, since all traces associated with the second node 516 have the same label (e.g., the first label), the continuation condition is considered not satisfied at the second node 516.
In some cases, the inference system 502 may determine the second condition in a similar manner as determining the first condition. In some cases, the second condition may be predetermined, and the inference system 502 may adjust the second condition to increase the impurity reduction measure for the second condition. In some cases, the second condition may not be predetermined, and the inference system 502 may rank the remaining conditions relative to the first subset of traces according to the impurity reduction measures of the remaining conditions (e.g., the set of conditions without the first condition, or with the first condition modified to have different variables and operators). In this way, since the ranking is based on the set of traces associated with a particular node along the positive path, the remaining conditions may have different rankings at each level along the positive path. Thus, the inference system 502 can efficiently (e.g., with reduced computation) determine the final decision tree (e.g., down to the last child node, at which the continuation condition is not satisfied).
In some cases, inference system 502 can also associate remaining subsets of traces in the first subset of traces that do not satisfy the second condition with a different second node 518. In some cases, this operation may not be performed (e.g., to reduce computation or memory consumption).
In some cases, to determine the impurity reduction measure of a condition at a node associated with a set of traces, the inference system 502 can determine a first set of traces that satisfy the condition and a second set of traces that do not satisfy the condition, and determine the impurity reduction measure based on the first set of traces and the second set of traces.
In some cases, to determine the vehicle trajectory criteria 520 based on the decision tree, the inference system 502 determines the vehicle trajectory criteria 520 by traversing the nodes of the decision tree and concatenating the conditions associated with the traversed nodes. For example, in decision tree 500B, the inference system 502 may traverse the positive path connecting the root node 510 to the first node 512 and ultimately to the second node 516. Because the root node 510 is associated with the first condition and the first node 512 is associated with the second condition, the inference system 502 can concatenate the first condition with the second condition to form the vehicle trajectory criteria 520. Since the second node 516 is associated with the continuation condition not being met, the second node 516 may not be associated with a condition. In some cases, the different second node 518 and the different first node 514 may not be associated with a condition because the different second node 518 and the different first node 514 may not be on the positive path of the decision tree. Thus, additional processing to split the respective trace sets at those nodes may be omitted.
Returning to fig. 5A, in some cases, the inference system 502 may then store the vehicle trajectory criteria (e.g., vehicle trajectory criteria 520) in the rule data structure 504 and/or provide the vehicle trajectory criteria to the planning system 404 of an autonomous vehicle (such as vehicle 102a, etc.). The rule data structure 504 may be a relational data structure (e.g., a database) or a non-relational data structure (e.g., a data lake). The rule data structure 504 may store the vehicle trajectory criteria in association with the autonomous vehicle action (e.g., a one-to-one mapping). The rule data structure 504 may store the vehicle trajectory criteria in association with the trajectory data on which those vehicle trajectory criteria are based (e.g., by copying the data or pointing to the data in the track data structure 506). In some cases, the rule data structure 504 and the track data structure 506 may be combined or separate.
In some cases, the autonomous vehicle reporting the trajectory data may be the same as or different from the autonomous vehicle using the vehicle trajectory criteria. For example, some autonomous vehicles (e.g., mapping and/or test vehicles) may report trajectory data with annotations, while some other autonomous vehicles (e.g., taxis and/or delivery vehicles) may use the vehicle trajectory criteria. In some cases, mapping and/or test vehicles may also use the vehicle trajectory criteria. In some cases, taxis and/or delivery vehicles may also report trajectory data with annotations.
In some cases, planning system 404 may use vehicle trajectory criteria to select a vehicle trajectory to plan and navigate vehicle 102a. For example, planning system 404 may receive environmental data associated with an environment of autonomous vehicle 102a and determine a plurality of trajectories of autonomous vehicle actions based on the environmental data.
In some cases, to receive the environmental data, planning system 404 may receive location data and map data from the positioning system 406 and/or object data from the perception system 402. The location data may represent a current location on a map, and the map data may represent nearby (e.g., within a threshold distance) objects, such as map features representing lanes or other environmental objects to avoid, as well as traffic control features (such as traffic direction and speed limits, etc.), and so forth. The object data may represent the location, speed, heading, etc. of other objects sensed by the sensors of the autonomous vehicle (e.g., camera 202a, LiDAR sensor 202b, and/or radar sensor 202c).
In some cases, to determine the plurality of trajectories, planning system 404 may generate the plurality of trajectories (e.g., according to various generation methods, to avoid objects, stay within an operating environment (e.g., a lane), and travel from the starting point to the destination point). For example, planning system 404 may generate the plurality of trajectories based on the location data, the map data, and the object data. In some cases, planning system 404 may determine the autonomous vehicle action as part of generating the plurality of trajectories. The autonomous vehicle action may be one of: maintaining a route (e.g., staying in the current lane), changing lanes, turning right/left onto a different street, stopping at a stop sign, stopping at a stop light, and yielding to pedestrians, etc.
In some cases, planning system 404 may select a trajectory of the plurality of trajectories for the autonomous vehicle based on vehicle trajectory criteria associated with the autonomous vehicle action. As discussed herein, the vehicle trajectory criteria may be generated based on a decision tree, wherein the decision tree is generated from trajectory data of a plurality of example trajectories, and the plurality of example trajectories are associated with autonomous vehicle actions. Planning system 404 may then control autonomous vehicle 102a based on the selected trajectory by, for example, communicating instructions and/or the selected trajectory to control system 408. The control system 408 may then determine instructions (if not provided) based on the selected trajectory and execute the actuation command via the DBW system 202h according to the provided instructions or the determined instructions.
In some cases, planning system 404 may select an initial trajectory from a plurality of trajectories, wherein the initial trajectory meets the vehicle trajectory criteria. For example, planning system 404 may evaluate a plurality of trajectories using the vehicle trajectory criteria to determine whether none, some, or all of the trajectories meet the vehicle trajectory criteria. Of the multiple trajectories that meet the vehicle trajectory criteria, planning system 404 may select one trajectory as the initial trajectory. For example, the planning system 404 may select one of the plurality of trajectories that meets the vehicle trajectory criteria based on meeting other vehicle trajectory criteria, the distance traveled or the time spent traversing the trajectory, etc. (commonly referred to as selection criteria).
In some cases, planning system 404 may select an initial trajectory from a plurality of trajectories by: an initial trajectory is determined (e.g., according to various generation methods) and is determined as not meeting the vehicle trajectory criteria. In this case, planning system 404 may modify the initial trajectory until the initial trajectory meets the vehicle trajectory criteria. In some cases, planning system 404 may randomly cause a change in the initial trajectory (e.g., according to various generation methods) and determine whether the (modified) initial trajectory meets the vehicle trajectory criteria. In some cases, planning system 404 may input values (based on vehicle trajectory criteria) to various generation methods to cause a change in the initial trajectory and determine whether the (modified) initial trajectory meets the vehicle trajectory criteria. For example, some or all of the generation methods may take as input additional constraints to be used in track generation.
In some cases, planning system 404 may select an initial trajectory from a plurality of trajectories by: the initial trajectory is determined by inputting values (based on the vehicle trajectory criteria) into various generation methods to generate the initial trajectory and determining whether the initial trajectory meets the vehicle trajectory criteria. For example, some or all of the generation methods may treat the additional constraint as a fixed constraint, while some or all of the generation methods may treat the additional constraint as a flexible constraint (and thus deviate from the additional constraint). Where the additional constraints are considered a method of generating fixed constraints, the planning system 404 may confirm compliance by confirming that the vehicle trajectory criteria are met. Where the additional constraints are considered a method of generating flexible constraints, planning system 404 may confirm that the generated trajectory meets the vehicle trajectory criteria.
In some cases, the vehicle track criteria may be one of a plurality of vehicle track criteria. In some cases, each of the plurality of vehicle trajectory criteria may be associated with at least one autonomous vehicle action.
In some cases, to select a trajectory of the plurality of trajectories, planning system 404 may determine a set of vehicle trajectory criteria associated with the autonomous vehicle action and select a trajectory from the plurality of trajectories that meets each of the vehicle trajectory criteria of the set of vehicle trajectory criteria. For example, planning system 404 may determine the set of vehicle trajectory criteria by retrieving each vehicle trajectory criterion associated with the particular autonomous vehicle action. To select a trajectory from the plurality of trajectories that meets each of the vehicle trajectory criteria in the set of vehicle trajectory criteria, planning system 404 may evaluate each trajectory using each of the vehicle trajectory criteria and determine which trajectory, if any, meets each of the vehicle trajectory criteria in the set of vehicle trajectory criteria. In some cases, planning system 404 may examine each trajectory in parallel (against each vehicle trajectory criterion) and may remove a trajectory from consideration if the trajectory does not meet a vehicle trajectory criterion. In some cases, planning system 404 may examine each vehicle trajectory criterion in parallel (against each trajectory), and may consider only those trajectories that pass each parallel examination. In this way, planning system 404 may determine whether any of the trajectories meet all of the vehicle trajectory criteria. Among the set of trajectories that meet all of the vehicle trajectory criteria, the planning system 404 may select one trajectory, for example, based on the selection criteria.
In some cases, planning system 404 may store the evaluation results in, for example, a result matrix. The result matrix may associate each result (satisfied or not satisfied) with a trajectory and a vehicle trajectory criterion.
In some cases, the planning system may select a trajectory that meets a maximum number of vehicle trajectory criteria. For example, if none of the trajectories meets all of the vehicle trajectory criteria, planning system 404 may instead select a trajectory that meets the maximum number of vehicle trajectory criteria. In some cases, as discussed herein, planning system 404 may evaluate all trajectories against all vehicle trajectory criteria. In some cases, planning system 404 may retrieve the result matrix. Planning system 404 may then determine, for each trajectory, the number of vehicle trajectory criteria that the trajectory meets. For example, planning system 404 may count the number of entries in the result matrix indicating that the trajectory meets the corresponding vehicle trajectory criterion. Planning system 404 may then select the trajectory that meets the maximum number of criteria. In the event of a tie (e.g., two or more trajectories meet the same maximum number of vehicle trajectory criteria), planning system 404 may select one of the tied trajectories according to a selection criterion.
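The result-matrix bookkeeping and the max-count fallback with a tiebreak could be sketched as follows (hedged: the boolean matrix layout and the tiebreak key are assumptions, not the patent's data structures):

```python
from typing import Callable, Dict, List, Tuple

def build_result_matrix(trajectories: List,
                        criteria: List[Callable]) -> Dict[Tuple[int, int], bool]:
    """Associate each (trajectory, criterion) pair with a satisfied / not-satisfied result."""
    return {(i, j): bool(criterion(trajectory))
            for i, trajectory in enumerate(trajectories)
            for j, criterion in enumerate(criteria)}

def select_max_count(trajectories: List,
                     criteria: List[Callable],
                     tiebreak_key: Callable):
    results = build_result_matrix(trajectories, criteria)
    # Count, per trajectory, how many criteria it satisfies.
    counts = [sum(results[(i, j)] for j in range(len(criteria)))
              for i in range(len(trajectories))]
    best_count = max(counts)
    tied = [trajectory for i, trajectory in enumerate(trajectories)
            if counts[i] == best_count]
    # Break ties with a selection criterion, e.g. an assumed comfort or progress score.
    return min(tied, key=tiebreak_key)
```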
In some cases, planning system 404 may select the trajectory according to a priority of vehicle trajectory criteria. For example, after determining the set of vehicle trajectory criteria, planning system 404 may determine a priority for the set of vehicle trajectory criteria according to a priority policy. The priority policy may prioritize vehicle trajectory criteria associated with safety over vehicle trajectory criteria associated with comfort or with signaling driving intent, etc. Thus, planning system 404 may prioritize a first vehicle trajectory criterion over a second vehicle trajectory criterion by selecting a trajectory that meets the first criterion but not the second, rather than a trajectory that meets the second criterion but not the first. For example, if none of the trajectories meets all of the vehicle trajectory criteria, planning system 404 may first attempt to select a trajectory that meets the higher-priority vehicle trajectory criteria, and select a trajectory that meets only the lower-priority vehicle trajectory criteria if no trajectory meets the higher-priority criteria.
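One way to express such a priority policy is to try tiers of criteria in priority order and relax downward only when a tier cannot be met (a sketch; the three tiers and their ordering are assumptions drawn from the example above):

```python
from typing import Callable, List, Optional

def select_by_priority(trajectories: List,
                       prioritized_tiers: List[List[Callable]]) -> Optional[object]:
    """prioritized_tiers is ordered highest priority first, e.g.
    [safety_criteria, comfort_criteria, intent_signaling_criteria]."""
    for tier in prioritized_tiers:
        candidates = [t for t in trajectories
                      if all(criterion(t) for criterion in tier)]
        if candidates:
            # A trajectory meeting the highest-priority tier wins even if it
            # fails every lower-priority criterion.
            return candidates[0]
    return None   # no tier can be met; the caller may fall back further
```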
In some cases, the planning system 404 (or perception system 402, as discussed herein) may receive annotations from user devices after controlling the autonomous vehicle based on the selected trajectory. The planning system 404 may then report the annotations associated with the selected trajectory to the remote AV system 114 along with the trajectory data. In this way, planning system 404 may provide annotations representing the user experience of the selected trajectory. Thus, the systems and methods of the present disclosure may provide feedback for future adjustment of existing vehicle trajectory criteria associated with autonomous vehicle actions.
Example flow diagrams of the inference system
FIG. 6 is a flow diagram illustrating an example of a routine 600 implemented by one or more processors to infer vehicle trajectory criteria. The flowchart illustrated in fig. 6 is provided for illustrative purposes only. It will be appreciated that one or more of the steps of the routine 600 illustrated in fig. 6 may be removed, or the order of the steps may be changed. Furthermore, for purposes of illustrating clear examples, one or more particular system components are described in the context of performing various operations during various data flow phases. However, other system arrangements and distributions of processing steps across system components may be used.
At block 602, the remote AV system 114 obtains a training data set associated with autonomous vehicle actions. In some cases, the training data set may include trajectory data for multiple examples of autonomous vehicle actions and labels for each of the multiple examples. For example, as discussed herein, the remote AV system 114 may retrieve particular track data from the track data structure 506 according to instructions.
At block 604, the remote AV system 114 inputs the trajectory data and labels of the training data set into the inference system 502. In some cases, inference system 502 may be configured to output vehicle trajectory criteria from a decision tree generated based on trajectory data and labels of a training dataset. For example, as discussed herein, the remote AV system 114 may use the inference system 502 to generate a decision tree by splitting the trace according to conditions, and determine vehicle trajectory criteria by traversing the decision tree. At block 606, the remote AV system 114 receives the vehicle track criteria from the inference system 502.
At block 608, the remote AV system 114 transmits the vehicle trajectory criteria to at least one autonomous vehicle. In some cases, at least one autonomous vehicle may use vehicle trajectory criteria to plan trajectories and/or control actions associated with autonomous vehicle actions. For example, as discussed herein, the remote AV system 114 may store the vehicle track criteria in the rules data structure 504 and transmit the vehicle track criteria to the planning system 404 of the autonomous vehicle.
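A compact sketch of this offboard flow, which splits labeled traces on the condition with the highest impurity reduction (Gini gain here) and reads a vehicle trajectory criterion off the resulting split, might look like the following (illustrative; the feature names, thresholds, and single-level tree are assumptions, not the behavior of inference system 502):

```python
from typing import Dict, List, Tuple

Trace = Dict[str, float]   # e.g. {"min_gap_m": 2.1} summarizing one example trajectory

def gini(labels: List[int]) -> float:
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)              # fraction of positively labeled traces
    return 2.0 * p * (1.0 - p)

def gini_gain(parent: List[int], left: List[int], right: List[int]) -> float:
    n = len(parent)
    weighted = (len(left) / n) * gini(left) + (len(right) / n) * gini(right)
    return gini(parent) - weighted             # impurity reduction of this split

def best_split(traces: List[Trace], labels: List[int]) -> Tuple[str, float, float]:
    """Search candidate conditions of the form 'feature >= threshold' and return
    the one with the highest impurity reduction (the branching condition)."""
    best = ("", 0.0, -1.0)
    for feature in traces[0]:
        for threshold in sorted({trace[feature] for trace in traces}):
            left = [label for trace, label in zip(traces, labels) if trace[feature] >= threshold]
            right = [label for trace, label in zip(traces, labels) if trace[feature] < threshold]
            gain = gini_gain(labels, left, right)
            if gain > best[2]:
                best = (feature, threshold, gain)
    return best

if __name__ == "__main__":
    # Hypothetical lane-change traces labeled good (1) / bad (0) by annotation or simulation.
    traces = [{"min_gap_m": 1.0}, {"min_gap_m": 1.5}, {"min_gap_m": 2.5}, {"min_gap_m": 3.0}]
    labels = [0, 0, 1, 1]
    feature, threshold, gain = best_split(traces, labels)
    criterion = f"{feature} >= {threshold}"    # condition read off the root node
    print("inferred vehicle trajectory criterion:", criterion, f"(Gini gain {gain:.2f})")
    # The criterion would then be stored and transmitted to at least one autonomous vehicle.
```

A deeper tree would repeat the split recursively and concatenate the conditions along a path, as described for the decision tree in the clauses and claims below.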
FIG. 7 is a flow diagram illustrating an example of a routine 700 implemented by one or more processors to select a vehicle trajectory based on vehicle trajectory criteria. The flow diagram shown in fig. 7 is provided for illustrative purposes only. It will be appreciated that one or more of the steps of the routine 700 illustrated in fig. 7 may be removed, or the order of the steps may be changed. Furthermore, for purposes of illustrating clear examples, one or more particular system components are described in the context of performing various operations during various data flow phases. However, other system arrangements and distributions of processing steps across system components may be used.
At block 702, the planning system 404 receives environmental data associated with an environment of an autonomous vehicle. For example, as discussed herein, planning system 404 may receive location data, map data, and object data.
At block 704, the planning system 404 determines a plurality of trajectories of autonomous vehicle actions based on the environmental data. For example, as discussed herein, planning system 404 may generate a plurality of trajectories based on the location data, map data, and object data.
At block 706, the planning system 404 selects a trajectory of the plurality of trajectories for the autonomous vehicle based on vehicle trajectory criteria associated with the autonomous vehicle action. In some cases, the vehicle trajectory criteria is generated based on a decision tree, the decision tree is generated from trajectory data of a plurality of example trajectories, and the plurality of example trajectories are associated with autonomous vehicle actions. For example, as discussed herein, the planning system 404 may select the trajectory according to one or more of a plurality of vehicle trajectory criteria.
At block 708, the planning system 404 controls the autonomous vehicle based on the selected trajectory. For example, as discussed herein, planning system 404 may communicate the selected trajectory and/or instructions based on the selected trajectory to control system 408, and control system 408 may actuate various components of vehicle 102a accordingly.
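Putting routine 700 together, a minimal onboard loop might read as follows (a sketch only; the trajectory generator, the criteria source, and the control hand-off stand in for planning system 404 and control system 408 rather than reproducing their interfaces):

```python
from typing import Callable, List

def routine_700(environment_data: dict,
                generate_trajectories: Callable[[dict], List],
                criteria: List[Callable],
                send_to_control: Callable) -> None:
    # Blocks 702/704: environment data in, candidate trajectories out.
    trajectories = generate_trajectories(environment_data)
    if not trajectories:
        return   # nothing to select; a real planner would handle this case explicitly
    # Block 706: prefer trajectories meeting the decision-tree-derived criteria.
    compliant = [t for t in trajectories
                 if all(criterion(t) for criterion in criteria)]
    selected = compliant[0] if compliant else trajectories[0]   # placeholder fallback
    # Block 708: hand the selected trajectory to the control system for actuation.
    send_to_control(selected)
```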
Example
Various example embodiments of the present disclosure may be described by the following clauses:
clause 1. A method for operating an autonomous vehicle, comprising: obtaining a training data set associated with an autonomous vehicle action, the training data set comprising trajectory data for a plurality of examples of the autonomous vehicle action and labels for each of the plurality of examples; generating a decision tree based on the trajectory data and the labels of the training dataset; determining a vehicle trajectory criterion based on the decision tree; and communicating the vehicle trajectory criteria to at least one autonomous vehicle, wherein the at least one autonomous vehicle uses the vehicle trajectory criteria to select a vehicle trajectory for the at least one autonomous vehicle.
Clause 2. The method of clause 1, wherein obtaining the training data set associated with the autonomous vehicle action comprises: obtaining first trajectory data associated with a first instance of the autonomous vehicle action; obtaining a first tag of the first instance of the autonomous vehicle action; and associating the first tag with the first trajectory data.
Clause 3, the method of clause 2, wherein obtaining the first tag of the first instance of the autonomous vehicle action comprises: an annotation is received from a user device, and wherein the annotation represents a user experience of the first example of the autonomous vehicle action.
Clause 4 the method of clause 2, wherein obtaining the first tag of the first instance of the autonomous vehicle action comprises: an annotation is received from a simulation system, and wherein the simulation system determines the annotation based on the first trajectory data satisfying at least one safety threshold.
Clause 5 the method of any of clauses 1 to 4, wherein generating the decision tree based on the trajectory data and the labels of the training dataset comprises: receiving the trajectory data and the label at a root node of the decision tree, the trajectory data comprising a set of traces each corresponding to a label and an instance; determining a first condition meeting a branch condition from the condition set; branching the decision tree to a first node; associating the root node with the first condition and associating a first subset of traces in the trace set that satisfy the first condition with the first node; determining that the continuation condition is satisfied; and in response to determining that the continuation condition is satisfied, recursively performing the following until the continuation condition is not satisfied: determining a second condition that satisfies the branching condition, branching the decision tree to a second node, and associating the first node with the second condition, and associating a second subset of the first subset of traces that satisfies the second condition with the second node.
Clause 6. The method of clause 5, wherein the branching condition is satisfied where the first condition has the highest impurity reduction measure.
Clause 7. The method of clause 5, wherein determining the first condition from the set of conditions that satisfies the branching condition comprises: determining an impurity reduction measure for each condition in the set of conditions; and selecting the first condition based on the first condition having the highest impurity reduction measure.
Clause 8. The method of clause 7, wherein determining the impurity reduction measure for each condition in the set of conditions comprises: for each condition, determining a first set of traces that meet the condition and a second set of traces that do not meet the condition; and determining the impurity reduction measure based on the first set of traces and the second set of traces.
Clause 9. The method of clause 7, wherein the impurity reduction measure is at least one of an information gain, a Gini gain, and a misclassification gain.
Clause 10. The method of clause 5, wherein the set of conditions comprises different types of conditions, and wherein each condition comprises at least one condition operator and at least one variable.
Clause 11 the method of clause 10, wherein the at least one variable is adjusted based on the set of traces.
Clause 12. The method of clause 5, wherein the continuation condition is not satisfied in the case that a subset of the set of traces that satisfies the second condition contains only positively marked traces.
Clause 13 the method of clause 5, wherein determining the vehicle trajectory criteria based on the decision tree comprises: the vehicle trajectory criteria are determined by traversing nodes of the decision tree and concatenating conditions associated with the traversed nodes.
Clause 14. The method of any of clauses 1-13, wherein the decision tree comprises at least two levels, and at least one condition at each of the at least two levels.
Clause 15 the method of clause 14, wherein each of the at least two hierarchies classifies the trajectory data into different groups according to the at least one condition at each hierarchy.
Clause 16. The method of any of clauses 1-15, wherein the at least one autonomous vehicle uses the vehicle trajectory criteria to select the vehicle trajectory for the at least one autonomous vehicle by selecting an initial trajectory from a plurality of trajectories, wherein the initial trajectory meets the vehicle trajectory criteria.
Clause 17. The method of any of clauses 1-16, wherein the at least one autonomous vehicle uses the vehicle trajectory criteria to select the vehicle trajectory for the at least one autonomous vehicle by: determining an initial trajectory; determining that the initial trajectory does not meet the vehicle trajectory criteria; and modifying the initial trajectory until the initial trajectory meets the vehicle trajectory criteria.
Clause 18, a system for operating an autonomous vehicle, comprising: at least one processor, and at least one non-transitory storage medium storing instructions that, when executed by the at least one processor, cause the at least one processor to: obtaining a training data set associated with an autonomous vehicle action, the training data set comprising trajectory data for a plurality of examples of the autonomous vehicle action and labels for each of the plurality of examples; generating a decision tree based on the trajectory data and the labels of the training dataset; determining a vehicle trajectory criterion based on the decision tree; and communicating the vehicle trajectory criteria to at least one autonomous vehicle, wherein the at least one autonomous vehicle uses the vehicle trajectory criteria to select a vehicle trajectory for the at least one autonomous vehicle.
Clause 19 the system of clause 18, wherein generating the decision tree based on the trajectory data and the labels of the training dataset comprises: receiving the trajectory data and the label at a root node of the decision tree, the trajectory data comprising a set of traces each corresponding to a label and an instance; determining a first condition meeting a branch condition from the condition set; branching the decision tree to a first node; associating the root node with the first condition and associating a first subset of traces in the trace set that satisfy the first condition with the first node; determining that the continuation condition is satisfied; and in response to determining that the continuation condition is satisfied, recursively performing the following until the continuation condition is not satisfied: determining a second condition that satisfies the branching condition, branching the decision tree to a second node, and associating the first node with the second condition, and associating a second subset of the first subset of traces that satisfies the second condition with the second node.
Clause 20, at least one non-transitory storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to: obtaining a training data set associated with an autonomous vehicle action, the training data set comprising trajectory data for a plurality of examples of the autonomous vehicle action and labels for each of the plurality of examples; generating a decision tree based on the trajectory data and the labels of the training dataset; determining a vehicle trajectory criterion based on the decision tree; and communicating the vehicle trajectory criteria to at least one autonomous vehicle, wherein the at least one autonomous vehicle uses the vehicle trajectory criteria to select a vehicle trajectory for the at least one autonomous vehicle.
Clause 21, a method comprising: receiving environmental data associated with an environment of the autonomous vehicle; determining a plurality of trajectories of autonomous vehicle actions based on the environmental data; selecting a track of the plurality of tracks for the autonomous vehicle based on a vehicle track criteria associated with the autonomous vehicle action, wherein the vehicle track criteria is generated based on a decision tree, wherein the decision tree is generated from track data of a plurality of example tracks, wherein the plurality of example tracks are associated with the autonomous vehicle action; and controlling the autonomous vehicle based on the selected trajectory.
Clause 22 the method of clause 21, wherein selecting a track of the plurality of tracks for the autonomous vehicle based on the vehicle track criteria associated with autonomous vehicle action comprises: the track is selected from the plurality of tracks, wherein the track meets the vehicle track criteria.
Clause 23 the method of clause 21, wherein selecting a track of the plurality of tracks for the autonomous vehicle based on the vehicle track criteria associated with autonomous vehicle action comprises: selecting the track from a plurality of tracks; determining that the trajectory does not meet the vehicle trajectory criteria; and modifying the trajectory until the trajectory meets the vehicle trajectory criteria.
Clause 24 the method of clause 21, wherein the vehicle trajectory criteria is one of a plurality of vehicle trajectory criteria, wherein each of the plurality of vehicle trajectory criteria is associated with at least one autonomous vehicle action, and wherein selecting a trajectory of the plurality of trajectories for the autonomous vehicle based on the vehicle trajectory criteria associated with the autonomous vehicle action comprises: determining a set of vehicle trajectory criteria associated with the autonomous vehicle action; and selecting the track from the plurality of tracks, wherein the track meets each vehicle track criterion in the set of vehicle track criteria.
Clause 25. The method of clause 21, wherein the vehicle trajectory criteria is one of a plurality of vehicle trajectory criteria, wherein each of the plurality of vehicle trajectory criteria is associated with at least one autonomous vehicle action, and wherein selecting a trajectory of the plurality of trajectories for the autonomous vehicle based on the vehicle trajectory criteria associated with the autonomous vehicle action comprises: determining a set of vehicle trajectory criteria associated with the autonomous vehicle action; and selecting the track from the plurality of tracks, wherein the track meets a maximum number of vehicle track criteria in the set of vehicle track criteria.
Clause 26. The method of clause 21, wherein the vehicle trajectory criteria is one of a plurality of vehicle trajectory criteria, wherein each of the plurality of vehicle trajectory criteria is associated with at least one autonomous vehicle action, and wherein selecting a trajectory of the plurality of trajectories for the autonomous vehicle based on the vehicle trajectory criteria associated with the autonomous vehicle action comprises: determining a set of vehicle trajectory criteria associated with the autonomous vehicle action; determining a priority of the set of vehicle trajectory criteria according to a priority policy, wherein a first vehicle trajectory criteria is prioritized over a second vehicle trajectory criteria; selecting a first track from the plurality of tracks; determining that the first trajectory does not meet each vehicle trajectory criterion in the set of vehicle trajectory criteria; and selecting a second track from the plurality of tracks, wherein the second track meets the first vehicle track criteria but does not meet the second vehicle track criteria, wherein the second track is the track.
Clause 27. The method of any one of clauses 21 to 26, further comprising: after controlling the autonomous vehicle based on the selected trajectory: receiving an annotation from a user device; and reporting the annotation, along with new track data associated with the selected track, to a remote system, wherein the annotation represents a user experience of the selected track.
Clause 28, a system comprising: at least one processor; and at least one non-transitory storage medium storing instructions that, when executed by the at least one processor, cause the at least one processor to: receiving environmental data associated with an environment of the autonomous vehicle; determining a plurality of trajectories of autonomous vehicle actions based on the environmental data; selecting a track of the plurality of tracks for the autonomous vehicle based on a vehicle track criteria associated with the autonomous vehicle action, wherein the vehicle track criteria is generated based on a decision tree, wherein the decision tree is generated from track data of a plurality of example tracks, wherein the plurality of example tracks are associated with the autonomous vehicle action; and controlling the autonomous vehicle based on the selected trajectory.
Clause 29 the system of clause 28, wherein selecting a track of the plurality of tracks for the autonomous vehicle based on the vehicle track criteria associated with autonomous vehicle action comprises: the track is selected from the plurality of tracks, wherein the track meets the vehicle track criteria.
Clause 30. The system of clause 28, wherein selecting a track of the plurality of tracks for the autonomous vehicle based on the vehicle track criteria associated with autonomous vehicle actions comprises: selecting the track from a plurality of tracks; determining that the trajectory does not meet the vehicle trajectory criteria; and modifying the trajectory until the trajectory meets the vehicle trajectory criteria.
Clause 31. The system of clause 28, wherein the vehicle trajectory criteria is one of a plurality of vehicle trajectory criteria, wherein each of the plurality of vehicle trajectory criteria is associated with at least one autonomous vehicle action, and wherein selecting a trajectory of the plurality of trajectories for the autonomous vehicle based on the vehicle trajectory criteria associated with the autonomous vehicle action comprises: determining a set of vehicle trajectory criteria associated with the autonomous vehicle action; and selecting the track from the plurality of tracks, wherein the track meets each vehicle track criterion in the set of vehicle track criteria.
Clause 32. The system of clause 28, wherein the vehicle trajectory criteria is one of a plurality of vehicle trajectory criteria, wherein each of the plurality of vehicle trajectory criteria is associated with at least one autonomous vehicle action, and wherein selecting a trajectory of the plurality of trajectories for the autonomous vehicle based on the vehicle trajectory criteria associated with the autonomous vehicle action comprises: determining a set of vehicle trajectory criteria associated with the autonomous vehicle action; and selecting the track from the plurality of tracks, wherein the track meets a maximum number of vehicle track criteria in the set of vehicle track criteria.
Clause 33, the system of clause 28, wherein the vehicle trajectory criteria is one of a plurality of vehicle trajectory criteria, wherein each of the plurality of vehicle trajectory criteria is associated with at least one autonomous vehicle action, and wherein selecting a trajectory of the plurality of trajectories for the autonomous vehicle based on the vehicle trajectory criteria associated with the autonomous vehicle action comprises: determining a set of vehicle trajectory criteria associated with the autonomous vehicle action; determining a priority of the set of vehicle trajectory criteria according to a priority policy, wherein a first vehicle trajectory criteria is prioritized over a second vehicle trajectory criteria; selecting a first track from the plurality of tracks; determining that the first trajectory does not meet each vehicle trajectory criterion in the set of vehicle trajectory criteria; and selecting a second track from the plurality of tracks, wherein the second track meets the first vehicle track criteria but does not meet the second vehicle track criteria, wherein the second track is the track.
Clause 34. The system of any one of clauses 28 to 33, further comprising: after controlling the autonomous vehicle based on the selected trajectory: receiving an annotation from a user device; and reporting the annotation, along with new track data associated with the selected track, to a remote system, wherein the annotation represents a user experience of the selected track.
Clause 35, at least one non-transitory storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to: receiving environmental data associated with an environment of the autonomous vehicle; determining a plurality of trajectories of autonomous vehicle actions based on the environmental data; selecting a track of the plurality of tracks for the autonomous vehicle based on a vehicle track criteria associated with the autonomous vehicle action, wherein the vehicle track criteria is generated based on a decision tree, wherein the decision tree is generated from track data of a plurality of example tracks, wherein the plurality of example tracks are associated with the autonomous vehicle action; and controlling the autonomous vehicle based on the selected trajectory.
Clause 36, at least one non-transitory storage medium of clause 35, wherein selecting a track of the plurality of tracks for the autonomous vehicle based on the vehicle track criteria associated with autonomous vehicle actions comprises: the track is selected from the plurality of tracks, wherein the track meets the vehicle track criteria.
Clause 37, at least one non-transitory storage medium of clause 35, wherein selecting a track of the plurality of tracks for the autonomous vehicle based on the vehicle track criteria associated with autonomous vehicle actions comprises: selecting the track from a plurality of tracks; determining that the trajectory does not meet the vehicle trajectory criteria; and modifying the trajectory until the trajectory meets the vehicle trajectory criteria.
Clause 38. The at least one non-transitory storage medium of clause 35, wherein the vehicle trajectory criterion is one of a plurality of vehicle trajectory criteria, wherein each of the plurality of vehicle trajectory criteria is associated with at least one autonomous vehicle action, and wherein selecting a trajectory of the plurality of trajectories for the autonomous vehicle based on the vehicle trajectory criteria associated with the autonomous vehicle action comprises: determining a set of vehicle trajectory criteria associated with the autonomous vehicle action; and selecting the track from the plurality of tracks, wherein the track meets each vehicle track criterion in the set of vehicle track criteria.
Clause 39, at least one non-transitory storage medium of clause 35, wherein the vehicle trajectory criterion is one of a plurality of vehicle trajectory criteria, wherein each of the plurality of vehicle trajectory criteria is associated with at least one autonomous vehicle action, and wherein selecting a trajectory of the plurality of trajectories for the autonomous vehicle based on the vehicle trajectory criteria associated with an autonomous vehicle action comprises: determining a set of vehicle trajectory criteria associated with the autonomous vehicle action; and selecting the track from the plurality of tracks, wherein the track meets a maximum number of vehicle track criteria in the set of vehicle track criteria.
Clause 40, at least one non-transitory storage medium of clause 35, wherein the vehicle trajectory criterion is one of a plurality of vehicle trajectory criteria, wherein each of the plurality of vehicle trajectory criteria is associated with at least one autonomous vehicle action, and wherein selecting a trajectory of the plurality of trajectories for the autonomous vehicle based on the vehicle trajectory criterion associated with an autonomous vehicle action comprises: determining a set of vehicle trajectory criteria associated with the autonomous vehicle action; determining a priority of the set of vehicle trajectory criteria according to a priority policy, wherein a first vehicle trajectory criteria is prioritized over a second vehicle trajectory criteria; selecting a first track from the plurality of tracks; determining that the first trajectory does not meet each vehicle trajectory criterion in the set of vehicle trajectory criteria; and selecting a second track from the plurality of tracks, wherein the second track meets the first vehicle track criteria but does not meet the second vehicle track criteria, wherein the second track is the track.
Additional examples
All of the methods and tasks described herein can be performed by a computer system and are fully automated. Computer systems may in some cases include a number of different computers or computing devices (e.g., physical servers, workstations, storage arrays, cloud computing resources, etc.) that communicate and interoperate over a network to perform the described functions. Each such computing device typically includes a processor (or multiple processors) that execute program instructions or modules stored in a memory or other non-transitory computer-readable storage medium or device (e.g., solid state storage device, disk drive, etc.). The various functions disclosed herein may be embodied in such program instructions or may be implemented in dedicated circuitry (e.g., ASIC or FPGA) of a computer system. Where a computer system includes multiple computing devices, the devices may be, but need not be, co-located. The results of the disclosed methods and tasks may be persistently stored by transforming a physical storage device, such as a solid state memory chip or disk, into a different state. In some embodiments, the computer system may be a cloud-based computing system whose processing resources are shared by a plurality of different business entities or other users.
The processes described herein or illustrated in the figures of the present disclosure may begin in response to an event (such as on a predetermined or dynamically determined schedule, on demand when initiated by a user or system administrator, or in response to some other event). When such processing is initiated, a set of executable program instructions stored on one or more non-transitory computer readable media (e.g., hard disk drive, flash memory, removable media, etc.) may be loaded into a memory (e.g., RAM) of a server or other computing device. These executable instructions may then be executed by a hardware-based computer processor of the computing device. In some embodiments, such processing, or portions thereof, may be implemented serially or in parallel on multiple computing devices and/or multiple processors.
Some acts, events, or functions of any of the processes or algorithms described herein can occur in different sequences, can be added, combined, or omitted entirely, in accordance with the embodiments (e.g., not all of the described acts or events are essential to the practice of the algorithm). Further, in some embodiments, operations or events may occur concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially.
The various illustrative logical blocks, modules, routines, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware (e.g., an ASIC or FPGA device), computer software running on computer hardware, or combinations of both. Furthermore, the various illustrative logical blocks and modules described in connection with the embodiments disclosed herein may be implemented or performed with the following: a machine, such as a processor device, digital signal processor ("DSP"), application specific integrated circuit ("ASIC"), field programmable gate array ("FPGA"), or other programmable logic device; discrete gate or transistor logic; discrete hardware components; or any combination thereof designed to perform the functions described herein. The processor device may be a microprocessor, but in the alternative, the processor device may be a controller, a microcontroller, a state machine, or a combination thereof, or the like. The processor device may include circuitry configured to process computer-executable instructions. In another embodiment, the processor device comprises an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor device may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor), a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, the processor device may also include primarily analog components. For example, some or all of the rendering techniques described herein may be implemented in analog circuitry or hybrid analog and digital circuitry. A computing environment may include any type of computer system, including, but not limited to, a microprocessor-based computer system, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computing engine within an appliance, to name a few.
Elements of the methods, processes, routines, or algorithms described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor device, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of non-transitory computer-readable storage medium. An exemplary storage medium may be coupled to the processor device such that the processor device can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor device. The processor device and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor device and the storage medium may reside as discrete components in a user terminal.
In the foregoing specification, aspects and embodiments of the disclosure have been described with reference to numerous specific details that may vary from implementation to implementation. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The sole and exclusive indicator of the scope of the application, and what is intended by the applicants to be the scope of the application, is the literal and equivalent scope of the claims, including any subsequent amendments, that are issued from the present application in the specific form of the claims. Any definitions expressly set forth herein for terms contained in such claims shall govern the meaning of such terms as used in the claims. In addition, when the term "further comprises" is used in the preceding description or the appended claims, the phrase may be followed by additional steps or entities, or sub-steps/sub-entities of the previously described steps or entities.
Cross Reference to Related Applications
The present application claims the benefit of priority from U.S. provisional patent application 63/365,694, entitled "INFERRING AUTONOMOUS DRIVING RULES FROM DATA," filed on 1, 6, 2022, which is incorporated herein by reference in its entirety.
Claims (20)
1. A method for operating an autonomous vehicle, comprising:
obtaining a training data set associated with an autonomous vehicle action, the training data set comprising trajectory data for a plurality of examples of the autonomous vehicle action and labels for each of the plurality of examples;
generating a decision tree based on the trajectory data and the labels of the training dataset;
determining a vehicle trajectory criterion based on the decision tree; and
communicating the vehicle trajectory criteria to at least one autonomous vehicle, wherein the at least one autonomous vehicle uses the vehicle trajectory criteria to select a vehicle trajectory for the at least one autonomous vehicle.
2. The method of claim 1, wherein obtaining the training data set associated with the autonomous vehicle action comprises:
obtaining first trajectory data associated with a first instance of the autonomous vehicle action;
Obtaining a first tag of the first instance of the autonomous vehicle action; and
the first tag is associated with the first trajectory data.
3. The method of claim 2, wherein obtaining the first tag of the first instance of the autonomous vehicle action comprises: an annotation is received from a user device, and wherein the annotation represents a user experience of the first example of the autonomous vehicle action.
4. The method of claim 2, wherein obtaining the first tag of the first instance of the autonomous vehicle action comprises: an annotation is received from a simulation system, and wherein the simulation system determines the annotation based on the first trajectory data satisfying at least one safety threshold.
5. The method of any of claims 1-4, wherein generating the decision tree based on the trajectory data and the labels of the training dataset comprises:
receiving the trajectory data and the label at a root node of the decision tree, the trajectory data comprising a set of traces each corresponding to a label and an instance;
determining a first condition meeting a branch condition from the condition set;
Branching the decision tree to a first node;
associating the root node with the first condition and associating a first subset of traces in the trace set that satisfy the first condition with the first node;
determining that the continuation condition is satisfied; and
in response to determining that the continuation condition is satisfied, recursively performing the following until the continuation condition is not satisfied: determining a second condition that satisfies the branching condition, branching the decision tree to a second node, and associating the first node with the second condition, and associating a second subset of the first subset of traces that satisfies the second condition with the second node.
6. The method of claim 5, wherein the branching condition is satisfied where the first condition has the highest impurity reduction measure.
7. The method of claim 5, wherein determining the first condition from the set of conditions that satisfies the branching condition comprises:
determining an impurity reduction measure for each condition in the set of conditions; and
the first condition is selected based on the first condition having the highest impurity reduction measure.
8. The method of claim 7, wherein determining the impurity reduction measure for each condition in the set of conditions comprises: for each condition,
determining a first set of traces that meet the condition and a second set of traces that do not meet the condition; and
the impurity reduction measure is determined based on the first set of traces and the second set of traces.
9. The method of claim 7, wherein the impurity reduction measure is at least one of an information gain, a Gini gain, and a misclassification gain.
10. The method of claim 5, wherein the set of conditions comprises different types of conditions, and wherein each condition comprises at least one condition operator and at least one variable.
11. The method of claim 10, wherein the at least one variable is adjusted based on the set of traces.
12. The method of claim 5, wherein the continuation condition is not met where a subset of the set of traces that meets the second condition contains only positively marked traces.
13. The method of claim 5, wherein determining the vehicle trajectory criteria based on the decision tree comprises:
The vehicle trajectory criteria are determined by traversing nodes of the decision tree and concatenating conditions associated with the traversed nodes.
14. The method of any of claims 1-13, wherein the decision tree comprises at least two levels, and at least one condition at each of the at least two levels.
15. The method of claim 14, wherein each of the at least two tiers classifies the trajectory data into different groups according to the at least one condition at each tier.
16. The method of any of claims 1-15, wherein the at least one autonomous vehicle uses the vehicle trajectory criteria to select the vehicle trajectory for the at least one autonomous vehicle by selecting an initial trajectory from a plurality of trajectories, wherein the initial trajectory meets the vehicle trajectory criteria.
17. The method of any of claims 1-16, wherein the at least one autonomous vehicle uses the vehicle trajectory criteria to select the vehicle trajectory for the at least one autonomous vehicle by:
determining an initial trajectory;
determining that the initial trajectory does not meet the vehicle trajectory criteria; and
the initial trajectory is modified until the initial trajectory meets the vehicle trajectory criteria.
18. A system for operating an autonomous vehicle, comprising:
at least one processor, and
at least one non-transitory storage medium storing instructions that, when executed by the at least one processor, cause the at least one processor to:
obtaining a training data set associated with an autonomous vehicle action, the training data set comprising trajectory data for a plurality of examples of the autonomous vehicle action and labels for each of the plurality of examples;
generating a decision tree based on the trajectory data and the labels of the training dataset;
determining a vehicle trajectory criterion based on the decision tree; and
communicating the vehicle trajectory criteria to at least one autonomous vehicle, wherein the at least one autonomous vehicle uses the vehicle trajectory criteria to select a vehicle trajectory for the at least one autonomous vehicle.
19. The system of claim 18, wherein generating the decision tree based on the trajectory data and the labels of the training dataset comprises:
receiving the trajectory data and the label at a root node of the decision tree, the trajectory data comprising a set of traces each corresponding to a label and an instance;
determining a first condition meeting a branch condition from the condition set;
branching the decision tree to a first node;
associating the root node with the first condition and associating a first subset of traces in the trace set that satisfy the first condition with the first node;
determining that the continuation condition is satisfied; and
in response to determining that the continuation condition is satisfied, recursively performing the following until the continuation condition is not satisfied: determining a second condition that satisfies the branching condition, branching the decision tree to a second node, and associating the first node with the second condition, and associating a second subset of the first subset of traces that satisfies the second condition with the second node.
20. At least one non-transitory storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to:
Obtaining a training data set associated with an autonomous vehicle action, the training data set comprising trajectory data for a plurality of examples of the autonomous vehicle action and labels for each of the plurality of examples;
generating a decision tree based on the trajectory data and the labels of the training dataset;
determining a vehicle trajectory criterion based on the decision tree; and
communicating the vehicle trajectory criteria to at least one autonomous vehicle, wherein the at least one autonomous vehicle uses the vehicle trajectory criteria to select a vehicle trajectory for the at least one autonomous vehicle.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US63/365,694 | 2022-06-01 | ||
US17/818,861 US20230391367A1 (en) | 2022-06-01 | 2022-08-10 | Inferring autonomous driving rules from data |
US17/818,861 | 2022-08-10 |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117148828A (en) | 2023-12-01
Family
ID=88884867
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211610915.5A Withdrawn CN117148828A (en) | 2022-06-01 | 2022-12-14 | Method, system and medium for operating autonomous vehicles |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117148828A (en) |
2022-12-14: CN202211610915.5A filed; published as CN117148828A (status: withdrawn)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12112535B2 (en) | Systems and methods for effecting map layer updates based on collected sensor data | |
US11562556B1 (en) | Prediction error scenario mining for machine learning models | |
US20220355825A1 (en) | Predicting agent trajectories | |
US20220355821A1 (en) | Ride comfort improvement in different traffic scenarios for autonomous vehicle | |
US12050660B2 (en) | End-to-end system training using fused images | |
GB2605463A (en) | Selecting testing scenarios for evaluating the performance of autonomous vehicles | |
CN116265862A (en) | Vehicle, system and method for a vehicle, and storage medium | |
KR20230004212A (en) | Cross-modality active learning for object detection | |
CN115705693A (en) | Method, system and storage medium for annotation of sensor data | |
CN116890875A (en) | Vehicle motion selection based on simulated state | |
CN116466697A (en) | Method, system and storage medium for a vehicle | |
CN116229703A (en) | Method, system and storage medium for detecting traffic signals | |
US20230360375A1 (en) | Prediction error scenario mining for machine learning models | |
WO2024044305A1 (en) | Efficient and optimal feature extraction from observations | |
EP4156043A1 (en) | Driver scoring system and method with accident proneness filtering using alert clustering and safety-warning based road classification | |
US20230294716A1 (en) | Filtering perception-related artifacts | |
CN117148829A (en) | Method and system for autonomous vehicles and non-transitory storage medium | |
CN116265313A (en) | Method and system for operating autonomous vehicles and computer readable media | |
KR20230042430A (en) | Learning to identify safety-critical scenarios for an autonomous vehicle | |
US20230391367A1 (en) | Inferring autonomous driving rules from data | |
CN117148828A (en) | Method, system and medium for operating autonomous vehicles | |
US20230415772A1 (en) | Trajectory planning based on extracted trajectory features | |
US20240059302A1 (en) | Control system testing utilizing rulebook scenario generation | |
US20240265298A1 (en) | Conflict analysis between occupancy grids and semantic segmentation maps | |
US20230063368A1 (en) | Selecting minimal risk maneuvers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20231201 |