WO2024145420A1 - Autonomous vehicle with steerable sensor (Véhicule autonome à capteur orientable) - Google Patents

Autonomous vehicle with steerable sensor (Véhicule autonome à capteur orientable)

Info

Publication number
WO2024145420A1
Authority
WO
WIPO (PCT)
Prior art keywords
sensor
autonomous vehicle
goal
data
field
Prior art date
Application number
PCT/US2023/086146
Other languages
English (en)
Inventor
James Robert CURRY
Fangrong Peng
Original Assignee
Aurora Operations, Inc.
Aurora Innovation, Inc.
Priority date: 2022-12-30
Filing date: 2023-12-28
Publication date: 2024-07-04
Application filed by Aurora Operations, Inc., Aurora Innovation, Inc. filed Critical Aurora Operations, Inc.
Publication of WO2024145420A1

Definitions

  • FIG. 2 is a block diagram of an example autonomy system for an autonomous platform, according to some implementations of the present disclosure.
  • FIG. 6 is a flowchart showing one example of a process flow that may be executed in the autonomy system of FIG. 2 to manage the field-of-view of a steerable sensor.
  • FIG. 7 is a diagram showing one example implementation of a sensor field-of-view system.
  • FIG. 8 is a flowchart showing one example of a process flow that may be executed to select a goal location for a steerable sensor.
  • FIG. 9 is a diagram showing one example of an environment illustrating one example implementation of the process flow of FIG. 8.
  • FIG. 11 is a flowchart showing one example of a process flow that may be executed to implement a goal condition that a candidate location not be occluded to the steerable sensor.
  • FIG. 12 is a diagram showing another example environment illustrating an example goal condition that a candidate location be on a travel way having a common vertical level with the travel way on which an autonomous vehicle is traversing.
  • FIG. 13 is a flowchart showing one example of a process flow that may be executed to implement a goal condition that candidate locations be on travel ways having a common vertical level with the travel way that the autonomous vehicle is traversing.
  • FIG. 16 is a diagram showing one example of an environment with an area of interest system for analyzing sensor data.
  • the plurality of goal conditions further comprises a condition that a line-of-sight from the position of the first sensor to the goal location is not occluded by any map objects described by the map data.
  • the plurality of goal conditions further comprises a condition that the candidate location is on a travel way at a common travel way level with a current travel way of the autonomous vehicle based at least in part on the position of the autonomous vehicle.
  • the operations further comprise receiving route data describing a route of the autonomous vehicle, and the plurality of goal conditions further comprise a condition that the candidate location be either on the route of the autonomous vehicle or on a travel way that leads to the route of the autonomous vehicle.
  • FIG. 1 is a block diagram of an example operational scenario, according to some implementations of the present disclosure.
  • an environment 100 contains an autonomous platform 110 and a number of objects, including first actor 120, second actor 130, and third actor 140.
  • the autonomous platform 110 can move through the environment 100 and interact with the object(s) that are located within the environment 100 (e.g., first actor 120, second actor 130, third actor 140, etc.).
  • the autonomous platform 110 can optionally be configured to communicate with remote system(s) 160 through network(s) 170.
  • the autonomous platform can be configured to operate in a plurality of operating modes.
  • the autonomous platform can be configured to operate in a fully autonomous (e.g., self-driving, etc.) operating mode in which the autonomous platform is controllable without user input (e.g., can drive and navigate with no input from a human operator present in the autonomous vehicle or remote from the autonomous vehicle, etc.).
  • the autonomous platform can operate in a semi-autonomous operating mode in which the autonomous platform can operate with some input from a human operator present in the autonomous platform (or a human operator that is remote from the autonomous platform).
  • the autonomy system 200 can include the planning system 250, which can be configured to determine how the autonomous platform is to interact with and move within its environment.
  • the planning system 250 can determine one or more motion plans for an autonomous platform.
  • a motion plan can include one or more trajectories (e.g., motion trajectories) that indicate a path for an autonomous platform to follow.
  • a trajectory can be of a certain length or time range. The length or time range can be defined by the computational planning horizon of the planning system 250.
  • a motion trajectory can be defined by one or more waypoints (with associated coordinates). The waypoint(s) can be future location(s) for the autonomous platform.
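As a concrete illustration of the waypoint-based trajectory representation above, the following is a minimal Python sketch; the class and method names (Waypoint, Trajectory, within_horizon) are hypothetical and not taken from the disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Waypoint:
    """A future location (with associated coordinates) for the autonomous platform."""
    x: float  # meters, map frame
    y: float  # meters, map frame
    t: float  # seconds from the current planning time

@dataclass
class Trajectory:
    """An ordered sequence of waypoints indicating a path for the platform to follow."""
    waypoints: List[Waypoint]

    def within_horizon(self, horizon_s: float) -> "Trajectory":
        # Restrict the trajectory to the planner's computational planning horizon.
        return Trajectory([w for w in self.waypoints if w.t <= horizon_s])
```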
  • the motion plans can be continuously generated, updated, and considered by the planning system 250.
  • the planning system 250 can determine a strategy for the autonomous platform.
  • a strategy may be a set of discrete decisions (e.g., yield to actor, reverse yield to actor, merge, lane change) that the autonomous platform makes.
  • the strategy may be selected from a plurality of potential strategies.
  • the selected strategy may be a lowest cost strategy as determined by one or more cost functions.
  • the cost functions may, for example, evaluate the probability of a collision with another actor or object.
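A minimal sketch of lowest-cost strategy selection along these lines; the specific cost terms and weights (collision probability, forward progress) are illustrative assumptions rather than the disclosure's cost functions.

```python
def strategy_cost(collision_probability: float, progress_m: float,
                  w_collision: float = 100.0, w_progress: float = 1.0) -> float:
    # Penalize likely collisions heavily; reward forward progress.
    return w_collision * collision_probability - w_progress * progress_m

def select_lowest_cost_strategy(candidates):
    """candidates: iterable of (strategy_name, collision_probability, progress_m) tuples."""
    return min(candidates, key=lambda c: strategy_cost(c[1], c[2]))[0]

# Example: yielding wins because the other maneuvers carry much higher collision risk.
print(select_lowest_cost_strategy([
    ("merge", 0.30, 30.0),           # cost = 30 - 30 = 0
    ("yield to actor", 0.01, 10.0),  # cost = 1 - 10 = -9
    ("lane change", 0.25, 25.0),     # cost = 25 - 25 = 0
]))  # -> "yield to actor"
```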
  • the planning system 250 can determine a desired trajectory for executing a strategy. For instance, the planning system 250 can obtain one or more trajectories for executing one or more strategies. The planning system 250 can evaluate trajectories or strategies (e.g., with scores, costs, rewards, constraints, etc.) and rank them. For instance, the planning system 250 can use forecasting output(s) that indicate interactions (e.g., proximity, intersections, etc.) between trajectories for the autonomous platform and one or more objects to inform the evaluation of candidate trajectories or strategies for the autonomous platform.
  • the planning system 250 can select a motion plan (and a corresponding trajectory) based on a ranking of a plurality of candidate trajectories. In some implementations, the planning system 250 can select a highest ranked candidate, or a highest ranked feasible candidate.
  • the planning system 250 can then validate the selected trajectory against one or more constraints before the trajectory is executed by the autonomous platform.
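The rank-then-validate step can be sketched as follows; the scoring function and constraint predicates are placeholders, since the disclosure does not prescribe specific ones.

```python
from typing import Callable, List, Optional, TypeVar

T = TypeVar("T")

def select_motion_plan(candidates: List[T],
                       score: Callable[[T], float],
                       constraints: List[Callable[[T], bool]]) -> Optional[T]:
    """Rank candidate trajectories by score (higher is better) and return the
    highest-ranked candidate that satisfies every constraint, or None."""
    for trajectory in sorted(candidates, key=score, reverse=True):
        if all(check(trajectory) for check in constraints):
            return trajectory
    return None  # no feasible candidate; the planner may replan
```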
  • the autonomous platform 110 (e.g., using its autonomy system 200) can forecast that a platform trajectory 112A to more quickly move the autonomous platform 110 into the area in front of the first actor 120 is likely associated with the first actor 120 decreasing forward speed and yielding more quickly to the autonomous platform 110 in accordance with first actor trajectory 122A. Additionally, or alternatively, the autonomous platform 110 can forecast that a platform trajectory 112B to gently move the autonomous platform 110 into the area in front of the first actor 120 is likely associated with the first actor 120 slightly decreasing speed and yielding slowly to the autonomous platform 110 in accordance with first actor trajectory 122B.
  • the autonomous platform 110 can forecast that a platform trajectory 112C to remain in a parallel alignment with the first actor 120 is likely associated with the first actor 120 not yielding any distance to the autonomous platform 110 in accordance with first actor trajectory 122C. Based on comparison of the forecasted scenarios to a set of desired outcomes (e.g., by scoring scenarios based on a cost or reward), the planning system 250 can select a motion plan (and its associated trajectory) in view of the autonomous platform’s interaction with the environment 100. In this manner, for example, the autonomous platform 110 can interleave its forecasting and motion planning functionality.
  • the autonomy system 200 can include a control system 260 (e.g., a vehicle control system).
  • the control system 260 can provide an interface between the autonomy system 200 and the platform control devices 212 for implementing the strategies and motion plan(s) generated by the planning system 250.
  • the control system 260 can implement the selected motion plan/trajectory to control the autonomous platform’s motion through its environment by following the selected trajectory (e.g., the waypoints included therein).
  • the control system 260 can, for example, translate a motion plan into instructions for the appropriate platform control devices 212 (e.g., acceleration control, brake control, steering control, etc.).
  • control system 260 can translate a selected motion plan into instructions to adjust a steering component (e.g., a steering angle) by a certain number of degrees, apply a certain magnitude of braking force, increase/decrease speed, etc.
  • control system 260 can communicate with the platform control devices 212 through communication channels including, for example, one or more data buses (e.g., controller area network (CAN), etc.), onboard diagnostics connectors (e.g., OBD-II, etc.), or a combination of wired or wireless communication links.
  • the platform control devices 212 can send or obtain data, messages, signals, etc. to or from the autonomy system 200 (or vice versa) through the communication channel(s).
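The translation described above, from a selected motion plan into low-level instructions such as steering, braking, and acceleration, can be sketched as follows; the proportional mapping and command fields are assumptions for illustration, not the actual interface of the platform control devices 212.

```python
from dataclasses import dataclass

@dataclass
class ControlCommand:
    steering_delta_deg: float  # requested change in steering angle
    brake: float               # 0.0 (none) .. 1.0 (full)
    throttle: float            # 0.0 (none) .. 1.0 (full)

def translate_plan_step(heading_error_deg: float, speed_error_mps: float) -> ControlCommand:
    # Steer toward the next waypoint's heading; brake if too fast, accelerate if too slow.
    if speed_error_mps < 0.0:
        return ControlCommand(heading_error_deg, brake=min(1.0, -speed_error_mps / 5.0), throttle=0.0)
    return ControlCommand(heading_error_deg, brake=0.0, throttle=min(1.0, speed_error_mps / 5.0))
```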
  • the autonomy system 200 can receive, through communication interface(s) 206, assistive signal(s) from remote assistance system 270.
  • Remote assistance system 270 can communicate with the autonomy system 200 over a network (e.g., as a remote system 160 over network 170).
  • the autonomy system 200 can initiate a communication session with the remote assistance system 270.
  • the autonomy system 200 can initiate a session based on or in response to a trigger.
  • the trigger may be an alert, an error signal, a map feature, a request, a location, a traffic condition, a road condition, etc.
  • After initiating the session, the autonomy system 200 can provide context data to the remote assistance system 270.
  • the assistive signal(s) can include waypoints (e.g., a path around an obstacle, lane change, etc.), velocity or acceleration profiles (e.g., speed limits, etc.), relative motion instructions (e.g., convoy formation, etc.), operational characteristics (e.g., use of auxiliary systems, reduced energy processing modes, etc.), or other signals to assist the autonomy system 200.
  • the autonomy system 200 can use the assistive signal(s) for input into one or more autonomy subsystems for performing autonomy functions.
  • the planning system 250 can receive the assistive signal(s) as an input for generating a motion plan.
  • assistive signal(s) can include constraints for generating a motion plan.
  • assistive signal(s) can include cost or reward adjustments for influencing motion planning by the planning system 250.
  • assistive signal(s) can be considered by the autonomy system 200 as suggestive inputs for consideration in addition to other received data (e.g., sensor inputs, etc.).
  • the autonomy system 200 may be platform agnostic, and the control system 260 can provide control instructions to platform control devices 212 for a variety of different platforms for autonomous movement (e.g., a plurality of different autonomous platforms fitted with autonomous control systems).
  • This can include a variety of different types of autonomous vehicles (e.g., sedans, vans, SUVs, trucks, electric vehicles, combustion power vehicles, etc.) from a variety of different manufacturers/developers that operate in various different environments and, in some implementations, perform one or more vehicle services.
  • an operational environment can include a dense environment 300.
  • An autonomous platform can include an autonomous vehicle 310 controlled by the autonomy system 200.
  • the autonomous vehicle 310 can be configured for maneuverability in a dense environment, such as with a configured wheelbase or other specifications. In some implementations, the autonomous vehicle 310 can be configured for transporting cargo or passengers. In some implementations, the autonomous vehicle 310 can be configured to transport numerous passengers (e.g., a passenger van, a shuttle, a bus, etc.). In some implementations, the autonomous vehicle 310 can be configured to transport cargo, such as large quantities of cargo (e.g., a truck, a box van, a step van, etc.) or smaller cargo (e.g., food, personal packages, etc.).
  • an autonomous vehicle (e.g., the autonomous vehicle 310 or the autonomous vehicle 350) can perform an example trip/service to traverse the one or more travel ways 332 (optionally connected by the interchange 334) to transport cargo between the transfer hub 336 and the transfer hub 338.
  • the example trip/service includes a cargo delivery/transport service, such as a freight delivery/transport service.
  • the example trip/service can be assigned by a remote computing system.
  • the transfer hub 336 can be an origin point for cargo (e.g., a depot, a warehouse, a facility, etc.) and the transfer hub 338 can be a destination point for cargo (e.g., a retailer, etc.).
  • the transfer hub 336 can be an intermediate point along a cargo item’s ultimate journey between its respective origin and its respective destination.
  • a cargo item’s origin can be situated along the access travel ways 340 at the location 342.
  • the cargo item can accordingly be transported to the transfer hub 336 (e.g., by a human-driven vehicle, by the autonomous vehicle 310, etc.) for staging.
  • various cargo items can be grouped or staged for longer distance transport over the travel ways 332.
  • the example trip/service can be prescheduled (e.g., for regular traversal, such as on a transportation schedule). In some implementations, the example trip/service can be on-demand (e.g., as requested by or for performing a chartered passenger transport or freight delivery service).
  • some or all of the sensors 202 used by an autonomous platform may concentrate sensing resources densely. This may result in sensors that generate data having a higher resolution, but in a smaller field-of-view.
  • Consider an example LIDAR sensor having a scan pattern of N beams. The beams may be arranged in a vertical pattern and may be scanned through a horizontal sweep. The vertical field-of-view of the example LIDAR sensor may be based on the vertical spread of the N beams. The horizontal or azimuth field-of-view of the example LIDAR sensor may be based on the horizontal sweep of the scan.
  • Concentrating the sensing resources of the example LIDAR sensor may include reducing the spread of the N beams in the vertical direction, for example, by using a smaller spread angle. This may limit the vertical field-of-view of the example LIDAR sensor, but increase the resolution of the sensor in the vertical direction.
  • the sensing resources of the example LIDAR sensor may be concentrated in the horizontal direction by limiting the horizontal sweep of the scan. This may also increase the resolution of the data generated by the example LIDAR sensor, at least because the sensor may be able to execute more cycles of a smaller horizontal sweep in a given time period. Limiting the horizontal sweep of the example LIDAR sensor, however, also limits the horizontal field-of-view.
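The trade-off can be made concrete with a small sketch; the fixed measurement-budget model below is a simplifying assumption, not a specification of any particular LIDAR sensor.

```python
def vertical_fov_deg(n_beams: int, beam_spacing_deg: float) -> float:
    # Vertical field-of-view spanned by N beams with a fixed angular spread between them.
    return (n_beams - 1) * beam_spacing_deg

def sweep_rate_hz(points_per_second: float, n_beams: int,
                  azimuth_sweep_deg: float, azimuth_resolution_deg: float) -> float:
    # With a fixed measurement budget, a narrower azimuth sweep completes more
    # cycles per second, so the area inside the sweep is revisited more often.
    points_per_sweep = n_beams * (azimuth_sweep_deg / azimuth_resolution_deg)
    return points_per_second / points_per_sweep

print(vertical_fov_deg(64, 1.0 / 24.0))      # 2.625 degrees of vertical coverage
print(sweep_rate_hz(1.2e6, 64, 120.0, 0.1))  # ~15.6 sweeps per second
print(sweep_rate_hz(1.2e6, 64, 60.0, 0.1))   # ~31.2 sweeps per second (half the sweep, twice the rate)
```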
  • an autonomous vehicle may select a static field-of-view for a steerable sensor such that the sensor’s field-of- view is in a position most likely to be advantageous for the autonomous vehicle.
  • This approach can lead to situations where the field-of-view of the steerable sensor is not optimally positioned for the autonomous vehicle. For example, as the autonomous vehicle turns, the field-of-view of the steerable sensor may not be optimally positioned. Also, for example, if the autonomous vehicle is traveling on a travel way having a variable grade, a fixed field-of-view for a steerable sensor may not be optimally positioned as the grade of the travel way changes.
  • the autonomy system 200 or other suitable system of an autonomous vehicle may be configured to select a goal location in the environment of the autonomous vehicle.
  • the goal location is on a travel way in the environment.
  • the goal location may be on a travel way that is part of a route to be executed by an autonomous vehicle and/or may be on a travel way that is not part of a route to be executed by the autonomous vehicle.
  • a goal location may be selected on a travel way that is not part of a route to be executed by an autonomous vehicle, but is on a travel way that may be used by other vehicles that may enter a travel way being traversed by the autonomous vehicle.
  • the autonomy system 200 may apply various other parameters to determine the goal location.
  • the goal location is selected to be within a range of distances from a position of the autonomous vehicle and within a field-of-regard of the one or more sensors. Also, in some examples the goal location is selected to be on a travel way that is part of a route being executed by the autonomous vehicle.
  • the autonomy system 200 may be programmed to determine a field-of-view position for the steerable sensor so that the steerable sensor is directed to the goal location (e.g., so that the goal location is within the field-of-view of the sensor).
  • the autonomy system 200 may select the field-of-view position using pose data describing a position of the autonomous vehicle in the environment and sensor position data describing a position of the steerable sensor (e.g., relative to the autonomous vehicle).
  • the autonomy system 200 may send a field-of-view control signal to the steerable sensor instructing the steerable sensor to steer its field-of-view to the determined field-of-view position.
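Putting these steps together, a simplified X-Z plane sketch of the field-of-view position computation is shown below; the geometry (planar pitch, a single vertical mounting offset) is an assumption, and a full implementation would use the complete pose and calibrated sensor position.

```python
import math

def field_of_view_position(vehicle_xyz, vehicle_pitch_rad, sensor_offset_z, goal_xyz):
    """Elevation angle (radians) that points the sensor's vertical field-of-view at
    the goal location, expressed relative to the vehicle's pitch."""
    sensor_z = vehicle_xyz[2] + sensor_offset_z
    dx = goal_xyz[0] - vehicle_xyz[0]
    dz = goal_xyz[2] - sensor_z
    return math.atan2(dz, dx) - vehicle_pitch_rad

# Example: a goal location 150 m ahead and 2 m below the sensor on a level road.
angle = field_of_view_position((0.0, 0.0, 1.0), 0.0, 1.0, (150.0, 0.0, 0.0))
print(math.degrees(angle))  # about -0.76 degrees (aimed slightly downward)
```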
  • the autonomous vehicle 402 comprises a steerable sensor 422.
  • the steerable sensor 422 may be or include any suitable sensor or sensor type such as, for example, a LIDAR sensor, a RADAR sensor, an optical image capturing device, or the like.
  • the sensor 422 may be arranged in a manner similar to that described with respect to the sensors 202 of FIG. 2.
  • the steerable sensor 422 may generate sensor data similar to the sensor data 204, also described with respect to FIG. 2.
  • Although a single steerable sensor 422 is shown in FIG. 4, it will be appreciated that the autonomous vehicle 402 may include multiple sensors, for example, as illustrated in more detail in FIG. 5.
  • an autonomous vehicle such as the autonomous vehicle 402 may include both steerable and non-steerable sensors.
  • the steerable sensors may be managed as described herein while non-steerable sensors may maintain a static field-of-view.
  • FIG. 4 illustrates a sensor direction 418 and a vehicle direction 416.
  • the vehicle direction 416 indicates a direction in which the autonomous vehicle 402 is oriented.
  • the sensor direction 418 indicates a direction in which the steerable sensor 422 is oriented. In some examples, the sensor direction 418 indicates a center of a field-of-view of the steerable sensor 422.
  • FIG. 4 also includes a breakout window 403 showing additional details of the steerable sensor 422.
  • the breakout window 403 illustrates the steerable sensor 422 and the sensor direction 418 as well as a vertical field-of-view 426 of the steerable sensor 422 and a vertical field-of-regard 424 of the steerable sensor 422.
  • the field-of-view of the steerable sensor 422 may be an area in the environment 400 that is observable by the steerable sensor 422.
  • the steerable sensor 422 is shown in cross-section along the vertical axis or Z axis.
  • the vertical field-of-view 426 is a component of the field-of-view of the steerable sensor 422 in the vertical or Z axis direction.
  • the vertical field-of-regard 424 may be a range within which the vertical field-of-view 426 can be moved.
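One way to express the relationship between the field-of-view and the field-of-regard is as a clamp on where the field-of-view center may be commanded; this is an illustrative sketch, not the disclosure's mechanism.

```python
def clamp_fov_center_deg(requested_center: float, fov_width: float,
                         regard_min: float, regard_max: float) -> float:
    """Keep the vertical field-of-view centered as close as possible to the request
    while the entire field-of-view stays inside the vertical field-of-regard."""
    half = fov_width / 2.0
    return max(regard_min + half, min(regard_max - half, requested_center))

# Example: a 10-degree field-of-view inside a -15..+15 degree field-of-regard.
print(clamp_fov_center_deg(-14.0, 10.0, -15.0, 15.0))  # -10.0: pulled back into range
```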
  • FIG. 5 is a diagram showing another example of the environment 400 of FIG. 4 viewing the autonomous vehicle 402 from a top-down view parallel to the Z axis.
  • the top-down view shown in FIG. 5 illustrates an example azimuth field-of-view 508 for the steerable sensor 422.
  • the azimuth field-of-view 508 for the steerable sensor 422 may be fixed.
  • the steerable sensor 422 may modify its vertical field-of-view 426, as indicated by arrow 428 and described herein.
  • the steerable sensor 422 may be or include a LIDAR sensor.
  • the steerable sensor 422 may comprise transmitter optics and receiver optics.
  • the steerable sensor 422 may be a monostatic LIDAR sensor in which the transmitter optics and receiver optics are implemented in a common assembly.
  • the transmitter optics may generate an array of N beams that are transmitted into the environment 400.
  • the beams may be reflected back to the steerable sensor 422 by objects in the environment 400, generating a return signal.
  • the return signal is detected by the receiver optics of the steerable sensor 422.
  • Sensor data generated by the steerable sensor 422 may be and/or be based on the return signal.
  • the array of N beams may be spread vertically in the Z axis along the vertical field-of-view 426. In some examples, the spacing of the N beams may determine a resolution of the steerable sensor 422.
  • the array of N beams may be periodically scanned along the azimuth field-of-view 508.
  • the N beams of the array may be arranged in a scan pattern with adjacent beams separated by a beam angle.
  • the number of beams N may be any suitable number such as, for example, six beams, twelve beams, twenty-four beams, thirty-two beams, sixty-four beams, etc.
  • the angle separating the N beams of the array may be any suitable angle such as, for example, 1/10°, 1/12°, 1/6° and/or the like.
  • the array of N beams may be implemented progressively using N separate beam sources, or in an interlaced manner using fewer than N separate beam sources.
  • a sixty-four beam scan pattern with 1/24° spread between beams may be implemented using an array of sixteen beams separated by 1/6°.
  • the sixteen-beam array may be scanned across the azimuth field-of-view 508 four times per sensor cycle, with each scan being offset in the Z axis direction by 1/24°.
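The interlacing arithmetic in this example can be checked directly; the snippet below only verifies the stated geometry and does not model an actual sensor.

```python
from fractions import Fraction

# A sixteen-beam array spaced at 1/6 degree, scanned four times per sensor cycle with a
# 1/24 degree vertical offset per pass, reproduces a sixty-four beam pattern at 1/24 degree.
n_sources, passes = 16, 4
coarse = Fraction(1, 6)   # degrees between physical beams
fine = Fraction(1, 24)    # degrees of vertical offset per pass

lines = sorted(b * coarse + p * fine for b in range(n_sources) for p in range(passes))
spacings = {b - a for a, b in zip(lines, lines[1:])}

assert len(set(lines)) == n_sources * passes == 64
assert spacings == {Fraction(1, 24)}  # uniform 1/24 degree effective spacing
print(f"{len(lines)} scan lines spanning {float(lines[-1]):.3f} degrees")
```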
  • the steerable sensor 422 may be any suitable type of sensor. It will also be appreciated that, in examples in which the steerable sensor 422 is a LIDAR sensor, it may be implemented using a beam array comprising any suitable number of beams generated in a progressive and/or interlaced manner.
  • FIG. 4 also shows a breakout window 401 showing an example of the autonomy system 200 including a sensor field-of-view system 404.
  • the sensor field-of-view system 404 may generate a field-of-view command 414 that is provided to the steerable sensor 422 to cause the steerable sensor 422 to modify its field-of-view.
  • the field-of-view command 414 may indicate a position of the vertical field-of-view 426 within the vertical field-of-regard 424.
  • the sensor field-of-view system 404 may generate the field-of-view command 414 based on pose data 408, map data 410, and sensor position data 412.
  • the pose data 408 may describe a position or pose of the autonomous vehicle 402.
  • the pose data 408 may indicate where the autonomous vehicle 402 is located in the environment 400 and an orientation of the autonomous vehicle 402, for example, in six degrees of freedom.
  • the map data 410 may describe the environment 400 including travel ways as well as landmarks such as, for example, buildings, hills, bridges, tunnels, and/or the like.
  • the map data 410 may include travel way data describing the position of travel ways in the environment 400 and ground data describing the position of the ground in the environment 400.
  • the sensor field-of-view system 404 may be programmed to select a goal location 432 in the environment 400.
  • the goal location 432 may represent a location in the environment 400 that it is desirable for the autonomous vehicle 402 to sense.
  • the goal location 432 is determined using the map data 410 and the pose data 408.
  • the sensor field-of-view system 404 may apply one or more goal conditions to the map data 410 to select the goal location 432.
  • One example goal condition may be that the goal location 432 be within a field-of-regard of the steerable sensor 422.
  • the sensor field-of-view system 404 may not select a candidate location as a goal location 432 unless the field-of-view of the steerable sensor 422 may be steered to a position such that the goal location 432 is within the field-of-view of the steerable sensor 422.
  • Another example goal condition may be that the goal location 432 is within a range of distances from the autonomous vehicle 402. Another example goal condition may be that the goal location 432 is on a travel way in the environment 400. Another example goal condition may be that the goal location 432 is the location farthest from the autonomous vehicle 402 that meets the other goal conditions. For example, the goal location 432 may be the position on a travel way that is farthest from the autonomous vehicle 402 while still being within a range of distances from the autonomous vehicle 402. In another example, the goal conditions may include a goal condition that the goal location 432 be on a route that is being executed by the autonomous vehicle.
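A minimal sketch of evaluating these goal conditions over a set of candidate locations follows; the Candidate fields and the farthest-passing selection mirror the conditions listed above, while the data layout itself is an assumption.

```python
from dataclasses import dataclass
from typing import Iterable, Optional, Tuple

@dataclass
class Candidate:
    xy: Tuple[float, float]
    on_travel_way: bool
    on_route: bool
    in_field_of_regard: bool

def select_goal_location(candidates: Iterable[Candidate],
                         vehicle_xy: Tuple[float, float],
                         min_range_m: float, max_range_m: float,
                         require_route: bool = False) -> Optional[Candidate]:
    """Apply the example goal conditions and keep the farthest passing candidate."""
    def dist(c: Candidate) -> float:
        return ((c.xy[0] - vehicle_xy[0]) ** 2 + (c.xy[1] - vehicle_xy[1]) ** 2) ** 0.5

    passing = [c for c in candidates
               if c.on_travel_way
               and c.in_field_of_regard
               and (c.on_route or not require_route)
               and min_range_m <= dist(c) <= max_range_m]
    return max(passing, key=dist, default=None)
```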
  • the angle 430 is a two-dimensional angle in a plane parallel to the X-Z axis.
  • the angle 430 may be a three-dimensional angle.
  • the angle 430 may be a two-dimensional angle in a plane parallel to the X-Y axis.
  • the sensor field-of-view system 404 may determine a field-of-view command 414 indicating a position for the field-of-view 426 of the steerable sensor 422.
  • the command 414 may indicate the vehicle direction 416.
  • the view of the environment 400 shown in FIG. 5 illustrates an azimuth field-of-view 508 of the steerable sensor 422.
  • FIG. 5 also illustrates other example features.
  • the autonomous vehicle 402 is a tractor that is pulling a trailer 502.
  • FIG. 5 also shows additional steerable sensors 504, 506.
  • the steerable sensor 504 has an azimuth field-of-view 510.
  • the steerable sensor 506 has an azimuth field-of-view 512. It will be appreciated that, in some examples, the steerable sensors 504, 506 may be steerable in a vertical direction, similar to the steerable sensor 422. In some examples, the steerable sensors 504, 506 may also be steerable in an azimuth direction, as described herein. In examples that include more than one steerable sensor, the sensor field-of-view system 404 may be configured to select fields-of-view for each steerable sensor. For example, the sensor field-of-view system 404 may select a goal location for each steerable sensor 422, 504, 506.
  • the sensor field-of-view system 404 may translate the respective goal locations for each steerable sensor 504, 506, 422 into positions for the respective fields of view of the respective steerable sensors 422, 504, 506.
  • the sensor field-of-view system 404 may send a respective field-of-view command, such as the field-of-view command 414, to the respective sensors.
  • FIG. 6 is a flowchart showing one example of a process flow 600 that may be executed in the autonomy system 200 (e.g., by the sensor field-of-view system 404 thereof) to manage the field-of-view of a steerable sensor.
  • the process flow 600 is described with respect to the steerable sensor 422. It will be appreciated, however, that, in some examples, the process flow 600 may be separately executed to manage the fields-of-view of multiple steerable sensors.
  • the autonomy system 200 may determine a sensor angle to direct the field-of-view of the steerable sensor towards the goal location.
  • the angle may be a two-dimensional angle, for example, if the steerable sensor has a field-of-view that is steerable in two dimensions, or a three-dimensional angle, for example, if the steerable sensor has a field-of-view that is steerable in three dimensions.
  • the autonomy system 200 may implement the sensor angle determined at operation 606. This may include providing a field-of-view command, such as the field-of-view command 414, to the steerable sensor.
  • the steerable sensor may respond to the field-of-view command by steering its field-of-view to the position indicated by the angle.
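For a sensor steerable in more than one axis, the angle determined at operation 606 can be decomposed into azimuth and elevation components, as in the sketch below; the coordinate convention is an assumption.

```python
import math
from typing import Tuple

def steering_angles(sensor_xyz: Tuple[float, float, float],
                    goal_xyz: Tuple[float, float, float]) -> Tuple[float, float]:
    """Return (azimuth, elevation) in radians from the sensor position to the goal.
    A vertically steerable sensor would use only the elevation term; a sensor
    steerable in azimuth and elevation would use both."""
    dx = goal_xyz[0] - sensor_xyz[0]
    dy = goal_xyz[1] - sensor_xyz[1]
    dz = goal_xyz[2] - sensor_xyz[2]
    return math.atan2(dy, dx), math.atan2(dz, math.hypot(dx, dy))
```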
  • the process flow 600 may be executed multiple times, for example, concurrently. Each instance of the process flow 600 may be executed for a different steerable sensor 422, 504, 506 of the autonomous vehicle 402. In some examples, an instance of the process flow 600 executed may be constrained to select goal positions within a field-of-regard of the respective steerable sensors 422, 504, 506. For example, a goal location for the steerable sensor 422 may be within a field-of-regard of the steerable sensor 422. A goal location for the steerable sensor 504 may be within a field-of-regard for the steerable sensor 504. A goal location for the steerable sensor 506 may be within a field-of-regard for the steerable sensor 506, and so on.
  • FIG. 7 is a diagram 700 showing one example implementation of a sensor field-of-view system 702.
  • the sensor field-of-view system 702 may be implemented as a component in conjunction with an autonomy system associated with an autonomous vehicle, such as the sensor field-of-view system 404 and the autonomy system 200.
  • the sensor field-of-view system 702 may receive input data including local pose data 701, travel way data 703, ground data 705, and sensor calibration data 707.
  • the local pose data 701 may indicate a position of an autonomous vehicle, such as, a position and orientation of the autonomous vehicle. In some examples, the local pose data 701 is determined by a localization system of the autonomy system, such as the localization system 230 shown in FIG. 2.
  • Travel way data 703 and ground data 705 may be provided separately, or provided together as map data. Travel way data 703 may indicate the positions of one or more travel ways in the environment of the autonomous vehicle.
  • Ground data 705 may indicate the position of the ground in the environment of the autonomous vehicle.
  • the ground data 705 may also indicate landmarks in the environment of the autonomous vehicle such as, for example, buildings, hills, bridges, tunnels, and/or the like.
  • the sensor calibration data 707 may indicate a position of the steerable sensor, for example, relative to a position of the autonomous vehicle.
  • the sensor field-of-view system 702 comprises a planning module 704, a local module 706, and a device module 708.
  • the various modules 704, 706, 708 are implemented in hardware, software, and/or the like.
  • the planning module 704 may operate in a map domain.
  • the map domain may be a three-dimensional space described by the map data such as, for example, the travel way data 703 and the ground data 705.
  • the planning module 704 may utilize the local pose data 701 to translate a location of the autonomous vehicle from a vehicle domain (e.g., a three-dimensional space described by the local pose data 701) into the map domain.
  • the planning module 704 may select a goal location for the steerable sensor.
  • the planning module 704 may apply one or more goal conditions to select the goal location in the map domain.
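The translation from the vehicle domain into the map domain mentioned above amounts to applying the pose as a rigid transform; the planar (x, y, yaw) version below is a simplified sketch, whereas the disclosure's pose carries a full orientation.

```python
import math
from typing import Tuple

def vehicle_to_map(point_vehicle: Tuple[float, float],
                   pose_xy: Tuple[float, float],
                   pose_yaw_rad: float) -> Tuple[float, float]:
    """Rotate a vehicle-frame point by the vehicle's yaw, then translate by its position."""
    px, py = point_vehicle
    cos_y, sin_y = math.cos(pose_yaw_rad), math.sin(pose_yaw_rad)
    return (pose_xy[0] + cos_y * px - sin_y * py,
            pose_xy[1] + sin_y * px + cos_y * py)
```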
  • the local pose data 701 may be determined, for example by a localization system of the autonomous vehicle, without using sensor data from the steerable sensor. In this way, inputs to the sensor field-of-view system 702 may be independent of the steerable sensor 422. It may be desirable, in some examples, to keep the output of the sensor field-of-view system 702 independent from sensor data generated by the steerable sensor. This, in some examples, may prevent destructive feedback.
  • the local module 706 may generate a field-of-view command that is provided to the device module 708.
  • the device module 708 may convert the field-of-view command to a corresponding command signal that may be provided to the steerable sensor 710.
  • the device module 708 may provide the command signal to the steerable sensor 710, which may cause the steerable sensor 710 to assume the field-of-view position described by the command signal.
  • the command signal is a digital-to-analog count.
  • the digital-to-analog count, for example, may be provided to a digital-to-analog converter, which may generate an analog signal that is used by the steerable sensor 710 to assume the desired field-of-view position.
  • the digital-to-analog converter may be a component of the sensor field-of-view system 702, a component of the steerable sensor 710, and/or a component of the autonomy system 200.
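The conversion performed by the device module can be sketched as a linear mapping from a commanded angle onto a digital-to-analog count; the angle range and bit depth below are placeholders, not values from the disclosure.

```python
def angle_to_dac_count(angle_deg: float,
                       min_angle_deg: float = -10.0,
                       max_angle_deg: float = 10.0,
                       dac_bits: int = 12) -> int:
    """Map a commanded field-of-view angle linearly onto a digital-to-analog count."""
    full_scale = (1 << dac_bits) - 1
    clamped = max(min_angle_deg, min(max_angle_deg, angle_deg))
    fraction = (clamped - min_angle_deg) / (max_angle_deg - min_angle_deg)
    return round(fraction * full_scale)

print(angle_to_dac_count(0.0))  # 2048 (mid-scale for a 12-bit converter)
```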
  • the outer loop is configured to cycle through candidate locations at different distances from the autonomous vehicle 402.
  • the outer loop may execute for every distance from the autonomous vehicle 402 that is within a range of distances from the autonomous vehicle 402, where the range of distances may be indicated by a goal condition.
  • the inner loop may consider a range of candidate locations at different azimuth positions along the considered distance. In this way, the structure of the process flow 800 may implement a goal condition that the goal location be within a range of distances from the autonomous vehicle 402 corresponding to the range of distances considered by the outer loop.
  • the process flow 800 may also implement other goal conditions. For example, if the outer loop is executed starting from distances that are farthest from the autonomous vehicle 402, then the process flow 800 may return a goal location within the range of distances that is farthest from the autonomous vehicle 402. Conversely, if the outer loop is executed starting from a distance in the range that is closest to the autonomous vehicle 402, then the process flow 800 may return a goal location within the range of distances that is closest to the autonomous vehicle 402.
  • the autonomy system 200 may determine, at operation 814, whether the current distance from the autonomous vehicle 402 is the last distance within the range 908 (illustrated in FIG. 9) to be considered. If the current distance is the last distance within the range 908 to be considered, and no goal location has been returned yet, then the autonomy system 200 may return an error at operation 818. If the current distance is not the last distance within the range 908 to be considered, the autonomy system 200 may begin a next execution of the outer loop by moving to a next distance at operation 816 and returning to operation 802 to access map data at the next distance.
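The outer/inner loop structure of the process flow 800 can be summarized in a short sketch; the callbacks that generate candidate locations and check the goal conditions are placeholders for the operations described above.

```python
from typing import Callable, Optional, Sequence, Tuple

Location = Tuple[float, float]

def find_goal_location(distances_m: Sequence[float],
                       azimuths_deg: Sequence[float],
                       candidate_at: Callable[[float, float], Location],
                       meets_goal_conditions: Callable[[Location], bool],
                       farthest_first: bool = True) -> Optional[Location]:
    """Outer loop over candidate distances, inner loop over azimuth positions.
    Ordering distances farthest-first returns the farthest passing location;
    nearest-first returns the nearest. Returns None when every distance is
    exhausted, mirroring the error case at operation 818."""
    for distance in sorted(distances_m, reverse=farthest_first):  # outer loop
        for azimuth in azimuths_deg:                              # inner loop
            candidate = candidate_at(distance, azimuth)
            if meets_goal_conditions(candidate):
                return candidate
    return None
```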
  • FIG. 11 is a flowchart showing one example of a process flow 1100 that may be executed by the autonomy system 200 to implement a goal condition that a candidate location not be occluded to the steerable sensor.
  • the process flow 1100 may be executed as part of the operation 806 of the process flow 800 to determine whether a candidate location is a goal location.
  • a candidate location on the ramp may be considered a goal location for the steerable sensor 422.
  • various techniques described herein for managing a field of view of the steerable sensor may also be used to select sensor data that may be of interest to the autonomous vehicle 402.
  • the autonomous vehicle 402 may include various sensors that generate high resolution data.
  • the autonomous vehicle 402 may have limited processing resources. Accordingly, it may not be possible to analyze all of the data generated by the various sensors of an autonomous vehicle. To address this, the autonomy system 200 may implement an area of interest system.
  • the first computing system 20 can obtain the one or more models 26 using communication interface(s) 27 to communicate with the second computing system 40 over the network(s) 60.
  • the first computing system 20 can store the model(s) 26 (e.g., one or more machine-learned models) in the memory 23.
  • the first computing system 20 can then use or otherwise implement the models 26 (e.g., by the processors 22).
  • the first computing system 20 can implement the model(s) 26 to localize an autonomous platform in an environment, perceive an autonomous platform’s environment or objects therein, plan one or more future states of an autonomous platform for moving through an environment, control an autonomous platform for interacting with an environment, etc.
  • the network(s) 60 can be any type of network or combination of networks that allows for communication between devices.
  • the network(s) 60 can include one or more of a local area network, wide area network, the Internet, secure network, cellular network, mesh network, peer-to-peer communication link or some combination thereof and can include any number of wired or wireless links. Communication over the network(s) 60 can be accomplished, for instance, through a network interface using any type of protocol, protection scheme, encoding, format, packaging, etc.
  • FIG. 18 illustrates one example computing ecosystem 10 that can be used to implement the present disclosure. Other systems can be used as well.
  • the first computing system 20 can include the model trainer(s) 47 and the training data 48.
  • the model(s) 26, 46 can be both trained and used locally at the first computing system 20.
  • the computing system 20 may not be connected to other computing systems. Additionally, components illustrated or discussed as being included in one of the computing systems 20 or 40 can instead be included in another one of the computing systems 20 or 40.

Abstract

Various examples relate to systems and methods for steering a field-of-view of a first sensor (422) positioned on an autonomous vehicle (402). In one example, at least one processor (404) selects a goal location (432) on at least one travel way in an environment (400) of the autonomous vehicle (402). Selecting the goal location (432) is based at least in part on map data (410) describing at least one travel way in an environment of the autonomous vehicle (402) and on pose data (408) describing a position of the autonomous vehicle (402) in the environment (400). The one or more processors (404, 414) determine a field-of-view position (426, 428) for directing the first sensor (420) toward the goal location (432) based at least in part on sensor position data (412). The one or more processors (404, 414) send a field-of-view command to the first sensor (422). The field-of-view command modifies the field-of-view (426, 428) of the first sensor (422) based on the field-of-view position.
PCT/US2023/086146 (priority date 2022-12-30, filing date 2023-12-28) Autonomous vehicle with steerable sensor WO2024145420A1 (fr)

Applications Claiming Priority (2)

Application Number | Priority Date
US63/478,001 | 2022-12-30
US18/158,854 | 2023-01-24

Publications (1)

Publication Number | Publication Date
WO2024145420A1 (fr) | 2024-07-04


Similar Documents

Publication Publication Date Title
US11112796B2 (en) Object motion prediction and autonomous vehicle control
US10803325B2 (en) Autonomous vehicle lane boundary detection systems and methods
US11480435B2 (en) Map generation systems and methods
US10656657B2 (en) Object motion prediction and autonomous vehicle control
US20200302662A1 (en) System and Methods for Generating High Definition Maps Using Machine-Learned Models to Analyze Topology Data Gathered From Sensors
US20190147254A1 (en) Autonomous Vehicle Lane Boundary Detection Systems and Methods
US20190147255A1 (en) Systems and Methods for Generating Sparse Geographic Data for Autonomous Vehicles
US20210188316A1 (en) Systems and Methods for Generating Behavioral Predictions in Reaction to Autonomous Vehicle Movement
US20210278852A1 (en) Systems and Methods for Using Attention Masks to Improve Motion Planning
US11618444B2 (en) Methods and systems for autonomous vehicle inference of routes for actors exhibiting unrecognized behavior
US20220032970A1 (en) Systems and Methods for Mitigating Vehicle Pose Error Across an Aggregated Feature Map
US20190163201A1 (en) Autonomous Vehicle Sensor Compensation Using Displacement Sensor
US11880203B2 (en) Methods and system for predicting trajectories of uncertain road users by semantic segmentation of drivable area boundaries
US11851083B2 (en) Methods and system for constructing data representation for use in assisting autonomous vehicles navigate intersections
CA3126236A1 (fr) Systems and methods for processing sensor data packets and updating spatial memory for robotic platforms
US11551494B2 (en) Predictive mobile test device control for autonomous vehicle testing
WO2022165498A1 (fr) Methods and system for generating a lane-level map for an area of interest for navigation of an autonomous vehicle
WO2022081399A1 (fr) System for anticipating the future state of an autonomous vehicle
US11755469B2 (en) System for executing structured tests across a fleet of autonomous vehicles
US20240217542A1 (en) Autonomous vehicle steerable sensor management
US20240124028A1 (en) Autonomous vehicle blind spot management
WO2024145420A1 (fr) Autonomous vehicle with steerable sensor
US20240103522A1 (en) Perception system for an autonomous vehicle
US11801871B1 (en) Goal-based motion forecasting
US11884292B1 (en) Radar sensor system for vehicles