CN114830202A - Planning for unknown objects by autonomous vehicles - Google Patents

Planning for unknown objects by autonomous vehicles

Info

Publication number
CN114830202A
CN114830202A (application number CN201880030067.6A)
Authority
CN
China
Prior art keywords
vehicle
sensors
hypothetical
sensor
environment
Prior art date
Legal status
Pending
Application number
CN201880030067.6A
Other languages
Chinese (zh)
Inventor
E. Frazzoli
Baoxing Qin
Current Assignee
Motional AD LLC
Original Assignee
Nutonomy Inc
Priority date
Filing date
Publication date
Priority claimed from US15/451,703 external-priority patent/US10234864B2/en
Priority claimed from US15/451,734 external-priority patent/US10281920B2/en
Priority claimed from US15/451,747 external-priority patent/US10095234B2/en
Application filed by Nutonomy Inc filed Critical Nutonomy Inc
Publication of CN114830202A publication Critical patent/CN114830202A/en

Classifications

    • G08G1/164: Traffic control systems for road vehicles; anti-collision systems; centralised systems, e.g. external to vehicles
    • G08G1/161: Traffic control systems for road vehicles; anti-collision systems; decentralised systems, e.g. inter-vehicle communication
    • G08G1/163: Traffic control systems for road vehicles; anti-collision systems; decentralised systems involving continuous checking
    • G08G1/165: Traffic control systems for road vehicles; anti-collision systems for passive traffic, e.g. including static obstacles, trees
    • G08G1/166: Traffic control systems for road vehicles; anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • B60W2556/05: Conjoint control of vehicle sub-units; input parameters relating to data; big data

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

A world model of the environment of a vehicle is maintained. Hypothetical objects in the environment that are not perceivable by the vehicle's sensors are included in the world model.

Description

Planning for unknown objects by autonomous vehicles
Cross reference to related applications
This application claims the benefit of U.S. Application Serial No. 15/451,703, filed March 7, 2017, U.S. Application Serial No. 15/451,734, filed March 7, 2017, and U.S. Application Serial No. 15/451,747, filed March 7, 2017, each of which is hereby incorporated by reference in its entirety.
Background
In making driving decisions, typical autonomous vehicle (AV) systems take into account objects, such as other vehicles and obstacles, that the AV system knows are in the environment of the AV, either because the sensor systems on the AV observe them or because they are identified by maps or other data sources. To make driving decisions, the AV system may maintain a world model that includes the objects known to be in the environment of the AV. Vehicles and obstacles that cannot be perceived and are not otherwise known to be present also pose a challenge for the AV in making good driving decisions based on available data.
Disclosure of Invention
The techniques described herein enable an AV system to plan for risks associated with objects that may be present in the environment of the AV but of which the AV is not aware. The AV system can then make driving decisions that take into account possible unknown objects in its environment, including relatively safe driving decisions in view of potentially unsafe scenarios. To this end, in some implementations of these techniques, the AV system determines the boundary between the perceived world and the non-perceived world in the world model. Based on various factors and in various ways, the AV system then hypothesizes the presence and attributes of possible unknown objects in the non-perceived world (we sometimes refer to such an object as a "dark object"). These hypothetical objects and their attributes are then added to the world model, so that driving decisions accommodate not only known objects in the perceived world but also unknown objects (so-called "dark objects") in the non-perceived world.
In general, in one aspect, a world model of the environment of a vehicle is maintained. Maintaining the world model includes accessing a database of road network information, or using data from one or more sensors, or both. The world model includes hypothetical objects in the environment that are not perceivable by the vehicle's sensors. A hypothetical object may include a moving object, or an object using a travel path from which the vehicle is excluded, or both. A hypothetical object may include at least one of: a second vehicle, a bicycle, a bus, a train, a pedestrian, and an animal. Including hypothetical objects in the world model includes probabilistically selecting the type of a hypothetical object and the attributes of the object based on previously observed objects in the environment. The attributes include size or speed or both. The world model includes one or more known objects in the environment that are sensed by the vehicle's sensors or otherwise known. The hypothetical objects and the known objects maintained in the world model are in different parts of the environment. The different parts of the environment include the perceived world and the non-perceived world. The perceived world is separated from the non-perceived world by a boundary.
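For illustration only, a world model that holds both known and hypothetical ("dark") objects might be sketched as follows in Python; the class names, fields, and overall structure are assumptions made for exposition, not the data structures of any particular implementation described herein.
```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Tuple


class ObjectKind(Enum):
    VEHICLE = "vehicle"
    BICYCLE = "bicycle"
    BUS = "bus"
    TRAIN = "train"
    PEDESTRIAN = "pedestrian"
    ANIMAL = "animal"


@dataclass
class WorldObject:
    kind: ObjectKind
    position: Tuple[float, float]   # (x, y) in a map frame, meters
    heading: float                  # radians
    speed: float                    # m/s
    size: Tuple[float, float]       # (length, width), meters
    hypothetical: bool = False      # True for "dark" objects


@dataclass
class WorldModel:
    # Polygon (list of vertices) separating the perceived world from the non-perceived world.
    perceived_boundary: List[Tuple[float, float]]
    known_objects: List[WorldObject] = field(default_factory=list)
    dark_objects: List[WorldObject] = field(default_factory=list)

    def all_objects(self) -> List[WorldObject]:
        # The planner can treat known and hypothetical objects uniformly.
        return self.known_objects + self.dark_objects
```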
Implementations may include detection of the boundary. The detection of the boundary uses data from one or more sensors to distinguish observable ground from foreground that blocks a portion of the ground. The one or more sensors include an on-vehicle sensor or an off-vehicle sensor or both.
The position of the ego vehicle is determined based on a road network database and one or more sensors. Traffic lane information is also queried from the road network database. Stored data obtained from any database or from any sensor may be used to infer the likely locations of hypothetical objects. The determination of the positions of hypothetical objects is based on querying traffic lane information from a database and discretizing the traffic lanes into discrete points. Implementations may generate an unknown skeleton consisting of the discrete lane points that cannot be perceived by the sensors of the vehicle. Hypothetical objects in the world model are generated by iterating over the discrete points as follows: a representative shape is generated at each discrete point of the unknown skeleton and evaluated to determine whether it lies entirely within the non-perceived world. If the representative shape lies entirely within the non-perceived world, it is treated as a hypothetical object.
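A minimal sketch of this skeleton-based generation, assuming lane centerlines are available as lists of discretized (x, y) points and using the shapely geometry library for the containment test, is given below; the function name, the disc-shaped footprint, and the vehicle dimensions are illustrative assumptions.
```python
from shapely.geometry import Point, Polygon

def generate_dark_objects(lane_centerlines, perceived_world: Polygon,
                          veh_length=5.0, veh_width=2.0):
    """Place hypothetical objects on lane points that the sensors cannot perceive.

    lane_centerlines: list of lanes, each a list of discretized (x, y) points
    taken from the road network database. perceived_world: polygon of the
    perceived world; everything outside it is the non-perceived world.
    """
    dark_objects = []
    for lane in lane_centerlines:
        # The "unknown skeleton": discrete lane points not inside the perceived world.
        unknown_skeleton = [p for p in lane if not perceived_world.contains(Point(p))]
        for (x, y) in unknown_skeleton:
            # Representative shape at the skeleton point (a simple disc footprint here).
            shape = Point(x, y).buffer(max(veh_length, veh_width) / 2.0)
            # Keep the shape only if it lies entirely within the non-perceived world,
            # i.e., it does not intersect the perceived world at all.
            if not shape.intersects(perceived_world):
                dark_objects.append({"position": (x, y), "footprint": shape})
    return dark_objects
```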
Implementations may apply temporal filtering to determine the locations of hypothetical objects. The filtering includes smoothing the unknown skeleton using a forward-propagated unknown skeleton generated by moving the previous unknown skeleton forward along the traffic lane.
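One plausible reading of this filtering step is sketched below in lane arc-length coordinates (meters along the lane): the previous unknown skeleton is propagated forward by an assumed speed, and a newly computed skeleton point is kept only when the propagated skeleton also marks a nearby lane position as unknown. The propagation speed, the tolerance, and the intersection-style smoothing rule are assumptions, not requirements of the technique.
```python
def propagate_skeleton(old_skeleton_s, assumed_speed, dt):
    """Move each previous skeleton point forward along the lane by assumed_speed * dt."""
    return [s + assumed_speed * dt for s in old_skeleton_s]


def smooth_skeleton(new_skeleton_s, propagated_s, tolerance=2.0):
    """Keep a new skeleton point only if the forward-propagated previous skeleton
    also places an unknown point nearby, suppressing frame-to-frame flicker of
    the perceived / non-perceived boundary."""
    return [s for s in new_skeleton_s
            if any(abs(s - q) <= tolerance for q in propagated_s)]
```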
Hypothetical objects in the world model are associated with one or more attributes. One or more of the attributes are associated with a possible motion state of the hypothetical object. The motion state may be a stationary condition, or a moving condition, or a speed, or a direction of movement, or a combination of two or more of these. The speed is set to be less than or equal to a predetermined maximum value. The predetermined maximum value may be a speed limit. In some cases, the predetermined maximum is a quantity derived from other objects observed simultaneously or previously in the environment. The predetermined maximum may be a quantity derived from historical data, road configuration, traffic regulations, events, time, weather conditions, or combinations of two or more of these.
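As a hedged illustration of how such a predetermined maximum might be assembled from the sources listed above, the sketch below collects candidate values and returns the largest, a conservative worst-case assumption for planning; the function name and the rule for combining candidates are not specified by the text.
```python
def assumed_max_speed(speed_limit, observed_speeds=(), historical_max=None):
    """Choose the predetermined maximum speed for a hypothetical object.

    speed_limit: posted speed limit for the lane (m/s).
    observed_speeds: speeds of objects observed simultaneously or previously.
    historical_max: optional value derived from historical data, road
    configuration, traffic regulations, events, time, or weather conditions.
    """
    candidates = [speed_limit]
    candidates.extend(observed_speeds)
    if historical_max is not None:
        candidates.append(historical_max)
    return max(candidates)
```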
The one or more sensors may include one or more of the following: radar sensors, lidar sensors, and camera sensors. The camera sensor includes a stereo camera sensor or a monocular camera sensor or both.
Implementations include updating a trajectory of the vehicle and executing the trajectory of the vehicle based on the world model. The vehicle comprises an autonomous vehicle.
In general, in one aspect, data representative of an observable portion of the environment of a vehicle is received from a sensor. Data representing a non-observable portion of the environment is generated, including data representing at least one hypothetical object in the non-observable portion of the environment. Commands for operation of the vehicle within the environment are generated. The commands depend on the data representing the observable portion of the environment and on the data representing the hypothetical object in the non-observable portion of the environment. The hypothetical object may comprise a moving object. In some cases, the hypothetical object comprises an object using a travel path from which the vehicle is excluded. The hypothetical object may be at least one of: a vehicle, a bicycle, a bus, a train, a pedestrian, and an animal.
Generating data representing the non-observable portion of the environment includes probabilistically selecting a type of the hypothetical object and attributes of the hypothetical object based on previously observed objects in the environment. In some examples, the hypothetical object comprises a vehicle and the attributes comprise a size and a speed.
The observable portion is separated from the non-observable portion by a boundary. Techniques include detection of the boundary. Detecting the boundary includes using data from the sensors to distinguish observable ground from foreground that blocks a portion of the ground. Generating data representative of the non-observable portion of the environment includes one or more of the following data processing steps: (1) using stored data to infer possible locations of hypothetical objects; (2) querying traffic lane information from a road network database; (3) determining a location of the vehicle based on the road network database and the one or more sensors; (4) querying a database for traffic lane information and discretizing the traffic lanes into discrete points; (5) generating an unknown skeleton of the discrete lane points that cannot be perceived by the sensor; (6) generating a representative shape at each discrete point of the unknown skeleton; and (7) evaluating whether the representative shape lies entirely within the non-observable portion. A representative shape that lies entirely within the non-observable portion may be considered a hypothetical object.
Implementations may apply temporal filtering to determine the locations of hypothetical objects. The filtering includes smoothing the unknown skeleton using a forward-propagated unknown skeleton generated by moving the previous unknown skeleton forward along the traffic lane.
The hypothetical object is assigned one or more attributes. One or more of the attributes are associated with a possible motion state of the hypothetical object. The motion state includes one or more of the following factors: a stationary condition, a moving condition, a speed, and a direction of movement. The speed is set to be less than or equal to a predetermined maximum value. The predetermined maximum value may be set to a speed limit. In some cases, the predetermined maximum is a quantity derived from other objects observed simultaneously or previously in the environment. The predetermined maximum may be derived from historical data, road configuration, traffic regulations, events, time, weather conditions, or combinations of two or more of these.
Implementations include accessing a database having road network information, or using data from a second set of sensors, or both. The sensor or the second set of sensors comprises one or more of: radar sensors, lidar sensors, and camera sensors. A camera sensor may be a stereo camera sensor, a monocular camera sensor, or both.
Generating commands for operating the vehicle includes updating the trajectory of the vehicle, or executing the trajectory of the vehicle, or both. The vehicle comprises an autonomous vehicle.
In general, in one aspect, a technique generates commands to cause an autonomous vehicle to drive at particular speeds on a road network and make particular turns to reach a target location. The commands are updated in response to data representing an assumed speed and direction of movement of a hypothetical vehicle also traveling on the road network. The commands are updated to reduce the risk of the autonomous vehicle colliding with another vehicle on the road network. The assumed speed and direction of movement are derived probabilistically based on vehicles previously observed in the environment.
The observable portion is separated from the non-observable portion by a boundary. The implementation includes detection of the boundary. The detection of the boundary uses data from one or more sensors to distinguish observable ground from foreground blocking a portion of the ground. The one or more sensors include sensors on the autonomous vehicle or sensors off the autonomous vehicle, or both.
Current data representing the assumed speed and direction of movement of the hypothetical vehicle may be generated based on known objects sensed by one or more sensors. In some cases, the data generation includes one or more of the following operations: querying traffic lane information from a road network database; using stored data to infer likely locations of the hypothetical vehicle; and determining a location of the autonomous vehicle based on the road network database and the one or more sensors. Inferring possible locations of the hypothetical vehicle includes: querying traffic lane information from a database and discretizing the traffic lanes into discrete points; generating an unknown skeleton of the discrete lane points that cannot be perceived by the sensor; generating a representative shape at each discrete point of the unknown skeleton; and evaluating whether the representative shape lies entirely within the non-perceived world. Representative shapes that lie entirely within the non-perceived world are considered hypothetical vehicles.
The techniques may apply temporal filtering to determine the location of the hypothetical vehicle. The filtering process smooths the unknown skeleton using a forward-propagated unknown skeleton generated by moving the previous unknown skeleton forward along the traffic lane.
The hypothetical vehicle is assigned one or more attributes. One or more of the attributes are associated with a likely motion state of the hypothetical vehicle. The motion state includes a stationary condition. The hypothetical speed is set to be less than or equal to a predetermined maximum value. The predetermined maximum value may be a speed limit or a calculated quantity. The quantity may be derived from other objects observed simultaneously or previously in the environment. The quantity may be derived from historical data, road configuration, traffic rules, events, time, weather conditions, or combinations of two or more of these.
Access to a database having road network information is enabled. Data from one or more sensors is used. The one or more sensors include a radar sensor, or a lidar sensor, or a camera sensor, or a combination of two or more thereof. The camera sensor includes a stereo camera sensor, or a monocular camera sensor, or both.
In general, in one aspect, a technique includes an apparatus that includes an autonomous vehicle. The autonomous vehicle comprises controllable devices configured to cause the autonomous vehicle to move on a road network; a controller for providing commands to the controllable devices; and a computing element for updating the commands in response to current data representing an assumed speed and direction of movement of a hypothetical vehicle also driving on the road network. The assumed speed and direction of movement are derived probabilistically based on vehicles previously observed in the environment.
The observable portion is separated from the non-observable portion by a boundary. The computing element detects the boundary. The detection of the boundary includes using data from one or more sensors to distinguish observable ground from foreground that blocks a portion of the ground. The one or more sensors include sensors on the autonomous vehicle, sensors off the autonomous vehicle, or both.
The generation of the current data representing the assumed speed and direction of movement of the hypothetical vehicle may be based on known objects sensed by one or more sensors. In some cases, the data generation includes one or more of the following operations: querying traffic lane information from a road network database; using stored data to infer likely locations of the hypothetical vehicle; and determining a location of the autonomous vehicle based on the road network database and the one or more sensors. Inferring possible locations of the hypothetical vehicle includes: querying traffic lane information from a database and discretizing the traffic lanes into discrete points; generating an unknown skeleton of the discrete lane points that cannot be perceived by the sensor; generating a representative shape at each discrete point of the unknown skeleton; and evaluating whether the representative shape lies entirely within the non-perceived world. Representative shapes that lie entirely within the non-perceived world are considered hypothetical vehicles.
The techniques may apply temporal filtering to determine the location of the hypothetical vehicle. The filtering process smooths the unknown skeleton using a forward-propagated unknown skeleton generated by moving the previous unknown skeleton forward along the traffic lane.
The hypothetical vehicle is assigned one or more attributes. One or more of the attributes are associated with a likely motion state of the hypothetical vehicle. The motion state includes a stationary condition. The hypothetical speed is set to be less than or equal to a predetermined maximum value. The predetermined maximum value may be a speed limit or a calculated quantity. The quantity may be derived from other objects observed simultaneously or previously in the environment. The quantity may be derived from historical data, road configuration, traffic rules, events, time, weather conditions, or combinations of two or more of these.
The computing element accesses a database having road network information. Data from one or more sensors is used. The one or more sensors include a radar sensor, or a lidar sensor, or a camera sensor, or a combination of two or more thereof. The camera sensor includes a stereo camera sensor or a monocular camera sensor or both.
These and other aspects, features and implementations may be expressed as methods, apparatus, systems, components, program products, methods of doing business, means or steps for performing functions, and in other ways.
These and other aspects, features and implementations will become apparent from the following description, including the claims.
Drawings
Fig. 1 shows an example of an AV system.
Fig. 2 to 6 show examples of road scenarios.
Fig. 7 to 9 show examples of processes.
Fig. 10 shows an example of a scanning process.
Fig. 11 to 15 show examples of boundary determination.
Fig. 16 shows an example of dark vehicle generation.
Fig. 17 shows an example of temporal filtering.
Fig. 18 shows an example of setting the expected future trajectory.
Detailed Description
The phrase "environment of the AV" is used herein to broadly include, for example: the area, geography, location (location), surrounding area (vicinity) or road configuration in which the AV is located or driven, including road networks and features, the environment built, the current situation and objects in the environment. The term "world" is sometimes used interchangeably herein with "environment".
The term "trajectory" is used herein to broadly include any path or route from one place to another, for example, a path from a pickup location to a drop-off location.
The term "traffic lane" is used herein to broadly include any type of lane for a moving object to travel (e.g., unpaved surfaces, sidewalks, intersections, pedestrian passageways, roads, streets, highways, truck lanes, vehicle lanes, bike lanes, bus lanes, electric lanes, railways, acceleration lanes, merge lanes, deceleration lanes, turn lanes, overtaking lanes, climbing areas, slow-going lanes, operation lanes, auxiliary lanes, ramps, shoulders, emergency lanes, trouble lanes, transit lanes, express lanes, collection lanes, exclusive lanes, ride share lanes, toll lanes, parking lanes, fire lanes, and slow lanes).
The term "object" is used herein to broadly include vehicles (e.g., automobiles, trains, trucks, buses, bicycles, motorcycles, trains, trams, watercraft, airplanes, and spacecraft), humans, animals, signs, poles, curbs, traffic cones, obstacles, moving signs, trees, shrubs, greens, parks, railways, worksites, stones, boulders, graves, rivers, lakes, ponds, floods, logs, grasslands, snow heaps, deserts, sands, buildings, and obstacles.
The term "world model" is used herein to broadly include a representation of the environment of the AV.
The term "region" is used broadly herein to include a physical region in an environment such as an AV, regardless of whether an object is present.
The term "perceived world" is used herein to broadly refer to a region or object, or a property of a region or object, or a combination of regions or objects or properties, that is perceived or observed or known in the environment.
The term "non-perceived world" is used herein to broadly refer to an area or object, or a property of an area or object, or a combination of areas or objects or a combination of properties, that is not perceptible or non-observable or unknown in an environment.
The term "dark object" is used herein to broadly include an unknown object in the world that is not perceived. Information about the non-perceived dark objects of the world in the world model may be inferred or simulated or imagined or generated. The term "unknown object" is sometimes used interchangeably herein with "dark object".
The term "destination" or "destination location" is used herein to broadly include, for example, anywhere the AV is to arrive, including, for example, temporary drop-off locations, final drop-off locations, and destinations, etc.
Although the technology is described herein based on AVs, the technology is equally applicable to semi-autonomous vehicles, such as so-called Level 2 and Level 3 vehicles (see SAE International standard J3016: Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems, which is incorporated herein by reference in its entirety, for more details on the classification of levels of autonomy in vehicles), which attempt to control the steering or speed of the vehicle. Based on an analysis of sensor inputs, a Level 2 or 3 system may automate certain vehicle operations, such as steering and braking, under certain driving conditions. Level 2 and Level 3 systems on the market typically consider only the perceived world, e.g., obstacles directly perceived by the vehicle's sensors, during the decision-making process. The techniques described herein may benefit such semi-autonomous vehicles. Further, the techniques described herein may also assist the driving decisions of human-operated vehicles.
AV
As shown in fig. 1, a typical activity of AV 10 is to safely and reliably autonomously drive through an environment 12 to a target location 14 while avoiding vehicles, pedestrians, cyclists and other obstacles 16 and adhering to the rules of the road. The ability of the AV to perform this activity is commonly referred to as autonomous driving ability.
The autonomous driving capabilities of the AV are typically supported by an array of technologies 18 and 20 (e.g., hardware, software, and stored and real-time data), which are collectively referred to herein as an AV system 22. In some implementations, one or some or all of the techniques reside on the AV. In some cases, one or some or all of the techniques are at another location, such as at a server (e.g., in a cloud computing infrastructure). The components of the AV system may include one or more or all of the following (among others).
1. A memory 32, the memory 32 for storing machine instructions and various types of data.
2. One or more sensors 24, the one or more sensors 24 for measuring or inferring or measuring and inferring states and conditions of the AV, such as the position of the vehicle, linear and angular velocities and accelerations, and heading (i.e., the orientation of the front end of the AV). For example, such sensors may include, but are not limited to: a GPS; an inertial measurement unit that measures vehicle linear acceleration and angular rate; individual wheel speed sensors for measuring or estimating individual wheel slip rates; individual wheel brake pressure or brake torque sensors; an engine torque or individual wheel torque sensor; and steering wheel angle and angular rate sensors.
3. One or more sensors 26, the one or more sensors 26 for measuring an attribute of the environment of the AV. For example, such sensors may include, but are not limited to: a laser radar; a radar; monocular or stereo cameras in the visible, infrared and/or thermal spectrum; an ultrasonic sensor; a time-of-flight (TOF) depth sensor; and temperature and rain sensors.
4. One or more devices 28 for communicating measured or inferred or measured and inferred attributes of the state and conditions of other vehicles, such as position, linear and angular velocity, linear and angular acceleration, and linear and angular heading. These devices include vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication devices as well as devices for wireless communication over point-to-point or ad hoc networks or both. Devices may operate across the electromagnetic spectrum (including radio and optical communications) or other media (e.g., acoustic communications).
5. One or more data sources 30 for providing historical or real-time or predicted information about the environment 12, or a combination of any two or more thereof, including, for example, traffic congestion updates and weather conditions. Such data may be stored on the on-vehicle memory storage unit 32 or transmitted to the vehicle from a remote database 34 via wireless communication.
6. One or more data sources 36 for providing digital road map data extracted from the GIS database, possibly including one or more of: high-precision maps of road geometric attributes; a map describing road network connection attributes; maps describing road physics attributes (such as the number of motor and non-motor lanes, lane width, lane traffic direction, lane marker type and location); and maps describing the spatial location of road features such as crosswalks, various types of traffic signs (e.g., stop, yield), and various types of traffic signals (e.g., red-yellow-green indicators, blinking yellow or red indicators, right or left turn arrows).
7. One or more data sources 38, the one or more data sources 38 for providing historical information about driving attributes (e.g., typical speed and acceleration profiles) of vehicles that have previously traveled along the local road segment at similar times of day. Such data may be stored on a memory storage unit 32 on the AV, or transmitted to the AV from a remotely located database 34 by wireless communication, or a combination of both.
8. One or more computer systems 40, the one or more computer systems 40 located on the AV, for executing algorithms for generating control actions (e.g., processes 42) online (i.e., in real-time on the AV) based on real-time sensor data and prior information, thereby allowing the AV to perform its autonomous driving capabilities.
9. One or more interface devices 44 (e.g., a display, a mouse, a track point, a keyboard, a touch screen, a speaker, a biometric reader, and a gesture reader) coupled to the computer system 40 for providing various types of information and alerts to, and receiving input from, the occupants of the AV. The coupling may be wireless or wired. Any two or more of the interface devices may be integrated into a single device.
10. One or more wireless communication devices 46 for communicating data from the remotely located database 34 to the AV and communicating vehicle sensor data or data related to drivability to the remotely located database 34.
11. Functional devices and AV features 48 equipped to receive and act upon commands from the computer system for driving (e.g., steering, acceleration, deceleration, gear selection) and for auxiliary functions (e.g., turn indicator activation).
12. An inference algorithm 101, the inference algorithm 101 for analyzing information about the sensed world and the non-sensed world.
World model
In implementations involving operation of the AV, the AV is designed to drive through the environment without direct human control or supervisory input, while avoiding collisions with obstacles and complying with road regulations (e.g., operating regulations or driving preferences). To accomplish such autonomous driving, the AV (or more specifically, a computer system or data processing device associated with (and in some cases attached to) the vehicle) or AV system typically first builds a world model.
In some implementations, the world model includes a representation of the environment of the AV, e.g., a world model built using data from a geo-location device, or a map, or a geographic information system, or a combination of two or more thereof, and from sensors observing any area or object. To build the world model, the AV system collects data from various sensors (e.g., lidar, monocular or stereo cameras, and radar) mounted to or attached to, or placed within, or outside of the AV. In some cases, data is collected from some sensor that is not on or within the AV, for example, from another vehicle, building, traffic light, street light, a person's mobile phone, or a combination thereof in a nearby or remote location. The AV system then analyzes the collected data to extract information about areas and objects in the environment (e.g., location and motion attributes). The AV may also rely on information collected by onboard sensors, off-board sensors, vehicle-to-vehicle communications, vehicle-to-infrastructure communications, or information otherwise obtained from other data sources.
Given a world model, the AV system employs an algorithmic process to automatically generate and execute trajectories through the environment toward a target. The target is usually provided by another algorithmic process that relies on human input or automated computational analysis.
In various applications, the world model includes representations of regions and objects in both the perceived world and the non-perceived world. Objects in the perceived world include objects that are observable by sensors on or off the AV, as well as objects about which the AV system has received information from other data sources.
Sensors (regardless of their type, and whether on or off the AV) typically have a limited sensing range; that is, a sensor can observe an area or object only up to certain limits of physical measurements such as distance, width, vertical extent, horizontal extent, orientation, speed, electromagnetic amplitude and frequency, audio amplitude and frequency, weight, and pressure. Areas or objects or properties beyond the limited sensing range of a sensor may not be observable or determinable by the AV system. Furthermore, because some types of sensors collect sensing data along direct lines of sight, there may be areas or objects in the environment that are blocked from a sensor's view by other objects lying along those lines of sight.
AV systems typically have information about static attributes of the world (both the perceived and the non-perceived parts), such as the road network, which typically comes from one or more data sources (e.g., maps or geographic information systems, or both), and about dynamic attributes of objects in the perceived world. In contrast, AV systems lack information about the dynamic state of the non-perceived world (i.e., information about movable or moving or changing objects such as vehicles, pedestrians, and animals, and their attributes, e.g., location, orientation, and velocity). The techniques described herein present methods for handling this lack of information about the dynamic state of the non-perceived world.
In some cases, the lack of information about the non-perceived world may not affect the decisions made by the AV system. In other cases, the lack of information about the non-perceived world may be crucial to the decision-making of the AV system. The following exemplary scenarios illustrate these situations.
In a first scenario, the AV may travel on a straight road, with the sensing range of its sensors unobstructed by any object in the direction of travel. However, the limited sensing range of the sensors means that the sensors can perceive only a limited part of the AV's environment. The area outside the sensing range of the sensors is part of the non-perceived world. Nevertheless, the sensing range of the AV's sensors may give the AV system sufficient information about the environment to make decisions, such as at what speed to drive and when to brake to avoid colliding with objects. The lack of information about the non-perceived world may not affect the decisions of the AV system, because previously unobserved areas or objects in the non-perceived world that become known to the AV system as it moves may be far enough away to provide the AV system with enough time and distance to react safely once they are observed.
Fig. 2 shows the first scenario. The values of speed and distance described in this scenario are for illustration, and the values may vary. In this example, assume that AV 201 has a sensor with a range 210 of 200 m, is traveling at 30 m/s, and requires 150 m to come to a full stop (i.e., a stopping distance 211). The shaded area 205 represents the portion of the world that the sensor can observe given the sensor range and lateral field of view, and represents the perceived world of the AV if there are no other sensors to consider. The area outside the boundary of the shaded area is not observable by the sensor and is therefore the non-perceived world. An object, such as vehicle 202, may be present in the non-perceived world at the very edge of the sensor range (e.g., 200 m away). Even if the vehicle 202 is stopped, the presence of such an object will not affect the driving decisions of the AV system, because once the AV observes the object (as the object moves from the non-perceived world into the perceived world), the AV can stop without risk of colliding with the stationary object 202.
A second scenario is shown in fig. 3. The values of speed and distance described in this scenario are for illustration, and the values may vary. The situation is similar to fig. 2, but the sensing range 310 of the sensor of the AV system is shorter, assumed to be limited to 60 m. The shaded area 305 represents the corresponding extent of the perceived world. In this case, a stationary object (e.g., vehicle 302) positioned at the boundary between the perceived world and the non-perceived world may be only 60 m from the AV, which is less than the stopping distance 311 (150 m) required for the AV to come to a full stop. Here the AV's current speed, its stopping distance, and the sensing range of the sensor combine to create a risk that the AV cannot stop before hitting an obstacle that is in the non-perceived world and about to enter the perceived world. To avoid such risks, the AV system may decide to reduce the speed of the AV before the obstacle enters the perceived world, thereby shortening its stopping distance to correspond to the sensing range of the sensor and thereby being able to stop before colliding with an object 302 located at or beyond the boundary of the non-perceived world. Thus, the driving decisions of the AV are influenced by the possibility that objects may be present in the non-perceived world. Accordingly, the techniques described herein include algorithms 321 in the AV system for considering the possible presence of unperceived objects, even though these objects are unknown because they lie outside the sensor range of the AV.
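The interplay of sensing range and stopping distance in these two scenarios can be made concrete with a small calculation; the sketch below uses the illustrative numbers given above and the usual constant-deceleration relation v^2 = 2 a d, ignoring reaction time.
```python
import math

def max_safe_speed(sensing_range_m, decel_mps2):
    """Largest speed from which the AV can stop within its sensing range (v^2 = 2 a d)."""
    return math.sqrt(2.0 * decel_mps2 * sensing_range_m)

# Deceleration implied by the first scenario: stopping from 30 m/s within 150 m.
decel = 30.0 ** 2 / (2.0 * 150.0)        # 3.0 m/s^2
print(max_safe_speed(200.0, decel))      # ~34.6 m/s: 30 m/s is safe with a 200 m sensing range
print(max_safe_speed(60.0, decel))       # ~19.0 m/s: with a 60 m range the AV should slow down
```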
A third scenario is one in which the sensing range of the sensors of the AV system is blocked by another moving object. Referring to fig. 4, an AV 401 may be following another vehicle 402 (referred to as a "preceding vehicle") immediately in front of it. In this case, the preceding vehicle may partially or completely block the sensing range of one or more of the AV's sensors, including, for example, limiting the view of traffic in the opposing traffic lane. The first shaded area 405 represents the perceived world of the AV. The second shaded area 406 represents an area that is part of the non-perceived world, but that would have been part of the perceived world if the sensing range and lateral field of view of the AV system's sensors were not limited by the preceding vehicle 402. In fig. 4, a vehicle 403 traveling in the opposing traffic lane belongs to the non-perceived world. However, a portion of the vehicle 403 lies within the shaded area 406, and therefore the vehicle 403 would be perceived by AV 401 if the AV's sensors were not partially blocked by the preceding vehicle 402. Consider, for example, a situation in which AV 401 needs to decide whether to pass the preceding vehicle by temporarily changing lanes into the opposing traffic lane. In this case, information about the non-perceived world, in particular about the possible presence of obstacles (such as the vehicle 403) in the opposing traffic lane, may be critical for making a safe decision about the passing maneuver. The presence of such objects may affect the driving decision of AV 401, because AV 401 may not be able to safely perform the passing maneuver without colliding with an obstacle. Thus, the techniques described herein include an algorithm 421 for handling this scenario in the AV system. In other words, the AV system can make decisions based not only on actual information about the perceived world, but also on hypothetical information about the non-perceived world.
A fourth scenario is one in which the sensing range of the sensors of the AV system is blocked by non-moving objects in its environment. For example, buildings, billboards, or other objects may be positioned along roads near an intersection, limiting the ability of the AV system to observe, for example, cross traffic traveling on the intersecting road. Referring to fig. 5, AV 502 is approaching an intersection where conflicting traffic is not controlled by traffic signals, and AV 502 intends to proceed straight along trajectory 505. However, a portion of trajectory 505 may conflict with trajectory 506 of another vehicle 503 traveling in another lane. In general, if a stop sign is present, AV 502 may stop, check for and yield to any conflicting traffic, and then continue straight ahead. However, the presence of buildings or other objects (such as the built environment 501) near the road may partially block the sensing range and lateral field of view of the AV's sensors, resulting in a reduced perceived world (represented by shaded area 504) that does not contain possibly conflicting vehicles (such as 503). In such cases, the techniques described herein may enable AV 502 to override its conventional practice of continuing straight ahead at a usual speed when no obstacles are observed in the perceived world, and instead consider how to deal with the lack of information about potential objects in the non-perceived world. The techniques may, for example, enable the AV to adopt a strategy of slowly continuing to move forward to increase the range its sensors can perceive and thereby increase the size of its perceived world. It is therefore important for AV 502 to be able to consider when its information about the perceived world is sufficient and when the potential risks in the non-perceived world are insignificant enough for it to safely travel forward.
A fifth scenario is one in which the geometry of the road limits the vertical aspect of the sensing range of the sensors, and hence the perceived extent of the world. Consider the situation shown in fig. 6. The AV 601 may be driving on a road segment 603 where the AV 601 is approaching the crest of a steep hill. In this case, the segment of road just beyond the crest (to the right of line 605) lies in the non-perceived world, since that segment of road cannot be observed by the AV's sensors. Thus, an object on the non-perceived road segment (such as vehicle 602) may become an obstacle for AV 601, because AV 601 may not be able to stop without colliding with it. The techniques described herein enable the AV system to account for the need to travel at a lower speed in order to have sufficient time to react to obstacles that may be present in the non-perceived world.
In general, although the AV system has no information about the non-perceived world or about the one or more objects that may be present in it, it is important for the AV to consider unknown objects that may be present in the non-perceived world. Unknown objects may affect the decision-making process of the AV system as well as the trajectories to be selected or adopted and executed by the AV system.
Accordingly, techniques are described herein for considering unknown objects that may be located in the non-perceived world, with the aim of improving the decision-making process of the AV system and ultimately improving the safety, comfort, or other aspects of the operation of the AV.
Dark object process
The broad concept of the techniques described herein is to have the world model systematically generate or infer the presence of hypothetical (i.e., unknown) objects in the non-perceived world. The generation or inference of dark objects (for example, with hypothetical positions, orientations, and speeds) is done in a manner that may change the speed or trajectory of the AV or other aspects of the decision-making process of the AV system. The dark object process is performed recursively over time. This technique helps the AV system make decisions in the face of uncertainty about the non-perceived world.
In some implementations, the autonomous driving capability of the AV is achieved through the process shown in fig. 7 or a similar process. Some implementations may not use all of the operations shown in fig. 7. Some implementations may divide the tasks differently than shown in fig. 7. Some implementations may include additional processes beyond those shown in fig. 7. In general, the AV system begins with a perception process 701 that localizes the AV (e.g., determines the current location of the AV) using data from on-board sensors, data from prior information stored in a database, and data from other data sources, and creates a world model 702 for the current time. The AV system can also create one or more predicted world models projected forward by one or more future time steps. The AV system then runs a planning and decision-making process 703 that uses the world model generated by the perception process to plan a trajectory in space-time to move the AV from its current location to its target location. The trajectory must meet requirements regarding feasibility, safety, comfort, efficiency, or other criteria, or combinations of them. The AV system then runs a control process 704, which executes the trajectory generated by the planning process by providing the low-level actuation signals required to control the AV. These three processes 701, 703, and 704 are performed sequentially or simultaneously.
The techniques described herein include a dark object process for analyzing and generating hypothetical objects that become part of the world model. The dark object process runs as part of, or in conjunction with, the perception process. Referring to fig. 8, in some implementations, the dark object process 812 uses the same inputs as the perception process 811 and modifies the world model 820 output by the perception process 811. The inputs include one or more of: data from on-board sensors (801); data from off-board sensors, such as data from sensors on other objects (802); and data about the road network from maps, databases, and other data sources (803). The world model 820 serves as an input to the planning and decision-making process in a manner that allows the planning and decision-making process to take into account the non-perceived world and make decisions that account for unknown objects.
Referring to fig. 9, in some implementations, the dark object process includes a boundary determination process 901 and a dark object generation process 902. The boundary determination process 901 uses data from, for example, on-board sensors, off-board sensors, maps, and other data sources to compute the boundary between the perceived world and the non-perceived world. The dark object generation process 902 generates hypothetical dark objects for inclusion in the world model and determines their attributes (e.g., object type, position, orientation, direction, speed, and whether dynamic or static).
The generation of dark objects may include, for example, considering worst-case scenarios, such as the presence of dark objects anywhere in the non-perceived world. The world model is updated with the presence and attributes of these dark objects. The updated world model is then passed to the planning and decision-making process to generate and execute a trajectory to the target location. Such a trajectory is planned and executed taking into account the presence of the generated dark objects. The boundary determination process and the dark object generation process are described in more detail in the following sections.
Boundary determination process
As described previously, for various reasons, including limited sensing range and field of view and the presence of objects that constrain the sensing range and field of view, the sensors of an AV system may be able to view, or have real-time information about, only a limited portion of the world around them (referred to as the "perceived world"). The rest of the world, the complement of the perceived world, is called the "non-perceived world". However, some information about the non-perceived world (e.g., road configuration, traffic flow, traffic lights, rush hours, and corresponding historical data) may still be obtained from other data sources.
The boundary determination process may be applied to analyze data from any type of sensor as well as other data. Examples of sensors include: lidar, radar, stereo vision cameras, monocular vision cameras, speed sensors, global positioning system sensors, gyroscopic sensors, and combinations of two or more of these or other sensors. Similar processes may be designed for other sensors and for information collected from other vehicles or from sensors located in infrastructure or other locations.
Boundary determination using data from a lidar sensor. A typical 3D lidar sensor returns a point cloud with M x N points in each scan, where M is the number of beams in the vertical direction and N is the number of beams in the horizontal direction. Thus, each vertical slice of the scan is a sub-region with M x 1 points along a particular orientation from the sensor. Each beam emanating from the lidar returns the range of the first object that the beam encounters. A collection of such points is called a point cloud.
The point cloud may be analyzed to determine whether each point has been classified as belonging to the road surface in the available data sources. If not, the point is assumed to be part of the foreground. This analyzed point cloud may be referred to as a "semantic point cloud". For example, as shown in fig. 10, if the map used by the AV system 1001 has information about the precise height profile of the road surface 1004, this information can be used to classify a given lidar point as belonging to the road surface as follows. Given the current location and orientation of the AV, the location of the center 1002 of the lidar 1003 relative to the center of the AV, and the precise height profile 1004 of the road surface or, more generally, the ground, the AV system 1001 calculates the point (1009, 1010, 1011, or 1012) at which each emitted lidar beam (1005, 1006, 1007, or 1008) is expected to encounter the ground. If the point (1013) returned by a lidar beam (1008) is closer to the AV than the expected point (1012) by more than a predefined difference, then it can be assumed that the lidar beam has encountered an object 1014 (i.e., a point on the foreground), and it can be assumed that point 1013 does not belong to the ground. In some implementations, a machine learning based approach (e.g., deep learning) is used to perform the task of foreground classification. Such approaches may also fuse data from multiple sensors (such as lidar and cameras) to improve classification accuracy.
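A hedged sketch of the expected-ground-range test of fig. 10 is shown below; the margin used for the "predefined difference" and the array-based interface are assumptions.
```python
import numpy as np

def classify_returns(beam_ranges, expected_ground_ranges, margin=0.5):
    """Label each lidar return as 'ground' or 'foreground'.

    beam_ranges: measured range of each beam (meters).
    expected_ground_ranges: range at which each beam would meet the road surface,
    computed from the AV pose, the lidar mounting point, and the map height profile.
    """
    measured = np.asarray(beam_ranges, dtype=float)
    expected = np.asarray(expected_ground_ranges, dtype=float)
    # A return noticeably closer than the expected ground intersection means the
    # beam hit an object before reaching the road surface.
    return np.where(measured < expected - margin, "foreground", "ground")

# Four beams as in fig. 10: the last beam returns early because it hits object 1014.
print(classify_returns([12.0, 20.0, 35.0, 18.0], [12.1, 20.2, 35.3, 60.0]))
```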
Consider a vertical slice of a lidar scan having M x 1 points that have been classified as described above. Assume that the vertical slice includes four lidar beams (1005, 1006, 1007, and 1008), so that M is 4 in this case. The semantic label of each point in the semantic point cloud is checked. If the semantic label of the first lidar point 1009 in the vertical slice is "ground", then it can safely be assumed that the space between the sensor origin and that lidar point is unobstructed. Subsequent points 1010 and 1011 are then examined in turn until the closest foreground point 1013 is encountered. The space between the sensor origin 1002 and the closest foreground point 1013 is labeled as "known" in the world model. The expansion of the perceived world along the vertical slice stops at the closest foreground point 1013, and the area beyond point 1013 is marked as "not perceived" in the world model. However, if all points are examined in turn and no foreground point is encountered, the final point of the scan is treated as the closest foreground point, and the area beyond this final point is marked as "not perceived".
This process is performed on all N vertical slices that make up the lidar scan, and the closest foreground point is determined for each slice as described above. The N closest foreground points represent sampled boundary points between the perceived world and the non-perceived world.
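The per-slice search for the closest foreground point can be sketched as follows, assuming the points of each slice are ordered from nearest to farthest and already carry 'ground' / 'foreground' labels; the data layout is an assumption.
```python
def sampled_boundary_points(scan_points, labels):
    """Return one sampled boundary point per vertical slice.

    scan_points: N slices, each a list of (x, y) points ordered nearest to farthest.
    labels: matching N lists of 'ground' / 'foreground' labels.
    """
    boundary = []
    for slice_pts, slice_labels in zip(scan_points, labels):
        sample = slice_pts[-1]                 # default: final point when no foreground is found
        for pt, lab in zip(slice_pts, slice_labels):
            if lab == "foreground":
                sample = pt                    # closest foreground point of this slice
                break
        boundary.append(sample)
    return boundary
```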
In some implementations, boundaries in 3-D space can be constructed by interpolating and extrapolating the sampled boundary points. Consider the situation shown, for example, in fig. 11. A lidar 1110 is mounted on the AV 1100, performs a scan with N = 3, and determines sampled boundary points 1101, 1102, and 1103. An exemplary 1-D boundary (denoted 1101-1102-1103) may be constructed by drawing lines 1104 and 1105. If the lidar is capable of performing a 360 degree scan, the boundary may be completed by connecting the sampled boundary points as a contour. If the lidar does not perform a 360 degree scan (as shown in fig. 11), the boundary 1101-1102-1103 may be extended by drawing a line 1106 from one end point 1103 of the 1-D boundary to the center 1110 of the lidar and drawing another line 1107 from the center 1110 of the lidar to the other end point 1101. The extended boundary becomes a polygon, denoted 1110-1101-1102-1103-1110. The region within the boundary 1110-1101-1102-1103-1110 (i.e., shaded region 1109 in fig. 11) is considered the perceived world with respect to the lidar and map data. The region 1111 outside the boundary is the non-perceived world. When the number of foreground points (N) is large, i.e., when the lidar has a relatively high resolution, the boundary determination method is expected to perform well.
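For a non-360-degree scan, the polygon 1110-1101-1102-1103-1110 of fig. 11 can be formed by closing the sampled boundary through the sensor center; a sketch using straight segments and the shapely library is shown below with made-up coordinates.
```python
from shapely.geometry import Point, Polygon

def perceived_world_polygon(sensor_center, boundary_points):
    """Close the 1-D boundary through the sensor center to obtain the perceived-world polygon."""
    return Polygon([sensor_center] + list(boundary_points))

perceived = perceived_world_polygon((0.0, 0.0), [(30.0, 10.0), (40.0, 0.0), (30.0, -10.0)])
print(perceived.contains(Point(20.0, 0.0)))   # True: inside the perceived world
print(perceived.contains(Point(60.0, 0.0)))   # False: in the non-perceived world
```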
In some implementations, the boundary may be constructed using curves between foreground points rather than straight lines. Various algorithms (e.g., polynomial curve fitting, bezier curves, etc.) may be applied to the boundary construction.
While the description above for fig. 11 builds a 1-D boundary, the technique can be extended to determine a 2-D boundary between the perceived and non-perceived worlds. For example, 2-D patches (e.g., flat or flexible) may be interpolated between points 1101, 1102, 1103, and 1110, and their union yields a 2-D boundary.
In some applications, the perceived world is constructed as a union of multiple perceived regions, one for each vertical slice. The boundary is then defined as the boundary of the union of the perceived regions. Consider, for example, the case shown in fig. 12, where a lidar 1210 is mounted on AV 1200. Lidar 1210 generates vertical slices and identifies boundary sample points 1201, 1202, and 1203. For each of these points, a buffer may be built around the slice to represent the perceived region corresponding to the slice. For example, perceived region 1204 is constructed as a circular sector based on point 1201. The central radius of the sector is the line 1207 connecting point 1201 to the center of lidar 1210. The sector is defined by sweeping a predefined angle θ on each side of the central radius. The swept angle θ may be determined by various factors, including the horizontal resolution of the lidar sensor 1210. This process is repeated for points 1202 and 1203, resulting in perceived regions 1205 and 1206, respectively. The complete perceived world is then defined as the union of the three perceived regions 1204, 1205, and 1206, and the boundary of the union can then be computed.
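A sketch of this sector-union construction is given below, approximating each circular sector with straight segments and using shapely's unary_union; the half-angle and the number of arc segments are assumptions.
```python
import math
from shapely.geometry import Polygon
from shapely.ops import unary_union

def sector(center, boundary_point, half_angle_deg=5.0, steps=8):
    """Circular sector whose central radius runs from the sensor center to the boundary point."""
    cx, cy = center
    px, py = boundary_point
    radius = math.hypot(px - cx, py - cy)
    bearing = math.atan2(py - cy, px - cx)
    half = math.radians(half_angle_deg)
    arc = [(cx + radius * math.cos(bearing - half + 2.0 * half * i / steps),
            cy + radius * math.sin(bearing - half + 2.0 * half * i / steps))
           for i in range(steps + 1)]
    return Polygon([center] + arc)

def perceived_world(center, boundary_points, half_angle_deg=5.0):
    # Union of the per-slice perceived regions (1204, 1205, and 1206 in fig. 12).
    return unary_union([sector(center, p, half_angle_deg) for p in boundary_points])
```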
The boundary determination method may take into account additional factors or information. For example, the boundary sample points are not limited to the closest foreground points, but may be based on the second, third, fourth, or Z-th foreground point found. In some cases, the area perceived by another sensor on or off the AV may be integrated into the perceived world; for example, referring to fig. 12, region 1220 is determined to be a region perceived by a sensor on another vehicle, or a region perceived by AV 1200 itself in an earlier processing cycle, and AV 1200 can extend its perceived world by including perceived region 1220.
Boundary determination using data from a radar sensor. Radar measurements are similar to lidar measurements. The main difference between lidar sensors and radar sensors is that while a lidar sensor returns the distance of the first object encountered by the beam, a radar sensor may be able to return the distances of more than one object in the path of the beam because a radar beam can pass through certain objects.
In view of these similarities, the method described above can also be used for boundary determination using radar measurements, with the following adaptation: if a radar beam returns multiple foreground points because it encounters multiple objects, the closest point may be designated as the sampled boundary point for further boundary construction. For example, consider the case shown in fig. 13, where a radar 1310 mounted on an AV 1300 transmits three beams 1301, 1302, and 1303. Each of these beams returns a number of foreground points, represented by black dots; for example, beam 1303 returns points 1306 and 1307. Thus, for the individual beams 1301, 1302, and 1303, the AV system designates the closest points (1304, 1305, and 1306, respectively) as the sampled boundary points for determining the boundary between the perceived world and the non-perceived world using the methods described above (e.g., as shown by the shaded area 1309 in fig. 13).
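The radar adaptation reduces to keeping the closest of possibly several returns per beam; a short sketch, with an assumed per-beam data layout, follows.
```python
def radar_boundary_samples(beam_returns):
    """For each radar beam, keep the closest of its (possibly multiple) returns.

    beam_returns: one list per beam of (range_m, (x, y)) tuples; beams with no
    return are skipped.
    """
    samples = []
    for returns in beam_returns:
        if returns:
            _, closest_point = min(returns, key=lambda r: r[0])
            samples.append(closest_point)
    return samples
```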
Boundary determination using data from stereo camera sensors. Stereo cameras output denser point clouds than radar and lidar sensors, especially in the region of overlap between a pair of acquired images. Semantic labeling of the point cloud (e.g., using deep learning) may be performed as previously described to distinguish foreground points from background points. The stereo camera setup comprises two or more cameras. One of the cameras is designated as the origin of the beams of the semantically labeled point cloud, and the point cloud is processed in a manner similar to a point cloud from a lidar scan. Each of the labeled points may then be projected onto the ground, and the sampled boundary points may then be identified.
Fig. 14 illustrates boundary determination based on a stereo camera sensor. The lines (e.g., 1402) represent vertical slices emanating from the stereo camera 1410. After the point cloud has been semantically labeled and projected onto the ground, the black points (e.g., 1401) represent the closest foreground points encountered along the corresponding slices (e.g., point 1401 corresponds to slice 1402). Once the closest foreground point for each slice has been determined, the previously described techniques may be used to determine the boundary between the perceived world and the non-perceived world, e.g., as shown by the shaded region 1405 in fig. 14.
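A minimal sketch of the per-slice step, assuming the labeled points have already been projected onto the ground plane; the slice width and coordinate conventions are illustrative assumptions, not values from the original description.

```python
# Minimal sketch: bin ground-projected, labeled points into angular slices around
# the designated camera and keep the closest foreground point in each slice.
import math

def closest_foreground_per_slice(points, origin=(0.0, 0.0), slice_deg=1.0):
    """points: iterable of (x, y, is_foreground) already projected onto the ground."""
    best = {}
    ox, oy = origin
    for x, y, is_fg in points:
        if not is_fg:
            continue                                   # background points are ignored
        rng = math.hypot(x - ox, y - oy)
        azimuth = math.degrees(math.atan2(y - oy, x - ox))
        key = int(azimuth // slice_deg)                # index of the vertical slice
        if key not in best or rng < best[key][0]:
            best[key] = (rng, (x, y))
    return [pt for _, pt in best.values()]             # sampled boundary points
```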
Boundary determination using data from a monocular camera sensor. The boundary determination process for monocular camera images includes the two steps described below. The first step performs semantic labeling on the image to distinguish the ground (e.g., road surface) from the foreground. As mentioned previously, this step may be based on classification and machine learning algorithms. The output of this step is an image in which pixels representing the observable road surface are distinguished from pixels representing the foreground. The observable road surface represents the perceived world, and everything else is assumed to belong to the non-perceived world. Thus, the boundary between the perceived world and the non-perceived world may be calculated in pixel space.
The second step of the process is called inverse perspective mapping. Given the camera's intrinsic properties (e.g., focal length) and extrinsic properties (e.g., the position and angle of the camera relative to the AV, and the location of the AV in the world model derived by localization), a homographic transformation can be performed to map the 2-D image plane in pixel space to the road surface plane in 3-D metric space. By performing the inverse perspective mapping, the boundary between the perceived world and the non-perceived world, previously estimated in pixel space, is mapped into 3-D metric space.
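A minimal sketch of the mapping, assuming a calibrated pinhole camera with intrinsics K and ground-plane extrinsics R, t. The plane-induced homography H = K·[r1 r2 t] is standard projective geometry; the function and variable names are illustrative assumptions.

```python
# Minimal sketch of inverse perspective mapping: boundary pixels on the drivable
# surface are mapped back onto the ground plane z = 0 via the plane-induced
# homography H = K [r1 r2 t] (K, R, t assumed from calibration and localization).
import numpy as np

def ground_from_pixels(pixels_uv, K, R, t):
    """pixels_uv: (N, 2) boundary pixels; returns (N, 2) ground-plane coordinates."""
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))   # homography ground plane -> image
    H_inv = np.linalg.inv(H)
    uv1 = np.column_stack((pixels_uv, np.ones(len(pixels_uv))))
    gp = (H_inv @ uv1.T).T                           # homogeneous ground coordinates
    return gp[:, :2] / gp[:, 2:3]                    # normalize by the scale factor
```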
Fig. 15 shows this process. Scene 1510 represents a typical image captured by a forward-facing monocular camera. Scene 1520 represents the output of the semantic labeling step. Road surface pixels representing the perceived world (shown as shaded region 1521) are retained, and the remaining pixels (which represent the non-perceived world) are discarded. Scene 1530 shows the result of performing inverse perspective mapping on region 1521. Scene 1530 corresponds to a top view of the perceived world 1531, i.e., a view from above the ground looking straight down at it. The thick outline 1532 represents the boundary between the perceived world and the non-perceived world. The regions outside of the boundary 1532 are the non-perceived world.
Boundary determination using data from multiple sensors. If the AV is equipped with multiple sensors, measurements from one or more of the sensors may be integrated to determine the boundary between the perceived world and the non-perceived world. Sensor measurements may be integrated as follows.
In some applications, measurements from two or more sensors are processed independently to generate a perceived region for each individual sensor. The overall perceived world may then be computed, for example, as the union (or intersection, or the result of other geospatial operations) of these individual perceived regions, and the boundary of the perceived world is then derived from the combined region.
In some implementations, rather than forming a separate perceived region from the measurements of each individual sensor, measurements from two or more sensors are first fused together using data fusion techniques. The above-described methods may then be applied to the fused data to determine the perceived world and evaluate the boundary between the perceived world and the non-perceived world. This approach may be useful, for example, for integrating radar sensors with lidar sensors, because the measurements returned by the two types of sensors are similar and the data processing methods are also similar.
Dark object generation process
Dark objects are unknown, hypothetical, or imagined objects. In the world model, dark objects are generated in the non-perceived world, while observed objects are placed in the perceived world. The dark object generation process allows the AV system to use the computed boundary between the perceived world and the non-perceived world to account for unknown objects, and to plan the movement (e.g., speed, direction, and trajectory) of the AV in a safe and conservative manner, for example, taking into account both observed objects in the perceived world and dark objects in the non-perceived world.
The non-perceived world may be complex. There may be many dark objects within it. Further, dark objects may appear and later disappear within the non-perceived world, and they can move within it, e.g., toward the AV. Thus, in some implementations, the techniques described herein model pessimistic scenarios for attributes such as the presence, absence, and motion of dark objects. By modeling pessimistic scenarios, the driving decisions of the AV system may be more conservative than they otherwise would be. However, in some implementations, a similar process may be employed to produce less conservative driving decisions or to meet other goals.
The dark object generation process can generate dark objects in the non-perceived world, in the immediate vicinity of the boundary with the perceived world, and can assign reasonable values to attributes (e.g., position, speed, direction, and orientation) of the dark objects; these values can be selected to constrain or adjust the motion of the AV. That is, dark objects can be treated as worst cases of what might reasonably exist in the non-perceived world. By considering the worst cases that the AV system may encounter, the planning and decision-making process of the AV system can select actions that result in conservative AV motion.
Examples are described herein to illustrate generating dark objects in a manner that represents a conservative scenario and allows the AV system to make relatively safe (e.g., safest possible) decisions. Methods for generating different types of dark objects, such as vehicles, bicycles, buses, trains, pedestrians, and animals, are described herein, as these are some of the most frequently encountered objects on the road.
Generation of dark objects. Any unknown object (e.g., a dark object) in the non-perceived world is assumed to travel along existing, known traffic lanes on the road network and to comply with the rules of the road. That is, such objects move within the speed limit (or within some other predefined upper limit representing the maximum vehicle speed usually observed), in the correct direction within the lane, and they comply with any other road regulations, such as traffic signs and traffic lights. Thus, the generation process avoids generating dark objects that travel faster than the speed limit, travel in the wrong direction, or travel outside known traffic lanes. This assumption allows the AV to make safe and conservative decisions without becoming paralyzed by the uncertainty that would arise from having to consider unknown objects that do not move according to the rules of the road. However, it is also important to note that in some geographies, some road regulations may often be disregarded. For example, vehicles in an area may typically travel faster than the speed limit; in this case, dark vehicles may be generated at the average speed of vehicles in that area rather than being constrained by the speed limit.
Dark objects having different types and sizes, as well as other attributes, may be generated. The choice of which type and size of dark object to generate depends on a number of factors, including but not limited to: the road configuration, the mix of vehicles observed in the vicinity (i.e., combinations of vehicles with different attributes), the vehicle type of the AV, frequently observed objects, and other considerations. For example, in an area where most objects are vans, a preferred rule for dark object generation may be to generate a dark van. In areas where heavy trucks are typically observed, a preferred rule may be to generate a dark truck. In areas where motorcycles are commonly observed, a preferred rule may be to generate a dark motorcycle. The vehicle type of the dark object may also be a random variable with a predefined distribution, and the distribution may be sampled to determine the type of the generated dark object.
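For example, sampling the type of a generated dark object from a predefined distribution could look like the following sketch; the type list and weights are illustrative assumptions, not values from the original description.

```python
# Minimal sketch: draw the type of a generated dark object from a predefined
# distribution reflecting locally observed traffic (weights are assumed values).
import random

DARK_OBJECT_TYPES = ["car", "van", "truck", "motorcycle", "bicycle", "bus"]
LOCAL_WEIGHTS = [0.50, 0.20, 0.10, 0.10, 0.05, 0.05]

def sample_dark_object_type(rng=random):
    return rng.choices(DARK_OBJECT_TYPES, weights=LOCAL_WEIGHTS, k=1)[0]

print(sample_dark_object_type())
```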
For simplicity of illustration, the following description considers objects as vehicles, but it is understood that other types of dark objects (e.g., dark pedestrians and dark animals) can be generated using the same method.
We assume that the AV system can access detailed road network information, including the configuration of lanes on the road (e.g., the number and location of lanes, the location of the center line, the lane width, etc.) and other relevant information. We further assume that the AV system knows its precise location, derived from a localization process that uses data from the AV system's sensors to estimate the AV's position in the world model. Some or all of the following steps may be included in the dark object generation process (a simplified code sketch follows the numbered steps):
1. Lanes near the AV (e.g., within 10, 20, 30, 40, 50, 100, 150, 200, 300, 400, or 500 meters) are queried from the road network, and the centerline of each lane is stored as a sequence of discrete points.
2. The discrete centerline points of each lane are examined to see whether they lie within the perceived world modeled in the boundary determination step. Points located in the non-perceived world are marked as part of the "unknown skeleton" of the lane.
3. For each lane, the process iterates over the points in its unknown skeleton. In some implementations, the iteration is ordered by the distance between each point and the AV. For the point under consideration, the dark vehicle generation process creates a representative shape (e.g., a square, rectangle, triangle, polygon, circle, or ellipse) that is centered on the point, oriented in the direction of travel of the lane, and sized to represent the likely footprint of a dark vehicle.
4. If the possible footprint of the dark vehicle lies entirely in the non-perceived world, a dark vehicle centered on the point is generated and considered to be traveling in the known direction of travel of the lane. A value is then assigned to an attribute (e.g., speed) of the dark vehicle. The speed assignment is based on various factors, such as regulations (e.g., speed limits, traffic lights, and road rules), road configurations (e.g., highways, intersections, diversions, one-way roads, four-way stops, three-way stops, T-junctions, and detours), weather conditions (e.g., clear, rainy, snowy, foggy, or hazy), time (e.g., rush hour, day, night, and morning), and events (e.g., accidents, marches, and processions). For example, if a dark vehicle is traveling in the same direction as the AV, its speed may be set to 0; if a dark vehicle is traveling in the opposite direction, its speed may be set to the speed limit of the lane (or a predefined maximum speed depending on such factors).
5. The previous steps are repeated for all lanes until one or more dark vehicles have been generated for each lane; if no point in a lane's unknown skeleton passes the above-described checks, no dark vehicle is generated for that lane.
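The following is the simplified sketch referred to above: a hypothetical implementation of steps 2-4 for one pass over the lanes. The lane data structure, the footprint dimensions, and the use of the shapely library for the containment test are assumptions made for illustration, not part of the original description.

```python
# Simplified, hypothetical sketch of the dark-vehicle generation steps above.
from shapely.geometry import Point, Polygon
from shapely.affinity import rotate

def footprint(center_xy, heading_deg, length=4.5, width=2.0):
    """Rectangle centered on a skeleton point, aligned with the lane heading."""
    x, y = center_xy
    rect = Polygon([(x - length / 2, y - width / 2), (x + length / 2, y - width / 2),
                    (x + length / 2, y + width / 2), (x - length / 2, y + width / 2)])
    return rotate(rect, heading_deg, origin=center_xy)

def generate_dark_vehicles(lanes, perceived_world):
    """lanes: list of dicts with discretized 'centerline', 'heading', 'speed_limit'."""
    dark_vehicles = []
    for lane in lanes:
        # Step 2: centerline points outside the perceived world form the
        # lane's unknown skeleton.
        skeleton = [p for p in lane["centerline"]
                    if not perceived_world.contains(Point(p))]
        # Steps 3-4: the first skeleton point whose footprint lies entirely in
        # the non-perceived world yields a dark vehicle for this lane.
        for p in skeleton:
            fp = footprint(p, lane["heading"])
            if not fp.intersects(perceived_world):
                dark_vehicles.append({"position": p,
                                      "heading": lane["heading"],
                                      "speed": lane["speed_limit"]})
                break
    return dark_vehicles
```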
Fig. 16 shows an example of an AV 1601 following another vehicle 1602. Shaded polygon 1603 represents the perceived world as perceived by one or more sensors. The road network is queried for lanes near AV 1601. Line 1605 represents the centerline of the lane, which is discretized into a plurality of points. Points outside the perceived world 1603 (e.g., 1606) constitute the unknown skeleton of the lane. The generation process iterates over each point of the unknown skeleton. For example, at point 1606, shape 1607 is created to represent the possible footprint of a dark vehicle. Because the possible footprint 1607 is not completely outside the perceived world 1603, point 1606 is ignored and no corresponding dark vehicle is generated; the process moves to successive points in the unknown skeleton until point 1608 is reached, whose footprint 1609 lies entirely within the non-perceived world. A dark vehicle is then generated at point 1608. The dark vehicle is oriented in the direction of the lane's traffic flow, and its speed is set to the speed limit of the lane or another predefined limit.
The process described above generates a dark vehicle and sets its attributes (e.g., position, orientation, speed, size, and type) so that it can be inserted into the world model. In some implementations, the world model allows additional vehicle attributes to be included, such as, but not limited to: vehicle acceleration, the status of turn indicators (left or right turn indicators, also known as blinkers), the expected future vehicle trajectory (e.g., the expected trajectory of the vehicle over the next 1, 2, 3, 4, 5, 10, or 15 seconds or the next 5, 10, 15, 20, 25, or 30 meters), and the like. In some implementations, these and other attributes are assigned values that conservatively (e.g., most conservatively) limit the speed or trajectory of the AV, or both.
Fig. 18 illustrates a method for setting the expected future trajectory of a generated dark vehicle. The AV 1801 is approaching an intersection and intends to turn right along trajectory 1804. The built environment 1802 limits the sensing range and field of view of the sensors, resulting in a limited perceived area, shown as shaded area 1803. Following the previously described process, a dark vehicle 1805 is generated in the non-perceived world, as shown. To set the expected trajectory of the vehicle, the AV system considers the trajectories available to the vehicle (obtained from the road network). The dark vehicle 1805 may turn right along trajectory 1806, which does not conflict with the AV's trajectory 1804, or proceed straight along trajectory 1807, which may conflict with trajectory 1804. Thus, some implementations set trajectory 1807 as the expected future trajectory of dark vehicle 1805, because that choice places greater constraints on the driving decisions of the AV system and is more representative of a conservative scenario.
In general, when selecting among multiple candidate trajectories for a dark vehicle, in an implementation whose goal is to construct a worst-case scenario, the trajectory that would impose the greatest delay on the AV may be selected. Other measures may also be used to select the dark vehicle's trajectory from the set of possible trajectories, depending on the purpose for which the dark vehicle is generated.
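A minimal sketch of the worst-case selection rule; the delay estimator is an assumed, planner-provided callback rather than anything specified in the text.

```python
# Minimal sketch: among a dark vehicle's candidate trajectories (from the road
# network), pick the one expected to impose the greatest delay on the AV.
def select_dark_trajectory(candidate_trajectories, estimated_av_delay):
    """estimated_av_delay(traj) -> expected delay, in seconds, imposed on the AV."""
    return max(candidate_trajectories, key=estimated_av_delay)
```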
In setting the attributes of generated dark vehicles, it is possible to utilize detailed contextual information, such as output from the perception process (e.g., perceived or inferred states of traffic lights and the positions, orientations, and speeds of other objects), detailed road network information (such as the positions of stop signs and turn restrictions), historical data on the behavior of nearby vehicles, and other information. These additional data can be used to model the properties of the dark vehicle in a richer way or, for example, in a more conservative way (i.e., in a way that imposes greater constraints on the speed and trajectory decisions of the AV system). The following examples illustrate some scenarios.
If a dark vehicle is generated in such a way (e.g., with a position and orientation) that it is approaching a traffic light, and the traffic light is known to be red, the speed of the dark vehicle may be set accordingly rather than to the speed limit of the lane. In this case, if the dark vehicle is positioned at the stop line of the traffic light, its speed may be set to 0 km/hr; otherwise it may be set to the maximum speed at which the vehicle could travel (subject to the lane's speed limit) while still being able to decelerate at a reasonable rate and stop before reaching the stop line. A similar process may be employed if the dark vehicle is approaching a stop sign.
If the dark vehicle is generated on a lane that also contains other known vehicles, and it is within a certain distance of the immediately preceding known vehicle, its speed may be set to the speed of that vehicle; otherwise, its speed is set to the maximum speed at which it could travel (subject to the lane's speed limit) while still being able to safely adjust its speed to that of the immediately preceding known vehicle.
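A minimal sketch of these two speed-assignment rules. The comfortable deceleration values, the gap threshold, and the kinematic reading of "safely adjust" (being able to slow to the target speed over the available distance) are assumptions made for illustration.

```python
# Minimal sketches of the two speed-assignment rules above (SI units assumed).
import math

def speed_before_stop_line(dist_to_stop_line, speed_limit, decel=3.0):
    """Max speed from which the dark vehicle can still stop at the stop line
    with a reasonable deceleration, capped by the lane speed limit."""
    return min(speed_limit, math.sqrt(2.0 * decel * max(dist_to_stop_line, 0.0)))

def speed_behind_known_vehicle(gap, lead_speed, speed_limit,
                               close_gap=30.0, decel=3.0):
    """Adopt the lead vehicle's speed when close; otherwise the max speed from
    which the dark vehicle could still slow to the lead speed over the gap."""
    if gap <= close_gap:
        return min(lead_speed, speed_limit)
    v = math.sqrt(lead_speed ** 2 + 2.0 * decel * (gap - close_gap))
    return min(speed_limit, v)
```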
If historical data on vehicle behavior in a particular area shows that certain rules of the road are typically violated, that behavior may also be applied to dark vehicles. For example, if an analysis of historical data at a particular intersection shows that traffic light violations are observed with a sufficiently high predefined probability, a dark vehicle generated at that intersection may also be expected to violate the traffic light with some predefined probability. For example, if a dark vehicle is generated at an intersection immediately after a traffic light stop line, and the state of the traffic light is known to be red, a traffic light violation at the intersection may be modeled by setting the speed of the dark vehicle to the speed limit instead of 0 km/hr.
Specific case: dark bicycles. The above-described method for generating a dark vehicle may also be applied to generating dark bicycles. This is particularly important in cities or areas with dedicated bicycle lanes, because these present road space that is not used by conventional vehicles but is reserved for bicycles. In this case, the method for generating a dark vehicle may be used with some or all of the following adaptations, for example: instead of querying regular traffic lanes, the system queries the road network for bicycle lanes. The speed of the bicycle may be set equal to, faster than, or slower than that of the AV, within a predefined upper limit. If the bicycle lane is bidirectional, the generated dark bicycle can travel in either of the two directions; to be conservative, in some examples, the direction that brings the bicycle closer to the AV may be used, because that direction results in a more conservative scenario and thus causes the AV system to make more conservative decisions.
Specific case: dark buses. The dark vehicle generation process may also be applied to generate dark buses. This is particularly important in cities or areas with dedicated bus lanes, because these present road space that is not used by conventional vehicles but is reserved for buses. In this case, the dark vehicle generation process may be used with some or all of the following adaptations, for example: the process queries the road network for bus lanes rather than regular traffic lanes. The speed of the bus may be set equal to, faster than, or slower than that of the AV, within a predefined maximum. If the bus lane carries two-way traffic, the direction that brings the bus closer to the AV may be used, because that direction results in a more conservative scenario and thus causes the AV system to make more conservative decisions.
Specific case: dark trains. Railroad crossings represent portions of the road network where trains or trams can potentially collide with other vehicles. The dark vehicle generation process may also be applied to generate dark trains. The dark train generation process may be used to infer whether deceleration is required as the AV system approaches a railroad crossing, or whether it is safe for the vehicle to cross. The method for generating a dark vehicle may be used with some or all of the following adaptations, for example: instead of querying regular traffic lanes, the system queries the road network for railways.
Specific case: dark pedestrians. Pedestrians are not generally expected to walk in regular traffic lanes. Crosswalks are an example of pedestrians sharing the same road space with vehicles and other objects. In some cases, sidewalks are also considered, because pedestrians on sidewalks may enter crosswalks. Therefore, it is desirable to generate dark pedestrians as part of the non-perceived world. Similar to dark vehicle generation, the process queries sidewalks, crosswalks, or a combination thereof, and the dark object generation process is then applied. The dark pedestrian is then assigned a reasonable walking or running speed, for example, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, or 15 km/hr.
Temporal filtering. The above-described methods for boundary determination and dark object generation may use instantaneous sensor readings, which can lead to discontinuities in the boundary and flickering of dark objects, for reasons including, for example, noise in sensor measurements and missing or delayed sensor measurements. Because boundary determination and dark object generation are performed recursively over time, temporal filtering or smoothing can be employed to mitigate this problem by using the temporal relationships between dark objects generated at different time steps. Temporal filtering or smoothing may be performed in many ways, and the following represents only some implementations of the idea. The method below is described in terms of vehicles, but it is applicable to other types of objects, such as pedestrians and animals. The temporal filtering algorithm may include some or all of the following steps (among others).
1. Using the method described in the previous section, the measurements from the current time step are processed to compute the unknown skeleton for each lane.
2. The points of each lane's unknown skeleton from the previous time step are propagated forward, i.e., they are moved along the lane's centerline in the direction of travel. The distance that the points move forward is equal to the distance a dark vehicle on the lane would travel within the time step, i.e., the speed of the dark vehicle on the lane multiplied by the duration of the time step.
3. The intersection of the two sets of points, i.e., the unknown skeleton computed from the measurements of the current time step (step 1 above) and the forward-propagated unknown skeleton from the previous time step (step 2 above), is then taken as the unknown skeleton for the current time step. In some cases, it may not be possible to compute an unknown skeleton from the measurements of the current time step (e.g., if the sensor has not returned measurements, or has returned spurious measurements, in the current time step); in this case, the forward-propagated skeleton from the previous time step may be designated as the unknown skeleton for the current time step. Similarly, in some implementations, the forward-propagated unknown skeleton from the previous time step may not be available (e.g., during initialization or at the first time step); in that case, the unknown skeleton computed from the measurements of the current time step may be designated as the unknown skeleton for the current time step.
This process is shown in fig. 17. The sequence of points 1701 represents the unknown skeleton from the previous time step, while the sequence of points 1703 represents the unknown skeleton computed from the current measurements. The unknown skeleton 1701 from the previous time step is propagated forward in the direction of travel of lane 1705 by a distance 1706 (equal to the distance a dark vehicle on the lane would have covered during that time step) to generate the forward-propagated unknown skeleton 1702. Finally, the sequence of points 1704 represents the intersection of the unknown skeleton 1703 computed from the current measurements with the forward-propagated unknown skeleton 1702 from the previous time step, and is used as the unknown skeleton for the current time step. After the dark objects generated at the previous time step are discarded from the world model, this unknown skeleton is used to generate the dark objects for the current time step.
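A minimal sketch of this filtering step, under the simplifying assumptions that the centerline points are equally spaced and that skeletons are represented as sets of centerline-point indices; these representation choices are illustrative, not part of the original description.

```python
# Minimal sketch: intersect the current unknown skeleton with the previous
# skeleton propagated forward along the lane (index-based propagation assumed).
def filtered_unknown_skeleton(current, previous, num_points, spacing,
                              dark_speed, dt):
    """current/previous: sets of centerline-point indices in the unknown skeleton."""
    shift = int(round(dark_speed * dt / spacing))     # distance travelled, in points
    propagated = {i + shift for i in previous if i + shift < num_points}
    if not current:                                   # missing/spurious measurements
        return propagated
    if not propagated:                                # e.g. the first time step
        return current
    return current & propagated                       # intersection of the two sets
```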
While embodiments have been shown and described herein, these embodiments have been provided by way of example. Variations, changes, and substitutions will be apparent. It should be understood that other implementations are within the scope of the claims.
Claims (amended in accordance with Article 19 of the Treaty)
1. A computer-implemented method, comprising:
maintaining a model of an environment of a vehicle, the environment of the vehicle comprising a first portion and a second portion, wherein the first portion is perceptible by one or more sensors of the vehicle based on sensor conditions; wherein the second portion is not perceptible by the one or more sensors of the vehicle based on sensor conditions and the second portion is adjacent a boundary with the first portion;
selectively generating a hypothetical object within a particular location of a second portion of the environment of the vehicle that is not perceived by the one or more sensors of the vehicle; and
updating the model using hypothetical objects generated at the particular location in the environment of the vehicle.
2. The method of claim 1, wherein the hypothetical object comprises a moving object.
3. The method of claim 1, wherein the hypothetical objects comprise objects that use a travel path from which the vehicle is excluded.
4. The method of claim 1, wherein the hypothetical object comprises at least one of: second vehicles, bicycles, buses, trains, pedestrians, and animals.
5. The method of claim 1, wherein selectively generating the hypothetical object comprises probabilistically selecting a type of the hypothetical object and attributes of the hypothetical object based on previously observed objects in the environment.
6. The method of claim 5, wherein the attribute comprises a size or a velocity.
7. The method of claim 1, further comprising including in the model known objects in the environment that are perceived or otherwise known by the one or more sensors of the vehicle.
8. The method of claim 7, wherein the hypothetical object and the known object maintained by the model are in different portions of the environment.
9. The method of claim 1, wherein selectively generating the hypothetical object comprises:
obtaining historical data of objects previously observed at the particular location in the environment;
selecting a type of the hypothetical object based on the historical data;
probabilistically selecting attributes of the hypothetical object based on the object previously observed at the particular location in the environment;
determining that the hypothetical object of the selected type at the particular location behaves with a particular property based on the historical data and objects of the same selected type as objects previously observed at the particular location in the environment; and
in response to determining that the hypothetical object of the selected type at the particular location behaves with particular properties, probabilistically selecting additional properties of the hypothetical object based on the object previously observed in the environment at the particular location.
10. The method of claim 1, wherein the first portion and the second portion are separated by a boundary.
11. The method of claim 10, comprising detection of the boundary.
12. The method of claim 11, wherein the detection of the boundary comprises using data from one or more sensors to distinguish observable ground and a foreground blocking a portion of the observable ground.
13. The method of claim 12, wherein the one or more sensors comprise a sensor of the vehicle or a sensor external to the vehicle, or both.
14. The method of claim 1, further comprising:
querying traffic lane information from a road network database; and
updating the generated hypothetical objects within the model based on the traffic lane information.
15. The method of claim 1, wherein updating the hypothetical object within the model comprises using stored data to infer a likely location of the hypothetical object.
16. The method of claim 1, wherein updating the hypothetical objects within the model comprises determining a location of the vehicle based on a road network database and one or more sensors.
17. The method of claim 1, wherein updating the hypothetical objects within the model comprises querying traffic lane information from a database and discretizing the traffic lane into discrete points.
18. The method of claim 1, wherein updating the hypothetical objects within the model comprises generating an unknown skeleton of discrete points of a lane that are not perceivable by the one or more sensors of the vehicle.
19. The method of claim 18, wherein updating the hypothetical objects within the model comprises: (a) generating a representative shape at discrete points of the unknown skeleton; and (b) evaluating whether the representative shape is entirely within the second portion of the environment.
20. The method of claim 1, wherein updating the hypothetical object within the model treats a representative shape as the hypothetical object.
21. The method of claim 1, wherein updating the hypothetical object within the model comprises applying temporal filtering to determine a location of the hypothetical object.
22. The method of claim 21, wherein applying the temporal filtering comprises smoothing an unknown skeleton with a forward-propagating unknown skeleton, wherein the forward-propagating unknown skeleton is generated by moving an old unknown skeleton forward along a traffic lane.
23. The method of claim 1, wherein updating the hypothetical object within the model comprises associating one or more attributes with the hypothetical object.
24. The method of claim 23, wherein the one or more of the attributes relate to possible motion states of the hypothetical object.
25. The method of claim 24, wherein the motion state comprises a stationary condition.
26. The method of claim 24, wherein the motion state comprises a movement condition.
27. The method of claim 24, wherein the motion state comprises a speed and a direction of movement.
28. The method of claim 27, wherein the speed is set to less than or equal to a predetermined maximum value.
29. The method of claim 28, wherein the predetermined maximum value comprises a speed limit.
30. The method of claim 28, wherein the predetermined maximum value comprises a quantity derived from other objects concurrently observed or previously observed in the environment.
31. The method of claim 28, wherein the predetermined maximum value comprises an amount derived from at least one of historical data, road configuration, traffic rules, events, time, and weather conditions.
32. The method of claim 1, wherein the maintaining of the model comprises accessing a database, the database comprising road network information.
33. The method of claim 1, wherein the maintenance of the model comprises using data from the one or more sensors.
34. The method of claim 33, wherein the one or more sensors comprise radar sensors.
35. The method of claim 33, wherein the one or more sensors comprise lidar sensors.
36. The method of claim 33, wherein the one or more sensors comprise a camera sensor.
37. The method of claim 36, wherein the camera sensor comprises a stereo camera sensor.
38. The method of claim 36, wherein the camera sensor comprises a monocular camera sensor.
39. The method of claim 1, comprising updating a trajectory of the vehicle based on the model.
40. The method of claim 39, comprising executing the trajectory of the vehicle.
41. The method of claim 1, wherein the vehicle comprises an autonomous vehicle.
42. A computer-implemented method, comprising:
receiving data from a sensor representative of an observable portion of an environment of a vehicle;
generating data representing a non-observable portion of the environment, including data representing at least one hypothetical object in the non-observable portion of the environment; and
generating a command for operation of the vehicle within the environment, the command being dependent on data representing the observable portion of the environment and on data representing the hypothetical object of the non-observable portion of the environment.
43. The method of claim 42, wherein the hypothetical object comprises a moving object.
44. The method of claim 42, wherein the hypothetical object comprises an object that uses a travel path from which the vehicle is excluded.
45. The method of claim 42, wherein the hypothetical objects comprise at least one of: vehicles, bicycles, buses, trains, pedestrians, and animals.
46. The method of claim 42, wherein generating data representative of non-observable portions of the environment comprises probabilistically selecting a type of the hypothetical object and attributes of the hypothetical object based on previously observed objects in the environment.
47. The method of claim 46, wherein the hypothetical object comprises a vehicle and the attributes comprise a size and a velocity.
48. The method of claim 42, wherein the observable portion is separated from the non-observable portion by a boundary.
49. The method of claim 48, comprising detection of the boundary.
50. The method of claim 49, wherein the detection of the boundary comprises using data from the sensors to distinguish between the observable ground and a foreground blocking a portion of the ground.
51. The method of claim 42, wherein generating data representative of the non-observable portion of the environment comprises using stored data to infer a likely location of the hypothetical object.
52. The method of claim 42, wherein generating data representative of the non-observable portion of the environment comprises querying a road network database for traffic lane information.
53. The method of claim 42, wherein generating data representative of the non-observable portion of the environment comprises determining a location of the vehicle based on a road network database and one or more sensors.
54. The method of claim 42, wherein generating data representative of the non-observable portion of the environment comprises querying a road network database for traffic lane information and discretizing the traffic lane into discrete points.
55. The method of claim 42, wherein generating data representative of the non-observable portion of the environment comprises generating an unknown skeleton of discrete points of a lane that are not perceivable by the sensor.
56. The method of claim 55, wherein generating data representative of the non-observable portion of the environment comprises: (a) generating a representative shape at discrete points of the unknown skeleton; and (b) evaluating whether the representative shape is entirely within the non-observable portion.
57. The method of claim 42, wherein generating data representative of the non-observable portion of the environment comprises treating a representative shape as the hypothetical object.
58. The method of claim 42, wherein generating data representative of the non-observable portion of the environment comprises applying temporal filtering to determine a location of the hypothetical object.
59. The method of claim 58, wherein applying the temporal filtering comprises smoothing an unknown skeleton with a forward-propagated unknown skeleton, wherein the forward-propagated unknown skeleton is generated by moving an old unknown skeleton forward along a traffic lane.
60. The method of claim 42, wherein including the hypothetical object in the world model comprises associating one or more attributes with the hypothetical object.
61. The method of claim 60, wherein the one or more of the attributes relate to possible motion states of the hypothetical object.
62. The method of claim 61, wherein the motion state comprises a stationary condition.
63. The method of claim 61, wherein the motion state comprises a movement condition.
64. The method of claim 61, wherein the motion state comprises a speed and a direction of movement.
65. The method of claim 64, wherein the speed is set to less than or equal to a predetermined maximum value.
66. The method of claim 65, in which the predetermined maximum value comprises a speed limit.
67. The method of claim 65, wherein the predetermined maximum value comprises a quantity derived from other objects concurrently observed or previously observed in the environment.
68. The method of claim 65, wherein the predetermined maximum value comprises an amount derived from historical data, road configuration, traffic rules, events, time, weather conditions, and combinations of two or more thereof.
69. The method of claim 42, comprising accessing a database comprising road network information.
70. The method of claim 69, comprising using data from a second set of sensors.
71. The method of claim 70, wherein the sensor or second set of sensors comprises radar sensors.
72. The method of claim 70, wherein the sensor or second set of sensors comprises lidar sensors.
73. The method of claim 70, wherein the sensor or second set of sensors comprises a camera sensor.
74. The method of claim 70, wherein the sensor or second set of sensors comprises a stereo camera sensor.
75. The method of claim 70, wherein the sensor or second set of sensors comprises a monocular camera sensor.
76. The method of claim 42, wherein generating the command comprises updating a trajectory of the vehicle.
77. The method of claim 76, comprising executing the trajectory of the vehicle.
78. The method of claim 42, wherein the vehicle comprises an autonomous vehicle.
79. A computer-implemented method, comprising:
generating instructions to cause the autonomous vehicle to travel on the road network at a specified speed and make a specified turn to reach the target location; and
updating the commands in response to current data representing an assumed speed and direction of movement of an assumed vehicle also traveling on the road network, the commands being updated to reduce a risk of collision of the autonomous vehicle with another vehicle on the road network.
80. The method of claim 79, wherein the assumed speed and direction of movement are derived probabilistically based on vehicles previously observed in the environment.
81. The method of claim 79, wherein the observable portion is separated from the non-observable portion by a boundary.
82. The method of claim 81, comprising detection of the boundary.
83. The method of claim 82, wherein the detection of the boundary comprises using data from one or more sensors to distinguish observable terrain and foreground blocking a portion of the terrain.
84. The method of claim 83, wherein the one or more sensors comprise a sensor on the autonomous vehicle.
85. The method of claim 83, wherein the one or more sensors comprise sensors external to the autonomous vehicle.
86. The method of claim 79, comprising generating the current data based on known objects perceived by one or more sensors.
87. The method of claim 86, wherein the generating of the current data comprises querying a road network database for traffic lane information.
88. The method of claim 86, wherein the generating of the current data comprises using stored data to infer possible locations of the hypothetical vehicle.
89. The method of claim 86, wherein the generating of the current data comprises determining a location of the autonomous vehicle based on a road network database and one or more sensors.
90. The method of claim 86, wherein the generation of the current data includes querying traffic lane information from a database and discretizing the traffic lane into discrete points.
91. The method of claim 86, wherein the generation of the current data comprises generating an unknown skeleton of discrete points of a lane that are not perceivable by the sensor.
92. The method of claim 91, wherein the generation of the current data comprises: (a) generating a representative shape at discrete points of the unknown skeleton; and (b) evaluating whether the representative shape is entirely within the unaware world.
93. The method of claim 92, wherein the generation of the current data comprises treating the representative shape as the hypothetical vehicle.
94. The method of claim 86, wherein the generating of the current data comprises applying temporal filtering to determine the location of the hypothetical vehicle.
95. The method of claim 94, wherein applying the temporal filtering comprises smoothing an unknown skeleton by a forward-propagating unknown skeleton, wherein the forward-propagating unknown skeleton is generated by moving an old unknown skeleton forward along a traffic lane.
96. The method of claim 86, wherein the generating of the current data comprises associating one or more attributes with the hypothetical vehicle.
97. The method of claim 96, wherein the one or more of the attributes relate to a likely state of motion of the hypothetical vehicle.
98. The method of claim 97, wherein the motion state comprises a stationary condition.
99. The method of claim 79, wherein the assumed speed is set to less than or equal to a predetermined maximum value.
100. The method of claim 99, wherein the predetermined maximum value comprises a speed limit.
101. The method of claim 99, wherein the predetermined maximum value comprises a quantity derived from other objects concurrently observed or previously observed in the environment.
102. The method of claim 99, wherein the predetermined maximum value comprises an amount derived from historical data, road configuration, traffic rules, events, time, weather conditions, and combinations of two or more thereof.
103. The method of claim 79, comprising accessing a database, the database comprising road network information.
104. The method of claim 79, wherein the sensor comprises a radar sensor.
105. The method of claim 79, wherein the sensor comprises a lidar sensor.
106. The method of claim 79, wherein the sensor comprises a camera sensor.
107. The method of claim 106, wherein the camera sensor comprises a stereo camera sensor.
108. The method of claim 106, wherein the camera sensor comprises a monocular camera sensor.
109. An apparatus, comprising:
an autonomous vehicle, comprising:
a) a steering, acceleration, deceleration, or gear selection device, or a combination thereof, configured to effect movement of the autonomous vehicle on a road network; and
b) a computer having a processor for performing a process for: (i) generating commands to a steering, acceleration, deceleration, or gear selection device, or a combination thereof, to move the autonomous vehicle in accordance with driving decisions; and (ii) (a) updating the command in response to data representing a motion characteristic of a hypothetical vehicle driving on the road network and (b) based on discrete points of the lane that are not perceivable by one or more sensors.
110. The apparatus of claim 109, wherein the data representative of motion characteristics of a hypothetical vehicle is probabilistically derived based on previously observed vehicles in the environment of the autonomous vehicle.
111. The device of claim 109, wherein regions perceivable by the sensor are separated from regions not perceivable by a boundary.
112. The apparatus of claim 111, wherein the computer detects the boundary.
113. An apparatus according to claim 112, wherein detection of the boundary is based on using data from the sensor to distinguish perceptible ground and to block the foreground of a portion of the ground.
114. The apparatus of claim 113, wherein the sensor comprises a sensor on the autonomous vehicle.
115. The apparatus of claim 113, wherein the sensor comprises a sensor external to the autonomous vehicle.
116. The apparatus of claim 109, wherein the computer generates the data based on known objects sensed by the sensor.
117. The apparatus of claim 116, wherein the generation of the data comprises querying a road network database for traffic lane information.
118. The apparatus of claim 116, wherein the generation of the data comprises using stored data to infer possible locations of the hypothetical vehicle.
119. The apparatus of claim 116, wherein the generation of the data comprises determining a location of the autonomous vehicle based on a road network database and the sensor.
120. The apparatus of claim 116, wherein the generation of the data comprises querying traffic lane information from a database and discretizing the traffic lane into discrete points.
121. The device of claim 116, wherein the generation of the data comprises generating an unknown skeleton of discrete points of a lane that are not perceivable by the sensor.
122. The apparatus of claim 121, wherein the generation of the data comprises: (a) generating a representative shape at discrete points of the lane that cannot be perceived by a sensor; and (b) evaluating whether the representative shape is entirely within the imperceptible region.
123. The apparatus of claim 122, wherein the generation of the data comprises treating the representative shape as the hypothetical vehicle.
124. The apparatus of claim 116, wherein the generation of the data comprises applying temporal filtering to determine the location of the hypothetical vehicle.
125. The device of claim 124, wherein applying the temporal filtering comprises smoothing an unknown skeleton by a forward-propagating unknown skeleton, wherein the forward-propagating unknown skeleton is generated by moving an old unknown skeleton forward along a traffic lane.
126. The apparatus of claim 116, wherein the generation of the data comprises associating one or more attributes with the hypothetical vehicle.
127. The apparatus of claim 126, wherein the one or more of the attributes relate to a likely state of motion of the hypothetical vehicle.
128. The device of claim 127, wherein the possible motion states comprise stationary conditions.
129. The apparatus of claim 109, wherein the motion characteristic comprises an assumed velocity set to less than or equal to a predetermined maximum value.
130. The apparatus of claim 129, wherein the predetermined maximum value comprises a speed limit.
131. The apparatus of claim 129, wherein the predetermined maximum comprises a quantity derived from other objects concurrently observed or previously perceived in the environment.
132. The apparatus of claim 129, wherein the predetermined maximum comprises an amount derived from historical data, road configurations, traffic rules, events, time, weather conditions, and combinations of two or more thereof.
133. The apparatus of claim 109, wherein the computer accesses a database, the database comprising road network information.
134. The apparatus of claim 109, wherein the sensor comprises a radar sensor.
135. The apparatus of claim 109, wherein the sensor comprises a lidar sensor.
136. The device of claim 109, wherein the sensor comprises a camera sensor.
137. The device of claim 136, wherein the camera sensor comprises a stereo camera sensor.
138. The device of claim 136, wherein the camera sensor comprises a monocular camera sensor.
139. An apparatus, comprising:
an autonomous vehicle, the autonomous vehicle comprising:
a) a steering, acceleration, deceleration, or gear selection device, or a combination thereof, configured to effect movement of the autonomous vehicle on a road network; and
b) a computer having a processor for executing a process for (i) generating commands to the steering, acceleration, deceleration, or gear selection devices, or combinations thereof, to move the autonomous vehicle in accordance with driving decisions; and (ii) (a) updating the command in response to data representing motion characteristics of a hypothetical vehicle driving on the road network and (b) based on temporal filtering used to determine a location of the hypothetical vehicle.
140. The apparatus of claim 139, wherein the data representative of motion characteristics of a hypothetical vehicle is probabilistically derived based on previously observed vehicles in the environment of the autonomous vehicle.
141. The apparatus of claim 139, wherein a region perceptible by the sensor and a region imperceptible are separated by a boundary.
142. The apparatus of claim 141, wherein the computer detects the boundary.
143. The apparatus of claim 142, wherein the detection of the boundary is based on distinguishing perceptible ground from foreground blocking a portion of the ground using data from the sensor.
144. The apparatus of claim 143, wherein the sensor comprises a sensor on the autonomous vehicle.
145. The apparatus of claim 143, wherein the sensor comprises a sensor external to the autonomous vehicle.
146. The apparatus of claim 139, wherein the computer generates the data based on a known object sensed by the sensor.
147. The apparatus of claim 146, wherein the generation of the data includes querying a road network database for traffic lane information.
148. The apparatus of claim 146, wherein the generation of the data comprises using stored data to infer possible locations of the hypothetical vehicle.
149. The apparatus of claim 146, wherein the generation of the data comprises determining a location of the autonomous vehicle based on a road network database and the sensor.
150. The apparatus of claim 146, wherein the generation of the data includes querying traffic lane information from a database and discretizing the traffic lane into discrete points.
151. The device of claim 146, wherein the generation of the current data includes generating an unknown skeleton of discrete points of a lane that are not perceivable by a sensor.
152. The apparatus of claim 146, wherein the generation of the data comprises (a) generating representative shapes at discrete points of a lane that are not perceivable by a sensor; and (b) evaluating whether the representative shape is entirely within the imperceptible region.
153. The apparatus of claim 152, wherein the generation of the data comprises considering the representative shape as the hypothetical vehicle.
154. The device of claim 146, wherein applying the temporal filtering comprises smoothing an unknown skeleton with a forward-propagated unknown skeleton, wherein the forward-propagated unknown skeleton is generated by moving an old unknown skeleton forward along a traffic lane.
155. The apparatus of claim 146, wherein the generation of the data comprises associating one or more attributes with the hypothetical vehicle.
156. The apparatus of claim 155, wherein one or more of the attributes relate to possible states of motion of the hypothetical vehicle.
157. The device of claim 156, wherein the possible motion states comprise a stationary condition.
158. The apparatus of claim 146, wherein the motion characteristic comprises an assumed velocity set to less than or equal to a predetermined maximum value.
159. The apparatus of claim 158, wherein the predetermined maximum value comprises a speed limit.
160. The device of claim 159, wherein the predetermined maximum comprises a quantity derived from other objects perceived simultaneously or previously in the environment.
161. The device of claim 159, wherein the predetermined maximum comprises a quantity derived from historical data, road configuration, traffic rules, events, time, weather conditions, or a combination of two or more thereof.
162. The apparatus of claim 146, wherein the computer accesses a database, the database including road network information.
163. The apparatus of claim 146, wherein the sensor comprises a radar sensor.
164. The apparatus of claim 146, wherein the sensor comprises a lidar sensor.
165. The device of claim 146, wherein the sensor comprises a camera sensor.
166. The device of claim 165, wherein the camera sensor comprises a stereo camera sensor.
167. The device of claim 165, wherein the camera sensor comprises a monocular camera sensor.

Claims (138)

1. A computer-implemented method, comprising:
maintaining a world model of an environment of a vehicle; and
including hypothetical objects in the environment that are not perceivable by sensors of the vehicle in the world model.
2. The method of claim 1, wherein the hypothetical object comprises a moving object.
3. The method of claim 1, wherein the hypothetical objects comprise objects that use a travel path from which the vehicle is excluded.
4. The method of claim 1, wherein the hypothetical objects comprise at least one of: second vehicles, bicycles, buses, trains, pedestrians, and animals.
5. The method of claim 1, wherein including the hypothetical object in the world model comprises probabilistically selecting a type of the hypothetical object and attributes of the hypothetical object based on previously observed objects in the environment.
6. The method of claim 5, wherein the attribute comprises a size or a velocity.
7. The method of claim 1, comprising including known objects in the environment that are perceived or otherwise known by sensors of the vehicle in the world model.
8. The method of claim 7, wherein the hypothetical object and the known object maintained by the world model are in different portions of the environment.
9. The method of claim 8, wherein the different portions of the environment comprise a perceived world and an imperceptible world.
10. The method of claim 9, wherein the perceived world and the non-perceived world are separated by a boundary.
11. The method of claim 10, comprising detection of the boundary.
12. The method of claim 11, wherein the detection of the boundary comprises using data from one or more sensors to distinguish between the observable ground and a foreground blocking a portion of the ground.
13. The method of claim 11, wherein the one or more sensors comprise a sensor of the vehicle or a sensor external to the vehicle, or both.
14. The method of claim 1, wherein including the hypothetical object in the world model comprises querying a road network database for traffic lane information.
15. The method of claim 1, wherein including the hypothetical object in the world model comprises using stored data to infer a likely location of the hypothetical object.
16. The method of claim 1, wherein including the hypothetical object in the world model comprises determining a location of the vehicle based on a road network database and one or more sensors.
17. The method of claim 1, wherein including the hypothetical object in the world model comprises querying a database for traffic lane information and discretizing the traffic lane into discrete points.
18. The method of claim 1, wherein including the hypothetical object in the world model comprises generating an unknown skeleton of discrete points of a lane that are not perceivable by the sensor of the vehicle.
19. The method of claim 18, wherein including the hypothetical object in the world model comprises: (a) generating a representative shape at discrete points of the unknown skeleton; and (b) evaluating whether the representative shape is entirely within the unaware world.
20. The method of claim 1, wherein including the hypothetical object in the world model comprises treating a representative shape as the hypothetical object.
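One way to read claims 18 through 20 is sketched below: the unknown skeleton is the subset of discretized lane points the sensors cannot perceive, and a representative shape placed at each such point is kept as a hypothetical object only if it lies entirely in the unperceived world. The is_perceived predicate, the footprint dimensions, and the class name are assumptions for illustration.

    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    Point = Tuple[float, float]

    @dataclass
    class HypotheticalObject:
        center: Point
        length_m: float = 4.5   # assumed vehicle-like footprint
        width_m: float = 1.8

    def unknown_skeleton(lane_points: List[Point],
                         is_perceived: Callable[[Point], bool]) -> List[Point]:
        """The discretized lane points that the vehicle's sensors cannot perceive."""
        return [p for p in lane_points if not is_perceived(p)]

    def hypothetical_objects(skeleton: List[Point],
                             is_perceived: Callable[[Point], bool]) -> List[HypotheticalObject]:
        """Place a representative shape at each skeleton point; keep it as a
        hypothetical object only if the shape is entirely in the unperceived world."""
        kept = []
        for cx, cy in skeleton:
            obj = HypotheticalObject(center=(cx, cy))
            corners = [(cx + dx, cy + dy)   # axis-aligned box; heading ignored for simplicity
                       for dx in (-obj.length_m / 2, obj.length_m / 2)
                       for dy in (-obj.width_m / 2, obj.width_m / 2)]
            if not any(is_perceived(c) for c in corners):
                kept.append(obj)
        return kept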
21. The method of claim 1, wherein including the hypothetical object in the world model comprises applying temporal filtering to determine a location of the hypothetical object.
22. The method of claim 21, wherein applying the temporal filtering comprises smoothing an unknown skeleton with a forward-propagating unknown skeleton, wherein the forward-propagating unknown skeleton is generated by moving an old unknown skeleton forward along a traffic lane.
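The temporal filtering of claims 21 and 22 could, under one plausible reading, amount to intersecting the current unknown skeleton with the forward-propagated previous one: a lane point is kept only if it is unperceived now and could have been reached by an object from the previously unknown region. Skeletons are represented here as indices into the discretized lane; the spacing constant and function names are assumptions.

    SPACING_M = 2.0  # assumed to match the lane discretization spacing

    def propagate_skeleton(old_skeleton_idx, num_lane_points, dt_s, max_speed_mps):
        """Forward-propagate the old unknown skeleton along the lane: every index
        reachable within dt_s at up to max_speed_mps from a previously unknown
        point is treated as reachable now."""
        steps = int(round(max_speed_mps * dt_s / SPACING_M))
        reachable = set()
        for i in old_skeleton_idx:
            reachable.update(range(i, min(i + steps, num_lane_points - 1) + 1))
        return reachable

    def smooth_skeleton(new_skeleton_idx, propagated_idx):
        """Temporal filtering: keep only lane points that are unperceived now and
        lie in the forward-propagated unknown skeleton, which suppresses spurious
        hypothetical objects caused by momentary occlusions."""
        return sorted(set(new_skeleton_idx) & set(propagated_idx))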
23. The method of claim 1, wherein including the hypothetical object in the world model comprises associating one or more attributes with the hypothetical object.
24. The method of claim 23, wherein one or more of the attributes relate to possible motion states of the hypothetical object.
25. The method of claim 24, wherein the motion state comprises a stationary condition.
26. The method of claim 24, wherein the motion state comprises a movement condition.
27. The method of claim 24, wherein the motion state comprises a speed and a direction of movement.
28. The method of claim 27, wherein the speed is set to less than or equal to a predetermined maximum value.
29. The method of claim 28, wherein the predetermined maximum value comprises a speed limit.
30. The method of claim 28, wherein the predetermined maximum value comprises a quantity derived from other objects concurrently observed or previously observed in the environment.
31. The method of claim 28, wherein the predetermined maximum value comprises a quantity derived from one or more of historical data, road configuration, traffic rules, events, time, weather conditions, or a combination of two or more thereof.
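Claims 27 through 31 bound the hypothetical object's speed by a predetermined maximum drawn from sources such as the speed limit, concurrently or previously observed objects, and historical data. One plausible worst-case policy is to take the largest available candidate, as sketched below; the function name and inputs are illustrative assumptions.

    def assumed_speed_cap(speed_limit_mps, observed_speeds_mps=(), historical_p95_mps=None):
        """Predetermined maximum speed for a hypothetical object, taken as the
        largest of: the posted limit, the fastest observed object, and an optional
        historical percentile for this road segment."""
        candidates = [speed_limit_mps]
        if observed_speeds_mps:
            candidates.append(max(observed_speeds_mps))
        if historical_p95_mps is not None:
            candidates.append(historical_p95_mps)
        return max(candidates)

    # Example: a 13.4 m/s limit with a fastest observed speed of 15.0 m/s yields a 15.0 m/s cap.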
32. The method of claim 1, wherein the maintaining of the world model comprises accessing a database, the database comprising road network information.
33. The method of claim 1, wherein the maintenance of the world model comprises using data from one or more sensors.
34. The method of claim 33, wherein the one or more sensors comprise radar sensors.
35. The method of claim 33, wherein the one or more sensors comprise lidar sensors.
36. The method of claim 33, wherein the one or more sensors comprise a camera sensor.
37. The method of claim 36, wherein the camera sensor comprises a stereo camera sensor.
38. The method of claim 36, wherein the camera sensor comprises a monocular camera sensor.
39. The method of claim 1, comprising updating a trajectory of the vehicle based on the world model.
40. The method of claim 39, comprising executing the trajectory of the vehicle.
41. The method of claim 1, wherein the vehicle comprises an autonomous vehicle.
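Claims 39 through 41 tie the world model to trajectory updates and execution. A minimal planning-loop sketch is shown below; world_model, planner, and controller are assumed interfaces introduced only for illustration.

    import time

    def planning_loop(world_model, planner, controller, period_s=0.1, num_cycles=10):
        """Repeatedly refresh the world model (known plus hypothetical objects),
        replan the vehicle's trajectory against it, and hand the result to the
        controller for execution."""
        for _ in range(num_cycles):
            world_model.update()                    # fold in sensor data, regenerate
                                                    # hypothetical objects
            trajectory = planner.plan(world_model)  # trajectory avoiding all objects
            controller.execute(trajectory)          # actuate steering/throttle/brake
            time.sleep(period_s)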
42. A computer-implemented method, comprising:
receiving data from a sensor representative of an observable portion of an environment of a vehicle;
generating data representing a non-observable portion of the environment, including data representing at least one hypothetical object in the non-observable portion of the environment; and
generating a command for operation of the vehicle within the environment, the command being dependent on data representing the observable portion of the environment and on data representing the hypothetical object of the non-observable portion of the environment.
43. The method of claim 42, wherein the hypothetical object comprises a moving object.
44. The method of claim 42, wherein the hypothetical object comprises an object that uses a travel path from which the vehicle is excluded.
45. The method of claim 42, wherein the hypothetical objects comprise at least one of: vehicles, bicycles, buses, trains, pedestrians, and animals.
46. The method of claim 42, wherein generating data representative of the non-observable portion of the environment comprises probabilistically selecting a type of the hypothetical object and attributes of the hypothetical object based on previously observed objects in the environment.
47. The method of claim 46, wherein the hypothetical object comprises a vehicle and the attributes comprise a size and a velocity.
48. The method of claim 42, wherein the observable portion is separated from the non-observable portion by a boundary.
49. The method of claim 48, comprising detection of the boundary.
50. The method of claim 49, wherein the detection of the boundary comprises using data from the sensors to distinguish between the observable ground and a foreground blocking a portion of the ground.
51. The method of claim 42, wherein generating data representative of the non-observable portion of the environment comprises using stored data to infer a likely location of the hypothetical object.
52. The method of claim 42, wherein generating data representative of the non-observable portion of the environment comprises querying a road network database for traffic lane information.
53. The method of claim 42, wherein generating data representative of the non-observable portion of the environment comprises determining a location of the vehicle based on a road network database and one or more sensors.
54. The method of claim 42, wherein generating data representative of the non-observable portion of the environment comprises querying a road network database for traffic lane information and discretizing the traffic lane into discrete points.
55. The method of claim 42, wherein generating data representative of the non-observable portion of the environment comprises generating an unknown skeleton of discrete points of a lane that are not perceivable by the sensor.
56. The method of claim 55, wherein generating data representative of the non-observable portion of the environment comprises: (a) generating a representative shape at discrete points of the unknown skeleton; and (b) evaluating whether the representative shape is entirely within the non-observable portion.
57. The method of claim 42, wherein generating data representative of the non-observable portion of the environment comprises treating a representative shape as the hypothetical object.
58. The method of claim 42, wherein generating data representative of the non-observable portion of the environment comprises applying temporal filtering to determine a location of the hypothetical object.
59. The method of claim 58, wherein applying the temporal filtering comprises smoothing an unknown skeleton with a forward-propagated unknown skeleton, wherein the forward-propagated unknown skeleton is generated by moving an old unknown skeleton forward along a traffic lane.
60. The method of claim 42, wherein generating data representative of the non-observable portion of the environment comprises associating one or more attributes with the hypothetical object.
61. The method of claim 60, wherein one or more of the attributes relate to possible motion states of the hypothetical object.
62. The method of claim 61, wherein the motion state comprises a stationary condition.
63. The method of claim 61, wherein the motion state comprises a movement condition.
64. The method of claim 61, wherein the motion state comprises a speed and a direction of movement.
65. The method of claim 64, wherein the speed is set to less than or equal to a predetermined maximum value.
66. The method of claim 65, in which the predetermined maximum value comprises a speed limit.
67. The method of claim 65, wherein the predetermined maximum value comprises a quantity derived from other objects concurrently observed or previously observed in the environment.
68. The method of claim 65, wherein the predetermined maximum value comprises a quantity derived from one or more of historical data, road configuration, traffic rules, events, time, weather conditions, or a combination of two or more thereof.
69. The method of claim 42, comprising accessing a database comprising road network information.
70. The method of claim 69, comprising using data from a second set of sensors.
71. The method of claim 70, wherein the sensor or second set of sensors comprises radar sensors.
72. The method of claim 70, wherein the sensor or second set of sensors comprises lidar sensors.
73. The method of claim 70, wherein the sensor or second set of sensors comprises a camera sensor.
74. The method of claim 70, wherein the sensor or second set of sensors comprises a stereo camera sensor.
75. The method of claim 70, wherein the sensor or second set of sensors comprises a monocular camera sensor.
76. The method of claim 42, wherein generating the command comprises updating a trajectory of the vehicle.
77. The method of claim 76, comprising executing the trajectory of the vehicle.
78. The method of claim 42, wherein the vehicle comprises an autonomous vehicle.
79. A computer-implemented method, comprising:
generating commands to cause the autonomous vehicle to travel on the road network at a specified speed and make a specified turn to reach a target location; and
updating the commands in response to current data representing an assumed speed and direction of movement of a hypothetical vehicle also traveling on the road network, the commands being updated to reduce a risk of collision of the autonomous vehicle with another vehicle on the road network.
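Claim 79's command update can be illustrated with a simple time-indexed proximity check against the hypothetical vehicle's predicted path; the 2 m safety gap, the slow-down factor, and the command format are assumptions, not values from the specification.

    import math

    def collision_risk(ego_path, hypothetical_path, safety_gap_m=2.0):
        """True if the ego path and the hypothetical vehicle's path come within
        the safety gap at the same time step (paths are lists of (x, y) points)."""
        return any(math.hypot(ex - hx, ey - hy) < safety_gap_m
                   for (ex, ey), (hx, hy) in zip(ego_path, hypothetical_path))

    def update_commands(commands, ego_path, hypothetical_path, slow_factor=0.5):
        """If the nominal commands would bring the autonomous vehicle too close to
        the hypothetical vehicle, scale the commanded speeds down."""
        if collision_risk(ego_path, hypothetical_path):
            return [{**c, "speed_mps": c["speed_mps"] * slow_factor} for c in commands]
        return commands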
80. The method of claim 79, wherein the assumed speed and direction of movement are derived probabilistically based on vehicles previously observed in the environment.
81. The method of claim 79, wherein the observable portion is separated from the non-observable portion by a boundary.
82. The method of claim 81, comprising detection of the boundary.
83. The method of claim 82, wherein the detection of the boundary comprises using data from one or more sensors to distinguish between the observable ground and a foreground blocking a portion of the ground.
84. The method of claim 83, wherein the one or more sensors comprise a sensor on the autonomous vehicle.
85. The method of claim 83, wherein the one or more sensors comprise sensors external to the autonomous vehicle.
86. The method of claim 79, comprising generating the current data based on known objects perceived by one or more sensors.
87. The method of claim 86, wherein the generating of the current data comprises querying a road network database for traffic lane information.
88. The method of claim 86, wherein the generating of the current data comprises using stored data to infer possible locations of the hypothetical vehicle.
89. The method of claim 86, wherein the generating of the current data comprises determining a location of the autonomous vehicle based on a road network database and one or more sensors.
90. The method of claim 86, wherein the generating of the current data comprises querying traffic lane information from a database and discretizing the traffic lane into discrete points.
91. The method of claim 86, wherein the generating of the current data comprises generating an unknown skeleton of discrete points of a lane that are not perceivable by the sensor.
92. The method of claim 91, wherein the generating of the current data comprises: (a) generating a representative shape at discrete points of the unknown skeleton; and (b) evaluating whether the representative shape is entirely within the unperceived world.
93. The method of claim 92, wherein the generating of the current data comprises treating the representative shape as the hypothetical vehicle.
94. The method of claim 86, wherein the generating of the current data comprises applying temporal filtering to determine the location of the hypothetical vehicle.
95. The method of claim 94, wherein applying the temporal filtering comprises smoothing an unknown skeleton by a forward-propagating unknown skeleton, wherein the forward-propagating unknown skeleton is generated by moving an old unknown skeleton forward along a traffic lane.
96. The method of claim 86, wherein the generating of the current data comprises associating one or more attributes with the hypothetical vehicle.
97. The method of claim 96, wherein one or more of the attributes relate to a likely state of motion of the hypothetical vehicle.
98. The method of claim 97, wherein the motion state comprises a stationary condition.
99. The method of claim 79, wherein the assumed speed is set to less than or equal to a predetermined maximum value.
100. The method of claim 99, wherein the predetermined maximum value comprises a speed limit.
101. The method of claim 99, wherein the predetermined maximum value comprises a quantity derived from other objects concurrently observed or previously observed in the environment.
102. The method of claim 99, wherein the predetermined maximum value comprises a quantity derived from one or more of historical data, road configuration, traffic rules, events, time, weather conditions, or a combination of two or more thereof.
103. The method of claim 79, comprising accessing a database, the database comprising road network information.
104. The method of claim 79, wherein the sensor comprises a radar sensor.
105. The method of claim 79, wherein the sensor comprises a lidar sensor.
106. The method of claim 79, wherein the sensor comprises a camera sensor.
107. The method of claim 106, wherein the camera sensor comprises a stereo camera sensor.
108. The method of claim 106, wherein the camera sensor comprises a monocular camera sensor.
109. An apparatus, comprising:
an autonomous vehicle, comprising:
a) a controllable device configured to cause the autonomous vehicle to move over a road network;
b) a controller for providing commands to the controllable device; and
c) a computing element for updating the commands in response to current data representing an assumed speed and a direction of movement of a hypothetical vehicle also traveling on the road network.
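The apparatus of claim 109 can be pictured as three cooperating components, as in the sketch below; the class names and the conservative speed-capping policy are illustrative assumptions only.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ControllableDevice:
        """Stand-in for the actuation hardware that moves the vehicle (claim 109, a)."""
        last_command: dict = field(default_factory=dict)

        def apply(self, command: dict) -> None:
            self.last_command = command

    @dataclass
    class Controller:
        """Provides commands to the controllable device (claim 109, b)."""
        device: ControllableDevice

        def send(self, command: dict) -> None:
            self.device.apply(command)

    class ComputingElement:
        """Updates commands from current data about a hypothetical vehicle (claim 109, c)."""
        def update(self, commands: List[dict], assumed_speed_mps: float) -> List[dict]:
            # Placeholder policy: never command a speed above the hypothetical
            # vehicle's assumed speed.
            return [{**c, "speed_mps": min(c["speed_mps"], assumed_speed_mps)}
                    for c in commands]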
110. The apparatus of claim 109, wherein the assumed speed and direction of movement are derived probabilistically based on vehicles previously observed in the environment.
111. The apparatus of claim 109, wherein the observable portion is separated from the non-observable portion by a boundary.
112. The apparatus of claim 111, wherein the computing element detects the boundary.
113. The apparatus of claim 112, wherein detection of the boundary is based on using data from one or more sensors to distinguish between the observable ground and a foreground blocking a portion of the ground.
114. The apparatus of claim 113, wherein the one or more sensors comprise a sensor on the autonomous vehicle.
115. The apparatus of claim 113, wherein the one or more sensors comprise a sensor outside of the autonomous vehicle.
116. The apparatus of claim 109, wherein the computing element generates the current data based on known objects perceived by one or more sensors.
117. The apparatus of claim 116, wherein the generation of the current data comprises querying a road network database for traffic lane information.
118. The apparatus of claim 116, wherein the generation of the current data comprises using stored data to infer possible locations of the hypothetical vehicle.
119. The apparatus of claim 116, wherein the generation of the current data comprises determining a location of the autonomous vehicle based on a road network database and one or more sensors.
120. The apparatus of claim 116, wherein the generation of the current data comprises querying traffic lane information from a database and discretizing the traffic lane into discrete points.
121. The apparatus of claim 116, wherein the generation of the current data comprises generating an unknown skeleton of discrete points of a lane that are not perceivable by the sensor.
122. The apparatus of claim 121, wherein the generation of the current data comprises: (a) generating a representative shape at discrete points of the unknown skeleton; and (b) evaluating whether the representative shape is entirely within the unperceived world.
123. The apparatus of claim 122, wherein the generation of the current data comprises treating the representative shape as the hypothetical vehicle.
124. The apparatus of claim 116, wherein the generation of the current data comprises applying temporal filtering to determine the location of the hypothetical vehicle.
125. The apparatus of claim 124, wherein applying the temporal filtering comprises smoothing an unknown skeleton by a forward-propagating unknown skeleton, wherein the forward-propagating unknown skeleton is generated by moving an old unknown skeleton forward along a traffic lane.
126. The apparatus of claim 116, wherein the generation of the current data comprises associating one or more attributes with the hypothetical vehicle.
127. The apparatus of claim 126, wherein one or more of the attributes relate to a likely state of motion of the hypothetical vehicle.
128. The apparatus of claim 127, wherein the motion state comprises a stationary condition.
129. The apparatus of claim 109, wherein the assumed speed is set to less than or equal to a predetermined maximum value.
130. The apparatus of claim 129, wherein the predetermined maximum value comprises a speed limit.
131. The apparatus of claim 129, wherein the predetermined maximum value comprises a quantity derived from other objects concurrently observed or previously observed in the environment.
132. The apparatus of claim 129, wherein the predetermined maximum value comprises a quantity derived from one or more of historical data, road configuration, traffic rules, events, time, weather conditions, or a combination of two or more thereof.
133. The apparatus of claim 109, wherein the computing element accesses a database, the database including road network information.
134. The apparatus of claim 109, wherein the sensor comprises a radar sensor.
135. The apparatus of claim 109, wherein the sensor comprises a lidar sensor.
136. The apparatus of claim 109, wherein the sensor comprises a camera sensor.
137. The apparatus of claim 136, wherein the camera sensor comprises a stereo camera sensor.
138. The apparatus of claim 136, wherein the camera sensor comprises a monocular camera sensor.
CN201880030067.6A 2017-03-07 2018-03-06 Planning for unknown objects by autonomous vehicles Pending CN114830202A (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US15/451,747 2017-03-07
US15/451,734 2017-03-07
US15/451,703 US10234864B2 (en) 2017-03-07 2017-03-07 Planning for unknown objects by an autonomous vehicle
US15/451,703 2017-03-07
US15/451,734 US10281920B2 (en) 2017-03-07 2017-03-07 Planning for unknown objects by an autonomous vehicle
US15/451,747 US10095234B2 (en) 2017-03-07 2017-03-07 Planning for unknown objects by an autonomous vehicle
PCT/US2018/021208 WO2018165199A1 (en) 2017-03-07 2018-03-06 Planning for unknown objects by an autonomous vehicle

Publications (1)

Publication Number Publication Date
CN114830202A true CN114830202A (en) 2022-07-29

Family

ID=63448894

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880030067.6A Pending CN114830202A (en) 2017-03-07 2018-03-06 Planning for unknown objects by autonomous vehicles

Country Status (3)

Country Link
EP (1) EP3593337A4 (en)
CN (1) CN114830202A (en)
WO (1) WO2018165199A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111683851B (en) * 2018-12-26 2023-09-12 百度时代网络技术(北京)有限公司 Mutual avoidance algorithm for self-steering lanes for autopilot
US20230159026A1 (en) * 2021-11-19 2023-05-25 Motional Ad Llc Predicting Motion of Hypothetical Agents

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060155464A1 (en) * 2004-11-30 2006-07-13 Circumnav Networks, Inc. Methods and systems for deducing road geometry and connectivity
CN101395649A (en) * 2006-03-01 2009-03-25 丰田自动车株式会社 Obstacle detection method, obstacle detection device, and standard mobile body model
JP2011248445A (en) * 2010-05-24 2011-12-08 Toyota Central R&D Labs Inc Moving object prediction device and program
US20130100286A1 (en) * 2011-10-21 2013-04-25 Mesa Engineering, Inc. System and method for predicting vehicle location
US20140088855A1 (en) * 2012-09-27 2014-03-27 Google Inc. Determining changes in a driving environment based on vehicle behavior
US20150377636A1 (en) * 2014-06-27 2015-12-31 International Business Machines Corporation Generating a road network from location data
US20160327953A1 (en) * 2015-05-05 2016-11-10 Volvo Car Corporation Method and arrangement for determining safe vehicle trajectories
US20170010106A1 (en) * 2015-02-10 2017-01-12 Mobileye Vision Technologies Ltd. Crowd sourcing data for autonomous vehicle navigation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013216994A1 (en) 2013-08-27 2015-03-05 Robert Bosch Gmbh Speed assistant for a motor vehicle

Also Published As

Publication number Publication date
WO2018165199A4 (en) 2018-10-11
EP3593337A1 (en) 2020-01-15
WO2018165199A1 (en) 2018-09-13
EP3593337A4 (en) 2021-01-06

Similar Documents

Publication Publication Date Title
US11685360B2 (en) Planning for unknown objects by an autonomous vehicle
US11400925B2 (en) Planning for unknown objects by an autonomous vehicle
US10234864B2 (en) Planning for unknown objects by an autonomous vehicle
US11774261B2 (en) Automatic annotation of environmental features in a map during navigation of a vehicle
US11320826B2 (en) Operation of a vehicle using motion planning with machine learning
US11827241B2 (en) Adjusting lateral clearance for a vehicle using a multi-dimensional envelope
US20200216064A1 (en) Classifying perceived objects based on activity
CN109641589B (en) Route planning for autonomous vehicles
US11325592B2 (en) Operation of a vehicle using multiple motion constraints
CN113196011A (en) Motion map construction and lane level route planning
GB2620506A (en) Homotopic-based planner for autonomous vehicles
EP3647733A1 (en) Automatic annotation of environmental features in a map during navigation of a vehicle
KR102639008B1 (en) Systems and methods for implementing occlusion representations over road features
KR20230154470A (en) trajectory checker
US12030485B2 (en) Vehicle operation using maneuver generation
US20220357453A1 (en) Lidar point cloud segmentation using box prediction
CN114830202A (en) Planning for unknown objects by autonomous vehicles
KR102685453B1 (en) Fast collision free path generatoin by connecting c-slices through cell decomposition
KR102719202B1 (en) Vehicle operation using maneuver generation
US20240190452A1 (en) Methods and systems for handling occlusions in operation of autonomous vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Massachusetts

Applicant after: Dynamic ad Ltd.

Address before: Massachusetts

Applicant before: NUTONOMY Inc.

TA01 Transfer of patent application right

Effective date of registration: 20220809

Address after: Massachusetts

Applicant after: Motional AD LLC

Address before: Massachusetts

Applicant before: Dynamic ad Ltd.

WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20220729