GB2510698A - Driver assistance system - Google Patents

Driver assistance system

Info

Publication number
GB2510698A
GB2510698A GB1322445.6A GB201322445A GB2510698A GB 2510698 A GB2510698 A GB 2510698A GB 201322445 A GB201322445 A GB 201322445A GB 2510698 A GB2510698 A GB 2510698A
Authority
GB
United Kingdom
Prior art keywords
intersection
vehicle
driver
stop
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1322445.6A
Other versions
GB201322445D0 (en)
GB2510698A8 (en)
Inventor
Alexander Barth
Luca Delgrossi
Michael Maile
Gordon Peredo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mercedes Benz Group AG
Original Assignee
Daimler AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Daimler AG filed Critical Daimler AG
Priority to GB1322445.6A priority Critical patent/GB2510698A/en
Publication of GB201322445D0 publication Critical patent/GB201322445D0/en
Publication of GB2510698A publication Critical patent/GB2510698A/en
Publication of GB2510698A8 publication Critical patent/GB2510698A8/en
Withdrawn legal-status Critical Current


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W 50/08 Interaction between the driver and the control system
    • B60W 50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W 30/08 Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W 30/18 Propelling the vehicle
    • B60W 30/18009 Propelling the vehicle related to particular drive situations
    • B60W 30/18109 Braking
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W 30/18 Propelling the vehicle
    • B60W 30/18009 Propelling the vehicle related to particular drive situations
    • B60W 30/18154 Approaching an intersection
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W 40/02 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • B60W 40/04 Traffic conditions
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W 50/08 Interaction between the driver and the control system
    • B60W 50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W 50/16 Tactile feedback to the driver, e.g. vibration or force feedback to the driver on the steering wheel or the accelerator pedal
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 Control of position or course in two dimensions
    • G05D 1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60K ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K 26/00 Arrangements or mounting of propulsion unit control devices in vehicles
    • B60K 26/02 Arrangements or mounting of propulsion unit control devices in vehicles of initiating means or elements
    • B60K 26/021 Arrangements or mounting of propulsion unit control devices in vehicles of initiating means or elements with means for providing feel, e.g. by changing pedal force characteristics
    • B60K 2026/023 Arrangements or mounting of propulsion unit control devices in vehicles of initiating means or elements with means for providing feel, e.g. by changing pedal force characteristics with electrical means to generate counter force or torque
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 2554/00 Input parameters relating to objects
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 2555/00 Input parameters relating to exterior conditions, not covered by groups B60W2552/00, B60W2554/00
    • B60W 2555/60 Traffic rules, e.g. speed limits or right of way
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 2556/00 Input parameters relating to data
    • B60W 2556/45 External transmission of data to or from the vehicle
    • B60W 2556/50 External transmission of data to or from the vehicle of positioning data, e.g. GPS [Global Positioning System] data

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

A system, for assisting a driver of a vehicle 16 when reaching an intersection 10, comprises a detection module which detects a position, orientation, and motion state of at least one other traffic participant 17 approaching or at the intersection by means of at least one sensor. A localization module uses the detected information to compute a distance of the vehicles to their applicable stop positions at the intersection 10. A situation analysis module combines the information from the detection module and the localization module to compute estimated arrival times, performs a precedence evaluation, and evaluates a probability for the vehicle 16 to have the right of way at the intersection depending on the times at which the vehicle 16 and the other traffic participant 17 reach their respective stop positions. A communication module warns the driver if the driver is likely to run the stop position and does not have the right of way, depending on the evaluated probability. The communication module may apply a brake and/or change the gas pedal resistance. The detection module may detect at least one traffic control device, e.g. stop signs.

Description

A System for Assisting a Driver of a Vehicle

The invention relates to a system for assisting a driver of a vehicle when reaching an intersection.
In US 6 516 273 B1, an apparatus for determining a potential violation of an intersection traffic control device can be found, the apparatus comprising a data storage device onboard a vehicle storing digital mapping data and intersection traffic control device data; a navigational positioning device generating an indication of position and heading of the vehicle based on global positioning data; a velocity determining device generating an indication of a velocity of the vehicle; a data processing device configured to determine a metric indicative of the vehicle's ability to stop prior to entry into an intersection controlled by a traffic control device; and a driver-vehicle-interface configured to generate a warning to a vehicle driver of the vehicle's impending inability to stop before the vehicle enters the intersection.
Furthermore, US 2008/0162027 A1 shows a system that enables a vehicle to follow a traffic rule while traveling in a road network, the system comprising a database that stores data relating to at least one feature of the road network; a location detector that detects a location of the vehicle relative to the road network; a sensor that senses at least one object in a vicinity of the vehicle; and a processing system that controls the vehicle to autonomously obey at least one traffic rule, or provides a notification to a driver of the vehicle to enable the driver to obey at least one traffic rule, based on the detected location of the vehicle, data retrieved from the database relating to at least one feature of the road network, and data relating to at least one object sensed by the sensor.
It is an object of the present invention to provide a system by means of which driver comfort in complex traffic scenarios can be increased.
This object is achieved by a system having the features of patent claim 1. Advantageous embodiments with expedient and non-trivial developments of the invention are indicated in the other patent claims.
The system according to the present invention assists the driver of a vehicle when reaching an intersection. The vehicle is also referred to as "host vehicle". The system comprises a detection module, a localization module, a situation analysis module, and a communication module. The detection module detects the position, orientation, and motion state of at least one other traffic participant approaching or at the intersection by at least one sensor of the vehicle, the other traffic participant being different from the host vehicle. For example, the detection module detects the respective positions, orientations, and motion states of a plurality of other traffic participants approaching or at the intersection by the sensor.
The localization module relates the position, orientation, and motion state of the host vehicle and of all traffic participants detected by the detection module at the intersection.
The localization module is used to compute the distance of each vehicle to its applicable stop position at the intersection.
For example, the detected position, orientation, and motion state of the other vehicle are provided as information by the detection module. Moreover, the outcome of the step of relating the position, orientation, and motion state and the outcome of the step of computing the distance are provided as information by the localization module.
Furthermore, the situation analysis module combines the information from the detection and localization modules in order to compute estimated arrival times and performs a precedence evaluation. It evaluates a probability for the host vehicle to have the right of way at the intersection depending on the times at which the host vehicle and the other traffic participants reached their respective stop positions. The communication module warns the driver if the driver is likely to run the stop position and does not have the right of way, depending on the evaluated probability.
Preferably, the system uses on-board sensors such as, but not limited to, a stereo vision camera, radar sensors, a GPS sensor (GPS: global positioning system), odometry, etc., as well as map data input to detect an intersection and determine a traffic scenario at the intersection. For example, the respective stop position can be indicated by a stop line on the ground on which the vehicle and traffic participant are moving. The system increases driver comfort in complex traffic scenarios and reduces stress if multiple traffic participants, in particular cars, approach an intersection at the same time from different directions, or if the driver is uncertain whether it is their turn to go or not. In addition, the system can reduce stop line violations, or instances where the driver enters the intersection when it is not their turn.
The system can also be an essential component for automated driving. In an automated vehicle the system can be part of the decision making module that determines when the host vehicle has the right of way.
The system according to the present invention provides a technical solution for analyzing traffic scenarios at intersections with one or more traffic control devices such as, for example, stop signs in real-time in order to assess the right of way which, for example, in the U.S. is typically governed by the order of arrival and the respective position of the vehicles or traffic participants in relation to the intersection. Such a traffic control device can be, for example, detected by means of the detection module. The intersection can be a 2-, 3-, 4-, or all-way stop intersection at which complex traffic scenarios can occur.
Based on said information, the system, as a driver assistance system, can discreetly inform the driver whether or not it is their turn to proceed through the intersection. For this purpose the system combines estimating the pose and motion state of traffic participants other than the host vehicle with respect to the given stop positions, in particular stop lines, a precedence evaluation, and a human-machine-interface.
In an automated driving mode, the system determines when it is the host vehicle's turn to proceed according to the presence of other vehicles and the given traffic rules.
Further advantages, features, and details of the invention derive from the following description of a preferred embodiment as well as from the drawing. The features and feature combinations previously mentioned in the description as well as the features and feature combinations mentioned in the following description of the figures and/or shown in the figures alone can be employed not only in the respective indicated combination but also in any other combination or taken alone without leaving the scope of the invention.
The drawing shows in:
Fig. 1 a table of use cases describing traffic scenarios which can occur at an intersection;
Figs. 2a-b respectively a table of intersection types;
Fig. 3 a schematic view of an intersection;
Fig. 4a a schematic view of an intersection;
Fig. 4b a table of states of traffic participants being at an intersection or in the vicinity of the intersection;
Fig. 4c a schematic view of the intersection according to Fig. 4a and traffic participants each having a state from the table shown in Fig. 4b;
Fig. 5 a schematic view of an intersection having a first gate topology;
Fig. 6 a table of traffic scenarios which can occur at the intersection according to Fig. 5;
Fig. 7 a schematic view of an intersection having a second gate topology;
Fig. 8 a schematic view of an intersection having a third gate topology;
Fig. 9 a schematic view of an intersection having a fourth gate topology;
Fig. 10 a schematic view of an intersection having a sixth gate topology;
Fig. 11 a schematic view of an intersection having a seventh gate topology;
Fig. 12 a schematic view of an intersection having an eighth gate topology;
Fig. 13 a schematic view of an intersection having a ninth gate topology;
Fig. 14 a schematic view of an intersection having a tenth gate topology;
Fig. 15 a schematic view of an intersection having an eleventh gate topology;
Fig. 16 a schematic and perspective top view of an intersection;
Fig. 17 a schematic top view of the intersection according to Fig. 16;
Figs. 18a-e respectively a schematic view of an image captured by a camera of a vehicle reaching the intersection according to Figs. 16 and 17;
Figs. 19a-d respectively a schematic top view of an intersection, wherein a field of view of a camera is shown;
Fig. 20 a schematic top view of an intersection;
Fig. 21 a further schematic top view of the intersection according to Fig. 20, wherein a field of view of a camera is shown;
Fig. 22 a schematic top view of a vehicle comprising a stereo camera system arranged at a fender of the vehicle;
Fig. 23 a schematic perspective view of the vehicle according to Fig. 22;
Fig. 24 a schematic top view of a vehicle comprising lateral radar sensors;
Fig. 25 a schematic top view of a vehicle comprising a surround view camera;
Fig. 26 a schematic and perspective top view of an intersection;
Fig. 27 a diagram illustrating the architecture of a system for assisting a driver of a vehicle when reaching an intersection;
Fig. 28 a diagram illustrating the internal state machine of the system;
Fig. 29 a first algorithm of the system;
Fig. 30 a second algorithm of the system;
Fig. 31 a schematic top view of an intersection for illustrating coordinate systems used for localization purposes in the system;
Fig. 32 a table of attributes used in the system;
Fig. 33 a further schematic top view of the intersection according to Fig. 31 for illustrating coordinate systems used for object detection purposes in the system;
Fig. 34 a further table of attributes used in the system;
Fig. 35 a further table of attributes used in the system;
Fig. 36 schematic perspective views of an intersection;
Fig. 37 a schematic view of an intersection;
Fig. 38 a diagram for illustrating a state machine by means of which the behavior of the system is modeled;
Fig. 39 a schematic front view of a human-machine-interface for outputting at least one signal in the interior of a vehicle;
Fig. 40 a further schematic front view of the human-machine-interface;
Fig. 41 a further schematic front view of the human-machine-interface;
Fig. 42 a further schematic front view of the human-machine-interface;
Fig. 43 a further schematic front view of the human-machine-interface;
Fig. 44 a further schematic front view of the human-machine-interface;
Fig. 45 a schematic front view of a second human-machine-interface; and
Fig. 46 a schematic front view of a third human-machine-interface.
In the figures the same elements or elements having the same function are designated with the same reference sign.
The Figs. provide an overview of a system for assisting a driver of a vehicle when reaching an intersection. In complex traffic constellations it can be difficult for the driver to keep track of the order of arrival of other cars at an intersection, in particular a four way or all way stop intersection.
The system mainly comprises two features. Firstly, as the host vehicle is moving towards an intersection with a stop sign and/or stop line: discreetly inform the driver about the upcoming stop; if the movement of the vehicle shows a potential danger of overshooting the stop line, warn the driver; optionally, slow down the vehicle if the vehicle enters the intersection without stopping. Secondly, as the vehicle is stopped at the stop line: inform the driver if it is not safe to enter the intersection; warn the driver if the vehicle starts to enter the intersection when it is not safe to do so.
The first feature is not restricted to U.S. traffic and can be used in other regions as well. The system is intended to work at all-way stop intersections (i.e., there are stop signs on all approaching roads). Such intersections can be 4-way crossroads, T-intersections, or even intersections without any signage. The precise methods to name and describe such intersections are provided in the following. The system is also potentially functional in situations where the approaching road has a stop sign and is connected to a cross road without stop signs (also known as a 2-way stop). Furthermore, traffic rules in the U.S. treat an instrumented intersection (i.e., with traffic lights) that is flashing red as an intersection with stop signs. The system could be extended to work in these two environments as well. The system is based on camera and radar sensors, as well as a map database of intersection topologies. These assumptions are described in detail in the subsequent sections.
The vehicle in which the system operates is also referred to as the "ego vehicle" or "host vehicle", so that the host vehicle can be distinguished precisely from other objects and other traffic participants, in particular other vehicles. If the host vehicle is in danger of running through a stop line, the assistance system could trigger slowing down the vehicle, or stop the vehicle at the stop line. If the driver starts a stopped vehicle into the intersection when it is not safe to do so, the assistance system could make the gas pedal more resistant but would never completely override the driver. The assistance system informs the driver about the precedence order and warns the driver if it is not safe to enter an intersection.
In an automated vehicle, the system signals the right-of-way status to a decision making and planning unit that replaces the human driver.
For example, the implied ability of the system to analyze the current traffic condition and to indicate when it is not the right time or safe to enter an intersection could be extended to form the logic to decide when an autonomous vehicle should enter such an intersection.
For example, in the U.S., traffic and driving rules are set by individual states. For example, California has its own traffic rules and so does Florida. These rules from different states generally are consistent with each other, but do have some differences in certain areas. An initial survey of rules regarding intersections with stop signs in various states found no disagreement among the states. Not all states have a specific section on intersections with stop signs in their DMV (Department of Motor Vehicles) guidebook for drivers. The rules could be dispersed in various sections regarding yielding to other vehicles, traffic signage, etc. Additionally, there are (sometimes unwritten) protocols regulating drivers taking turns entering an all-stop intersection. In states on the US west coast, drivers generally favor a first-come-first-go principle in determining which driver should enter the intersection first.
This principle is actually written down as guidance in the Oregon DMV's guidebook. In Midwest states, drivers emphasize creating an alternating order more than the first-come-first-go principle. No corresponding written rules have been uncovered in those states. At any intersection with stop signs in all four directions, it is common courtesy to allow the driver who stops first to go first. If in doubt, yield to the driver on the right.
In the following, two different representation methods that will be used to fully define an intersection scenario in a formal sense will be described. The first method describes use cases shown in Fig. 1, the use cases respectively representing a situation the host vehicle is in when approaching a four way stop. The second method provides a formalized notation of the topology of an intersection. With this kind of setup it is possible to apply different use cases to different intersection topologies shown in Figs. 2a to 15.
For a better differentiation of the diversity of intersection types, a set of seven categories Ω = {A, B, C, D, E, F, G} is defined, the categories being shown in Figs. 2a and 2b. The set Ω represents common and regularly shaped intersections that can be precisely described in efficient notations. We further introduce a catch-all category H to group all irregularly shaped intersections. A more elaborate notation is necessary to describe topologies of such intersections. In Figs. 2a and 2b, the respective intersection is shown in the column "illustration". The system includes, but is not restricted to, the intersections and their variants defined by the categories above. The notation used in the following to describe four way stop and three way stop scenarios is described with regard to Fig. 3.
Fig. 3 shows an intersection 10. A gate g of the intersection 10 describes a spatial area where vehicles can enter the intersection. In most cases it can be understood as the area close to the stop line of an incoming lane. Stop lines of the intersection are designated with reference sign 12 in Fig. 3. The actual spatial extension of such areas should incorporate the fact that vehicles do not always stop exactly prior to the stop line 12, i.e., it corresponds to the area within which vehicles are likely to stop. It should not exceed the size of typical vehicles in order to prevent two vehicles from being present within that area at the same time. A gate g could be described by a rectangular area, but also by more sophisticated spatial functions. An intersection such as the intersection 10 consists of multiple legs, also denoted as branches b in the following, which depict all the incoming streets.
One branch b can have one or more gates gi. Moreover, traffic control devices in the form of stop signs 14 are shown in Fig. 3.
The gate topology describes an intersection in terms of branches and gates. The different branches of an intersection are counted in counter-clockwise order, starting with the street where the host vehicle approaches the intersection. Exiting gates are not considered in this notation. A gate topology is represented as follows:

Γ := γ_0 × γ_1 × … × γ_(M−1), γ_m ∈ ℕ, 0 ≤ m ≤ M−1,

where γ_m indicates the number of gates of branch b_m. The total number of branches is indicated by M; for a four way stop, M = 4. The operator '×' separates the different branches from each other. In combination with the introduced set Ω of categories, an intersection I of type ω ∈ Ω is described as follows:

I := ω(Γ) = ω(γ_0 × γ_1 × … × γ_(M−1)), M > 2.

For example, I = A(1×2×1×2) or I = B(2×2×1). Since the most common intersections have the same number of gates on all their incoming branches, an abbreviated notation is used: A1 := A(1×1×1×1), A2 := A(2×2×2×2), B1 := B(1×1×1), D2 := D(2×2×2). For example, the intersection 10 shown in Fig. 4a can be described by A(2×2×2×1).
There are four branches at the intersection 10 according to Fig. 4a. The first three, starting at the branch where the host vehicle enters the intersection and going in counter-clockwise order, have two gates each. The fourth branch only has one gate. In Fig. 4a, the host vehicle is designated with reference sign 16.
Moreover, a gate configuration G is used to describe the current state of all gates and holds the information about every vehicle that enters the stop procedure.
G := ⟨g_0, g_1, …, g_(N−1)⟩, g_i ∈ {E, A, S, L, D, −}, N = Σ_m γ_m.

Starting with the branch on which the host vehicle is located, the gate indices are counted in counter-clockwise order beginning with the far left gate. A ',' denotes the separation of two gates on the same branch and a ';' denotes the beginning of a new branch. The states a gate can be in are shown in Fig. 4b. As can be seen in Fig. 4a, the gate of the host vehicle 16 itself is marked with an underline and can therefore have all of the states above.
An example is shown in Fig. 4c, in which G = ⟨g_0, g_1, g_2, g_3, g_4, g_5, g_6⟩ with g_0 = g_H = D̲ (the underlined host vehicle gate), g_1 = S, g_2 = E, g_3 = A, g_4 = S, g_5 = L, g_6 = S, and therefore G = ⟨D̲, S; E, A; S, L; S⟩. Different traffic scenarios which can occur at respective intersections are shown in Figs. 5 to 15. Preferably, all different combinations are covered.
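As an illustration of the notation, the following is a minimal Python sketch of how the topology and gate configuration could be represented in software. The class and member names are illustrative choices, and the state semantics in the comments are inferred from the surrounding text (Fig. 4b itself is not reproduced here).

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

class GateState(Enum):
    # State letters as used in the gate configuration G; semantics
    # inferred from the description around Figs. 4a-4c.
    EMPTY = "E"        # no vehicle at the gate
    APPROACHING = "A"  # a vehicle is approaching the gate
    STOPPED = "S"      # a vehicle is stopped at the gate
    LEAVING = "L"      # a vehicle is leaving the gate
    DRIVING = "D"      # vehicle driving, not yet in the stop procedure
    UNKNOWN = "-"      # state cannot be determined

@dataclass
class Intersection:
    category: str                # one of the categories in Ω, e.g. "A"
    gates_per_branch: List[int]  # γ_0 ... γ_(M-1), counter-clockwise

    def notation(self) -> str:
        # e.g. "A(2x2x2x1)" for the intersection of Fig. 4a
        return f"{self.category}({'x'.join(map(str, self.gates_per_branch))})"

    @property
    def num_gates(self) -> int:  # N, the total number of gates
        return sum(self.gates_per_branch)

# The example of Figs. 4a/4c: topology A(2x2x2x1) and configuration
# G = <D,S; E,A; S,L; S>, the host vehicle occupying gate g_0.
fig_4a = Intersection("A", [2, 2, 2, 1])
assert fig_4a.notation() == "A(2x2x2x1)"
G = [GateState.DRIVING, GateState.STOPPED, GateState.EMPTY,
     GateState.APPROACHING, GateState.STOPPED, GateState.LEAVING,
     GateState.STOPPED]
assert len(G) == fig_4a.num_gates
```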
The detection module has to cover the entire intersection, i.e., traffic from the left and right as well as oncoming traffic has to be detected. Given the large extension of typical intersections, this requires a large field of view of the sensors used by the system. Such a sensor is, for example, capable of capturing at least one image of at least a portion of the vicinity of the host vehicle and, thus, of the intersection. The larger the field of view, the larger this portion. For example, a stereo vision system utilizing at least one stereo camera and additional radar sensors at the front left and front right corners of the host vehicle are used.
Fig. 16 shows an intersection 10 having the topology A1, the intersection 10 shown in Fig. 16 being used as reference for a simulation. This intersection 10 can be simulated in simulation software for driver assistance systems. Fig. 17 shows the simulated intersection 10 at which the host vehicle 16 and three other cars 17 are standing. A virtual camera is placed behind the windshield of the host vehicle.
Figs. 18a and 18c each show an image captured by the virtual camera having a field of view (FOV) of 50 degrees. The camera can be, for example, a stereo multi-purpose camera (SMPC). Figs. 18b and 18d each show an image captured by the virtual camera having a FOV of 75 degrees. In Figs. 18a and 18b, the host vehicle is approaching the intersection. In Figs. 18c and 18d, the host vehicle is standing at the stop line. Fig. 18e shows an image captured by the virtual camera having a FOV of 120 degrees, wherein the host vehicle is standing at the stop line.
In order to also cover larger intersections, the detection system is extended by additional sensors. A sensor configuration was chosen that uses radar sensors mounted in the front left and front right corners of the vehicle. Alternative sensor configurations include, for example, additional cameras looking to the sides (e.g. wafer-level cameras); surround view cameras at the front; fusion of side radar and monocular camera; front lidar; or side lidar.
Fig. 24 shows a passenger vehicle 20 comprising side radar sensors 26. The positioning of the radar sensors 26 at the front corners also provides valuable input for other applications. The distance range depends on the exact positioning of the sensor and the antenna design, but the range is typically about 30-80 m. However, the accuracy of detected object positions is not equal over the whole field of view of the radar sensor and depends on the antenna design.
Figs. 22 and 23 show a vehicle in the form of a passenger vehicle 20 comprising a second stereo camera system 22 which is also referred to as a "second stereo system".
For example, the second stereo system is used in addition to the SMPC arranged behind the windshield. As can be seen in Figs. 22 and 23, the second stereo system can be arranged at a fender 24 of the passenger vehicle 20.
When the second stereo system is used, the same algorithms used for the front camera (SMPC) can also be used for the side-looking second stereo system. A fusion can be done either on the object level or on the intermediate data level (e.g. Stixels), assuming the relative position and orientation between both stereo systems is known from a calibration step.
Alternatively or additionally, at least one side mono camera could be used. The object detection gets easier if the host vehicle is stationary (e.g. at stop line). The side camera could benefit from recent developments of miniature wafer-level cameras.
Fig. 25 shows a passenger vehicle 20 comprising at least one front surround view camera 28 arranged at the front. Such a surround view camera at the front can cover a wide field of view. However, it has a very limited distance range, provides no stereo information on the 3D scene, and large image distortions can occur.
Moreover, a fusion of a side radar and a mono camera could be used. Thus, complementary properties of the sensors can be exploited; e.g., the camera can be used to verify radar objects or stabilize their lateral position. However, a separate algorithm for object detection and fusion based on radar and vision for vehicles from the left side is required. Furthermore, lidar approaches could be used.
The system relies on two fundamental components: Localization and object detection.
Localization means deriving the host vehicle's position with respect to a known intersection and in particular with respect to the stop lines at this intersection. This requires a precise knowledge of the intersection's topology, which is assumed to be available from a map. Once the relative position of the host vehicle with respect to the intersection is known, it is possible to define areas of interest around all stop lines, corresponding to the intersection's approaching gates. Fig. 26 shows a four way stop intersection 10 with the host vehicle 16 approaching. Three areas of interest 30 have to be analyzed for precedence of other vehicles.
Object detection involves vehicle and pedestrian detection and tracking in order to assign a state label to each gate of the intersection, e.g., empty, vehicle approaching, vehicle stopped, etc. Both the vehicle position and motion state including the acceleration behavior have to be considered. The areas of interest 30 of the localization step could control the focus of attention for the object detection.
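As a hedged sketch of how the areas of interest 30 could steer the object-to-gate assignment, the following assumes the simple case mentioned earlier in which a gate is described by a rectangular area (here axis-aligned in the intersection system); the names GateArea and assign_to_gate are illustrative, not from the patent.

```python
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class GateArea:
    gate_index: int
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        # Simple axis-aligned rectangle test; the description notes that
        # more sophisticated spatial functions are also possible.
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def assign_to_gate(x: float, y: float,
                   areas: Sequence[GateArea]) -> Optional[int]:
    """Assign a detected object position (intersection system) to the
    gate whose area of interest contains it, if any."""
    for area in areas:
        if area.contains(x, y):
            return area.gate_index
    return None  # object lies outside all areas of interest
```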
In the following, an overview on the overall system is provided and the different tasks to be addressed are listed. Then a generic solution for the assistance function is sketched, given the assumptions and scope. Moreover, the requirements for localization and object detection are elaborated.
Fig. 27 shows a diagram which illustrates the high-level architecture of the system. There are three main components to be implemented: Localizer, object manager, state manager. The sensor inputs to this system are a global position, for example from GPS, results from a stereo camera, as well as radar objects if available. In addition, the system requires knowledge databases containing landmarks for precise localization and information on the intersection topology (e.g. stop line positions, gates, dimension, etc.).
The tasks of the different components will be addressed in the following. There are two relevant phases: approaching the stop line and waiting at the stop line. When approaching the stop line: The localizer is used for a rough localization with respect to at least one global coordinate system (GPS). Data from on-board sensors or an IMU (inertial measurement unit) can be used to further refine or filter this global position. The localizer then computes the coarse distance to the next stop intersection or stop line and refines the host vehicle's position and orientation (pose) based on additional landmarks detected by the vision system. The localizer predicts the host vehicle pose based on previous time steps and updates the pose based on real-time measurements and mapped landmarks when the host vehicle begins to approach a stop line. Furthermore, the localizer detects when the host vehicle has reached the stop line and predicts areas of interest. The object manager detects and tracks objects near the predicted areas of interest (or all objects in the scene) and computes probabilities of existence. The state manager updates the internal state machine of the system, the state machine being shown in Fig. 28.
When waiting at the stop line: The localizer estimates the other stop line positions based on the current pose and mapped data and defines areas of interest around the stop line positions. The object manager detects and tracks stationary and moving vehicles within the areas of interest and detects and tracks pedestrians at intersection crosswalks. Furthermore, the object manager computes probabilities of existence. The state manager updates the object-to-gate assignments (probabilistic approach) and updates the object behavior belief with regard to the gate (stopped, approaching, leaving). Furthermore, the state manager updates beliefs for gates being clear or occupied and incorporates context knowledge (pedestrians, timeouts, history of gate states). Additionally, the state manager evaluates whether there are other vehicles preceding the host vehicle (EWS logic) and updates the internal state machine.
Fig. 28 shows the state machine of the system. Each state represents a certain command, indicated by the boxes 32 on the lower row. "No Intersection": normal driving with no STOP intersection in sight, or the vehicle has already passed the stop line and continues driving; the driver is responsible to drive when it is safe. "Approaching Intersection": STOP intersection ahead; the host vehicle must prepare to stop at the stop line. "Stopped at Stop Line": the host vehicle reached the stop line and must wait as long as there are other vehicles that have the right of way.
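A minimal sketch of this three-state machine, with the transition conditions reduced to illustrative boolean inputs (the actual triggers would come from the localizer and object manager), could look as follows.

```python
from enum import Enum, auto

class SystemState(Enum):
    NO_INTERSECTION = auto()           # normal driving, driver responsible
    APPROACHING_INTERSECTION = auto()  # STOP intersection ahead
    STOPPED_AT_STOP_LINE = auto()      # wait for vehicles with right of way

def update_state(state: SystemState, stop_intersection_ahead: bool,
                 at_stop_line: bool, passed_stop_line: bool) -> SystemState:
    # Simplified transition logic; the real triggers (distance to stop
    # line, gate states, etc.) are provided by the other modules.
    if passed_stop_line:
        return SystemState.NO_INTERSECTION
    if state is SystemState.NO_INTERSECTION and stop_intersection_ahead:
        return SystemState.APPROACHING_INTERSECTION
    if state is SystemState.APPROACHING_INTERSECTION and at_stop_line:
        return SystemState.STOPPED_AT_STOP_LINE
    return state
```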
The most important point in time is when the host vehicle arrives at the stop line. At this time all stopped cars at other gates precede the host vehicle, all moving cars that have already left a gate (accelerating) precede the host vehicle, and the host vehicle precedes all other cars that have not yet stopped, i.e., that are approaching a gate (decelerating).
Any moving vehicles in the intersection are irrelevant for the precedence evaluation.
However, they must be considered when the host vehicle actually starts. As opposed to automated driving, the responsibility for assessing when it is safe to start the vehicle is on the driver and not on the driver assistance system.
The pseudo code or algorithm shown in Fig. 29 describes the above logic of the system in a formal sense. It can be used to determine whether the host vehicle has the right of way.
First, the set G contains the indices of all relevant gates that potentially precede the host vehicle. The relevance of a gate depends on the intersection type and maneuver intention. In the basic version, G contains all indices from 1 to N, indicating all gates are considered. The set G is then iteratively updated based on the current gate configuration.
The "updateGateStates" method uses the input from the object detection module to update the gate states for all gates in G. All gates with state S for stopped precede the host vehicle, and should therefore remain in 0. All other gates can be removed from 0, since the host vehicle precedes them, or at least does not have to wail for these particular gates from now on, in case a vehicle is just leaving the gate. If G equals the empty set, there is no gate left that precedes the host vehicle. However, this does not automatically imply that it is also safe to start, since it is not considered whether the inner of the intersection or the exit gates are free. This has to be addressed by a different module for an automated vehicle.
There are several extensions to be made to the base version introduced above. For two way stop intersections, the gates that correspond to branches that always have the right of way can be removed from the initial set directly, since the Stop-Logic does not apply to these gates. If the host vehicle's intention is known (e.g. based on the turn signal), one could further exclude some gates a priori. For example, if the host vehicle wants to turn right at a four way stop intersection, there is no conflict with gates from the right branch if a U-turn is not allowed for this branch. Furthermore, if the intention of other vehicles is also known, either from on-board sensors or via communication, the initial gate set G could be reduced accordingly.
Therefore, the general form G = f(θ) is introduced, as illustrated by the algorithm in Fig. 30, where f is a function of some parameters θ that incorporate additional knowledge on the intersection type, the intention of drivers, and the legal maneuvers at this intersection. In addition, a special pedestrian handling is introduced.
Assuming one can distinguish between stopped cars that could start directly and stopped cars that have to give the right of way to a pedestrian, this mechanism prevents the host vehicle from entering the intersection before other cars that actually have the right of way but have to wait for a pedestrian. The pseudo-code samples or algorithms shown in Figs. 29 and 30 are only a simplified example of how to solve this problem.
Alternatively, one can represent the problem by a graphical model, e.g. a factor graph, and compute a belief for "MUST_STOP" and "GO_WHEN_SAFE". In such a probabilistic approach, the actual decision is shifted to the very end of the processing chain, i.e., it does not require hard decisions at earlier steps, for example, whether a given gate is clear or not based on the object motion state. This method is preferred due to better robustness.
In the following the main elements and strategies for localization are presented. The localization can be divided into two parts: a rough localization that is used to detect whether the host vehicle approaches a known stop intersection, and a precise localization of the host vehicle with respect to the intersection. The precise localization can be a refinement of the coarse localization.
With regard to Fig. 31, there are three coordinate systems that are used for localization: a world system, an intersection system, and a host system. The world system is a global geographic reference system (e.g. the World Geodetic System, WGS84, used by GPS).
The intersection system has its origin at the center of the intersection, the x-axis pointing north and the y-axis pointing west. The host system has its origin at the center of the rear axle of the host vehicle (on the ground), the x-axis being aligned with the longitudinal axis of the vehicle pointing to the front, the y-axis being aligned with the lateral axis of the vehicle pointing to the left. All coordinate systems are right-handed.
The transformation between the world system and the intersection system must be known. The transformation between the host system and the intersection system is unknown and has to be estimated as part of the localization.
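The composition of these transformations can be sketched as follows, treating each pose as an SE(2) transform (x, y, heading); this is a generic illustration under that assumption, not an interface defined in the patent.

```python
import math

def compose(a, b):
    """Compose two SE(2) poses a*b, each given as (x, y, heading)."""
    ax, ay, ah = a
    bx, by, bh = b
    return (ax + bx * math.cos(ah) - by * math.sin(ah),
            ay + bx * math.sin(ah) + by * math.cos(ah),
            ah + bh)

def invert(p):
    """Inverse of an SE(2) pose (x, y, heading)."""
    x, y, h = p
    c, s = math.cos(h), math.sin(h)
    return (-(x * c + y * s), x * s - y * c, -h)

def host_in_intersection(world_T_intersection, world_T_host):
    # Host pose relative to the intersection, from the known map
    # transform world->intersection and the GPS-based host pose in the
    # world system (the quantity that has to be estimated).
    return compose(invert(world_T_intersection), world_T_host)
```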
The objective of the coarse localization via GPS is to derive a global position and orientation (heading) of the host vehicle. This position then defines a so-called 'electronic horizon', a region around the host vehicle for which additional information is available from an offline map. If a known stop intersection falls into the electronic horizon of the host vehicle, the transformation between the host vehicle's local coordinate system and the intersection's coordinate system can be derived based on the known transformation from the world system to the intersection system as well as the host vehicle's pose in the world system. Fig. 32 shows a table of attributes used in the coarse localization. The global accuracy of the coarse localization should be in the meter range in order to roughly estimate the distance to the stop intersection. Moreover, a precise localization via landmarks can be performed. For a precise definition of the areas of interest around the different intersection gates, the coarse localization is not sufficient. Errors especially in the heading direction lead to wrong areas of interest; thus, it is not possible to correctly assign objects to gates. To overcome this problem, the coarse pose from the GPS system has to be refined based on landmarks. Such landmarks have to be extracted in a preprocessing step and stored in a database for each intersection.
At runtime these landmarks have to be identified using sensors (e.g. the stereo camera) and matched with landmarks in the database. Given the known global position of each landmark, one can then derive the position and orientation of the sensors with regard to the landmarks, and thus, the pose of the host vehicle.
A landmark is a feature that consists of three parts: a set of specific attributes, global 3D coordinates (world system) and local 3D coordinates (intersection system). The actual attributes depend on the type of landmark. Landmarks can be characteristic points, lane markings, lines or line segments, or 3D structures. The global position can be used to derive an absolute global pose of the host vehicle, while the local coordinates are used to derive the relative pose between the intersection and the host vehicle. The latter can be typically much more precise.
There are several ways of extracting landmarks: Geo-referenced measurements in the field (manual), geo-referenced aerial imagery (manual or automated), geo-referenced lidar point clouds (manual or automated), geo-referenced street level imagery + 3D points from stereo vision (manual or automated).
For example, a vision-based localization with 2D features can be conducted. 2D image features with characteristic color or texture properties are extracted from input images. In particular, features on the road (lane markings, manholes, etc.) are well-suited for this due to their strong contrast. Road features further allow for monocular vision (e.g. a rear view camera), assuming a known transformation between the camera and a planar ground plane.
If 3D point data from a stereo camera is available, one can also obtain a 3D position for a given 2D feature in the image that does not have to lie on the ground, but can be part of, e.g., building facades. Once a set of feature points is assigned to landmarks, bundle adjustment techniques or other state estimation techniques, for example particle filters, can be applied to estimate the actual pose together with some uncertainty measures.
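As a deliberately simplified stand-in for such estimators, the following sketch computes a closed-form 2D least-squares rigid alignment between matched landmarks observed in the host system and their mapped counterparts in the intersection system; a real implementation would use the bundle adjustment or particle filter techniques named above, and the function name is an illustrative assumption.

```python
import math

def estimate_pose_2d(landmarks_host, landmarks_map):
    """Estimate the host pose (x, y, heading) in the intersection system
    from N matched 2D landmark pairs (observed, mapped)."""
    n = len(landmarks_host)
    cx_h = sum(p[0] for p in landmarks_host) / n
    cy_h = sum(p[1] for p in landmarks_host) / n
    cx_m = sum(p[0] for p in landmarks_map) / n
    cy_m = sum(p[1] for p in landmarks_map) / n
    # Accumulate the 2D cross-covariance terms for the rotation estimate.
    sxx = sxy = syx = syy = 0.0
    for (hx, hy), (mx, my) in zip(landmarks_host, landmarks_map):
        hx, hy, mx, my = hx - cx_h, hy - cy_h, mx - cx_m, my - cy_m
        sxx += hx * mx; sxy += hx * my
        syx += hy * mx; syy += hy * my
    heading = math.atan2(sxy - syx, sxx + syy)
    c, s = math.cos(heading), math.sin(heading)
    # Translation maps the observed centroid onto the mapped centroid.
    tx = cx_m - (c * cx_h - s * cy_h)
    ty = cy_m - (s * cx_h + c * cy_h)
    return tx, ty, heading
```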
Alternatively or additionally, a vision-based localization with 3D features can be performed. Beside point features one can also use other primitives as input for the bundle adjustment such as vertical lines in 3D (building edges), poles (traffic lights, traffic signs, trees,...), or planes (building facades). Such features are extracted from 3D point clouds.
To analyze the gate states, a relative precision between the host vehicle and the areas of interest is more important than absolute accuracy in a global context. The positioning must be precise enough to define the right areas of interest, i.e., to identify all relevant stop lines based on the host vehicle's relative position and the stored map data.
Landmarks are stored in a Geo Information System (GIS). Such systems are special databases for geo-referenced content (vector data) that provide efficient ways to represent and access geo data. It must be possible to efficiently query a relevant set of landmarks given the current host vehicle's pose and field of view.
Furthermore, an intersection topology map can be used. The intersection topology map contains all necessary information on the intersection such as road configuration, number of gates, stop line positions, etc. It is the basis for defining the areas of interest used for object detection and tracking. It should also contain information on pedestrian crosswalks that might be indicated by markings on the ground or virtual markings based on the general assumption that there are always crosswalks at a STOP intersection.
Such information can be available from onboard map databases, or cached versions of map databases stored on a backend server which is connected to the vehicle via a communication channel. The mapped stop lines can either represent a true marking on the pavement or virtual markings in case there is no actual marking on the ground.
Alternatively, or in addition to the map data, vision-based or communication-based solutions could be used to detect the stop lines or stop signs.
In the following the general object detection capabilities required as input for the system are defined. It is assumed that objects result from a sensor fusion module, incorporating one or more sensors such as a stereo camera or radar. In a first version the objects might result from a single sensor (e.g. stereo camera) only. The object detection module should provide the following functionality. Vehicle Detection: detect the presence of a vehicle within a given area in 3D space, which is related to a particular gate of the intersection; return the position in terms of a defined reference point relative to the host vehicle (longitudinal and lateral distance) as a Normal distribution with mean and covariance matrix; return the moving direction of this vehicle relative to the host vehicle as a Normal distribution with mean and variance; return the 2D velocity vector of this vehicle as a Normal distribution with mean and variance; return the 2D acceleration vector of this vehicle as a Normal distribution with mean and variance; return the width of the dominant side (optional); and return a confidence value between 0 and 1 which is proportional to the vehicle's probability of existence (0 means very unlikely, 1 means very likely). Optionally, the detection module determines the other vehicle's yaw rate (change of heading).
Pedestrian Detection: detect the presence of all pedestrians and bicyclists within intersection range; return the relative position of the pedestrian's center to the host vehicle (longitudinal and lateral distance) as a Normal distribution with mean and covariance matrix; return the 2D motion vector of the center of gravity, indicating the longitudinal and lateral velocity of the pedestrian, as a Normal distribution with mean and covariance matrix; return a confidence value between 0 and 1 which is proportional to the pedestrian's probability of existence (0 means very unlikely, 1 means very likely); and optionally return a discrete label of pedestrian intention.
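A hedged sketch of how the listed outputs could be typed as an interface is given below; the field names are illustrative assumptions, and each uncertain quantity carries the mean and covariance (or variance) of the Normal distributions required above.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Vec2 = Tuple[float, float]
Mat2 = Tuple[Tuple[float, float], Tuple[float, float]]

@dataclass
class VehicleDetection:
    position_mean: Vec2          # longitudinal/lateral distance to host
    position_cov: Mat2
    heading_mean: float          # moving direction relative to host [rad]
    heading_var: float
    velocity_mean: Vec2          # 2D velocity vector
    velocity_cov: Mat2
    acceleration_mean: Vec2      # 2D acceleration vector
    acceleration_cov: Mat2
    confidence: float            # probability of existence, 0..1
    dominant_side_width: Optional[float] = None  # optional
    yaw_rate: Optional[float] = None  # optional change of heading

@dataclass
class PedestrianDetection:
    position_mean: Vec2          # pedestrian center relative to host
    position_cov: Mat2
    motion_mean: Vec2            # longitudinal/lateral velocity
    motion_cov: Mat2
    confidence: float            # probability of existence, 0..1
    intention_label: Optional[str] = None  # optional discrete intention
```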
In Fig. 33, the coordinate systems used for object detection are shown. There is an intersection system, a host system, and an object system. The intersection system has its origin at the center of the intersection, the x-axis pointing north and the y-axis pointing west. The host system has its origin at the center of the rear axle of the host vehicle (on the ground), the x-axis being aligned with the longitudinal axis of the vehicle and pointing to the front, the y-axis being aligned with the lateral axis of the vehicle and pointing to the left. The object system has its origin at a defined reference point on a particular object, the x-axis being aligned with the longitudinal axis of the vehicle and pointing to the front, the y-axis being aligned with the lateral axis and pointing to the left. Each vehicle has its own local object system.
Fig. 34 shows a table of example attributes used for vehicle detection. Fig. 35 shows a table of example attributes used for pedestrian detection. Potential attributes for vehicle or pedestrian detection are not limited to those provided in the figures.
Fig. 36 shows a parked car 34 at a four way stop intersection. It must not be confused with a car stopped at the stop line (gate 36). Since the assistance system is not a collision mitigation system, a very precise knowledge of other objects' pose and motion state is not necessary. There is a significant safety distance to objects at other gates; thus, deriving the discrete states of the objects is most important. This means the accuracy demands in general are lower for this assistance system compared to existing driver assistance functions. However, it must be possible to clearly assign objects to the intersection gates and to resolve whether a vehicle is slowing down, waiting, or accelerating. Furthermore, the system must be able to distinguish between a car stopped at the stop line and a parked car at the road side as in the above example.
The object detection module needs to be able to deal with partial occlusions, e.g., by traffic signs, utility poles, trees, or parked cars. Fig. 37 shows an intersection at which an object 38 is arranged, the object 38 causing a partial occlusion at the right side with respect to the host vehicle. Such occlusions mostly affect the system while approaching the intersection, or if pedestrians are present. However, there are also situations where the view to the area of interest is partly blocked by some infrastructure elements, even if the host vehicle stops at the stop line.
In the following, a basic HMI (human-machine-interface) concept for the assistance function is introduced. The key idea of the HMI design is to have an interface that is subtle enough not to distract the driver and visible enough to actually transmit the desired information. The HMI concept for the assistance function is based on existing driver assistance functions such as lane keeping, blind spot monitoring, and attention assist.
Fig. 38 shows the HMI state machine of the assistance system; in other words, the behavior of the system is modeled as the state machine shown in Fig. 38. There, the boxes indicate the states that have to be visualized to the driver. These states are independent from the actual output, which can be visual, acoustical, haptic, or a combination thereof.
With regard to Figs. 40 to 44, a display is introduced, the display being also referred to as "combi display". The display is arranged in the interior of the vehicle and used to output at least one signal to warn the driver if he does not have the right of way and the host vehicle is nevertheless entering a stop intersection.
The HMI can be realized in future cars according to other advanced UX (user experience) concepts. The designs provided here are an abridgement of those contemplated; they aim to provide at least the basic information necessary to indicate to the vehicle operator/occupant(s) that the assistance system is actively performing the tasks discussed above.
The status of the system can be visualized via a (colored) stop sign icon as shown in Figs. 40 to 46. Optionally the system status can be communicated via an acoustical signal.
Alternative concepts include, for example, an LED light panel in the field of view of the driver, shown in Fig. 45, to prevent the driver from looking away from the intersection; and/or haptic feedback via the brake and/or throttle pedal.
The HMI for an automated vehicle can be of an alternate design. It can inform passengers in the car about the status of the system. It can further have an external communication module that communicates with the outside world, for instance, to signal the internal state to pedestrians or other traffic participants. For example, the car could signal to a pedestrian that it will wait for the pedestrian to cross the road.
Additionally, one can keep the host vehicle on hold when the driver releases the brake pedal, requiring the driver to push the gas pedal to start driving; slow the host vehicle down slightly if it is ignoring the stop line; or modify throttle control parameters to create a haptic feedback (leveraging concepts from economical driving functions).
The final HMI concept may be a combination of different elements, or may look completely different. In any case, it should not induce the driver to take his eyes off the road to look at a screen inside the vehicle.
List of reference signs
10 intersection
12 stop line
14 stop sign
16 host vehicle
17 other car
18 field of view
20 passenger vehicle
22 second stereo camera system
24 fender
26 radar sensor
28 surround view camera
30 area of interest
32 box
34 parked car
36 gate
38 object
branch
gi gate

Claims (5)

  1. A system for assisting a driver of a vehicle (16) when reaching an intersection (10), the system comprising:
- a detection module which detects a position, orientation, and motion state of at least one other traffic participant (17) approaching or at the intersection by means of at least one sensor of the vehicle (16);
- a localization module which:
  o relates the position, orientation, and motion state of all traffic participants detected by the detection module, and
  o computes a distance of the vehicle to their applicable stop position at the intersection;
- a situation analysis module which:
  o combines the information from the detection module and the localization module in order to compute estimated arrival times,
  o performs a precedence evaluation, and
  o evaluates a probability for the vehicle (16) to have the right of way at the intersection depending on the times at which the vehicle (16) and the other traffic participant (17) reached their respective stop positions; and
- a communication module which, depending on the evaluated probability, warns the driver if the driver is likely to run the stop position and does not have the right of way.
  2. The system according to claim 1, characterized in that the communication module informs the driver about the upcoming stop position when reaching the intersection (10).
  3. The system according to claim 1 or 2, characterized in that the system comprises an actuating module which automatically brakes the vehicle (16) and/or increases the resistance of a gas pedal of the vehicle (16) in dependency on the evaluated probability.
  4. The system according to any one of the preceding claims, characterized in that the detection module detects at least one traffic control device (14) of the intersection (10), wherein the probability is evaluated in dependency on the traffic control device.
  5. A vehicle (16) having the system according to any one of the preceding claims.
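Not part of the claims, but as a toy illustration of the precedence evaluation recited in claim 1: the earlier a vehicle reached its stop position, the higher its probability of having the right of way. The logistic weighting of pairwise arrival-time gaps below is an assumption; the underlying rule is first-to-stop, first-to-go at a multi-way stop:

```python
import math

def right_of_way_probability(host_stop_time: float,
                             other_stop_times: list[float],
                             scale_s: float = 1.0) -> float:
    """Probability that the host may proceed, derived from pairwise
    comparisons of stop-arrival times (seconds); a tie yields 0.5 per pair."""
    p = 1.0
    for t_other in other_stop_times:
        # Sigmoid of the arrival-time gap: the earlier the host stopped
        # relative to the other vehicle, the closer this factor is to 1.
        p *= 1.0 / (1.0 + math.exp((host_stop_time - t_other) / scale_s))
    return p
```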
GB1322445.6A 2013-12-18 2013-12-18 Driver assistance system Withdrawn GB2510698A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1322445.6A GB2510698A (en) 2013-12-18 2013-12-18 Driver assistance system

Publications (3)

Publication Number Publication Date
GB201322445D0 GB201322445D0 (en) 2014-02-05
GB2510698A true GB2510698A (en) 2014-08-13
GB2510698A8 GB2510698A8 (en) 2014-09-17

Family

ID=50071043


Country Status (1)

Country Link
GB (1) GB2510698A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102016220947A1 (en) * 2016-10-25 2018-06-28 Ford Global Technologies, Llc Driver assistance system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008126755A (en) * 2006-11-17 2008-06-05 Toyota Motor Corp Travel support device
JP2009298193A (en) * 2008-06-10 2009-12-24 Fuji Heavy Ind Ltd Driving support device for vehicle
JP2010033441A (en) * 2008-07-30 2010-02-12 Fuji Heavy Ind Ltd Vehicle driving support device
US20130253815A1 (en) * 2012-03-23 2013-09-26 Institut Francais Des Sciences Et Technologies Des Transports, De L'amenagement System of determining information about a path or a road vehicle

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016094224A1 (en) * 2014-12-09 2016-06-16 Toyota Motor Engineering & Manufacturing North America, Inc. Autonomous vehicle detection of and response to yield scenarios
US9534910B2 (en) 2014-12-09 2017-01-03 Toyota Motor Engineering & Manufacturing North America, Inc. Autonomous vehicle detection of and response to yield scenarios
CN105006148A (en) * 2015-07-06 2015-10-28 同济大学 Intersection turning vehicle number estimating method and system
CN105006148B (en) * 2015-07-06 2017-10-31 同济大学 A kind of crossing turns to vehicle number estimation method and system
CN111415529A (en) * 2019-01-04 2020-07-14 丰田自动车工程及制造北美公司 System, method, and computer-readable storage medium for traffic intersection passage
CN111415529B (en) * 2019-01-04 2022-09-06 丰田自动车工程及制造北美公司 System, method and computer-readable storage medium for traffic intersection passage
US11046317B2 (en) 2019-05-31 2021-06-29 Waymo Llc Multi-way stop intersection precedence for autonomous vehicles
US11760354B2 (en) 2019-05-31 2023-09-19 Waymo Llc Multi-way stop intersection precedence for autonomous vehicles
US11055997B1 (en) 2020-02-07 2021-07-06 Honda Motor Co., Ltd. System and method for resolving ambiguous right of way
WO2022175461A1 (en) * 2021-02-22 2022-08-25 Bayerische Motoren Werke Aktiengesellschaft Vehicle control system and method for operating a driving function at a traffic node
US20220306096A1 (en) * 2021-03-25 2022-09-29 Toyota Jidosha Kabushiki Kaisha Drop-off assist device


Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)