EP4030408A1 - Autonomous junction crossing of automated vehicle - Google Patents

Autonomous junction crossing of automated vehicle

Info

Publication number
EP4030408A1
Authority
EP
European Patent Office
Prior art keywords
automated vehicle
junction
neural network
detected
image data
Prior art date
Legal status
Pending
Application number
EP21152407.9A
Other languages
German (de)
French (fr)
Inventor
Ee Heng Chen
Joeran Zeisler
Current Assignee
Bayerische Motoren Werke AG
Original Assignee
Bayerische Motoren Werke AG
Priority date
Filing date
Publication date
Application filed by Bayerische Motoren Werke AG filed Critical Bayerische Motoren Werke AG
Priority to EP21152407.9A priority Critical patent/EP4030408A1/en
Publication of EP4030408A1 publication Critical patent/EP4030408A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • G08G1/166Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes


Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

Provided is a method for planning and/or controlling an autonomous junction crossing of an automated vehicle, wherein the method comprises a junction analysis step, wherein the junction analysis step comprises detecting a traffic junction based on image data corresponding to an environment of the automated vehicle using a first neural network, and a decision-making step, wherein the decision-making step comprises deciding if the automated vehicle can cross the detected traffic junction based on the image data using a Bayesian network.

Description

  • The present disclosure relates to a method for planning and/or controlling an autonomous junction crossing of an automated vehicle, a control unit comprising means to execute the method, an automated vehicle comprising the control unit, and a program recorded on a computer-readable recording medium configured to execute the method.
  • In the prior art, methods for the autonomous crossing of traffic junctions are disclosed. These methods are based on an environment model: localization algorithms are used to identify the positions of the automated vehicle, e.g. of an ego car, and of other traffic participants in a known map. Once the positions are known, they are used to determine whether the traffic junction is safe to cross and, subsequently, to plan a successful junction crossing.
  • This approach is disclosed in CN107272687A, which describes a driving behavior decision-making system for an autonomously driving bus. An information fusion processing module performs fusion processing on visual information, modeling information, dynamic information, positioning information and road information from a high-precision map, integrating them into a common three-dimensional coordinate system. A driving behavior decision module then determines the driving scenario in which the autonomous bus is located based on the fused information.
  • A problem with the prior art is the accumulation of errors: errors arise both from the generation of the environment model and from the position estimate needed to localize the automated vehicle within that model.
  • It is inter alia an object of the invention to provide a method which at least reduces the above problems.
  • Said object is solved by the features of the independent claims. Advantageous embodiments are described in the dependent claims.
  • A first aspect of the invention relates to a method for planning and/or controlling an autonomous junction crossing of an automated vehicle. The method comprises two steps. The first step is a junction analysis step and comprises detecting a traffic junction based on image data corresponding to an environment of the automated vehicle using a first neural network. The second step is a decision-making step and comprises deciding if the automated vehicle can cross the detected traffic junction based on the image data using a Bayesian network.
  • As used herein, "traffic junction" may be a location where two or more roads meet.
  • As used herein, "automated vehicle" may be a vehicle that acts or operates completely independent of a human driver, may be a vehicle that acts or operates independent of a human driver in some instances while in other instances a human driver may be able to operate the vehicle and/or may be a vehicle that is predominantly operated by a human driver, but with the assistance of an automated driving/assistance system.
  • In other words, the automated vehicle may be a vehicle according to SAE J3016 level 1 with a driving mode-specific execution by a driver assistance system of either steering or acceleration and/or deceleration using information about a driving environment and with an expectation that a human driver performs all remaining aspects of a dynamic driving task of the vehicle.
  • The vehicle may be a vehicle according to SAE J3016 level 2 with the driving mode-specific execution by one or more driver assistance systems of both steering and acceleration and/or deceleration using information about the environment of the vehicle and with the expectation that the human driver performs all remaining aspects of the dynamic driving task.
  • The vehicle may be a vehicle according to SAE J3016 level 3 with the driving mode-specific execution by an automated driving system of all aspects of the dynamic driving task with an expectation that the human driver will respond to a request to intervene.
  • The vehicle may be a vehicle according to SAE J3016 level 4 with the driving mode-specific execution by an automated driving system of all aspects of the dynamic driving task even if a human driver does not respond to a request to intervene; in that case, the vehicle can pull over safely by means of a guiding system.
  • The vehicle may be a vehicle according to SAE J3016 level 5 with the driving mode-specific execution by an automated driving system of all aspects of the dynamic driving task under all roadway and environmental conditions that can also be managed by a human driver.
  • As used herein "neural network" may be a computational model based on a structure and functions of biological neural networks. The neural network may be a nonlinear statistical data modeling tool where relationships between inputs and outputs may be modeled.
  • As used herein, "Bayesian network" may be a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph. Advantages of the Bayesian network may comprise an efficient solution of a decision-making problem under an uncertainty.
  • Since the above-described method is, in contrast to the methods described in the prior art, not based on an environment model, any error that would accumulate from generating the environment model and from the position estimate needed to localize the automated vehicle in it is eliminated.
  • The junction analysis step may comprise capturing image data corresponding to the environment of the automated vehicle and segmenting out a traffic junction area from the captured image data using a semantic segmentation model. The semantic segmentation model may be based on the first neural network.
  • Furthermore, the junction analysis step may comprise detecting and obtaining information about static objects in the environment of the automated vehicle based on the captured image data using a first object detection model. The first object detection model may be based on a second neural network.
  • The junction analysis step may comprise detecting and obtaining information about moving objects in the environment of the automated vehicle based on the captured image data using a second object detection model. The second object detection model may be based on a third neural network.
  • The junction analysis step may comprise detecting the traffic junction based on the traffic junction area.
  • As used herein, "semantic segmentation model" may be a model for a process of partitioning a digital image into multiple segments or parts. The goal thereof is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze for a control unit. Semantic segmentation is an approach detecting, for some or every pixel, a belonging class of an object. The belonging classes may include a traffic junction area class and a non-traffic junction area class, wherein parts of the captured image data corresponding to pixels representing the traffic junction area are assigned to the traffic junction area class and parts of the captured image data corresponding to pixels not representing the traffic junction area, e.g. representing an environment around the traffic junction area, are assigned to the non-traffic junction area class. Thus, it is possible to segment out a traffic junction area from the captured image data using the semantic segmentation model.
  • As used herein, "object detection model" may be a model enabling a control unit to detect instances of semantic objects of a certain class (such as humans, buildings, traffic signs, traffic lights and/or vehicles, e.g. cars) in digital images, here in the captured image data.
  • The decision-making step may comprise assigning a value between 0 and 1 to the information about the detected static objects and/or the information about the detected moving objects as affordance values and inputting the affordance values into the Bayesian network to decide if the automated vehicle can cross the detected traffic junction.
  • The Bayesian network can be used to encode traffic rules, e.g. based on the detected moving and/or static objects, such as traffic lights, signs and/or vehicles, and to decide if the automated vehicle can cross the detected traffic junction without colliding with another moving and/or static object.
  • As used herein, "affordance value" may be a value corresponding to an attribute of an object in the environment of the automated vehicle which define or limit a space of allowed actions of the automated vehicle. Affordance values are obtained by converting a state of the detected moving objects and/or the detected static objects in the environment of the automated vehicle, which could be discrete (for example for a red or green traffic light) or continuous (for example for a velocity of a moving object), to a range of 0 to 1.
  • An advantage of affordance values is that they may represent information about the detected moving and/or static objects with a comparably small amount of data and may therefore be processed efficiently.
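A minimal sketch of the conversion described above follows; the specific mapping functions and their parameters are illustrative assumptions and are not specified by the disclosure.

```python
# Illustrative conversion of detected object states to affordance values
# in [0, 1]; the mappings below are assumptions chosen for this example.

def light_affordance(state: str) -> float:
    """Discrete state: a red light blocks the crossing, green allows it."""
    return {"red": 0.0, "yellow": 0.2, "green": 1.0}[state]

def gap_affordance(velocity_mps: float, v_max: float = 15.0) -> float:
    """Continuous state: a faster approaching vehicle leaves less room to
    act, so the affordance value shrinks towards 0."""
    clipped = min(max(velocity_mps, 0.0), v_max)
    return 1.0 - clipped / v_max

affordances = [light_affordance("green"), gap_affordance(6.0)]  # [1.0, 0.6]
```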
  • The decision-making step may only be carried out when the traffic junction is detected in the junction analysis step.
  • The first neural network, the second neural network and/or the third neural network may be a convolutional neural network.
  • As used herein, "convolutional neural network" may be a neural network which consists of an input and an output layer, as well as at least one hidden layer. An activation function may be a rectified linear unit layer and may be subsequently followed by additional convolutions such as pooling layers, fully connected layers and/or normalization layers, referred to as hidden layers because their inputs and outputs are masked by the activation function and final convolution.
  • Since a convolutional neural network need not have full connectivity between nodes, it may need less memory and less time to detect moving and/or static objects in the environment of the automated vehicle based on the image data than a fully connected neural network, when processed by the same control unit.
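For concreteness, the following PyTorch sketch instantiates the layer sequence named in the definition above; the layer sizes and the 240x320 input resolution are arbitrary assumptions.

```python
import torch
import torch.nn as nn

# Sketch of the layer types named above for a 3 x 240 x 320 input image.
cnn = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),  # convolutional (hidden) layer
    nn.ReLU(),                                  # rectified linear unit activation
    nn.MaxPool2d(2),                            # pooling layer -> 8 x 120 x 160
    nn.BatchNorm2d(8),                          # normalization layer
    nn.Flatten(),
    nn.Linear(8 * 120 * 160, 2),                # fully connected output layer
)
logits = cnn(torch.rand(1, 3, 240, 320))        # output scores, shape (1, 2)
```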
  • The decision-making step may further comprise outputting a control signal to the automated vehicle. The control signal may cause the automated vehicle to cross the detected traffic junction, if it is decided that the automated vehicle can cross the detected traffic junction, or to stop the automated vehicle prior to the detected traffic junction, if it is decided that the automated vehicle cannot cross the detected traffic junction.
  • A second aspect of this invention relates to a control unit comprising means configured to execute the method as described above.
  • As used herein, "control unit" may be an embedded system in automotive electronics that controls one or more electrical systems or subsystems in the automated vehicle.
  • The control unit may comprise a junction analysis means for executing the junction analysis step, a decision-making means for executing the decision-making step and/or a combined means that can execute the junction analysis step and the decision-making step. The control unit may also comprise a means for a signal input, e.g. the detected image data, and/or a means for a signal output, e.g. to output the control signal.
  • A third aspect of this invention relates to an automated vehicle comprising the control unit as described above.
  • The vehicle may also comprise one or more cameras to provide image data, i.e. the image data used in the above described method, of an environment of the automated vehicle.
  • A fourth aspect of this invention relates to a program recorded on a computer-readable recording medium configured to execute the method as described above.
  • In the following, an embodiment is described with reference to figures 1 and 2.
  • Fig. 1 is a schematic flow diagram of a method for planning and/or controlling an autonomous junction crossing of an automated vehicle.
  • Fig. 2 is a schematic structural diagram of an embodiment of a control unit comprising means to execute the method for planning and/or controlling the autonomous junction crossing of the automated vehicle.
  • In figure 1, a schematic flow diagram of a method for planning and/or controlling an autonomous junction crossing of an automated vehicle 10 is shown. The method comprises a junction analysis step S1 and a decision-making step S2. The method is further explained in the description of figure 2.
  • In figure 2, a schematic structural diagram of an embodiment of a control unit 1 is shown. The control unit 1 comprises means to execute the method for planning and/or controlling the autonomous junction crossing of the automated vehicle 10. More specifically, the control unit 1 comprises a junction analysis means 2 and a decision-making means 3. The junction analysis means 2 is configured to execute the junction analysis step S1 and the decision-making means 3 is configured to execute the decision-making step S2. The automated vehicle 10 comprises a camera 4. In another embodiment it would be conceivable to use more than one camera 4.
  • The junction analysis means 2 comprises a first neural network 21, a second neural network 22 and a third neural network 23. In the present embodiment, the first neural network 21, the second neural network 22 and the third neural network 23 are convolutional neural networks, respectively.
  • The decision-making means 3 comprises a Bayesian network 31.
  • In the present embodiment, image data 421 is captured by the camera 4. The captured image data 421 is an input for the junction analysis means 2, i.e. for the first neural network 21, the second neural network 22 and the third neural network 23.
  • The first neural network 21 uses the captured image data 421 to segment out a traffic junction area 211 using a semantic segmentation model.
  • The second neural network 22 uses the captured image data 421 to detect and obtain information about static objects 221 in the environment of the automated vehicle 10, e.g. traffic signs and/or traffic lights, using a first object detection model. This information about the detected static objects 221 is used as an input for the decision-making means 3.
  • The third neural network 23 uses the captured image data 421 to detect and obtain information about moving objects 231 in the environment of the automated vehicle 10, e.g. a velocity and/or a position of another vehicle in the environment of the automated vehicle. This information about the detected moving objects 231 is used as an input for the decision-making means 3.
  • Based on the traffic junction area 211, the junction analysis means 2 detects a presence of a traffic junction 241.
  • When the junction analysis means 2 detects the presence of the traffic junction 241 the decision-making means 3 gets activated.
  • The decision-making means 3 assigns a value between 0 and 1 to the information about the detected static objects 221 and/or the information about the detected moving objects 231 as affordance values 311. Affordance values 311 are obtained by converting a state of the detected static objects and/or the detected moving objects in the environment of the automated vehicle, which could be discrete (for example for a red or green traffic light) or continuous (for example for a velocity of a moving object), to a range of 0 to 1.
  • The affordance values 311 are an input for the Bayesian network 31. The Bayesian network 31 decides if the automated vehicle 10 can cross the detected traffic junction 241.
  • The Bayesian network 31 is used to encode traffic rules, e.g. based on the detected static objects 221, such as traffic lights, signs, and/or the detected moving objects 231, such as vehicles, and is used to decide if the automated vehicle 10 can cross the detected traffic junction 241 without colliding with another moving and/or static object.
  • The decision-making means 3 outputs a control signal 312 to the automated vehicle 10. The control signal 312 causes the automated vehicle 10 to cross the detected traffic junction 241 if the Bayesian network 31 decides that the automated vehicle 10 can cross the detected traffic junction 241. If the Bayesian network 31 decides that the automated vehicle 10 cannot cross the detected traffic junction 241, the control signal 312 causes the automated vehicle 10 to stop prior to the detected traffic junction 241.
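The embodiment's overall data flow can be summarized by the following illustrative glue code; every callable is a hypothetical stand-in for the corresponding means or network described above, not the patented implementation.

```python
# Illustrative data flow for the embodiment; all callables are hypothetical
# stand-ins for the means 2 and 3 and the networks 21-23 and 31 above.

def plan_junction_crossing(image, segment_junction, detect_static,
                           detect_moving, to_affordances, bayes_can_cross):
    # Junction analysis step S1.
    junction_area = segment_junction(image)    # first neural network 21
    static_objects = detect_static(image)      # second neural network 22
    moving_objects = detect_moving(image)      # third neural network 23

    if not junction_area.any():                # no traffic junction 241 detected
        return None                            # decision-making step S2 is skipped

    # Decision-making step S2.
    affordances = to_affordances(static_objects, moving_objects)  # values in [0, 1]
    can_cross = bayes_can_cross(affordances)   # Bayesian network 31
    return "CROSS" if can_cross else "STOP"    # control signal 312
```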
  • Reference signs list

    1    Control unit
    10   Automated vehicle
    2    Junction analysis means
    21   First neural network
    22   Second neural network
    23   Third neural network
    211  Traffic junction area
    221  Information about the detected static objects in the environment of the automated vehicle
    231  Information about the detected moving objects in the environment of the automated vehicle
    241  Traffic junction
    3    Decision-making means
    31   Bayesian network
    311  Affordance values
    312  Control signal
    4    Camera
    421  Image data
    S1   Junction analysis step
    S2   Decision-making step

Claims (9)

  1. A method for planning and/or controlling an autonomous junction crossing of an automated vehicle (10), wherein the method comprises:
    - a junction analysis step (S1), wherein the junction analysis step (S1) comprises detecting a traffic junction (241) based on image data (421) corresponding to an environment of the automated vehicle (10) using a first neural network (21), and
    - a decision-making step (S2), wherein the decision-making step (S2) comprises deciding if the automated vehicle (10) can cross the detected traffic junction (241) based on the image data (421) using a Bayesian network (31).
  2. The method according to claim 1, wherein the junction analysis step (S1) comprises:
    - capturing image data (421) corresponding to the environment of the automated vehicle (10),
    - segmenting out a traffic junction area (211) from the captured image data (421) using a semantic segmentation model, the semantic segmentation model being based on the first neural network (21),
    - detecting and obtaining information about static objects (221) in the environment of the automated vehicle (10) based on the captured image data (421) using a first object detection model, the first object detection model being based on a second neural network (22),
    - detecting and obtaining information about moving objects (231) in the environment of the automated vehicle (10) based on the captured image data (421) using a second object detection model, the second object detection model being based on a third neural network (23), and
    - detecting the traffic junction (241) based on the traffic junction area (211).
  3. The method according to claim 2, wherein the decision-making step (S2) comprises:
    - assigning a value between 0 and 1 to the information about the detected static objects (221) and the information about the detected moving objects (231) as affordance values (311), and
    - inputting the affordance values (311) into the Bayesian network (31) to decide if the automated vehicle (10) can cross the detected traffic junction (241).
  4. The method according to any of the claims 1 to 3, wherein the decision-making step (S2) is only carried out when the traffic junction (241) is detected in the junction analysis step (S1).
  5. The method according to any of the claims 1 to 4, wherein the first neural network (21), the second neural network (22) and/or the third neural network (23) is a convolutional neural network.
  6. The method according to any of the claims 1 to 5, wherein the decision-making step (S2) further comprises outputting a control signal (312) to the automated vehicle (10), wherein the control signal (312) causes the automated vehicle (10) to cross the detected traffic junction (241), if it is decided that the automated vehicle (10) can cross the detected traffic junction (241), or to stop the automated vehicle (10) prior to the detected traffic junction (241), if it is decided that the automated vehicle (10) cannot cross the detected traffic junction (241).
  7. A control unit (1) comprising means configured to execute the method according to any of claims 1 to 6.
  8. An automated vehicle (10) comprising the control unit (1) according to claim 7.
  9. A program recorded on a computer-readable recording medium configured to execute the method of any of the claims 1 to 6.
EP21152407.9A 2021-01-19 2021-01-19 Autonomous junction crossing of automated vehicle Pending EP4030408A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP21152407.9A EP4030408A1 (en) 2021-01-19 2021-01-19 Autonomous junction crossing of automated vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP21152407.9A EP4030408A1 (en) 2021-01-19 2021-01-19 Autonomous junction crossing of automated vehicle

Publications (1)

Publication Number Publication Date
EP4030408A1 (en) 2022-07-20

Family

ID=74191640

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21152407.9A Pending EP4030408A1 (en) 2021-01-19 2021-01-19 Autonomous junction crossing of automated vehicle

Country Status (1)

Country Link
EP (1) EP4030408A1 (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107272687A (en) 2017-06-29 2017-10-20 深圳市海梁科技有限公司 Driving behavior decision-making system for an autonomously driving public transit vehicle
US20200410254A1 (en) * 2019-06-25 2020-12-31 Nvidia Corporation Intersection region detection and classification for autonomous machine applications

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JOEL JANAI ET AL: "Computer Vision for Autonomous Vehicles: Problems, Datasets and State of the Art", 17 December 2019 (2019-12-17), XP055657497, Retrieved from the Internet <URL:https://arxiv.org/pdf/1704.05519.pdf> [retrieved on 20200114] *
WELLHAUSEN LORENZ ET AL: "Map-optimized probabilistic traffic rule evaluation", 2016 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), IEEE, 9 October 2016 (2016-10-09), pages 3012 - 3017, XP033011816, DOI: 10.1109/IROS.2016.7759466 *


Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20220922

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230424

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20240314