WO2021170580A1 - Method for training at least one algorithm for a control unit of a motor vehicle, computer program product and motor vehicle


Info

Publication number
WO2021170580A1
WO2021170580A1 (application PCT/EP2021/054442)
Authority
WO
WIPO (PCT)
Prior art keywords
mission
simulation
algorithm
motor vehicle
driving
Prior art date
Application number
PCT/EP2021/054442
Other languages
German (de)
English (en)
Inventor
Christoph THIEM
Ulrich Eberle
Original Assignee
Psa Automobiles Sa
Priority date
Filing date
Publication date
Application filed by Psa Automobiles Sa filed Critical Psa Automobiles Sa
Priority to EP21707961.5A (EP4111438A1)
Priority to CN202180017212.9A (CN115176297A)
Publication of WO2021170580A1

Classifications

    • G PHYSICS › G08 SIGNALLING › G08G TRAFFIC CONTROL SYSTEMS › G08G1/00 Traffic control systems for road vehicles › G08G1/16 Anti-collision systems
        • G08G1/166 Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
        • G08G1/165 Anti-collision systems for passive traffic, e.g. including static obstacles, trees
        • G08G1/167 Driving aids for lane monitoring, lane changing, e.g. blind spot detection
    • G PHYSICS › G01 MEASURING; TESTING › G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY › G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00 › G01C21/26 specially adapted for navigation in a road network › G01C21/34 Route searching; Route guidance › G01C21/3453 Special cost functions, i.e. other than distance or default speed limit of road segments › G01C21/3492 employing speed data or traffic data, e.g. real-time or historical
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS › G06N3/00 Computing arrangements based on biological models
        • G06N3/004 Artificial life, i.e. computing arrangements simulating life › G06N3/006 based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
        • G06N3/02 Neural networks › G06N3/08 Learning methods › G06N3/088 Non-supervised learning, e.g. competitive learning

Definitions

  • A method for training at least one algorithm for a control unit of a motor vehicle, a computer program product and a motor vehicle are described here.
  • Methods for training at least one algorithm for a control device of a motor vehicle, computer program products and motor vehicles of the type mentioned at the beginning are known in the prior art.
  • The first partially automated vehicles (corresponding to SAE Level 2 in accordance with SAE J3016) have reached series-production readiness in recent years.
  • Automated (SAE Level ≥ 3 in accordance with SAE J3016) or autonomous (SAE Level 4/5 in accordance with SAE J3016) motor vehicles must be able to react independently and with maximum safety in unfamiliar traffic situations on the basis of a variety of specifications, for example the destination and compliance with the applicable traffic rules. Since real traffic is highly complex due to the unpredictability of the behavior of other road users, especially human road users, it is almost impossible to program corresponding control units of motor vehicles with conventional methods and on the basis of man-made rules.
  • A system and a method for training a machine-learning model on a simulation platform for operating an autonomous vehicle are known from US 2019/0318267 A1. While a human driver is driving, driving statistics and environmental data are collected for a plurality of driving scenarios so that the model continuously learns the driver's driving style and preferences.
  • The object is thus to develop methods for training at least one algorithm for a control unit of a motor vehicle, computer program products and motor vehicles of the type mentioned at the outset so that autonomously driving motor vehicles can adapt better to the flow of traffic.
  • The object is achieved by a method for training at least one algorithm for a control unit of a motor vehicle according to claim 1, a computer program product according to independent claim 14 and a motor vehicle according to independent claim 15. Further refinements and developments are the subject of the dependent claims.
  • A method for training at least one algorithm for a control unit of a motor vehicle is described below, the control unit being provided for implementing an automated or autonomous driving function with intervention in units of the motor vehicle on the basis of input data using the at least one algorithm, the algorithm being trained by a self-learning neural network. The method comprises the following steps: a) providing a computer program product module for the automated or autonomous driving function, the computer program product module containing the algorithm to be trained and the self-learning neural network; b) providing a simulation environment with simulation parameters, the simulation environment containing map data of a real existing application area and the motor vehicle, the behavior of the motor vehicle being determined by a rule set; c) providing a mission for the motor vehicle; d) providing real-time traffic data of the real existing application area as well as reconstructing the traffic situation in the simulation environment; e) determining a driving time for the mission on the basis of the real-time traffic data; f) performing a simulation of the mission in the simulation environment and determining a simulation driving time for completing the mission; g) comparing the simulation driving time with the driving time, wherein, if the simulation driving time exceeds the driving time by more than a specified time interval, the at least one algorithm and/or the at least one rule set is varied and the simulation is repeated.
  • If step g) (i) is reached, another mission is selected and the method is repeated with the other mission.
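The loop formed by steps e) to g) can be sketched roughly as follows. This is an illustrative Python sketch, not the patent's implementation; `simulate`, the `variant` counter and the tolerance are assumed names standing in for the varied algorithm/rule set:

```python
def train_mission(simulate, driving_time, tolerance_s, max_iterations=100):
    """Sketch of steps e)-g): run the mission in the simulation, compare the
    simulation driving time with the driving time derived from real-time
    traffic data, and vary the rule set until both times are close."""
    rule_set = {"variant": 0}
    sim_time = simulate(rule_set)
    for _ in range(max_iterations):
        if abs(sim_time - driving_time) <= tolerance_s:
            break                      # metric reached: stop varying
        rule_set["variant"] += 1       # g): vary algorithm / rule set ...
        sim_time = simulate(rule_set)  # ... and repeat the simulation
    return rule_set, sim_time
```

A stub `simulate` that improves with each rule-set variant converges to the reference driving time within the tolerance.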
  • Driving data and routes from certain road users are selected as real-time traffic data, with missions being selected on the basis of locations on the routes of the road users.
  • Real-time traffic data can contain statistical data on the flow of traffic, but also travel data from specific road users.
  • Statistical data can be, for example, data that can be obtained by a route calculation algorithm on the basis of environmental parameters such as maximum permitted speeds, traffic lights and traffic volume.
  • Driving data from specific road users allow a comparison with individual driving performance. Different drivers have different driving styles, some drive more defensively, some less defensively.
  • These specific road users may drive individual routes from the starting point to the destination. These routes can form the basis for selecting corresponding missions, and starting points, destination points and intermediate destinations can be determined on the basis of the individual routes.
  • The starting and destination points as well as intermediate destinations can be certain characteristic points along the corresponding routes, for example intersections.
  • The real-time traffic data contain infrastructure information.
  • Infrastructure information can be, for example, information on traffic light switching, road blocks, lane guidance and the like.
  • Including this information increases the degree of realism of the simulation and allows the driving time for the mission to be assessed. For example, a driving time achieved by a driver who encountered only green traffic lights can be disqualified.
  • An optimization algorithm is used in order to minimize deviations between the simulation environment and the real-time traffic data.
  • In this way, a particularly realistic traffic scenario can be generated in the simulation environment, which reproduces the real-time traffic data particularly well and improves the comparability of the driving time and the simulation driving time.
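The deviation that such an optimization algorithm would minimize is not specified in the text; a minimal sketch, assuming matched observations (e.g. average speeds per road segment) and a mean-squared-error measure, could look like this:

```python
def calibration_error(simulated, observed):
    """Sketch of a deviation measure between simulated and real traffic
    observations; an optimizer would tune simulation parameters to make
    this value small. The MSE choice is an assumption, not from the patent."""
    if len(simulated) != len(observed):
        raise ValueError("observations must be matched pairwise")
    return sum((s - o) ** 2 for s, o in zip(simulated, observed)) / len(simulated)
```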
  • The parameters are changed in a randomized manner.
  • Randomization can also prevent an over-specialization of the algorithm.
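One hypothetical way to randomize the traffic-situation parameters (the parameter names and the ±10% spread below are illustrative assumptions):

```python
import random

def randomize_parameters(base, spread=0.1, seed=None):
    """Sketch: jitter each simulation parameter by up to +/-spread so the
    algorithm does not over-specialize on a single traffic situation."""
    rng = random.Random(seed)
    return {name: value * (1.0 + rng.uniform(-spread, spread))
            for name, value in base.items()}
```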
  • A driving time of a road user carrying out the mission, taken from the real-time traffic data, is used as the driving time, or a driving time of the road user is determined with the help of an agent in the simulation environment.
  • In this way, the performance of one algorithm can be compared with that of another algorithm.
  • The algorithm and/or the at least one rule set is trained by means of a reinforcement learning algorithm.
  • A reinforcement learning algorithm allows the algorithm to be improved through a reward function.
  • The reward function can be triggered by the simulation driving time approaching the driving time.
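The patent does not give the reward's functional form; a minimal sketch of a reward that grows as the simulation driving time approaches the driving time (the inverse-deviation shape and `scale_s` are assumptions) could be:

```python
def reward(sim_driving_time, driving_time, scale_s=60.0):
    """Sketch: a perfect match yields 1.0, large deviations approach 0.
    scale_s controls how quickly the reward decays with the deviation."""
    return 1.0 / (1.0 + abs(sim_driving_time - driving_time) / scale_s)
```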
  • The driving time can be an expected value determined from the driving times of multiple iterations of the simulation of the mission.
  • Alternatively, the driving time can be an expected value representing the driving times of several real road users carrying out the mission in the application area.
  • The reference driving time from the real traffic environment is then closer to an average driving time, which means that statistical deviations of individual driving times and the influence of human drivers who drive more or less defensively or aggressively can be reduced.
  • The self-learning neural network modifies the rule set beyond predetermined rule-set limits.
  • Corresponding rule-set limits are, for example, permissible maximum speeds on a certain route, permissible time periods for driving over traffic lights that change to red, the permissibility of crossing solid lines and the like.
  • A standard deviation is taken into account when comparing the simulation driving time and the driving time.
  • A mission can generally be defined as reaching a destination point starting from a starting point.
  • Either several different routes or one specific route may be driven between the starting point and the destination point.
  • With different routes, however, the comparability of driving times suffers. If the route is fixed, the comparability of the simulation driving time with the driving time can be increased.
  • A mission represents driving a route from at least one starting point to at least one destination point.
  • A first independent subject relates to a device for training at least one algorithm for a control unit of a motor vehicle, the control unit being provided for implementing an automated or autonomous driving function with intervention in units of the motor vehicle on the basis of input data using the at least one algorithm, a self-learning neural network being provided for training the algorithm, the device being designed to carry out the following steps: a) providing a computer program product module for the automated or autonomous driving function, the computer program product module containing the algorithm to be trained and the self-learning neural network; b) providing a simulation environment with simulation parameters, the simulation environment containing map data of a real existing application area and the motor vehicle, the behavior of the motor vehicle being determined by a rule set; c) providing a mission for the motor vehicle; d) providing real-time traffic data of the real existing application area as well as reconstructing the traffic situation in the simulation environment; e) determining a driving time for the mission on the basis of the real-time traffic data; f) performing a simulation of the mission in the simulation environment and determining a simulation driving time for completing the mission; g) comparing the simulation driving time with the driving time.
  • If step g) (i) is reached, another mission is selected and the method is repeated with the other mission.
  • Driving data and routes from certain road users are selected as real-time traffic data, missions being selected on the basis of locations on the routes of the road users.
  • The real-time traffic data contain infrastructure information.
  • The device is designed to use an optimization algorithm when simulating the traffic situation in the simulation environment in order to minimize deviations between the simulation environment and the real-time traffic data.
  • The device is set up to vary the mission by changing parameters of the traffic situation in the simulation environment and to carry out the method with the modified mission.
  • The device is set up to carry out the change in the parameters in a randomized manner.
  • A driving time of a road user for carrying out the mission from the real-time traffic data is provided as the expected value, or the device is set up to determine a driving time of the road user with the help of an agent in the simulation environment.
  • The device is set up to train the algorithm and/or the at least one rule set by means of a reinforcement learning algorithm.
  • The driving time can be an expected value determined from the driving times of multiple iterations of the simulation of the mission.
  • Alternatively, the driving time can be an expected value representing the driving times of several real road users carrying out the mission in the application area.
  • The device is set up to modify the rule set beyond predetermined rule-set limits by means of the self-learning neural network.
  • The device is designed to take a standard deviation into account when comparing the simulation driving time and the driving time.
  • A mission represents driving a route from at least one starting point to at least one destination point.
  • Another independent subject matter relates to a computer program product with a computer-readable storage medium on which instructions are embedded which, when executed by at least one computing unit, cause the at least one computing unit to be set up to carry out a method of the aforementioned type.
  • The method can be carried out in a distributed manner on one or more processing units, so that certain method steps are carried out on one processing unit and other method steps on at least one other processing unit, with calculated data being transmitted between the processing units if necessary.
  • Another independent subject relates to a motor vehicle with a computer program product of the type described above. Further features and details emerge from the following description, in which, if necessary with reference to the drawing, at least one exemplary embodiment is described in detail. Described and/or graphically represented features form the subject matter per se or in any meaningful combination, possibly also independently of the claims, and can in particular also be the subject of one or more separate applications. Identical, similar and/or functionally identical parts are provided with the same reference numerals. The figures show schematically:
  • Fig. 1 a plan view of a motor vehicle;
  • Fig. 2 a computer program product module;
  • Fig. 3 a road map of a real existing application area with traffic flow information;
  • Fig. 4 the road map from Fig. 3 with a mission; and
  • Fig. 5 a flow diagram of a training method.
  • Fig. 1 shows a motor vehicle 2, which is set up for automated or autonomous driving.
  • The motor vehicle 2 has a control unit 4 with a computing unit 6 and a memory 8.
  • A computer program product is stored in the memory 8 and is described in more detail below in connection with the figures.
  • The control unit 4 is connected to a number of environmental sensors that allow the current position of the motor vehicle 2 and the respective traffic situation to be recorded. These include environmental sensors 10, 11 at the front of the motor vehicle 2, environmental sensors 12, 13 at the rear of the motor vehicle 2, a camera 14 and a GPS module 16.
  • The environmental sensors 10 to 13 can include, for example, radar, lidar and/or ultrasonic sensors.
  • Sensors for detecting the state of the motor vehicle 2 are also provided, including wheel speed sensors 16, acceleration sensors 18 and pedal sensors 20, which are connected to the control unit 4. With the aid of this motor vehicle sensor system, the current state of the motor vehicle 2 can be reliably detected.
  • The computing unit 6 has loaded the computer program product stored in the memory 8 and executes it. On the basis of an algorithm and the input signals, the computing unit 6 decides on the control of the motor vehicle 2, which it implements by intervening in the steering 22, engine control 24 and brakes 26, each of which is connected to the control unit 4.
  • Data from the sensors 10 to 20 are continuously buffered in the memory 8 and discarded after a predetermined period of time; during this period, these environmental data are available for further evaluation.
  • The algorithm was trained according to the method described below.
  • The computer program product module 30 has a self-learning neural network 32 that trains an algorithm 34.
  • The self-learning neural network 32 learns according to methods of reinforcement learning, i.e. the neural network 32 attempts, by varying the algorithm 34, to receive rewards for improved behavior according to one or more metrics or standards, that is, for improvements to the algorithm 34.
  • In addition, known learning methods of supervised and unsupervised learning, as well as combinations of these learning methods, can be used.
  • The algorithm 34 can essentially consist of a complex filter with a matrix of values, usually called weights by those skilled in the art, which define a filter function. This filter function determines the behavior of the algorithm 34 as a function of the input variables, presently recorded by the environmental sensors 10 to 20, and generates control signals for controlling the motor vehicle 2.
  • The computer program product module 30 can be used both in the motor vehicle 2 and outside the motor vehicle 2. It is thus possible to train the computer program product module 30 both in a real environment and in a simulation environment. According to the teaching described here, the training takes place in particular in a simulation environment, since this is safer than training in a real environment.
  • The computer program product module 30 is set up to define a metric that is to be improved.
  • The metric is the time needed to complete a given mission (hereinafter referred to as the simulation driving time), for example the driving time from a starting point to a destination point, compared with expected values of actually existing driving times.
  • The algorithm 34 can then either be optimized with regard to another mission and trained further, or the algorithm can be tested in a real environment.
  • Fig. 3 shows a simulation environment 36, which is a road map of a real existing application area 37.
  • The road map of the application area 37 serves as the simulation environment 36 for training the algorithm 34.
  • The road map of the application area 37 contains traffic flow information relating to the traffic flow on different roads.
  • This traffic flow information is real-time information that can be made available via various services. Such real-time information can be determined, for example, from cell phone location data, vehicle navigation data, camera recordings from traffic monitoring cameras and the like.
  • Slow traffic can be defined as traffic that flows at an average speed of less than 20 km/h.
  • Very slow traffic can be defined as traffic that flows at an average speed of less than 5 km/h.
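Using the two thresholds just given, a classification of the traffic flow can be sketched as follows (the label for faster traffic, "flowing", is an assumption not taken from the text):

```python
def classify_traffic(avg_speed_kmh):
    """Classify traffic flow by average speed: below 5 km/h 'very slow',
    below 20 km/h 'slow', otherwise 'flowing' (assumed label)."""
    if avg_speed_kmh < 5:
        return "very slow"
    if avg_speed_kmh < 20:
        return "slow"
    return "flowing"
```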
  • Fig. 4 shows the simulation environment 36 of the road map of the application area 37 as well as a mission for the algorithm 34.
  • The present mission is to drive the simulated motor vehicle 2 along a specific route from a starting point S to a destination point Z.
  • The computer program product module 30 uses the real-time traffic data to calculate an expected value for the driving time a vehicle needs to complete the mission, taking into account the prevailing congested traffic 38 and the prevailing heavily congested traffic 40.
  • This expected value is the reference value for the travel time TS required by the simulated motor vehicle 2 to complete the mission.
  • The computer program product module contains the algorithm to be trained and a self-learning neural network.
  • A simulation environment is then made available on the basis of real map data.
  • The simulation environment can also contain other road users and their missions.
  • A mission is determined in the simulation environment. As shown in connection with Fig. 4, the mission can be driving a specific route from a starting point to a destination point.
  • An expected value for a driving time can be calculated on the basis of the real traffic data for the application area.
  • The simulation is carried out and a simulation driving time is determined.
  • Agents in the simulation environment create a traffic situation comparable to the one existing in the real environment; this can also include infrastructure information such as traffic lights.
  • The simulation driving time is then compared with the expected value. If the simulation driving time is not sufficiently close to the expected value, the algorithm and/or the rule set is varied and the simulation is repeated. This step corresponds to the principle of reinforcement learning with a reward metric that the algorithm seeks to achieve.
  • The algorithm can be trained with different missions, for example a mission with the same start and destination but a different traffic situation, or a new mission with a different start and/or a different destination.
  • The algorithm can only be frozen when all metrics have been reached.
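The freezing criterion can be sketched as a check over all missions; this is an illustrative sketch in which `run(mission)` is an assumed callable returning a pair (simulation driving time, driving time), and the tolerance is an assumed parameter:

```python
def may_freeze(missions, run, tolerance_s=30.0):
    """Sketch: the algorithm may only be frozen once the driving-time metric
    is met for every mission; otherwise training must continue."""
    return all(abs(sim - real) <= tolerance_s
               for sim, real in (run(m) for m in missions))
```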

Abstract

The present invention relates to a method for training at least one algorithm for a control unit of a motor vehicle by means of a self-learning neural network, comprising the steps of: providing a simulation environment with simulation parameters, the simulation environment containing map data of a real existing application area and the motor vehicle, the behavior of the motor vehicle being determined by a rule set; providing a mission for the motor vehicle; providing real-time traffic data of the real existing application area and reconstructing the traffic situation in the simulation environment; determining a driving time for the mission on the basis of the real-time traffic data; performing a simulation of the mission in the simulation environment and determining a simulation driving time for completing the mission; and comparing the simulation driving time with the driving time, wherein, if the simulation driving time exceeds the driving time by more than a specified time interval, the at least one algorithm and/or the at least one rule set is varied and the simulation is repeated.
PCT/EP2021/054442 2020-02-27 2021-02-23 Procédé d'apprentissage d'au moins un algorithme pour un dispositif de commande d'un véhicule automobile, produit de programme informatique et véhicule automobile WO2021170580A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21707961.5A EP4111438A1 (fr) 2020-02-27 2021-02-23 Procédé d'apprentissage d'au moins un algorithme pour un dispositif de commande d'un véhicule automobile, produit de programme informatique et véhicule automobile
CN202180017212.9A CN115176297A (zh) 2020-02-27 2021-02-23 用于训练用于机动车的控制器的至少一个算法的方法、计算机程序产品以及机动车

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102020202540.1 2020-02-27
DE102020202540.1A DE102020202540A1 (de) 2020-02-27 2020-02-27 Verfahren zum Trainieren wenigstens eines Algorithmus für ein Steuergerät eines Kraftfahrzeugs, Computerprogrammprodukt sowie Kraftfahrzeug

Publications (1)

Publication Number Publication Date
WO2021170580A1 (fr) 2021-09-02

Family

ID=74732920

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2021/054442 WO2021170580A1 (fr) 2020-02-27 2021-02-23 Procédé d'apprentissage d'au moins un algorithme pour un dispositif de commande d'un véhicule automobile, produit de programme informatique et véhicule automobile

Country Status (4)

Country Link
EP (1) EP4111438A1 (fr)
CN (1) CN115176297A (fr)
DE (1) DE102020202540A1 (fr)
WO (1) WO2021170580A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114944057A (zh) * 2022-04-21 2022-08-26 中山大学 一种路网交通流量数据的修复方法与系统

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
DE102022203422A1 (de) 2022-04-06 2023-10-12 Psa Automobiles Sa Test einer automatischen Fahrsteuerfunktion mittels semi-realer Verkehrsdaten

Citations (6)

Publication number Priority date Publication date Assignee Title
EP3418996A1 (fr) * 2017-06-19 2018-12-26 Hitachi, Ltd. Prédiction de trajectoire d'état de véhicule en temps réel pour la gestion d'énergie d'un véhicule et entraînement autonome
DE102017007136A1 (de) * 2017-07-27 2019-01-31 Opel Automobile Gmbh Verfahren und Vorrichtung zum Trainieren selbstlernender Algorithmen für ein automatisiert fahrbares Fahrzeug
DE102017216202A1 (de) * 2017-09-13 2019-03-14 Bayerische Motoren Werke Aktiengesellschaft Verfahren zur Prädiktion einer optimalen Fahrspur auf einer mehrspurigen Straße
DE102018217004A1 (de) * 2017-10-12 2019-04-18 Honda Motor Co., Ltd. Autonome Fahrzeugstrategiegenerierung
US20190113918A1 (en) * 2017-10-18 2019-04-18 Luminar Technologies, Inc. Controlling an autonomous vehicle based on independent driving decisions
US20190318267A1 (en) 2018-04-12 2019-10-17 Baidu Usa Llc System and method for training a machine learning model deployed on a simulation platform

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
DE102012214979A1 (de) 2012-08-23 2014-02-27 Robert Bosch Gmbh Spurwahlassistent zur Optimierung des Verkehrsflusses (Verkehrsflussassistent)
DE102017003742A1 (de) 2017-04-19 2018-10-25 Daimler Ag Verfahren zum Bestimmen einer optimalen Fahrtroute

Also Published As

Publication number Publication date
DE102020202540A1 (de) 2021-09-02
EP4111438A1 (fr) 2023-01-04
CN115176297A (zh) 2022-10-11

Legal Events

Code 121 (Ep): the epo has been informed by wipo that ep was designated in this application. Ref document number: 21707961; Country of ref document: EP; Kind code of ref document: A1.
Code NENP: Non-entry into the national phase. Ref country code: DE.
Code ENP: Entry into the national phase. Ref document number: 2021707961; Country of ref document: EP; Effective date: 20220927.