EP3891664A1 - Procédé pour l'entraînement d'au moins un algorithme pour un appareil de commande d'un véhicule automobile, produit de programme informatique ainsi que véhicule automobile

Procédé pour l'entraînement d'au moins un algorithme pour un appareil de commande d'un véhicule automobile, produit de programme informatique ainsi que véhicule automobile

Info

Publication number
EP3891664A1
EP3891664A1 (application EP19800939.1A)
Authority
EP
European Patent Office
Prior art keywords
quality
motor vehicle
computer program
program product
algorithm
Prior art date
Legal status
Pending
Application number
EP19800939.1A
Other languages
German (de)
English (en)
Inventor
Ulrich Eberle
Sven Hallerbach
Jakob Kammerer
Current Assignee
Stellantis Auto SAS
Original Assignee
PSA Automobiles SA
Priority date
Filing date
Publication date
Application filed by PSA Automobiles SA filed Critical PSA Automobiles SA
Publication of EP3891664A1 (fr)
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/06 Improving the dynamic response of the control system, e.g. improving the speed of regulation or avoiding hunting or overshoot
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • B60W40/04 Traffic conditions
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 Planning or execution of driving tasks
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B13/0265 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion
    • G05B13/027 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion using neural networks only
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]

Definitions

  • A method for training at least one algorithm for a control device of a motor vehicle is described, the control device implementing an autonomous driving function with intervention in motor vehicle units; a computer program product and a motor vehicle are also described.
  • DE 10 2015 007 493 A1 discloses a method for training a machine-learning-based decision algorithm used in a control device of a motor vehicle. From input data describing the current operating state and/or the current driving situation, the decision algorithm determines output data to be taken into account for controlling the motor vehicle during operation, together with a reliability value describing the reliability of the output data, and was trained on the basis of a basic training data set before being used in the motor vehicle. If the reliability value falls below a threshold value, the input data on which the determination of the output data associated with that reliability value is based are stored as assessment input data and presented at a later point in time to a human assessor; assessment output data corresponding to the output data are then received via an operator input of the assessor, and the decision algorithm is trained on the basis of an improvement training data set formed from the assessment input data and the assigned assessment output data.
  • A disadvantage of the known methods is that the development of series-ready algorithms for autonomously driving motor vehicles is complex and takes a very long time.
  • The object thus arises to further develop methods for training at least one algorithm for a control device of a motor vehicle, computer program products and motor vehicles of the type mentioned at the outset in such a way that autonomous driving functions can be implemented faster and with higher quality than previously in autonomously driving motor vehicles.
  • The object is achieved by a method for training at least one algorithm for a control device of a motor vehicle according to claim 1, a computer program product according to independent claim 9 and a motor vehicle according to independent claim 11. Further refinements and developments are the subject of the dependent claims.
  • A method for training at least one algorithm for a control device of a motor vehicle is described below, the control device being provided for implementing an autonomous driving function with intervention in units of the motor vehicle on the basis of input data using the at least one algorithm, the algorithm being trained by a self-learning neural network, the method comprising the following steps:
  • e) (i) if the quality in step d) is worse than the first quality measure, the method is continued from step c), or
  • (ii) if the quality in step d) is better than the first quality measure and worse than the second quality measure, the method is continued from step d).
  • In this way, an algorithm for an autonomous driving function that is developed by a self-learning neural network can be developed faster and more reliably than with conventional methods.
  • In the secure virtual environment, the algorithm can reach a certain level of maturity before, in a next step, the self-learning neural network adapts the algorithm to the more complex situation introduced by the real motor vehicle.
  • The increased complexity results, for example, from the variance of sensor input signals from real sensors, delays in the signal chain, temperature dependencies and similar phenomena.
  • By introducing the quality measure against which the determined metric is assessed, a long learning process can be avoided if the algorithm proves unsuitable at the higher reality level in step d): the method is reset to the less complex full simulation in step c) and the algorithm is developed further there.
  • Corresponding metrics can be, for example, the average number of accidents per route, the number of hazardous situations per route, the number of violations of traffic rules per route, etc.
  • A quality can be determined from the metrics and assessed against quality measures. Stricter quality measures mean, for example, fewer accidents per route, fewer hazardous situations per route, etc. The training is only continued in the next stage if the quality measure is met. This prevents unstable algorithms from causing long learning times, and a higher-quality algorithm can be obtained earlier.
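  • The relationship between the metrics, the quality function and the quality measures can be illustrated with a short Python sketch. This is only an illustrative example, not the claimed implementation; the metric names, weights and threshold values are assumptions.

```python
# Illustrative sketch only (assumed metric names, weights and thresholds):
# a quality function G(M) aggregates per-route metrics and is compared against
# a quality measure; training only advances to the next stage if the measure is met.
from dataclasses import dataclass

@dataclass
class Metrics:
    accidents_per_km: float
    hazardous_situations_per_km: float
    rule_violations_per_km: float

def quality(m: Metrics) -> float:
    # Lower metric values are better, so the quality is a negative weighted sum.
    return -(10.0 * m.accidents_per_km
             + 2.0 * m.hazardous_situations_per_km
             + 1.0 * m.rule_violations_per_km)

G1 = -1.0   # first quality measure
G2 = -0.5   # second quality measure, stricter than G1

def meets_measure(m: Metrics, quality_measure: float) -> bool:
    return quality(m) >= quality_measure
```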
  • If the quality in step f) is worse than the second quality measure, the method is continued from step e).
  • In a next step, the algorithm can be developed further by the self-learning neural network in a mixed-real environment in which the risk to road users is minimized.
  • The learning process can also be accelerated here by checking the quality against the quality measure and, if necessary, returning to an earlier stage in the development of the algorithm.
  • Another possible further development provides that h) a simulation of traffic situations relevant to the autonomous driving function in a real environment and a training of the self-learning neural network by simulating critical scenarios and determining the quality are carried out until a fourth quality measure is met, the fourth quality measure being stricter than the third quality measure, and
  • i) if the quality in step h) is worse than the third quality measure, the method is continued from step g), or, if the quality in step h) is worse than the second quality measure, the method is continued from step e).
  • In a next step, the algorithm can be developed further by the self-learning neural network in a real environment. At this point it can be assumed that the algorithm is already stable enough that road safety is no longer at risk.
  • The learning process can likewise be accelerated by checking the quality and, if necessary, returning to an earlier stage in the development of the algorithm.
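  • The staged progression with fallback to an earlier stage can be sketched as a simple loop. The stage numbering and the training and evaluation interfaces below are assumptions for illustration, not the claimed method.

```python
# Illustrative sketch of staged training with fallback (assumed interfaces).
# Stages: 0 = full simulation, 1 = real vehicle in a virtual environment,
# 2 = mixed-real environment, 3 = real environment.
def train_staged(train_one_round, evaluate, measures):
    """measures: increasingly strict quality measures [G1, G2, G3, G4]."""
    stage = 0
    while True:
        train_one_round(stage)          # train the self-learning network at this stage
        g = evaluate(stage)             # quality G(M) determined from the metrics
        # fall back while the quality is worse than an earlier, less strict measure
        while stage > 0 and g < measures[stage - 1]:
            stage -= 1
        # advance (or release) once the current stage's measure is met
        if g >= measures[stage]:
            if stage == len(measures) - 1:
                return "released for use in road traffic"
            stage += 1
```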
  • Another possible further embodiment provides that if the metric fulfills the fourth quality measure, the computer program product module is released for use in road traffic.
  • Another possible further embodiment provides that method steps f) and / or h) are carried out by safety drivers.
  • The metric comprises a measure of accidents per route unit and/or time-to-collision and/or time-to-brake and/or required deceleration. Corresponding metrics are easy to determine.
  • The neural network learns according to the “reinforcement learning” method.
  • Reinforcement learning stands for a family of machine learning methods in which an agent, here the self-learning neural network, independently learns a strategy to maximize the rewards it receives.
  • The agent is not told which action is best in which situation, but receives a reward at certain times, which can also be negative.
  • Based on these rewards, the agent approximates a utility function that describes the value of a particular state or action.
  • In this way, the self-learning neural network can continuously develop the algorithm further.
  • Another possible further development provides that the neural network tries out variations to the existing algorithm at random.
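  • A minimal way to picture this random variation is a hill-climbing loop over the algorithm's weights that keeps only those variations which increase the received reward. This is a simplified stand-in for the reinforcement learning described above, not the claimed training procedure; the perturbation scale and the rollout interface are assumptions.

```python
# Simplified illustration: try random variations of the weights and keep a
# variation only if the (possibly negative) reward obtained in a rollout improves.
import numpy as np

def improve_by_random_variation(weights, rollout_reward, sigma=0.05, trials=100, seed=0):
    """weights: parameter array of the algorithm; rollout_reward: callable running
    a (simulated) driving scenario and returning the accumulated reward."""
    rng = np.random.default_rng(seed)
    best_w = np.asarray(weights, dtype=float)
    best_r = rollout_reward(best_w)
    for _ in range(trials):
        candidate = best_w + sigma * rng.standard_normal(best_w.shape)
        r = rollout_reward(candidate)
        if r > best_r:                  # keep the variation only if it was rewarded
            best_w, best_r = candidate, r
    return best_w, best_r
```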
  • A first independent subject relates to a device for training at least one algorithm for a control device of a motor vehicle, the control device being provided for implementing an autonomous driving function by intervening in aggregates of the motor vehicle on the basis of input data using the at least one algorithm, the algorithm being trained by a self-learning neural network, the device being set up to carry out the following steps: a) providing a computer program product module for the autonomous driving function, the computer program product module containing the algorithm to be trained and the self-learning neural network;
  • embedding the computer program product module in a simulation environment to simulate at least one traffic situation relevant to the autonomous driving function, the simulation environment being based on map data of a real environment and on a digital vehicle model of the motor vehicle, as well as training the self-learning neural network by simulating critical scenarios and determining a quality, the quality being a result of a quality function that depends on at least one metric, until a first quality measure is met;
  • e) (i) if the quality in step d) is worse than the first quality measure, the method is continued from step c), or
  • (ii) if the quality in step d) is better than the first quality measure and worse than the second quality measure, the method is continued from step d);
  • if the quality in step f) is worse than the second quality measure, the method is continued from step e);
  • h) a simulation of traffic situations relevant to the autonomous driving function in a real environment and a training of the self-learning neural network by simulating critical scenarios and determining the quality are undertaken until a fourth quality measure is met, the fourth quality measure being stricter than the third quality measure, whereby, if the quality in step h) is worse than the third quality measure, the method is continued from step g), or, if the quality in step h) is worse than the second quality measure, the method is continued from step e).
  • Another possible further embodiment provides that the device is furthermore set up to release the computer program product module for use in road traffic if the quality meets the fourth quality measure.
  • Another possible further embodiment provides that the device is set up so that method steps f) and/or h) can be carried out by safety drivers.
  • The device is set up to use a measure of accidents per route unit and/or time-to-collision and/or time-to-brake and/or required deceleration as a metric.
  • The neural network is set up to learn according to the “reinforcement learning” method.
  • Another possible further embodiment provides that the neural network is set up to try out variations to the existing algorithm at random.
  • Another independent subject relates to a computer program product with a computer-readable storage medium on which instructions are embedded which, when executed by a computing unit, cause the computing unit to carry out the method according to one of the preceding claims.
  • A first further embodiment of the computer program product provides that the instructions comprise the computer program product module of the type described above.
  • Another independent object relates to a motor vehicle with a computing unit and a computer-readable storage medium, a computer program product of the type described above being stored on the storage medium.
  • A first further embodiment provides that the computing unit is part of the control unit.
  • Another further embodiment provides that the computing unit is networked with environmental sensors.
  • Fig. 1 shows a motor vehicle which is set up for autonomous driving,
  • Fig. 2 shows a computer program product for the motor vehicle from Fig. 1, and
  • Fig. 3 is a flowchart of the method.
  • FIG. 1 shows a motor vehicle 2 which is set up for autonomous driving.
  • The motor vehicle 2 has a motor vehicle control unit 4 with a computing unit 6 and a memory 8.
  • A computer program product is stored in the memory 8 and is described in more detail below, in particular in the context of FIGS. 2 and 3.
  • The motor vehicle control unit 4 is connected, on the one hand, to a number of environmental sensors, which allow the current position of the motor vehicle 2 and the respective traffic situation to be detected. These include environmental sensors 10, 12 on the front of motor vehicle 2, environmental sensors 14, 16 on the rear of motor vehicle 2, a camera 18 and a GPS module 20. Depending on the configuration, further sensors can be provided, for example wheel speed sensors, acceleration sensors etc., which are connected to the motor vehicle control unit 4.
  • The computing unit 6 has loaded the computer program product stored in memory 8 and is executing it. On the basis of an algorithm and the input signals, the computing unit 6 decides on the control of the motor vehicle 2, which it can achieve by intervening in the steering 22, engine control 24 and brakes 26, each of which is connected to the motor vehicle control unit 4.
  • FIG. 2 shows a computer program product 28 with a computer program product module 30.
  • The computer program product module 30 has a self-learning neural network 32 that trains an algorithm 34.
  • The self-learning neural network 32 learns according to methods of reinforcement learning, i.e. by varying the algorithm 34, the neural network 32 tries to obtain rewards for improved behavior according to one or more criteria or measures, that is to say for improvements in the algorithm 34.
  • The algorithm 34 can essentially consist of a complex filter with a matrix of values, often called weights, which define a filter function that determines the behavior of the algorithm 34 as a function of input variables, which in the present case are recorded by the environmental sensors 10 to 20, and generates control signals for controlling the motor vehicle 2.
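  • The idea of such a weight-matrix filter can be sketched very compactly; the dimensions, the tanh squashing and the class name FilterPolicy below are assumptions for illustration and are not taken from the description.

```python
# Minimal sketch of the "complex filter" idea: a matrix of weights maps the
# sensor input variables to control signals for steering, engine control and brakes.
import numpy as np

N_INPUTS, N_OUTPUTS = 16, 3   # assumed: 16 sensor features -> 3 actuator commands

class FilterPolicy:
    def __init__(self, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((N_OUTPUTS, N_INPUTS))  # trainable weights

    def __call__(self, sensor_inputs: np.ndarray) -> np.ndarray:
        # bounded control signals, e.g. for steering 22, engine control 24, brakes 26
        return np.tanh(self.W @ sensor_inputs)
```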
  • The quality of the algorithm 34 is monitored by a further computer program product module 36, which monitors input variables and output variables, determines metrics from them, and uses the metrics to check whether the quality is maintained.
  • The computer program product module 36 can give negative and positive rewards to the neural network 32.
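  • One way to picture how such a monitoring module could turn observed events into positive and negative rewards is sketched below; the event names and reward values are assumptions, not part of the description.

```python
# Illustrative sketch: derive a (possibly negative) reward from monitored events.
def reward_from_events(events: dict) -> float:
    reward = 0.0
    reward -= 100.0 * events.get("collisions", 0)             # strongly penalize accidents
    reward -= 10.0 * events.get("rule_violations", 0)          # penalize traffic-rule violations
    reward -= 1.0 * events.get("hazardous_situations", 0)      # penalize near misses
    reward += 0.1 * events.get("kilometres_completed", 0.0)    # small positive reward per km
    return reward
```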
  • FIG. 3 shows a flow chart of the method.
  • The computer program product module and a learning environment are provided.
  • Both the motor vehicle as a model and the environment are provided virtually.
  • The model of the motor vehicle corresponds to the later real motor vehicle in terms of its parameters, sensors, driving characteristics and behavior.
  • The model of the environment is based on map data of a real environment in order to make the model as realistic as possible.
  • The quality results from a quality function G(M), which is a function of at least one metric M.
  • A corresponding metric M can be a measure such as accidents per route unit and/or time-to-collision and/or time-to-brake, and/or can include similar measured variables, for example required decelerations, lateral acceleration, falling below safety margins, violations of applicable traffic regulations, etc.
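  • Some of these measured variables can be computed with elementary kinematics; the helpers below are only an illustration under simplifying assumptions (constant speeds, straight-line following) and are not taken from the description.

```python
# Illustrative metric helpers (simplified kinematics assumed).
def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Time until collision if both vehicles keep their current speeds."""
    return float("inf") if closing_speed_mps <= 0 else gap_m / closing_speed_mps

def required_deceleration(gap_m: float, ego_speed_mps: float, lead_speed_mps: float) -> float:
    """Constant deceleration needed to avoid closing the gap to a constant-speed lead vehicle."""
    dv = ego_speed_mps - lead_speed_mps
    return 0.0 if dv <= 0 else dv * dv / (2.0 * gap_m)
```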
  • The training takes place using a real motor vehicle in a virtual environment.
  • The algorithm 34 can be developed further so that it can take into account the behavior of the real motor vehicle 2. Differences can arise, for example, from the use of real sensors, which can have different signal levels, noise, etc.
  • The quality function G(M) is always monitored during the training.
  • The aim is for the quality G(M) to be better than a second quality measure G2.
  • The second quality measure G2 is stricter than the first quality measure G1.
  • It may occur that the quality G(M) falls below the first quality measure G1. In this case, the system switches back to the purely virtual environment and the training is continued there until the algorithm 34 again exceeds the first quality measure G1, whereupon the training with the real motor vehicle 2 is continued.
  • The method is then reset to the previous training step. If the quality function even falls below the threshold value of the first quality measure G1, the method is reset to the initial training step.
  • The algorithm 34 can then be released for use in public road traffic.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Theoretical Computer Science (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Traffic Control Systems (AREA)
  • Feedback Control In General (AREA)

Abstract

Method for training at least one algorithm for a control device of a motor vehicle for implementing an autonomous driving function, the algorithm being trained by a self-learning neural network, comprising the following steps: a) providing a computer program module for the autonomous driving function, the computer program module containing the algorithm to be trained and the self-learning neural network; b) providing at least one metric and a reward function; c) integrating the computer program module into a simulation environment for simulating at least one relevant traffic situation, and training the self-learning neural network by simulating critical scenarios and determining the metric (M) until a quality measure (G1) is met; d) integrating the trained computer program module into the control device of the motor vehicle for simulating relevant traffic situations and training the self-learning neural network by simulating critical scenarios and determining the metric (M) until a second quality measure is met; e) (i) if the metric (M) from step d) is worse than the first quality measure (G1), the method is continued from step c), or (ii) if the metric (M) from step d) is better than the first quality measure (G1) and worse than the second quality measure (G2), the method is continued from step d).
EP19800939.1A 2018-12-03 2019-10-24 Procédé pour l'entraînement d'au moins un algorithme pour un appareil de commande d'un véhicule automobile, produit de programme informatique ainsi que véhicule automobile Pending EP3891664A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102018220865.4A DE102018220865B4 (de) 2018-12-03 2018-12-03 Verfahren zum Trainieren wenigstens eines Algorithmus für ein Steuergerät eines Kraftfahrzeugs, Computerprogrammprodukt sowie Kraftfahrzeug
PCT/EP2019/078978 WO2020114674A1 (fr) 2018-12-03 2019-10-24 Procédé pour l'entraînement d'au moins un algorithme pour un appareil de commande d'un véhicule automobile, produit de programme informatique ainsi que véhicule automobile

Publications (1)

Publication Number Publication Date
EP3891664A1 (fr) 2021-10-13

Family

ID=68501579

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19800939.1A Pending EP3891664A1 (fr) 2018-12-03 2019-10-24 Procédé pour l'entraînement d'au moins un algorithme pour un appareil de commande d'un véhicule automobile, produit de programme informatique ainsi que véhicule automobile

Country Status (6)

Country Link
US (1) US20220009510A1 (fr)
EP (1) EP3891664A1 (fr)
CN (1) CN113168570A (fr)
DE (1) DE102018220865B4 (fr)
MA (1) MA54363A (fr)
WO (1) WO2020114674A1 (fr)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3116634B1 (fr) * 2020-11-23 2022-12-09 Commissariat Energie Atomique Dispositif apprenant pour système cyber-physique mobile
DE102021202083A1 (de) * 2021-03-04 2022-09-08 Psa Automobiles Sa Computerimplementiertes Verfahren zum Trainieren wenigstens eines Algorithmus für eine Steuereinheit eines Kraftfahrzeugs, Computerprogrammprodukt, Steuereinheit sowie Kraftfahrzeug
US11745750B2 (en) * 2021-10-19 2023-09-05 Cyngn, Inc. System and method of large-scale automatic grading in autonomous driving using a domain-specific language
DE102022204295A1 (de) 2022-05-02 2023-11-02 Robert Bosch Gesellschaft mit beschränkter Haftung Verfahren zum Trainieren und Betreiben eines Transformationsmoduls zur Vorverarbeitung von Eingaberecords zu Zwischenprodukten
WO2023247767A1 (fr) * 2022-06-23 2023-12-28 Deepmind Technologies Limited Simulation d'installations industrielles pour la commande
DE102022208519A1 (de) 2022-08-17 2024-02-22 STTech GmbH Computerimplementiertes Verfahren und Computerprogramm zur Bewegungsplanung eines Ego-Fahrsystems in einer Verkehrssituation, computerimplementiertes Verfahren zur Bewegungsplanung eines Ego-Fahrsystems in einer realen Verkehrssituation Steuergerät für ein Ego-Fahrzeug
DE102022132912A1 (de) 2022-12-12 2024-06-13 Dr. Ing. H.C. F. Porsche Aktiengesellschaft Computerimplementiertes Verfahren zur Anpassung realer Parameter eines realen Sensorsystems
DE102022132917A1 (de) 2022-12-12 2024-06-13 Dr. Ing. H.C. F. Porsche Aktiengesellschaft Verfahren und System zur Bestimmung der Kritikalität und Kontrollierbarkeit von Szenarien für automatisierte Fahrfunktionen
DE102023200314A1 (de) 2023-01-17 2024-07-18 Stellantis Auto Sas Erzeugung maschinenlesbarer Szenariobeschreibungen aus menschlichen Beschreibungen
US20240330674A1 (en) 2023-03-27 2024-10-03 Dspace Gmbh Virtual training method for a neural network for actuating a technical device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102015007493B4 (de) 2015-06-11 2021-02-25 Audi Ag Verfahren zum Trainieren eines in einem Kraftfahrzeug eingesetzten Entscheidungsalgorithmus und Kraftfahrzeug
WO2017019555A1 (fr) * 2015-07-24 2017-02-02 Google Inc. Commande continue avec apprentissage par renforcement profond
CN105654808A (zh) * 2016-02-03 2016-06-08 北京易驾佳信息科技有限公司 一种基于实际机动车的机动车驾驶人智能化训练系统
US10521677B2 (en) * 2016-07-14 2019-12-31 Ford Global Technologies, Llc Virtual sensor-data-generation system and method supporting development of vision-based rain-detection algorithms
CN107862346B (zh) * 2017-12-01 2020-06-30 驭势科技(北京)有限公司 一种进行驾驶策略模型训练的方法与设备
US11613249B2 (en) * 2018-04-03 2023-03-28 Ford Global Technologies, Llc Automatic navigation using deep reinforcement learning
CN108920805B (zh) * 2018-06-25 2022-04-05 大连大学 具有状态特征提取功能的驾驶员行为建模系统

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
ALEX KENDALL ET AL: "Learning to Drive in a Day", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 2 July 2018 (2018-07-02), XP081197602 *
CUTLER MARK ET AL: "Autonomous drifting using simulation-aided reinforcement learning", 2016 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), IEEE, 16 May 2016 (2016-05-16), pages 5442 - 5448, XP032908826, DOI: 10.1109/ICRA.2016.7487756 *
DAVID ISELE ET AL: "Transferring Autonomous Driving Knowledge on Simulated and Real Intersections", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 30 November 2017 (2017-11-30), XP081298898 *
FAYJIE ABDUR R ET AL: "Driverless Car: Autonomous Driving Using Deep Reinforcement Learning in Urban Environment", 2018 15TH INTERNATIONAL CONFERENCE ON UBIQUITOUS ROBOTS (UR), IEEE, 26 June 2018 (2018-06-26), pages 896 - 901, XP033391036, DOI: 10.1109/URAI.2018.8441797 *
HAOYANG FAN ET AL: "An Auto-tuning Framework for Autonomous Vehicles", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 15 August 2018 (2018-08-15), XP080907856 *
OKUYAMA TAKAFUMI ET AL: "Autonomous Driving System based on Deep Q Learnig", 2018 INTERNATIONAL CONFERENCE ON INTELLIGENT AUTONOMOUS SYSTEMS (ICOIAS), IEEE, 1 March 2018 (2018-03-01), pages 201 - 205, XP033421432, ISBN: 978-1-5386-6329-5, [retrieved on 20181016], DOI: 10.1109/ICOIAS.2018.8494053 *
See also references of WO2020114674A1 *
WOLF PETER ET AL: "Learning how to drive in a real world simulation with deep Q-Networks", 2017 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), IEEE, 11 June 2017 (2017-06-11), pages 244 - 250, XP033133715, DOI: 10.1109/IVS.2017.7995727 *
XINLEI PAN ET AL: "Virtual to Real Reinforcement Learning for Autonomous Driving", PROCEDINGS OF THE BRITISH MACHINE VISION CONFERENCE 2017, 26 September 2017 (2017-09-26), XP055610078, ISBN: 978-1-901725-60-5, DOI: 10.5244/C.31.11 *

Also Published As

Publication number Publication date
DE102018220865B4 (de) 2020-11-05
CN113168570A (zh) 2021-07-23
WO2020114674A1 (fr) 2020-06-11
MA54363A (fr) 2022-03-09
DE102018220865A1 (de) 2020-06-18
US20220009510A1 (en) 2022-01-13

Similar Documents

Publication Publication Date Title
EP3891664A1 (fr) Procédé pour l'entraînement d'au moins un algorithme pour un appareil de commande d'un véhicule automobile, produit de programme informatique ainsi que véhicule automobile
EP3970077B1 (fr) Procédé pour l'entraînement d'au moins un algorithme pour un appareil de commande d'un véhicule automobile, produit de programme informatique, véhicule automobile ainsi que système
EP4052178A1 (fr) Procédé d'apprentissage d'au moins un algorithme pour un dispositif de commande d'un véhicule automobile, produit programme informatique et véhicule automobile
DE102019203712B4 (de) Verfahren zum Trainieren wenigstens eines Algorithmus für ein Steuergerät eines Kraftfahrzeugs, Computerprogrammprodukt, Kraftfahrzeug sowie System
DE102006044086A1 (de) System und Verfahren zur Simulation von Verkehrssituationen, insbesondere unfallkritischen Gefahrensituationen, sowie ein Fahrsimulator
AT523834B1 (de) Verfahren und System zum Testen eines Fahrerassistenzsystems
DE102016224291A1 (de) Verfahren zur rechnergestützten Adaption eines vorgegebenen teilautomatisierten Fahrsystems eines Kraftfahrzeugs
WO2021115918A1 (fr) Procédé de création d'un algorithme d'usager de la route permettant la simulation informatique d'usagers de la route, procédé de formation d'au moins un algorithme pour une unité de commande d'un véhicule automobile, produit programme informatique et véhicule automobile
DE102021004426A1 (de) Verfahren zum Trainieren einer autonomen Fahrfunktion
DE102013200116A1 (de) Verfahren zum Entwickeln und/oder Testen eines Fahrerassistenzsystems
EP4111438A1 (fr) Procédé d'apprentissage d'au moins un algorithme pour un dispositif de commande d'un véhicule automobile, produit de programme informatique et véhicule automobile
DE202013010566U1 (de) Fahrerassistenzsystem für ein Kraftfahrzeug
WO2018134026A1 (fr) Procédé de navigation d'un véhicule automobile le long d'un itinéraire pouvant être prédéfini
WO2019206513A1 (fr) Procédé d'aide à une manœuvre de conduite d'un véhicule, dispositif, programme informatique et produit-programme d'ordinateur
WO2022077042A1 (fr) Dispositif et système pour tester un système d'aide à la conduite pour un véhicule
DE102014201769A1 (de) Verfahren zur Bestimmung einer Fahrbahnsteigung
DE102020201931A1 (de) Verfahren zum Trainieren wenigstens eines Algorithmus für ein Steuergerät eines Kraftfahrzeugs, Verfahren zur Optimierung eines Verkehrsflusses in einer Region, Computerprogrammprodukt sowie Kraftfahrzeug
DE102019213797A1 (de) Verfahren zur Bewertung einer Sequenz von Repräsentationen zumindest eines Szenarios
DE102007050254A1 (de) Verfahren zum Herstellen eines Kollisionsschutzsystems für ein Kraftfahrzeug
WO2022184363A1 (fr) Procédé mis en œuvre par ordinateur pour l'entrainement d'au moins un algorithme pour une unité de commande d'un véhicule à moteur, produit programme d'ordinateur, unité de commande et véhicule à moteur
DE102017221971A1 (de) Verfahren zur Anpassung eines Fahrzeugregelsystems
DE102019212830A1 (de) Analyse und Validierung eines neuronalen Netzes für ein Fahrzeug
DE112020007528T5 (de) Vorrichtung und Verfahren zur Fahrüberwachung
WO2023275401A1 (fr) Simulation d'usagers de la route avec des émotions
DE102019128115A1 (de) Fahrzeugmodell für Längsdynamik

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20210617

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAX Request for extension of the european patent (deleted)
RAV Requested validation state of the european patent: fee paid

Extension state: MA

Effective date: 20210617

RAP3 Party data changed (applicant data changed or rights of an application transferred)

Owner name: STELLANTIS AUTO SAS

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20240617

17Q First examination report despatched

Effective date: 20240625