WO2022184363A1 - Computer-implemented method for training at least one algorithm for a control unit of a motor vehicle, computer program product, control unit and motor vehicle - Google Patents
Computer-implemented method for training at least one algorithm for a control unit of a motor vehicle, computer program product, control unit and motor vehicle
- Publication number
- WO2022184363A1 WO2022184363A1 PCT/EP2022/052455 EP2022052455W WO2022184363A1 WO 2022184363 A1 WO2022184363 A1 WO 2022184363A1 EP 2022052455 W EP2022052455 W EP 2022052455W WO 2022184363 A1 WO2022184363 A1 WO 2022184363A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- motor vehicle
- accident
- algorithm
- computer
- parameters
- Prior art date
Links
- 238000004422 calculation algorithm Methods 0.000 title claims abstract description 66
- 238000000034 method Methods 0.000 title claims abstract description 48
- 238000012549 training Methods 0.000 title claims abstract description 17
- 238000004590 computer program Methods 0.000 title claims description 32
- 238000004088 simulation Methods 0.000 claims abstract description 50
- 238000013528 artificial neural network Methods 0.000 claims abstract description 24
- 238000005457 optimization Methods 0.000 claims abstract description 16
- 230000007613 environmental effect Effects 0.000 claims description 36
- 238000003860 storage Methods 0.000 claims description 7
- 238000004364 calculation method Methods 0.000 claims description 4
- 230000001419 dependent effect Effects 0.000 claims description 3
- 230000010076 replication Effects 0.000 claims 1
- 230000006399 behavior Effects 0.000 description 20
- 230000006870 function Effects 0.000 description 19
- 238000011161 development Methods 0.000 description 9
- 230000018109 developmental process Effects 0.000 description 9
- 230000002787 reinforcement Effects 0.000 description 5
- 230000001133 acceleration Effects 0.000 description 3
- 238000012545 processing Methods 0.000 description 3
- 238000010200 validation analysis Methods 0.000 description 3
- 238000013473 artificial intelligence Methods 0.000 description 2
- 238000001514 detection method Methods 0.000 description 2
- 238000010801 machine learning Methods 0.000 description 2
- 238000013459 approach Methods 0.000 description 1
- 238000000429 assembly Methods 0.000 description 1
- 230000000712 assembly Effects 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 230000000052 comparative effect Effects 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 238000007796 conventional method Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 230000003203 everyday effect Effects 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 238000012432 intermediate storage Methods 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Definitions
- a computer-implemented method for training at least one algorithm for a control unit of a motor vehicle, a computer program product, a control unit and a motor vehicle are described here.
- the first semi-automated vehicles (corresponding to SAE Level 2 according to SAE J3016) have reached series maturity in recent years.
- the state of the art therefore resorts to, among other things, scenario-based simulations with the help of a so-called digital twin.
- the digital twin, i.e. a simulation of the vehicle model to be trained, contains the algorithm to be trained, which is trained in such a way that the digital twin in the simulation essentially imitates the driving behavior of the real motor vehicle.
- Relevant scenarios are used for training in order to examine the behavior of the automated vehicle model to be tested (“system under test” or “SuT”).
- DE 102006044086 A1 discloses a system for the virtual simulation of traffic situations, with a display unit for graphically displaying the virtual traffic situations and an input unit, and with at least one modeled reference vehicle and at least one other modeled road user, with the reference vehicle being a modeled one
- the task is therefore to develop methods for training at least one algorithm for a control unit of a motor vehicle, computer program products and motor vehicles of the type mentioned at the outset such that critical situations can be better simulated when training the algorithm.
- the object is achieved by a method for training at least one algorithm for a control unit of a motor vehicle according to claim 1, a computer program product according to the independent claim 9, a control unit according to the independent claim 12, and a motor vehicle according to the independent claim 13. Further refinements and developments are the subject of the dependent claims.
- An autonomous driving function takes over the control of a motor vehicle in whole or in part by detecting the surroundings of the motor vehicle, deriving a sensible behavior of the motor vehicle based on the situation, and controlling the motor vehicle according to the planned behavior by intervening in the steering, accelerator, brakes and/or other units, e.g. lights, indicators etc.
- there is a general desire, particularly in mixed autonomous/non-autonomous traffic, to imitate the behavior of natural drivers so that unpredictable behavior does not cause any safety risks.
- the driving function can access navigation data.
- A corresponding autonomous driving function can be a traffic jam assistant, for example, which automatically steers the motor vehicle in heavy traffic at slow speeds.
- current data relating to the environment can be considered as input data, for example data obtained by means of various sensors.
- sensors can include, for example, cameras, radar, lidar and/or ultrasonic sensors, but also other sensors such as position sensors, e.g. GPS, magnetic field detecting sensors and the like.
- route planning data, which are obtained from a navigation destination, for example, and possibly traffic data that determine the traffic flow on the route, can also be considered.
- Other data can be communication data, for example, obtained from car-to-car or car-to-infrastructure systems, e.g. on traffic light phases or similar.
- the self-learning neural network can be based on various learning principles, in particular it can use methods of reinforcement learning.
- Reinforcement learning stands for a set of methods of machine learning in which the neural network autonomously learns a strategy in order to maximize the rewards received.
- the neural network is not told which action is the best in which situation, but receives a reward at certain points in time, which can also be negative. Using these rewards, it approximates a utility function that describes the value of a particular state or action.
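The reward-driven approximation of a utility function described above can be illustrated with a minimal, hypothetical sketch. Tabular Q-learning stands in here for the neural network of the patent; the states, actions, rewards, learning rate and discount factor are all invented for illustration:

```python
# Minimal illustration of reward-driven utility learning (tabular
# Q-learning as a stand-in for the self-learning neural network).
# States, actions, rewards and hyperparameters are illustrative assumptions.

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One temporal-difference update: move Q(s, a) toward the observed
    reward plus the discounted value of the best next action."""
    best_next = max(q[next_state].values())
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
    return q

# Two states, two actions, all utilities initially unknown (zero).
q = {s: {"brake": 0.0, "steer": 0.0} for s in ("safe", "critical")}
# A negative reward for ending up in the critical state teaches avoidance.
q = q_update(q, "safe", "steer", -1.0, "critical")
```

After the single update, the utility of steering from the safe state has dropped below zero, so a greedy choice would now prefer braking.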
- One way of achieving driving behavior that is as natural as possible is to define a virtual twin of a real motor vehicle with a real driver in a simulation environment that is modeled on a real environment.
- the real driver and the virtual twin are given identical driving tasks, with the real journey serving as the reference.
- the task of the virtual twin is to emulate the driving behavior of the real driver.
- a deviation between the trajectory of the real driver and the trajectory of the virtual twin is suitable as a reward criterion.
- the self-learning neural network is given the task of minimizing the deviation. In this way, the self-learning neural network imitates the behavior of real drivers. With regard to critical situations or even accidents, however, this is not so easy, since it is ethically unacceptable to deliberately put real drivers in dangerous situations or accident situations.
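A sketch of such a deviation-based reward might look as follows; the trajectory representation (lists of 2D points) and the mean-distance metric are assumptions for illustration, not taken from the patent:

```python
import math

def trajectory_deviation(real, twin):
    """Mean Euclidean distance between corresponding trajectory points."""
    assert len(real) == len(twin)
    return sum(math.dist(a, b) for a, b in zip(real, twin)) / len(real)

def reward(real, twin):
    """Negative deviation: the closer the virtual twin follows the real
    driver's trajectory, the higher the reward handed to the network."""
    return -trajectory_deviation(real, twin)

# Illustrative trajectories: the twin drifts 0.5 m off at the middle point.
real = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
twin = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.0)]
```

Minimizing the deviation and maximizing this reward are then the same objective.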
- a point in time immediately before the occurrence of the critical situation or the accident can be selected as the starting time t_n, since there is often no longer any reliable data available after a critical situation or accident.
- process steps can be dependent on an increment size
- Deviations from the previous step as well as deviations over several iteration stages, e.g. between the start time and the target time, can be considered as deviations.
- Several deviations can be measured, e.g. distances, speeds or times.
- the driving parameters and/or the environmental parameters comprise variable driving parameters and/or variable environmental parameters, with at least one of the variable driving parameters and/or environmental parameters being varied and the simulation being repeated with the varied parameters.
- Specified parameters - such parameters have a value that is specified for the critical driving situation or accident. These can be, for example, all values of the traffic infrastructure such as the course and width of the road and the like, or also a friction coefficient of the road, if this is known.
- Variable environmental parameters - value ranges can be specified here, but their concrete values are fixed before the actual start of the simulation. This can be done, for example, by design of experiments. Examples would also be the friction coefficient of the road, other road users such as pedestrians, drivers and the like, but also the position of the sun, sensor status, etc.
- Design parameters - value ranges can be specified here. These value ranges are varied by the (numerical) optimization algorithm in the simulation phase to reach the target point. They include, for example, trajectories of different road users.
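The three parameter categories above can be sketched as a small data structure; the field names and value ranges are hypothetical, chosen only to mirror the text (specified values fixed by the accident record, environmental values drawn once before a run, design ranges left to the optimizer):

```python
from dataclasses import dataclass, field
import random

@dataclass
class ScenarioParameters:
    # Fixed by the recorded critical situation or accident.
    specified: dict = field(default_factory=lambda: {"road_width_m": 3.5})
    # Ranges whose concrete values are fixed before the simulation starts.
    variable_env_ranges: dict = field(
        default_factory=lambda: {"friction": (0.4, 1.0)})
    # Ranges varied by the optimization algorithm during the simulation.
    design_ranges: dict = field(
        default_factory=lambda: {"target_speed_mps": (8.0, 14.0)})

    def draw_env(self, rng):
        """Fix concrete environmental values before the simulation run."""
        return {k: rng.uniform(*r) for k, r in self.variable_env_ranges.items()}

p = ScenarioParameters()
env = p.draw_env(random.Random(0))  # seeded for reproducible experiments
</ ```

A design-of-experiments step would call `draw_env` once per planned simulation run, while the design ranges stay open for the optimizer.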
- the optimization algorithm according to step g) varies the variable driving parameters and/or the variable environmental parameters between two iterations of the optimization.
- provision can be made for the at least one deviation to be evaluated, with step h) being carried out if the at least one deviation is smaller than the threshold value.
- the storage can be an intermediate storage. After saving, a further iteration step can be carried out or, after validation of the driving function, another situation can be selected, on which the at least one pre-trained algorithm is trained further.
- the at least one algorithm is used and trained in further simulations of the same critical driving situation or the accident or other critical driving situations or accidents.
- the at least one algorithm can be trained on many different critical situations or accident situations.
- the driving function for the selected critical driving situation or the accident is validated.
- if the at least one deviation from a start time (t_s) to the target time (t_n) is not below the threshold value, the driving function for the selected critical driving situation or the accident is validated.
- the at least one deviation becomes too large.
- the motor vehicle model can avoid the critical situation or the accident by driving differently than the human driver. This result can be used to validate the corresponding situation, since the neural network knows how to avoid the critical situation or the accident.
- a first independent subject relates to a device for training at least one algorithm for a control unit of an autonomously or semi-autonomously driving motor vehicle to implement an autonomous driving function by intervening in assemblies of the motor vehicle on the basis of input data using the at least one algorithm, a simulation environment being provided for training the at least one algorithm by a self-learning neural network using a motor vehicle model of the motor vehicle, the simulation environment being designed to carry out the following steps: a) providing at least one computer program product module for the autonomous driving function, the at least one computer program product module containing the motor vehicle model, the at least one algorithm to be trained, the self-learning neural network and the simulation environment; b) selecting a critical driving situation or an accident of the motor vehicle, a plurality of driving parameters of the motor vehicle and environmental parameters being determined for the critical driving situation or the accident; c) simulating the critical driving situation or the accident from step b) in the simulation environment and defining a target time (t_n), the target time (t_n) being comprised of the driving
- the simulation environment is set up to h) carry out steps d) to g) for a time period between (t_n-1) and an earlier time period (t_n-2).
- the simulation environment is set up to evaluate the at least one deviation, with step h) only being initiated if the at least one deviation is below a threshold value.
- the driving parameters and/or the environmental parameters comprise variable driving parameters and/or variable environmental parameters, with the simulation environment being set up to vary at least one of the variable driving parameters and/or environmental parameters and to carry out the simulation again with the varied parameters.
- the simulation environment is set up so that the optimization algorithm according to step g) varies the variable driving parameters and/or the variable environmental parameters between two iterations of the optimization.
- the simulation environment is set up to evaluate the at least one deviation, with step h) being carried out if the at least one deviation is smaller than the threshold value.
- the simulation environment is set up to store the at least one algorithm if the at least one deviation is below the threshold value.
- the simulation environment is set up to use and train the at least one algorithm in further simulations of the same critical driving situation or accident or of other critical driving situations or accidents.
- the simulation environment is set up to validate the driving function for the selected critical driving situation or the accident if the at least one deviation from a starting time (t_s) to the target time (t_n) is not below the threshold value.
- Another independent subject relates to a computer program product with a permanent, computer-readable storage medium on which instructions are embedded which, when executed by at least one computing unit, cause the at least one computing unit to carry out the method of the aforementioned type.
- the method can be distributed over one or more computing units, so that certain method steps are executed on one computing unit and other method steps are executed on at least one other computing unit, with calculated data being able to be transmitted between the computing units if necessary.
- the processing unit can be part of the control unit.
- the commands comprise the computer program product module of the type described above.
- Another independent subject relates to a control unit with a permanent, computer-readable storage medium, with a computer program product of the type described above being stored on the storage medium.
- Another independent subject relates to a motor vehicle with a control unit of the type described above.
- the computing unit is part of the control unit.
- provision can be made for the computing unit to be networked with environmental sensors.
- FIG. 1 shows a motor vehicle that is set up for automated or autonomous driving;
- FIG. 2 shows a computer program product for the motor vehicle from FIG. 1 ;
- FIG. 3 shows a simulation environment with the motor vehicle from FIG. 1, as well as
- Fig. 1 shows a motor vehicle 2, which is set up for automated or autonomous driving.
- the motor vehicle 2 has a control unit 4 with a computing unit 6 and a memory 8 .
- the memory 8 is a permanent memory whose data is not lost when the memory 8 is de-energized.
- a computer program product is stored in memory 8, which will be described in more detail below in connection with FIGS.
- the control unit 4 is connected on the one hand to a number of environmental sensors that allow the current position of the motor vehicle 2 and the respective traffic situation to be detected. These include environmental sensors 10, 11 on the front of the motor vehicle 2, environmental sensors 12, 13 at the rear of the motor vehicle 2, a camera 14 and a GPS module 16.
- the environmental sensors 10 to 13 can include radar, lidar and/or ultrasonic sensors, for example.
- sensors for detecting the state of the motor vehicle 2 are provided, including wheel speed sensors 16, acceleration sensors 18 and pedal sensors 20, which are connected to the control unit 4. With the help of these motor vehicle sensors 16, 18, 20, the current state of the motor vehicle 2 can be reliably detected.
- the computing unit 6 has loaded the computer program product stored in the memory 8 and executes it. On the basis of an algorithm and the input signals, the computing unit 6 decides on the control of the motor vehicle 2, which the computing unit 6 would achieve by intervening in the steering 22, engine control 24 and brakes 26, each of which is connected to the control unit 4.
- Data from the sensors 10 to 20 are continuously buffered in the memory 8 and discarded after a predetermined period of time so that these environmental data can be made available for further evaluation.
- the algorithm was trained according to the procedure described below.
- Fig. 2 shows a computer program product 28 with a computer program product module 30.
- the computer program product module 30 has a self-learning neural network 32 that trains an algorithm 34 .
- the self-learning neural network 32 learns using methods of reinforcement learning, i.e. the algorithm 34 tries, by varying the neural network 32, to obtain rewards for improved behavior according to one or more metrics or benchmarks, i.e. for improvements in the algorithm 34.
- known learning methods of supervised and unsupervised learning, as well as combinations of these learning methods can also be used.
- the neural network 32 can essentially be a matrix of values, typically called weights, that defines a complex filter function which determines the behavior of the algorithm 34 as a function of input variables (in the present case recorded via the environmental sensors 10 to 20) and generates control signals for controlling the motor vehicle 2.
- the algorithm presented here is a so-called deep neural network, which has at least one hidden layer in addition to an input layer and an output layer.
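Such a deep network with one hidden layer can be sketched in a few lines; the layer sizes, weights and the interpretation of inputs and output are illustrative assumptions, not values from the patent:

```python
import math

def forward(x, w_hidden, w_out):
    """Forward pass through one hidden layer: each weight row filters the
    inputs, a tanh nonlinearity is applied, and the output layer combines
    the hidden activations into control signals."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)))
              for row in w_hidden]
    return [sum(wi * hi for wi, hi in zip(row, hidden)) for row in w_out]

# Two inputs (e.g. distance and relative speed), three hidden units,
# one output (e.g. a steering command) -- all sizes chosen for illustration.
w_hidden = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
w_out = [[0.7, -0.5, 0.2]]
y = forward([1.0, 0.5], w_hidden, w_out)
```

Training then amounts to adjusting the entries of `w_hidden` and `w_out` so that the rewards described above increase.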
- the computer program product module 30 can be used both in the motor vehicle 2 and outside of the motor vehicle 2 . It is thus possible to train the computer program product module 30 both in a real environment and in a simulation environment in which a virtual twin of the motor vehicle 2 is trained.
- training begins in a simulation environment as this is safer than training in a real environment.
- the computer program product module 30 is configured to establish a metric to be improved.
- a metric can be the achievement of a specific target state, e.g. that of the real motor vehicle 2. If the metric has been fulfilled, e.g. if a deviation is smaller than a certain threshold value, the metric can be considered satisfied and the algorithm can be frozen or saved in this respect. Then it can either be optimized with regard to another metric and further trained using another mission, or the algorithm can be tested in a real environment.
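The freeze-on-threshold step might be sketched as follows; the function and variable names are hypothetical, and a real implementation would snapshot the network weights rather than a plain dictionary:

```python
import copy

def maybe_freeze(weights, deviation, threshold, store):
    """If the deviation metric is fulfilled, store a frozen snapshot of the
    current weights; otherwise signal that training should continue."""
    if deviation < threshold:
        store.append(copy.deepcopy(weights))  # frozen, immutable snapshot
        return True                           # metric fulfilled
    return False                              # keep training

store = []
frozen = maybe_freeze({"w": [0.1, 0.2]}, deviation=0.05,
                      threshold=0.1, store=store)
```

A frozen snapshot can then be trained further on another metric or mission without losing the validated state.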
- the computer program product module 30 has a driving function 31, e.g. an autonomous driving program, which, on the basis of input data (for example environmental data from the environmental sensors 10 to 13, the camera 14 and the GPS module 15, possibly environmental databases, and driving data from the sensors 16 to 18) and on the basis of a mission, for example reaching a certain destination, plans interventions in the steering 22, engine control 24 and brakes 26.
- a neural network 32 is provided, which is designed here as a deep neural network with at least one hidden layer and is part of an algorithm 34 that implements parts of the driving function 31.
- a simulation environment 36 has the motor vehicle model 2', which represents a virtual representation of the motor vehicle 2.
- the motor vehicle model 2' reproduces the motor vehicle 2 in terms of its driving characteristics.
- a physics model 38 is provided that simulates the dynamic behavior of the components of the simulation environment 36 on the basis of driving parameters pF, for example speed, cornering speed, acceleration, brake actuation, mass, etc., and environmental parameters pU, for example weather and road friction values.
- the parameters pF and pU are partly fixed and partly variable in increments or in stages
- an optimization algorithm 39 is provided, which varies the driving parameters pF and/or environmental parameters pU in order to influence the behavior of the motor vehicle model 2'.
- FIG. 3 shows a reproduction of the simulation environment 36 with a motor vehicle model 2', which is a virtual twin of the motor vehicle 2 from FIG.
- the model for the simulation environment 36 is an accident description in an accident database, e.g. GIDAS, in which an accident between two motor vehicles took place.
- Equipment traction control, electronic stability program, brake assistant, lane departure warning, tire pressure monitor, cruise control system
- a road 40 is shown to simulate the previously described accident, on which a further motor vehicle 42 is located next to the motor vehicle 2 at the edge of the road 40. The time at which motor vehicle 2 collides with motor vehicle 42 is shown.
- the method of reinforcement learning is used, as already described above, with the algorithm 34 being trained to imitate the behavior of the real motor vehicle 2 as precisely as possible.
- comparative driving data that the algorithm 34 is intended to imitate are instead generated using the data from the accident database.
- the known data on a trajectory 44 of the motor vehicle 2 are iteratively imitated in the simulation environment 36 and optimized as far as possible with the aid of the optimization algorithm 39, so that the motor vehicle model 2' emulates the known behavior of the motor vehicle 2 in the best possible way.
- the journey of the motor vehicle model 2' is then simulated using the physics model 38, initially from a time t_n-1 lying one time increment Δt before a target time t_n (the time of the collision) to the target time t_n, and the changeable driving parameters pF are varied until the simulated motor vehicle model 2' collides with the motor vehicle 42 at the same point, with the same values, except for a deviation 46 that is less than a predetermined threshold value S_46.
- an accident situation is selected from a database and entered into the simulation environment.
- the driving situation is simulated up to the target time t_n.
- the simulation of driving the motor vehicle model then begins, for one time increment duration Δt, from a time t_n-1 to the target time t_n.
- the location of the motor vehicle model 2' is compared with the data on the real motor vehicle 2. If it is not within an acceptable deviation 46, the optimization algorithm 39 is applied and the variable driving parameters pF are varied.
- the neural network 32 is then trained until the accident can be avoided.
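The iterative loop described in the last few bullets can be sketched abstractly. Here `simulate` and `vary` are placeholders standing in for the physics model 38 and the optimization algorithm 39; the linear "physics", the step size and the threshold are invented for illustration:

```python
def train_window(recorded_end, start_params, simulate, vary,
                 threshold=0.5, max_iters=100):
    """Reproduce one time window of the recorded accident: simulate from
    t_{n-1} to t_n, measure the deviation from the recorded end state, and
    vary the variable parameters until the deviation is below the
    threshold. The caller then extends the window one increment further
    back (t_{n-2}, t_{n-3}, ...) and repeats."""
    params = dict(start_params)
    for _ in range(max_iters):
        deviation = abs(simulate(params) - recorded_end)
        if deviation < threshold:
            return params, deviation  # window reproduced; step further back
        params = vary(params)
    raise RuntimeError("window could not be reproduced within max_iters")

# Placeholder physics model: final position depends linearly on speed.
simulate = lambda p: p["speed"] * 1.0
# Crude stand-in for the optimizer: nudge one variable driving parameter.
vary = lambda p: {"speed": p["speed"] + 0.25}

params, dev = train_window(recorded_end=10.0,
                           start_params={"speed": 8.0},
                           simulate=simulate, vary=vary)
```

Once every window back to the start time t_s is reproduced below the threshold, the network has imitated the accident; training to avoid it then proceeds from the same reconstructed windows.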
Abstract
The invention relates to a method for training at least one algorithm for a control unit of an autonomous motor vehicle using said algorithm, the at least one algorithm being trained in a simulation environment by a self-learning neural network, by selecting a critical driving situation or an accident involving the motor vehicle; emulating the critical driving situation or the accident in the simulation environment; calculating an occupancy area of the motor vehicle model at an earlier time on the basis of the driving and environmental parameters using at least one physics model; simulating the critical driving situation or the accident; determining deviations between a state of the critical driving situation or the accident and its simulation; and applying an optimization algorithm in order to minimize the deviations between the critical driving situation or the accident and their simulation.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102021202083.6A DE102021202083A1 (de) | 2021-03-04 | 2021-03-04 | Computerimplementiertes Verfahren zum Trainieren wenigstens eines Algorithmus für eine Steuereinheit eines Kraftfahrzeugs, Computerprogrammprodukt, Steuereinheit sowie Kraftfahrzeug |
DE102021202083.6 | 2021-03-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022184363A1 true WO2022184363A1 (fr) | 2022-09-09 |
Family
ID=80624098
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2022/052455 WO2022184363A1 (fr) | 2021-03-04 | 2022-02-02 | Procédé mis en œuvre par ordinateur pour l'entrainement d'au moins un algorithme pour une unité de commande d'un véhicule à moteur, produit programme d'ordinateur, unité de commande et véhicule à moteur |
Country Status (2)
Country | Link |
---|---|
DE (1) | DE102021202083A1 (fr) |
WO (1) | WO2022184363A1 (fr) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102006044086A1 (de) | 2006-09-20 | 2008-04-10 | Audi Ag | System und Verfahren zur Simulation von Verkehrssituationen, insbesondere unfallkritischen Gefahrensituationen, sowie ein Fahrsimulator |
DE102018008024A1 (de) * | 2018-10-10 | 2019-04-11 | Daimler Ag | Verfahren zur Bewertung einer Verkehrssituation |
WO2020114674A1 (fr) * | 2018-12-03 | 2020-06-11 | Psa Automobiles Sa | Procédé pour l'entraînement d'au moins un algorithme pour un appareil de commande d'un véhicule automobile, produit de programme informatique ainsi que véhicule automobile |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102008001256A1 (de) | 2008-04-18 | 2009-10-22 | Robert Bosch Gmbh | Verkehrsobjekt-Erkennungssystem, Verfahren zum Erkennen eines Verkehrsobjekts und Verfahren zum Einrichten eines Verkehrsobjekt-Erkennungssystems |
DE102008027509A1 (de) | 2008-06-10 | 2009-12-31 | Audi Ag | Verfahren zur prognostischen Bewertung wenigstens eines vorausschauenden Sicherheitssystems eines Kraftfahrzeugs |
DE102019206908B4 (de) | 2019-05-13 | 2022-02-17 | Psa Automobiles Sa | Verfahren zum Trainieren wenigstens eines Algorithmus für ein Steuergerät eines Kraftfahrzeugs, Computerprogrammprodukt, Kraftfahrzeug sowie System |
-
2021
- 2021-03-04 DE DE102021202083.6A patent/DE102021202083A1/de active Pending
-
2022
- 2022-02-02 WO PCT/EP2022/052455 patent/WO2022184363A1/fr active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102006044086A1 (de) | 2006-09-20 | 2008-04-10 | Audi Ag | System und Verfahren zur Simulation von Verkehrssituationen, insbesondere unfallkritischen Gefahrensituationen, sowie ein Fahrsimulator |
DE102018008024A1 (de) * | 2018-10-10 | 2019-04-11 | Daimler Ag | Verfahren zur Bewertung einer Verkehrssituation |
WO2020114674A1 (fr) * | 2018-12-03 | 2020-06-11 | Psa Automobiles Sa | Procédé pour l'entraînement d'au moins un algorithme pour un appareil de commande d'un véhicule automobile, produit de programme informatique ainsi que véhicule automobile |
Non-Patent Citations (3)
Title |
---|
CATHY WU ET AL: "Flow: Architecture and Benchmarking for Reinforcement Learning in Traffic Control", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 16 October 2017 (2017-10-16), XP080829046 * |
NEUROHR CHRISTIAN ET AL: "Criticality Analysis for the Verification and Validation of Automated Vehicles", IEEE ACCESS, IEEE, USA, vol. 9, 21 January 2021 (2021-01-21), pages 18016 - 18041, XP011834952, DOI: 10.1109/ACCESS.2021.3053159 * |
SVEN HALLERBACH ET AL: "Simulation-Based Identification of Critical Scenarios for Cooperative and Automated Vehicles", SAE INTERNATIONAL JOURNAL OF CONNECTED AND AUTOMATED VEHICLES, vol. 1, no. 2, 16 February 2018 (2018-02-16), pages 93 - 106, XP055663706, ISSN: 2574-075X, DOI: 10.4271/2018-01-1066 * |
Also Published As
Publication number | Publication date |
---|---|
DE102021202083A1 (de) | 2022-09-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020229116A1 (fr) | Procédé pour l'entraînement d'au moins un algorithme pour un appareil de commande d'un véhicule automobile, produit de programme informatique, véhicule automobile ainsi que système | |
WO2019060938A1 (fr) | Procédé et dispositif pour générer un profil de vitesse dynamique d'un véhicule automobile | |
DE102016012465B4 (de) | Verfahren zur Bestimmung einer Änderung im auf ein Kraftfahrzeug wirkenden Luftwiderstand | |
DE102019203712B4 (de) | Verfahren zum Trainieren wenigstens eines Algorithmus für ein Steuergerät eines Kraftfahrzeugs, Computerprogrammprodukt, Kraftfahrzeug sowie System | |
DE102013222634A1 (de) | Verfahren zur Prognostizierung eines Fahrbahn-Reibungsbeiwerts sowie Verfahren zum Betrieb eines Kraftfahrzeugs | |
EP4052178A1 (fr) | Procédé d'apprentissage d'au moins un algorithme pour un dispositif de commande d'un véhicule automobile, produit programme informatique et véhicule automobile | |
EP1623284B1 (fr) | Procede d'optimisation de vehicules et de moteurs servant a l'entrainement de tels vehicules | |
AT523834B1 (de) | Verfahren und System zum Testen eines Fahrerassistenzsystems | |
EP3891664A1 (fr) | Procédé pour l'entraînement d'au moins un algorithme pour un appareil de commande d'un véhicule automobile, produit de programme informatique ainsi que véhicule automobile | |
EP4111438A1 (fr) | Procédé d'apprentissage d'au moins un algorithme pour un dispositif de commande d'un véhicule automobile, produit de programme informatique et véhicule automobile | |
DE102018005864A1 (de) | Verfahren zum Testen eines Totwinkelassistenzsystems für ein Fahrzeug | |
WO2021175821A1 (fr) | Procédé mis en œuvre par ordinateur pour le calcul d'itinéraires d'un véhicule à moteur à conduite autonome, procédé de conduite d'un véhicule à moteur à conduite autonome, produit programme informatique et véhicule à moteur | |
WO2022184363A1 (fr) | Procédé mis en œuvre par ordinateur pour l'entrainement d'au moins un algorithme pour une unité de commande d'un véhicule à moteur, produit programme d'ordinateur, unité de commande et véhicule à moteur | |
DE102019105213A1 (de) | Möglichkeit zur Fahrerbewertung | |
WO2022251890A1 (fr) | Procédé et système pour tester un système d'aide à la conduite d'un véhicule | |
WO2022183228A1 (fr) | Procédé pour tester un système d'aide à la conduite d'un véhicule | |
WO2022077042A1 (fr) | Dispositif et système pour tester un système d'aide à la conduite pour un véhicule | |
DE102019101613A1 (de) | Simulieren verschiedener Verkehrssituationen für ein Testfahrzeug | |
DE102018207102A1 (de) | Verfahren zur Ermittlung der Trajektorienfolgegenauigkeit | |
WO2023275401A1 (fr) | Simulation d'usagers de la route avec des émotions | |
DE102022200497A1 (de) | Verfahren, Recheneinheit und Computerprogramm zur Abbildung eines Fahrerverhaltens in einer Fahrzeugsimulation | |
WO2023066559A1 (fr) | Procédé et système pour éviter des accidents de la faune sauvage | |
DE102021214095A1 (de) | Verfahren und System zum Erkennen von kritischen Verkehrsszenarien und/oder Verkehrssituationen | |
DE102021110810A1 (de) | Verfahren, System und Computerprogramm zum Erzeugen von Daten zum Entwickeln, Absichern, Trainieren und/oder Betreiben eines Fahrzeugsystems | |
EP4309141A1 (fr) | Procédé, programme informatique, unité de commande et véhicule automobile pour réaliser une fonction de conduite automatisée |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22706532 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 22706532 Country of ref document: EP Kind code of ref document: A1 |