EP4248367A1 - Learning device for a mobile cyber-physical system - Google Patents
Learning device for a mobile cyber-physical system
- Publication number
- EP4248367A1 (application EP21815481.3A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- learning
- learning unit
- environment
- sensor
- unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/02—Knowledge representation; Symbolic representation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/092—Reinforcement learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/098—Distributed learning, e.g. federated learning
Definitions
- the invention relates to the field of learning distributed systems, in particular mobile cyber-physical systems comprising a learning artificial intelligence able to control the movement and evolution of such systems in their environment or, more generally, the interaction of these systems with their environment.
- the invention applies in particular to the field of autonomous vehicles, but also to the field of robotics or drones.
- a general problem in the field of autonomous systems relates to the automatic piloting of such systems moving and interacting with their environment.
- autonomous systems use machine learning algorithms to learn to recognize obstacles in their environment and to determine optimal trajectories.
- the learning phase is generally carried out under test conditions with dedicated test systems in a controlled environment.
- learning is carried out on test vehicles in a secure environment.
- the learning software is then downloaded to each vehicle in operational condition for use.
- a disadvantage of this method is that it does not take into account the specificities of each vehicle and the differences with respect to the test vehicle.
- the physical characteristics of a vehicle can change over time, for example because a tire deflates or certain sensors or motors deteriorate.
- a learning algorithm optimized for certain test conditions is therefore not necessarily adapted to a real operational situation, which can lead to trajectory errors in real conditions.
- Another solution consists in updating the learning carried out under test conditions from local characteristics of the vehicle on which the autopilot software is installed. This is called local overfitting.
- the present invention aims to provide a learning cyber-physical system that combines conventional offline learning with simulated learning from data acquired directly by the system.
- the system control algorithm can be updated regularly on the basis of new situations with which the system (or similar systems cooperating together) is confronted and/or by taking into account the evolution of the physical characteristics of the system.
- the subject of the invention is a learning device intended to be embedded in a mobile cyber-physical system fitted with actuators, the device comprising: at least one sensor for perceiving the external environment of the system; at least one internal sensor able to provide information on the state of the system; a first learning unit configured to restore a perception of the environment from the data acquired by the at least one perception sensor; a second learning unit configured to control the actuators; a generator of simulation scenarios of the system in its environment, controlled by the first and second learning units; a scenario simulator; and a virtualization platform for simulating the behavior of a digital twin of the system in the scenarios produced by the generator and adapting the parameters of the second learning unit so that the control of the system adapts to its environment. The second learning unit implements an automatic learning algorithm for controlling the actuators from the at least one perception sensor and the at least one internal sensor, this algorithm being trained by means of the scenarios simulated in the virtualization platform. The device further comprises a member for triggering the simulation scenario generator according to a triggering event.
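The claimed architecture can be sketched in a few lines of Python. This is purely illustrative: the class names, the pass-through perception model and the toy control policy are assumptions of this sketch, not elements taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PerceptionUnit:
    """Stand-in for the 'first learning unit' (104)."""
    def perceive(self, sensor_data: dict) -> dict:
        # A trained model would detect obstacles here; we pass them through.
        return {"obstacles": sensor_data.get("camera", [])}

@dataclass
class ControlUnit:
    """Stand-in for the 'second learning unit' (106)."""
    params: dict = field(default_factory=lambda: {"gain": 1.0})
    def command(self, perception: dict, internal: dict) -> dict:
        # Toy policy: throttle drops with obstacle count and low tyre pressure.
        speed = self.params["gain"] / (1 + len(perception["obstacles"]))
        return {"throttle": speed * internal.get("tyre_pressure", 1.0)}

@dataclass
class LearningDevice:
    perception: PerceptionUnit
    control: ControlUnit
    scenario_log: List[dict] = field(default_factory=list)
    def step(self, external: dict, internal: dict) -> dict:
        seen = self.perception.perceive(external)
        self.scenario_log.append({"seen": seen, "internal": internal})
        return self.control.command(seen, internal)

device = LearningDevice(PerceptionUnit(), ControlUnit())
cmd = device.step({"camera": ["pedestrian"]}, {"tyre_pressure": 0.9})
```

Here `scenario_log` plays the role of the memory from which the scenario generator would later replay the pre-event data.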
- the cyber-physical system is an autonomous vehicle, a robot or a drone.
- the at least one perception sensor is taken from among a camera, a Lidar, a laser, an acoustic sensor.
- the at least one internal sensor is taken from among a temperature sensor, a pressure sensor, a speed sensor.
- the first learning unit implements an automatic learning algorithm configured to generate data characteristic of the environment from the at least one perception sensor.
- the device comprises a data storage unit for saving the data generated by the first learning unit over a predetermined period.
- the simulation scenario generator and/or the simulator and/or the virtualization platform are capable of being deported to a centralized server.
- the virtualization platform is able to receive simulation scenarios generated by remote cyber-physical systems belonging to a fleet of systems.
- the device further comprises a unit for converting a simulation scenario into a textual semantic description intended to be transmitted to other systems belonging to the same fleet, and a unit for generating a simulation scenario from a received textual semantic description.
- the invention also relates to a mobile cyber-physical system provided with actuators comprising a learning device according to the invention configured to control the actuators to control said system in its environment.
- FIG. 1 represents a diagram of a first variant embodiment of a cyber-physical system according to the invention
- FIG. 2 represents a second alternative embodiment of the system of FIG. 1,
- FIG. 3 represents a third alternative embodiment of the system of FIG. 1,
- FIG. 4 represents a fourth alternative embodiment of the system of FIG. 1,
- FIG. 5 represents an example of distributed implementation of the system according to the invention.
- Figure 1 illustrates, in a diagram, an example of a cyber-physical system according to the invention comprising a learning device.
- the system 101 is mobile in an environment 102.
- the system 101 is a motor vehicle moving on a road, or a robot or even a drone.
- the system 101 moves in its environment by means of actuators 105.
- the actuators designate all the elements of the system which allow it to move or to interact with its environment.
- in the case of a vehicle, the actuators 105 notably include the wheels, the steering wheel and the gear lever.
- in the case of a robot, the actuators 105 comprise, for example, an articulated arm making it possible to grasp an object.
- the actuators 105 are driven by a command (for example an electrical signal) to interact with the environment 102.
- the system 101 is provided with a learning device which comprises the following elements.
- One or more external sensors 103 are placed on the system 101 to acquire data or measurements for perceiving the environment 102.
- the external sensors or perception sensors 103 include, for example, a camera, a Lidar device, a laser, an acoustic sensor or any other sensor making it possible to measure information on the environment 102.
- the external sensor(s) 103 are connected to a first learning unit 104.
- the learning unit 104 has the function of detecting and characterizing objects in the acquired images, in particular obstacles such as pedestrians or buildings, or of detecting the limits of a road.
- the learning unit 104 implements an automatic learning algorithm, for example an algorithm based on an artificial neural network.
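For illustration only, the obstacle detection performed by unit 104 can be caricatured by a thresholding function standing in for the trained neural network; the function name and threshold are assumptions of this sketch, not the patent's method:

```python
def detect_obstacles(image, threshold=0.5):
    """Return (row, col) coordinates whose intensity exceeds the threshold.

    A crude stand-in for a learned detector: real perception would use a
    neural network trained on labelled images.
    """
    return [(r, c)
            for r, row in enumerate(image)
            for c, value in enumerate(row)
            if value > threshold]

frame = [[0.1, 0.2, 0.9],
         [0.0, 0.8, 0.1]]
obstacles = detect_obstacles(frame)  # [(0, 2), (1, 1)]
```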
- a second learning unit 106 is used to control the actuators 105 depending, in particular, on the data provided by the first learning unit 104 to characterize the environment.
- the second learning unit 106 implements another automatic learning algorithm which has the function of controlling the actuators
- when the system 101 is a vehicle, one objective of the second learning unit 106 is to control the movement of the vehicle in its environment while avoiding collisions with obstacles and respecting the rules of the road.
- when the system 101 is a robot, one objective of the second learning unit 106 is to control the movement of the robot and to control its articulated arm to carry out a predetermined mission.
- the learning algorithm(s) implemented by the second learning unit 106 are trained beforehand, on learning data in a test environment, to achieve the targeted objective.
- the training is carried out in particular by means of scenarios 110 for simulating the environment 102 which make it possible to train the unit 106 to achieve the target objective for a set of predetermined scenarios.
- An objective of the invention is in particular to improve the learning carried out by the unit 106 so as to take into account more finely the evolution of the environment 102, but also the evolution of the characteristics of the system 101 over time.
- the system 101 is also equipped with internal sensors 112 whose role is to measure characteristics relating to the state of the system 101, in particular the state of the actuators 105.
- the internal sensors 112 comprise temperature sensors, pressure sensors, in particular tire pressure of a vehicle, speed sensors.
- the measurements provided by the internal sensors 112 are also taken into account in the learning of the second learning unit 106 to control the actuators 105.
- the data generated by the first learning unit 104 is stored in a memory 108 over a predefined time interval.
- the learning device with which the system 101 is equipped also comprises a generator 109 of simulation scenarios of the environment 102.
- This generator is fed, on the one hand, by a set of predetermined initial scenarios 110 used to train the learning unit 106.
- On the other hand, it is fed by the environmental perception data stored in the memory 108 to generate new scenarios from the information acquired by the external sensors 103.
- the generation of scenarios also takes into account information provided by the learning unit 106.
- the generator 109 is activated following a trigger event.
- This event can be triggered manually by a user of the system 101, for example by the driver of a vehicle who wishes to update the learning of the unit 106 following a particular event, for example a collision of the system with an obstacle.
- the triggering of the generator 109 can also be carried out automatically by means of an automatic learning algorithm configured to detect a particular event, for example a collision or non-compliance with the highway code or even non-compliance with a mission entrusted to a robot, or even unacceptable performance of the robot for the task performed, for example, excessive execution time.
- the detection of this event can be performed by the first learning unit 104.
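The trigger mechanism described above can be sketched as a bounded buffer of recent perception samples that is frozen when an event is detected; the collision flag used as the event predicate here is a placeholder for the learned detector:

```python
from collections import deque

class TriggeredRecorder:
    """Keep the last `window` samples (memory 108) and snapshot them when a
    triggering event occurs, for replay by the scenario generator (109)."""
    def __init__(self, window: int):
        self.buffer = deque(maxlen=window)  # only the most recent samples
        self.snapshots = []
    def record(self, sample: dict):
        self.buffer.append(sample)
        if sample.get("collision"):          # placeholder event predicate
            self.snapshots.append(list(self.buffer))  # freeze pre-event window

rec = TriggeredRecorder(window=3)
for t in range(5):
    rec.record({"t": t, "collision": t == 4})
```

After the collision at t=4, the snapshot holds the predefined time interval preceding and including the event (t=2, 3, 4).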
- Following the triggering event, the generator 109 generates a new scenario of the environment 102 from the data stored in the memory 108 corresponding to a predefined time interval preceding the triggering event.
- This new scenario is supplied as input to a simulator 107 capable of simulating the system 101 in its simulated environment corresponding to the generated scenario.
- a virtualization platform 111 is then used to simulate the overall behavior of the system 101 including the configuration of the learning unit 106.
- the virtualization platform 111 is able to model a digital twin of the system 101 from an initial model of the system and the measurements provided by the internal sensors 112.
- the digital twin makes it possible to faithfully reproduce the system 101 and its evolutions over time and to take these evolutions into account in the learning of the piloting of the system by the learning unit 106.
- the virtualization platform 111 uses the data recorded in the memory 108 over a predefined time interval before the triggering event, including the data fed back from the internal sensors, to virtually reproduce the scenario that led to the event.
- the learning algorithm implemented by the learning unit 106 re-parameterizes the actuators of the system 101 so as to virtually produce an acceptable scenario in the same simulated environment.
- the virtualization platform 111 simulates the behavior of the digital twin of the system in the scenario simulated by the simulator 107.
- A new training of the automatic learning algorithm executed by the learning unit 106 is carried out with the aim of controlling the system so as to avoid the incident which triggered the new scenario. For example, if the triggering event corresponds to a collision of the vehicle with an obstacle which had not been detected, the learning algorithm uses the sensor data corresponding to a time interval preceding and including this collision as learning data, in order to modify the parameterization of the trajectory of the vehicle and learn how to avoid this type of obstacle.
- Similarly, if the triggering event is the crossing of a road line, the learning algorithm uses these new learning data to modify the parameter setting of the trajectory of the vehicle so as to avoid such a line crossing.
- the data saved in memory 108 and corresponding to a triggering event is used as new learning data to update the automatic learning algorithm so that this type of event no longer occurs in the future; in other words, the system 101 is configured to avoid the occurrence of such an event.
- This new learning is carried out for the new simulated scenario but also for all the initial scenarios 110 in order to always verify that the control of the system is compatible with all the scenarios provided.
- the new configuration of the actuators (for example the control of the transmission, in the case where the system is a vehicle and the triggering event is a bad trajectory due to an under-inflated tire) is simulated for all the scenarios 110 to verify that the new learning does not generate other undesirable events.
- the modification of the configuration of an actuator can, potentially, generate other undesired events in the scenarios previously tested; it is therefore important to execute all the scenarios for each new learning data set available following a triggering event.
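This regression check over the full scenario set can be sketched as follows; the `passes` acceptance test and the speed-based scenario format are invented for the example:

```python
def validate(params, scenarios, passes):
    """Accept new parameters only if every scenario, old and new, passes."""
    failures = [s["name"] for s in scenarios if not passes(params, s)]
    return len(failures) == 0, failures

# Initial scenarios 110 plus the new event-derived one (toy format).
scenarios = [{"name": "initial-1", "max_speed": 30},
             {"name": "initial-2", "max_speed": 50},
             {"name": "new-collision", "max_speed": 20}]

def passes(params, scenario):
    # Hypothetical acceptance criterion: stay under the scenario speed limit.
    return params["speed"] <= scenario["max_speed"]

ok, failed = validate({"speed": 25}, scenarios, passes)    # new scenario fails
ok2, failed2 = validate({"speed": 15}, scenarios, passes)  # all scenarios pass
```

Only a parameter set that clears every scenario would be transferred back to the learning unit 106 for use in real conditions.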
- the learning algorithm implemented by the learning unit 106 is executed in the virtualization platform 111 with the simulation parameters to carry out a new learning of this algorithm.
- the new parameters of the algorithm determined by the virtualization platform 111 are transmitted to the learning unit 106 which will update its learning algorithm to modify the control of the system in actual conditions.
- the new learning phase carried out by the virtualization platform 111 is, for example, carried out during a period of inactivity of the system 101. It is also possible to carry out the learning phase on the virtualization platform in parallel with the operation of the system, and transfer the improved parameters to the system once it is stopped or in a safe condition.
- the system 101 can improve its reaction in order to avoid a new incident.
- a triggering event for the generation of a new scenario is, for example, crossing a line, not respecting a traffic light or a collision with an obstacle or more generally a traffic accident.
- the generator 109 produces a simulation scenario corresponding to this accident from the data recorded in memory 108.
- the virtualization platform 111 will then carry out a new learning of the control algorithm from this scenario with the aim of modifying the steering of the vehicle to avoid an accident.
- the new parameters of the artificial intelligence algorithm executed by the learning unit 106 are then updated so that the vehicle improves its reaction if the scenario which led to the accident is reproduced.
- the updated parameters remain compatible with other previously validated scenarios.
- the virtualization platform 111 takes into account, via the simulation of the digital twin of the vehicle, the internal characteristics of the car, for example the tire pressure level or the state of wear of the tires, which can be estimated through a correlation between the time elapsed since they were changed and their level of use.
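The wear estimation mentioned above can be sketched as a toy model feeding the digital twin; the linear wear law, the lifetime constant and the grip formula are all assumptions of the sketch, not values from the patent:

```python
TYRE_LIFETIME_DAYS = 1500  # assumed full-wear horizon

def twin_state(pressure_bar: float, days_since_change: int) -> dict:
    """Estimate tyre wear from elapsed time and combine it with measured
    pressure into a grip figure for the digital twin (toy model)."""
    wear = min(1.0, days_since_change / TYRE_LIFETIME_DAYS)
    # Grip degrades with both under-inflation and wear (2.4 bar = nominal).
    grip = max(0.0, (pressure_bar / 2.4) * (1.0 - 0.5 * wear))
    return {"wear": wear, "grip": round(grip, 3)}

state = twin_state(pressure_bar=2.4, days_since_change=750)
```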
- the learning aims, for example, to improve the control of the robot's articulated arm so as to improve its grip for grasping certain types of objects or performing certain tasks which require precision.
- An advantage provided by the invention is that it makes it possible to improve the learning of the learning unit in charge of controlling the system according to events which occur in operational conditions.
- the invention makes it possible to react to specific events which were not foreseen in the initial learning scenarios used to develop the learning algorithm.
- the invention takes into account, via a digital twin of the system, the evolution over time of the state of the system.
- the learning unit 106 in charge of controlling the system 101 executes one or more automatic learning algorithms which receive as input all the data acquired by the external 103 and internal 112 sensors as well as the perception data of the environment produced by the first learning unit 104 and produce as output one or more command(s) intended for the actuators 105.
- Reference [1] describes a vehicle parking aid algorithm.
- Reference [2] describes a method for detecting events of known nature which can be used to detect a particular event and trigger the generation of a new scenario.
- Reference [3] describes an algorithm making it possible to adapt the control of a vehicle in real time.
- Reference [4] describes an artificial intelligence algorithm which makes it possible to adapt the control of a vehicle in a modeled terrain.
- Reference [5] describes an artificial intelligence algorithm to adapt the generation of mobile robot trajectories.
- Reference [6] describes yet another example of a learning algorithm for autonomous driving.
- the first learning unit 104 also executes one or more automatic learning algorithm(s) which aim to characterize the environment of the system from the data acquired by the external sensors 103.
- The algorithms implemented by unit 104 can be chosen from state-of-the-art algorithms known to those skilled in the art. Without being exhaustive, several possible examples of such algorithms are cited below.
- Reference [7] describes an algorithm for detecting particular events in a video sequence.
- Reference [9] describes a detection method for measurements acquired by environmental sensors.
- Reference [10] describes a method for detecting pedestrians in a video sequence.
- Reference [11] describes another method for detecting objects in images.
- Reference [13] describes a method for characterizing a 3D scene.
- Reference [14] describes a method for recognizing objects in a scene observed in 3D.
- Reference [8] describes a method for generating a simulated environment which can be implemented by the generator 109.
- Reference [12] describes a system for generating a simulated scenario from data supplied by sensors, which can also be used to implement the generator 109.
- the processor may be a generic processor, a specific processor, an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA).
- the learning device according to the invention can use one or more dedicated electronic circuits or a general-purpose circuit.
- the technique of the invention can be implemented on a reprogrammable calculation machine (a processor or a microcontroller for example) executing a program comprising a sequence of instructions, or on a dedicated calculation machine (for example a set of gates such as an FPGA or an ASIC, or any other hardware module, in particular neuromorphic electronic modules suitable for embedded learning).
- Figure 1 describes a first embodiment of the invention for which all the components of the learning device are embedded in the cyber-physical system 101.
- Figure 2 describes a second embodiment of the invention for which the simulation scenario generator 109 is remote outside the system 101, for example in a remote server.
- Figure 3 describes a third alternative embodiment of the invention for which the simulation scenario generator 109, the simulator 107 and the virtualization platform 111 are deported to a remote server.
- Figure 4 describes a fourth embodiment of the invention for which, in addition, the storage or memory unit 108 which makes it possible to save the perception data over a time interval is also deported to a remote server.
- each of the components 109, 107, 111, 108 can be deported, alone or in combination with another, to a calculation server having increased calculation resources.
- the system 101 includes communication equipment making it possible to exchange data with the remote server. This may, for example, be radio communication equipment based on wireless technology (e.g. 5G technology).
- FIG. 5 describes another embodiment of the invention in which the simulation scenarios generated following an event are shared between several systems 501, 502, 503 cooperating within a fleet.
- An advantage of this variant is that it allows cooperative learning, with all the vehicles benefiting from the new scenarios generated by each vehicle following an event, leading to a faster improvement of the overall safety level of the fleet.
- the new scenarios generated by any one of the systems are retransmitted to all the other systems 502, 503 of the fleet so that they carry out a new learning.
- the simulation scenarios are transmitted to the other vehicles of the fleet in a compressed form, for example in the form of a semantic description. In this way, the bandwidth consumed by these data transfers is reduced.
- the system 501 which generates a new scenario also generates a semantic description of this scenario.
- a semantic description can be obtained using image-captioning algorithms that create a textual description from an image.
- the textual description of the generated scenario is then transmitted to the other fleet systems 502, 503, which can regenerate the simulation scenario from this textual description using a generative text-to-scene algorithm.
- Reference [15] gives an example of a semantic description generation method from images.
- Reference [16] gives an example of a scene generation method from a semantic description (ontology).
- This variant embodiment has a significant advantage in terms of limiting the quantity of data exchanged between the systems of the fleet to share the scenarios.
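A minimal sketch of this compression scheme, assuming an invented one-line description format (references [15] and [16] describe actual image-to-text and text-to-scene methods, which this sketch does not implement):

```python
import json

def describe(scenario: dict) -> str:
    """Compress a scenario into a short textual semantic description."""
    return (f"{scenario['event']} with {len(scenario['obstacles'])} "
            f"obstacle(s) at {scenario['speed_kmh']} km/h")

def regenerate(description: str) -> dict:
    """Rebuild a coarse scenario from the textual description."""
    words = description.split()
    return {"event": words[0],
            "n_obstacles": int(words[2]),
            "speed_kmh": int(words[-2])}

raw = {"event": "collision", "speed_kmh": 48,
       "obstacles": [{"x": i, "y": 2 * i} for i in range(200)]}
text = describe(raw)
assert len(text) < len(json.dumps(raw))  # far fewer bytes to transmit
```

The regenerated scenario is coarser than the original, which is the trade-off the patent accepts in exchange for the reduced bandwidth.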
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| FR2011987A FR3116634B1 (fr) | 2020-11-23 | 2020-11-23 | Learning device for a mobile cyber-physical system |
| PCT/EP2021/082153 WO2022106545A1 (fr) | 2020-11-23 | 2021-11-18 | Learning device for a mobile cyber-physical system |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| EP4248367A1 (de) | 2023-09-27 |
Family
ID=74860038
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP21815481.3A Pending EP4248367A1 (de) | 2021-11-18 | Learning device for a mobile cyber-physical system |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US20230401453A1 (de) |
| EP (1) | EP4248367A1 (de) |
| FR (1) | FR3116634B1 (de) |
| WO (1) | WO2022106545A1 (de) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US12423759B2 (en) * | 2023-03-21 | 2025-09-23 | King Saud University | System and method for cybersecurity risk monitoring and evaluation in connected and autonomous vehicles |
Family Cites Families (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| FR3021938B1 (fr) | 2014-06-04 | 2016-05-27 | Commissariat Energie Atomique | Dispositif d'aide au parking et vehicule equipe d'un tel dispositif. |
| FR3026526B1 (fr) | 2014-09-26 | 2017-12-08 | Commissariat Energie Atomique | Procede et systeme de detection d'evenements de nature connue |
| FR3044450B1 (fr) | 2015-12-01 | 2017-11-24 | Commissariat Energie Atomique | Procede de caracterisation d'une scene par calcul d'orientation 3d |
| FR3054062B1 (fr) | 2016-07-13 | 2018-08-24 | Commissariat Energie Atomique | Systeme et procede de capture embarquee et de reproduction 3d/360° du mouvement d'un operateur dans son environnement |
| US10481044B2 (en) * | 2017-05-18 | 2019-11-19 | TuSimple | Perception simulation for improved autonomous vehicle control |
| US11042155B2 (en) * | 2017-06-06 | 2021-06-22 | Plusai Limited | Method and system for closed loop perception in autonomous driving vehicles |
| FR3076028B1 (fr) | 2017-12-21 | 2021-12-24 | Commissariat Energie Atomique | Methode de reconnaissance d'objets dans une scene observee en trois dimensions |
| WO2019191306A1 (en) * | 2018-03-27 | 2019-10-03 | Nvidia Corporation | Training, testing, and verifying autonomous machines using simulated environments |
| DE102018220865B4 (de) * | 2018-12-03 | 2020-11-05 | Psa Automobiles Sa | Verfahren zum Trainieren wenigstens eines Algorithmus für ein Steuergerät eines Kraftfahrzeugs, Computerprogrammprodukt sowie Kraftfahrzeug |
| DE102019206908B4 (de) * | 2019-05-13 | 2022-02-17 | Psa Automobiles Sa | Verfahren zum Trainieren wenigstens eines Algorithmus für ein Steuergerät eines Kraftfahrzeugs, Computerprogrammprodukt, Kraftfahrzeug sowie System |
| US11727169B2 (en) * | 2019-09-11 | 2023-08-15 | Toyota Research Institute, Inc. | Systems and methods for inferring simulated data |
- 2020
  - 2020-11-23: FR application FR2011987A published as FR3116634B1 (active)
- 2021
  - 2021-11-18: EP application EP21815481.3A published as EP4248367A1 (pending)
  - 2021-11-18: international application PCT/EP2021/082153 published as WO2022106545A1 (ceased)
  - 2021-11-18: US application US18/037,544 published as US20230401453A1 (pending)
Also Published As
| Publication number | Publication date |
|---|---|
| FR3116634B1 (fr) | 2022-12-09 |
| FR3116634A1 (fr) | 2022-05-27 |
| US20230401453A1 (en) | 2023-12-14 |
| WO2022106545A1 (fr) | 2022-05-27 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10867409B2 (en) | Methods and systems to compensate for vehicle calibration errors | |
| CN111273655B (zh) | Motion planning method and system for autonomous driving vehicles | |
| US10929995B2 (en) | Method and apparatus for predicting depth completion error-map for high-confidence dense point-cloud | |
| CN111923927B (zh) | Method and apparatus for interaction-aware traffic scene prediction | |
| CN111923928A (zh) | Decision-making method and system for autonomous vehicles | |
| CN110901656B (zh) | Experimental design method and system for autonomous vehicle control | |
| US11242050B2 (en) | Reinforcement learning with scene decomposition for navigating complex environments | |
| EP4256412B1 (de) | System and method for controlling machine-learning-based vehicles | |
| US11603119B2 (en) | Method and apparatus for out-of-distribution detection | |
| US11628865B2 (en) | Method and system for behavioral cloning of autonomous driving policies for safe autonomous agents | |
| CN111208814A (zh) | Memory-based optimal motion planning with dynamic models for autonomous vehicles | |
| Swief et al. | A survey of automotive driving assistance systems technologies | |
| Menhour et al. | A new model-free design for vehicle control and its validation through an advanced simulation platform | |
| Curiel-Ramirez et al. | Hardware in the loop framework proposal for a semi-autonomous car architecture in a closed route environment | |
| US20200033870A1 (en) | Fault Tolerant State Estimation | |
| US11989020B1 (en) | Training machine learning model(s), in simulation, for use in controlling autonomous vehicle(s) | |
| EP4248367A1 (de) | Learning device for a mobile cyber-physical system | |
| US20210056014A1 (en) | Method for rating a software component of an sil environment | |
| Baskaran et al. | End-to-end drive by-wire PID lateral control of an autonomous vehicle | |
| Catozzi | Design of a NMPC system for automated driving and integration into the CARLA simulation environment | |
| EP4448365A1 (de) | Method for monitoring the operation of a motor vehicle | |
| EP4392843B1 (de) | Method for modelling a navigation environment of a motor vehicle | |
| Gupta et al. | Smart autonomous vehicle using end to end learning | |
| EP4517467A1 (de) | Verwaltung von gegnerischen agenten zum testen von autonomen fahrzeugen | |
| Zhang et al. | Shared Control of Teleoperated Vehicles With Delay-Compensated Safety Filtering |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| | 17P | Request for examination filed | Effective date: 20230524 |
| | AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| | DAV | Request for validation of the european patent (deleted) | |
| | DAX | Request for extension of the european patent (deleted) | |
| | RAP3 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: COMMISSARIAT A L'ENERGIE ATOMIQUE ET AUX ENERGIES ALTERNATIVES |