CN115136081A - Method for training at least one algorithm for a controller of a motor vehicle, method for optimizing a traffic flow in a region, computer program product and motor vehicle


Info

Publication number: CN115136081A
Application number: CN202180015176.2A
Authority: CN (China)
Prior art keywords: motor vehicle, traffic, algorithm, task, vehicle
Legal status: Pending
Other languages: Chinese (zh)
Inventors: U·埃贝勒, C·蒂姆
Current Assignee: PSA Automobiles SA
Original Assignee: PSA Automobiles SA
Application filed by: PSA Automobiles SA

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/088 Non-supervised learning, e.g. competitive learning
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 13/00 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B 13/02 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B 13/04 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D 1/0088 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006 Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]

Abstract

A method for training at least one algorithm of a controller for a motor vehicle by means of a self-learning neural network is specified, the method having the following steps: providing a simulated environment containing map data of a real-existing usage area, wherein the behavior of the motor vehicle is determined by a rule set; providing real-time traffic data of the real-existing usage area and reproducing the traffic situation in the simulated environment; providing a task for the motor vehicle, in which the motor vehicle drives ahead of at least one other, simulated motor vehicle; performing a simulation of the task in the simulated environment; and determining a traffic flow indicator for the task, wherein the at least one algorithm and/or the at least one rule set is modified and the task is repeated when the traffic flow indicator is below a threshold value, or the task is classified as successful when the traffic flow indicator is above the threshold value.

Description

Method for training at least one algorithm for a controller of a motor vehicle, method for optimizing a traffic flow in a region, computer program product and motor vehicle
Technical Field
A method for training at least one algorithm for a controller of a motor vehicle, a method for optimizing a traffic flow in a region, a computer program product and a motor vehicle are described herein.
Background
Methods for training at least one algorithm for a controller of a motor vehicle, methods for optimizing a traffic flow in a region, computer program products and motor vehicles of the type mentioned at the outset are known from the prior art. In recent years, the first partially automated vehicles (corresponding to SAE Level 2 according to SAE J3016) have reached series maturity (Serienreife). Motor vehicles driven in an automated manner (corresponding to SAE Level 3 and above according to SAE J3016) or autonomously (corresponding to SAE Level 4/5 according to SAE J3016) must be able to react independently and with maximum safety to unknown traffic situations on the basis of various specifications (e.g. adherence to the driving destination and to the applicable traffic regulations). Since traffic reality is highly complex owing to the unpredictability of the behavior of other traffic participants, in particular human traffic participants, it is considered almost impossible to program the corresponding controllers of motor vehicles in a conventional manner on the basis of rules established by humans.
In order to solve complex problems by means of computers, it is also known to develop algorithms using machine learning methods or artificial intelligence, or by means of self-learning neural networks. On the one hand, such algorithms can react better to complex traffic situations than conventional algorithms. On the other hand, with the aid of artificial intelligence, algorithms can in principle be developed further and continuously improved during the development process and in everyday operation through continuous learning. Alternatively, the state of an algorithm can be frozen after the training phase of the development process and validated by the manufacturer.
Urban areas in particular, but also traffic junctions on motorways, carry a high volume of traffic at least during rush hour, especially in commuter traffic. The problem is caused in part by the generally high traffic density at these times, but also by the behavior of human traffic participants. Human traffic participants tend to aggravate congestion rather than prevent it, for example by driving faster than a congestion-reducing speed limit because no obstacle appears to be present.
It is also known to regulate the traffic flow by means of lead vehicles, for example vehicles driven by officially authorized drivers such as the police. With such lead vehicles, the traffic flow in heavily loaded areas can be regulated more effectively than by speed limits alone, and the average speed of the regulated platoon can thus be increased as a whole.
DE 102017007468 A1 discloses a method for operating a vehicle, in which measurement data on a road traffic situation are determined and a future road traffic situation is determined from the measurement data in a traffic simulation. An on-board acquisition unit of the vehicle and/or of at least one further vehicle involved in the road traffic situation acquires the measurement data and transmits them to a central computing unit. The central computing unit carries out the traffic simulation over a predetermined time horizon (Zeithorizont) and, as a function of the result of the traffic simulation, determines vehicle parameters and transmits them to the vehicle and/or to the at least one further vehicle in such a way that, when the vehicle parameters are set, the driving behavior of the vehicle and/or of the at least one further vehicle is adapted in order to promote the traffic flow.
Disclosure of Invention
The object is therefore to develop a method for training at least one algorithm for a controller of a motor vehicle, a method for optimizing the traffic flow in a region, a computer program product and a motor vehicle of the type mentioned at the outset in such a way that the traffic flow in a high-load region can be better adjusted.
This object is achieved by a method for training at least one algorithm for a controller of a motor vehicle according to claim 1, a method for optimizing traffic flow in a region according to coordinate claim 9, a computer program product according to coordinate claim 11 and a motor vehicle according to coordinate claim 12. Further developments and embodiments are the subject matter of the dependent claims.
The following describes a method for training at least one algorithm of a controller for a motor vehicle, wherein the controller is provided for implementing an automated or autonomous driving function by intervening in units of the motor vehicle on the basis of input data using the at least one algorithm, and wherein the algorithm is trained by means of a self-learning neural network. The method comprises the following steps (a schematic sketch of the resulting training loop is given after the list of steps):
a) providing a computer program product module for the automated or autonomous driving function, wherein the computer program product module contains an algorithm to be trained and a self-learning neural network,
b) providing a simulation environment with simulation parameters, wherein the simulation environment contains map data of a real-existing usage area and the motor vehicle, wherein the behavior of the motor vehicle is determined by a rule set,
c) providing real-time traffic data of the real-existing usage area and reproducing the traffic situation in the simulated environment;
d) determining a traffic hotspot (Verkehrsbrennpunkt) on the basis of a traffic flow indicator (Verkehrsflussmetrik) and the real-time traffic data;
e) providing a task for the motor vehicle, in which the motor vehicle drives ahead of at least one further, simulated motor vehicle;
f) performing a simulation of the task in the simulation environment;
g) determining a traffic flow indicator for the task, wherein,
(i) modifying the at least one algorithm and/or the at least one rule set and repeating the task when the traffic flow indicator is below a threshold, or
(ii) classifying the task as successful when the traffic flow indicator is above the threshold.
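
For illustration, the following is a minimal sketch of how steps b) to g) could be orchestrated as a training loop. All names and interfaces (train_lead_vehicle_policy, the simulation object, run, update) are hypothetical and not taken from the patent; the learning update itself is only indicated schematically.

```python
# Hypothetical sketch of the training loop in steps b) to g); the simulation and
# policy interfaces are assumptions made for illustration, not part of the patent.

def train_lead_vehicle_policy(policy, simulation, tasks, threshold, max_rounds=1000):
    """Repeat each task until its traffic-flow indicator exceeds the threshold."""
    for task in tasks:                                  # e.g. routes through a traffic hotspot
        for _ in range(max_rounds):
            simulation.reproduce_real_time_traffic()    # step c): match observed traffic
            episode = simulation.run(task, policy)      # steps e), f): simulated lead-vehicle drive
            indicator = episode.average_speed()         # step g): traffic-flow indicator
            if indicator >= threshold:
                break                                   # g)(ii): task classified as successful
            policy.update(episode, reward=indicator)    # g)(i): modify the algorithm, repeat the task
    return policy
```

In this reading, step g)(i) corresponds to continuing the inner loop with a modified algorithm, while step g)(ii) terminates the inner loop and moves on to the next task.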
A motor vehicle equipped with a correspondingly trained algorithm can be used as an efficient lead vehicle, since the algorithm is trained to improve the traffic flow at traffic hotspots by compelling the conventional, human-driven vehicles travelling behind it to adopt behavior favorable to the traffic flow.
The corresponding traffic flow indicator can be, for example, an average speed that should correspond as far as possible to a certain minimum speed. Such a minimum speed can be predetermined or can be derived from theoretical considerations, for example taking into account the characteristics of the relevant area, such as the applicable speed limit. Parameters such as traffic light installations can also be taken into account.
A corresponding traffic hotspot can be an area in which traffic flows particularly poorly. Such an area can be, for example, an area with many entrances and exits and/or many traffic light installations.
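
As a simplified, concrete reading of the two preceding paragraphs, the indicator could be computed as the average speed achieved over the task and compared with a minimum speed derived from the applicable speed limit; the 0.8 factor in the sketch below is an arbitrary assumption, not a value from the patent.

```python
# Illustrative only: one way to evaluate the traffic-flow indicator described above.

def traffic_flow_indicator(distance_m: float, travel_time_s: float) -> float:
    """Average speed (m/s) achieved over the task, used as the traffic-flow metric."""
    return distance_m / travel_time_s

def task_successful(distance_m: float, travel_time_s: float,
                    speed_limit_ms: float, fraction: float = 0.8) -> bool:
    """The task counts as successful if the average speed reaches a minimum speed
    derived from the applicable speed limit (here assumed to be a fixed fraction of it)."""
    return traffic_flow_indicator(distance_m, travel_time_s) >= fraction * speed_limit_ms

# Example: 2.4 km through the hotspot in 5 minutes, with a 50 km/h speed limit
print(task_successful(2400.0, 300.0, 50 / 3.6))   # 8.0 m/s vs. required 11.1 m/s -> False
```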
In a first extended configuration, it can be provided that, when step g) (i) is carried out, a further task is selected and the method is repeated for the further task.
By repeating the method for additional tasks, excessive specialization of the algorithm can be avoided.
In a further embodiment, it can be provided that the task is the travel of a route from at least one starting point to at least one destination point via a traffic hotspot.
The route via the respective traffic hotspot maps the real usage scenario, i.e. guiding the traffic by using the correspondingly automatically driven lead vehicle.
In a further embodiment, it can be provided that the real-time traffic data contains infrastructure information.
By including the infrastructure information, the traffic situation can be mapped more realistically and the algorithm can be trained with respect to further factors predetermined by the traffic infrastructure, such as the duration of the individual traffic light phases in a given traffic hotspot and the synchronization of these traffic light phases with one another.
In a further embodiment, it can be provided that the task is varied by changing a parameter of the traffic situation in the simulated environment and the method is carried out for the modified task.
By changing the parameters, excessive specialization of the algorithm can be prevented.
In a further embodiment, it can be provided that an optimization algorithm is used when reproducing the traffic situation in the simulated environment in order to minimize the deviation between the simulated environment and the real-time traffic data.
By using an optimization algorithm, the traffic simulation can map the actual traffic situation in the usage area more accurately.
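
A toy, self-contained illustration of such a calibration step is sketched below: a single simulation parameter (the drivers' time headway) is searched so that the simulated average speed matches an observed value. The one-parameter speed model and all numbers are assumptions; a real traffic simulation would expose many more parameters and would typically use a more capable optimizer.

```python
# Toy calibration of a simulation parameter against real-time traffic data.
# The speed model and the observed values are assumptions for illustration only.

def simulated_average_speed(flow_veh_per_h: float, time_headway_s: float) -> float:
    """Toy model: average speed drops as demand approaches the capacity implied
    by the drivers' time headway."""
    capacity = 3600.0 / time_headway_s                   # vehicles per hour and lane
    free_flow_speed = 13.9                               # m/s, roughly 50 km/h
    utilisation = min(flow_veh_per_h / capacity, 0.999)
    return free_flow_speed * (1.0 - utilisation)

def calibrate_headway(observed_speed_ms: float, flow_veh_per_h: float) -> float:
    """Grid search for the time headway that minimizes the deviation from observations."""
    candidates = [1.0 + 0.05 * i for i in range(60)]     # 1.00 s .. 3.95 s
    return min(candidates,
               key=lambda h: (simulated_average_speed(flow_veh_per_h, h) - observed_speed_ms) ** 2)

print(round(calibrate_headway(observed_speed_ms=6.0, flow_veh_per_h=1500.0), 2))  # e.g. 1.35
```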
In a further embodiment, it can be provided that the traffic flow indicator comprises an average speed during the task.
The average speed during the task is a suitable indicator for analyzing the traffic flow.
In a further embodiment, it can be provided that the algorithm is trained by means of a reinforcement learning algorithm.
Reinforcement learning algorithms are particularly well suited for optimization tasks such as the present one.
A further independent subject matter relates to a device for training at least one algorithm of a controller for a motor vehicle, wherein the controller is provided for implementing an automated or autonomous driving function by intervening in units of the motor vehicle on the basis of input data using the at least one algorithm, wherein the algorithm is trained by means of a self-learning neural network, and wherein the device is designed to carry out the following steps:
a) providing a computer program product module for the automated or autonomous driving function, wherein the computer program product module contains an algorithm to be trained and a self-learning neural network,
b) providing a simulation environment with simulation parameters, wherein the simulation environment contains map data of a real-existing usage area and the motor vehicle, wherein the behavior of the motor vehicle is determined by a rule set,
c) providing real-time traffic data of the real-existing usage area and reproducing the traffic situation in the simulated environment;
d) determining a traffic hotspot on the basis of the traffic flow indicator and the real-time traffic data;
e) providing a task for the motor vehicle, in which the motor vehicle drives ahead of at least one further, simulated motor vehicle;
f) performing a simulation of the task in the simulation environment;
g) determining a traffic flow indicator for the task, wherein,
(i) modifying the at least one algorithm and/or the at least one rule set and repeating the task when the traffic flow indicator is below a threshold, or
(ii) classifying the task as successful when the traffic flow indicator is above the threshold.
In a first extended configuration, provision may be made for the device to be configured, when step g) (i) is carried out, to select a further task and to repeat the method for the further task.
In a further embodiment, it can be provided that the task is the travel of a route from at least one starting point to at least one destination point via a traffic hotspot.
In a further embodiment, it can be provided that the real-time traffic data contains infrastructure information.
In a further embodiment, the device can be designed to change the task by changing a parameter of the traffic situation in the simulated environment and to carry out the method for the modified task.
In a further embodiment, the device can be designed to use an optimization algorithm when reproducing the traffic situation in the simulated environment in order to minimize deviations between the simulated environment and the real-time traffic data.
In a further embodiment, it can be provided that the traffic flow indicator comprises an average speed during the task.
In a further embodiment, the device can be designed to train the algorithm by means of a reinforcement learning algorithm.
Another independent subject matter relates to a method for optimizing the traffic flow in an area, wherein at least one lead vehicle is used in order to regulate, by means of the lead vehicle, the driving behavior of motor vehicles driving behind it, wherein the at least one lead vehicle is an autonomously driven motor vehicle, and wherein the at least one lead vehicle is controlled by means of an algorithm which has been trained according to the method described above.
The traffic flow at a traffic hotspot can be regulated effectively by means of corresponding autonomously driven lead vehicles.
In a first extended configuration, it can be provided that the at least one lead vehicle obtains infrastructure information from an infrastructure control device, wherein the lead vehicle adapts its driving behavior to the received infrastructure information.
By using infrastructure information, such as information about traffic light switching, lane regulations or the like, the traffic flow can be regulated more efficiently, since the lead vehicle can adapt its driving style to the given situation, for example in order to pass as many traffic lights as possible while they are green.
Another independent subject matter relates to a lead vehicle for optimizing the traffic flow in an area by regulating the driving behavior of motor vehicles driving behind the lead vehicle, wherein the at least one lead vehicle is an autonomously driven motor vehicle, and wherein the at least one lead vehicle has an algorithm for controlling the lead vehicle, the algorithm having been trained according to the method described above.
In a further embodiment, it can be provided that the at least one lead vehicle has a device for receiving infrastructure information from an infrastructure control device, wherein the lead vehicle is provided for adapting its driving behavior to the received infrastructure information.
A further independent subject matter relates to a computer program product having a computer-readable storage medium on which instructions are embedded which, when executed by at least one computing unit, cause the at least one computing unit to carry out a method of the above-mentioned type.
The method can be carried out on one computing unit or distributed over a plurality of computing units, so that certain method steps are carried out on one computing unit and other method steps are carried out on at least one further computing unit, wherein the computed data can, if required, be transferred between the computing units.
Another independent subject matter relates to a motor vehicle having a computer program product of the above-mentioned type.
Drawings
Further features and details emerge from the following description, in which at least one exemplary embodiment is described in detail, if appropriate with reference to the drawings. Features described and/or shown in the drawings form the subject matter individually or in any meaningful combination, if appropriate also independently of the claims, and can in particular additionally be the subject matter of one or more separate applications. Identical, similar and/or functionally identical parts are provided with the same reference numerals. The drawings schematically show:
fig. 1 shows a top view of a motor vehicle;
FIG. 2 illustrates computer program product modules;
FIG. 3 shows a road map of a real-existing usage area, an
Fig. 4 shows a flow chart of a training method.
Detailed Description
Fig. 1 shows a motor vehicle 2 that is provided for automated or autonomous driving. The motor vehicle 2 is provided as a lead vehicle for regulating the traffic flow at a traffic hotspot.
The motor vehicle 2 has a controller 4 with a computing unit 6 and a memory 8. A computer program product, which is described in more detail below in connection with Figs. 2 to 4, is stored in the memory 8.
The controller 4 is connected to a series of environmental sensors that allow the current position of the motor vehicle 2 and the respective traffic situation to be detected. The environmental sensors include: environmental sensors 10, 11 at the front of the motor vehicle 2, environmental sensors 12, 13 at the rear of the motor vehicle 2, a camera 14 and a GPS module 15. The environmental sensors 10 to 13 can comprise, for example, radar sensors, lidar sensors and/or ultrasonic sensors.
Furthermore, sensors for detecting the state of the motor vehicle 2, in particular a wheel speed sensor 16 and an acceleration sensor 18, are provided, which are connected to the controller 4. The present state of the motor vehicle 2 can be reliably detected by means of these motor vehicle sensors.
Furthermore, a communication device 20 is provided, which is designed for wireless communication with remotely arranged computing devices or computing centers in order to obtain infrastructure information from them, i.e. information about traffic light phases, the traffic flow in a certain area, etc., and to use this information in the driving planning. The communication device 20 can transmit and receive data and uses common standards for this purpose, such as LTE, 3G, 4G or the like. Car2X communication also allows direct communication with active objects in the infrastructure, for example with communication-capable traffic light installations.
During operation of the motor vehicle 2, the computing unit 6 loads the computer program product stored in the memory 8 and executes it. On the basis of the algorithm and the input signals, the computing unit 6 decides on the control of the motor vehicle 2, which it implements by intervening in the steering device 22, the motor control device 24 and the brake 26, each of which is connected to the controller 4.
The data of the sensors 10 to 18 are continuously buffered in the memory 8 and discarded again after a predetermined period of time; until then, these environment data are available for further evaluation.
The algorithm has been trained according to the method described below.
Fig. 2 shows a computer program product 28 with computer program product modules 30.
The computer program product module 30 has a self-learning neural network 32 that trains an algorithm 34. The self-learning neural network 32 learns according to a reinforcement learning method, i.e. the neural network 32 attempts to obtain a reward by modifying the algorithm 34 so that its behavior improves with respect to one or more indicators or metrics (Maßstab), i.e. so that the algorithm 34 improves.
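
As a deliberately tiny stand-in for this reward-driven learning, the sketch below runs a single-state, value-based update over a few discrete target speeds, with the reward being the platoon speed returned by a toy model. The reward model, the candidate speeds and all constants are assumptions made purely for illustration; the patent itself only states that the network is rewarded for improving behavior with respect to the chosen metric.

```python
# Minimal value-based learning stand-in: pick a lead-vehicle target speed, observe a
# toy reward, and update the value estimate. All numbers are illustrative assumptions.

import random

TARGET_SPEEDS = [8.0, 10.0, 12.0, 14.0]        # m/s, candidate lead-vehicle speeds
q_values = {v: 0.0 for v in TARGET_SPEEDS}

def toy_reward(target_speed: float) -> float:
    """Toy reward: platoon speed grows with the target speed until following vehicles
    are forced to brake hard and the flow collapses (assumption, not from the patent)."""
    collapse_penalty = 3.0 * max(0.0, target_speed - 11.0)
    return target_speed - collapse_penalty + random.gauss(0.0, 0.3)

ALPHA, EPSILON = 0.1, 0.2
for _ in range(500):
    if random.random() < EPSILON:               # explore a random target speed
        action = random.choice(TARGET_SPEEDS)
    else:                                       # exploit the current best estimate
        action = max(q_values, key=q_values.get)
    reward = toy_reward(action)
    q_values[action] += ALPHA * (reward - q_values[action])

print(max(q_values, key=q_values.get))          # typically settles on 10.0 m/s
```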
In alternative embodiments, other known learning methods of supervised learning and unsupervised learning, and combinations of these learning methods, may also be used.
The algorithm 34 can essentially consist of a complex filter with a matrix of values, generally referred to by those skilled in the art as weights, which define a filter function. This filter function determines the behavior of the algorithm 34 as a function of the input variables, which in the present case are recorded by the sensors 10 to 18, and generates control signals for controlling the motor vehicle 2.
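
The following minimal sketch illustrates this view of the algorithm as a filter with weight matrices: a small feed-forward mapping from sensor-derived input variables to bounded control signals. The layer sizes, the input/output layout and the random weights are assumptions for illustration and not the patent's actual network.

```python
# Minimal "filter with weight matrices": sensor-derived inputs -> control signals.
# Sizes and weights are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden, n_outputs = 8, 16, 3          # e.g. distances/speeds -> steer, accelerate, brake

W1 = rng.normal(scale=0.1, size=(n_hidden, n_inputs))   # the weight matrices form the "filter"
W2 = rng.normal(scale=0.1, size=(n_outputs, n_hidden))

def control_signals(sensor_inputs: np.ndarray) -> np.ndarray:
    """Filter function: maps the recorded input variables to control signals."""
    hidden = np.tanh(W1 @ sensor_inputs)
    return np.tanh(W2 @ hidden)                   # bounded commands in [-1, 1]

x = rng.normal(size=n_inputs)                     # stand-in for sensor readings
print(control_signals(x))                         # e.g. [steering, acceleration, braking]
```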
The computer program product module 30 can be used not only in the motor vehicle 2 but also outside the motor vehicle 2. It is thus possible to train the computer program product module 30 not only in a real environment but also in a simulated environment. According to the teachings described herein, the training is started especially in a simulated environment, as this is safer than training in a real environment.
The computer program product module 30 is provided for formulating and evaluating the indicators that are to be improved.
In the present case, such an indicator is, for example, the average speed when passing through a traffic hotspot, i.e. an area in which the traffic flows particularly poorly, at least at certain times.
If the indicator is already above a certain threshold, for example if the speed of a platoon guided behind the motor vehicle 2 acting as lead vehicle is greater than a limit speed, the indicator can be considered satisfied and a further task is selected, with which the algorithm is trained further, or the algorithm is frozen with respect to this indicator. The algorithm can then either be optimized and trained further with respect to additional metrics or be tested in a real environment.
Fig. 3 shows a detail of the simulated environment 38 of a real-existing intersection 40.
The intersection 40 represents a traffic hotspot at which traffic flows particularly poorly during rush hour.
In order to improve the traffic flow, autonomously driven motor vehicles 2 are used whose driving style is optimized toward an optimal traffic flow, in the present case the highest possible average speed vd. The motor vehicle 2 acts as a lead vehicle that sets the driving behavior of the traffic participants travelling behind it, here the motorcycle 42 and the motor vehicles 44, 46. The vehicles 42, 44, 46 cannot drive faster than the motor vehicle 2.
However, the motor vehicle 2 not only regulates the speed of the following vehicles 42, 44 and 46, but also other aspects of their behavior, in particular the braking behavior of the respective vehicles 42, 44 and 46, for example by limiting the maximum deceleration of the motor vehicle 2, i.e. by driving particularly smoothly. This reduces the probability that the platoon of vehicles 42, 44 and 46 brakes too hard and the traffic flow thereby comes to a standstill.
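
A hedged illustration of this smooth-driving behavior is sketched below: the lead vehicle's speed planner clips the requested deceleration to an assumed comfort limit so that the platoon behind it is not forced into hard braking. The limit value and the function names are illustrative assumptions.

```python
# Illustrative deceleration limiting for the lead vehicle; the comfort limit is an assumption.

MAX_COMFORT_DECEL = 1.5   # m/s^2, assumed comfortable deceleration limit

def limited_speed_command(current_speed: float, desired_speed: float, dt: float) -> float:
    """Return the next speed set-point, never decelerating harder than the comfort limit."""
    required_decel = (current_speed - desired_speed) / dt
    if required_decel > MAX_COMFORT_DECEL:
        return current_speed - MAX_COMFORT_DECEL * dt    # brake gently instead
    return desired_speed

# Example: the planner requests a drop from 13 m/s to 8 m/s within one second
print(limited_speed_command(13.0, 8.0, 1.0))             # -> 11.5 m/s, not 8.0 m/s
```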
The motor vehicle 2 can communicate with the traffic infrastructure, here the traffic light 48, by means of its communication device. This makes it possible to adapt the driving speed to the respective traffic light phase so as to pass the traffic light, whenever possible, at the beginning of a green phase.
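
The adaptation of the approach speed to a communicated traffic-light phase can be illustrated as follows; the simple distance/time interface and the speed bounds are assumptions, since real Car2X messages (for example signal phase and timing broadcasts) carry considerably more detail.

```python
# Illustrative green-phase speed adaptation; interface and bounds are assumptions.

def green_phase_speed(distance_m: float, time_to_green_s: float,
                      v_max: float, v_min: float = 2.0) -> float:
    """Speed at which the vehicle reaches the stop line when the light turns green,
    clipped to an admissible range."""
    if time_to_green_s <= 0.0:
        return v_max                       # light is already green: proceed at the limit
    return min(v_max, max(v_min, distance_m / time_to_green_s))

# Example: 200 m to the traffic light, green in 20 s, 50 km/h speed limit
print(round(green_phase_speed(200.0, 20.0, 50 / 3.6), 1))   # -> 10.0 m/s (36 km/h)
```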
Fig. 4 shows a flow chart of the method.
First, computer program product modules are provided after the start. The computer program product module contains an algorithm to be trained and a self-learning neural network.
Subsequently, a simulated environment is provided on the basis of the real map data. The simulation environment may contain other traffic participants and their tasks in addition to the roads and the defined rules.
In another step, a task in the simulated environment is determined. As shown in connection with fig. 3, the task may be the travel of a determined route from a starting point to a target point via a traffic hotspot.
The simulation is then carried out and the average speed is determined. The average speed is compared with an average speed to be achieved, which serves as the traffic flow indicator. The driving behavior of the autonomously driven motor vehicle 2 using the respective algorithm 34 can thus be optimized by the reinforcement learning method in such a way that the autonomously driven motor vehicle 2 can be used as a lead vehicle.
If the corresponding indicator is achieved, the method can be repeated with other tasks, so that the algorithm becomes more universally applicable.
Although the subject matter has been illustrated and described in more detail by means of exemplary embodiments, the invention is not restricted to the disclosed examples, and other variants can be derived therefrom by a person skilled in the art. A large number of possible variants is therefore evident. It is likewise clear that the exemplary embodiments mentioned represent only examples, which are not to be construed in any way as limiting the scope of protection, the possible applications or the configuration of the invention. Rather, the preceding description and the drawings describe exemplary embodiments and enable a person skilled in the art to implement them, wherein, with knowledge of the teaching disclosed herein, various changes may be made to the function and arrangement of elements described in an exemplary embodiment without departing from the scope of protection defined by the appended claims and their legal equivalents, such as further explanations in the description.
List of reference numerals
2 Motor vehicle
4 controller
6 calculating unit
8 memory
10 environmental sensor
11 environmental sensor
12 environmental sensor
13 environmental sensor
14 camera
15 GPS module
16 wheel speed sensor
18 acceleration sensor
20 communication device
22 steering device
24 motor control device
26 brake
28 computer program product
30 computer program product module
32 neural network
34 algorithm
38 simulation environment
40 intersection
42 motorcycle
44 motor vehicle
46 Motor vehicle
48 traffic signal lamp
vd average speed

Claims (12)

1. Method for training at least one algorithm (34) for a controller (4) of a motor vehicle (2), wherein the controller (4) is provided for implementing an automated or autonomous driving function by intervening in units (22, 24, 26) of the motor vehicle (2) on the basis of input data using the at least one algorithm (34), wherein the algorithm (34) is trained by means of a self-learning neural network (32), comprising the following steps:
a) providing a computer program product module (30) for the automated or autonomous driving function, wherein the computer program product module (30) contains an algorithm (34) to be trained and the self-learning neural network (32),
b) providing a simulated environment (38) with simulation parameters, wherein the simulated environment (38) contains map data (38) of a real-existing usage area (40), wherein the behavior of the motor vehicle (2) is determined by a rule set,
c) providing real-time traffic data of the real-existing usage area (40) and reproducing the traffic situation in the simulated environment (38);
d) determining a traffic hotspot (40) from the real-time traffic data according to a traffic flow indicator (vd);
e) providing a task for the motor vehicle (2), in which the motor vehicle (2) drives ahead of at least one further, simulated motor vehicle (2);
f) performing a simulation of the task in the simulation environment (38);
g) determining a traffic flow indicator (vd) for the task, wherein,
(i) modifying the at least one algorithm (34) and/or the at least one rule set and repeating the task when the traffic flow indicator (vd) is below a threshold, or
(ii) classifying the task as successful when the traffic flow indicator (vd) is above the threshold.
2. The method of claim 1, wherein when step g) (i) is implemented, a further task is selected and the method is repeated for the further task.
3. The method according to claim 1 or 2, wherein the task is the travel of a route from at least one starting point to at least one destination point via a traffic hotspot (40).
4. The method of any preceding claim, wherein the real-time traffic data comprises infrastructure information.
5. The method of any preceding claim, wherein the task is varied by changing a parameter of traffic conditions in the simulated environment and the method is performed for the modified task.
6. The method of any preceding claim, wherein an optimization algorithm is used when reproducing the traffic situation in the simulated environment so as to minimize a deviation between the simulated environment and the real-time traffic data.
7. The method of any of the above claims, wherein the traffic flow indicator comprises an average speed during the task.
8. The method of any one of the preceding claims, wherein the algorithm is trained by means of a reinforcement learning algorithm.
9. A method for optimizing the traffic flow in an area, wherein at least one lead vehicle (2) is used in order to regulate, by means of the lead vehicle (2), the driving behavior of motor vehicles (42, 44, 46) driving behind the lead vehicle (2), wherein the at least one lead vehicle is an autonomously driven motor vehicle (2), and wherein the at least one lead vehicle (2) is controlled by means of an algorithm (34) which has been trained according to the method described above.
10. The method according to claim 9, wherein the at least one lead vehicle (2) obtains infrastructure information from an infrastructure control device, wherein the lead vehicle (2) adapts its driving behavior to the received infrastructure information.
11. A computer program product having a computer-readable storage medium (8) on which instructions are embedded which, when executed by at least one computing unit (6), cause the at least one computing unit (6) to carry out the method according to any one of claims 1 to 10.
12. A motor vehicle having a computer program product according to claim 11.
CN202180015176.2A 2020-02-17 2021-02-10 Method for training at least one algorithm for a controller of a motor vehicle, method for optimizing a traffic flow in a region, computer program product and motor vehicle Pending CN115136081A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
DE102020201931.2 2020-02-17
DE102020201931.2A DE102020201931A1 (en) 2020-02-17 2020-02-17 Method for training at least one algorithm for a control unit of a motor vehicle, method for optimizing a traffic flow in a region, computer program product and motor vehicle
PCT/EP2021/053181 WO2021165113A1 (en) 2020-02-17 2021-02-10 Method for training at least one algorithm for a control device of a motor vehicle, method for optimising traffic flow in a region, computer program product, and motor vehicle

Publications (1)

Publication Number Publication Date
CN115136081A (en) 2022-09-30

Family

ID=74591993

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180015176.2A Pending CN115136081A (en) 2020-02-17 2021-02-10 Method for training at least one algorithm for a controller of a motor vehicle, method for optimizing a traffic flow in a region, computer program product and motor vehicle

Country Status (4)

Country Link
EP (1) EP4107590A1 (en)
CN (1) CN115136081A (en)
DE (1) DE102020201931A1 (en)
WO (1) WO2021165113A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102022106338A1 (en) 2022-03-18 2023-09-21 Joynext Gmbh Adapting driving behavior of an autonomous vehicle
DE102022113744A1 (en) 2022-05-31 2023-11-30 ASFINAG Maut Service GmbH Method for creating a data set

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102017200180A1 (en) 2017-01-09 2018-07-12 Bayerische Motoren Werke Aktiengesellschaft Method and test unit for the motion prediction of road users in a passively operated vehicle function
DE102017212166A1 (en) 2017-07-17 2019-01-17 Audi Ag Method for operating start-stop systems in motor vehicles and communication system
DE102017007136A1 (en) 2017-07-27 2019-01-31 Opel Automobile Gmbh Method and device for training self-learning algorithms for an automated mobile vehicle
DE102017007468A1 (en) 2017-08-08 2018-04-19 Daimler Ag Method for operating a vehicle
DE102018216719A1 (en) * 2017-10-06 2019-04-11 Honda Motor Co., Ltd. Keyframe-based autonomous vehicle operation
CN109709956B (en) * 2018-12-26 2021-06-08 同济大学 Multi-objective optimized following algorithm for controlling speed of automatic driving vehicle

Also Published As

Publication number Publication date
DE102020201931A1 (en) 2021-08-19
WO2021165113A1 (en) 2021-08-26
EP4107590A1 (en) 2022-12-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination