CN113661111A - Control unit and method for the identification, classification and prediction of the interaction demand of an autonomous vehicle - Google Patents


Info

Publication number
CN113661111A
CN113661111A (application CN202080027899.XA)
Authority
CN
China
Prior art keywords
vehicle
driving situation
interaction
control unit
current driving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080027899.XA
Other languages
Chinese (zh)
Inventor
D·里特
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bayerische Motoren Werke AG
Original Assignee
Bayerische Motoren Werke AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bayerische Motoren Werke AG filed Critical Bayerische Motoren Werke AG
Publication of CN113661111A


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001Planning or execution of driving tasks
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001Planning or execution of driving tasks
    • B60W60/0013Planning or execution of driving tasks specially adapted for occupant comfort
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/0097Predicting future conditions
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2556/00Input parameters relating to data
    • B60W2556/45External transmission of data to or from the vehicle
    • B60W2556/50External transmission of data to or from the vehicle for navigation systems

Abstract

The invention relates to a control unit (121) for operating an autonomous vehicle (100), the control unit being arranged to: detect a current driving situation of the vehicle (100) in which there is a demand for interaction between the vehicle (100) and a vehicle-external unit (110), and assign one of a plurality of different interaction categories to the interaction demand of the current driving situation. An interaction with the vehicle-external unit (110) relating to the current driving situation can then be performed according to the assigned interaction category. Furthermore, the control unit (121) is arranged to predict a possible driving situation of the autonomous vehicle that may occur within a future period and in which there is a demand for interaction between the vehicle (100) and a vehicle-external unit (110). One or more measures can then be initiated to avoid the possible driving situation and/or to change the interaction demand of the possible driving situation.

Description

Control unit and method for the identification, classification and prediction of the interaction demand of an autonomous vehicle
Technical Field
The present invention relates to a method and a corresponding control unit for supporting an autonomous vehicle in responding to specific driving situations.
Background
An autonomous or autonomously driven vehicle may encounter a driving situation in which safe and reliable autonomous driving operation is in principle possible and in which all subsystems of the autonomous vehicle are working correctly, but which nevertheless obstructs the driving operation of the vehicle. For example, an autonomous vehicle may become trapped behind an obstacle (e.g., a parked delivery vehicle) in the lane currently being traveled, because a traffic rule would have to be violated in order to pass the obstacle (e.g., because a solid line would have to be crossed in order to move to another lane). Such a driving situation can leave the autonomous vehicle blocked for a long time, which impairs the reliability and comfort of the autonomous vehicle.
Disclosure of Invention
The present application relates to the technical objective of improving the reliability and/or comfort of operation of an autonomous vehicle.
This object is achieved by each of the independent claims. Advantageous embodiments are specified in particular in the dependent claims. It is noted that additional features of a patent claim dependent on an independent patent claim may, without the features of the independent patent claim or in combination with only a subset of its features, form a separate invention that is independent of the combination of all features of the independent patent claim and that may be made the subject of an independent claim, a divisional application or a subsequent application. The same applies to technical teachings described in the description that may form an invention independent of the features of the independent patent claims.
According to one aspect, a control unit for the operation of an autonomous vehicle, in particular a motor vehicle, is specified. The vehicle may have a degree of automation according to SAE level 3 or higher (in particular according to SAE level 4 or higher).
The control unit may be arranged to detect an (actual) current driving situation of the vehicle in which there is a demand for interaction between the vehicle and a vehicle-external unit. In other words, an actual current driving situation with an interaction demand of the vehicle may be detected. The interaction demand may include an interaction of the vehicle with a person at the vehicle-external unit (e.g., with a remote operator).
In this case, the (actual) current driving situation may lead to an at least temporary blockage of the vehicle (so that the vehicle cannot continue to travel). Alternatively or additionally, the (actual) current driving situation may be a situation that could be resolved by violating a traffic rule (wherein the violation may not be performed and/or initiated by the vehicle on its own). Alternatively or additionally, the (actual) current driving situation may be a situation that does not cause an error message of a (in particular any) subsystem of the autonomous vehicle. On the other hand, the (actual) current driving situation may also comprise an accident and/or a technical fault of the autonomous vehicle.
Furthermore, the control unit may be configured to assign one of a plurality of different interaction categories to the interaction demand of the (actual) current driving situation. In other words, the interaction demand and/or the current driving situation may be classified. The different interaction categories may require interaction with at least partially different vehicle-external units. In the course of the classification, a specific vehicle-external unit (for example a specific server, possibly with a specific type of contact person) can thus be selected from a plurality of different vehicle-external units (for example for remote operation of the vehicle, for a vehicle service, etc.). Alternatively or additionally, the different interaction categories may require at least partially different data to be transmitted to the vehicle-external unit. In the course of the classification, it can thus be determined which data have to be exchanged with and/or transmitted to the vehicle-external unit (for example which sensor data are relevant to the current driving situation).
Furthermore, the control unit may be configured to perform an interaction with the vehicle-external unit relating to the current driving situation according to the assigned interaction category. The control unit may in particular be arranged to perform the interaction with the vehicle-external unit selected for the assigned interaction category and/or to perform the interaction using the data required for the assigned interaction category. In this way, the actual current driving situation of the autonomous vehicle can be resolved efficiently and reliably.
Furthermore, the control unit may be configured to predict a possible driving situation of the autonomous vehicle that may occur within a future period and in which there is a demand for interaction between the vehicle and a vehicle-external unit. In other words, it may be predicted in advance, even before a driving situation with interaction demand occurs, that the autonomous vehicle will enter such a driving situation within a certain future period of time.
Like the (actual) current driving situation, the (predicted) possible driving situation may result in an at least temporary blockage of the vehicle. Alternatively or additionally, the possible driving situation may be resolvable by violating a traffic rule. Alternatively or additionally, the possible driving situation may be a situation in which there is no error message of a (in particular any) subsystem of the autonomous vehicle. On the other hand, the possible driving situation may also comprise an accident and/or a technical fault of the autonomous vehicle.
The control unit may then (i.e. when a future possible driving situation with interactive demand is identified) be arranged to: one or more measures are initiated to avoid the possible driving situation and/or to change the interaction requirement in the context of the possible driving situation, in particular to reduce the interaction requirement in the context of the possible driving situation. The one or more measures may include, for example: adjusting a driving strategy of the autonomous vehicle; causing a lane change of the autonomous vehicle; and/or to initiate interaction with a unit external to the vehicle before a possible driving situation occurs.
The control unit can improve the comfort and reliability of the operation of the autonomous vehicle.
The control unit may be arranged to detect the current driving situation based on one or more machine learning models, in particular based on one or more learned neural networks. Alternatively or additionally, the control unit may be arranged to determine the interaction class based on one or more machine learning models, in particular based on one or more learned neural networks. Alternatively or additionally, the control unit may be arranged to predict the likely driving situation based on one or more machine learning models, in particular based on one or more learned neural networks. The machine learning models may each learn in advance for a particular task. The measures described in this document can be implemented accurately and efficiently through the use of machine learning models.
The control unit may be arranged to predict the possible driving situation by means of at least one machine-learned predictor (comprising one or more models or neural networks). The predictor may be trained on the basis of data relating to the (detected) current driving situation and/or on the basis of data relating to the interaction category assigned to the interaction demand of the current driving situation. The training data for the predictor may at least partly comprise sensor data collected in the context of the (actual) current driving situation. This can further improve the reliability and comfort of the operation of the autonomous vehicle.
The vehicle may include one or more ambient sensors (e.g., cameras, radar sensors, lidar sensors, etc.) configured to determine ambient data relating to the surroundings of the vehicle. The control unit may be arranged to detect the current driving situation, determine the interaction category and/or predict the possible driving situation based on the ambient data.
Alternatively or additionally, the vehicle may comprise a position sensor arranged to determine position data relating to the position of the vehicle. The control unit may be arranged to detect a current driving situation, determine an interaction class and/or predict a likely driving situation based on the location data and based on digital map information relating to a road network on which the vehicle is travelling.
Alternatively or additionally, the vehicle may comprise one or more vehicle sensors arranged to determine vehicle data relating to at least one state variable of the vehicle (e.g. the driving speed). The control unit may be arranged to detect a current driving situation, determine an interaction category and/or predict a likely driving situation based on the vehicle data.
The control unit may in particular be arranged to determine feature values of the plurality of features on the basis of ambient data of one or more ambient sensors of the vehicle, of vehicle data of one or more vehicle sensors of the vehicle, of location data of a location sensor of the vehicle, of traffic data relating to traffic in a road network on which the vehicle is travelling and/or of digital map information relating to the road network. Furthermore, the control unit may be arranged to detect a current driving situation, determine an interaction class and/or predict a likely driving situation based on the characteristic values and by means of a machine learning model.
The comfort and reliability of an autonomous vehicle can be improved in a particularly robust manner by using sensor data of one or more different sensors of the vehicle.
According to a further aspect, a (road) motor vehicle (in particular a passenger car or a truck or a bus) comprising a control unit as described in this document is specified.
According to another aspect, a (computer-implemented) method for operating an autonomous vehicle is described. The method comprises detecting an (actual) current driving situation of the vehicle in which there is a demand for interaction between the vehicle and a vehicle-external unit. In addition, the method includes assigning one of a plurality of different interaction categories to the interaction demand of the current driving situation. Furthermore, the method comprises interacting with the vehicle-external unit with respect to the current driving situation according to the assigned interaction category. In addition, the method includes predicting a possible driving situation that may occur for the autonomous vehicle over a future period, in which there is a demand for interaction between the vehicle and a vehicle-external unit. Furthermore, the method comprises performing one or more measures to avoid the possible driving situation and/or to change, in particular to reduce, the interaction demand of the possible driving situation.
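The sequence of method steps can be sketched as follows; the interaction categories, data fields and decision rules below are purely illustrative assumptions, since the claims deliberately leave them open:

```python
from dataclasses import dataclass
from enum import Enum, auto

class InteractionCategory(Enum):
    """Hypothetical interaction categories; the claims only require
    'a plurality of different interaction categories'."""
    REMOTE_OPERATION = auto()   # e.g., a remote operator takes over
    SERVICE_CALL = auto()       # e.g., a vehicle service is contacted

@dataclass
class DrivingSituation:
    blocked: bool                 # vehicle at least temporarily blocked
    rule_violation_needed: bool   # resolvable only by violating a traffic rule

def detect_interaction_demand(s: DrivingSituation) -> bool:
    """Step 1: detect a current driving situation with interaction demand."""
    return s.blocked or s.rule_violation_needed

def classify_interaction_demand(s: DrivingSituation) -> InteractionCategory:
    """Step 2: assign one of several interaction categories (toy rule)."""
    if s.rule_violation_needed:
        return InteractionCategory.REMOTE_OPERATION
    return InteractionCategory.SERVICE_CALL

def handle_situation(s: DrivingSituation):
    """Steps 1-3 of the method: detect, classify, then interact with the
    vehicle-external unit selected for the assigned category."""
    if not detect_interaction_demand(s):
        return None
    category = classify_interaction_demand(s)
    # Step 3: contact the external unit associated with `category` ...
    return category
```

In a real system the detection and classification steps would of course be learned models rather than fixed rules, as the description explains below.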
According to another aspect, a software program is described. The software program may be arranged to run on a processor, e.g. on a control unit of a vehicle, to perform the methods described in this document.
According to another aspect, a storage medium is described. The storage medium may comprise a software program arranged to run on a processor so as to perform the method described herein.
In the context of this document, the term "autonomous driving" may be understood as driving with automated longitudinal or lateral control, or autonomous driving with automated longitudinal and lateral control. Autonomous driving may, for example, involve driving for a longer time on a highway or driving for a limited time while parking or maneuvering. The term "autonomous driving" includes automated driving with any degree of automation. Exemplary degrees of automation are assisted, partially automated, highly automated and fully automated driving. These degrees of automation were defined by the German Federal Highway Research Institute (BASt) (see the BASt publication "Forschung kompakt", edition 11/2012). In assisted driving, the driver continuously performs longitudinal or lateral control, while the system takes over the respective other function within certain limits. In partially automated driving (TAF), the system takes over longitudinal and lateral control for a certain period of time and/or in specific situations, wherein the driver has to monitor the system continuously, as in assisted driving. In highly automated driving (HAF), the system takes over longitudinal and lateral control for a certain period of time without the driver having to monitor the system continuously; the driver must, however, be able to take over vehicle control within a certain time. In fully automated driving (VAF), the system can manage driving automatically in all situations for a specific application; a driver is no longer required for this application. These four degrees of automation correspond to SAE levels 1 to 4 of the SAE J3016 standard (SAE - Society of Automotive Engineers). For example, highly automated driving (HAF) corresponds to level 3 of the SAE J3016 standard. In addition, SAE J3016 also specifies SAE level 5 as the highest degree of automation, which is not included in the BASt definitions.
SAE level 5 corresponds to driverless operation, in which the system can handle all situations automatically like a human driver throughout the entire journey; a driver is generally no longer required. The aspects described in this document relate in particular to vehicles complying with SAE level 3 and higher.
It is noted that the methods, devices, and systems described herein can be used alone or in combination with other methods, devices, and systems described herein. Moreover, any aspects of the methods, apparatus and systems described herein may be combined with one another in a variety of ways. In particular the features of the claims can be combined with each other in a number of ways.
Drawings
The invention is explained in more detail below with the aid of embodiments. In the drawings:
FIG. 1a illustrates an exemplary driving scenario of an autonomous vehicle;
FIG. 1b illustrates exemplary components of a vehicle;
FIG. 2a illustrates an exemplary neural network;
FIG. 2b shows an exemplary neuron; and
FIG. 3 shows a flow chart of an exemplary method for operating an autonomous vehicle.
Detailed Description
As mentioned at the outset, the present application relates to the technical object of improving the comfort and/or reliability of an autonomous vehicle. In this regard, fig. 1a illustrates an exemplary driving situation of an autonomous vehicle 100 traveling on a first lane 101 of a multi-lane road and obstructed by an obstacle 104 (e.g., a parked vehicle).
To resolve this situation, the autonomous vehicle 100 would have to move into the adjacent second lane 102 (shown by the curved arrow), which, however, in the example shown is separated from the first lane 101 by a solid line 103. Because a traffic rule would have to be violated in order to change lanes, the autonomous vehicle 100 may stop behind the obstacle 104 and remain blocked.
FIG. 1b illustrates exemplary components of the vehicle 100. The vehicle 100 comprises one or more ambient sensors 122 (e.g. cameras, radar sensors, lidar sensors, ultrasonic sensors, microphones, etc.) arranged for acquiring sensor data (also referred to as ambient data in this application) related to the surroundings of the vehicle 100. In addition, the vehicle 100 also comprises a position sensor 123 arranged to determine position data (e.g. GPS coordinates) related to the current position of the vehicle 100. The location data may be used in conjunction with digital map information about the road network on which the vehicle 100 is traveling to determine the exact location of the vehicle 100 within the road network. Furthermore, the vehicle 100 may also include one or more vehicle sensors 124 arranged to determine sensor data (also referred to herein as vehicle data) related to state variables of the vehicle 100. Exemplary state variables are the travel speed of the vehicle, the yaw rate of the vehicle, and the like.
The vehicle 100 comprises a control unit 121 arranged for automated longitudinal and/or lateral control of the vehicle 100 based on the surroundings data, the vehicle data, the position data and/or the digital map information. Furthermore, the control unit 121 is arranged to identify, based on the above data, a driving situation that requires interaction with the vehicle-external unit 110 (in particular with a person at the vehicle-external unit 110) in order to be resolved. The control unit 121 may thus be arranged to identify the interaction demand in a first step.
Furthermore, the control unit 121 may be arranged to classify the interaction requirements. For example, a plurality of interaction categories can be defined, wherein different interaction categories are each associated with different interaction partners or different external units 110, if appropriate.
The vehicle 100 may comprise a communication unit 125 arranged to communicate with one or more external units 110 over a (wireless) communication link 111 (e.g. WLAN, 3G, 4G, 5G, etc.). In particular, a message that there is an interactive need to cope with the current driving situation may be sent to the external unit 110 through the communication unit 125. The external unit 110 may then exchange data with the vehicle 100 to cope with the driving situation. For example, remote control of the vehicle 100 may be initiated by the external unit 110 (e.g., by a person at the external unit 110) to address current driving conditions.
Furthermore, the control unit 121 may also be arranged to use the data collected in the context of the detected driving situation for training a (machine-learned) predictor, in order to predict or anticipate future driving situations with possible interaction demand. The machine-learned predictor may in particular make it possible to recognize early that the vehicle 100 is about to enter a driving situation requiring interaction. This information may then be used to adjust the driving strategy of the autonomous vehicle 100 in order to avoid the driving situation with interaction demand. This can improve the reliability of the autonomous vehicle 100.
The identification of a driving situation with interaction demand, the classification of the interaction demand and/or the prediction of a driving situation with interaction demand may each be implemented by means of a learned neural network.
Fig. 2a shows an exemplary neural network 200, in particular a feed forward network. In the example shown, the network 200 comprises two input neurons or input nodes 202 which receive, at a particular point in time, the current values of the measured variables or of the characteristics, respectively, as input values 201. Exemplary input values 201 are vehicle data, ambient data, location data, and/or digital map information or data derived therefrom (in particular values of one or more features derived therefrom). One or more input nodes 202 are part of input layer 211.
In addition, the neural network 200 includes neurons 220 in one or more hidden layers 212 of the neural network 200. Each neuron 220 takes as input values the respective output values of the neurons of the preceding layer 211, 212. In each neuron 220, processing is performed in order to determine an output value of the neuron 220 from its input values. The output values of the neurons 220 of the last hidden layer 212 may be processed in the output neurons or output nodes 220 of the output layer 213 in order to determine the output values 203 of the neural network 200. Exemplary output values 203 of the neural network 200 indicate, for example, whether a driving situation with interaction demand exists, which interaction category exists and/or whether the vehicle 100 is about to enter a driving situation with interaction demand.
Fig. 2b shows exemplary signal processing within a neuron 220, in particular within a neuron 220 of the one or more hidden layers 212 and/or of the output layer 213. The input values 221 of the neuron 220 are weighted with individual weights 222 in order to determine a weighted sum 224 of the input values 221 in a summation unit 223 (taking into account a bias or offset 230, if applicable). The weighted sum 224 may be mapped to an output value 226 of the neuron 220 by an activation function 225, whereby the value range can be limited, for example. As activation function 225 of the neuron 220, for example a sigmoid function, a hyperbolic tangent (tanh) function or a rectified linear unit (ReLU), e.g. f(x) = max(0, x), may be used. If applicable, the weighted sum 224 may be shifted by the offset 230 before the activation function 225 is applied.
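The per-neuron processing just described (weighted sum with offset, followed by an activation function) can be sketched as follows; the function name and parameters are illustrative, not taken from the patent:

```python
import math

def neuron_output(inputs, weights, bias, activation="relu"):
    """Processing of one neuron 220: weighted sum 224 of the input
    values 221 with weights 222, shifted by the offset 230, then
    mapped through the activation function 225."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias  # weighted sum
    if activation == "relu":
        return max(0.0, s)                 # f(x) = max(0, x)
    if activation == "sigmoid":
        return 1.0 / (1.0 + math.exp(-s))  # limits output to (0, 1)
    if activation == "tanh":
        return math.tanh(s)                # limits output to (-1, 1)
    raise ValueError(f"unknown activation: {activation}")
```

With weights [0.5, -1.0], bias 0.25 and inputs [1.0, 2.0], the weighted sum is -1.25, which the ReLU clamps to 0.0.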
Thus, the neuron 220 has a weight 222 and/or an offset 230 as neuron parameters. The neuron parameters of the neurons 220 of the neural network 200 may be learned in a training phase in order to cause the neural network 200 to perform specific functions, such as identification of driving situations with interactive demands, classification of interactive demands and/or prediction of upcoming driving situations with interactive demands.
For example, the neural network 200 may be trained by means of a backpropagation algorithm. To this end, in a first phase of a q-th epoch of the learning algorithm, the respective output values 203 at the outputs of the one or more output neurons 220 are determined for the input values 201 at the one or more input nodes 202 of the neural network 200. The input values 201 may be taken from training data (i.e., actual vehicle data, ambient data, position data and/or digital map information) that also indicate the corresponding target output values (i.e., the presence or absence of a driving situation with interaction demand, the interaction category of the interaction demand and/or the presence or absence of a future driving situation with interaction demand). The actual output values determined or predicted by the neural network 200 may be compared with the target output values from the training data in order to determine the value of an optimization function.
In a second phase of the q-th epoch of the learning algorithm, the error is propagated back from the output to the input of the neural network in order to adjust the neuron parameters of the neurons 220 layer by layer. To this end, the optimization function may be partially differentiated with respect to each individual neuron parameter of the neural network 200 in order to determine the extent to which each individual neuron parameter is to be adjusted. The learning algorithm may be repeated iteratively over a plurality of epochs until a predefined convergence criterion is reached. At least partially different training data may be used in different epochs.
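The two phases of one epoch (forward pass, then backpropagation with gradient-descent parameter updates) can be sketched for a tiny network of the kind shown in Fig. 2a; the layer sizes, learning rate and squared-error optimization function are illustrative assumptions:

```python
import numpy as np

# Tiny network: 2 input nodes -> 3 hidden neurons (tanh) -> 1 output (sigmoid).
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 1.0, (2, 3)); b1 = np.zeros(3)   # input -> hidden
W2 = rng.normal(0.0, 1.0, (3, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = np.tanh(X @ W1 + b1)     # hidden activations (neurons 220)
    y = sigmoid(h @ W2 + b2)     # output values 203 in (0, 1)
    return h, y

def train_epoch(X, t, lr=0.1):
    """One epoch: forward pass, then backpropagation of the error to
    adjust the weights 222 and offsets 230 layer by layer."""
    global W1, b1, W2, b2
    h, y = forward(X)
    loss = 0.5 * float(np.sum((y - t) ** 2))  # squared-error optimization fn
    dz2 = (y - t) * y * (1.0 - y)             # gradient through sigmoid output
    dW2 = h.T @ dz2
    db2 = dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * (1.0 - h ** 2)       # gradient through tanh hidden layer
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2            # gradient-descent updates
    W1 -= lr * dW1; b1 -= lr * db1
    return loss
```

Running `train_epoch` repeatedly on labeled feature vectors reduces the value of the optimization function until a convergence criterion is met.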
Thus, a system for the autonomous driving of a vehicle 100 is described that answers the following questions: 1) Does the vehicle 100 need assistance, support and/or interaction with an external unit 110 and/or a person? 2) Which form or category of assistance, support and/or interaction is needed? And/or 3) How likely is it that the vehicle 100 will need assistance, support and/or interaction with an external unit 110 and/or a person within an upcoming period of time?
This describes in particular a three-stage system: 1. detecting an interaction demand (e.g., for remote operation of the vehicle 100 or another (remote) service interaction); 2. classifying the interaction demand for the targeted triggering of, or access to, one external unit 110 out of a plurality of different external units 110 (e.g., triggering a remote operator, a service, a towing service or an authority); 3. predicting future interaction demand in order to avoid future problems or to resolve arising problems more quickly.
The detection of the interaction demand may be implemented by means of anomaly detection with different inputs or input values 201. This has the advantage that the system can, if necessary, be trained without or with relatively few problem cases (in contrast to other forms of machine learning, which typically require relatively large amounts of training data, including for error cases).
The classification of the interaction demand may be provided by a further trained model. Here, the trigger of the identified driving situation with interaction demand can be used as an input value 201. Furthermore, the input values 201 of the model for identifying driving situations with interaction demand may also be used in the model for classifying the interaction demand.
In the model for predicting driving situations with interaction demand, higher-level data from the model for identifying driving situations with interaction demand and/or from the model for classifying the interaction demand can be used.
The three phases or steps mentioned above may each be implemented as a cascade of (machine learning) models. Each sub-model explicitly handles a specific task. The output 203 of one sub-model may be an input value 201 (or feature) of another sub-model. Exemplary input values 201 for the model for identifying driving situations with interaction demand are:
image data of a camera of the vehicle 100;
object classifications of one or more objects 104 in the surroundings of the vehicle 100;
duration of vehicle 100 stationary;
the number of times the vehicle 100 has been overtaken;
identification of horn signals;
increased attention from pedestrians;
recognition of gestures of an occupant of the vehicle 100;
occupant condition (e.g., stress) of an occupant of the vehicle 100; this information may be collected by an interior sensor (e.g., an interior camera) of the vehicle 100;
the location and/or time of day;
history of current driving conditions; and/or
The state of the vehicle 100.
The individual input values 201 and/or features may be modeled using probability density functions, for example with a multi-dimensional probability density function over multiple features and/or with individual probability density functions over single features. On this basis, anomaly detection can then be performed in order to identify, as an initial trigger, a driving situation in which there is a demand for interaction (with a person).
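One common way to realize such density-based anomaly detection is to fit a multivariate normal density to feature vectors from unproblematic driving and to flag feature vectors whose density falls below a threshold. The Gaussian model and the threshold choice below are assumptions for illustration, since the patent does not fix the form of the density:

```python
import numpy as np

def fit_density(X):
    """Fit a multi-dimensional (Gaussian) probability density to the rows
    of X, i.e., to feature vectors collected during normal driving."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized
    return mu, cov

def log_density(x, mu, cov):
    """Log of the multivariate normal density at feature vector x."""
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    maha = d @ np.linalg.inv(cov) @ d          # squared Mahalanobis distance
    return -0.5 * (maha + logdet + len(mu) * np.log(2.0 * np.pi))

def is_interaction_trigger(x, mu, cov, threshold):
    """Anomaly (initial trigger): the current feature vector is too
    unlikely under the density of normal driving."""
    return log_density(x, mu, cov) < threshold
```

A threshold could, for example, be set at a low percentile of the log densities observed on the training data, so that only clearly atypical feature vectors trigger the interaction pipeline.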
In order to classify the interaction demand, the image data of a camera of the vehicle 100 and the fact that a driving situation with interaction demand has been identified can be used as input values 201. The type or category of the driving situation (e.g., an accident of the vehicle 100, a parked transport vehicle, people and/or animals on the lane, etc.) may then be identified. Communication with the identified external unit 110 may subsequently be initiated. In this context, targeted information about the current driving situation can be transmitted to the external unit 110.
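The targeted communication that follows the classification can be thought of as a lookup from situation category to external unit 110 and payload. The unit names and payload fields in the following sketch are hypothetical assumptions, not defined by the patent.

```python
# Hypothetical mapping from classified driving situation to the external
# unit 110 to be contacted and the targeted data to transmit. All unit names
# and payload fields are illustrative assumptions.
ROUTING = {
    "accident":         ("emergency_services",   ["position", "camera_snapshot", "occupant_status"]),
    "parked_transport": ("teleoperation_center", ["position", "object_list"]),
    "animal_on_lane":   ("teleoperation_center", ["position", "camera_snapshot"]),
}

def initiate_communication(category):
    """Select the external unit and the targeted data for the given category."""
    unit, fields = ROUTING.get(category, ("backend_server", ["position"]))
    # A real system would now open a channel to the unit and send the data.
    return {"unit": unit, "data": fields}

msg = initiate_communication("accident")
print(msg["unit"])  # → emergency_services
```

Keeping the category-to-unit mapping declarative makes it easy to audit which data leave the vehicle for each class of driving situation.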
A predictor for predicting driving situations with interaction demand that have not yet occurred may be run in parallel with the phases described above. Here, the identified driving situations with interaction demand and/or the interaction categories of the identified driving situations may be used for further learning of the predictor. A predicted (possible) driving situation with interaction demand may then be used to adjust the driving strategy of the vehicle 100 so that the predicted driving situation does not actually occur.
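The further learning of the predictor from identified driving situations can be sketched as online logistic regression: each confirmed situation (feature vector plus label) yields one gradient step. The features, learning rate, and labels below are illustrative assumptions; in practice a learned neural network 200 could take the same role.

```python
import numpy as np

class InteractionDemandPredictor:
    """Hypothetical online predictor: logistic regression trained by SGD."""

    def __init__(self, n_features, lr=0.5):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        """Probability that a driving situation with interaction demand occurs."""
        return 1.0 / (1.0 + np.exp(-(self.w @ np.asarray(x) + self.b)))

    def update(self, x, y):
        """Further learning: one SGD step; y=1 if the situation actually occurred."""
        err = self.predict_proba(x) - y
        self.w -= self.lr * err * np.asarray(x)
        self.b -= self.lr * err

pred = InteractionDemandPredictor(n_features=2)
# Feedback loop: each identified driving situation (and its label) is fed
# back into the predictor -- feature values are illustrative.
history = [([0.1, 0.0], 0), ([0.9, 1.0], 1), ([0.2, 0.1], 0), ([0.8, 0.9], 1)]
for _ in range(50):
    for x, y in history:
        pred.update(x, y)

if pred.predict_proba([0.85, 0.95]) > 0.5:
    pass  # adjust the driving strategy early to avoid the predicted situation
```

Because the update is incremental, the predictor can keep learning on the vehicle 100 and/or on a backend server without retraining from scratch.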
By combining the different phases or steps, the reliability of the autonomous vehicle 100 can be improved to a certain extent. The individual machine learning models (in particular, the neural networks 200) may be executed locally on the vehicle 100 and/or on a backend server.
FIG. 3 shows a flow chart of an exemplary method 300 for operating the autonomous vehicle 100. The method 300 may be performed by the control unit 121 of the vehicle 100. The method 300 comprises detecting 301 an (actual) current driving situation of the vehicle 100 in which there is a need for interaction of the vehicle 100 with a vehicle external unit 110, in particular with a human agent at the vehicle external unit 110.
Furthermore, the method 300 comprises assigning 302 one of a plurality of different interaction categories to the interaction demand of the current driving situation. In particular, it can be determined with which vehicle external unit 110 of a plurality of different vehicle external units 110 there is a need to interact. Alternatively or additionally, it can be determined which data should be transmitted in the context of the interaction.
Furthermore, the method 300 comprises performing 303, according to the assigned interaction category, an interaction with the vehicle external unit 110 with respect to the current driving situation. In particular, the vehicle external unit 110 associated with the assigned interaction category may be contacted, and/or the data associated with the assigned interaction category may be transmitted. In the context of the interaction, the driving situation (which may, for example, lead to a blockage and/or standstill of the vehicle 100) can be resolved so that the vehicle 100 can continue driving.
Furthermore, the method 300 comprises predicting 304 a possible driving situation of the autonomous vehicle 100 that may occur within a future period and in which there is a need for interaction of the vehicle 100 with a vehicle external unit 110. It can thus be checked in advance whether the vehicle 100 will enter a possible driving situation with interaction demand. For this purpose, a machine-learned predictor (which optionally learns, or has learned, based on data of one or more actual current driving situations with interaction demand) may be used.
Furthermore, the method 300 comprises performing 305 one or more measures to avoid the possible driving situation and/or to change the interaction demand of the possible driving situation, in particular to reduce the interaction demand of the possible driving situation (e.g., to reduce the time required for the interaction). For example, the driving strategy of the vehicle 100 may be adjusted at an early stage, and/or the interaction with the vehicle external unit 110 may be initiated before the possible driving situation occurs.
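The steps 301 to 305 can be outlined as a single control loop. Every callable in the following sketch is a placeholder for the corresponding vehicle function, not a real API.

```python
# Outline of method 300; each argument is a stub for the real vehicle function.
def method_300(detect, classify, interact, predict, mitigate):
    situation = detect()                      # 301: detect current driving situation
    if situation is not None:
        category = classify(situation)        # 302: assign an interaction category
        interact(situation, category)         # 303: interact per assigned category
    likely = predict()                        # 304: predict a possible situation
    if likely is not None:
        mitigate(likely)                      # 305: avoid it / reduce its demand

# Dry run with stub callables:
log = []
method_300(
    detect=lambda: "blocked_lane",
    classify=lambda s: "teleoperation",
    interact=lambda s, c: log.append(("interact", s, c)),
    predict=lambda: "construction_zone",
    mitigate=lambda s: log.append(("mitigate", s)),
)
print(log)  # → [('interact', 'blocked_lane', 'teleoperation'), ('mitigate', 'construction_zone')]
```

Passing the stages in as callables keeps the loop testable: the same skeleton runs against stubs here and against the learned models in the vehicle.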
Overall, the reliability and comfort of the autonomous vehicle 100 with respect to driving situations with interactive demands can thereby be increased.
The invention is not limited to the embodiments shown. It should be expressly noted that the description and drawings are only intended to illustrate the principles of the proposed method, apparatus and system.

Claims (12)

1. A control unit (121) for operation of an autonomous vehicle (100); wherein the control unit (121) is arranged to:
- detect a current driving situation of the vehicle (100) in which there is a need for interaction of the vehicle (100) with a vehicle external unit (110);
- assign one of a plurality of different interaction categories to the interaction demand of the current driving situation;
- perform, according to the assigned interaction category, an interaction with the vehicle external unit (110) with respect to the current driving situation;
- predict a likely driving situation of the autonomous vehicle (100) that is likely to occur within a future period and in which there is a need for interaction of the vehicle (100) with a vehicle external unit (110); and
- initiate one or more measures to avoid said likely driving situation and/or to change the interaction demand of said likely driving situation, in particular to reduce the interaction demand of said likely driving situation.
2. The control unit (121) according to claim 1, wherein the control unit (121) is arranged to:
- detect the current driving situation based on one or more machine learning models, in particular based on one or more learned neural networks (200); and/or
- determine the interaction category based on one or more machine learning models, in particular based on one or more learned neural networks (200); and/or
- predict the likely driving situation based on one or more machine learning models, in particular based on one or more learned neural networks (200).
3. The control unit (121) according to any one of the preceding claims, wherein the control unit (121) is arranged to:
- predict the likely driving situation by means of at least one machine-learned predictor; and
- perform learning of the predictor based on data relating to the current driving situation and/or based on data relating to the interaction category assigned to the interaction demand of the current driving situation.
4. The control unit (121) according to any one of the preceding claims, wherein the interaction demand comprises an interaction with a person at the vehicle external unit (110).
5. The control unit (121) according to any one of the preceding claims, wherein the current driving situation and/or the likely driving situation
- causes an at least temporary blockage of the vehicle (100);
- can be resolved by violating traffic rules;
- comprises an error message of a subsystem of the vehicle (100) that does not effect the autonomous driving; and/or
- comprises an accident and/or a technical malfunction of the autonomously driving vehicle (100).
6. The control unit (121) according to any one of the preceding claims, wherein the one or more measures comprise:
- adjusting a driving strategy of the autonomous vehicle (100);
- a lane change of the autonomously driving vehicle (100); and/or
- initiating an interaction with the vehicle external unit (110) before the likely driving situation occurs.
7. The control unit (121) according to any one of the preceding claims, wherein
- the plurality of different interaction categories require interaction with at least partially different vehicle external units (110); and/or
- the plurality of different interaction categories require at least partially different data to be transmitted to the vehicle external unit (110); and
- the control unit (121) is arranged to interact, with respect to the current driving situation, with the vehicle external unit (110) required by the assigned interaction category, and/or to use, in the interaction with respect to the current driving situation, the data required by the assigned interaction category.
8. The control unit (121) according to any one of the preceding claims, wherein the control unit (121) is arranged to:
- determine feature values (201) of a plurality of features based on surroundings data of one or more surroundings sensors (122) of the vehicle (100), based on vehicle data of one or more vehicle sensors (124) of the vehicle (100), based on position data of a position sensor (123) of the vehicle (100), based on traffic data relating to the traffic in a road network in which the vehicle (100) is travelling, and/or based on digital map information relating to the road network; and
- detect the current driving situation, determine the interaction category, and/or predict the likely driving situation based on the feature values (201) and by means of a machine learning model.
9. The control unit (121) according to any one of the preceding claims, wherein
- the vehicle (100) comprises one or more surroundings sensors (122), which are arranged to determine surroundings data relating to the surroundings of the vehicle (100); and
- the control unit (121) is arranged to detect the current driving situation, determine the interaction category, and/or predict the likely driving situation based on the surroundings data.
10. The control unit (121) according to any one of the preceding claims, wherein
- the vehicle (100) comprises a position sensor (123), which is arranged to determine position data relating to the position of the vehicle (100); and
- the control unit (121) is arranged to detect the current driving situation, determine the interaction category, and/or predict the likely driving situation based on the position data and based on digital map information relating to the road network in which the vehicle (100) is travelling.
11. The control unit (121) according to any one of the preceding claims, wherein
- the vehicle (100) comprises one or more vehicle sensors (124), which are arranged to determine vehicle data relating to state variables of the vehicle (100); and
- the control unit (121) is arranged to detect the current driving situation, determine the interaction category, and/or predict the likely driving situation based on the vehicle data.
12. A method (300) for operation of an autonomous vehicle (100); wherein the method (300) comprises:
- detecting (301) a current driving situation of the vehicle (100) in which there is a need for interaction of the vehicle (100) with a vehicle external unit (110);
- assigning (302) one of a plurality of different interaction categories to the interaction demand of the current driving situation;
- performing (303), according to the assigned interaction category, an interaction with the vehicle external unit (110) with respect to the current driving situation;
- predicting (304) a likely driving situation of the autonomous vehicle (100) that is likely to occur within a future period and in which there is a need for interaction of the vehicle (100) with a vehicle external unit (110); and
- performing (305) one or more measures to avoid the likely driving situation and/or to change the interaction demand of the likely driving situation, in particular to reduce the interaction demand of the likely driving situation.

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
DE102019110040.2 2019-04-16
DE102019110040.2A DE102019110040A1 (en) 2019-04-16 2019-04-16 Control unit and method for the recognition, classification and prediction of a need for interaction of an automated driving vehicle
PCT/EP2020/055439 WO2020212007A1 (en) 2019-04-16 2020-03-02 Control unit and method for detecting, classifying, and predicting an interaction requirement of an automatically driven vehicle

Publications (1)

Publication Number Publication Date
CN113661111A true CN113661111A (en) 2021-11-16

Family

ID=69804839

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080027899.XA Pending CN113661111A (en) 2019-04-16 2020-03-02 Control unit and method for the identification, classification and prediction of the interaction demand of an autonomous vehicle

Country Status (4)

Country Link
US (1) US20220153301A1 (en)
CN (1) CN113661111A (en)
DE (1) DE102019110040A1 (en)
WO (1) WO2020212007A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS63150710A (en) * 1986-12-16 1988-06-23 Shinko Electric Co Ltd Method for evading collision in autonomous unmanned vehicle system
DE19856732A1 (en) * 1998-12-09 2000-06-15 Bayerische Motoren Werke Ag Gear change controller for continuous automatic gearbox for motor vehicle with electronic controller controls gear changes depending on input signals reproducing ambient conditions
DE102015223481A1 (en) * 2015-11-26 2017-06-01 Bayerische Motoren Werke Aktiengesellschaft Method for adjusting a steering assistance of a vehicle
US20170192423A1 (en) * 2016-01-04 2017-07-06 Cruise Automation, Inc. System and method for remotely assisting autonomous vehicle operation
DE102016215314A1 (en) * 2016-08-17 2018-02-22 Bayerische Motoren Werke Aktiengesellschaft Driver assistance system, means of transportation and method for predicting a traffic situation
US10042359B1 (en) * 2016-01-22 2018-08-07 State Farm Mutual Automobile Insurance Company Autonomous vehicle refueling
CN108510771A (en) * 2017-02-27 2018-09-07 奥迪股份公司 Driving assistance system and vehicle including the driving assistance system
US20190065869A1 (en) * 2017-08-26 2019-02-28 Here Global B.V. Predicting features on a road network with repeating geometry patterns

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9082239B2 (en) * 2012-03-14 2015-07-14 Flextronics Ap, Llc Intelligent vehicle for assisting vehicle occupants
DE102014111023A1 (en) * 2014-08-04 2016-02-04 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Method and device for controlling an automated vehicle
DE102015220481A1 (en) * 2015-10-21 2017-05-11 Volkswagen Aktiengesellschaft Method and device in a traffic unit for the cooperative tuning of driving maneuvers of at least two motor vehicles
US9983591B2 (en) * 2015-11-05 2018-05-29 Ford Global Technologies, Llc Autonomous driving at intersections based on perception data
US9952056B2 (en) * 2016-03-11 2018-04-24 Route4Me, Inc. Methods and systems for detecting and verifying route deviations
DE102016205972A1 (en) * 2016-04-11 2017-11-09 Volkswagen Aktiengesellschaft Method for the autonomous or semi-autonomous execution of a cooperative driving maneuver
DE102016210760A1 (en) * 2016-06-16 2017-12-21 Bayerische Motoren Werke Aktiengesellschaft Method for interaction between a vehicle and road users
DE102016216680A1 (en) * 2016-09-02 2018-03-08 Bayerische Motoren Werke Aktiengesellschaft Communication of the intention of a vehicle to another road user
DE102017205230A1 (en) * 2017-03-28 2018-10-04 Continental Teves Ag & Co. Ohg Method for determining a cooperation partner for carrying out a driving maneuver and system
EP3457382A1 (en) * 2017-09-15 2019-03-20 Volkswagen Aktiengesellschaft Method for planning a collision avoidance maneuver, corresponding control unit and vehicle equipped with a control unit as well as computer program


Also Published As

Publication number Publication date
US20220153301A1 (en) 2022-05-19
DE102019110040A1 (en) 2020-10-22
WO2020212007A1 (en) 2020-10-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination