EP3702971A1 - Method for estimating the uncertainty of a recognition process in a recognition unit; electronic control unit and motor vehicle - Google Patents
Method for estimating the uncertainty of a recognition process in a recognition unit; electronic control unit and motor vehicle
- Publication number
- EP3702971A1 (application EP19159652.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- nodes
- data
- variance
- value
- node
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 238000000034 method Methods 0.000 title claims abstract description 83
- 230000008569 process Effects 0.000 title claims abstract description 42
- 238000013528 artificial neural network Methods 0.000 claims abstract description 62
- 230000000644 propagated effect Effects 0.000 claims abstract description 8
- 230000001902 propagating effect Effects 0.000 claims abstract description 5
- 230000004913 activation Effects 0.000 claims description 73
- 238000010304 firing Methods 0.000 claims description 23
- 238000012545 processing Methods 0.000 claims description 18
- 238000004364 calculation method Methods 0.000 claims description 13
- 238000012549 training Methods 0.000 claims description 12
- 230000006870 function Effects 0.000 description 54
- 238000005070 sampling Methods 0.000 description 19
- 230000008901 benefit Effects 0.000 description 10
- 238000002347 injection Methods 0.000 description 7
- 239000007924 injection Substances 0.000 description 7
- 239000000243 solution Substances 0.000 description 4
- 238000001514 detection method Methods 0.000 description 3
- 238000010586 diagram Methods 0.000 description 3
- 238000002474 experimental method Methods 0.000 description 2
- 210000002569 neuron Anatomy 0.000 description 2
- 238000012614 Monte-Carlo sampling Methods 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000004590 computer program Methods 0.000 description 1
- 238000010924 continuous production Methods 0.000 description 1
- 238000013500 data storage Methods 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 238000007689 inspection Methods 0.000 description 1
- 238000012886 linear function Methods 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 230000001537 neural effect Effects 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 230000035945 sensitivity Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/04—Monitoring the functioning of the control system
- B60W50/045—Monitoring control system parameters
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/0088—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W2050/0062—Adapting control system settings
- B60W2050/0075—Automatic parameter input, automatic initialising or calibrating means
- B60W2050/0083—Setting, resetting, calibration
- B60W2050/0088—Adaptive recalibration
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2420/00—Indexing codes relating to the type of sensors based on the principle of their operation
- B60W2420/40—Photo, light or radio wave sensitive means, e.g. infrared sensors
- B60W2420/403—Image sensing, e.g. optical camera
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60Y—INDEXING SCHEME RELATING TO ASPECTS CROSS-CUTTING VEHICLE TECHNOLOGY
- B60Y2302/00—Responses or measures related to driver conditions
- B60Y2302/05—Leading to automatic stopping of the vehicle
Definitions
- the activation function ϕ may for example be based on one of the following functions: tanh, sinh, ReLU.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Evolutionary Computation (AREA)
- Automation & Control Theory (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Human Computer Interaction (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- Business, Economics & Management (AREA)
- Game Theory and Decision Science (AREA)
- Medical Informatics (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Feedback Control In General (AREA)
Abstract
Description
- The invention is concerned with a method for estimating the uncertainty of a recognition process in a recognition unit. Such a recognition unit may be provided in a driving system for automated and/or autonomous driving. The invention also provides an electronic control unit that is designed to perform the inventive method. The invention also provides a motor vehicle comprising the inventive electronic control unit.
- Artificial neural networks have found many applications in tasks such as computer vision in automated and autonomous driving. The problem is that the majority of these neural networks give point estimates as output data without any notion of their uncertainty. This is a problem in safety critical environments such as autonomous driving since uncertainty may define planning and system behavior (e.g. be more or less aggressive at an intersection).
- Uncertainty in neural networks is an active topic of research, generally under the name of Bayesian deep learning. The latest methods that are applied to modern neural networks are either variational inference or Monte-Carlo (MC) based sampling techniques. Prior art on MC sampling is published, e.g., by Gal et al. (https://arxiv.org/pdf/1506.02142.pdf). Prior art on variational inference is published by Hinton et al. (www.cs.toronto.edu/∼fritz/absps/nlgbn.pdf, https://arxiv.org/abs/1312.6114).
- The drawback of MC sampling is that, since it is based on sampling (repeatedly applying inference to input data with noise), it is computationally very expensive and therefore not suitable for online, real-time problems such as detecting objects on an embedded computer in an autonomously driving vehicle.
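- For illustration only (not part of the patent text), the following Python sketch shows what such dropout-based MC sampling can look like for a made-up two-layer network; all names, shapes and the dropout placement are assumptions. The cost driver is the loop: T complete forward passes per input.

```python
import numpy as np

def forward(x, W1, W2, p, rng):
    """One forward pass of a toy two-layer network with dropout rate p."""
    h = np.tanh(W1 @ x)
    mask = rng.random(h.shape) > p          # randomly drop hidden units
    return W2 @ (h * mask / (1.0 - p))      # inverted-dropout scaling

def mc_dropout(x, W1, W2, p=0.1, T=100, seed=0):
    """T noisy forward passes; the sample variance is the uncertainty."""
    rng = np.random.default_rng(seed)
    samples = np.stack([forward(x, W1, W2, p, rng) for _ in range(T)])
    return samples.mean(axis=0), samples.var(axis=0)
```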
- It is an object of the present invention to provide uncertainty data that describe the uncertainty associated with a specific recognition process in which an artificial neural network provides output data as a recognition result in answer to specific input data, e.g. sensor data. Another term for "recognition process" is "inference".
- The object is accomplished by the subject matter of the independent claims. Advantageous developments with convenient and non-trivial further embodiments of the invention are specified in the following description, the dependent claims and the figures.
- The invention provides a method for estimating or evaluating the uncertainty of a recognition process in a recognition unit. Such a recognition unit may be provided for detecting and/or recognizing at least one object on the basis of sensor data, e.g. image data and/or radar data and/or lidar data. The recognition unit may be software or a computer program running on a processing device of an electronic control unit of a vehicle. The method assumes that the recognition unit receives input data describing at least one object in the surroundings of the recognition unit and generates output data describing a recognition result concerning the at least one object (i.e. the object may be detected and/or classified). In addition to the output data, by means of the method, uncertainty data regarding the recognition result (i.e. the output data) are also determined.
- According to the method, the recognition process comprises that the recognition unit operates an artificial neural network. In the following, the "artificial neural network" is also termed "neural network" (in order to simplify the description). The neural network comprises at least one input layer with nodes for receiving the input data, at least one output layer with nodes for generating the output data, and at least one hidden layer with nodes for connecting the at least one input layer to the at least one output layer. The wording "at least one" means "one or more than one" in each respective case. Another term for "node" is "artificial neuron" or "unit". The nodes are interconnected by links which connect the nodes to form the neural network. The links between the nodes may be defined by the type of the neural network. The links define the structure of the neural network. By these links, the nodes of the at least one hidden layer and the at least one output layer are each linked to respective preceding ones of the nodes of the network. In other words, from the point of view of each node of the at least one hidden layer and the at least one output layer, these nodes receive values from their respective preceding nodes during the recognition process. These values are termed "activation values" in the following. That is to say, from the point of view of each of these nodes, there are preceding nodes that each provide or deliver an activation value to the respective node. Likewise, each node of the at least one hidden layer delivers its own activation value to at least one following node, i.e. to at least one node of another hidden layer and/or an output layer. In the case of a recursive neural network (where links may end at at least one node of an input layer), a node from an input layer may receive an activation value from at least one other node.
- In the recognition process, the nodes of the at least one hidden layer and the at least one output layer calculate their own respective activation value by means of a predefined "firing function" from the activation values of the respective preceding nodes. Nodes from the input layer may generate their respective activation value on the basis of the input data. The activation values generated by the nodes of the at least one output layer may be provided as the output data. This recognition process generally applies to any artificial neural network.
- In this context, when a neural network generates output data from input data, it is of interest to measure or quantify the uncertainty associated with the output data. For example, if the recognition process comprises detecting or recognizing an object on the basis of image data that were obtained in bright daylight, the object may be clearly visible in the image data. If the same object is to be detected on the basis of image data that were obtained in the dark or at twilight, the neural network will still provide a recognition result, just as in the case of image data from bright daylight. Therefore, from the output data alone it cannot be decided how "sure" or certain the neural network is regarding the recognition result.
- The invention provides a solution that generates uncertainty data for expressing or describing the epistemic uncertainty, but without the use of sampling. The term "epistemic" refers to the fact that the uncertainty data do not describe a general uncertainty of the neural network, but rather the uncertainty that is associated with a specific recognition process, i.e. that is linked to specific output data of the neural network and may therefore describe the current recognition situation.
- According to the method, a predefined subset of the nodes of the network is defined and a respective noise model of a noise comprising a variance value is associated with each of the nodes of the subset. In other words, a calculation is performed that resembles the injection of noise at certain nodes in the neural network (i.e. at the nodes of the subset). Said subset of nodes can be, e.g. the nodes of a specific hidden layer or of several hidden layers and/or the nodes of an output layer or of several output layers. For example, the nodes of the ultimate hidden layer and/or the penultimate hidden layer and/or the 3rd-last hidden layer may be chosen to constitute the subset.
- Note that the "injection" of the noise is only meant to be understood figuratively, as will be explained shortly. Once the noise is injected into the neural network at the nodes of the subset, the noise will propagate through the network until it reaches the nodes of the at least one output layer. Only for the sake of understanding, it is stated here that by injecting noise into the neural network, it is measured how easily the neural network will deviate from its original recognition result, i.e. from the output data. This sensitivity to the noise may be interpreted as a degree of uncertainty.
- However, the noise will not be injected during the recognition process, i.e. the actual output data are calculated from the input data without the influence of the noise. Then, once the output data are calculated, the noise is virtually "injected" into the neural network while the neural network is still in the state it took from the recognition process, when the activation values of each of the nodes are known or present. As has been explained above, the "injection" of the noise does not require that a noise process needs to be sampled. Instead, the method calculates how the variance of the injected noise would change while the noise is propagated through the network. This does not require sampling a noise process (i.e. repeating the recognition process several times with noise added to the nodes of said subset). Instead, only one calculation of the propagation of the variance values is needed.
- To this end, by the inventive method, starting from the nodes of said subset of nodes, the variance value of the respective associated noise model is propagated through the neural network to the at least one output layer, wherein propagating the variance values comprises that each node that receives the variance values from its respective preceding nodes calculates its own variance value from the received variance values of the respective preceding nodes by means of a predefined variance propagation function. The variance propagation function defines how the variance value of the noise, as it results at a specific node, is calculated from the variance values of the preceding nodes.
- Finally, the resulting variance values of the nodes of the at least one output layer are provided as the uncertainty data. In other words, the variance of the noise as it arrives at the at least one output layer is a measure for quantifying the uncertainty of the neural network when each of the nodes of the neural network is in the state that it took due to the recognition process.
- The invention provides the advantage that no sampling or repetition of the recognition process itself needs to be performed in order to quantify the uncertainty of the neural network with regard to specific output data. Nevertheless, the method provides an approximation of said sampling method.
- The invention also comprises embodiments that provide features which afford additional technical advantages.
- In one embodiment the variance propagation function of the respective node comprises that the respective variance value of each of the respective preceding nodes is weighted or multiplied by a weighting factor value that is based on the gradient or partial derivative of the firing function, or a predefined approximation of that partial derivative, wherein the partial derivative is calculated with respect to the activation value of the respective preceding node. In other words, it is considered how steep or how flat the firing function is at the specific point or state of the preceding node as defined by its activation value. A flat firing function (partial derivative or gradient smaller than 1) will reduce the variance value that is passed on or propagated to the following node; a steep firing function (partial derivative or gradient greater than 1) will increase the variance value that is passed on to the following node. As the partial derivative is calculated with respect to the activation value, this is a way of considering the current state of each preceding node. As an example for calculating a variance value Vn of a node n from the variance values V1, V2,..., Vk of the k preceding nodes (with k a natural number), it may be assumed that the activation value Xn of node n is calculated on the basis of a firing function F(X1, X2,..., Xk) from the activation values X1, X2,..., Xk of the k preceding nodes. The symbol "..." represents the fact that overall two or more preceding nodes may exist and all k preceding nodes shall be considered. The partial derivatives of the firing function F with regard to the respective activation values X1, X2,..., Xk may then be denoted as δF/δX1 for X1, δF/δX2 for X2 and δF/δXk for Xk. The calculation of a partial derivative for a given firing function F is known to the skilled person.
- In one embodiment, in the variance propagation function a value of the local partial derivative or its approximation is squared and the squared value is used as a factor for multiplying the respective variance value. This provides the advantage that the propagation of a real noise through the network is simulated. One embodiment for calculating the variance value Vn of a node n from the variance values V1, V2,..., Vk of its k preceding nodes can therefore be expressed by the following equation: Vn = (δF/δX1)²·V1 + (δF/δX2)²·V2 + ... + (δF/δXk)²·Vk.
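- A minimal Python sketch of this variance propagation function (the names are illustrative; dF_dX is assumed to already hold the partial derivatives δF/δX1,..., δF/δXk evaluated at the activation values of the current recognition process):

```python
def propagate_variance(dF_dX, V):
    """Variance value Vn of node n: each preceding node's variance V[i] is
    weighted by the squared partial derivative dF_dX[i] and summed up."""
    return sum(d ** 2 * v for d, v in zip(dF_dX, V))
```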
- In one embodiment the 1st order Taylor expansion of the firing function is used as the partial derivative. More generally, the variance propagation function comprises that the squared value of the first order Taylor expansion of the firing function is used. The 1st (first) order Taylor expansion provides a reliable way of approximating the variance of a node or unit after applying the activation function, given the variance prior to the activation function. It has been discovered that using the 1st order Taylor expansion is sufficient for providing meaningful or significant uncertainty data. Note that in the case of a firing function that is an affine or linear function, the 1st order Taylor expansion even provides the exact solution.
- In one embodiment the firing function of each respective node comprises the steps of weighting the respective activation value of each preceding node with a respective individual link weighting value, summing up the weighted activation values and applying an activation function ϕ to the summed-up weighted activation values, wherein the output of the activation function is the activation value of the node itself. For example, for the above example and assuming that the weighting values for the preceding nodes are denoted as W1, W2,..., Wk, the activation value Xn of node n may be calculated by the following equation: Xn = ϕ(W1·X1 + W2·X2 + ... + Wk·Xk).
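- A sketch of this firing function in Python (tanh stands in for one possible activation function ϕ; the names are illustrative assumptions):

```python
import numpy as np

def firing_function(X, W, phi=np.tanh):
    """Weight each received activation value X[i] with W[i], sum up to the
    activation level An, then apply the activation function phi."""
    A_n = sum(w * x for w, x in zip(W, X))
    return phi(A_n)
```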
- In one embodiment the respective noise model provides a variance value that is a function of the squared value of the activation value of the node the model is associated to. In other words, if the activation value is, for example, X1, then the variance value would be proportional to the squared value, i.e. (X1)². This provides the advantage that the power of the noise is a function of the contribution of that node to the recognition result or the output data. This delivers more realistic uncertainty data.
- In one embodiment, for modelling the noise in the respective noise model, a predefined drop-out rate p of the nodes of the described subset is defined and the variance value of the noise of that noise model is given as or is proportional to p*(1-p). This provides the advantage that the method may even simulate the MC sampling method that was already described. Alternatively, the respective noise model models a predefined analytically-defined noise source. In one embodiment, such a noise source is a Gaussian noise source describing a Gaussian noise. Preferably, each noise model provides a zero-mean noise, i.e. a noise with a mean value of 0. This provides the advantage that only variance values need to be propagated.
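- A one-line sketch combining the two noise-model embodiments above (variance proportional to both p*(1-p) and the squared activation value); this is an illustration under those assumptions, not a normative implementation:

```python
def noise_model_variance(X, p=0.1):
    """Variance value of the noise model of a subset node with activation
    value X and drop-out rate p: proportional to p*(1-p) and to X**2."""
    return p * (1.0 - p) * X ** 2
```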
- In one embodiment, the recognition process provides output data that comprise a regression calculation for the input data. Alternatively, the output data may provide a classification result for the input data. The special advantage of providing uncertainty data for a regression calculation is that a regression calculation is otherwise difficult to evaluate with regard to uncertainty.
- In one embodiment, a driving system for automated and/or autonomous driving of a vehicle is controlled on the basis of the uncertainty data. This provides the advantage that the driving system may react to the degree of uncertainty associated to or regarding the output data of the recognition unit. For example, the driving system may start an emergency-stop maneuver in the case that the uncertainty data indicate that uncertainty is above a predefined threshold and/or the output data may be ignored in the case that the uncertainty data indicate that uncertainty is above the threshold.
- In one embodiment, sensor signals or sensor data from at least one sensor observing the surroundings are used as the input data. This provides the advantage that the sensor data may be evaluated regarding their suitability for detecting the at least one object in the surroundings.
- As has been explained above, the "injection" of the noise as part of the recognition process does not require that a noise is applied to or processed by the neural network. The injection is only virtual. All that is needed is the propagation of the variance values. However, in one embodiment, as part of the training process of the neural network (as opposed to the recognition process), the respective noise that is modelled by the respective noise model is actually applied as a part of the training data to the respective node of the subset. In other words, actual samples of the noise that corresponds to the respective noise model are provided as noise data. This corresponds to a real application or injection of noise. This way, the neural network learns or is trained to minimize the influence of the injected noise on the training data (i.e. to minimize the variance in the output) by propagating the actually injected noise to the output at training time. This way, at inference time (i.e. in the recognition process), use is made of the fact that the variance of the noise is known, and this variance value may be propagated analytically, because the neural network has learned how to deal with this type of noise injection during training. The injection of noise as part of the training process can be performed on the basis of the drop-out technique, which is well known in the prior art.
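- A sketch of such training-time noise injection, assuming plain inverted dropout on the hidden layer that constitutes said subset (framework layers such as torch.nn.Dropout provide the same behaviour):

```python
import numpy as np

def hidden_layer(x, W, p, rng, training):
    """Hidden layer whose nodes form the subset. Noise is actually sampled
    only during training; in the recognition process no noise is applied."""
    h = np.tanh(W @ x)
    if training:
        mask = rng.random(h.shape) > p   # Bernoulli drop-out mask
        h = h * mask / (1.0 - p)         # real, sampled noise injection
    return h
```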
- For performing the inventive method in a motor vehicle, an electronic control unit is provided. The inventive electronic control unit comprises a digital processing device that is designed to perform an embodiment of the inventive method. The processing device may comprise one or more processing units, e.g. a central processing unit (CPU) and/or a signal processing unit (SPU) and/or a graphical processing unit (GPU) and/or a neural processing unit (NPU). The processing device may comprise software or program code comprising computer-readable instructions which, when executed by the processing device, cause it to perform the embodiment of the method. The software or program code may be stored in a data storage of the processing device.
- The invention also provides a motor vehicle comprising an autonomous driving system for automated and/or autonomous driving of the vehicle. The motor vehicle also comprises an electronic control unit that is an embodiment of the inventive control unit. The driving system is controlled on the basis of output data and uncertainty data that are both generated by the electronic control unit, i.e. on the basis of an embodiment of the inventive method. The motor vehicle may be designed as a passenger vehicle or a truck or a flying vehicle (e.g. plane or helicopter) or a motorcycle or a boat.
- The invention also comprises the combinations of the features of the different embodiments.
- In the following an exemplary implementation of the invention is described. The figures show:
- Fig. 1
- a schematic illustration of an embodiment of the inventive motor vehicle;
- Fig. 2
- a schematic illustration of an artificial neural network that is operated in a recognition unit of an electronic control unit of the motor vehicle;
- Fig. 3
- a sketch for illustrating a node in a layer of the neural network together with its preceding and its following nodes;
- Fig. 4
- a sketch of the node of Fig. 3 for illustrating the propagation of variance values through the neural network;
- Fig. 5
- diagrams showing output data and uncertainty data;
- Fig. 6
- a schematic illustration of input data and output data as they may be generated by the neural network; and
- Fig. 7
- a schematic illustration of uncertainty data that may be associated with the output data of Fig. 6.
- The embodiment explained in the following is a preferred embodiment of the invention. However, the described components of the embodiment each represent individual features of the invention which are to be considered independently of each other, which each develop the invention independently of each other, and which are thereby also to be regarded as a component of the invention individually or in a combination other than the one shown. Furthermore, the described embodiment can also be supplemented by further features of the invention already described.
- In the figures identical reference signs indicate elements that provide the same function.
- Fig. 1 shows a motor vehicle 10 that can be, for example, a passenger vehicle or a truck or one of the other vehicle types already mentioned. Vehicle 10 may be driving in an automated or autonomous mode. To this end, the vehicle 10 may comprise a driving system 11 for automated and/or autonomous driving. By means of the driving system 11, control signals 12 may be generated for controlling a steering 13 and/or an engine 14 and/or brakes 15 of vehicle 10. The driving system 11 may be designed to plan a driving trajectory 16 along which driving system 11 may drive or lead vehicle 10 by means of generating the control signals 12. Driving system 11 may comprise at least one computer for performing the automated or autonomous driving.
- For planning the trajectory 16, driving system 11 may consider or take into account at least one object 17 in the surroundings 18 of the vehicle 10. Examples for objects 17 are traffic signs (as shown), other traffic participants (vehicles, cyclists, pedestrians), parked cars, traffic lights and roads, just to name a few. The detection and/or recognition of each object 17 in the surroundings 18 may be performed in an automated way by means of at least one sensor 19 and an electronic control unit 20. The at least one sensor 19 may provide a detection range 21 that may be directed towards the surroundings 18. The at least one sensor 19 may generate sensor signals or sensor data 22 representing or describing the surroundings 18 as they may be sensed by the at least one sensor 19 within the detection range 21. Examples for possible sensors are a camera, a radar, a lidar and an ultrasonic sensor. For example, the sensor data 22 may be image data.
- The electronic control unit 20 may receive the sensor data 22. Based on sensor data 22, the electronic control unit 20 may provide output data 23 which may describe the at least one object 17 in the surroundings 18 such that knowledge about the at least one object 17 is available to the driving system 11. For example, the output data 23 may describe the shape and/or position and/or a class (vehicle/cyclist/pedestrian/traffic sign/road) of the respective object 17.
- For extracting or deriving the output data 23 from the sensor data 22, the electronic control unit 20 may operate a recognition unit 24. To this end, electronic control unit 20 may comprise a processing device 25 which may be based on at least one digital processor, e.g. at least one CPU and/or GPU. The recognition unit 24 may be software which may receive the sensor data 22 as input data 26. Recognition unit 24 may feed or provide the received input data 26 to an artificial neural network 27. Neural network 27 may be trained in such a way that, based on the input data 26, the output data 23 are generated. Neural network 27 may be trained to apply a classification and/or a regression to the input data 26 in order to generate the output data 23.
- Once the output data 23 are provided to the driving system 11, driving system 11 has knowledge or information about the at least one object 17 in the surroundings 18 and may then plan the trajectory 16 accordingly. This can be a continuous process such that, during a movement of vehicle 10, trajectory 16 may be adapted to changes in surroundings 18.
- The generation of the output data 23 from the input data 26 is a recognition process 28. This recognition process 28 is associated with an uncertainty regarding the degree of certainty or sureness that the output data 23 are actually correct. Therefore, in vehicle 10, driving system 11 may consider this uncertainty while driving the vehicle 10, e.g. while planning the trajectory 16. To this end, recognition unit 24 may also provide uncertainty data 29 that may indicate a degree of uncertainty regarding the recognition result of the recognition process 28, i.e. the uncertainty regarding the correctness of output data 23 with regard to the at least one object 17 in the surroundings 18.
- For generating the uncertainty data 29, recognition unit 24 takes into account the current state of neural network 27 as it results from the actual recognition process 28. For explaining the generation of the uncertainty data 29, in the following the actual recognition process 28 is explained on the basis of Fig. 2 and Fig. 3; then, based on Fig. 4, the generation of the uncertainty data 29 is explained.
- Fig. 2 shows one exemplary embodiment of neural network 27, i.e. one possible topology, although this topology is only exemplary. Any topology available to the skilled person may be used.
- Neural network 27 may comprise at least one input layer 30, at least one hidden layer 31 and at least one output layer 32. For the sake of simplicity, only one input layer 30, one hidden layer 31 and one output layer 32 are shown. In each of the layers 30, 31, 32, nodes 33 may be provided for processing data. Each of the nodes 33 of the at least one input layer 30 may receive at least a part of input data 26. The nodes 33 of the at least one output layer 32 may each generate at least a part of the output data 23. The at least one input layer 30 is connected to the at least one output layer 32 over the at least one hidden layer 31. The connection is defined by links 34, each of which defines a connection between two of the nodes 33. In Fig. 2, only some of the links 34 are provided with reference signs for the sake of clarity.
- Fig. 3 illustrates the point of view of one single node 33 within neural network 27. For the following explanation, this node is referred to as node n. The node n may be part of a hidden layer 31 or an output layer 32. From the point of view of node n, several preceding nodes 35 are defined by means of the links 34. Fig. 3 shows that node n may have k preceding nodes 35, each of which provides an output value or activation value 36 to node n over the respective link 34. The activation values 36 are denoted as X1, X2,..., Xk for the k preceding nodes 35. On the basis of the received activation values 36, node n may itself generate an output value or activation value 37, which is denoted as Xn. In the case that node n is part of a hidden layer 31, the activation value 37 of node n may be provided to following nodes 38 over respective links 34. In the case that node n is part of an output layer 32, its activation value 37 may become part of the output data 23.
- From the point of view of node n, by calculating its activation value 37 from the activation values 36 of the preceding nodes 35, the activation value 37 is a function of the activation values 36 of the preceding nodes 35. This function comprises all the calculation steps needed for obtaining activation value 37 from activation values 36. This function is termed here the firing function F, such that the following equation applies: Xn = F(X1, X2,..., Xk).
- In Fig. 3, one possible way of calculating the activation value 37 is illustrated. Each link 34 may be associated with a weighting value 39. For the illustrated preceding nodes 35, the weighting values 39 are denoted here as W1, W2,..., Wk. Furthermore, a summing function 40 and an activation function 41 may be provided for node n. The activation function 41 is denoted here as function ϕ (phi). The activation values 36 from the preceding nodes 35 may each be weighted with the respective weighting value 39 from the respective link 34; the weighted activation values may be summed up by means of the summing function 40, which results in an activation level An representing the summed-up weighted activation values 36. The activation level An may be used as an input value for the activation function 41. The output of activation function 41 may then be used as activation value 37 of node n. For the example shown in Fig. 3, the firing function F may therefore be expressed by the following equation: Xn = ϕ(An) with An = W1·X1 + W2·X2 + ... + Wk·Xk.
- Fig. 4 illustrates how, on the basis of the firing function F, a propagation function P may be provided for calculating the uncertainty data 29. Fig. 4 shows node n and its preceding nodes 35 (and its possible following nodes 38) in the same manner as is done in Fig. 3.
- After or while processing the input data 26, for each node 33 of the neural network 27, its respective activation value 36, 37 is available. In other words, the neural network 27 has taken a specific state that results from processing input data 26 for generating output data 23 in recognition process 28. Based on this current state, the uncertainty data 29 are generated.
- To this end, a predefined subset 42 of nodes 33 is selected. In Fig. 4, it is assumed that the preceding nodes 35 of node n belong to the subset 42. There may also be other nodes 33 that belong to subset 42. For example, all nodes 33 belonging to one specific hidden layer 31 or to several hidden layers 31 may be part of subset 42.
- For each node 33 that is part of subset 42, a predefined noise model 43 is associated. For example, a zero-mean Gaussian noise 44 may be modelled or described by the respective noise model 43. The noise model 43 provides a respective variance value 45 of the noise 44. For the preceding nodes 35 of node n, the respective variance values 45 are denoted as V1, V2,..., Vk.
- Starting from the nodes 33 belonging to subset 42, the variance values 45 are propagated through the neural network 27 to the at least one output layer 32 in a similar manner as has been done for the activation values X1, X2,..., Xk, Xn. The variance values resulting at the nodes 33 of the at least one output layer 32 may then be used as uncertainty data 29.
- For propagating the variance values 45 through the neural network 27, exemplary calculations are explained here for node n, which represents any node that follows the nodes 33 that are part of subset 42.
- For node n, from every preceding node 35, the respective variance value 45 may be received, and from the received variance values 45, a variance value 46 of node n may be calculated on the basis of the variance propagation function P, resulting in the following equation: Vn = P(V1, V2,..., Vk), wherein Vn denotes the variance value 46 of node n.
- The propagation function P may be based on the firing function F of node n. The partial derivative or the first order Taylor series expansion of firing function F may be used, resulting in the following equation: Vn = (δF/δX1)²·V1 + (δF/δX2)²·V2 + ... + (δF/δXk)²·Vk, with the partial derivatives evaluated at the activation values X1, X2,..., Xk from the recognition process.
- The definition of the respective partial derivative depends on the specific activation function ϕ that is used in neural network 27. For deriving the partial derivative, the skilled person may refer to mathematical literature.
- Note that the illustrations in Fig. 3 and Fig. 4 may not reflect the actual structure of neural network 27. For example, the links 34 and the activation values 36, 37 may also be represented by matrices and the described calculations may be implemented as matrix calculations, as is known to the skilled person concerned with neural networks.
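- As a sketch of such a matrix formulation (assuming one fully-connected layer with activation function tanh, so that δF/δXj = ϕ'(An)·Wnj; all names here are illustrative):

```python
import numpy as np

def propagate_layer_variance(V_in, W, A):
    """Variance values of a dense layer: for each node n,
    V_out[n] = phi'(A[n])**2 * sum_j W[n, j]**2 * V_in[j]."""
    dphi = 1.0 - np.tanh(A) ** 2   # derivative of tanh at the stored
                                   # activation levels A of this layer
    return dphi ** 2 * ((W ** 2) @ V_in)
```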
- Fig. 5 illustrates the application of the method described in connection with Fig. 4 to a neural network that provides a regression calculation. In a diagram 50, the result of the described MC sampling is shown, whereas in a diagram 51 the result of the above method is shown. As can be seen, the described method provides results very similar to those of the MC sampling.
- The underlying experiment provided input data I in the form of values in a range from -10 to 30. First, the MC sampling is explained.
- The neural network was trained on the basis of ground truth data 51 associating the input data I with specific ground truth output data 52 such that the ground truth output data 52 correspond to values of a sine function of the input data I. However, for the training, not all input data I were provided or used, but only the input data I in the range from 0 to 20. In other words, during training the neural network did not see the ground truth data 51 for the input data I in the range from -10 to 0 and in the range from 20 to 30. Accordingly, the output data 53 of the neural network for input data I in the ranges from -10 to 0 and from 20 to 30 deviate from the ground truth data 51.
- Furthermore, on the basis of MC sampling, additional recognition results 54 were calculated which define uncertainty data 55 indicating that the further the input data I deviate from the trained interval 0 to 20, the larger the uncertainty is, which is expressed by the size of the interval 55 defined by the uncertainty data 54.
- For obtaining uncertainty data 54 by means of MC sampling, several repetitions of the recognition process 28 are necessary, which demands a corresponding computational power of processing device 25 if sensor data 22 are supposed to be processed in real-time.
- An important parameter of the MC sampling is the dropout rate p for single nodes of the network during the repetition of the recognition process 28 for generating the additional recognition results 54.
- Based on the method illustrated in Fig. 4 and setting the variance values 45 of the noise models 43 to p*(1-p), the same result as with the MC sampling can be obtained without the need of performing the recognition process 28 repeatedly; rather, only one recognition process 28 is needed (for obtaining the output data 23). The uncertainty data 29 nevertheless indicate the same uncertainty level as in the case of MC sampling.
- Fig. 6 illustrates the result of an experiment based on sensor data 22 that may be, for example, image data of a sensor 19 that is a camera. The sensor data 22 were provided to recognition unit 24 as input data 26. As a result, output data 23 were generated that indicate which part of input data 26, i.e. which part or region of the image, belongs to the driving lanes 60, 61 and which part or region constitutes a background 62 of surroundings 18. The lanes 60, 61 and the background 62 may each constitute an object 17 of surroundings 18. Lane 60 may be the so-called ego-lane that the vehicle 10 should follow or use.
- Fig. 7 illustrates how uncertainty data 29 may be associated to the output data 23 on the basis of the method according to Fig. 4. For example, for each pixel of an image, a respective recognition result may be provided as the output data 23, for example indicating whether this pixel is showing lanes 60 or 61 or background 62 (see Fig. 6). Likewise, for each pixel, a variance value Vn may be provided indicating the uncertainty. This may express the epistemic uncertainty per pixel. The larger the variance value, the larger the uncertainty is. Fig. 7 illustrates that the border 63 between lanes 60 and 61 and the edges 64 of the lanes 60, 61 cause uncertainty to a higher degree (larger variance values) than regions 65 where the recognition result is clear or certain.
- In the following, a preferred embodiment of the invention is described.
- We start with the following assumptions:
- 1. We approximate the distribution of the noise in the hidden units (nodes of the at least one hidden layer) of the neural network as Gaussian.
- 2. We assume that the hidden units are independent random variables with regard to the noise.
- Given these assumptions, we can approximate the output of dropout-based Monte-Carlo sampling techniques with uncertainty propagation (which is a developed field in itself). We can say that dropout induces a distribution over the n-th activation of a hidden unit in the neural network with a variance given analytically by Var[Xn] = p·(1-p)·Xn², with p the dropout rate and Xn the activation value.
- Then, we propagate this uncertainty (given by the variance) using a first-order Taylor expansion at the non-linearities in the output of the network (it is applied to the output because this is the quantity we are interested in, but in principle it can be applied at any stage in the network).
- In the case where there are no non-linearities and Gaussian noise is applied only to the second-to-last layer, the approximation is actually the exact analytical solution.
- As illustrated in the example of Fig. 5, we trained a model on a simple sinusoidal wave between 0 and 20, and we should see an increasing epistemic uncertainty as we move away from this trained data distribution. Indeed we see this with, e.g., Monte-Carlo based sampling, but we also see that this matches perfectly with the analytical solution hereby provided.
- Another way of expressing the calculation of the variance values is the following: at each layer, an activation value fi of a node i is a function of the input activation values xj of all the preceding nodes j (with j the index of the preceding nodes). If the standard deviation is denoted as σ (such that σ² is the variance value), the following equation may be used for calculating the variance value of a given node i: σ(fi)² = Σj (δfi/δxj)²·σ(xj)². This approximation progressively gets worse the more layers we place between the dropout and the output, but usually dropout is placed at the end anyway.
- Removing the need for sampling means that we can have uncertainty estimates that are at best identical to an infinite number of samples (if you have a regression with no non-linearities and Gaussian noise regularization). At worst, it will still only be as bad as the 1st order Taylor expansion (if placed before the last layer).
- Overall, the example shows how a sample-free approximation of an epistemic uncertainty measure in an artificial neural network is provided by the invention.
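- The following self-contained sketch reproduces this sample-free/MC equivalence on a made-up two-layer regression network (all weights and sizes are invented; plain, non-inverted dropout masks are used so that the injected variance is exactly p*(1-p) times the squared activation values, and the linear output layer makes the first-order propagation exact):

```python
import numpy as np

rng = np.random.default_rng(42)
W1, W2 = rng.normal(size=(8, 1)), rng.normal(size=(1, 8))
p, x = 0.1, np.array([2.5])

# Recognition process: one ordinary forward pass, no noise involved.
h = np.tanh(W1 @ x)                 # activation values of the subset layer
y = W2 @ h                          # output data

# Sample-free estimate: propagate the noise-model variances analytically.
V_h = p * (1 - p) * h ** 2          # variance values at the subset nodes
V_y = (W2 ** 2) @ V_h               # linear output layer: derivative is W2

# Reference: actual MC dropout with plain (non-inverted) masks.
outs = np.stack([W2 @ (h * (rng.random(h.shape) > p)) for _ in range(20000)])
print(V_y, outs.var(axis=0))        # the two variance estimates agree
```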
Claims (13)
- Method for estimating the uncertainty of a recognition process (28) in a recognition unit (24), wherein in the recognition process (28), the recognition unit (24) receives input data (26) describing at least one object (17) in surroundings (18) of the recognition unit (24) and generates output data (23) describing a recognition result concerning the at least one object (17), wherein by the method uncertainty data (29) regarding the recognition result is determined,
wherein the recognition unit (24) operates an artificial neural network (27) that comprises at least one input layer (30) with nodes (33) for receiving the input data (26) and at least one output layer (32) with nodes (33) for generating the output data (23) and at least one hidden layer (31) with nodes (33) for connecting the at least one input layer (30) to the at least one output layer (32), wherein the nodes (33) of the at least one hidden layer (31) and of the at least one output layer (32) are each linked to respective preceding ones (35) of the nodes (33) of the network (27) and in the recognition process (28) the nodes (33) generate a respective activation value (37) which is calculated by means of a predefined firing function (F) from the activation values (36) of the respective preceding nodes (35),
characterized in that
for a predefined subset (42) of the nodes (33) of the network (27) a respective noise model (43) of a noise (44) comprising a variance value (45) is associated with each of the nodes (33) of the subset (42),
and starting from the nodes (33) of the subset (42) the variance value (45) of the respective associated noise model (43) is propagated through the neural network (27) to the at least one output layer (32) wherein propagating the variance values (45) comprises that at each node (n) that receives the variance values (45) of its respective preceding nodes (35), the variance value (Vn) of that node (n) is calculated from the variance values (V1, V2,...,Vk) of the respective preceding nodes (35) by means of a predefined variance propagation function (P), and
the resulting variance values (Vn) of the nodes (33) of the at least one output layer (32) are provided as the uncertainty data (29). - Method according to claim 1, wherein the variance propagation function (P) comprises that the respective variance value (45) of each of the preceding nodes (35) is weighted by a weighting factor that is based on the partial derivative of the firing function or a predefined approximation of the partial derivative, wherein the partial derivative is calculated with respect to the activation value (36) of that respective preceding node (35).
- Method according to claim 2, wherein in the variance propagation function (P) a value of the local partial derivative or its approximation is squared and the squared value is used as a factor for the respective variance value (45).
- Method according to any preceding claim, wherein the variance propagation function (P) comprises that the squared value of the first order Taylor expansion of the firing function (F) is used.
- Method according to any of the preceding claims, wherein the firing function (F) of each respective node (33) comprises the steps of weighting the respective activation value (36) of each preceding node (35) with a respective individual link weighting value (39), summing up the weighted activation values and applying an activation function ϕ to the summed-up weighted activation values (An), wherein the output of the activation function ϕ is the activation value (37) of the node (33) itself.
- Method according to any of the preceding claims, wherein the respective noise model (43) provides a variance value (V1, V2, Vk) that is a function of the squared value of the activation value (36) of the node (33) the model is associated to.
- Method according to any of the preceding claims, wherein for modelling the noise (44) in the respective noise model (43), a predefined drop-out rate p of the nodes (33) of the subset (42) is defined and the variance value (V1,V2,Vk) is proportional to p*(1-p) or wherein the respective noise model (43) models a predefined analytically-defined noise (44), in particular a Gaussian noise.
- Method according to any of the preceding claims, wherein the recognition process (28) provides output data (23) that comprise a regression calculation for the input data (26).
- Method according to any of the preceding claims, wherein on the basis of the uncertainty data (29), a driving system (11) for automated and/or autonomous driving of a vehicle (10) is controlled.
- Method according to any of the preceding claims, wherein as the input data (26), sensor data (22) from at least one sensor (19) that is observing the surroundings are provided.
- Method according to any of the preceding claims, wherein as part of a training process of the neural network (27) the respective noise (44) that is modelled by the respective noise model (43) is applied as a part of training data to the respective node (33) of the subset (42).
- Electronic control unit (20) comprising a digital processing device (25) that is designed to perform a method according to any of the preceding claims.
- Motor vehicle (10) comprising a driving system (11) for automated and/or autonomous driving of the vehicle (10) and comprising an electronic control unit (20) according to claim 12, wherein the driving system (11) is controlled on the basis of output data (23) and uncertainty data (29) of the electronic control unit (20).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP19159652.7A EP3702971A1 (en) | 2019-02-27 | 2019-02-27 | Method for estimating the uncertainty of a recognition process in a recognition unit; electronic control unit and motor vehicle |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP19159652.7A EP3702971A1 (en) | 2019-02-27 | 2019-02-27 | Method for estimating the uncertainty of a recognition process in a recognition unit; electronic control unit and motor vehicle |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3702971A1 true EP3702971A1 (en) | 2020-09-02 |
Family
ID=65628638
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP19159652.7A Withdrawn EP3702971A1 (en) | 2019-02-27 | 2019-02-27 | Method for estimating the uncertainty of a recognition process in a recognition unit; electronic control unit and motor vehicle |
Country Status (1)
Country | Link |
---|---|
EP (1) | EP3702971A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230061614A1 (en) * | 2021-08-31 | 2023-03-02 | Micron Technology, Inc. | Vehicle-based apparatus for noise injection and monitoring |
DE102022209635A1 (en) | 2022-09-14 | 2024-03-14 | Volkswagen Aktiengesellschaft | Method for operating a learning system, computer program product and vehicle |
US12122400B2 (en) * | 2021-08-31 | 2024-10-22 | Micron Technology, Inc. | Vehicle-based apparatus for noise injection and monitoring |
-
2019
- 2019-02-27 EP EP19159652.7A patent/EP3702971A1/en not_active Withdrawn
Non-Patent Citations (5)
Title |
---|
"Advances in Independent Component Analysis", 1 January 2000, SPRINGER-VERLAG, ISBN: 978-1-4471-0443-8, article HARRI LAPPALAINEN ET AL: "Bayesian Nonlinear Independent Component Analysis by Multi-Layer Perceptrons", pages: 93 - 121, XP055608862, DOI: 10.1007/978-1-4471-0443-8_6 * |
HONKELA A ED - DIBAZAR A A ET AL: "Approximating nonlinear transformations of probability distributions for nonlinear independent component analysis", NEURAL NETWORKS, 2004. PROCEEDINGS. 2004 IEEE INTERNATIONAL JOINT CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, vol. 3, 25 July 2004 (2004-07-25), pages 2169 - 2174, XP010759255, ISBN: 978-0-7803-8359-3, DOI: 10.1109/IJCNN.2004.1380955 * |
PIERRE BALDI ET AL: "Understanding dropout", PROCEEDINGS OF THE 27TH ANNUAL CONFERENCE ON NEURAL INFORMATION PROCESSING SYSTEMS (NIPS'13), 5 December 2013 (2013-12-05), XP055193473, Retrieved from the Internet <URL:http://papers.nips.cc/paper/4878-understanding-dropout.pdf> [retrieved on 20150603] * |
RHIANNON MICHELMORE ET AL: "Evaluating Uncertainty Quantification in End-to-End Autonomous Driving Control", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 16 November 2018 (2018-11-16), XP081051099 * |
SIMON J. JULIER ET AL: "New extension of the Kalman filter to nonlinear systems", PROCEEDINGS SPIE 7513, 2009 INTERNATIONAL CONFERENCE ON OPTICAL INSTRUMENTS AND TECHNOLOGY, vol. 3068, 28 July 1997 (1997-07-28), 1000 20th St. Bellingham WA 98225-6705 USA, pages 182, XP055608764, ISSN: 0277-786X, ISBN: 978-1-5106-2781-9, DOI: 10.1117/12.280797 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230061614A1 (en) * | 2021-08-31 | 2023-03-02 | Micron Technology, Inc. | Vehicle-based apparatus for noise injection and monitoring |
US12122400B2 (en) * | 2021-08-31 | 2024-10-22 | Micron Technology, Inc. | Vehicle-based apparatus for noise injection and monitoring |
DE102022209635A1 (en) | 2022-09-14 | 2024-03-14 | Volkswagen Aktiengesellschaft | Method for operating a learning system, computer program product and vehicle |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7367183B2 (en) | Occupancy prediction neural network | |
US11189171B2 (en) | Traffic prediction with reparameterized pushforward policy for autonomous vehicles | |
Al-Qizwini et al. | Deep learning algorithm for autonomous driving using googlenet | |
Kim et al. | Robust lane detection based on convolutional neural network and random sample consensus | |
CN110647839A (en) | Method and device for generating automatic driving strategy and computer readable storage medium | |
CN112487954B (en) | Pedestrian crossing behavior prediction method for plane intersection | |
CN111738037B (en) | Automatic driving method, system and vehicle thereof | |
CN112703459A (en) | Iterative generation of confrontational scenarios | |
US11967103B2 (en) | Multi-modal 3-D pose estimation | |
KR20210002018A (en) | Method for estimating a global uncertainty of a neural network | |
Wang et al. | STMAG: A spatial-temporal mixed attention graph-based convolution model for multi-data flow safety prediction | |
Hu et al. | Learning a deep cascaded neural network for multiple motion commands prediction in autonomous driving | |
CN114514524A (en) | Multi-agent simulation | |
CN114787739A (en) | Smart body trajectory prediction using vectorized input | |
EP3690756A1 (en) | Learning method and learning device for updating hd map by reconstructing 3d space by using depth estimation information and class information on each object, which have been acquired through v2x information integration technique, and testing method and testing device using the same | |
CN114067166A (en) | Apparatus and method for determining physical properties of a physical object | |
US20240096076A1 (en) | Semantic segmentation neural network for point clouds | |
CN114299607A (en) | Human-vehicle collision risk degree analysis method based on automatic driving of vehicle | |
Mirus et al. | An investigation of vehicle behavior prediction using a vector power representation to encode spatial positions of multiple objects and neural networks | |
EP3702971A1 (en) | Method for estimating the uncertainty of a recognition process in a recognition unit; electronic control unit and motor vehicle | |
CN114723097A (en) | Method and system for determining weights for attention-based trajectory prediction methods | |
CN112597996B (en) | Method for detecting traffic sign significance in natural scene based on task driving | |
CN112698578B (en) | Training method of automatic driving model and related equipment | |
CN114889608A (en) | Attention mechanism-based vehicle lane change prediction method | |
Zhao et al. | Efficient textual explanations for complex road and traffic scenarios based on semantic segmentation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: ARGO AI GMBH |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: FERRONI, FRANCESCO Inventor name: POSTELS, JANIS |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20210303 |