EP3973455A1 - Method for assessing a function-specific robustness of a neural network - Google Patents

Method for assessing a function-specific robustness of a neural network

Info

Publication number
EP3973455A1
Authority
EP
European Patent Office
Prior art keywords
training data
activation
neural network
differences
data set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP20724043.3A
Other languages
German (de)
English (en)
Inventor
Nikhil KAPOOR
Peter Schlicht
Nico Maurice SCHMIDT
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Volkswagen AG
Original Assignee
Volkswagen AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Volkswagen AG filed Critical Volkswagen AG

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/776Validation; Performance evaluation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions

Definitions

  • the invention relates to a method for assessing a function-specific robustness of a neural network.
  • the invention also relates to a device for data processing, a computer program product and a computer-readable storage medium.
  • Machine learning for example based on neural networks, has great potential for use in modern driver assistance systems and automated vehicles.
  • Functions based on deep neural networks process raw sensor data (e.g. from cameras, radar or lidar sensors) in order to derive relevant information from it.
  • This information includes, for example, a type and a position of objects in an environment of the motor vehicle, a behavior of the objects or a road geometry or topology.
  • Convolutional neural networks have proven to be particularly suitable for applications in image processing. While these neural networks surpass classical approaches in terms of functional accuracy, they also have disadvantages: interfering influences in the recorded sensor data or adversarial perturbations can, for example, lead to an incorrect classification or semantic segmentation even though the input differs only slightly from data on which a correct segmentation takes place. Knowledge of a function-specific robustness of a neural network with respect to such interference is therefore desirable.
  • A method is known for determining a data-signal interference depending on the data signals of a training data set, the associated desired semantic segmentation, and estimated semantic segmentations of the data signals to which the data-signal interference is applied. Furthermore, a method for assessing the robustness of an actuator control system with a machine learning system is described, in which it is decided, depending on an undisturbed control signal and a disturbed control signal, whether the actuator control system is robust or not.
  • the invention is based on the object of improving a method and a device for assessing a function-specific robustness of a neural network.
  • According to the invention, a method for assessing a function-specific robustness of a neural network is proposed, comprising the steps of: providing the neural network, the neural network being trained or having been trained on the basis of a training data set comprising training data; generating at least one changed training data set by manipulating the training data set, the training data being changed in each case while retaining semantically meaningful content; determining at least one activation difference between an activation of the neural network by the training data of the original training data set and an activation by the respectively corresponding training data of the at least one changed training data set; and providing the at least one determined activation difference.
  • an apparatus for data processing comprising means for carrying out the method steps of the method according to any of the described embodiments.
  • a computer program is created, comprising instructions which, when the computer program is executed by a computer, cause the computer to carry out the method steps of the method according to any of the described embodiments.
  • a computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out the method steps of the method according to any one of the described embodiments.
  • the method and the device make it possible to assess the robustness of a neural network, in particular of a convolution network, with respect to interference.
  • a training data set with which the neural network was trained or is being trained is changed.
  • the changes made to the training data set do not change a semantically meaningful content, but only change semantically insignificant content.
  • a semantically meaningful content denotes in particular a semantic context that is important for a function of the trained neural network.
  • The semantically meaningful content is in particular the content that the function of the trained neural network is intended to recognize or evaluate, for example within the scope of a semantic segmentation or an object detection.
  • the semantically insignificant content is, in particular, content that can ideally be designed as desired without a function of the trained neural network being impaired as a result.
  • The training data set changed in this way and the original training data set are then applied to the trained neural network, that is, the training data and the changed training data are each fed to the trained neural network as input data. Then at least one activation difference is determined between an activation of the neural network by the original training data and an activation by the corresponding changed training data; the original (i.e. undisturbed) and the changed (i.e. disturbed) training data are always considered in pairs.
  • the at least one activation difference determined is then provided and represents a measure of the sensitivity or robustness of the neural network to a change made when the training data set is changed by means of a manipulation method.
  • The neural network can be assessed as more robust, the lower the at least one determined activation difference is.
  • The advantage of the method is that the robustness of a neural network with respect to disturbed input data can be assessed in an improved manner, since an activation or an activation difference, in particular within the neural network, is considered.
  • a neural network is in particular an artificial neural network, in particular a convolutional neural network.
  • The neural network is in particular trained for a certain function, for example the perception of pedestrians in captured camera images.
  • the training data of the training data set can be one-dimensional or multidimensional, the training data being marked (“labeled”) with regard to semantically meaningful content.
  • the training data can be camera images that have been captured and marked with regard to semantic content.
  • a neural network is trained, for example, to recognize pedestrians in captured camera images
  • Camera images used as training data retain their semantically meaningful content if they are changed in such a way that one or more pedestrians present in a captured camera image are not changed, or are changed only in an irrelevant manner.
  • For example, the following manipulation methods can be used: photometric manipulation methods (e.g. a change in brightness, contrast or saturation), noise and blurring (e.g. Gaussian blurring, Gaussian noise, salt & pepper noise) or adversarial manipulation methods (e.g. the "Fast Gradient Sign Method"). Likewise, semantically insignificant content, for example a background that has no influence on the perception of pedestrians, can be removed. Furthermore, colors, textures or other properties of objects and/or surfaces of the objects can be changed: for example, a color of a motor vehicle or a reflection behavior of a surface of the motor vehicle can be changed. The manipulation methods can be carried out individually or in combination with one another, for example an added sensor noise in the training data combined with a change in contrast, brightness and/or saturation.
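As an illustration, the photometric and noise manipulations listed above can be sketched in a few lines of Python. The grayscale image representation (a nested list of 8-bit values), the function names, and the concrete parameter values are assumptions for illustration only, not part of the patent:

```python
import random

def change_brightness(image, delta):
    """Photometric manipulation: shift every pixel by delta, clipped to
    [0, 255]. Object shapes and positions (semantic content) are unchanged."""
    return [[min(255, max(0, p + delta)) for p in row] for row in image]

def add_salt_and_pepper(image, ratio, rng):
    """Noise manipulation: set a fraction of pixels to 0 or 255 at random."""
    out = [row[:] for row in image]
    h, w = len(out), len(out[0])
    for _ in range(int(ratio * h * w)):
        y, x = rng.randrange(h), rng.randrange(w)
        out[y][x] = rng.choice((0, 255))
    return out

# Build one changed training datum from an original one, combining methods.
rng = random.Random(0)
original = [[100, 120], [140, 160]]
changed = add_salt_and_pepper(change_brightness(original, 30), 0.25, rng)
```

The changed image is then paired with its original when both are fed to the network.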
  • An activation is determined in particular on the basis of (inferred) values at the outputs of neurons of the neural network.
  • To determine the activation difference, in particular the (inferred) values at the outputs of the neurons of the neural network are compared with one another in pairs for the original and the changed training data.
  • the method is carried out in particular as a computer-implemented invention.
  • the method is carried out by means of a computing device that can access a memory.
  • The computing device can be designed as a combination of hardware and software, for example as program code that is executed on a microcontroller or microprocessor.
  • In one embodiment, a robustness measure is derived and made available on the basis of the provided at least one activation difference. This can be, for example, a real number which provides an assessment of the robustness and enables a comparison between different neural networks.
  • In one embodiment, activation differences are determined and provided per neuron and/or per area. This makes it possible to identify neurons and/or areas of the neural network that are particularly affected by, or sensitive to, manipulations of the training data. This enables a detailed analysis of sensitive neurons and/or areas of the neural network, which can be taken into account, for example, in a subsequent adjustment of parameters or of a structure or architecture of the neural network. For this purpose, for example, activation differences between the outputs of the neurons of the neural network are formed and provided individually and/or per area. It can be provided, for example, that an L2 distance (L2 norm) is formed between activation vectors which describe an activation of the neurons or areas.
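The L2 distance between paired activation vectors can be sketched as follows; the activation vectors and their values are illustrative placeholders, assuming plain Python lists hold the inferred neuron outputs of one area:

```python
import math

def l2_distance(a, b):
    """L2 norm of the difference between two activation vectors of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Paired activations of one neuron area: undisturbed vs. disturbed input.
act_clean = [0.2, 0.9, 0.1, 0.5]
act_perturbed = [0.25, 0.4, 0.1, 0.8]
diff = l2_distance(act_clean, act_perturbed)  # one activation difference
```

A larger `diff` marks an area that responds more sensitively to the manipulation.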
  • the neural network is designed as a convolution network, it can be provided, for example, that an activation difference is determined and provided for each filter in the convolution network.
  • Activation differences can also be averaged over several neurons and/or over an area, the averaged activation differences being provided. This enables the activation differences, and thus a sensitivity of the neural network, to be analyzed and evaluated more efficiently.
  • For example, a mean activation difference can be calculated for several neurons and/or areas. The averaging can take place in particular with the aid of statistical methods; for example, an expected value can be determined for averaging.
  • It can be provided that the determined activation differences are provided weighted depending on a position of an associated neuron layer within the neural network. This makes it possible to take into account an expected influence on the outputs of the neural network because, as a rule, an increased sensitivity of a neuron layer near the input has a smaller influence on the end result delivered by the neural network than an increased sensitivity of a neuron layer near the output. If activation differences of neurons and/or areas of the neural network are averaged, the weighting can be taken into account in the averaging in accordance with a position of the neuron layer in the neural network.
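The position-dependent weighting could look like the following sketch. The linear depth weighting is an assumption; the text only requires that layers near the output count more than layers near the input:

```python
def weighted_differences(diffs_per_layer):
    """Weight each layer's (averaged) activation difference by its relative
    depth, so layers near the output contribute more to the robustness
    assessment than layers near the input. The linear weight (i+1)/n is
    one possible choice, not one fixed by the patent."""
    n = len(diffs_per_layer)
    weights = [(i + 1) / n for i in range(n)]
    return [w * d for w, d in zip(weights, diffs_per_layer)]

# Three layers, ordered input -> output: the same raw difference of 0.6
# counts progressively more the closer the layer is to the output.
weighted = weighted_differences([0.6, 0.6, 0.6])
```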
  • The mean can be determined with the aid of statistical methods; for example, an expected value can be determined for averaging.
  • It can further be provided that activation differences are averaged over several inference runs, the averaged activation differences being provided in each case. The multiple inference runs are each carried out for training data modified with different manipulation methods.
  • activation differences of individual neurons and / or activation differences averaged over several neurons and / or over areas can also be averaged and taken into account over several interfering influences.
  • the averaging can take place in particular with the aid of statistical methods, for example an expected value can be determined for averaging.
  • The determined activation differences are each provided as a function of an associated manipulation method.
  • For example, the respective activation differences can be determined for all neurons in the neural network and made available in each case as a function of the associated manipulation method. This allows neurons and/or areas of the neural network that are sensitive to specific manipulation methods to be identified. Furthermore, a mean value or expected value of the activation difference for the neurons and/or areas of the neural network can be determined, the respective activation differences being taken into account weighted according to the respectively associated manipulation method.
  • It can also be provided that neurons and/or areas of the neural network are sorted as a function of the activation differences determined for them and an associated ranking is provided. For example, all (individual or averaged) activation differences can be sorted according to their amount and made available according to the resulting ranking. This makes it possible to identify the most sensitively responding areas, either averaged over all manipulation methods or for individual manipulation methods. In a possibly subsequent step for adapting a structure of the neural network, provision can then be made, for example, to change only the upper 5% or 10% of the most sensitive neurons or areas, but to leave the rest of the neural network unchanged.
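The sorting and top-fraction selection described above can be sketched directly; the example activation differences are made up for illustration:

```python
def rank_by_sensitivity(diffs):
    """Return neuron/filter indices sorted by descending activation difference."""
    return sorted(range(len(diffs)), key=lambda i: diffs[i], reverse=True)

def top_fraction(diffs, fraction):
    """Indices of the most sensitive units, e.g. the upper 5% or 10%."""
    ranking = rank_by_sensitivity(diffs)
    k = max(1, int(fraction * len(diffs)))
    return ranking[:k]

# Ten units with illustrative (averaged) activation differences.
diffs = [0.1, 0.9, 0.3, 0.7, 0.05, 0.8, 0.2, 0.6, 0.4, 0.15]
print(top_fraction(diffs, 0.10))  # -> [1]: only the single most sensitive unit
```

Only the units in this top fraction would then be adapted; the rest of the network stays unchanged.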
  • FIG. 1 shows a schematic representation of an apparatus for carrying out the method
  • FIG. 2 shows a schematic flow diagram of an embodiment of the method for assessing a function-specific robustness of a neural network;
  • FIG. 3 shows a schematic flow diagram of a further embodiment of the method for assessing a function-specific robustness of a neural network;
  • FIG. 4 shows a schematic representation of activation differences determined for individual filters of a convolution network;
  • FIG. 5 shows a schematic representation of activation differences determined in each case for individual filters of a convolution network as a function of different manipulation methods.
  • the device 30 comprises means 31 for carrying out the method.
  • The means 31 comprise a computing device 32 and a memory 33. To carry out the method, the computing device 32 can access the memory 33 and carry out computing operations in it.
  • A neural network 1 and a training data set 2 are stored in the memory 33. After the method has been carried out, at least one changed training data set 4, determined activations 5, activation differences 7 and possibly averaged activation differences 10 and a robustness measure 9 are also stored in the memory 33. The activation differences 7 and possibly the averaged activation differences 10 and the robustness measure 9 are output by the computing device 32, for example via a suitable interface (not shown).
  • FIG. 2 shows a schematic flow diagram to illustrate an embodiment of the method for assessing a function-specific robustness of a neural network 1.
  • the neural network 1 has already been trained on the basis of a training data set 2.
  • At least one changed training data set 4 is generated by manipulating the training data set 2; the training data contained in the training data set 2 are changed for this purpose while maintaining semantically meaningful content.
  • The training data set 2 and the changed training data set 4 are each applied to the neural network 1, that is, they are each fed to the neural network 1 as input data, the input data being propagated through the neural network 1 in a feed-forward pass, so that inferred results can be provided at an output of the neural network 1.
  • For example, the neural network 1 receives an undisturbed camera image from the original training data set 2. Furthermore, a manipulated or disturbed camera image from the changed training data set 4 is (subsequently) fed to the neural network 1. Activations 5 are determined for individual neurons and/or areas of the neural network and compared in pairs (undisturbed camera image / disturbed camera image) in a difference formation step 6. This difference formation step 6 supplies activation differences 7 for the neurons and/or areas under consideration. The determined activation differences 7 are then provided.
  • a robustness measure 9 is determined and provided on the basis of the determined activation differences 7 in a robustness measure determination step 8. For example, a real number between 0 and 1 can be assigned to the specific activation differences 7. Such a robustness measure 9 enables a comparison of the robustness between different neural networks.
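One possible mapping of the determined activation differences 7 to a real number between 0 and 1 is sketched below. The concrete 1/(1 + mean) form is an assumption; the description only requires a number that allows comparing the robustness of different networks:

```python
def robustness_measure(activation_diffs):
    """Map a collection of activation differences to a number in (0, 1]:
    1.0 means no activation changed (fully robust); values approach 0 as
    the mean activation difference grows. The 1/(1 + mean) mapping is one
    illustrative choice, not the one fixed by the patent."""
    mean_diff = sum(activation_diffs) / len(activation_diffs)
    return 1.0 / (1.0 + mean_diff)

print(robustness_measure([0.0, 0.0]))  # -> 1.0: perfectly robust network
print(robustness_measure([0.5, 1.5]))  # -> 0.5: noticeably disturbed network
```

Because the mapping is monotone in the mean difference, two networks can be compared directly on this single scalar.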
  • The determined activation differences 7 are provided in a weighted manner as a function of a position of an associated neuron layer within the neural network 1.
  • It can be provided that activation differences 7 determined over several inference runs are averaged, the averaged activation differences 10 being provided in each case.
  • neurons and / or areas of the neural network 1 are sorted as a function of the activation differences 7 determined in each case for them and an associated ranking is provided.
  • FIG. 3 shows a schematic flow diagram of an embodiment of the method for assessing a function-specific robustness of a neural network.
  • a neural network is provided.
  • a structure and weightings of the neural network are stored, for example, in a memory of a computer.
  • the neural network has either already been trained on the basis of a training data set comprising training data or is trained in the context of method step 100 on the basis of the training data set.
  • the neural network is trained, for example, to evaluate captured camera images and to determine whether a pedestrian is shown in the camera images.
  • the input data of the neural network are therefore two-dimensional camera images.
  • In a method step 101, at least one changed training data set is generated by manipulating the training data set; for this purpose, the training data of the training data set are changed with the aid of manipulation methods while in each case retaining semantically meaningful content (e.g. pedestrians in the camera images).
  • As manipulation methods, for example, noise (e.g. Gaussian noise, salt & pepper noise) can be applied.
  • In a method step 102, the training data of the training data set and the respectively associated changed training data of the changed training data set are fed to the neural network as input data, that is, output data are inferred by means of the trained neural network on the basis of these input data.
  • at least one activation difference is determined between activation of the neural network by the training data of the original training data set and activation by the respectively corresponding changed training data of the changed training data sets.
  • A metric for determining the activation difference of the individual filters is, for example, the following:

    d_i = (1/N) · Σ_{n=1}^{N} (1/(W_i · H_i)) · ‖ f_i(x_n) − f_i(x̃_n) ‖

  • Here d_i denotes the activation difference of the filter with the index i, f_i an output function of the filter with the index i, W_i × H_i a size of the output feature map of the filter with the index i, N a number of images, x_n the original camera image (i.e. the original training datum) and x̃_n the changed camera image (i.e. the changed training datum).
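The per-filter metric can be sketched as follows, interpreting the norm as the mean absolute per-pixel difference of the filter's output feature maps, averaged over the N image pairs. This interpretation of the norm and the toy feature maps are assumptions for illustration:

```python
def filter_activation_difference(clean_maps, changed_maps):
    """Activation difference d_i of one filter: mean absolute per-pixel
    difference of its output feature maps (2D lists of size W_i x H_i),
    averaged over N pairs of original/changed camera images."""
    n_images = len(clean_maps)
    total = 0.0
    for clean, changed in zip(clean_maps, changed_maps):
        w, h = len(clean[0]), len(clean)
        total += sum(abs(c - d)
                     for row_c, row_d in zip(clean, changed)
                     for c, d in zip(row_c, row_d)) / (w * h)
    return total / n_images

# N = 1 image pair, one filter with a 2x2 output feature map.
clean = [[[1.0, 2.0], [3.0, 4.0]]]
changed = [[[1.5, 2.0], [3.0, 3.5]]]
print(filter_activation_difference(clean, changed))  # -> 0.25
```

Running this for every filter index i yields the per-filter profile plotted in FIG. 4.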
  • another metric can also be used.
  • An exemplary result of activation differences for each of the filters in a convolution network is shown schematically in FIG. 4, the x-axis 20 showing the index i of the filters in the convolution network and the y-axis 21 showing an activation difference normalized to the maximum activation difference. In the example shown, a brightness in camera images of the training data set was changed. It can be seen that the convolution network is particularly sensitive, i.e. not very robust, in the case of filters around the filter index 1000.
  • the activation differences determined are provided.
  • The activation differences can be output in the form of a digital data packet, for example. In the simplest case, only the activation differences are output, for example as measures in a range from 0 (no activation difference) to 1 (maximum activation difference).
  • In one embodiment, a robustness measure is derived and provided. This can be done, for example, by deriving a characteristic number for all neurons and/or all areas of the neural network. In the simplest case, all (normalized) activation differences can be added up and made available. However, it can also be provided to use a function for deriving the robustness measure which maps the activation differences to a range of real numbers between 0 (neural network is not robust to the disturbances in the input data) and 1 (neural network is completely robust to the disturbances in the input data).
  • It can be provided that the activation differences are provided weighted as a function of a position of an associated neuron layer within the neural network. In particular, activation differences of neurons or areas in neuron layers that are closer to the input of the neural network are weighted less heavily than activation differences of neurons or areas in neuron layers that are closer to the output. A sensitivity of neuron layers closer to the output of the neural network can thereby have a greater influence on the assessment of robustness.
  • It can further be provided that activation differences are averaged over several inference runs, the averaged activation differences being provided in each case. In particular, changed training data that have been changed using different manipulation methods can be averaged over the inference runs. The robustness can then be assessed averaged over the individual manipulation methods.
  • an expected value is determined for the activation differences determined on the basis of the changed training data (i.e. for an individual neuron or for averaged areas).
  • It can be provided that the determined activation differences are each provided as a function of an associated manipulation method. This is shown by way of example in FIG. 5, in which activation differences for individual filters of a convolution network are shown for various manipulation methods according to the metric specified above, the x-axis 20 showing the index i of the filters in the convolution network and the y-axis 21 showing the activation difference normalized to the maximum activation difference. It can be clearly seen that the activation differences for different manipulation methods relate to different areas of the neural network designed as a convolution network. For example, adding noise (FIG. 5: "Gaussian noise" and "Salt & Pepper") affects almost all filters more or less equally.
  • the determined activation differences can be provided in a weighted manner as a function of a respective associated manipulation method.
  • For this purpose, the individual activation differences would, depending on the respectively associated manipulation method, be multiplied by a weighting coefficient, and the products would then be added up for the individual filters.
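This weighted combination over manipulation methods can be sketched as follows; the method names, the per-filter differences, and the weighting coefficients are illustrative placeholders:

```python
def combine_over_methods(per_method_diffs, weights):
    """per_method_diffs maps a manipulation-method name to a list of
    per-filter activation differences; weights maps the same names to
    weighting coefficients. Returns, per filter, the weighted sum of the
    activation differences over all manipulation methods."""
    methods = list(per_method_diffs)
    n_filters = len(per_method_diffs[methods[0]])
    return [sum(weights[m] * per_method_diffs[m][i] for m in methods)
            for i in range(n_filters)]

# Two filters, two manipulation methods with different weights.
diffs = {"gaussian_noise": [0.2, 0.8], "brightness": [0.4, 0.1]}
weights = {"gaussian_noise": 0.5, "brightness": 1.0}
print(combine_over_methods(diffs, weights))  # -> [0.5, 0.5]
```

The combined per-filter values can then be plotted and ranked exactly like the single-method profiles of FIGS. 4 and 5.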
  • the result could be represented graphically in the same way and shows the sensitivity of the neural network averaged over the manipulation methods used.
  • The activation differences shown in FIGS. 4 and 5, each provided with an index i of the filters, can be sorted according to their respective magnitude, and a ranking corresponding to the sorting can be formed.
  • a number of the filters with the greatest activation differences can then be identified and provided, for example in order to change the neural network on the basis of this information.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method for assessing a function-specific robustness of a neural network (1), comprising the steps of: providing the neural network (1) trained on the basis of a training data set (2) comprising training data; generating at least one changed training data set (4) by manipulating the training data set (2), the training data being changed in each case while retaining semantically meaningful content; determining at least one activation difference (7) between an activation of the neural network (1) by the training data of the original training data set (2) and an activation by the corresponding training data of the at least one changed training data set (4); and providing the at least one determined activation difference (7). The invention further relates to a device (30), a computer program product and a computer-readable storage medium.
EP20724043.3A 2019-05-23 2020-04-30 Procédé pour évaluer une robustesse spécifique de la fonction d'un réseau neuronal Pending EP3973455A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102019207575.4A DE102019207575A1 (de) 2019-05-23 2019-05-23 Verfahren zum Beurteilen einer funktionsspezifischen Robustheit eines Neuronalen Netzes
PCT/EP2020/062110 WO2020233961A1 (fr) 2019-05-23 2020-04-30 Procédé pour évaluer une robustesse spécifique de la fonction d'un réseau neuronal

Publications (1)

Publication Number Publication Date
EP3973455A1 true EP3973455A1 (fr) 2022-03-30

Family

ID=70554037

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20724043.3A Pending EP3973455A1 (fr) 2019-05-23 2020-04-30 Procédé pour évaluer une robustesse spécifique de la fonction d'un réseau neuronal

Country Status (5)

Country Link
US (1) US20220318620A1 (fr)
EP (1) EP3973455A1 (fr)
CN (1) CN113826114A (fr)
DE (1) DE102019207575A1 (fr)
WO (1) WO2020233961A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102021200215A1 (de) 2021-01-12 2022-07-14 Zf Friedrichshafen Ag Ermitteln einer Konfidenz eines künstlichen neuronalen Netzwerks

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10316381A1 (de) * 2003-04-10 2004-10-28 Bayer Technology Services Gmbh Verfahren zum Training von neuronalen Netzen
DE102018200724A1 (de) 2017-04-19 2018-10-25 Robert Bosch Gmbh Verfahren und Vorrichtung zum Verbessern der Robustheit gegen "Adversarial Examples"

Also Published As

Publication number Publication date
DE102019207575A1 (de) 2020-11-26
US20220318620A1 (en) 2022-10-06
CN113826114A (zh) 2021-12-21
WO2020233961A1 (fr) 2020-11-26


Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20211223

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)