WO2020216622A1 - Detection and correction of noise in labels of training data for trainable modules - Google Patents
Detection and correction of noise in labels of training data for trainable modules
- Publication number
- WO2020216622A1 (Application PCT/EP2020/060006)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- learning
- variable values
- output variable
- learning data
- training
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
Definitions
- the present invention relates to the training of trainable modules, such as are used, for example, for classification tasks and / or object recognition in at least partially automated driving.
- the driving of a vehicle in traffic by a human driver is usually trained by repeatedly confronting a learner driver with a certain canon of situations as part of his training.
- the learner driver has to react to these situations and receives feedback from comments or even intervention by the driving instructor as to whether his reaction was correct or incorrect.
- This training with a finite number of situations is intended to enable the learner driver to master even unfamiliar situations while driving the vehicle independently.
- trainable modules can be trained in a very similar way.
- These modules receive, for example, sensor data from the vehicle environment as input variables and, as output variables, supply control signals with which the operation of the vehicle is intervened, and / or preliminary products from which such control signals are formed.
- a classification of objects in the vicinity of the vehicle can be such a preliminary product.
- for this training, a sufficient number of learning data sets is required, each of which includes learning input variable values and associated learning output variable values.
- the learning input variable values can include images and can be labeled as learning output variable values with the information about which objects are contained in the images.
- the trainable module translates one or more input variables into one or more output variables.
- a trainable module is viewed in particular as a module that embodies a function parameterized with adaptable parameters, with great power of generalization.
- the parameters can in particular be adapted in such a way that when learning input variable values are input into the module, the associated learning output variable values are reproduced as well as possible.
- the trainable module can in particular contain an artificial neural network, ANN, and / or it can be an ANN.
- the training takes place on the basis of learning data sets which contain learning input variable values and associated learning output variable values as labels.
- the learning input variable values include measurement data obtained through a physical measurement process and / or through a partial or complete simulation of such a measurement process and / or through a partial or complete simulation of a technical system that can be observed with such a measurement process.
- the associated learning output variable values are not immediately available as labels, but these labels must be determined in a process that is more or less complex depending on the technical application. Most of the time, this process requires human work and is accordingly prone to errors.
- the term “learning data set” does not designate the entirety of all available learning data, but a combination of one or more learning input variable values and the learning output variable values assigned to precisely these learning input variable values as labels.
- for a trainable module designed for classification, a learning data set can comprise, for example, an image as a matrix of learning input variable values in combination with the softmax scores that the trainable module should ideally generate from it as a vector of learning output variable values.
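- For illustration only (not part of the original disclosure), such a learning data set could be represented as follows; this is a minimal Python sketch, and the class and field names are freely chosen assumptions:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class LearningDataSet:
    """One learning data set: learning input variable values plus their label."""
    image: np.ndarray           # learning input variable values, e.g. shape (H, W, C)
    softmax_scores: np.ndarray  # learning output variable values, e.g. a one-hot vector


# Example: a 32x32 RGB image labeled as class 2 of 10 possible classes.
example = LearningDataSet(
    image=np.zeros((32, 32, 3), dtype=np.float32),
    softmax_scores=np.eye(10, dtype=np.float32)[2],
)
```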
- a plurality of modifications of the trainable module are each pre-trained with at least a subset of the learning data records.
- the modifications differ so widely that they are not transferred congruently into one another as the learning progresses.
- the modifications can be structurally different, for example.
- several modifications of ANNs can be generated by deactivating different neurons as part of a "dropout".
- the modifications can also be generated, for example, by pre-training with sufficiently different subsets of the total learning data sets that are present, and / or by pre-training based on sufficiently different initializations.
- the modifications can, for example, be pre-trained independently of one another. However, it is also possible, for example, to bundle the pre-training in that only one trainable module or a modification is trained and further modifications are only generated from this module or this modification after this training has been completed.
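- A minimal sketch of how such modifications might be generated, assuming PyTorch and a small fully connected classifier; the architecture, seeds, and dropout rates are illustrative assumptions rather than anything prescribed by the source:

```python
import torch
from torch import nn


def make_modification(seed: int, p_dropout: float) -> nn.Module:
    # A different seed yields a different initialization; a different dropout
    # rate deactivates different neurons ("dropout" modifications).
    torch.manual_seed(seed)
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 128),
        nn.ReLU(),
        nn.Dropout(p=p_dropout),
        nn.Linear(128, 10),
        nn.Softmax(dim=1),
    )


# Five modifications that will not merge congruently during training:
modifications = [make_modification(seed=s, p_dropout=0.1 + 0.05 * s)
                 for s in range(5)]
```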
- learning input variable values of at least one learning data set are fed to all modifications as input variables. These identical learning input variable values are translated by the different modifications into different output variable values.
- a measure of the uncertainty of these output variable values is determined from the deviation of these output variable values from one another.
- the output variable values can be, for example, softmax scores, which indicate the probabilities with which the learning input variable values are classified into each of the possible classes.
- Any statistical function can be used to determine the uncertainty from a large number of output variable values.
- Examples of such statistical functions are the variance, the standard deviation, the mean value, the median, a suitably chosen quantile, the entropy and the variation ratio.
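- For illustration (not part of the original disclosure), a minimal Python sketch of several of these measures, assuming that the softmax scores which the modifications supply for one learning data set have been stacked into a single array:

```python
import numpy as np


def uncertainty_measures(softmax_stack: np.ndarray) -> dict:
    """softmax_stack: shape (n_modifications, n_classes) for one learning data set."""
    mean_scores = softmax_stack.mean(axis=0)
    # Variance of the scores across modifications, summed over classes.
    variance = float(softmax_stack.var(axis=0).sum())
    # Entropy of the averaged class distribution.
    entropy = float(-(mean_scores * np.log(mean_scores + 1e-12)).sum())
    # Variation ratio: fraction of modifications not voting for the modal class.
    votes = softmax_stack.argmax(axis=1)
    variation_ratio = float(1.0 - np.bincount(votes).max() / len(votes))
    return {"variance": variance, "entropy": entropy,
            "variation_ratio": variation_ratio}
```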
- deviations between output variable values supplied by modifications that were generated in different ways can also be considered separately from one another. For example, the deviations between output variable values supplied by modifications resulting from "dropouts" and the deviations between output variable values supplied by otherwise structurally changed modifications can be evaluated separately.
- the terms “deviations” and “uncertainty” are not restricted to the one-dimensional, univariate case, but encompass quantities of any dimension. For example, several uncertainty features can be combined into a multivariate uncertainty.
- this increases the accuracy of the differentiation between learning data sets with an appropriate assignment of the learning output variable values to the learning input variable values (i.e. "appropriately labeled" learning data sets) on the one hand and learning data sets with an incorrect assignment (i.e. "incorrectly labeled" learning data sets) on the other hand.
- in response to the uncertainty meeting a predetermined criterion, the weighting of the learning data set in the training of the trainable module is adjusted, and/or one or more learning output variable values of the learning data set are adjusted. It was recognized that, with an appropriate assignment of the learning output variable values to the learning input variable values, the different modifications of the trainable module tend to agree in their output variable values, whereas an incorrect assignment tends to produce larger deviations and thus a higher uncertainty.
- the adjustment of the weighting can go so far that a learning data set recognized as incorrectly labeled is no longer taken into account in further training.
- one or more learning output variable values of the learning data set can also be adapted.
- a partially or completely incorrect assignment of learning output variable values to learning input variable values is the ultimate cause of a greater uncertainty in the output variable values.
- An adjustment of the learning output values tackles the evil of the high uncertainty, so to speak, at the root.
- the adaptation of the learning output variable value can be aimed specifically at reducing the uncertainty.
- the learning output variable value can be varied in accordance with any optimization algorithm or any other search strategy with the optimization aim of reducing the uncertainty.
- Such a correction is self-consistent and does not require any prior knowledge as to which new learning output variable is correct.
- a combination of both measures can be useful, for example, if the efforts to obtain more accurate learning output variable values (labels) are only successful for some of the learning data sets. Learning data sets whose learning output variable values prove to be incorrect and also cannot be improved can then, for example, be underweighted or completely disregarded.
- adaptable parameters that characterize the behavior of the trainable module are optimized. The aim of this optimization is to improve the value of a cost function.
- the cost function measures the extent to which the trainable module maps the learning input variable values contained in the learning data sets to the associated learning output variable values.
- the cost function measures how well the learning output variable values are reproduced.
- Any desired error measure can be used to assess the extent to which the learning output variable values are reproduced as desired, such as the cross entropy or the least-squares method.
- This process is modified in such a way that, in response to the fact that the specified criterion is met, the weighting of at least one learning data set in the cost function is reduced.
- a learning data set can be weighted less, the higher the uncertainty of the output variable values determined from its learning input variable values. This can go so far that, in response to the uncertainty fulfilling a given criterion, the learning data set falls out of the cost function entirely, i.e. is no longer used for further training of the trainable module. This is based on the insight that the additional benefit of taking a further learning data set into account can be outweighed by the damage that an imprecise or incorrect learning output variable value causes in the training process.
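- A minimal sketch of such a down-weighting in the cost function, assuming PyTorch and cross entropy as the error measure; a weight of zero drops a learning data set from the cost function entirely:

```python
import torch
import torch.nn.functional as F


def weighted_cross_entropy(logits: torch.Tensor,
                           labels: torch.Tensor,
                           weights: torch.Tensor) -> torch.Tensor:
    # Per-sample cross entropy, scaled by a per-sample weight; setting a
    # weight to 0 removes that learning data set from further training.
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (weights * per_sample).sum() / weights.sum().clamp(min=1e-12)
```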
- the specified criterion can in particular include, for example, that the uncertainty is greater or smaller than a specified quantile of the uncertainties determined from the learning input variable values of a large number of other learning data sets, or greater than a predetermined threshold value.
- the criterion can also include that the learning data set belongs to those k % of learning data sets whose learning input variable values are translated into output variable values with the highest uncertainties. This means that the uncertainties for which these k % of learning data sets are responsible are at least as great as the uncertainties that arise from all other learning data sets not belonging to the k %. This is based on the insight that selective measures, such as adjusting the weighting and/or the label, have the greatest effect on those learning data sets that are labeled most inappropriately.
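- A sketch of this selection criterion (for illustration only; k = 10 is an arbitrary choice):

```python
import numpy as np


def flag_top_k_percent(uncertainties: np.ndarray, k: float = 10.0) -> np.ndarray:
    # Boolean mask marking the k% of learning data sets whose learning input
    # variable values were translated into output variable values with the
    # highest uncertainties.
    threshold = np.percentile(uncertainties, 100.0 - k)
    return uncertainties >= threshold
```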
- the adaptation of one or more learning output variable values can in particular include requesting the input of at least one new learning output variable value via an input device.
- the assignment of the new learning output variable value to the learning input variable values can be carried out by an expert in the correct interpretation of the learning input variable values.
- images obtained by medical imaging can be labeled by specialists with regard to the anomaly, the presence or form of which is to be determined with the trainable module.
- Measurement data recorded on mass-produced products can be labeled for these products by experts who, for example, saw open a specimen of the product and examine it from the inside.
- Images of traffic situations that the trainable module is supposed to classify for the purpose of at least partially automated driving can be labeled, for example, by an expert in traffic law who also knows how to correctly interpret complicated traffic situations with a combination of several traffic signs.
- the adaptation of one or more learning output variable values can also include setting, as the new learning output variable value of this learning data set, at least one output variable value that the trainable module and/or a modification of this trainable module assigns to the learning input variable values of the learning data set at its current training level, and/or an aggregation of several such output variable values, for example a mean value or a median.
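- A sketch of such an aggregation, assuming softmax score vectors; the median is re-normalized so that the new label remains a valid score vector:

```python
import numpy as np


def new_label_from_ensemble(softmax_stack: np.ndarray,
                            use_median: bool = True) -> np.ndarray:
    """softmax_stack: shape (n_modifications, n_classes) at the current training level."""
    agg = (np.median(softmax_stack, axis=0) if use_median
           else softmax_stack.mean(axis=0))
    return agg / agg.sum()  # re-normalize to a valid score vector
```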
- the uncertainty whose course is examined can be measured with a different statistical measure than the uncertainty with which it is established in the first place that there is a need for improvement with regard to at least one learning output variable value. It was recognized that a minimum of the uncertainty as a function of the epoch number marks the transition between the aforementioned learning of the basics, which is not yet affected by the influence of false labels, and the increasing dilution of this knowledge by learning from the false labels.
- the training level that is optimal in this respect can, alternatively or in combination with this, also be determined via the accuracy with which validation input variable values from validation data sets are mapped to the associated validation output variable values.
- the validation data records are data records which, analogous to the learning data records, contain an assignment of input variable values to target output variable values.
- the trainable module is deliberately not trained on the validation data sets. Therefore, the accuracy determined with the help of the validation data sets measures the ability of the trainable module to generalize the knowledge learned from the learning data sets. Good values for the accuracy therefore cannot be achieved by merely memorizing the learning data sets.
- the validation data records can also advantageously be characterized in that the assignment of the validation output variable values to the validation input variable values comes from a particularly reliable source.
- the validation data records can therefore be labeled in particular, for example, with a particularly reliable and therefore complex method, by a specially designated expert for the respective application, and / or according to a “gold standard” recognized for the respective application.
- the effort per label is therefore usually considerably greater for the validation data sets than for the learning data sets. Accordingly, there are typically significantly fewer validation data sets available than learning data sets.
- if the correct learning output variable values are assigned to the learning input variable values in all learning data sets, then it is to be expected that, when testing the trainable module with the validation data sets, the accuracy increases monotonically with increasing epoch number e of the training until it eventually goes into saturation. If, on the other hand, the assignment is incorrect for some of the learning data sets, then the accuracy assumes a maximum after the said learning of the basics, before it decreases again as a result of learning from the wrong labels.
- the accuracy with which validation input variable values from validation data sets are mapped to the associated validation output variable values then assumes a maximum at an epoch number e2. The accuracy can be assessed with any error measure, such as a mean squared error.
- the qualitative course of both the uncertainty and the accuracy changes significantly as a function of the epoch number e if a certain proportion of incorrectly labeled learning data sets is added to correctly labeled learning data sets. Therefore, in a further particularly advantageous embodiment, the course of the uncertainty, and/or of the accuracy, as a function of the epoch number e of the pre-training is evaluated to determine whether incorrectly labeled learning data sets are present.
- if, for example, the accuracy predominantly increases with the epoch number e of the pre-training, it can be established that the assignment of the learning output variable values to the learning input variable values is essentially correct in all learning data sets.
- “predominantly” is to be understood as meaning, for example, an essentially monotonic curve which converges to a constant value. Small statistical fluctuations in the other direction do not affect this.
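- One simple way to operationalize this check on the course 15(e) is sketched below; the smoothing window and tolerance are illustrative assumptions, not values from the source:

```python
import numpy as np


def labels_essentially_correct(accuracy_per_epoch, window=3, tolerance=0.005):
    # Smooth the course 15(e) to suppress small statistical fluctuations,
    # then test whether its maximum lies at (or near) the final epoch.
    a = np.convolve(accuracy_per_epoch, np.ones(window) / window, mode="valid")
    return a.max() - a[-1] <= tolerance  # True: predominantly increasing
```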
- the course of the uncertainty as a function of the epoch number e of the pre-training can also be evaluated only for those uncertainties that are greater or smaller than a predetermined quantile of the uncertainties determined from the learning input variable values of a large number of learning data sets, or greater than a given threshold value.
- the course of the uncertainty can, for example, be evaluated with a summary statistic. For example, a mean, a median, a variance and/or a standard deviation of the uncertainties of the output variable values can be used.
- the summary statistics can, for example, be kept separately for those output variable values which are applicable or not applicable in the light of the respective learning output variable values in accordance with a predetermined criterion. If the trainable module is designed as a classifier, for example, a first mean value or median can be formed from the uncertainties of those output variable values which assign the respective learning input variable values to the correct class.
- a second mean value or median can be formed from the uncertainties of those output variable values that assign the respective learning input variable values to the wrong class.
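- A sketch of such separate statistics for a classifier, assuming the per-data-set uncertainties and class assignments have already been computed:

```python
import numpy as np


def split_uncertainty_means(uncertainties, predicted_classes, labeled_classes):
    # Mean uncertainty for output variable values hitting the correct class
    # vs. those assigning the wrong class, kept separately.
    u = np.asarray(uncertainties, dtype=float)
    hit = np.asarray(predicted_classes) == np.asarray(labeled_classes)
    mean_correct = u[hit].mean() if hit.any() else float("nan")
    mean_wrong = u[~hit].mean() if (~hit).any() else float("nan")
    return mean_correct, mean_wrong
```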
- the output variables supplied by the trainable module can in particular contain a classification, regression and/or semantic segmentation of the input measurement data. It is precisely in determining these output variables that incorrect labels in the learning data sets have a particularly strong effect.
- the invention also relates to a parameter set with parameters which characterize the behavior of a trainable module and were obtained with the method described above. These parameters can be, for example, weights with which the inputs of neurons or other processing units of an ANN are combined. This parameter set embodies the effort that has been invested in the training and is therefore an independent product.
- the invention also relates to a further method which continues the functional chain up to the control of physical systems.
- in this method, a trainable module is first trained with the method described above. This trainable module is then operated by feeding it input variable values.
- these input variable values include measurement data obtained by a physical measurement process, and/or by a partial or complete simulation of such a measurement process, and/or by a partial or complete simulation of a technical system that can be observed with such a measurement process.
- depending on the output variable values supplied by the trainable module, a vehicle, and/or a classification system, and/or a system for quality control of mass-produced products, and/or a system for medical imaging, is controlled with a control signal.
- the methods can be implemented entirely or partially by computer.
- the invention therefore also relates to a computer program with machine-readable instructions which, when executed on one or more computers, cause the computer or computers to carry out one of the methods described. In this sense, control units for vehicles and embedded systems for technical devices that are likewise able to execute machine-readable instructions are also to be regarded as computers.
- the invention also relates to a machine-readable storage medium and/or a download product with the computer program.
- a download product is a digital product that can be transmitted over a data network, i.e. downloaded by a user of the data network, and that can be offered for sale, for example, in an online shop for immediate download.
- Figure 1: exemplary embodiment of the method 100 for training a trainable module 1;
- FIG. 2 exemplary embodiment of the method 200 with continuation of the functional chain up to the control of physical systems 50, 60, 70, 80;
- FIG. 3 recognition of whether incorrectly labeled learning data records are still available via the course of the uncertainty 13b as a function of the epoch number e;
- FIG. 4 recognition of whether incorrectly labeled learning data records are still available via the course of the accuracy 15 as a function of the epoch number e.
- FIG. 1 shows an exemplary embodiment of the method 100 for training a trainable module 1.
- in step 110, a plurality of modifications 1a-1c of the trainable module 1 are pre-trained with at least a subset of the existing learning data sets 2.
- Each learning data record 2 contains learning input variable values 11a and associated learning output variable values 13a.
- in step 120, learning input variable values 11a from learning data sets 2 are supplied to all modifications 1a-1c as input variables 11.
- Each modification 1a-1c generates its own output variable values 13 from this.
- in step 125, one or more modifications 1a-1c are tested on the basis of validation data sets 3.
- the modification 1a-1c is supplied with the validation input variable values 11a* of each validation data set 3 as input variables 11.
- the accuracy 15 with which the modification 1a-1c reproduces the respective validation output variable values 13a* from this is determined. This accuracy 15 has, depending on the epoch number e of the pre-training 110, a course 15(e).
- in step 130, a measure of the uncertainty 13b of the output variable values 13 is determined from the deviations of these output variable values 13 from one another.
- the accuracy 15 can be determined from the direct comparison of the output variable values 13 with the learning output variable values 13a, using any desired error measure.
- the uncertainty 13b, its course 13b(e) as a function of the epoch number e of the pre-training 110, and the accuracy 15 can be evaluated in the following steps individually or in combination, as described below.
- in step 140, it is checked for at least one learning data set 2 whether the uncertainty 13b of the output variable values 13, which were determined from the learning input variable values 11a of this learning data set 2, meets a predetermined criterion. If this is the case (truth value 1), then in step 180 the weighting of the learning data set 2 in the training of the trainable module 1 is adapted, and/or one or more learning output variable values 13a of the learning data set 2 are adapted in step 190.
- in step 170, it is evaluated from the course 13b(e) of the uncertainty 13b, and/or from the course 15(e) of the accuracy 15, to what extent the assignment of the learning output variable values 13a to the learning input variable values 11a in the learning data sets 2 is correct overall. That is, it is checked whether the existing learning data sets 2 are essentially all correctly labeled, or whether incorrectly labeled learning data sets 2 are mixed in with the correctly labeled ones to a significant extent.
- the focus can be placed specifically on those uncertainties 13b that are greater or smaller than a predetermined quantile of the uncertainties 13b determined from the learning input variable values 11a of a plurality of learning data sets 2, or greater than a predetermined threshold value. For example, only the largest 25 % of the uncertainties 13b can be taken into account.
- if, in step 180, the weighting of the learning data set 2 is adjusted in the training of the trainable module 1, this can be integrated into the training via a cost function 14, for example.
- adjustable parameters 12 which characterize the behavior of the trainable module 1 are optimized with the aim of improving the value of the cost function 14.
- the cost function 14 measures the extent to which the trainable module 1 maps the learning input variable values 11a contained in learning data sets 2 to the associated learning output variable values 13a.
- the weighting of at least one learning data set 2 in the cost function 14 is then reduced. According to block 182a, for example, this can go so far that the learning data set 2 is no longer taken into account in the cost function 14 at all.
- According to block 191, at least one new learning output variable value 13a can, for example, be requested via an input device. The new learning output variable value 13a can then, for example, be determined and entered by an expert on the basis of the learning input variable values 11a. Alternatively or in combination with this, at least one output variable value 13 that the trainable module 1 and/or one of its modifications 1a-1c determines from the learning input variable values 11a can be set as the new learning output variable value 13a.
- in this way, the trainable module 1 can in a certain sense exercise a "self-healing power". As explained above, this works particularly well if the trainable module 1 has not yet learned too much from incorrectly labeled learning data sets 2, that is to say the training status of a suitable epoch (for example e1 or e2) is selected for this purpose.
- FIG. 2 shows an exemplary embodiment of the method 200.
- a trainable module 1 is trained using the method 100 described above.
- the module trained in this way is operated in step 220 by supplying it with input variable values 11 comprising physically recorded and/or simulated measurement data relating to a technical system.
- a control signal 5 is formed in step 230 from the output variable values 13 then supplied by the trainable module 1.
- a vehicle 50 and / or a classification system 60 and / or a system 70 for quality control of mass-produced products and / or a system 80 for medical imaging is controlled with this control signal 5.
- FIG. 3 illustrates by way of example how, on the basis of the course 13b(e) of the uncertainty 13b as a function of the epoch number e, it can be determined whether the existing learning data sets 2 are essentially all correctly labeled.
- Curve a represents the case in which both correctly labeled and incorrectly labeled learning data sets 2 are present.
- the uncertainty 13b initially decreases in the course of the training, since, starting from a generally random initialization, the training first learns the basics from the appropriately labeled learning data sets 2.
- FIG. 4 illustrates by way of example how, on the basis of the course 15(e) of the accuracy 15 determined with the validation data sets 3 as a function of the epoch number e, it can be determined whether the existing learning data sets 2 are essentially all correctly labeled.
- curve a represents the case in which both correctly labeled and incorrectly labeled learning data records 2 are present.
- the accuracy 15 initially increases because the positive effect of the appropriately labeled learning data sets 2 outweighs the negative effect of the contradictions with the incorrectly labeled learning data sets 2. After this maximum, the accuracy 15 decreases again as the incorrectly labeled learning data sets 2 are increasingly learned.
- Curve b represents the case in which essentially only appropriately labeled learning data sets 2 are available. Here the positive learning effect continues until the accuracy 15 finally converges towards saturation.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a method (100) for training a trainable module (1), comprising the following steps: • a plurality of modifications (1a-1c) of the trainable module (1), which differ from one another such that they are not merged congruently into one another as the training progresses, are pre-trained (110) with at least a subset of the learning data sets (2); • the learning input variable values (11a) of at least one learning data set (2) are fed (120) as input variables (11) to all modifications (1a-1c); • from the deviation of the output variable values (13) into which the modifications (1a-1c) translate the learning input variable values (11a), a measure of the uncertainty (13b) of these output variable values (13) is determined (130); • in response to the uncertainty (13b) meeting a predefined criterion (140), the weighting of the learning data set (2) in the training of the trainable module (1) is adapted (180) and/or one or more learning output variable values (13a) of the learning data set (2) are adapted (190). The invention also relates to a method (200) in which the trainable module is subsequently operated (220) and controls a system (50, 60, 70, 80) with a control signal (5).
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102019206049.8 | 2019-04-26 | ||
DE102019206049.8A DE102019206049A1 (de) | 2019-04-26 | 2019-04-26 | Erkennung und Behebung von Rauschen in Labels von Lern-Daten für trainierbare Module |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020216622A1 (fr) | 2020-10-29 |
Family
ID=70228059
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2020/060006 WO2020216622A1 (fr) | Detection and correction of noise in labels of training data for trainable modules | 2019-04-26 | 2020-04-08 |
Country Status (2)
Country | Link |
---|---|
DE (1) | DE102019206049A1 (fr) |
WO (1) | WO2020216622A1 (fr) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117724851A (zh) * | 2024-02-07 | 2024-03-19 | 腾讯科技(深圳)有限公司 | 数据处理方法、装置、存储介质及设备 |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AT524821A1 (de) * | 2021-03-01 | 2022-09-15 | Avl List Gmbh | Verfahren und System zum Erzeugen von Szenariendaten zum Testen eines Fahrerassistenzsystems eines Fahrzeugs |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180240031A1 (en) * | 2017-02-17 | 2018-08-23 | Twitter, Inc. | Active learning system |
- 2019
- 2019-04-26 DE DE102019206049.8A patent/DE102019206049A1/de active Pending
- 2020
- 2020-04-08 WO PCT/EP2020/060006 patent/WO2020216622A1/fr active Application Filing
Non-Patent Citations (2)
Title |
---|
BRODLEY C E ET AL: "Identifying Mislabeled Training Data", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 1 June 2011 (2011-06-01), XP080511687, DOI: 10.1613/JAIR.606 * |
IVÁN CANTADOR ET AL: "Boosting Parallel Perceptrons for Label Noise Reduction in Classification Problems", 5 June 2005, 12TH EUROPEAN CONFERENCE ON COMPUTER VISION, ECCV 2012; [LECTURE NOTES IN COMPUTER SCIENCE], SPRINGER BERLIN HEIDELBERG, BERLIN GERMANY, PAGE(S) 586 - 593, ISBN: 978-3-642-36741-0, ISSN: 0302-9743, XP019011275 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117724851A (zh) * | 2024-02-07 | 2024-03-19 | 腾讯科技(深圳)有限公司 | 数据处理方法、装置、存储介质及设备 |
CN117724851B (zh) * | 2024-02-07 | 2024-05-10 | 腾讯科技(深圳)有限公司 | 数据处理方法、装置、存储介质及设备 |
Also Published As
Publication number | Publication date |
---|---|
DE102019206049A1 (de) | 2020-10-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE102019204139A1 (de) | Training für künstliche neuronale Netzwerke mit besserer Ausnutzung der Lern-Datensätze | |
WO2019081545A1 (fr) | Procédé et dispositif destinés à produire automatiquement un réseau neuronal artificiel | |
DE102019124018A1 (de) | Verfahren zum Optimieren von Tests von Regelsystemen für automatisierte Fahrdynamiksysteme | |
WO2020216622A1 (fr) | Détection et suppression de parasites dans des étiquettes de données d'apprentissage pour des modules à capacité d'apprentissage | |
WO2021058223A1 (fr) | Procédé d'application de fonctions de conduite automatisée de manière efficace et simulée | |
DE102017209262A1 (de) | Verfahren und Vorrichtung zur automatischen Gestenerkennung | |
DE102019134053A1 (de) | Verfahren zur kontinuierlichen Absicherung im Fahrversuch applizierter automatisierter Fahrfunktionen | |
EP3959660A1 (fr) | Apprentissage de modules aptes à l'apprentissage avec des données d'apprentissage dont les étiquettes sont bruitées | |
DE102018209108A1 (de) | Schnelle Fehleranalyse für technische Vorrichtungen mit maschinellem Lernen | |
EP3748574A1 (fr) | Correction adaptative des données mesurées en fonction de différents types de défaillances | |
EP3605404B1 (fr) | Procédé et dispositif d'entraînement d'une routine d'apprentissage mécanique permettant de commander un système technique | |
DE102019206052A1 (de) | Situationsadaptives Trainieren eines trainierbaren Moduls mit aktivem Lernen | |
DE102022202985A1 (de) | Nullschuss-Klassifikation von Messdaten | |
WO2023280531A1 (fr) | Procédé implémenté par ordinateur, programme informatique et dispositif pour générer une copie de modèle reposant sur des données dans un capteur | |
WO2022135959A1 (fr) | Dispositif pour une classification et une régression robustes de séquences temporelles | |
EP1157317B1 (fr) | Procede et dispositif pour la reduction d'une pluralite de valeurs mesurees d'un systeme industriel | |
DE102021109130A1 (de) | Verfahren zum Testen eines Produkts | |
DE102019206050A1 (de) | Auswahl neuer ungelabelter Lern-Datensätze für das aktive Lernen | |
DE102019130484A1 (de) | Verfahren und Vorrichtung zum Anlernen eines Ensembles von neuronalen Netzen | |
EP1157311B1 (fr) | Procede et dispositif pour l'etude de conception d'un systeme industriel | |
DE102021132542A1 (de) | Verfahren zum bereitstellen eines bit-flip-fehlerrobusten; perturbationsrobusten und komprimierten neuronalen netzes; computerprogramm; fahrerassistenzsystem | |
DE102016113310A1 (de) | Verfahren zur Bewertung von Aussagen einer Mehrzahl von Quellen zu einer Mehrzahl von Fakten | |
DE102022205824A1 (de) | Verfahren und Vorrichtung zum Trainieren eines neuronalen Netzes | |
DE102020213830A1 (de) | Verfahren und System zur Bereitstellung einer Diagnoseinformation | |
DE102021109128A1 (de) | Verfahren zum Testen eines Produkts |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20717860; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 20717860; Country of ref document: EP; Kind code of ref document: A1 |