EP3931760A1 - Apprentissage de réseaux neuronaux pour une mise en oeuvre efficace sur un matériel - Google Patents
Apprentissage de réseaux neuronaux pour une mise en oeuvre efficace sur un matériel
- Publication number
- EP3931760A1 (application EP20705699A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- neurons
- ann
- training
- quality
- contributions
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 238000012549 training Methods 0.000 title claims abstract description 59
- 238000013528 artificial neural network Methods 0.000 title claims abstract description 10
- 210000002569 neuron Anatomy 0.000 claims abstract description 159
- 238000000034 method Methods 0.000 claims abstract description 47
- 238000011156 evaluation Methods 0.000 claims abstract description 5
- 238000004590 computer program Methods 0.000 claims description 7
- 230000008859 change Effects 0.000 claims description 4
- 238000005457 optimization Methods 0.000 claims description 3
- 238000012544 monitoring process Methods 0.000 claims description 2
- 238000003908 quality control method Methods 0.000 claims description 2
- 230000009849 deactivation Effects 0.000 description 10
- 230000006870 function Effects 0.000 description 10
- 230000004913 activation Effects 0.000 description 7
- 238000001994 activation Methods 0.000 description 7
- 238000012545 processing Methods 0.000 description 7
- 230000008569 process Effects 0.000 description 5
- 230000006835 compression Effects 0.000 description 4
- 238000007906 compression Methods 0.000 description 4
- 230000004044 response Effects 0.000 description 4
- 230000000694 effects Effects 0.000 description 3
- 239000000463 material Substances 0.000 description 3
- 230000008901 benefit Effects 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 2
- 238000000053 physical method Methods 0.000 description 2
- 238000013138 pruning Methods 0.000 description 2
- 238000001228 spectrum Methods 0.000 description 2
- 230000002411 adverse Effects 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 230000006399 behavior Effects 0.000 description 1
- 230000015572 biosynthetic process Effects 0.000 description 1
- 230000001955 cumulated effect Effects 0.000 description 1
- 230000005611 electricity Effects 0.000 description 1
- 238000005265 energy consumption Methods 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 238000009434 installation Methods 0.000 description 1
- 239000013598 vector Substances 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
Definitions
- the present invention relates to the training of neural networks with the aim of being able to implement these neural networks efficiently on hardware, for example for use on board vehicles.
- An artificial neural network (ANN) comprises an input layer, several processing layers and an output layer. Input variables are read into the ANN at the input layer and, on their way through the processing layers to the output layer, are processed by a processing chain that is usually parameterized.
- during training, those values are determined for the parameters of the processing chain with which the processing chain optimally maps a set of learning values for the input variables onto an associated set of learning values for the output variables.
- the strength of ANNs lies in the fact that they can process very high-dimensional data, such as high-resolution images, massively in parallel.
- the price for this parallel processing is a high hardware expenditure for implementing an ANN.
- graphics processors (GPUs) with a large amount of memory are typically required.
- US Pat. No. 5,636,326 A discloses subjecting the weights of connections between neurons in the fully trained ANN to a thinning process ("pruning"). This allows the number of connections and neurons to be greatly reduced without much loss of accuracy.
- ANN: artificial neural network
- the network architecture and/or the neurons in one or more layers of the ANN should be optimally used and/or fully utilized.
- the exact definition of "efficient" results from the specific application in which the ANN is used.
- at any point in time during the training, which otherwise takes place in any known manner, a measure of the quality that the ANN and/or a sub-area of the ANN has achieved overall within a previous period is determined.
- the quality can include, for example, training progress, utilization of the neurons of a layer or another sub-area of the ANN, utilization of the neurons of the ANN as a whole, as well as any, for example weighted, combinations thereof.
- the exact definition of “quality” thus also results from the specific application.
- the measure for the quality can comprise, for example, a measure for the training progress of the ANN, a measure of the utilization of the neurons of a layer or of another sub-area of the ANN, and/or a measure of the utilization of the neurons of the ANN as a whole.
- one or more neurons are evaluated on the basis of a measure for their respective quantitative contributions to the determined quality. Measures with which the evaluated neurons are trained in the further course of the training, and/or positions of these neurons in the ANN, are determined based on the evaluations of the neurons. These measures can then be carried out in the further training.
- the positions determined for neurons in the ANN can also continue to apply for the inference phase, i.e. for the later productive operation of the ANN after training.
- the measure for the quality can be evaluated as a weighted or unweighted sum of quantitative contributions from individual neurons.
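As a minimal sketch of this evaluation (the per-neuron contribution values and the weights are illustrative placeholders, not values from the patent), the quality of a layer could be computed as a weighted or unweighted sum of per-neuron contributions:

```python
# Hypothetical quantitative contributions of the four neurons of one layer.
contributions = [0.40, 0.25, 0.05, 0.30]

# Unweighted sum: every neuron's contribution counts equally.
quality_unweighted = sum(contributions)

# Weighted sum: for example, some contributions could carry higher weights
# (these weights are purely illustrative).
weights = [1.0, 0.8, 0.6, 0.4]
quality_weighted = sum(w * c for w, c in zip(weights, contributions))

print(quality_unweighted)
print(quality_weighted)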
- the optimal utilization of neurons and of connections between these neurons can, for example, be included in an optimization goal for training the ANN. Should it become apparent during training that, despite this explicit request, certain neurons or connections between neurons are not optimally utilized, these neurons or connections can be deactivated during training. This has various advantages over a subsequent "pruning" after completion of the training.
- the restriction to the actually relevant neurons and connections is in turn important for the efficient implementation of the fully trained ANN on hardware.
- the specification with regard to the available hardware is often already established before training of the ANN is started.
- the finished ANN is then limited in its size and complexity to these given hardware resources. At the same time, the given hardware must suffice for inference, i.e. for evaluating input variables in operation.
- the deactivation of neurons or connections basically represents an intervention in the ANN. Because this intervention takes place during the ongoing training process, the training process can react to the intervention.
- side effects of the intervention, such as overfitting on the training data, poor generalization of the ANN to unknown situations, or an increased susceptibility of the inference to manipulation by presenting an "adversarial example", can be significantly reduced.
- in contrast to a random deactivation of a certain percentage of neurons during training ("random dropout"), this does not mean that a proportion of the learned information corresponding to this percentage remains unused. The reason for this is that the deactivation of the neurons or connections is motivated from the outset by the lack of relevance of the neurons or connections in question for the quality.
- the previous period advantageously comprises at least one epoch of the training, i.e. a period in which each of the available learning data sets, each comprising learning values for the input variables and associated learning values for the output variables, was used once.
- the determined quantitative contributions of the neurons to the quality can then be better compared. It is quite possible that certain neurons of the ANN are "specialized" in a good treatment of certain situations occurring in the input variables, i.e. that these situations particularly "suit" these neurons. If a period were considered in which predominantly these situations occur, such neurons would be rated disproportionately well. The consideration of at least one epoch corresponds to a fairer examination with a wide range of questions from the entire spectrum of the examination material.
- a change in a cost function (loss function), at the optimization of which the training of the ANN is aimed, is included in the measure of quality over the previous period. This ensures that the actions taken in response to the evaluation of the neurons do not conflict with the ultimate goal of training the ANN.
- the quantitative contributions that a layer k of the ANN has made to the quality of the ANN in this period can be cumulated and/or aggregated.
- for example, an "activation quality" M_k(t) can be assigned to each layer k. M_k(t) contains the corresponding quality of all individual neurons in layer k and depends on the time step t.
- L is the cost function (loss function); ΔL_{t-n} is the difference in the cost function between the time step t-n and the time step t.
- the effects of iteration steps that lie at different points in time are normalized by the decay parameter γ.
- the dropout tensor indicates, for each layer k of the ANN and for each iteration step t-n, which neurons of layer k were active in the iteration step t-n. It thus serves to take into account a possible temporary deactivation of individual neurons for individual iteration steps (dropout). It is not mandatory that dropout be used when training the ANN.
- the quantities M_k can be vectors or matrices.
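The patent's exact formula is not reproduced in this text; the following is only a sketch consistent with the definitions above (a decay parameter γ, loss differences ΔL_{t-n}, and a per-step dropout mask for layer k), with all numeric values chosen for illustration:

```python
def activation_quality(loss_history, dropout_masks, gamma=0.9):
    """Sketch of a per-neuron 'activation quality' for one layer k.

    loss_history[n]  : loss value at iteration step t-n (index 0 = current step t)
    dropout_masks[n] : 0/1 flags saying which neurons of layer k were
                       active at step t-n (the dropout tensor for layer k)
    gamma            : decay parameter normalising older iteration steps

    Returns one value per neuron: the decayed sum of loss improvements
    ΔL_{t-n}, credited only to neurons that were active at each step.
    """
    n_neurons = len(dropout_masks[0])
    quality = [0.0] * n_neurons
    current_loss = loss_history[0]
    for n in range(1, len(loss_history)):
        delta_l = loss_history[n] - current_loss  # ΔL: improvement since step t-n
        for j in range(n_neurons):
            quality[j] += (gamma ** n) * delta_l * dropout_masks[n][j]
    return quality

# Illustrative numbers (not from the patent): the loss fell from 1.0 to 0.6;
# neuron 0 was always active, neuron 1 was dropped out at step t-1.
losses = [0.6, 0.8, 1.0]
masks = [[1, 1], [1, 0], [1, 1]]
print(activation_quality(losses, masks, gamma=0.5))
```

Neuron 0 is credited with both loss improvements, neuron 1 only with the one during which it was active, so neuron 0 ends up with the higher quality value.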
- the described "activation quality" can alternatively also be used as a function of the signals and weights express for all neurons of layer k.
- the signals are in the broadest sense activations of neurons, such as with weights weighted sums of inputs corresponding to the respective
- Neurons are fed. can then be used, for example .write as
- x t denotes the inputs that are fed to the KN N as a whole.
- during training, the weights assigned to neurons in the ANN are changed by certain amounts in accordance with the cost function and the training strategy used (such as, for example, Stochastic Gradient Descent, SGD). In a further particularly advantageous embodiment, these amounts are amplified with a multiplicative factor that is lower for neurons with higher quantitative contributions to the quality than for neurons with lower quantitative contributions to the quality.
- neurons with a currently lower performance are therefore subjected to stronger learning steps with the aim of actively improving their performance, analogous to the targeted support of weaker students in a class.
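A minimal sketch of such a multiplicative factor (the normalisation of contributions to [0, 1] and the `base_factor` parameter are assumptions of this sketch, not taken from the patent):

```python
def scaled_updates(gradient_steps, contributions, base_factor=2.0):
    """Amplify the weight changes of weakly contributing neurons.

    gradient_steps : per-neuron weight changes proposed by e.g. SGD
    contributions  : per-neuron quantitative contributions to quality,
                     assumed normalised to [0, 1]

    The multiplicative factor shrinks towards 1.0 for strong contributors
    and grows towards base_factor for weak ones.
    """
    return [step * (1.0 + (base_factor - 1.0) * (1.0 - c))
            for step, c in zip(gradient_steps, contributions)]

# Two neurons proposing the same raw step: the weak contributor (0.1)
# receives a stronger learning step than the strong contributor (0.9).
print(scaled_updates([0.01, 0.01], [0.9, 0.1]))
```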
- neurons are temporarily deactivated during training with a probability which is higher for neurons with higher quantitative contributions to quality than for neurons with lower quantitative contributions to quality.
- This measure also serves the targeted promotion of neurons whose quantitative contributions to quality are currently low.
- the temporary deactivation of the powerful neurons in particular forces the ANN to also involve the weaker neurons in the formation of the ultimate output variables. Accordingly, these weaker neurons also receive more feedback from the comparison of the output variables with the “ground truth” in the form of the learning values for the output variables. As a result, their performance tends to improve.
- the situation can be compared to teaching in a class with a heterogeneous performance level. If the teacher's questions only go to the strong students, the weaker students do not learn, so that the gap between the strong and the weaker students is cemented or even widened.
- this measure also makes the ANN more robust against failures of the powerful neurons, because precisely such situations are practiced by the ANN through the temporary deactivation.
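The quality-dependent temporary deactivation described above can be sketched as follows; normalising the dropout probability by the maximum contribution and capping it at `p_max` are assumptions of this sketch:

```python
import random

def quality_dropout_mask(contributions, p_max=0.5, seed=None):
    """Temporarily deactivate strong contributors more often than weak ones.

    Each neuron is dropped with probability proportional to its
    contribution (normalised by the maximum), capped at p_max.
    Returns a 0/1 mask: 0 = temporarily deactivated, 1 = active.
    """
    rng = random.Random(seed)
    top = max(contributions)
    probs = [p_max * c / top for c in contributions]
    return [0 if rng.random() < p else 1 for p in probs]

# Illustrative contributions: the first neuron is the strongest and is
# therefore the most likely to be dropped in any given training step.
contribs = [0.9, 0.1, 0.5, 0.05]
mask = quality_dropout_mask(contribs, p_max=0.5, seed=42)
print(mask)
```

Over many training steps, the strong neuron is deactivated far more often, forcing the weaker neurons to take part in forming the output variables.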
- neurons with higher quantitative contributions to the quality can be assigned a higher significance in the ANN than neurons with lower quantitative contributions to the quality.
- the significance can manifest itself, for example, in the weight with which outputs from the relevant neurons are taken into account or whether the neurons are activated at all.
- This embodiment can be used in particular to compress the ANN to the part that is relevant for it.
- neurons whose quantitative contributions to quality meet a specified criterion can be deactivated in the ANN.
- the criterion can for example be formulated as an absolute criterion, such as a threshold value.
- the criterion can also be formulated, for example, as a relative criterion, such as a deviation of the quantitative contributions to quality from the quantitative contributions of other neurons or from a summary statistic thereof.
- summary statistics can include, for example, a mean, a median, and/or a standard deviation.
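Both criterion styles can be sketched in a few lines; the threshold value and the factor applied to the median are illustrative parameters, not values from the patent:

```python
import statistics

def neurons_to_deactivate(contributions, threshold=None, rel_factor=0.5):
    """Select neurons for deactivation by an absolute or relative criterion.

    Absolute: deactivate neurons whose contribution is below `threshold`.
    Relative: deactivate neurons whose contribution falls below
              rel_factor times the median of all contributions.
    Returns the indices of the neurons to deactivate.
    """
    if threshold is not None:                                # absolute criterion
        return [i for i, c in enumerate(contributions) if c < threshold]
    cutoff = rel_factor * statistics.median(contributions)   # relative criterion
    return [i for i, c in enumerate(contributions) if c < cutoff]

contribs = [0.40, 0.25, 0.05, 0.30]
print(neurons_to_deactivate(contribs, threshold=0.1))  # absolute: [2]
print(neurons_to_deactivate(contribs))                 # relative to median: [2]
```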
- deactivated neurons can be completely saved when implementing the fully trained ANN on hardware.
- connections between neurons whose weights meet a given criterion can likewise be deactivated in the ANN, for example by setting the weight to zero so that the connection is no longer taken into account. Analogously to the criterion for the quantitative contributions of neurons, this criterion can be formulated absolutely or relatively.
- the number of neurons activated in the ANN and/or in a sub-area of the ANN is reduced from a first number to a predetermined second number by deactivating the neurons with the lowest quantitative contributions.
- the hardware used can dictate the maximum complexity of the ANN.
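Reducing a layer to a predetermined neuron budget amounts to keeping the top contributors; a minimal sketch (the contribution values and budget are illustrative):

```python
def compress_to_budget(contributions, budget):
    """Keep only the `budget` neurons with the highest quantitative
    contributions; all others are deactivated.

    Returns the sorted indices of the neurons that remain active.
    """
    ranked = sorted(range(len(contributions)),
                    key=lambda i: contributions[i], reverse=True)
    return sorted(ranked[:budget])

# A hardware budget of 2 neurons for a layer of 4 (illustrative numbers).
print(compress_to_budget([0.40, 0.25, 0.05, 0.30], budget=2))  # [0, 3]
```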
- the invention also relates to a method for implementing an ANN on a predetermined arithmetic unit.
- a model of the ANN is trained in a training environment outside the arithmetic unit using the method described above.
- activated neurons and connections between neurons are implemented on the arithmetic unit.
- the specified arithmetic unit can be designed, for example, to be built into a control unit for a vehicle, and / or it can be designed to be supplied with energy from the on-board network of a vehicle.
- the training environment can be equipped with significantly more resources.
- a physical or virtual computer with a powerful graphics processor (GPU) can be used. Little to no preliminary thought is required before training can begin; the model only needs a certain minimum size with which the problem to be solved can probably be represented with sufficient accuracy.
- the method described above makes it possible to determine within the training environment which neurons and connections between neurons are important. Based on this, the ANN can be compressed for implementation on the arithmetic unit. As described above, this can also be done automatically within the training environment.
- if an arithmetic unit is specified whose hardware resources suffice for a given number of neurons, layers of neurons and/or connections between neurons, a model of the ANN can be selected whose number of neurons, layers of neurons and/or connections between neurons exceeds this given number.
- the compression ensures that the trained ANN ultimately fits the specified hardware.
- the aim here is to ensure that those neurons and connections between neurons that are ultimately implemented on the hardware are also those that are most important for the inference in the operation of the ANN.
- the invention also relates to a further method in which an artificial neural network (ANN) is first trained using the method described above.
- the ANN is then operated by supplying it with one or more input variables.
- depending on the outputs of the ANN, a vehicle, a robot, a quality control system and/or a system for monitoring an area on the basis of sensor data is controlled.
- an ANN can be selected which is designed as a classifier and / or regressor for physical measurement data recorded with at least one sensor.
- the sensor can be, for example, an imaging sensor, a radar sensor, a lidar sensor or an ultrasonic sensor.
- the methods can be implemented entirely or partially in software that brings about the immediate customer benefit that an ANN delivers better results in relation to the hardware expenditure and the energy consumption for the inference in its operation.
- the invention therefore also relates to a computer program with machine-readable instructions which, when they are executed on a computer, and/or on a control unit, and/or on an embedded system, cause the computer, the control unit, and/or the embedded system to execute one of the methods described.
- Control devices and embedded systems can therefore be regarded as computers at least in the sense that their behavior is characterized in whole or in part by a computer program.
- the term "computer" thus encompasses any device for processing specifiable calculation rules, which can be in the form of software, or in the form of hardware, or also in a mixed form of software and hardware.
- the invention also relates to a machine-readable data carrier and/or a download product with the computer program.
- a download product is a digital product that can be transmitted over a data network, i.e. downloaded by a user of the data network.
- such a specific device can be implemented, for example, with field-programmable gate arrays (FPGAs) and/or application-specific integrated circuits (ASICs).
- FPGAs: field-programmable gate arrays
- ASICs: application-specific integrated circuits
- FIG. 1: exemplary embodiment of the method 100 for training an ANN 1;
- FIG. 2: exemplary embodiment of the method 200 for implementing the ANN 1 on an arithmetic unit 4;
- FIG. 3: exemplary compression of an ANN 1.
- FIG. 1 shows an exemplary embodiment of the method 100.
- the method 100 is embedded in the training of the ANN 1, which takes place in a known manner and which includes loading the ANN 1 with learning values for the input variables, comparing the output variables formed by the ANN 1 with the learning values for the output variables, and changing parameters within the ANN in accordance with the cost function.
- in step 110, at any point in time and in any phase of the training process, a measure for the quality 11 that the ANN achieved within a predetermined previous period is determined.
- the change in the cost function used in training the ANN 1 can be included in the measure for the quality 11.
- in step 120, several neurons 2 are evaluated using a measure for their respective quantitative contributions 21 to the previously determined quality 11.
- these contributions 21 can be weighted more highly the less time has passed since the contributions 21 were made.
- these contributions 21 can, for example, be determined from the activation quality M_k described in detail above.
- values of the activation quality M_k can be used directly as contributions 21.
- measures 22 for the further training of the evaluated neurons 2, and/or positions 23 of these evaluated neurons 2 in the ANN 1, are determined in step 130.
- amounts by which the weights of neurons 2 are changed in at least one training step can, according to block 131, be amplified with a multiplicative factor that is lower for neurons 2 contributing more strongly to the quality 11 than for neurons 2 contributing more weakly to the quality 11.
- neurons 2 can be temporarily deactivated during training, the probability of such a deactivation being higher for neurons 2 with higher quantitative contributions 21 to quality 11 than for neurons 2 with lower quantitative contributions 21 to quality 11.
- neurons 2 with higher quantitative contributions 21 to the quality 11 can be assigned a higher significance in the ANN 1 than neurons 2 with lower quantitative contributions 21.
- connections 25 between neurons 2, the weights of which meet a predefined criterion, can also be deactivated.
- according to sub-block 133c, the number of activated neurons can be reduced to a predetermined number in a targeted manner by deactivating the neurons 2 with the lowest quantitative contributions 21 to the quality 11.
- FIG. 2 shows an exemplary embodiment of the method 200 for implementing the ANN 1 on a predetermined arithmetic unit 4.
- an arithmetic unit 4 with limited resources with regard to the number of neurons 2, layers 3a, 3b with neurons 2, and/or connections 25 between neurons 2 can be preselected.
- the model 1a of the ANN can then be selected so that its number of neurons 2, layers 3a, 3b or connections 25 lies above the respective limit of the arithmetic unit 4.
- an arithmetic unit 4 can be selected which is designed to be installed in a control device for a vehicle and / or to be supplied with energy from the on-board network of a vehicle.
- the ANN 1 is trained in step 210 according to the method 100 described above. Upon completion of the training 210, the activated neurons 2 and connections 25 between neurons 2 are implemented on the arithmetic unit 4 in step 220.
- as previously described, an ANN that is too large for the limited hardware can already be compressed during the training 210.
- the neurons 2, layers 3a, 3b, or connections 25 can also be thinned out further after the training 210 has been completed.
- FIG. 3 shows the effect of compression on an exemplary ANN 1.
- this ANN 1 comprises two layers 3a and 3b, each with four neurons 2, whose exemplary quantitative contributions 21 to the overall quality 11 of the ANN 1 are indicated.
- each neuron 2 of the first layer 3a is connected to each neuron 2 of the second layer 3b, and all neurons 2 are still active.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Neurology (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Feedback Control In General (AREA)
Abstract
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102019202816.0A DE102019202816A1 (de) | 2019-03-01 | 2019-03-01 | Training neuronaler Netzwerke für effizientes Implementieren auf Hardware |
PCT/EP2020/054054 WO2020178009A1 (fr) | 2019-03-01 | 2020-02-17 | Apprentissage de réseaux neuronaux pour une mise en œuvre efficace sur un matériel |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3931760A1 true EP3931760A1 (fr) | 2022-01-05 |
Family
ID=69593701
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP20705699.5A Withdrawn EP3931760A1 (fr) | 2019-03-01 | 2020-02-17 | Apprentissage de réseaux neuronaux pour une mise en oeuvre efficace sur un matériel |
Country Status (5)
Country | Link |
---|---|
US (1) | US20220101088A1 (fr) |
EP (1) | EP3931760A1 (fr) |
CN (1) | CN113454655A (fr) |
DE (1) | DE102019202816A1 (fr) |
WO (1) | WO2020178009A1 (fr) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE202021102832U1 (de) | 2021-05-25 | 2021-06-24 | Albert-Ludwigs-Universität Freiburg | Vorrichtung zum Training neuronaler Netzwerke im Hinblick auf den Hardware- und Energiebedarf |
DE102021205300A1 (de) | 2021-05-25 | 2022-12-01 | Robert Bosch Gesellschaft mit beschränkter Haftung | Training neuronaler Netzwerke im Hinblick auf den Hardware- und Energiebedarf |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5288645A (en) | 1992-09-04 | 1994-02-22 | Mtm Engineering, Inc. | Hydrogen evolution analyzer |
-
2019
- 2019-03-01 DE DE102019202816.0A patent/DE102019202816A1/de active Pending
-
2020
- 2020-02-17 CN CN202080017830.9A patent/CN113454655A/zh active Pending
- 2020-02-17 EP EP20705699.5A patent/EP3931760A1/fr not_active Withdrawn
- 2020-02-17 US US17/429,094 patent/US20220101088A1/en active Pending
- 2020-02-17 WO PCT/EP2020/054054 patent/WO2020178009A1/fr active Application Filing
Also Published As
Publication number | Publication date |
---|---|
DE102019202816A1 (de) | 2020-09-03 |
WO2020178009A1 (fr) | 2020-09-10 |
US20220101088A1 (en) | 2022-03-31 |
CN113454655A (zh) | 2021-09-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE102012009502A1 (de) | Verfahren zum Trainieren eines künstlichen neuronalen Netzes | |
DE10296704T5 (de) | Fuzzy-Inferenznetzwerk zur Klassifizierung von hochdimensionalen Daten | |
DE102019209644A1 (de) | Verfahren zum Trainieren eines neuronalen Netzes | |
DE102019204139A1 (de) | Training für künstliche neuronale Netzwerke mit besserer Ausnutzung der Lern-Datensätze | |
EP1934895A2 (fr) | Procede d'apprentissage assiste par ordinateur d'un reseau neuronal, et reseau neuronal correspondant | |
EP3931760A1 (fr) | Apprentissage de réseaux neuronaux pour une mise en oeuvre efficace sur un matériel | |
DE102019007340A1 (de) | Technik zum Einrichten und Betreiben eines neuronalen Netzwerks | |
DE102018127802A1 (de) | Hybrider klassifikator eines gepulsten neuronalen netzwerks und einer support-vektor-maschine | |
DE102021212276A1 (de) | Wissensgetriebenes und selbstüberwachtes system zur fragenbeantwortung | |
DE102019204118A1 (de) | Verfahren zum Übertragen eines Merkmals eines ersten Bilds an ein zweites Bild | |
DE102019206049A1 (de) | Erkennung und Behebung von Rauschen in Labels von Lern-Daten für trainierbare Module | |
DE102020122979A1 (de) | Verfahren zum Bereitstellen eines komprimierten, robusten neuronalen Netzes und Assistenzeinrichtung | |
DE202021102832U1 (de) | Vorrichtung zum Training neuronaler Netzwerke im Hinblick auf den Hardware- und Energiebedarf | |
WO2021004741A1 (fr) | Entraînement plus intense pour réseaux neuronaux artificiels | |
DE102020205542A1 (de) | Aufbereiten von Lern-Datensätzen mit verrauschten Labeln für Klassifikatoren | |
DE102020210376A1 (de) | Vorrichtung und Verfahren zum Steuern eines Hardware-Agenten in einer Steuersituation mit mehreren Hardware-Agenten | |
DE102019130484A1 (de) | Verfahren und Vorrichtung zum Anlernen eines Ensembles von neuronalen Netzen | |
DE102019206047A1 (de) | Training trainierbarer Module mit Lern-Daten, deren Labels verrauscht sind | |
DE102019131639B4 (de) | System zum Bereitstellen eines Erklärungsdatensatzes für ein KI-Modul | |
DE102019214308B4 (de) | Schnelles quantisiertes Training trainierbarer Module | |
DE102022207726A1 (de) | Weitertrainieren neuronaler Netzwerke für die Auswertung von Messdaten | |
DE102020210729A1 (de) | Training von Klassifikatornetzwerken auf eine bessere Erklärbarkeit der erhaltenen Klassifikations-Scores | |
DE202024100807U1 (de) | Verstärkungslernen durch Präferenz-Rückmeldung | |
DE102022204415A1 (de) | Verbesserung der domänenübergreifenden Few-Shot-Objektdetektion | |
DE102022213485A1 (de) | Föderiertes Training für ein neuronales Netzwerk mit geringerem Kommunikationsbedarf |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: UNKNOWN |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20211001 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20231031 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN |
|
18W | Application withdrawn |
Effective date: 20240130 |