EP3710992A1 - Réseau neuronal artificiel et procédé associé - Google Patents
Réseau neuronal artificiel et procédé associé
- Publication number
- EP3710992A1 (application EP18807029.6A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- neurons
- output
- neural network
- input
- artificial neural
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
- 238000000034 method Methods 0.000 title claims description 17
- 230000001537 neural effect Effects 0.000 title abstract description 14
- 210000002569 neuron Anatomy 0.000 claims abstract description 126
- 210000002364 input neuron Anatomy 0.000 claims abstract description 30
- 210000004205 output neuron Anatomy 0.000 claims abstract description 25
- 238000013528 artificial neural network Methods 0.000 claims description 118
- 238000004590 computer program Methods 0.000 claims 4
- 230000006399 behavior Effects 0.000 description 15
- 230000006870 function Effects 0.000 description 13
- 230000004913 activation Effects 0.000 description 8
- 238000007726 management method Methods 0.000 description 6
- 238000005295 random walk Methods 0.000 description 4
- 238000004364 calculation method Methods 0.000 description 3
- 230000004044 response Effects 0.000 description 3
- 238000013473 artificial intelligence Methods 0.000 description 1
- 230000015572 biosynthetic process Effects 0.000 description 1
- 210000004556 brain Anatomy 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000013500 data storage Methods 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000009795 derivation Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000010365 information processing Effects 0.000 description 1
- 230000002269 spontaneous effect Effects 0.000 description 1
- 238000003786 synthesis reaction Methods 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
Definitions
- the invention relates to an artificial neural network for a computer-aided knowledge management with a plurality of neurons, which are interconnected via weighted edges.
- the invention also relates to a method for carrying out such a neural network and to a method for training such a neural network.
- Computer-aided knowledge management in the sense of the present invention is the retrieval and/or derivation of knowledge with the aid of a computer-modeled knowledge base.
- corresponding knowledge is generated in the form of output values, whereby these output values either directly represent the learned knowledge or generate new knowledge by a corresponding generalization depending on the input values of the knowledge base.
- an artificial neural network consisting of a large number of artificial neurons which are connected to one another via weighted edges is suitable as the knowledge base and associated data structure.
- the learned knowledge of this artificial neural network is contained in the respective weights of the individual edges, whereby the individual artificial neurons can be stimulated or inhibited in relation to the respective input.
- an artificial neural network is highly interconnected, such that an artificial neuron receives as inputs the outputs of the neurons of the preceding layer, these inputs being scaled by the weights of the weighted edges between the artificial neuron and the artificial neurons of the previous layer. From these weighted inputs (also called the net input) the artificial neuron then calculates an output or activity level (also called the output), which is in turn provided to the following neurons as input, taking into account the respective edge weight.
- the input of a neuron from a previous neuron thus depends on two essential values, namely the output of the previous neuron and the weight of the edge between the two neurons.
- the artificial neuron has a transfer function with which the net input of the artificial neuron is calculated from its individual input values. With the aid of the transfer function, all input values of the neuron are thus combined into a single net input value, and this net input is then supplied to an activation function of the artificial neuron.
- the activation function calculates the output or network output of the artificial neuron taking into account a threshold value.
- Known activation functions are, for example, linear activation functions, binary threshold functions or sigmoid activation functions. A threshold may or may not be considered.
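The neuron computation described above (transfer function, net input, activation function with optional threshold) can be illustrated with a short sketch. This is a minimal illustration under assumptions, not code from the patent: a weighted-sum transfer function and a sigmoid activation are assumed, and all names (net_input, sigmoid_activation, theta) are hypothetical.

```python
import math

def net_input(inputs, weights):
    # transfer function: combines all weighted input values of the neuron
    # into a single net input value
    return sum(x * w for x, w in zip(inputs, weights))

def sigmoid_activation(net, theta=0.0):
    # activation function: calculates the output (activity level) of the
    # neuron; the threshold value theta shifts the activation
    return 1.0 / (1.0 + math.exp(-(net - theta)))

# output of a neuron with three weighted incoming edges; this output is then
# passed on, weighted again, as input to the neurons of the following layer
out = sigmoid_activation(net_input([1.0, 0.0, 0.5], [0.4, -0.2, 0.8]), theta=0.3)
```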
- an artificial neural network consists of several layers, each of which has one or more artificial neurons.
- the artificial neurons of the first layer are connected via the weighted edges with the artificial neurons of the subsequent layer, resulting in the network-like structure and the corresponding interweaving.
- an input layer is provided, which contains one or more input neurons, via which the input values are entered into the artificial neural network.
- these input neurons may, for example, be designed such that they forward the input values directly and without change as output values to the next neuron layer.
- in this case, the activation function is linear.
- an output layer is provided, consisting of one or more output neurons, which then output the corresponding output values which the artificial neural network is to calculate.
- a meaning content is assigned to both the individual input neurons and the individual output neurons, so that each input neuron has a semantic meaning with regard to the input values, while each output neuron likewise has a semantic meaning with regard to the output values.
- between them lie the processing neurons, which connect the input neurons to the output neurons via the weighted edges.
- in a simple example, the artificial neural network consists of an input neuron, two processing neurons and one output neuron.
- two weighted edges extend from the input neuron, one to the first processing neuron and one to the second processing neuron.
- from the first processing neuron another weighted edge then extends to the output neuron, while from the second processing neuron a weighted edge likewise extends to the output neuron.
- the weights are adjusted such that they ultimately store the knowledge contained in the training data; by inputting corresponding input values, the knowledge stored in the artificial neural network can then be retrieved by calculating the output value or output values.
- an artificial neural network can be referred to as a data structure containing computer-aided learned knowledge, which can retrieve this knowledge through a predetermined calculation rule inherent in the data structure by inputting input values and calculating output values. Accordingly, depending on the given data structure of the artificial neural network and the respective values of the individual weights of the weighted edges, an artificial neural network transfers corresponding input values, each of which has a meaning content, into output values, which likewise have a meaning content related to knowledge management.
- An artificial neural network is thus an information-processing system or computer-aided model in which data storage, in the form of learned knowledge, and processing rules are usually combined in one structure. Such an artificial neural network is built up dynamically as an associative data structure in a memory area of a data processing system.
- a conventional artificial neural network in this case has a deterministic behavior, i.e. for the same input values the artificial neural network always produces the same output values.
- knowledge can be generalized by means of an artificial neural network, i.e. with the help of the artificial neural network, knowledge can be generated that cannot be directly and uniquely taken from the training data. However, due to the unique data paths within the neural network and the learned weights, a deterministic behavior is always produced, which, depending on the application, makes the artificial neural network appear less intuitive, non-spontaneous and less varied.
- the strictly deterministic behavior of such an artificial neural network leads to a very monotonous response behavior, especially for simple input values, and thus appears less human.
- an artificial neural network for computer-aided knowledge management is claimed, with a plurality of neurons which are connected to one another via weighted edges.
- such an artificial neural network is a data structure present in a data memory of a data processing system, with which, based on input values to which a meaning content is assigned and on the knowledge base stored in the weights, corresponding output values are calculated which likewise have a meaning content.
- with the aid of the artificial neural network, the data processing system is set up such that it simulates the processing of input values similarly to a human brain and the neural connections contained therein. Consequently, the artificial neural network is built up dynamically as an associative data structure in the memory area of a data processing system and is executed there by the data processing system when input values are input into the neural network or when the artificial neural network learns from training data.
- the artificial neural network according to claim 1 has one or more input neurons, to each of which an input meaning content is assigned and by means of which input values are transmitted to the artificial neural network.
- the artificial neural network according to claim 1 further comprises one or more output neurons, to each of which an output meaning content is assigned and by means of which the output values of the artificial neural network are output. These output values are the result of the calculations of the artificial neural network based on the input values of the input neurons.
- the artificial neural network generically further comprises one or more processing neurons, which connect the input neurons with the output neurons via the weighted edges, so that the input values entered at the input neurons are converted by the processing neurons into the output values.
- the weights of the edges from the input neurons to the processing neurons and finally from the processing neurons to the output neurons are set such that, by inputting appropriate input values, the output values associated with these input values are correspondingly output by the artificial neural network.
- the learned knowledge, i.e. the knowledge base of the artificial neural network, thus lies in the appropriately adjusted weights of the edges.
- according to the invention, this known artificial neural network is expanded to include one or more additional switch neurons connected to one or more processing neurons via weighted or unweighted edges, the at least one switch neuron outputting a pseudo-random number via the weighted or unweighted edges as an additional input to the connected processing neurons.
- Such a pseudo-random number can be generated and provided, for example, by the switch neuron itself.
- alternatively, the switch neuron receives such a pseudo-random number from the outside as an input value, this pseudo-random number then being output directly as an output value via the weighted or unweighted edges to the processing neurons.
- Such an artificial neural network is trained with respect to the input values and the one or more pseudo-random numbers so as to vary the output values in dependence on the pseudo-random number or numbers. This means that for the same input values the output values vary depending on the pseudo-random number output by the at least one switch neuron, so that the artificial neural network becomes quasi-non-deterministic or pseudo-non-deterministic, since the output values now depend on the pseudo-random number.
- a pseudo-random variability is thereby added to the artificial neural network, whereby the response behavior of such an artificial neural network appears more spontaneous and also more human, depending on the assigned input and output meaning contents.
- the term switch neuron derives from the fact that, given identical input values, the output ultimately depends solely on the pseudo-random number output by the switch neuron, whereby the output of the artificial neural network is determined virtually non-deterministically by the output of the switch neuron.
- the artificial neural network can be switched between different output values with respect to the same input values, this being done pseudo-randomly.
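How such a switch neuron could feed into a processing neuron can be sketched as follows. The weighted-sum net input, the use of Python's random.random() as the pseudo-random source and all names are illustrative assumptions, not the patent's prescribed implementation.

```python
import random

def processing_neuron_net(input_values, input_weights, switch_weight=1.0,
                          prng=random.random):
    # ordinary part of the net input: weighted outputs of the preceding neurons
    net = sum(x * w for x, w in zip(input_values, input_weights))
    # additional input: the pseudo-random number output by the switch neuron,
    # here over a weighted edge (switch_weight = 1.0 models an unweighted edge)
    return net + switch_weight * prng()

# identical input values can now lead to different net inputs, and after
# suitable training to different output values
net_a = processing_neuron_net([1, 0, 0], [0.5, 0.3, -0.2])
net_b = processing_neuron_net([1, 0, 0], [0.5, 0.3, -0.2])
```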
- the artificial neural network is further trained with the aid of the training data in such a way that a set of output values is assigned to a set of input values, wherein the set of input values additionally contains a value for the pseudo-random number, which is likewise assigned to the set of output values.
- the training data are created such that different output values are assigned to the same input values but different pseudo-random numbers, so that the artificial neural network learns to make the output of the correspondingly assigned output values for the same input values dependent on the pseudo-random number.
- the at least one switch neuron outputs a pseudo-random number from a predetermined number range, for example a number range from 0 to 1, wherein the artificial neural network is set up such that, for the same input values, the output values do not vary for pseudo-random numbers within a common subrange of the predetermined number range.
- the number range of the pseudo-random number is subdivided into a finite number of subranges, wherein pseudo-random numbers that lie within a common subrange always lead to the same output values being output for the same input values.
- within such a subrange, the artificial neural network is thus deterministic.
- in this way the probability of the occurrence of particular output values can be controlled, whereby the behavior of the neural network can be set with regard to its pseudo-non-determinism.
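The subdivision of the number range into subranges can be sketched as follows. Equally sized subranges over [0, 1) are assumed for illustration; choosing subranges of different sizes would be one way to control the probability of particular output values.

```python
def subrange_index(pseudo_random_number, num_subranges):
    # maps a pseudo-random number from [0, 1) to one of num_subranges equally
    # sized subranges; all numbers within one subrange lead to the same outputs
    return min(int(pseudo_random_number * num_subranges), num_subranges - 1)

assert subrange_index(0.10, 4) == 0
assert subrange_index(0.15, 4) == 0  # same subrange as 0.10 -> same outputs
assert subrange_index(0.50, 4) == 2  # different subrange -> different outputs
```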
- At least one processing layer is provided, which has the processing neurons, wherein the at least one switch neuron is connected to each processing neuron of the processing layer via the weighted or unweighted edges.
- it is also conceivable for the artificial neural network to have a plurality of processing layers arranged one after the other, wherein each processing layer of the artificial neural network has one or more processing neurons, and wherein the at least one switch neuron can be connected to each processing neuron of each processing layer.
- alternatively, the at least one switch neuron is connected only to certain processing neurons of particular processing layers.
- it is also conceivable to provide several switch neurons, each switch neuron being connected to the processing neurons of precisely one processing layer, whereby each of these switch neurons can only influence one processing layer at a time.
- the concrete connection of the artificial neural network with respect to the switch neurons depends on the requirements and the output behavior.
- the object is also achieved with a method for calculating output values based on input values by means of an artificial neural network according to claim 5 for a computer-aided knowledge management, wherein initially an artificial neural network is provided which has the features and properties of the artificial neural network described above.
- the artificial neural network thus extended has been trained with the aid of training data, the training data having an assignment of the input values and one or more pseudo-random numbers to corresponding output values.
- one or more input values are now transferred to such an artificial neural network by means of the input neurons, and a corresponding pseudo-random number is generated by the at least one additional switch neuron.
- this pseudo-random number can be generated by the switch neuron itself or can be provided to the switch neuron accordingly as an input value.
- the output values are then calculated by means of the processing neurons, the output values being dependent on the input values entered and the one or more pseudo-random numbers. The output values are then output via the output neurons.
- the output values are calculated in such a way that, for the same input values, the output values depend on the pseudorandom number output by the at least one switch neuron.
- the pseudo-random number is generated from a predetermined number range and the output values are calculated such that, given the same input values, the output values do not vary for pseudo-random numbers within a common subrange of the predetermined number range.
- the calculation of the output values with the aid of such an artificial neural network takes place in such a way that at least one processing neuron is provided not only with the weighted result of the preceding neurons but also with the weighted or unweighted output of the one or more switch neurons, so that such a processing neuron is additionally provided with one or more pseudo-random numbers as input.
- from the net input to the respective processing neuron, the output of the processing neuron is then calculated using the activation function and, if applicable, a threshold value function. It is also conceivable that the pseudo-random number added by the additional switch neurons influences the threshold value in the activation function, as a result of which such a processing neuron can be more inhibited or more excited as a function of the pseudo-random number.
- such a pseudo-random number, which is input as an additional element of the net input of such a processing neuron, affects the output of the processing neuron to the next layer, so that such a pseudo-random number influences the overall output and behavior of the artificial neural network.
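One conceivable reading of the threshold variant mentioned above is sketched here: the pseudo-random number shifts the threshold of a sigmoid activation, so the processing neuron is more inhibited or more excited depending on the draw. The sigmoid, the centering around 0.5 and the parameter names are assumptions for illustration.

```python
import math

def neuron_output(net, base_theta, pseudo_random, theta_gain=1.0):
    # the pseudo-random number from [0, 1) shifts the threshold up or down;
    # a higher effective threshold inhibits, a lower one excites the neuron
    effective_theta = base_theta + theta_gain * (pseudo_random - 0.5)
    return 1.0 / (1.0 + math.exp(-(net - effective_theta)))
```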
- the object is also achieved with a method for training an artificial neural network according to claim 10 for a computer-aided knowledge management, wherein initially an artificial neural network is likewise provided which has the features and properties of the previously described artificial neural network.
- the artificial neural network has a switch neuron so as to provide a pseudo-random number for the processing in the artificial neural network.
- training data are provided with which the artificial neural network is to be trained, wherein the training data contain a plurality of training sets with which the artificial neural network is to be trained on the input-output behavior.
- such a training set consists of an assignment of input values to corresponding output values, wherein each training set additionally has one or more pseudo-random numbers among the input values, which are assigned to the respective output values together with the input values.
- the artificial neural network is to be trained in such a way that, when a combination of the input values and the pseudorandom number is present, it outputs the output values assigned to this combination in each case.
- the artificial neural network is trained by adjusting the weights of the weighted edges such that, for the same input values, the output values depend on the pseudo-random number output by the at least one switch neuron.
- for the training of such a neural network there are a variety of learning rules, such as the Hebb rule, the delta rule or, for example, backpropagation.
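As an illustration, the following sketch trains a tiny network with backpropagation on training sets whose input vector includes the switch neuron's pseudo-random number (the three rows correspond to the schematic table discussed below for Figure 2). The architecture, the squared-error loss, the learning rate and the iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# training sets: input values E1..E4 plus pseudo-random number S1 -> outputs A1..A4
X = np.array([[1, 0, 0, 0, 0.1],
              [1, 0, 0, 0, 0.2],
              [1, 0, 0, 0, 0.5]])
Y = np.array([[0, 1, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]])

W1 = rng.normal(0.0, 0.5, (5, 8))  # weights: input layer -> processing layer
b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 4))  # weights: processing layer -> output layer
b2 = np.zeros(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    H = sigmoid(X @ W1 + b1)        # processing layer
    O = sigmoid(H @ W2 + b2)        # output layer
    dO = (O - Y) * O * (1 - O)      # backpropagated error at the outputs
    dH = (dO @ W2.T) * H * (1 - H)  # backpropagated error at the hidden layer
    W2 -= 0.5 * H.T @ dO
    b2 -= 0.5 * dO.sum(axis=0)
    W1 -= 0.5 * X.T @ dH
    b1 -= 0.5 * dH.sum(axis=0)

# same input values, different pseudo-random numbers -> different output neurons
print(np.round(O, 2))
```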
- Figure 1: schematic representation of the artificial neural network according to the invention
- Figure 2: representation of an embodiment of an artificial neural network
- FIG. 1 shows the artificial neural network 10 having a first input layer 11, a second processing layer 12 and a third output layer 13.
- Each of the layers 11, 12 and 13 has artificial neurons 14 in order to calculate the output values based on the corresponding input values.
- the first input layer 11 in this case has three input neurons E1, E2 and E3, which are each connected via weighted edges to the following processing neurons V1 and V2 of the second processing layer 12. This means that the input neuron E1 is connected to the processing neuron V1 via an edge having a weight Wij.
- the indices i and j stand for the respective layer of the artificial neural network 10 and the respective node in this layer.
- the processing neurons V1 and V2 of the second processing layer 12 are in this case connected to the output neurons A1 and A2 of the third output layer 13, these edges likewise being weighted in each case.
- the weights Wij contain the learned knowledge which has been trained into the artificial neural network 10 by means of appropriate training data.
- an additional switch neuron S1 is now provided which is connected by respective edges to each of the processing neurons V1 and V2 of the second processing layer 12.
- the switch neuron S1 generates a pseudorandom number, which is then provided as an input to the respective processing neuron V1 and V2.
- the processing neuron V1 receives as input the weighted outputs of the input neurons E1, E2 and E3 as well as, additionally, the output of the switch neuron S1, which outputs a corresponding pseudo-random number.
- the processing neuron V1 thus receives a total of four inputs, including one additional input from the switch neuron S1.
- Figure 2 shows schematically an artificial neural network having four input neurons and four output neurons and two processing layers, each of the processing layers having four processing neurons each.
- the artificial neural network of Fig. 2 has a switch neuron connected to all the processing neurons of the first processing layer.
- a "random walk” also called random movement or random walk
- a neural network due to the deterministic behavior.
- the random walk mapped using the artificial neural network shown in FIG. 2 has four states, namely the transition to the left, to the right, upward or downward.
- the input neurons E1 to E4 indicate the current state, i.e. in which direction the walker has moved last.
- the output neurons A1 to A4 indicate the next state, i.e. whether the walker should move to the left, to the right, upward or downward.
- the next state to be taken is to be random. This is finally realized with the switch neuron S1.
- the neural network of Figure 2 is trained using training data, wherein the training data comprise, in addition to the input values for E1 to E4, also a value for the switch neuron S1, while the outputs comprise the output values A1 to A4.
- the training data can be defined schematically as follows:

E1 | E2 | E3 | E4 | S1 | A1 | A2 | A3 | A4 |
---|---|---|---|---|---|---|---|---|
1 | 0 | 0 | 0 | 0.1 | 0 | 1 | 0 | 0 |
1 | 0 | 0 | 0 | 0.2 | 0 | 1 | 0 | 0 |
1 | 0 | 0 | 0 | 0.5 | 0 | 0 | 1 | 0 |

- this table shows schematically that for the same input values, namely an input value of 1 for the input neuron E1 and an input value of 0 for the other input neurons, the output varies.
- the artificial neural network thus learns that, for the same input values, the output values depend on the input pseudo-random number: for a pseudo-random number of 0.5 the third output neuron A3 is activated, while for the pseudo-random numbers 0.1 and 0.2 the second output neuron A2 is activated.
- the artificial neural network can thus map a random walk based on the pseudo-random number, the four states being mapped by corresponding subranges of the random number range.
- the number range is subdivided into four equally sized subranges, with a corresponding state being assigned to each subrange.
- Such a trained neural network can also generalize randomness, i.e. it can output values even if there is no training data template for the corresponding combination of input values and pseudo-random number. For example, if the combination of the input values (1, 0, 0, 0) with a pseudo-random number of 0.15 was not included in the training data, the trained artificial neural network would nevertheless activate just the second output neuron A2 for these values.
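A sketch of how schematic training data for the random-walk embodiment could be generated: [0, 1) is split into four equal subranges, and each subrange is assigned to one of the four directions. The direction order, the one-hot coding and the choice of subrange midpoints as training values are illustrative assumptions.

```python
import random

DIRECTIONS = ["left", "right", "up", "down"]  # assumed order for E1..E4 / A1..A4

def one_hot(i, n=4):
    return [1 if j == i else 0 for j in range(n)]

# each training set: (last direction as one-hot + pseudo-random number) -> next direction
training_sets = []
for current in range(4):          # current state, input neurons E1..E4
    for target in range(4):       # next state, output neurons A1..A4
        s = (target + 0.5) / 4.0  # a number inside the subrange assigned to target
        training_sets.append((one_hot(current) + [s], one_hot(target)))

s_runtime = random.random()  # what the switch neuron would draw at run time; the
                             # trained network then activates the output neuron
                             # whose subrange contains s_runtime
```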
- Other applications that can be mapped using such a pseudo-non-deterministic neural network, in order to allow for more random variability without sacrificing the benefits of a neural network in terms of learning ability and generalizability, are chat bots, artificial intelligence, for example in the field of computer games, or automated computer-generated text synthesis.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Molecular Biology (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Image Analysis (AREA)
- Feedback Control In General (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102017126846.4A DE102017126846A1 (de) | 2017-11-15 | 2017-11-15 | Künstliches Neuronales Netz und Verfahren hierzu |
PCT/EP2018/081323 WO2019096881A1 (fr) | 2017-11-15 | 2018-11-15 | Réseau neuronal artificiel et procédé associé |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3710992A1 (fr) | 2020-09-23 |
Family
ID=64402194
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18807029.6A Ceased EP3710992A1 (fr) | 2017-11-15 | 2018-11-15 | Réseau neuronal artificiel et procédé associé |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3710992A1 (fr) |
DE (1) | DE102017126846A1 (fr) |
WO (1) | WO2019096881A1 (fr) |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102012011194A1 (de) * | 2012-06-06 | 2013-12-12 | Kisters Ag | Verfahren zum Trainieren eines künstlichen neuronalen Netzes |
US9189729B2 (en) * | 2012-07-30 | 2015-11-17 | International Business Machines Corporation | Scalable neural hardware for the noisy-OR model of Bayesian networks |
- 2017-11-15: DE DE102017126846.4A, patent DE102017126846A1 (de), not_active Ceased
- 2018-11-15: WO PCT/EP2018/081323, patent WO2019096881A1 (fr), unknown
- 2018-11-15: EP EP18807029.6A, patent EP3710992A1 (fr), not_active Ceased
Also Published As
Publication number | Publication date |
---|---|
DE102017126846A1 (de) | 2019-05-16 |
WO2019096881A1 (fr) | 2019-05-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE68927014T2 | | Assoziatives Musterkonversionssystem und Anpassungsverfahren dafür |
DE102012009502A1 | | Verfahren zum Trainieren eines künstlichen neuronalen Netzes |
EP3701433B1 | | Procédé, dispositif et programme informatique pour créer un réseau neuronal profond |
WO2000063788A2 | | Reseau semantique d'ordre n, operant en fonction d'une situation |
DE102007001025A1 | | Verfahren zur rechnergestützten Steuerung und/oder Regelung eines technischen Systems |
DE10296704T5 | | Fuzzy-Inferenznetzwerk zur Klassifizierung von hochdimensionalen Daten |
DE102008020379A1 | | Verfahren zur rechnergestützten Steuerung und/oder Regelung eines technischen Systems |
DE102005046747B3 | | Verfahren zum rechnergestützten Lernen eines neuronalen Netzes und neuronales Netz |
DE60125536T2 | | Anordnung zur Generierung von Elementensequenzen |
DE112020002186T5 | | DNN-Training mit asymmetrischen RPU-Einheiten |
WO2019121206A1 | | Procédé de réalisation d'un réseau neuronal |
DE10201018B4 | | Neuronales Netz, Optimierungsverfahren zur Einstellung der Verbindungsgewichte eines neuronalen Netzes sowie Analyseverfahren zur Überwachung eines Optimierungsverfahrens |
DE69315250T2 | | Neuronaler Prozessor mit Datennormalisierungsanlage |
DE112020005613T5 | | Neuromorphe Einheit mit Kreuzschienen-Array-Struktur |
DE10139682A1 | | Verfahren zum Generieren von neuronalen Netzen |
EP3710992A1 | | Réseau neuronal artificiel et procédé associé |
EP0548127B1 | | Réseau neuronal et dispositif de mise en oeuvre de réseaux neuronaux de type ADALINE |
EP0978052B1 | | Selection assistee par ordinateur de donnees d'entrainement pour reseau neuronal |
DE3607241C2 (fr) | | |
DE69313622T2 | | Speicherorganisationsverfahren für eine Steuerung mit unscharfer Logik und Gerät dazu |
DE69809402T2 | | Assoziativneuron in einem künstlichen neuralen Netzwerk |
EP1093639A2 | | Reseau neuronal, et procede et dispositif pour l'entrainement d'un reseau neuronal |
WO2020193481A1 | | Procédé et dispositif d'apprentissage et de réalisation d'un réseau neuronal artificiel |
DE102019113958A1 | | Verfahren zur Leistungssteigerung eines Fahrzeugsystems mit einem neuronalen Netz zum Steuern einer Fahrzeugkomponente |
EP1145190A2 | | Ensemble de plusieurs elements de calcul relies entre eux, procede de determination assistee par ordinateur d'une dynamique se trouvant a la base d'un processus dynamique et procede pour l'entrainement assiste par ordinateur d'un ensemble d'elements de calcul relies entre eux |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| 17P | Request for examination filed | Effective date: 20200512 |
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| AX | Request for extension of the european patent | Extension state: BA ME |
| DAV | Request for validation of the european patent (deleted) | |
| DAX | Request for extension of the european patent (deleted) | |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
| 17Q | First examination report despatched | Effective date: 20210715 |
| REG | Reference to a national code | Ref country code: DE; Ref legal event code: R003 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED |
| 18R | Application refused | Effective date: 20230928 |