WO2006005665A2 - Method for reacting to context changes using a neural network, and neural network for reacting to context changes

Method for reacting to context changes using a neural network, and neural network for reacting to context changes

Info

Publication number
WO2006005665A2
WO2006005665A2
Authority
WO
WIPO (PCT)
Prior art keywords
pool
model
pools
neural network
output
Prior art date
Application number
PCT/EP2005/052859
Other languages
German (de)
English (en)
Other versions
WO2006005665A3 (fr)
Inventor
Gustavo Deco
Martin Stetter
Original Assignee
Siemens Aktiengesellschaft
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Aktiengesellschaft filed Critical Siemens Aktiengesellschaft
Publication of WO2006005665A2 publication Critical patent/WO2006005665A2/fr
Publication of WO2006005665A3 publication Critical patent/WO2006005665A3/fr

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks

Definitions

  • The invention relates to a method for reacting to context changes using a neural network, as well as a neural network which can react to context changes.
  • Context is understood to be an external meaning context which indicates which behavior makes sense in a particular situation.
  • For example, the context could indicate whether a share price is going up or down, and depending on that context it would be a wise decision to buy or sell the share.
  • the context may also indicate that a change in the weather is imminent, and depending on this change in the weather, a positive or negative weather forecast would make sense.
  • the context may also be a variable strategy of a player of a computer game, and depending on this strategy, a particular behavior of a character may be useful in the computer game (e.g., attack or defense).
  • the invention thus relates to a method of data processing in order to find a correct behavior for a given situation.
  • Here, even the assessment of the situation counts as behavior; the behavior can, however, also include a decision or an action.
  • Intelligent software agents are known which are developed using methods of artificial intelligence. These are rule-based systems which have a database with rules and a logic for processing the rules. The rules are explicitly formulated and must be entered manually by the developer. This is a declarative programming model. An example of such intelligent agents are BDI agents (belief-desire-intention). These collect sensory data, process it with a set of rules, and choose a behavior.
  • The invention also relates to dynamic reinforcement learning, in which a system or agent receives feedback on its behavior, which serves to enable it to respond flexibly to context changes. If, for example, the system or agent makes a buy decision and the share price subsequently falls, the system or agent can learn by means of appropriate feedback and choose a more correct behavior in the next situation. Behavior is understood here as a decision, a situation assessment or an action. The assessment of a situation can also be called a feeling. Dynamic reinforcement learning is described in document [1].
  • Neural networks are known in particular as learning systems. Information about the current situation in the form of input information is fed to such a neural network. The input information is processed by the neural network. Subsequently, output information can be taken from the neural network. The output information describes the behavior of the neural network and thus represents a decision, a situation evaluation or an action instruction.
  • a neural network can be trained so that it learns which output information is correct for a given input information. This is called synaptic learning. Synapses are the connections between the individual neurons, the elements of the neural network. By expressing the synaptic strengths, the neural network learns in this context the correct mapping of input information to output information.
  • The advantage of neural networks lies in the fact that the rules for mapping the input information onto output information do not have to be specified explicitly and declaratively. Rather, the neural network learns an implicit rule representation from the data with which it is trained.
  • The disadvantage here is that the neural network cannot react flexibly to context changes.
  • A change in context, that is, a change in the external meaning context, requires the neural network to switch the mapping of input information onto output information with immediate effect.
  • Synaptic learning, however, is an incremental, time-delayed process which does not allow a flexible and rapid reaction to context changes.
  • This gives rise to the object of specifying a method for reacting to context changes and a data-processing unit which can react to context changes.
  • According to the method, input information is supplied to an input layer of the neural network. Furthermore, output information is extracted from an output layer of the neural network. Furthermore, several models are stored in the neural network, each of which predefines a mapping of the input information onto the output information. Only one model can be active at a time. The following steps are then repeated: the neural network maps a supplied input information item onto an output information item using the active model. The latter is then extracted from the output layer. If the extracted output information is false for the supplied input information in a current context, the neural network receives a false feedback, whereupon the neural network activates another model.
  • An action is derived from the extracted output information and executed.
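  • In illustrative terms, the claimed loop can be sketched as follows. The class name `ModelSwitchingNetwork`, the lookup-table representation of a model and the rotation to the next model are assumptions made for the sketch; in the patent itself, the mapping and the model switch emerge from competing neuron pools, not from explicit tables:

```python
# Minimal sketch of the claimed loop: map input with the active model,
# and switch to another model upon false feedback. All names are
# illustrative; the patent realizes this via competing neuron pools.

class ModelSwitchingNetwork:
    def __init__(self, models):
        self.models = models          # each model: dict input -> output
        self.active = 0               # index of the currently active model

    def step(self, input_info):
        # Map the supplied input onto an output with the active model.
        return self.models[self.active][input_info]

    def false_feedback(self):
        # A false feedback deactivates the active model and lets
        # another model win the competition (here: simple rotation).
        self.active = (self.active + 1) % len(self.models)

# Usage: two stored mappings ("models"), one per context.
net = ModelSwitchingNetwork([
    {"rising": "buy", "falling": "sell"},    # model for context A
    {"rising": "sell", "falling": "buy"},    # model for context B
])
print(net.step("rising"))   # -> "buy"
net.false_feedback()        # context changed: output was wrong
print(net.step("rising"))   # -> "sell"
```

The essential point carried over from the claim is that no retraining occurs on a context change; only the active model is exchanged.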
  • The neural network contains excitatory pulsed neurons.
  • These form model pools, with each model being assigned at least one model pool.
  • the model pools compete with each other, whereby an active model pool prevails in the competition.
  • The neural network contains inhibitory pulsed neurons. These form at least one inhibitory pool.
  • The inhibitory pool exerts a global inhibition on the competing model pools.
  • The false feedback activates the inhibitory pool.
  • The activated inhibitory pool performs a complete inhibition of all model pools. The complete inhibition deactivates the active model pool. After the complete inhibition, another model pool is activated.
  • Synapses of the excitatory pulsed neurons of the active model pool adapt.
  • Recurrent weights of the active model pool decrease. As a result, the active model pool loses the competition against the other model pools after the complete inhibition.
  • The adaptation of the synapses is implemented as short-term synaptic depression (STD); a minimal sketch follows below.
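  • A minimal sketch of such a short-term synaptic depression, assuming a Tsodyks-Markram-style depletion of a synaptic release resource; the function name, the parameter values and the Euler integration are illustrative assumptions, not values from the patent:

```python
# Sketch of short-term synaptic depression (STD): while the pool fires,
# a release resource x is depleted, weakening the effective recurrent
# weights; without firing it recovers. Parameters are illustrative.

def std_update(x, firing_rate, dt=0.001, tau_rec=0.5, use=0.2):
    """One Euler step of a resource variable x in [0, 1]."""
    dx = (1.0 - x) / tau_rec - use * x * firing_rate
    return min(1.0, max(0.0, x + dt * dx))

x = 1.0                      # full resource: full self-amplification
for _ in range(1000):        # 1 s of high activity in the active model pool
    x = std_update(x, firing_rate=50.0)
print(f"effective recurrent weight factor after activity: {x:.2f}")
```

With these numbers the resource settles near 0.17, i.e. the active pool's self-amplification is strongly reduced, which is exactly what lets it lose the competition after a complete inhibition.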
  • Excitatory pulsed neurons form rule pools.
  • Each of these rule pools interconnects one of the input information items with one of the output information items.
  • The rule pools compete with each other, with the active model pool supporting a selection of the rule pools.
  • The supplied input information activates one rule pool from this selection.
  • The rule pool activated in this way activates the output information to be extracted, which is subsequently taken from the output layer.
  • the neural network includes interconnections formed by Hebbian learning.
  • The neural network for reacting to context changes has an input layer to which input information can be supplied. Furthermore, there is an intermediate layer, by means of which the input information can be mapped onto output information. Furthermore, there is an output layer from which the output information can be extracted. Finally, the neural network also has a model layer with which the mapping in the intermediate layer can be controlled as a function of a context.
  • The model layer contains excitatory pulsed neurons. Furthermore, the model layer contains several model pools, which consist of the excitatory pulsed neurons. With the model pools, the mapping in the intermediate layer can be controlled as a function of a context.
  • The neural network contains inhibitory pulsed neurons. Furthermore, the neural network contains an inhibitory pool, which consists of the inhibitory pulsed neurons. The inhibitory pool is interconnected with the model pools. Furthermore, the inhibitory pool can be activated by a false feedback if an extracted output information item is false. According to one development of the invention, the neural network has the model layer in a first module and the other layers in a second module.
  • The input layer has, for each input information item, an input pool which consists of excitatory pulsed neurons and can be activated by supplying the respective input information.
  • The intermediate layer has rule pools, by means of which an input information item can be interconnected with an output information item.
  • The rule pools consist of excitatory pulsed neurons.
  • The output layer has an output pool consisting of excitatory pulsed neurons.
  • The model pools are interconnected with the rule pools such that only a selection of the rule pools can be activated, depending on the activation of the model pools.
  • A representative input pool, which represents the supplied input information, can be activated by supplying that input information.
  • A preferred rule pool, which belongs to the selection of rule pools interconnected with an active model pool, can be activated by the representative input pool.
  • A representative output pool can be activated by the preferred rule pool, the representative output pool representing an extractable output information item; this chain is sketched below.
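  • The chain just described can be illustrated by a small gating sketch: the active model pool determines which rule pools are eligible, the input pool selects one of them, and that rule pool drives an output pool. All pool names and both tables are illustrative assumptions, not from the patent:

```python
# Sketch of the layered activation chain: an active model pool gates
# which rule pools may win; the input pool then selects one rule pool
# of that gated selection, which activates an output pool.

RULES = {
    # rule pool: (input pool it listens to, output pool it drives)
    "rule_A1": ("in_1", "out_buy"),
    "rule_A2": ("in_2", "out_sell"),
    "rule_B1": ("in_1", "out_sell"),
    "rule_B2": ("in_2", "out_buy"),
}
SELECTIONS = {"model_A": {"rule_A1", "rule_A2"},
              "model_B": {"rule_B1", "rule_B2"}}

def forward(active_model, active_input):
    # Only rule pools in the active model's selection can be activated.
    for rule, (inp, out) in RULES.items():
        if rule in SELECTIONS[active_model] and inp == active_input:
            return rule, out
    return None

print(forward("model_A", "in_1"))   # ('rule_A1', 'out_buy')
print(forward("model_B", "in_1"))   # ('rule_B1', 'out_sell')
```

The same input thus yields different outputs under different active models, which is the core of the context-dependent mapping.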
  • the neural network may further comprise one or more additional input layers, interlayers, output layers, model layers or other layers. These additional layers may store, filter, rate, network or categorize the input information, the output information or other information, or may perform other functions.
  • the invention describes a quantitative mathematical model with direct biological relevance, in that it specifically depicts the neurodynamic processes which were also recognized in biological research.
  • The behavior thus results as an autonomous, emergent phenomenon and has clear advantages over conventional agents with a declarative rule representation.
  • the invention permits not only the implicit rule representation but also dynamic rule selection, which also includes a random element.
  • the random element results from the emergent, free interaction of neurons and pools of the neural network.
  • FIG. 1 shows a first embodiment of the neural network,
  • FIG. 2 shows a second embodiment of the neural network,
  • FIG. 3 shows weights of a first module,
  • FIG. 4 shows weights of a second module,
  • FIG. 5 shows the structure of a pool.
  • FIG. 1 shows a neural network 1 with an input layer 10, an intermediate layer 20, an output layer 30 and a model layer 40.
  • The neural network 1 is supplied with input information 2, and output information 3 is later extracted from it.
  • The input layer 10 has strong connections to the intermediate layer 20, which in turn has strong connections to the output layer 30.
  • The model layer 40 has strong connections to the intermediate layer 20.
  • A strong connection between two layers means that there is an above-average number of connections between the layers or that the connections are markedly pronounced.
  • the connections are usually made via synapses, which connect neurons of one layer with neurons of the other layer.
  • a strong connection of two layers thus means that there are particularly many synaptic connections between the layers or that the synaptic connections are particularly pronounced.
  • the synaptic strength can be described by a weight w. Higher values for a synaptic weight w mean a stronger synaptic connection between the participating neurons.
  • the neurons of the neural network can be partially or completely linked in the layers and over the layers.
  • Full connectivity may exist, meaning that each neuron is linked or networked with every other neuron.
  • The layers are recurrently coupled with one another, that is, there are also connections in the opposite direction.
  • This feedback leads to a shift in the equilibrium in the competition of the individual neurons or groups (pools) of neurons.
  • The strength of the forward connections, i.e. from the input layer 10 via the intermediate layer 20 to the output layer 30 and from the model layer 40 to the intermediate layer 20, is expediently greater than the strength of the backward connections.
  • For this purpose, the intermediate layer 20 has stored different mappings, one of which is activated by the model layer 40 at any given time. As a result, a rapid reaction to context changes is possible, since the mapping in the intermediate layer 20 can be changed flexibly.
  • the input layer 10, the intermediate layer 20, the output layer 30, and the model layer 40 contain groups (pools) of neurons. It is also possible that only some of these layers contain pools.
  • The pools of each layer may contain excitatory pulsed neurons. Excitatory pulsed neurons are activated by pulses from other excitatory pulsed neurons and themselves send out pulses to other neurons.
  • the activity of a pool of pulsed neurons can be modeled using a mean-field approximation.
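  • As a hedged illustration, a generic rate equation of the mean-field family treated in documents [2] to [4] is given below; the concrete form and parameters used in the exemplary embodiments are not reproduced here:

```latex
% Generic mean-field rate equation for a pool p (illustrative form):
%   nu_p: population firing rate of pool p, w_pq: pool-to-pool weights,
%   phi: transfer function, I_p^ext: external input (stimulus, noise).
\tau \,\frac{d\nu_p(t)}{dt}
  \;=\; -\,\nu_p(t)
  \;+\; \phi\!\Big(\sum_{q} w_{pq}\,\nu_q(t) + I_p^{\mathrm{ext}}(t)\Big)
```

Competition between pools then corresponds to negative effective couplings mediated by the inhibitory pool, and an active pool is a high-rate fixed point of these equations.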
  • The pools are modeled on the biology of the human brain.
  • large and homogeneous populations of neurons that receive a similar external input are mutually coupled, and probably together act as a single entity, forming groups (pools).
  • These pools can be a more robust processing and coding unit because their instantaneous population average response, as opposed to the time average of a relatively stochastic neuron in a large time window, is better suited to the analysis of fast changes in the real world. If one neuron can activate another one, this means that there is a strong connection between the two neurons. The same applies in the event that one pool activates another pool. This means that there is at least one strong connection of at least one neuron of the first pool with a neuron of the second pool.
  • FIG. 5 shows that the neurons 101 of a pool 100 are also strongly connected to one another. As shown in FIG. 5, there may be a partial connection, but also a complete networking of the neurons 101 of the pool 100.
  • The neurons 101 of the pool 100 are linked via strong connections 102 and thus mutually support the activation of the pool 100.
  • The greater the activity of the neurons 101, the greater the activity of the pool 100.
  • The activity of the neurons 101 can be described by mathematical models. Further information on the mathematical modeling of pools in the development of neural networks, as well as different mathematical models for pulsed neurons as used in the exemplary embodiments, is known from documents [2], [3] and [4], among others.
  • Neurons 101 here are always artificial neurons 101. These partially or completely model a particular type of neuron known from biology. The modeling can be done by an electronic circuit or by a mathematical model which is calculated by a data-processing system. By a connection of two pools 100 it is meant that the neurons 101 of these pools 100 are strongly or weakly connected to one another, i.e. that, for example, many or few, strong or weak synaptic connections exist between the neurons 101 of the one pool 100 and the neurons 101 of the other pool 100.
  • the strong connections 102 between the neurons 101 of the pool 100 serve for the self-amplification of the pool 100.
  • The strength of these strong connections 102 corresponds, e.g., to the synaptic strengths between the neurons 101, which within the pool 100 are referred to as recurrent weights.
  • the neural network 1 stores a model for each context, ie for each class of situations.
  • the model indicates how the input information should be mapped to the output information.
  • the model thus describes which behavior is correct in a particular situation. Different situations may require different behavior. This is always the case when the context changes from one situation to the next. This means that in the second situation different output information must be selected than in the first situation. Different situations can therefore be based on a different context.
  • the neural network 1 now attempts to specify, for each context, for each class of situations, how the input information is to be mapped to the output information.
  • The intermediate layer 20 contains rule pools 21 which compete with each other, whereby only one activated rule pool 23 can prevail.
  • The rule pools 21 each represent one possibility of interconnecting an input information item with an output information item.
  • The supplied input information 2 is interconnected with the output information 3 to be extracted by the activated rule pool 23.
  • This interconnection consists of strong connections between neurons of the input layer 10, neurons of the rule pool 23 and neurons of the output layer 30. Interconnection thus means not merely that neurons, pools or layers are linked to one another, which is already the case with full connectivity, but that the weights of the links are strong.
  • the model layer 40 contains model pools 41 that compete with each other, whereby only one active model pool 42 can prevail.
  • the active model pool 42 is interconnected with a selection 22 of rule pools 21. This means that the rule pools 21 in the selection 22 are supported in their competition with the other rule pools 21 as long as the active model pool 42 is activated.
  • The active model pool 42 thus determines which rule pools 21 can be activated, and thus how the supplied input information 2 is mapped onto the output information 3 to be extracted. Which rule pool 21 within the selection 22 is activated depends on the supplied input information 2. Which output information 3 is extracted depends on the activated rule pool 23.
  • The neural network 1 contains an inhibitory pool 50, which is formed from inhibitory pulsed neurons.
  • The inhibitory pool 50 is connected to the model layer 40 and exerts a global inhibition on the competing model pools 41. This is necessary so that only one active model pool 42 can prevail in the competition.
  • The inhibitory pool 50 is excited for this purpose by the model pools 41.
  • Excitatory connections are, e.g., strong excitatory synaptic connections.
  • Inhibitory connections are, e.g., inhibitory synaptic connections.
  • Optionally, the inhibitory pool 50 is also strongly connected to the other layers, or there are one or more inhibitory pools which are interconnected with the other layers, i.e. have an inhibitory influence on them.
  • the neural network 1 must be informed that its current map or model does not correspond to the current situation or context.
  • If this is the case, the inhibitory pool 50 is activated by a false feedback 4, whereby the global inhibition becomes a complete inhibition.
  • The false feedback 4 can, e.g., be implemented by activating the neurons of the inhibitory pool 50 via their external synaptic inputs.
  • In addition, synapses of the excitatory pulsed neurons of the active model pool 42 are adapted such that the self-amplification of the active model pool 42 decreases over time.
  • To this end, the recurrent weights of the active model pool 42 are reduced.
  • This adaptation is implemented as short-term synaptic depression (STD).
  • the input layer 10 also has input pools 11 which compete with each other, whereby only one input pool 11 can ever prevail in the competition.
  • Each input information item can be assigned its own input pool 11. However, preprocessing may also take place, so that certain features are extracted from the input information and subsequently represented by an input pool 11. Thus, for example, one input pool 11 for angular shapes and another input pool for round shapes can be activated.
  • the output layer 30 also has output pools 31 which compete with one another, whereby again only one output pool 31 can prevail in the competition.
  • FIG. 1 shows the case in which an active model pool 42 supports a selection 22 of rule pools 21. Furthermore, a representative input pool 12 is activated by the supplied input information 2. This activates the rule pool 23 within the selection 22, which in turn is interconnected with a representative output pool 32 and therefore activates it. Subsequently, the output information 3 represented by the representative output pool 32 is extracted.
  • The layers are excited by at least one non-specific pool, which consists of excitatory pulsed neurons.
  • This pool does not receive any specific inputs from one of the layers and contributes spontaneous pulses to the formation of realistic pulse distributions.
  • The neurons of the non-specific pool are not correlated with the other layers or pools, i.e. they cannot be specifically activated; they may be networked with the other layers/pools, but not via strong connections.
  • the neural network 1 can be designed as an attractor-recurrent autoassociative neural network. Such networks are described in the cited documents. In this case, the synaptic connections from the input layer 10 to the intermediate layer 20, from the intermediate layer 20 to the output layer 30, and from the model layer 40 to the intermediate layer 20, respectively, are made stronger than in the return direction.
  • the weight between two exciting pulsed neurons, which lie in different exciting pools of the same layer, is preferably weak.
  • Each rule pool 21 is connected to an input pool 11, a model pool 41 and an output pool 31 as if the connections had been formed by Hebbian learning.
  • the interplay between the layers corresponds to the multi-area interconnection in the human brain.
  • The neural network 1 can also have further layers. These additional layers can reproduce the functions of specific brain areas. In this way, the functionality of the neural network 1 can be considerably expanded. Conceivable are, e.g., the filtering of input information by modeling selective attention, as well as the implementation of working-memory or long-term memory functions.
  • the further layers can be constructed in the manner described and interact with one another, or else implement other known methods.
  • The neural network 1 can be designed as a neurodynamic network, in particular in the form of a neurodynamic network of pulsed neurons. This may include the use of known neural networks, multilayer perceptrons, SOMs (self-organizing maps), etc.
  • The pulsed neurons can, e.g., be formed as so-called spiking neurons or as so-called pulse-coding neurons.
  • Intelligent agents which use the method according to the invention or the neural network can be used for neurocognitive process control, in particular for technical processes, for neurocognitive driver assistance and for neurocognitive robot control.
  • the invention is based on the principle of influenced competition (Biased Competition).
  • an active model which is formed by the active model pool 42 and by the interconnections of the rule pools 21 which belong to the selection 22, acts on the mapping of input information to output information in a top-down manner.
  • the input information acts as an opposing influence (bottom-up), which activates certain neurons and pools.
  • the actual interconnection of the supplied input information 2 with the extracted output information 3 develops from an interplay of these two influences.
  • The neural network 1 thus achieves a high degree of flexibility: in its behavior, i.e. in its mapping of input information onto output information, it can react flexibly to context changes, i.e. changed situations.
  • Since the neural network 1 directly reproduces the neurodynamic processes in the human brain, it is of direct biological relevance. This is based on the exact simulation of the biological processes in the brain.
  • The pulsed neurons may be integrate-and-fire neurons.
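  • For reference, the membrane dynamics of a leaky integrate-and-fire neuron take the standard textbook form below; the concrete parameterization used in the exemplary embodiments is not reproduced here:

```latex
% Leaky integrate-and-fire membrane dynamics (standard textbook form):
%   C_m: membrane capacitance, g_L: leak conductance, V_L: resting potential,
%   I_syn: synaptic input current; a spike is emitted at threshold crossing.
C_m \,\frac{dV(t)}{dt} \;=\; -\,g_L\,\big(V(t)-V_L\big)\;+\;I_{\mathrm{syn}}(t),
\qquad V(t) \ge V_{\mathrm{thr}} \;\Rightarrow\; V \to V_{\mathrm{reset}}
```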
  • the neural network and method according to the invention can be used as a model for clinical trial results.
  • experiments are conducted with a subject to examine behavior in situations of changing context.
  • The method according to the invention is then carried out with a neural network under the same test conditions.
  • hypotheses about the functioning of the brain can be checked directly.
  • In doing so, modified chemical conditions, e.g. metabolic disorders or changed concentrations of neurotransmitters, can be taken into account.
  • the behavior for example of a schizophrenic patient, can be simulated by the method according to the invention. From the results and the comparison with test results with schizophrenic patients, hypotheses concerning the nature of the disorder in the brain can be checked again. The same applies to other disorders.
  • An intelligent agent which implements the method according to the invention or the neural network according to the invention can be used in an extremely versatile manner.
  • an agent could be used for chart analysis.
  • an attempt is made to conclude from the history of a stock price on its future development.
  • the neural network 1 is fed as input information 2, for example, the price history of a share during the last six months.
  • the extracted output information 3 now represents a purchase or sales recommendation. If the recommendation turns out to be false, the neural network 1 receives a false feedback 4. In this way, the neural network 1 can react flexibly to context changes, for example, when the mood on the stock exchange changes.
  • For the next supplied input information 2, the neural network 1 can then already take into account the changed mood on the stock market, i.e. the changed context.
  • The invention is particularly suitable for complex relationships that cannot be captured by a declarative rule model. These include developments on the stock markets or the weather forecast.
  • The neural network 1 is thus figuratively in a position to make a gut decision. This is because the extracted output information 3 can also be interpreted as a feeling. The feeling is based on an intuitive interpretation of the input information, where intuitive means that the input information is mapped onto the output information by implicit, not explicitly represented rules. In this context, the extracted output information 3, as a feeling, indicates a general direction of action.
  • Intelligent agents with flexible, human-like behavior can also be used elsewhere, for example as characters in training simulations and computer games.
  • the invention relates inter alia to the field of cognitive flexibility and behavioral change.
  • Dynamic reinforcement supports this learning.
  • As a mapping in the intermediate layer 20, the neural network 1 forms associations between input information and a reward or a punishment. Due to the interplay with the model layer 40, these associations can be reversed after just a single false feedback 4.
  • The false feedback 4 always follows when a reward expectation has been extracted as output information 3 and the reward fails to occur or a punishment occurs.
  • The false feedback 4 can also take place when the extracted output information 3 indicates that the neural network 1 expects a punishment, but the punishment fails to occur or a reward occurs. In this way, the neural network 1 learns when reward and when punishment are to be expected, and responds with a feeling as the output information 3 to be extracted, which represents this expectation and from which an appropriate action can be derived.
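  • Read compactly, these two bullets define the trigger condition for the false feedback 4. The following predicate states it under illustrative names; `expected_reward` and `reward_occurred` are assumptions of the sketch, not terms of the patent:

```python
# Hedged sketch: false feedback fires exactly when the extracted
# expectation (reward or punishment) disagrees with the actual outcome.

def false_feedback(expected_reward: bool, reward_occurred: bool) -> bool:
    """True iff expectation and outcome disagree (hypothetical helper)."""
    return expected_reward != reward_occurred

assert false_feedback(expected_reward=True,  reward_occurred=False)    # reward fails
assert false_feedback(expected_reward=False, reward_occurred=True)     # punishment fails
assert not false_feedback(expected_reward=True, reward_occurred=True)  # correct feeling
```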
  • FIG. 2 shows a specific embodiment of the invention.
  • the neural network 1 is here divided into a first module 5 and a second module 6.
  • The first module 5 has a model layer 40.
  • The second module 6 has an input layer 10, an intermediate layer 20 and an output layer 30.
  • the input layer 10 includes two input pools 15 and 16 which compete with each other. If a first object is supplied, the input pool 15 is activated. If a second object is supplied, the input pool 16 is activated.
  • the intermediate layer 20 has a selection 22 and a selection 24 of rule pools.
  • the selection 22 contains rule pools 25 and 26, the selection 24 contains rule pools 27 and 28.
  • the output layer 30 contains output pools 35 and 36.
  • The model layer 40 contains model pools 45 and 46. Within each layer, the pools compete with each other.
  • The competition in the model layer 40 is supported by an inhibitory pool 50, which consists of inhibitory neurons and can additionally be excited by the false feedback 4.
  • The competition in the layers of the second module 6 is aided by an inhibitory pool 60, which is connected to these layers and exerts a global inhibition on the neurons contained in them.
  • the first module 5 is assigned a non-specific pool 70 whose activity is not influenced by the layers of the two modules.
  • the module 6 contains a non-specific pool 80 whose activity is not affected by the layers.
  • the non-specific pools 80 and 70 contribute with spontaneous pulses to the formation of realistic pulse distributions.
  • The second module 6 is implemented with 1600 excitatory pulsed neurons (here pyramidal cells) and 400 inhibitory pulsed neurons.
  • The first module 5 can be implemented with 1000 excitatory pulsed neurons and 200 inhibitory neurons. These numbers are only an example.
  • The ratio of excitatory pyramidal cells to inhibitory neurons may be, e.g., 80:20.
  • All neurons are fully interconnected.
  • The pools can be formed, e.g., from 80 or 100 neurons.
  • All the pools of the first module 5, including the inhibitory pool 50 and the non-specific pool 70, are networked with each other. Furthermore, all the pools of the second module 6, including the inhibitory pool 60 and the non-specific pool 80, are networked with one another.
  • the networking of the pools takes place via synaptic connections between the neurons contained in the pools.
  • the strength of the synaptic connections between the individual pools for the first module is shown by way of example in FIG.
  • FIG. 4 shows examples of synaptic strengths of the connections between the pools of the second module, including the inhibitory pool 60 and the non-specific pool 80; the example dimensioning of both modules is collected in the sketch below.
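  • The example dimensioning of this embodiment can be collected into a small configuration sketch; the dictionary layout and key names are an illustrative convenience, and only the numbers are taken from the text, which itself calls them examples:

```python
# Example dimensioning of the two-module embodiment, as stated in the text.
# Dict layout and key names are illustrative, not from the patent.
NETWORK_CONFIG = {
    "module_1": {"excitatory": 1000, "inhibitory": 200},  # carries the model layer
    "module_2": {"excitatory": 1600, "inhibitory": 400},  # input/intermediate/output
}
POOL_SIZES = (80, 100)            # pools of, e.g., 80 or 100 neurons
EXTERNAL_INPUTS_PER_NEURON = 800  # connections from outside the network
FULLY_CONNECTED = True            # all neurons are fully interconnected

for name, m in NETWORK_CONFIG.items():
    # Module 2's 1600:400 matches the stated 80:20 example ratio exactly.
    print(f"{name}: {m['excitatory']} excitatory, {m['inhibitory']} inhibitory")
```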
  • The input pool 15 is interconnected with the output pool 35 via the rule pool 25 of the selection 22.
  • The input pool 16 is interconnected with the output pool 36 via the rule pool 26 of the selection 22.
  • Via the rule pools 27 and 28 of the selection 24, the interconnection of the input pools 15 and 16 with the output pools 35 and 36 is exactly the opposite.
  • The selection 22 and the selection 24 thus provide two different mappings of input information onto output information. Which of the two mappings is used depends on which of the two model pools 45 and 46 is activated.
  • the output pool 35 and the output pool 36 may each represent a situation score (eg, positive / negative), a decision (eg, buy / sell), or an action (eg, go forward / backward).
  • The supplied input information 2 is fed to all neurons of the neural network 1, for example via in each case 800 connections from outside the network.
  • background noise can also be supplied via these connections, which represents a spontaneous firing of neurons outside the network.
  • the states in which the neural network 1 can stabilize can also be referred to as global attractors. These are each composed of individual attractors for each pool. When a pool wins in the competition, the activity in that layer converges to the attractor concerned.
  • the global attractor is thus composed of a combination of activated pools in the input layer 10, in the intermediate layer 20, in the output layer 30 and in the model layer 40.
  • the optional recurrent connections between the layers give rise to controlled competition in one, several or all layers. This leads to an autonomous, emergent and highly flexible behavior of the neural network 1.
  • the course of a stock price during the last six months is fed as input information 2.
  • the input pool 15 is activated.
  • The model pool 45 is activated. It represents a model that assumes a positive mood on the stock exchange.
  • The model pool 45 steers the competition in the intermediate layer 20 in such a way that only the rule pools in the selection 22 can prevail in the competition.
  • the rule pool 25 is activated, which is strongly connected to the input pool 15.
  • the rule pool 25 in turn activates the output pool 35 via a strong connection, which represents the output information "buy share", which is taken from the neural network as output information 3.
  • While the model pool 45 is active, its self-amplification is depleted; that is, the synaptic weights of the internal connections of the model pool 45 decrease over time.
  • The inhibitory pool 50 is then activated via a false feedback 4. It thereupon amplifies its global inhibition into a complete inhibition of the model pools 45 and 46, which can also extend to the second module 6.
  • After the complete inhibition, the formerly active model pool 45 loses the competition against the model pool 46, since the self-amplification of the model pool 45 is exhausted compared to that of the model pool 46. Therefore the model pool 46 now wins the competition.
  • the model pool 46 now supports the selection 24 in the intermediate layer 20.
  • the input pool 15 is again activated. This now activates the rule pool 27, since this belongs to the selection 24, which is supported by the model pool 46.
  • the rule pool 27 in turn activates the output pool 36. This represents the output information "Do not buy stock", which is subsequently taken as output information 3.
  • the neural network 1 has thus adapted its behavior in one step to the changed context, for example a change in mood on the stock market.
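  • In the illustrative terms of the loop sketch near the beginning of the description, this walk-through reads as a tiny standalone simulation; the pool numbers and action strings are labels, not code from the patent:

```python
# The stock-market walk-through as a tiny standalone simulation.
models = {
    45: {"price history": "buy share"},          # assumes positive mood
    46: {"price history": "do not buy share"},   # opposite mapping
}
active = 45
assert models[active]["price history"] == "buy share"       # output info 3

# False feedback 4: complete inhibition; STD has weakened pool 45,
# so pool 46 wins the renewed competition.
active = 46
assert models[active]["price history"] == "do not buy share"
```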
  • Suitable values for the synaptic strengths may deviate from the stated values and can be determined or optimized by experiment. The general procedure for this is described in document [2].
  • the readout of the output information 3 can also be realized differently. For example, it is conceivable that this takes place from each layer or from the respective pools.
  • Different input layers 10 could represent different features (color, shape, size, location, motion, etc.) of the input information.
  • Different model layers 40 could represent different aspects of a model (different dimensions of a model).
  • For example, one model layer 40 could store whether a player is practiced or untrained, and another which strategy the player is currently following.
  • the model could represent a context that is composed of different aspects.
  • Different output layers 30 could represent different aspects of the output information (e.g. buy/sell, urgency, confidence of the recommendation).
  • the intermediate layer 20 could also be implemented in multiple layers. In any case, it would have the task of networking the additional layers in a meaningful way.
  • Competition: within a layer, certain features or feature groups compete with each other for representation. This produces a weighting map (saliency map) as an emergent process, so that certain features are more strongly represented than others. A context-dependent information selection is obtained. This can be implemented by competing pools.
  • Features can also be linked dynamically to feature groups or categories.
  • One feature may also favor the representation of another feature. This can be implemented by pools that are connected to each other with strong weights and thus support each other. For example, several pools can be activated simultaneously in the input layer 10 and thereby simultaneously represent several properties of an input information item.
  • Additional layers can be implemented according to the principle of influenced competition (biased competition) and cooperation: through the connections between layers, a layer can steer the competition process in one or more other layers. This process can be recurrent, so that through this mutual steering an ever better matching of the different feature spaces of the different layers successively and dynamically arises. In particular, because it covers only a partial aspect of the environment, each representation inevitably contains ambiguities. Influenced competition provides a mechanism by which the various layers can resolve ambiguities in the other feature spaces using the information of their own feature space. Each representation evolves in the context of all other representations. Cooperation can then bind different features into groupings, that is, relate them to one another.
  • Dynamic data from technical systems can be fed into the neural network 1 as input information, if necessary after preprocessing for dimensionality reduction.
  • This preprocessing can extract various features (e.g. independent components).
  • Optimization of the neural network can be achieved by biologically motivated learning rules (e.g. the Hebb rule or spike-timing-dependent plasticity), with which cost functions can also be set up to evaluate how well a dynamic task is solved.
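  • As a hedged illustration of such a biologically motivated learning rule, a plain Hebbian weight update with decay could look as follows; the learning rate, the decay term and the rate vectors are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Minimal Hebbian update: weights grow with correlated pre/post activity
# and decay otherwise. A cost function could then score task performance.
def hebb_update(w, pre, post, lr=0.01, decay=0.001):
    """w: (n_post, n_pre) weights; pre/post: firing-rate vectors."""
    return w + lr * np.outer(post, pre) - decay * w

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(4, 3))
pre, post = np.array([1.0, 0.0, 1.0]), np.array([0.5, 0.0, 0.0, 1.0])
for _ in range(100):
    w = hebb_update(w, pre, post)
print(np.round(w, 2))   # correlated entries strengthened, others decayed
```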


Abstract

Method for reacting to context changes using a neural network (1), in which input information is supplied to an input layer (10) of the neural network (1). Furthermore, output information is extracted from an output layer (30) of the neural network (1). In addition, the neural network (1) stores several models, each of which defines a mapping of the input information onto the output information. Only one model can be active at a time. The following steps are then repeated: the neural network (1) maps a supplied input information item (2) onto an output information item (3) using the active model. The latter is then extracted from the output layer (30). If the extracted output information (3) is false for the supplied input information (2) in a current context, the neural network (1) receives a false feedback (4), whereupon the neural network (1) activates another model.
PCT/EP2005/052859 2004-07-09 2005-06-21 Method for reacting to context changes using a neural network, and neural network for reacting to context changes WO2006005665A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102004033391.2 2004-07-09
DE102004033391 2004-07-09

Publications (2)

Publication Number Publication Date
WO2006005665A2 true WO2006005665A2 (fr) 2006-01-19
WO2006005665A3 WO2006005665A3 (fr) 2006-12-07

Family

ID=35448080

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2005/052859 WO2006005665A2 (fr) 2005-06-21 Method for reacting to context changes using a neural network, and neural network for reacting to context changes

Country Status (1)

Country Link
WO (1) WO2006005665A2 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1975857A3 (fr) * 2007-03-27 2009-03-25 Siemens Aktiengesellschaft Procédé de traitement assisté par ordinateur de valeurs de mesure établies dans un réseau de capteurs
US7577099B1 (en) * 2006-04-06 2009-08-18 At&T Corp. Method and apparatus for fault localization in a network


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5155801A (en) * 1990-10-09 1992-10-13 Hughes Aircraft Company Clustered neural networks
US5239594A (en) * 1991-02-12 1993-08-24 Mitsubishi Denki Kabushiki Kaisha Self-organizing pattern classification neural network system
EP0574951A2 (fr) * 1992-06-18 1993-12-22 Seiko Epson Corporation Speech recognition system
EP1073012A1 (fr) * 1999-07-30 2001-01-31 Eidgenössische Technische Hochschule Zürich Method and circuit for neuromimetic data processing
EP1327959A2 (fr) * 2002-01-11 2003-07-16 EADS Deutschland GmbH Neural network for modeling a physical system and method for constructing this neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SZABO, MIRUNA ET AL.: "Cooperation and biased competition model can explain attentional filtering in the prefrontal cortex", EUROPEAN JOURNAL OF NEUROSCIENCE, Oxford University Press, GB, Vol. 19, No. 6, April 2004, pages 1969-1977, XP008069137, ISSN: 0953-816X (mentioned in the application) *


Also Published As

Publication number Publication date
WO2006005665A3 (fr) 2006-12-07


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase