WO1998043159A1 - Associative neuron in an artificial neural network - Google Patents

Associative neuron in an artificial neural network Download PDF

Info

Publication number
WO1998043159A1
Authority
WO
WIPO (PCT)
Prior art keywords
signal
neuron
function
main
auxiliary
Prior art date
Application number
PCT/FI1998/000257
Other languages
Finnish (fi)
French (fr)
Inventor
Pentti Haikonen
Original Assignee
Nokia Oyj
Priority date
Filing date
Publication date
Application filed by Nokia Oyj filed Critical Nokia Oyj
Priority to AU65025/98A priority Critical patent/AU6502598A/en
Priority to DE69809402T priority patent/DE69809402T2/en
Priority to JP54092798A priority patent/JP3650407B2/en
Priority to US09/381,825 priority patent/US6625588B1/en
Priority to EP98910770A priority patent/EP0970420B1/en
Priority to AT98910770T priority patent/ATE227860T1/en
Publication of WO1998043159A1 publication Critical patent/WO1998043159A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means



Abstract

An associative artificial neuron comprises means for receiving a number of auxiliary input signals (A1 to An); means for forming from them a sum (U) weighted by coefficients (W1 to Wn); and means (13) for applying a non-linear function to the weighted sum (U) to generate a non-linear signal (V). In the invention the neuron further comprises means for receiving a main input signal (S) and means (22) for forming, on the basis of the main signal (S) and the non-linear signal (V), the function S0 = S OR V, which is used to generate a main output signal, and at least one of the three logical functions Y0 = S AND V, N0 = NOT S AND V, Na = S AND NOT V, and for using the thus obtained logical function to generate an additional output signal for the neuron.

Description

ASSOCIATIVE NEURON IN AN ARTIFICIAL NEURAL NETWORK
BACKGROUND OF THE INVENTION
The invention relates to an associative neuron used in artificial neural networks. In artificial neural networks, neurons derived from the McCulloch-Pitts (1943) neuron, such as different versions of the perceptron (Frank Rosenblatt 1957), are used. Neural networks are discussed, for example, in the article "Artificial Neural Networks: A Tutorial" by Anil K. Jain, Jianchang Mao and K.M. Mohiuddin in IEEE Computer, March 1996, pp. 31 to 44. In Fig. 1, signals X1 to Xn are inputs of an artificial neuron and Y is its output signal. The values of the input signals X1 to Xn can be continuously changing (analog) or binary quantities, and the output signal Y can usually be given both positive and negative values. W1 to Wn are weighting coefficients, i.e. synaptic weights, which can also be either positive or negative. In some cases, only positive signal values and/or weighting coefficients are used. Synapses 11-1 to 11-n of the neuron weight the corresponding input signal by the weighting coefficients W1 to Wn. A summing circuit 12 calculates a weighted sum U. The sum U is supplied to a thresholding function circuit 13, whose output signal is V. The threshold function can vary, but usually a sigmoid or a piecewise linear function is used, whereby the output signal is given continuous values. In a conventional neuron, the output signal V of the thresholding function circuit 13 is simultaneously the output signal Y of the whole neuron.
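The conventional neuron described above can be sketched in a few lines of Python (the function name and the values used below are illustrative, not from the patent; an abrupt comparator stands in for the threshold function):

```python
def conventional_neuron(x, w, threshold=0.0):
    """Conventional neuron of Fig. 1: weighted sum of the inputs X1..Xn
    (summing circuit 12) followed by an abrupt threshold (circuit 13).
    Returns the output signal Y as 1 (active) or 0 (inactive)."""
    u = sum(xi * wi for xi, wi in zip(x, w))
    return 1 if u > threshold else 0
```

With a sigmoid or piecewise linear function in place of the comparator, the same structure yields continuous output values, as the text notes.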
When neurons of this kind are used in artificial neural networks, the network must be trained, i.e. appropriate values must be found for the weighting coefficients W1 to Wn. Different algorithms have been developed for this purpose. A neural network that is capable of storing repeatedly supplied information by associating different signals, for example a certain input with a certain situation, is called an associative neural network. In associative neurons, different versions of what is known as the Hebb rule are often used. According to the Hebb rule, the weighting coefficient is increased whenever the input corresponding to the weighting coefficient is active and the output of the neuron should be active. The changing of the weighting coefficients according to these algorithms is called the training of the neural network. From previously known artificial neurons it is possible to assemble neural networks by connecting neurons in parallel to form layers and by arranging the layers one after the other. Feedback can be implemented in such networks by feeding output signals back as input signals. In wide networks assembled from neurons, however, the meaning of individual signals and even of groups of signals is blurred, and the network becomes more difficult to design and manage. To produce an attention effect, for example, the network operations would have to be strengthened in one place and weakened in another, but the present solutions do not provide any clear answers as to where, when and how this should be done.
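As a hedged illustration, one simple reading of the Hebb rule described above is the following Python sketch; the increment `delta` and the weight ceiling `w_max` are assumed parameters not specified in the text:

```python
def hebb_update(weights, inputs, output_should_be_active, delta=0.1, w_max=1.0):
    """One Hebbian training step: each weighting coefficient is increased
    when its input is active while the neuron's output should be active.
    `delta` and `w_max` are illustrative parameters."""
    if not output_should_be_active:
        return list(weights)
    return [min(w + delta, w_max) if x else w
            for w, x in zip(weights, inputs)]
```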
BRIEF DESCRIPTION OF INVENTION The object of the invention is to provide a method and equipment implementing the method in which the above problems of training a neural network can be solved. To put it more precisely, the object of the invention is to provide a mechanism by which useful additional information can be produced on the level of an individual neuron about the relations between the different input signals of the neuron. The mechanism must be flexible and versatile to make artificial neurons widely applicable. The mechanism must also be fairly simple so that the costs of manufacturing neurons can be kept low.
The object of the invention is achieved by a method and equipment that are characterized by what is stated in the independent claims. The preferred embodiments of the invention are claimed in the dependent claims.
The invention is based on expanding a conventional neuron with a specific expansion, i.e. a nucleus, through which a specific main input signal, i.e. the main signal, passes. The nucleus keys and adjusts the main signal by a signal obtained from the conventional part of the neuron, and forms between these signals the logical operations and/or functions needed to control neural networks. The processing power of a single neuron is thus increased as compared with previously known neurons, which process data only by means of weighting coefficients and threshold functions. On the other hand, a clear distinction between main signals and auxiliary signals makes neural networks easier to design, since training according to the Hebb rule is then easy to implement in such a way that each weighting coefficient is increased whenever the main signal and the auxiliary input signal concerned are simultaneously active. On the basis of the main signal (S) and a non-linear signal (V), the function S0 = S OR V is formed in the neuron of the invention and used to generate a main output signal, and in addition, at least one of the three logical functions Y0 = S AND V, N0 = NOT S AND V, Na = S AND NOT V is formed and used to generate an additional output signal for the neuron.
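The logical functions listed above can be illustrated with a small Python sketch of the nucleus logic for binary signals (the function name is illustrative):

```python
def nucleus_outputs(s, v):
    """Logic formed in the nucleus from the main signal S and the
    thresholded sum V (both 0 or 1). Returns (S0, Y0, N0, Na)."""
    s0 = int(bool(s or v))         # main output: S OR V
    y0 = int(bool(s and v))        # "Yes": association confirmed
    n0 = int(bool((not s) and v))  # "No": auxiliary active, main not
    na = int(bool(s and not v))    # "No association": surprise indicator
    return s0, y0, n0, na
```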
The neuron of the invention and the network consisting of such neurons learn quickly: even one example may suffice. The operation of the neuron of the invention and that of the networks consisting of such neurons are simple and clear.
BRIEF DESCRIPTION OF DRAWINGS
The invention will now be described in greater detail by means of preferred embodiments and with reference to the attached drawings, in which Fig. 1 is a general view of an artificial neuron, Fig. 2 is a general view of a neuron of the invention, Fig. 3 is a block diagram of the neuron of the invention, and Figs. 4 to 6 illustrate ways of implementing specific details of the neuron of the invention.
DETAILED DESCRIPTION OF THE INVENTION
In Fig. 2, the neuron according to a preferred embodiment of the invention comprises a main signal input S, an arbitrary number of auxiliary signal inputs A1, A2, ..., An, at least one controlling input C, at least one inhibiting input I, and a number of outputs. In the example of Fig. 2 the main output signal of the neuron is S0, and Y0, N0 and Na (or one or some of them) are auxiliary output signals. The input and output signals can be, for example, voltage levels.
Blocks 21-1, 21-2, ..., 21-n are synapses of the neuron, in which the weighting coefficient corresponding to the auxiliary signal A1, A2, ..., An concerned is stored. In practice, the synapses are, for example, circuit units.
Block 12 is a summing circuit, in which the output signals At1 to Atn of the synapses 21-1, 21-2, ..., 21-n are summed. Block 13 is a thresholding circuit, which can be implemented simply as a comparator that supplies an active output signal only if its input signal level, i.e. the output signal level of the summing circuit 12, exceeds a pre-set threshold value.
Block 22 comprises the neuron expansions of the invention. In the present application, the expansions are called the nucleus of the neuron. The function of the nucleus is, for example, to key and adjust the main signal S on the basis of the output signal of the thresholding circuit 13 and to form logical operations and/or functions between the signals. Particularly useful logical operations are the logical OR (signal S0) and the logical AND (signal Y0). Other logical operations can also be used in the same way as AND so that the main signal S is inverted first (signal N0) or so that the output signal V of the thresholding circuit 13 is inverted first (signal Na).
In a preferred embodiment of the invention, the nucleus 22 also comprises circuitry that deactivates the output signal S0 when a certain period of time has passed from the initiation of the signal, irrespective of what happens in the inputs of the neuron. The circuitry can also take care that a new output pulse cannot be initiated until a certain period of recovery has passed. To the nucleus 22 can also be connected an inhibiting input signal I (Inhibit), which inhibits all outputs when activated (forces them to an inactive state). The control input signal C (Control) controls the synapses' learning.
Fig. 3 is a block diagram of a neuron of the invention, the neuron here comprising three auxiliary signal inputs A1 to A3, and therefore three synapses 21-1 to 21-3, in addition to the main signal input. The expanded neuron of the invention can be implemented in various ways within the scope of the inventive idea disclosed above.
Figs. 4 to 6 show an embodiment of the neuron according to the present invention in which the input and output signals are voltage signals. In the embodiment of Figs. 4 to 6 the signal is called 'active', if its voltage is positive, and 'inactive', if its voltage is substantially zero.
Fig. 4 shows a way of implementing the synapses 21-1 to 21-n of the neuron of Fig. 3. In this solution the voltage corresponding to the weighting coefficient of the synapse is stored through a resistor 41 and a diode 42 in a capacitor 43 whenever the auxiliary signal Ai and the main signal S are simultaneously active. (A possible association between the main signal S and the key signal K is described in connection with gate 632 of Fig. 6.) The resistor 41 and the capacitor 43 define the time constant by which the voltage of the capacitor 43 grows. The diode 42 prevents the voltage from discharging through the AND gate 40. The voltage of the capacitor 43 is supplied to an operational amplifier 44 functioning as a voltage follower, the input impedance of the amplifier being very high (i.e. the discharging of the capacitor 43 caused by it is negligible). The output of the synapse is signal Ati, which is obtained from the input signal Ai by clamping it at the voltage level corresponding to the weighting coefficient by a diode 45 and a resistor 46. A second voltage follower 47 buffers the output signal. Whenever the input signal Ai is active, the output signal Ati is proportional to the current value of the weighting coefficient.
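The growth of the weight voltage can be illustrated with the standard first-order RC charging curve implied by resistor 41 and capacitor 43; the component values in the example below are assumptions for illustration, not values from the patent:

```python
import math

def capacitor_voltage(v_supply, r_ohm, c_farad, t_seconds, v0=0.0):
    """Voltage on the weight-storing capacitor 43 after the charging path
    through resistor 41 has been active for t seconds: a first-order RC
    charging curve with time constant tau = R * C."""
    tau = r_ohm * c_farad
    return v_supply + (v0 - v_supply) * math.exp(-t_seconds / tau)
```

For example, with an assumed 10 kOhm resistor and 1 uF capacitor (tau = 10 ms), the voltage reaches about 63% of the supply after one time constant, so the weight grows quickly at first and then saturates.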
Fig. 5 shows a way of implementing the summing block 12 of the neuron of Fig. 3. The voltages At1 to At3 obtained from synapses 21-1 to 21-3 are summed by a resistor network 50 to 53. (It is readily seen that the number of the inputs At1 to At3 and that of the resistors 51 to 53 is arbitrary.) The thresholding is performed by a comparator 54, and it is here abrupt, so that the output of the comparator 54 is active only when the summed voltage U in the positive input of the comparator 54 exceeds the threshold value in the negative input (the threshold value in the example of Fig. 5 being the output voltage of a constant-voltage source 55).
Fig. 6 shows a way of implementing the nucleus 22 of the neuron of Fig. 3. An OR circuit 602 generates a main output signal S0 if the inputted main signal S is active or the thresholded summed voltage V is active. The nucleus 22 contains a block 606, indicated by a dotted line, functioning as a delay circuit. In the example of Fig. 6 the delay circuit 606 comprises a buffer 608, an inverter 610, resistors 612 to 614 and capacitors 616 to 618. Normally the output of the delay circuit 606 is active, so the AND gate 604 allows the output signal to pass through. When the delay determined by the components of the delay circuit 606 has passed, the output pulse, inverted, reaches the AND gate 604 and deactivates the main output S0. S0 cannot be re-activated until the delayed output pulse in the output of the delay circuit 606 has ended. A logical AND operation Y0 is formed by AND circuit 620: the first element in the operation is the main signal S and the second element is the summed signal V, weighted by the weighting coefficients of the auxiliary signals A1 to An and subsequently thresholded. A corresponding AND operation N0 is formed by AND circuit 622, with the exception that the inverse value of the main signal S is first formed (i.e. the signal is inverted) by NOT circuit 626. The corresponding AND operation Na is formed by AND circuit 624, with the exception that the thresholded summed signal V is first inverted by NOT circuit 628. All the outputs can be inhibited by the I signal, which is inverted by NOT circuit 630 and then supplied, in inverted form, to AND circuits 620 to 624. The synapses are controlled by a K signal in accordance with the Hebb rule (cf. Fig. 2). A control signal C is used to define when learning is allowed at all. The generation of the key signal K is inhibited by AND circuit 632 when the control signal C is inactive.
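A behavioural sketch of the whole neuron of Figs. 3 to 6 in Python may clarify how the parts cooperate. The learning increment, the threshold, and the pulse-length and recovery counts are illustrative assumptions; the delay circuit 606 is modelled as a step counter rather than an RC network:

```python
class AssociativeNeuron:
    """Behavioural sketch of the neuron of Figs. 3 to 6 (binary signals).

    Illustrative assumptions: learning adds 0.6 per step, the threshold
    is 0.5, and the delay circuit 606 is a step counter.
    """

    def __init__(self, n_inputs, threshold=0.5, max_pulse=3, recovery=2):
        self.w = [0.0] * n_inputs   # synapse weights (capacitor voltages)
        self.threshold = threshold
        self.max_pulse = max_pulse  # steps before S0 is cut off
        self.recovery = recovery    # steps before S0 may restart
        self.pulse = 0
        self.rest = 0

    def step(self, s, a, c=1, i=0):
        # Synapses, summing circuit 12 and comparator 13.
        u = sum(wi * ai for wi, ai in zip(self.w, a))
        v = 1 if u > self.threshold else 0
        # Key signal K (gate 632): learn only when C allows and S is active.
        if s and c:
            self.w = [min(wi + 0.6, 1.0) if ai else wi
                      for wi, ai in zip(self.w, a)]
        # Main output S0 (OR circuit 602), gated by the inhibit signal I.
        s0 = int((s or v) and not i)
        # Delay circuit 606: limit pulse length, then enforce recovery.
        if self.rest > 0:
            self.rest -= 1
            s0 = 0
        elif s0:
            self.pulse += 1
            if self.pulse > self.max_pulse:
                s0 = 0
                self.pulse = 0
                self.rest = self.recovery
        else:
            self.pulse = 0
        # Auxiliary outputs (AND circuits 620 to 624), inhibited by I.
        y0 = int(bool(s and v and not i))
        n0 = int(bool((not s) and v and not i))
        na = int(bool(s and (not v) and not i))
        return s0, y0, n0, na
```

With these assumed values, a single presentation of S together with an auxiliary signal is enough for that auxiliary signal alone to recall S0 afterwards, consistent with the one-example learning claimed above.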
The additional output signals Y0, N0 and Na of the neuron according to the invention can be used, for example, as follows. An active signal Y0 (Y = "Yes") means that the main signal S and the auxiliary signals A correspond to each other, i.e. they have been associated. An active signal N0 (N = "No") means that the main signal S and the auxiliary signals A do not correspond to each other: an auxiliary signal A is active, but the main signal S is not. An active signal Na ("No association") indicates the situation where the main signal S is active but the auxiliary signals A are not. One characteristic of a neural network is its ability to predict a situation. An active signal Na indicates that there is a new input signal S which is not predicted by the auxiliary signals A. Signal Na is thus a 'surprise indicator', which can be used to draw attention to new, surprising signals. The control signal C controls, or keys, the K signal. It is not expedient for the network to learn all the situations that occur. When a normal human being encounters a new situation, he/she either concludes or instinctively knows whether the situation is worth learning. This kind of focusing of attention can be simulated by the control signal C. In the above example the auxiliary signals A1 to An can be given continuously changing values and the main signal S can be given two different values. The threshold function is here a simple comparison operation. The invention is not limited to this, but can be applied more broadly, for example so that the main signal S and the key signal K can also be given continuous values. The threshold function can be replaced with any appropriate non-linear continuous or step function. The neuron's learning is then not limited to two mutually exclusive situations, allowed or inhibited.
Instead, the learning process is divided into different degrees, or is a continuum of degrees, whereby the strength of the K signal is adjusted on the basis of the main signal S. In the normal state of the neural network (when the network is not being trained), the key signal K is not more than a fraction of the main signal S, if the S signal is active. When the network is to be trained, the value of the key signal K approaches the value of the main signal S. In practice, the binary AND gates in Figs. 4 and 6 should then be replaced, for example, with analogue multipliers, adjustable amplifiers, attenuators or the like. In practice, a huge number of neurons (usually 10^4 to 10^6) are needed in neural networks. The neuron of the invention can be implemented by a process suitable for large-scale integration, for example by the EEPROM technique used to manufacture semiconductor speech storage circuits. Alternatively, the neurons and the neural network can be simulated by a computer program executed in a digital processor. The values corresponding to the weighting coefficients of the synapses of the neurons are then stored in memory locations (e.g. in a matrix variable) and the other parts of the neuron are implemented by software logic. The invention can be applied in areas where information is processed using extensive artificial neural networks. Such areas include, for example, the processing of audiovisual information, the interpretation of sensory information in general and of speech and images in particular, and the formation of responses. The invention is applicable in many modern fields of industry, such as human/machine interfaces, personal electronic assistants and/or means of communication, multimedia, virtual reality, robotics, artificial intelligence and artificial creativity.
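The continuous-valued variant described above can be sketched by replacing AND with multiplication and OR with a maximum; the fraction `k_rest` passed to the key signal in the untrained state is an assumed value:

```python
def analog_nucleus(s, v, inhibit=0.0, train=False, k_rest=0.1):
    """Continuous-valued nucleus sketch: AND becomes multiplication and
    OR becomes max; all signals lie in [0, 1]. `k_rest` (the fraction of
    S passed to K when not training) is an assumed value."""
    gate = 1.0 - inhibit                # inhibition attenuates the outputs
    s0 = max(s, v) * gate               # main output
    y0 = s * v * gate                   # degree of association
    na = s * (1.0 - v) * gate           # degree of surprise
    k = s * (1.0 if train else k_rest)  # key signal controlling learning
    return s0, y0, na, k
```

Here learning is no longer all-or-nothing: K is a small fraction of S in normal operation and approaches S when training is enabled, as described above.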
It will be obvious to a person skilled in the art that with the advancement of technology, the basic idea of the invention can be implemented in many different ways. The invention and its embodiments are thus not limited to the above examples but they can vary within the scope of the claims.

Claims

1. A method of forming output signals of an associative artificial neural network by receiving a number of auxiliary signals (A₁ to Aₙ); forming a corresponding weighting coefficient (W₁ to Wₙ) for each auxiliary signal (A₁ to Aₙ); forming from the auxiliary signals (A₁ to Aₙ) a sum (U) weighted by the corresponding coefficients (W₁ to Wₙ); applying a non-linear function to the weighted sum (U) to generate a non-linear signal (V); c h a r a c t e r i z e d by receiving a main signal (S), which can be associated with the auxiliary signals (A₁ to Aₙ) such that the weighting coefficient (W₁ to Wₙ) of each auxiliary signal is increased when the main signal (S) and the corresponding auxiliary signal (A₁ to Aₙ) are simultaneously active; forming, on the basis of the main signal (S) and the non-linear signal (V), a function (S0) S OR V, which is used to generate a main output signal, and at least one of the three logical functions Y0=S AND V, N0=NOT S AND V, Na=S AND NOT V, and using said logical function to generate an additional output signal for the neuron.
2. An associative artificial neuron comprising means (11₁ to 11ₙ; 21₁ to 21ₙ) for receiving a number of auxiliary signals (A₁ to Aₙ) and forming a corresponding coefficient (W₁ to Wₙ) for each auxiliary signal (A₁ to Aₙ); means (12) for forming from the auxiliary signals (A₁ to Aₙ) a sum (U) weighted by the corresponding coefficients (W₁ to Wₙ); means (13) for applying a non-linear function to the weighted sum (U) to generate a non-linear signal (V); c h a r a c t e r i z e d by further comprising means for receiving a main signal (S), which can be associated with the auxiliary signals (A₁ to Aₙ) such that the weighting coefficient (W₁ to Wₙ) of each auxiliary signal is increased when the main signal (S) and the corresponding auxiliary signal (A₁ to Aₙ) are simultaneously active; and means (22) for forming, on the basis of the main signal (S) and the non-linear signal (V), a function (S0) S OR V, which is used to generate a main output signal, and at least one of the three logical functions Y0=S AND V, N0=NOT S AND V, Na=S AND NOT V, and using the thus obtained logical function to generate an additional output signal for the neuron.
3. A neuron according to claim 2, characterized by said non-linear function being a threshold function and the non-linear signal (V) obtained by the function having a first state and a second state.
4. A neuron according to claim 2, characterized by the non-linear function being a step function with more than two steps.
5. A neuron according to claim 2, characterized by the non-linear function being a continuous function.
6. A neuron according to claim 2, characterized by the main output signal (S0) having a first state and a second state and the neuron further comprising means (606) for setting an upper limit to the length of time that said main output signal (S0) is in the second state.
7. A neuron according to claim 6, characterized by the neuron further comprising means (606) for setting a lower limit to the length of time that said main output signal (S0) remains in the first state after having been in the second state.
8. A neuron according to any one of claims 2 to 7, characterized by further comprising means (40, 632) for adjusting the neuron's learning in response to an external control signal (C).
9. A neuron according to claim 8, characterized by the means (40, 632) for adjusting the neuron's learning having two states, whereby the neuron's learning is either allowed or inhibited.
10. A neuron according to any one of claims 2 to 9, c h a r a c t e r i z e d by further comprising means (630, 604, 620 to 624) for forcing at least one output signal (S0, Y0, N0, Na) to a predetermined state in response to an external inhibiting signal (I).
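Claims 6 and 7 describe timing means (606) that set an upper limit on how long the main output signal S0 may remain in its second (active) state, and a lower limit on how long it then remains in the first (inactive) state. As a discrete-time illustration only, the behaviour can be sketched as below; the class, parameter names and sample counts are hypothetical assumptions, not taken from the patent.

```python
class OutputTimer:
    """Sketch of the timing means of claims 6 and 7: bound the active time
    of the main output S0, then enforce a minimum inactive (rest) time."""

    def __init__(self, max_active=3, refractory=2):
        self.max_active = max_active  # upper limit on consecutive active samples
        self.refractory = refractory  # lower limit on inactive samples afterwards
        self.active_count = 0
        self.rest_count = 0

    def step(self, s0):
        """Gate one sample of the raw main output s0 (0/1)."""
        if self.rest_count > 0:
            # Still within the enforced rest period: output held inactive.
            self.rest_count -= 1
            self.active_count = 0
            return 0
        if s0:
            self.active_count += 1
            if self.active_count > self.max_active:
                # Upper limit reached: force the output inactive and
                # start the rest period.
                self.rest_count = self.refractory
                self.active_count = 0
                return 0
            return 1
        self.active_count = 0
        return 0
```

With `max_active=2` and `refractory=2`, a constantly active input yields two active samples, then a forced-off sample plus two rest samples, before the output may activate again.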
PCT/FI1998/000257 1997-03-26 1998-03-23 Associative neuron in an artificial neutral network WO1998043159A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
AU65025/98A AU6502598A (en) 1997-03-26 1998-03-23 Associative neuron in an artificial neutral network
DE69809402T DE69809402T2 (en) 1997-03-26 1998-03-23 ASSOCIATIVE NEURON IN AN ARTIFICIAL NEURAL NETWORK
JP54092798A JP3650407B2 (en) 1997-03-26 1998-03-23 Associative neurons in artificial neural networks
US09/381,825 US6625588B1 (en) 1997-03-26 1998-03-23 Associative neuron in an artificial neural network
EP98910770A EP0970420B1 (en) 1997-03-26 1998-03-23 Associative neuron in an artificial neural network
AT98910770T ATE227860T1 (en) 1997-03-26 1998-03-23 ASSOCIATIVE NEURON IN AN ARTIFICIAL NEURAL NETWORK

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FI971284 1997-03-26
FI971284A FI103304B (en) 1997-03-26 1997-03-26 Associative neuron

Publications (1)

Publication Number Publication Date
WO1998043159A1 true WO1998043159A1 (en) 1998-10-01

Family

ID=8548483

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI1998/000257 WO1998043159A1 (en) 1997-03-26 1998-03-23 Associative neuron in an artificial neutral network

Country Status (9)

Country Link
US (1) US6625588B1 (en)
EP (1) EP0970420B1 (en)
JP (1) JP3650407B2 (en)
AT (1) ATE227860T1 (en)
AU (1) AU6502598A (en)
DE (1) DE69809402T2 (en)
ES (1) ES2186139T3 (en)
FI (1) FI103304B (en)
WO (1) WO1998043159A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002052500A1 (en) * 2000-12-22 2002-07-04 Nokia Corporation Artificial associative neuron synapse

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
US7672918B2 (en) * 2007-02-05 2010-03-02 Steve Adkins Artificial neuron
US11354572B2 (en) 2018-12-05 2022-06-07 International Business Machines Corporation Multi-variables processing neurons and unsupervised multi-timescale learning for spiking neural networks
EP3839833A1 (en) * 2019-12-16 2021-06-23 ams International AG Neural amplifier, neural network and sensor device

Citations (3)

Publication number Priority date Publication date Assignee Title
US5299285A (en) * 1992-01-31 1994-03-29 The United States Of America As Represented By The Administrator, National Aeronautics And Space Administration Neural network with dynamically adaptable neurons
US5467429A (en) * 1990-07-09 1995-11-14 Nippon Telegraph And Telephone Corporation Neural network circuit
US5535309A (en) * 1992-10-05 1996-07-09 The Research Foundation, State University Of New York At Buffalo Single layer neural network circuit for performing linearly separable and non-linearly separable logical operations

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
US3602888A (en) * 1967-12-14 1971-08-31 Matsushita Electric Ind Co Ltd Learning device
US3601811A (en) * 1967-12-18 1971-08-24 Matsushita Electric Ind Co Ltd Learning machine
FR2051725B1 (en) * 1969-07-14 1973-04-27 Matsushita Electric Ind Co Ltd
US5515454A (en) * 1981-08-06 1996-05-07 Buckley; B. Shawn Self-organizing circuits
US4874963A (en) * 1988-02-11 1989-10-17 Bell Communications Research, Inc. Neuromorphic learning networks
JPH0782481B2 (en) * 1989-12-26 1995-09-06 三菱電機株式会社 Semiconductor neural network
JP2760145B2 (en) * 1990-09-26 1998-05-28 三菱電機株式会社 Knowledge information processing device
US5172204A (en) * 1991-03-27 1992-12-15 International Business Machines Corp. Artificial ionic synapse
US5454064A (en) * 1991-11-22 1995-09-26 Hughes Aircraft Company System for correlating object reports utilizing connectionist architecture
US5446829A (en) * 1993-06-24 1995-08-29 The United States Of America As Represented By The Department Of Health And Human Services Artificial network for temporal sequence processing
US5448684A (en) 1993-11-12 1995-09-05 Motorola, Inc. Neural network, neuron, and method for recognizing a missing input valve
DE69430529T2 (en) 1994-07-28 2003-01-16 International Business Machines Corp., Armonk Daisy chain circuit for serial connection of neuron circuits

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
US5467429A (en) * 1990-07-09 1995-11-14 Nippon Telegraph And Telephone Corporation Neural network circuit
US5299285A (en) * 1992-01-31 1994-03-29 The United States Of America As Represented By The Administrator, National Aeronautics And Space Administration Neural network with dynamically adaptable neurons
US5535309A (en) * 1992-10-05 1996-07-09 The Research Foundation, State University Of New York At Buffalo Single layer neural network circuit for performing linearly separable and non-linearly separable logical operations

Non-Patent Citations (1)

Title
IEEE COMPUTER, Volume 29, No. 3, March 1996, ANIL K. JAIN et al., "Artificial Neural Networks: A Tutorial", pages 31-44. *

Cited By (2)

Publication number Priority date Publication date Assignee Title
WO2002052500A1 (en) * 2000-12-22 2002-07-04 Nokia Corporation Artificial associative neuron synapse
US7092921B2 (en) 2000-12-22 2006-08-15 Nokia Corporation Artificial associative neuron synapse

Also Published As

Publication number Publication date
US6625588B1 (en) 2003-09-23
JP3650407B2 (en) 2005-05-18
DE69809402T2 (en) 2003-08-21
FI971284A0 (en) 1997-03-26
FI971284A (en) 1998-09-27
AU6502598A (en) 1998-10-20
ATE227860T1 (en) 2002-11-15
EP0970420A1 (en) 2000-01-12
FI103304B1 (en) 1999-05-31
ES2186139T3 (en) 2003-05-01
JP2001517341A (en) 2001-10-02
EP0970420B1 (en) 2002-11-13
DE69809402D1 (en) 2002-12-19
FI103304B (en) 1999-05-31


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM GW HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
CFP Corrected version of a pamphlet front page
CR1 Correction of entry in section i

Free format text: PAT. BUL. 39/98 UNDER (54) THE TITLE SHOULD READ "ASSOCIATIVE NEURON IN AN ARTIFICIAL NEUTRAL NETWORK"

ENP Entry into the national phase

Ref document number: 1998 540927

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 1998910770

Country of ref document: EP

Ref document number: 09381825

Country of ref document: US

WWP Wipo information: published in national office

Ref document number: 1998910770

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: CA

WWG Wipo information: grant in national office

Ref document number: 1998910770

Country of ref document: EP