US6708159B2 - Finite-state automaton modeling biologic neuron - Google Patents

Finite-state automaton modeling biologic neuron

Info

Publication number: US6708159B2
Application number: US09/846,053
Other versions: US20020184174A1
Inventor: Rachid M. Kadri
Assignee: Individual (original and current)
Legal status: Expired - Lifetime


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063: Physical realisation using electronic means

Definitions

  • The present invention may also be used to synthesize an algorithmic state machine (a state machine that executes a sequence of states according to a predefined algorithm).
  • The state-machine input-output relation is then no longer expressed in terms of combinatorial expressions. Instead, state transitions are expressed in terms of discrete time expressions. This is superior to Boolean and threshold logic expressions because of the analytical nature of discrete mathematics.

Abstract

A finite state electrical automaton modeled after a human neuron comprises a plurality of weighted inputs that pass into a state computing unit. A feedback mechanism changes the weights of the inputs as needed to steer the response of the automaton toward a desired output. A clock signal allows the automaton to function as a discrete time system. Unlike threshold gates, the present automaton is capable of outputting an n-bit digital value analogous to a cell membrane potential. Because this automaton can output more than two simple states, it is a better building block for neural nets.

Description

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention is directed to an automaton for use as a logical building block, and particularly to an automaton suitable for use in a neural architecture.
2. Description of the Prior Art
Modern computing owes much of its logical structure to John von Neumann, who incidentally hypothesized that circuits as complex as the human nervous system would never be realized. The logic structure that pervades the computing world relies on one of two models. The first model is a simple all-or-nothing state model. Beginning with vacuum tubes and extending to the present transistors, the element is either on or off. Ones and zeros may be assigned to either state as needed, so long as the assignment is consistent within the circuit. Multiple elements may be assembled to form gates. The gate reacts to input depending on its nature and provides an output of either on or off. Typical gates include AND, OR, XOR, NOT, or the like. While helpful for circuit design, this sort of logic is not conducive to the creation of automata capable of inferential decision making.
The second model was derived from the study of nerve cells and relies on threshold logic. Threshold logic implies that there is a gate, and when inputs to that gate meet or exceed the threshold, the gate is triggered and an output is generated. While adequate for simple modeling of the action-potential behavior of the nerve cell, these models cannot completely emulate biologic nerve cell behavior. Further, a single threshold logic gate cannot implement an XOR function. The lack of synthesis procedures also prevents this logic from being adapted into modern logic design procedures. Attempts to use either of these models to create useful automata based on neuronal signaling have failed.
These failures extend past the creation of useful automata. Threshold logic, as applied to neural networks, led to neural net circuits whose dynamics are neither controllable nor observable due to the analog nature of the components. Specifically, because analog systems are flow-through systems, an observer cannot tell exactly what the system is doing or when it will do it. Threshold gates further fail to model inhibitory mechanisms in human nerve cells with any success.
To date, modern research in neurobiology has contributed to further understanding nerve cell signaling and uncovering vast complexities in synaptic organizations. To this effect, elaborate structures in chemical synaptic connections with extensive contacts between cells have also been observed. The perception that an axon terminates onto a single synapse has consequently been revised. To date, there has not been a logic circuit based on the electrochemical communication between cells.
SUMMARY OF THE INVENTION
An automaton may be constructed digitally, modeled on the ability of nerve cells to operate at a plurality of discrete electrochemical states. The automaton of the present invention may comprise a plurality of inputs. To make the automaton more adaptable to represent synaptic arrangements more complex than the ones modeled by threshold logic, these inputs may each be weighted independently of the others, or weights may be assigned to combinations of inputs, also reflecting the biological phenomena of coupling between presynaptic terminals. These weighted inputs are fed into a state computing unit. An output is generated by the state computing unit. The output is simultaneously fed back to a weight computing unit together with the inputs of the automaton. The weight computing unit in turn controls dynamically the weights assigned to each input or combination of inputs. A digital clock drives the inputs, the state computing unit, and the weight computing unit.
The output of the automaton is thus a digital value having more than two state levels that more closely reflects the electrochemical communication between biologic neurons. With this automaton as a building block, more advanced neural architectures may be constructed.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a graph of synaptic currents at various membrane resting potentials in a nerve cell;
FIG. 2 illustrates a schematic version of a biologic nerve synapse; and
FIG. 3 illustrates the automaton of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The study of neurophysiology reveals that nerve cells communicate with one another on many levels. The most obvious form of communication is through an action potential, an electrical signal in a nerve cell. Once sufficient stimulus is applied to the nerve cell, the action potential is emitted down the axon of the nerve cell. This sort of signal is used mainly for long range signaling. For example, such a signal may be used from motoneurons to the central nervous system. It is upon this sort of communication that threshold logic gates were modeled. While permutations do exist within threshold logic gates, such as the piecewise linear threshold gate and the sigmoid threshold gate, such threshold gates are analog in nature and do not exhibit a plurality of discrete states. Further, neural nets built of these sorts of electrical neurons still fail to show the ability for inferential thinking.
Another form of neural communication, ignored in large part by the proponents of artificial intelligence and automaton construction, is the electrochemical communication across synapses. A brief discussion of this biologic phenomenon is in order. Nerve cells exhibit a natural electrical potential relative to their surroundings, called the resting potential. Almost all nerve cells are negative inside, having resting potentials in the range of −20 to −100 mV, with approximately −70 mV being typical. Information may be communicated between cells by variations in the potential of the cell. Such variations in the potential are achieved by the presence or absence of neurotransmitters that impact the action of the sodium pump and the diffusion of potassium across the cellular membrane. Variations in potentials result in current flow between cells. The “amount” of information communicated between cells is reflected in the graded size of the postsynaptic potential. Because neurons in the brain are so tiny, with bodies a few microns in diameter, and so close to each other, with axons a few tens of microns long, they do not use action potentials to conduct information, but rather currents that vary in size depending on the strength of the input stimulus that gave rise to them. FIG. 1 illustrates a number of synaptic currents at various membrane potentials. Note that these currents decay with time and reflect a minute current flow between cells as a result of change in potentials. However, this sort of communication has never been captured by neural net or automaton design because there has not been a good model with which to emulate it.
Research has shown that the neurotransmitters which effectuate the change in the potential of the postsynaptic cell are released in discrete quanta by the presynaptic terminals of other cells. Further, the variations of the resting potential of the postsynaptic cell are found to be at discrete (graded) states as a result of the discrete quanta of neurotransmitters that cross the membrane of the postsynaptic cell. The output of the cell can no longer be interpreted as true or false, but rather as a quantity to be dealt with. The discrete quantal nature of the release of neurotransmitters and the consequent graded postsynaptic potential lends itself to digital modeling.
For further assistance in understanding this biologic phenomenon, a synapse and the postsynaptic cell are illustrated in FIG. 2. A synaptic junction 10 comprises a synapse 12, a number of presynaptic terminals 14, 16, and a postsynaptic cell 30. Presynaptic terminals may be inhibitory presynaptic terminals 14 or excitatory presynaptic terminals 16. Further, excitatory presynaptic terminals 16 may be inhibited by a presynaptic terminal inhibitor 18. This activity reflects a coupling between the inputs. Presynaptic terminals 14, 16 release quanta of neurotransmitters 20 into the synapse 12 from which the postsynaptic cell 30 receives them. Further, the function of inhibitory synaptic terminal 18 (um) is to reduce or block the output from presynaptic terminal 16 (um−1). Excitatory presynaptic terminals 16 release neurotransmitters that raise the resting potential in the postsynaptic cell 30 and inhibitory presynaptic terminals 14 release neurotransmitters that lower the resting potential in the postsynaptic cell 30. The combination of both excitatory and inhibitory terminals results in a graded potential. Each presynaptic terminal 14, 16, 18, represented by ui, may be a one or zero. A value of one means that the terminal in question is conducting. Conversely, a value of zero means that the terminal in question is not conducting.
Postsynaptic cell 30 includes a cell membrane 32, an axon 36, and a plurality of connection branches 38. The introduction of the neurotransmitters 20 across the cell membrane 32 of the postsynaptic cell 30 causes an electrochemical reaction that changes the potential of the cell membrane 32. The resulting potential variations 34 propagate down the axon 36 to the connection branches 38 where neurotransmitters 20 may be released again across another synapse (not shown). In FIG. 2, Ep represents the normal resting potential of the cell 30. Ew is the equivalent of the electrochemical energy carried by the quanta of neurotransmitters 20 released by all active presynaptic terminals 14, 16, either excitatory or inhibitory. When these quanta of neurotransmitters enter the membrane 32 with an energy equal to Ew, there is a movement of ions which gives rise to a pulse of potential amplitude Ep+1 measurable in terms of current and voltage. Thus, Ep+1 may be expressed as follows:
Ep+1 = Ep + Ew  (1)
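Equation (1) can be checked with illustrative numbers (the specific millivolt values below are assumptions made for the sake of the sketch, not figures from the patent):

```python
# Illustrative values only: a resting potential Ep of -70 mV receiving
# neurotransmitter quanta carrying a net electrochemical energy Ew of +15 mV.
E_p = -70          # current membrane potential, Ep
E_w = 15           # net energy of the released quanta, Ew
E_p1 = E_p + E_w   # equation (1): Ep+1 = Ep + Ew
print(E_p1)        # -55
```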
The present invention lies in its ability to model the electrochemical communication exhibited by cells as described with respect to FIG. 2. Reference is made to FIG. 3, wherein the structure of an automaton 100 of the present invention is illustrated. Automaton 100 comprises a plurality of m inputs 102, a weight storage unit 104, a state computing unit 106, a weight computing unit 108, and an output 110. A clock 112 may provide a clock signal from a point external to the automaton 100.
There may be up to m inputs 102, denoted in FIG. 3 as ui, 0≦i≦m. Each of the inputs 102 may take on values of zero or one. An active input is set at a one. This mimics the biologic cell described above. Inputs 102 may be assembled into patterns. If there are m inputs 102, then there could be up to 2^m different input patterns as stimulus.
To model effectively the coupling between inputs, weight storage unit 104 has up to 2^m weight elements wi′, 0≦i′≦2^m, that are controlled by the weight computing unit 108. wi′ models the variable number of quanta of neurotransmitters released by the presynaptic terminals. The output of the weight storage unit 104 is intended to act as the equivalent of Ew described above. In particular, Ew, in this context, may be expressed as follows:

Ew = Σj [wj * fj(ui)]  (2)

where fj(ui) represents the jth pattern of input terminals contributing to Ew and wj represents the weights that correspond to that given pattern of inputs. For example, all inputs could be weighted with a single weight w0; some portion of the inputs could be weighted with w1, another portion with w2; every input could have its own weight, or any variation thereof. The use of fj(ui) allows the present invention to model various patterns of synaptic arrangements and also model coupling between presynaptic terminals.
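As a concrete sketch of equation (2), the code below evaluates Ew for a small pattern/weight assignment. Taking fj(ui) to be 1 when every terminal in pattern j is active is an assumed reading of the pattern function (the patent leaves fj abstract), and the pattern and weight values are hypothetical.

```python
def compute_Ew(u, patterns, weights):
    """Equation (2): Ew = sum over j of wj * fj(ui).

    u: list of 0/1 input terminals; patterns: index tuples defining which
    terminals each fj couples together; weights: the corresponding wj values.
    Assumption: fj(u) = 1 iff every terminal in pattern j is active.
    """
    Ew = 0
    for pattern, w in zip(patterns, weights):
        f_j = 1 if all(u[i] for i in pattern) else 0
        Ew += w * f_j
    return Ew

u = [1, 1, 0]                    # three input terminals u0..u2
patterns = [(0,), (1,), (0, 1)]  # two singletons plus one coupled pair
weights = [2, 3, -4]             # a negative weight models inhibition
print(compute_Ew(u, patterns, weights))  # 2 + 3 - 4 = 1
```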
State computing unit 106 mimics the cell membrane where the electrochemical interaction takes place. State computing unit 106 generates an output at output 110. Output 110 represents a finite set of output terminals from the state computing unit 106, each one taking on values of zero or one. This represents the digitized resting potential of the automaton 100. This may be an n-bit digital value. n may be arbitrarily assigned and is determined by the number of state levels that the automaton 100 can output. That value can be fed as is, or encoded (to emulate current digital computing) before being passed to another automaton 100. The output of state computing unit 106 represents Ep+1 and equation 1, above, is the source of this output.
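The patent states only that output 110 is an n-bit digital value; one plausible encoding, sketched below, clamps the state level to the representable signed range and emits its two's-complement bits. The signed clamp-and-encode scheme is an assumption, not a detail from the patent.

```python
def encode_state(E, n):
    """Clamp state level E to the signed n-bit range and return its bits,
    most significant first (two's-complement encoding; an assumed scheme)."""
    lo, hi = -(2 ** (n - 1)), 2 ** (n - 1) - 1
    E = max(lo, min(hi, E))
    # Python ints act as infinite two's complement, so shifting and masking
    # a negative E yields its sign-extended bits directly.
    return [(E >> b) & 1 for b in reversed(range(n))]

print(encode_state(-3, 4))  # [1, 1, 0, 1], i.e. -3 in 4-bit two's complement
```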
Weight computing unit 108 receives the inputs 102 prior to weighting as well as receives the output of state computing unit 106. Weight computing unit 108 may evaluate Ew prior to introduction at the state computing unit 106. Finally, weight computing unit 108 may dictate what weights wi′ are placed in weight storage unit 104. Usually, the weight computing unit 108 first computes the weights, perhaps arbitrarily, and stores them in the weight storage unit 104. Weight computing unit 108 will update the weights during learning, as discussed below, or as needed to alter the output at output 110 so as to fall within a predefined state level band as desired.
Clock 112 forces automaton 100 to act as a discrete time system. Clock 112 is connected to the inputs 102, the state computing unit 106, and the weight computing unit 108. Clock 112 has a period of Δt that corresponds to the amount of time the state computing unit 106 takes to calculate the output Ep+1 from the weighted inputs Ew. For example, if a set of inputs is received at inputs 102 at time t0, state computing unit 106 outputs a signal at output 110 at time t0+Δt. Clock 112 forces the synchronization of automaton 100. In particular, when the clock 112 sends a pulse, a fresh input pattern fj(ui) is presented at inputs 102, weighted in weight storage unit 104, and fed into state computing unit 106. Upon receipt of this new input, a new state level is generated by the state computing unit 106 according to a state transition function which combines equations (1) and (2) and is defined as follows:
Ep(t+Δt) = h[Ep(t), W(t), u(t)]  (3)
This computing architecture permits an output that is structured as a set of discrete state levels with positive and negative polarity, analogous to the membrane potential variations that occur during excitation and inhibition of a nerve cell. External input stimulus, being a combination of excitatory and inhibitory inputs from the set ui, may bring about any state within that set. As input stimulus is applied with changing patterns, the automaton 100 will output a different state level from within the set available. This output, defined between time t and time t+n′Δt after some number n′ of different input patterns (note that n′ has no relation to n mentioned above), represents the state trajectory of the automaton 100, just as the cell's membrane outputs a train of pulses of different amplitudes. Consequently, for m input and output terminals, there could be up to 2^m different states within the output state trajectory. Each state trajectory represents a response in time of the internal state of the automaton 100.
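The clocked behavior described above can be sketched as a discrete-time system: at each pulse the automaton weights the presented pattern, applies the transition of equation (3), and the successive outputs form the state trajectory. The additive form h = Ep + Ew and the pattern/weight layout are assumptions, reading equation (3) as the combination of equations (1) and (2) with a simple all-active pattern function.

```python
class NeuroAutomaton:
    """Minimal discrete-time sketch of automaton 100 (FIG. 3)."""

    def __init__(self, patterns, weights, E_rest=0):
        self.patterns = patterns      # index tuples defining each fj
        self.weights = list(weights)  # the corresponding wj
        self.E_p = E_rest             # current state level Ep

    def step(self, u):
        """One clock period: present inputs u, compute Ew, update Ep."""
        Ew = sum(w * (1 if all(u[i] for i in p) else 0)
                 for p, w in zip(self.patterns, self.weights))
        self.E_p = self.E_p + Ew  # equation (3) with h(Ep, W, u) = Ep + Ew
        return self.E_p

a = NeuroAutomaton(patterns=[(0,), (1,), (0, 1)], weights=[2, 3, -4])
trajectory = [a.step(u) for u in ([1, 0, 0], [1, 1, 0], [0, 1, 0])]
print(trajectory)  # [2, 3, 6] -- the state trajectory over three pulses
```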
It is well known in the study of neural nets to provide a learning function that trains the neural net to react, i.e., to provide a desired output for given inputs. The same is true of the present invention. Automaton 100 is said to be in the learning mode when it is executing the following function:
wj = g[fj(ui), Ep, Ep+1]  (4)
This learning function may be an error-correction learning algorithm, a memory-based learning algorithm, a Hebbian learning algorithm, a competitive learning algorithm, or a Boltzmann learning algorithm, as needed or desired. During the learning process, the weights may be altered discretely. Further, because this is a discrete system, the set of states can be traced backwards to observe how the system arrived at its final state.
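As a concrete illustration of the learning mode wj = g[fj(ui), Ep, Ep+1], the sketch below assumes a simple Hebbian-style rule for g; the patent leaves the choice of algorithm open, and the learning rate eta and the exact update form are assumptions made here for illustration only:

```python
# Hedged sketch of the learning mode. A Hebbian-style rule is assumed;
# the patent equally permits error-correction, memory-based, competitive,
# or Boltzmann learning. eta and the update form are illustrative choices.

def learn(weights, pattern, e_p, e_p1, eta=0.1):
    """Discretely adjust each weight in proportion to its pattern
    component and the observed state change Ep+1 - Ep."""
    delta = e_p1 - e_p
    return [w + eta * f * delta for w, f in zip(weights, pattern)]

# Only the active pattern component's weight changes:
print(learn([1.0, 0.0], [1, 0], e_p=0, e_p1=2, eta=0.5))  # → [2.0, 0.0]
```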
If the automaton 100 is configured not to return to its original resting state, as a normal biological cell does, the neuron may also synthesize Mealy and Moore automata. In such a configuration, the last Ep+1 remains the active state until it is changed by a new input pattern received at inputs 102.
The architecture of automaton 100, as illustrated in FIG. 3, may be implemented using an ASIC. Alternatively, it may equivalently be implemented using a microprocessor, a microcontroller, a programmable logic device (PLD), a complex PLD (CPLD), a programmable arithmetic logic unit (ALU), a field programmable gate array (FPGA), or the like as needed or desired. Weight storage unit 104 may be implemented using static or dynamic RAM. Alternatively, it may be implemented with flash memory, ROM, EPROM, EEPROM, or the like as needed or desired. As yet another alternative, the entire automaton 100 may be modeled in software run by an appropriately powered microprocessor.
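As one reading of the software alternative above, the whole automaton can be modeled in a few lines. This is a minimal sketch assuming the state-update convention of equations (1) through (3); the class and method names are illustrative, not from the patent:

```python
# Minimal software model of the automaton (a sketch, not the patented
# hardware): stored weights and a clocked, accumulating state update.
# Per the description, the last state persists until a new pattern arrives.

class NeuroAutomaton:
    def __init__(self, weights):
        self.weights = list(weights)  # one weight per input pattern component
        self.state = 0.0              # initial (resting) state Ep

    def tick(self, pattern):
        """One clock pulse: weight the fresh input pattern and
        accumulate it into the state (Ep+1 = Ep + Ew)."""
        e_w = sum(w * f for w, f in zip(self.weights, pattern))
        self.state += e_w
        return self.state

a = NeuroAutomaton(weights=[1.0, -1.0])
print(a.tick([1, 0]))  # → 1.0 (excitatory input raises the state)
print(a.tick([0, 1]))  # → 0.0 (inhibitory input returns it toward rest)
```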
The present invention is better suited to model nerve cell behavioral change than the previous threshold model. As such, the present invention constitutes an appropriate building block for artificial nervous system architectures. Automaton 100, termed herein a neuro-automaton, represents an upper-level model of logic implemented as a discrete-time system. Thus, multiple automata 100 may be assembled together, potentially communicating with one another, in cascaded layers to build adaptive filters, perform logic and signal processing (just as brain cells do), and the like as needed or desired.
The present invention may also be used to synthesize an algorithmic state machine (a state machine that executes a sequence of states according to a predefined algorithm). The state machine's input-output relation is then no longer expressed in terms of combinatorial expressions; instead, state transitions are expressed in terms of discrete-time expressions. This is superior to Boolean expressions and threshold logic expressions because of the analytical nature of discrete mathematics.
The present invention may, of course, be carried out in other specific ways than those herein set forth without departing from the scope and the essential characteristics of the invention. The present embodiments are therefore to be construed in all aspects as illustrative and not restrictive and all changes coming within the meaning and equivalency range of the appended claims are intended to be embraced therein.

Claims (17)

What is claimed is:
1. An automaton comprising:
a plurality of inputs, each capable of assuming one of two states;
a weight storage unit operatively connected to said inputs;
a state computing unit operatively connected to said weight storage unit and having an output; and
a clock operatively connected to and synchronously driving said inputs and said state computing unit;
said inputs forming a plurality of patterns;
said weight storage unit applying selected weights to different ones of said plurality of patterns;
said state computing unit receiving the weighted pattern of inputs from the weight storage unit to compute a new state level at the output, said output being one of a predetermined number of possible states greater than two and expressed as a digital value.
2. The automaton of claim 1 wherein said clock has a period comparable to a length of time required to produce an output in the state computing unit from a given set of inputs.
3. The automaton of claim 1 further comprising a weight computing unit for dynamically determining the weights within the weight storage unit.
4. The automaton of claim 3 wherein said weight computing unit is synchronously driven by said clock with said inputs and said state computing unit.
5. The automaton of claim 3 wherein said weight computing unit uses a learning function to determine the weights within the weight storage unit based on a given input and output.
6. The automaton of claim 3 wherein said weight computing unit receives the output from said state computing unit to select weights from said weight storage unit.
7. The automaton of claim 1 wherein different ones of said plurality of patterns model coupling behavior between inputs.
8. A method of controlling an automaton, comprising:
providing a plurality of inputs, each capable of being in one of two states, said plurality of inputs capable of forming a plurality of different patterns;
weighting a given pattern of inputs;
computing a digital output, said output being one of a predetermined number of possible states greater than two and expressed as a digital value; and
using a clock to drive synchronously the inputs and the output.
9. The method of claim 8 further comprising teaching said automaton with a learning function.
10. An automaton comprising:
a plurality of m inputs, each capable of assuming one of two states;
a weight storage unit storing 2^m weights, said weight storage unit receiving a selected pattern of said inputs, and weighting said selected pattern of inputs;
a state computing unit receiving said weighted inputs and generating an output corresponding to one of a predetermined number greater than two of possible states and expressed as a digital value;
a weight computing unit receiving said output and said inputs and determining weights to be placed in said weight storage unit; and
a clock synchronously driving said inputs, said state computing unit, and said weight computing unit.
11. The automaton of claim 10 wherein said clock comprises a period equivalent to a length of time required by said state computing unit to determine said output after receipt of said weighted inputs.
12. The automaton of claim 10 wherein said weight computation unit learns which weights to store in said weight storage unit based on a given pattern of inputs and outputs according to a learning function.
13. A discrete time neural net comprising a plurality of automata as described in claim 10.
14. An automaton comprising:
a plurality of m inputs ui capable of being assembled into 2^m patterns according to a function fj(ui), where j lies between 1 and 2^m;
a weight storage unit comprising 2^m weights wj, selectively assigned to different ones of said 2^m patterns, said weight storage unit producing an output Ew = Σj[wj*fj(ui)];
a state computing unit receiving Ew and calculating an output Ep+1=Ew+Ep, where Ep represents an initial state, said output corresponding to one of a predetermined number greater than two of possible states and expressed as a digital value;
a weight computing unit selectively determining weights wj to store in said weight storage unit; and
a clock synchronously driving said weight computing unit, said inputs, and said state computing unit.
15. The automaton of claim 14 wherein said weight computing unit is trained according to a learning function.
16. The automaton of claim 15 wherein said state computing unit initially has an output of Ep.
17. The automaton of claim 16 wherein said learning function is a function of fj(ui), Ep, and Ep+1.
US09/846,053 2001-05-01 2001-05-01 Finite-state automaton modeling biologic neuron Expired - Lifetime US6708159B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/846,053 US6708159B2 (en) 2001-05-01 2001-05-01 Finite-state automaton modeling biologic neuron

Publications (2)

Publication Number Publication Date
US20020184174A1 US20020184174A1 (en) 2002-12-05
US6708159B2 true US6708159B2 (en) 2004-03-16

Family

ID=25296811

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/846,053 Expired - Lifetime US6708159B2 (en) 2001-05-01 2001-05-01 Finite-state automaton modeling biologic neuron

Country Status (1)

Country Link
US (1) US6708159B2 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6708159B2 (en) 2001-05-01 2004-03-16 Rachid M. Kadri Finite-state automaton modeling biologic neuron
FR2845503A1 (en) * 2002-10-07 2004-04-09 Rachid M Kadri Robot has operating states modeled on a biological neuron system with a plurality of weighted inputs to a state calculation unit and a unit for modifying weighting according to the required output
WO2010048206A1 (en) * 2008-10-20 2010-04-29 Arizona Board Of Regents For And On Behalf Of Arizona State University Decomposition based approach for the synthesis of threshold logic circuits
US8832614B2 (en) 2012-05-25 2014-09-09 Arizona Board Of Regents, A Body Corporate Of The State Of Arizona, Acting For And On Behalf Of Arizona State University Technology mapping for threshold and logic gate hybrid circuits
US9306151B2 (en) 2012-05-25 2016-04-05 Arizona Board Of Regents, A Body Corporate Of The State Of Arizona, Acting For And On Behalf Of Arizona State University Threshold gate and threshold logic array
US9490815B2 (en) 2013-07-08 2016-11-08 Arizona Board Of Regents On Behalf Of Arizona State University Robust, low power, reconfigurable threshold logic array
US10984320B2 (en) * 2016-05-02 2021-04-20 Nnaisense SA Highly trainable neural network configuration
CN109800851B (en) * 2018-12-29 2024-03-01 中国人民解放军陆军工程大学 Neural synapse circuit and impulse neural network circuit
CN111552298B (en) * 2020-05-26 2023-04-25 北京工业大学 Bionic positioning method based on mouse brain hippocampus space cells

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5226092A (en) 1991-06-28 1993-07-06 Digital Equipment Corporation Method and apparatus for learning in a neural network
EP0560595A2 (en) 1992-03-13 1993-09-15 Pilkington Micro-Electronics Limited Improved artificial digital neuron, neural network and network traning algorithm
US5517597A (en) 1991-06-24 1996-05-14 International Business Machines Corporation Convolutional expert neural system (ConExNS)
EP0834817A1 (en) 1996-10-01 1998-04-08 FINMECCANICA S.p.A. AZIENDA ANSALDO Programmed neural module
US20020184174A1 (en) 2001-05-01 2002-12-05 Kadri Rachid M. Finite-state automaton modeling biologic neuron

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chen et al, "On Neural-Network Implementations of K-Nearest Neighbor Pattern Classifiers", IEEE Transactions on Circuits and Systems, Jul. 1997. *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030210786A1 (en) * 2002-05-08 2003-11-13 Carr Jeffrey Douglas System and method for securely controlling access to device functions
US20060004681A1 (en) * 2004-04-29 2006-01-05 Michel Howard E Artificial neuron with phase-encoded logic
US7401058B2 (en) 2004-04-29 2008-07-15 University Of Massachusetts Artificial neuron with phase-encoded logic
US20110119215A1 (en) * 2009-11-13 2011-05-19 International Business Machines Corporation Hardware analog-digital neural networks
US8275727B2 (en) 2009-11-13 2012-09-25 International Business Machines Corporation Hardware analog-digital neural networks



Legal Events

STCF: Information on status: patent grant. Free format text: PATENTED CASE
REMI: Maintenance fee reminder mailed
FEPP: Fee payment procedure. Free format text: PAT HOLDER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: LTOS); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY
FPAY: Fee payment. Year of fee payment: 4
SULP: Surcharge for late payment
REMI: Maintenance fee reminder mailed
FEPP: Fee payment procedure. Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY
FPAY: Fee payment. Year of fee payment: 8
SULP: Surcharge for late payment. Year of fee payment: 7
REMI: Maintenance fee reminder mailed
FPAY: Fee payment. Year of fee payment: 12
SULP: Surcharge for late payment. Year of fee payment: 11