WO2004027704A1 - Spiking neural network device

Info

Publication number: WO2004027704A1
Application number: PCT/EP2002/010646
Authority: WIPO (PCT)
Prior art keywords: spiking, value, neurons, genotypic, representation
Other languages: French (fr)
Inventor: Dario Floreano
Original Assignee: Ecole Polytechnique Fédérale de Lausanne (EPFL)
Priority: AU2002338754A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/086 Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs

Abstract

Device comprising storage means (51) for storing a genotypic representation (3) of a spiking neural network comprising spiking neurons (1) and input neurons (2) connected by synapses, and computer program portions for performing the steps of mutating said genotypic representation (3) and computing a fitness value associated to said mutated genotypic representation (3). The spiking neural network implemented in this device can thus be trained and used for various control systems, achieving better results, thanks to its highly non-linear behavior, than standard prior art neural networks.

Description

Spiking Neural Network Device
The present invention relates to a spiking neural network device and to a process for training a spiking neural network.
Neural networks are widely used for various applications such as voice recognition systems, image recognition systems, industrial robotics, medical imaging, data mining, aerospace applications, etc. They make it possible to produce artificial systems capable of sophisticated computations, close to those of a human brain.
Spiking neural networks are built with spiking neurons, which are very close computational models of biological neurons. Like most biological neurons, the spiking neurons communicate by sending pulses across connections, called synapses, to other neurons. The pulse is also known as "spike" to indicate its short and transient nature. Spiking neurons are affected by incoming spikes which increase or decrease their membrane potential (voltage or state) and they generate a spike when this membrane potential becomes larger than a threshold. Spike generation is followed by a short "refractory period" during which the neuron can't generate another spike. In order for the behavior of the computational models to be closer to that of biological neurons, leakage can additionally be taken into account: as long as the neuron doesn't generate a spike, its membrane potential regularly decreases by a known leakage factor.
The use of spiking neurons makes it possible to build neural networks with highly non-linear behavior, able to perform very complex functions which could not be performed with standard neural networks, or only at a much higher computational cost. In particular, the highly non-linear dynamics of spiking neural networks allow them to efficiently capture and exploit temporal patterns. They can for example react efficiently to regularly repeated patterns of input signals.
Prior art spiking neural networks have essentially been used for two main purposes: to address specific questions in neuroscience, such as how biological neurons communicate with each other, and to develop new neuromorphic devices, some of which may replace lesioned fibers or sensory organs. Prior art computational investigations of spiking neurons are thus often based on complicated biophysical models with predetermined and fixed structures. Neuromorphic vision circuits have for example been developed that emulate the interconnections among the neurons in the early layers of an artificial retina in order to extract motion information and a simple form of attentive selection of visual stimuli.
There are however no prior art applications of spiking neural networks in control systems, such as in control systems for robots for example, even though their specific properties would probably allow for a more efficient control, either more appropriate or at a lower computational cost, than what can be achieved with the commonly used standard neural networks. The main reason is that designing networks of spiking neurons with a given functionality is a challenging task because of their highly non-linear dynamics and there are no prior art spiking neural networks that could be evolved for instance through interaction with the environment.
An aim of the present invention is to propose a spiking neural network that can be used in various control systems.
Another aim of the present invention is to provide a spiking neural network able to evolve through learning for instance from environmental conditions.
Another aim of the present invention in its preferred embodiment is to provide an evolving spiking neural network that can be implemented on very small integrated circuits.
According to the present invention, these aims are achieved by means of a device and a training method according to the characteristics of the corresponding independent claims, preferred embodiments being furthermore described in the dependent claims and in the description. In particular, these aims are achieved by means of a device comprising storage means for storing a genotypic representation of a spiking neural network comprising spiking neurons and input neurons connected by synapses, and computer program portions for performing the steps of mutating said genotypic representation and computing a fitness value associated to said mutated genotypic representation.
Genetic algorithms are evolutionary algorithms allowing the optimization of a set of parameters encoded in what is commonly called a genotype or individual. The parameters can for instance be the configuration parameters of a process controller. In the case where this process controller comprises a standard neural network, the configuration parameters typically include the weights of the synapses and possibly other parameters defining the structure of the network. A group of individuals builds a population. A fitness function can measure the quality of any individual, for instance by measuring the quality of a process when the process controller is tuned using the parameters given by the corresponding individual, and associate to this individual a fitness value which is often proportional to the measured quality.
Most genetic algorithms start from a randomly generated population. New individuals are then generated by crossing and/or mutating existing individuals, and their fitness value is computed with the fitness function. If a new individual has a higher fitness value than the individual of the current population having the lowest fitness value, it replaces it; otherwise, the new individual is disregarded. By repeating these steps under constant environmental conditions, the population evolves toward individuals having ever higher fitness values, i.e., the population is adapted to its environment. An optimized set of parameters can then be chosen by picking, for example, the individual of the evolved population having the highest fitness value.
Prior art genetic algorithms are sometimes used for training standard neural networks having large amounts of parameters. They have however never been used with spiking neural networks, because prior art spiking neural networks have always been implemented for applications such as the ones mentioned earlier, where the structure of the network is hand-designed and isn't meant to evolve.
According to the invention, a spiking neural network with its advantageous highly non-linear behavior can be trained with an adapted genetic algorithm to fulfill any type of control task for various systems. In its preferred embodiment, the spiking neural network with its associated training algorithm is specifically designed to be implemented in very small digital integrated circuits.
The invention will be better understood with the aid of the description given by way of example and illustrated by the attached figures, in which:
Fig. 1 shows a spiking neural network according to the invention with all potential synapses.
Fig. 2 diagrammatically represents an example of architecture for a spiking neural network according to the invention.
Fig. 3 shows a genotypic representation of a spiking neural network according to the invention.
Fig. 4 shows the components of a micro-controller.
Fig. 5 shows the behavior of a spiking neuron according to the preferred embodiment of the invention.
Fig. 6 shows an example of representation of the genotype and of the state of the spiking neural network according to the preferred embodiment of the invention.
As illustrated in Figure 1, the spiking neural network according to the invention comprises a preferably predefined number n of spiking neurons 1. Each spiking neuron 1 can potentially receive signals from any neuron 1 of the network, including from itself, as well as from any of the s input neurons 2. It will be further explained how some synapse connections will be enabled while others will be disabled during the training phase of the spiking neural network. Each neuron 1 can be excitatory or inhibitory. In the figures, inhibitory neurons are represented as black dots. The spikes emitted by excitatory neurons increase the value of a variable at the receiving spiking neuron describing its current state, or how close it is to emitting a spike. This variable is commonly called the membrane potential, by analogy with the terminology used for biological neurons. The spikes emitted by inhibitory neurons, however, decrease the membrane potential of the receiving neurons. The sign of each neuron, excitatory or inhibitory, is preferably also determined during the training phase.
The input neurons 2 receive external input signals 21, for example environmental information from sensors and/or detectors, and emit spikes depending on these input signals 21 to the spiking neurons 1 of the neural network. The input neurons 2 are preferably all excitatory. The output 12 of the neural network preferably includes the spikes emitted by at least some spiking neurons 1. This output 12 is for instance used as control signals. In a typical implementation example where the spiking neural network is used to control the movements of an autonomous robot, the input signals 21 can for instance be the signals received from optical detectors signaling the presence of obstacles in the environment of the robot, and the output 12 of the spiking neural network is used for controlling the robot's movements, usually determining its speed and direction.
Figure 2 is another representation of the network architecture, i.e., of the configuration of the synapse connections inside the spiking neural network. The neurons in the left column receive signals from the connected neurons 1 and input neurons 2 represented in the top row. A square indicates the presence of a synapse. Inhibitory neurons are represented as black dots. This network architecture and the other variable parameters of the network, such as for example the polarity of the neurons 1, can also be represented by a binary string 3. The binary string 3 is for instance composed of n blocks 30, each block 30 corresponding to one neuron 1, as illustrated for example in figure 3. In this example, the first bit b0 of each block 30 encodes the sign of the corresponding neuron 1, a value one for the first bit b0 corresponding for instance to an excitatory neuron while a value zero corresponds to an inhibitory neuron. The remaining n+s bits b1 ... bn+s of each block 30 encode the presence or the absence of a synapse from the n neurons 1 and from the s input neurons 2 to the corresponding neuron 1. The presence of a synapse is for example encoded by a value one, while the absence of a synapse is represented by a value zero.
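By way of illustration, the sketch below decodes such a binary string in C. The helper names and the bit packing (least significant bit first within each byte) are assumptions introduced for the example, not details taken from the patent.

    #include <stdint.h>

    /* Hypothetical decoder for the binary string 3 described above:
     * n blocks of (1 + n + s) bits, one block per neuron.
     * Bit 0 of a block: sign (1 = excitatory, 0 = inhibitory).
     * Bits 1..n+s: presence of a synapse from neuron/input j.      */

    static int get_bit(const uint8_t *genome, int bit_index)
    {
        return (genome[bit_index / 8] >> (bit_index % 8)) & 1;
    }

    int neuron_is_excitatory(const uint8_t *genome, int neuron, int n, int s)
    {
        int block_start = neuron * (1 + n + s);
        return get_bit(genome, block_start);
    }

    int synapse_present(const uint8_t *genome, int to_neuron, int from, int n, int s)
    {
        /* 'from' ranges over the n neurons (0..n-1), then the s inputs (n..n+s-1) */
        int block_start = to_neuron * (1 + n + s);
        return get_bit(genome, block_start + 1 + from);
    }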
The weight of each synapse is preferably set to a fixed value, for example to one. In other words, all spikes of the same polarity produce the same effect on the receiving neuron's membrane. The one skilled in the art will however recognize that it is possible, according to the invention, to assign a specific weight to each synapse by encoding it with more than one bit, thus increasing the number of bits in the block 30.
According to the invention, the neurons 1 are spiking neurons. Spiking neurons all basically show the same behavior. Just after emitting a spike, the spiking neuron 1 goes through a refractory period during which it can't emit another spike and usually isn't affected by incoming spikes. After the refractory period, each incoming spike contributes to increasing or decreasing the neuron's membrane potential, depending on the sign of the emitting neuron. Preferably, a leakage parameter is introduced, decreasing the membrane potential by a certain amount or percentage for each time period after the refractory period during which the neuron 1 didn't emit a spike. If the membrane potential becomes higher than or equal to a predetermined threshold θ, the neuron 1 emits a spike and enters the refractory period again. An example of computational model of this behavior will be described further in the particular case of the preferred embodiment of the invention. The values of the neuron's parameters, such as the value of the threshold, the length of the refractory period or the value of the leakage parameter, are preferably predefined and fixed for all neurons 1. A small random value simulating noise is then preferably added to the threshold or to the length of the refractory period in order to prevent oscillatory behavior of the spiking neural network. The one skilled in the art will however recognize that the value of one or more of these parameters can be variable and evolved. They are then preferably included in the binary string 3 with the other variable configuration parameters to be evolved, in which they can be encoded by one or more additional bits.
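A minimal sketch of this leaky integrate-and-fire behavior in C follows; the integer parameter values (threshold, refractory length, leakage and noise amplitude) are illustrative assumptions only, not values specified by the patent.

    #include <stdlib.h>

    #define THRESHOLD   4   /* illustrative values only */
    #define REFRACTORY  1
    #define LEAK        1
    #define NOISE       1

    typedef struct {
        int v;          /* membrane potential            */
        int refractory; /* time units left in refractory */
    } Neuron;

    /* One time step: 'input' is the signed sum of incoming spikes
     * (+1 per excitatory spike, -1 per inhibitory spike).
     * Returns 1 if the neuron emits a spike.                      */
    int neuron_step(Neuron *nrn, int input)
    {
        if (nrn->refractory > 0) {      /* spikes are ignored here */
            nrn->refractory--;
            return 0;
        }
        nrn->v += input;
        if (nrn->v < 0)                 /* clamp at the minimum    */
            nrn->v = 0;

        /* a small random offset on the threshold simulates noise */
        int noisy_threshold = THRESHOLD + (rand() % (2 * NOISE + 1)) - NOISE;
        if (nrn->v >= noisy_threshold) {
            nrn->v = 0;                 /* back to the minimal value */
            nrn->refractory = REFRACTORY;
            return 1;                   /* spike emitted             */
        }
        if (nrn->v >= LEAK)             /* leakage when silent       */
            nrn->v -= LEAK;
        else
            nrn->v = 0;
        return 0;
    }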
Provided that the constant parameters of the spiking neurons 1 are known, the configuration of the spiking neural network is fully characterized by the information contained in the binary string 3. The network's architecture can be recreated by assigning to each variable parameter the corresponding value contained in the binary string 3. The binary strings 3 can thus be said to contain the genetic information of the network. Different binary strings 3 characterize different configurations of the spiking neural network, each configuration having its specific behavior. The binary strings 3 can thus be assimilated to genotypes or individuals, and an adapted genetic algorithm with an appropriate fitness function can be applied to a population of such individuals 3 during a training phase of the spiking neural network, in order to raise the fitness value of the individuals 3 within the population.
According to the invention, a training phase starts with a population of preferably randomly generated individuals 3. One individual 3 of the population is copied at random and mutated, i.e., the value of one or more of its bits is toggled. The variable parameters of the spiking neural network, which preferably include the polarity of each neuron 1 and the presence or absence of synapses between the neurons 1 and input neurons 2, are set with the values given by the new individual 3. A specific configuration of the spiking neural network is thus generated. The spiking neural network is operated during a predetermined period of time and the quality of the output 12 is measured by the fitness function, which then attributes a fitness value to the individual 3.
In the case, for example, where the spiking neural network is used to control a robot moving autonomously in an environment with obstacles to be avoided, the fitness function can for instance measure the amount of forward movement accomplished by the robot during a determined time interval, using the selected individual 3. The fitness value can then be proportional to this amount of forward movement.
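A purely illustrative measurement of such a fitness function might look as follows; read_forward_displacement() is a hypothetical robot API introduced for the example and is not part of the patent.

    /* Accumulate forward movement over a fixed trial; the fitness
     * value is proportional to the total forward displacement.     */
    long measure_fitness(int trial_steps)
    {
        extern int read_forward_displacement(void); /* hypothetical */
        long fitness = 0;
        for (int t = 0; t < trial_steps; t++)
            fitness += read_forward_displacement();
        return fitness;
    }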
If the fitness value of the last tested individual 3 is higher than the fitness value of the existing individual 3 having the lowest fitness value in the population, the new individual 3 replaces the latter. Otherwise, the new individual 3 is cancelled.
These steps are repeated, preferably until the average or the best fitness value within the population reaches a satisfactory value. One of the individuals 3 of the population, preferably the one having the highest fitness value, is then used to configure the spiking neural network, which is said to be trained to its environment and can operate in a satisfactory manner.
In practical implementations, training phases are preferably conducted regularly, in order to keep the neural network adapted to possibly changing environmental conditions.
The practical implementation of the inventive spiking neural network together with its training algorithm will be further described with the help of the illustrative but non-limiting example of the preferred embodiment of the invention.
The spiking neural network according to the preferred embodiment of the invention is adapted to be implemented in a digital micro-controller 5 as illustrated in figure 4. A micro-controller is an integrated circuit composed of an arithmetic logic unit (ALU) 50, a memory unit 51, an input device 52 and an output device 53. The memory unit 51 usually comprises a ROM 510 (Read Only Memory), a RAM 511 (Random Access Memory) and an EEPROM 512 (Electrically Erasable Programmable ROM). In other words, it is a full computer in a single chip capable of receiving, storing, processing and transmitting signals to the external world.
The spiking neuron model used in the preferred embodiment of the invention is a simple integrate-and-fire model with leakage and refractory period.
The behavior of the spiking neuron 1 is described by the following steps illustrated in figure 5:
a) After emitting a spike, the neuron 1 enters a refractory period Δ: the integrating variable representing the state of the neuron, commonly called the membrane potential v, is not updated and stays at its minimal value. In the example illustrated in figure 5, Δ is equal to one time unit.
b) For each time unit following the refractory period Δ, the contribution of incoming spikes from the neurons 1 or from the input devices 2 is computed as the sum of the incoming spikes weighted by the sign of the emitting neurons.
c) The membrane potential v is updated by adding the contribution of incoming spikes to the current membrane potential. The membrane potential v can't go below a minimal value which is preferably set to 0.
d) If the membrane potential v is larger than or equal to the threshold θ, the neuron emits a spike and its membrane potential is set to its minimum value again.
e) If no spike is emitted, a leaking constant k is subtracted from the membrane potential v.

The configuration of the spiking neural network is preferably adapted to the type of micro-controller 5 in which it is implemented. For example, in a micro-controller 5 having an ALU 50 operating on four bits in parallel, the implemented spiking neural network is advantageously composed of four, or a multiple of four, spiking neurons 1 and of four, or a multiple of four, input neurons 2. In a micro-controller 5 having an ALU 50 operating on eight bits, the implemented spiking neural network is thus preferably composed of eight, or a multiple of eight, spiking neurons 1 and of eight, or a multiple of eight, input neurons 2. The one skilled in the art will however recognize that spiking neural networks according to the invention having any other number of spiking neurons 1 and/or of input neurons 2 can easily be implemented by discarding the bits corresponding to unused neurons.
In its preferred embodiment, the spiking neural network of the invention is composed of eight neurons 1 and eight input devices 2. All the information about the neurons 1 and the input devices 2 is stored in the memory 51 and is preferably structured according to the example of figure 6.
The polarity information for all eight neurons 1 is stored in a dedicated byte SIGN. Each bit of the byte SIGN corresponds to one of the eight neurons 1. A value one means that the corresponding neuron 1 is excitatory, a value zero means that the corresponding neuron 1 is inhibitory. The connectivity pattern to each neuron 1 is stored in one byte of a block NCONN (connections from neurons) and in one byte of a block ICONN (connections from input devices). The block NCONN and the block ICONN are blocks of eight bytes each. The network's configuration parameters thus require seventeen bytes of memory storage. These seventeen bytes represent one individual 3.
The output of the eight neurons 1 is stored in a byte OUTPS. Each bit of the byte OUTPS corresponds to the output of one neuron 1. A value 1 means that the corresponding neuron 1 emits a spike, a value 0 means that it doesn't. Similarly, the output of the eight input devices 2 is stored in a byte INPS. The membrane potential of each neuron 1 is stored in one byte of a block MEMB, which is a block of eight bytes. The maximum membrane potential is preferably constant for all neurons and is stored in a byte THRES. As explained further, a random value simulating noise is preferably added to the value of the byte THRES at each iteration in order to avoid strong oscillatory behavior of the spiking neural network. The minimum membrane potential is preferably 0 for all neurons 1 and thus doesn't require memory storage.
The entire spiking neural circuit thus requires twenty-eight bytes of memory storage.
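These twenty-eight bytes map naturally onto a C structure; the following sketch uses the field names of figure 6, though the grouping into a single struct is an illustration rather than a requirement of the invention.

    #include <stdint.h>

    /* Memory layout of the preferred embodiment (figure 6):
     * 17 bytes of configuration (the individual 3) plus 11 bytes
     * of runtime state = 28 bytes in total.                      */
    typedef struct {
        /* --- genetic string: one individual 3 (17 bytes) --- */
        uint8_t sign;      /* SIGN:  1 bit per neuron, 1 = excitatory */
        uint8_t nconn[8];  /* NCONN: synapses from the 8 neurons      */
        uint8_t iconn[8];  /* ICONN: synapses from the 8 inputs       */
        /* --- runtime state (11 bytes) --- */
        uint8_t outps;     /* OUTPS: 1 bit per neuron, 1 = spike      */
        uint8_t inps;      /* INPS:  spikes from the 8 input devices  */
        uint8_t memb[8];   /* MEMB:  membrane potential per neuron    */
        uint8_t thres;     /* THRES: shared spiking threshold         */
    } SpikingNetwork;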
The spiking neuron's behavior described above is implemented in computer program portions preferably stored in the ROM 510 and performing the following steps when run by the ALU 50:
a) The state of the bit in OUTPS corresponding to the considered neuron is checked. If the value of the bit is set to one, the process goes to step c.
b) The contribution of incoming spikes is computed and the neuron's membrane potential v is updated. First, the contribution of the spikes from the input neurons 2 is computed: the byte in the block MEMB corresponding to the considered neuron is incremented by counting the number of active bits (of value one) that result from the AND function of the byte INPS and the byte in the block ICONN corresponding to the considered neuron. Then, the contribution of the spikes from excitatory neurons is computed: the corresponding byte in the block MEMB is incremented by counting the number of active bits that result from the AND function of the byte OUTPS, the byte SIGN and the byte in the block NCONN corresponding to the considered neuron. Finally, the contribution of the spikes from inhibitory neurons is computed: the corresponding byte in the block MEMB is decremented by counting the number of active bits that result from the AND function of the byte OUTPS, the complement of the byte SIGN and the byte in the block NCONN corresponding to the considered neuron. The decrementation is stopped before the byte of the block MEMB goes below zero, which is preferably signaled by a bit flag in a housekeeping byte of the micro-controller.
c) The value of the corresponding byte in the block MEMB is compared to the value of the byte THRES, preferably increased or decreased by a random value. If the value of the corresponding byte in the block MEMB is equal to or higher than the resulting threshold value, the bit in the byte OUTPS corresponding to the considered neuron is set to one and the corresponding byte in the block MEMB is set to zero. Otherwise, no spike is emitted by the neuron and the corresponding bit in the byte OUTPS is set to zero.
d) If the value of the corresponding byte in the block MEMB is greater than or equal to the leaking constant k, it is decremented by k.
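Taken together, steps a) to d) can be sketched in C as follows, reusing the SpikingNetwork layout sketched above. Here popcount8() stands in for whatever bit-counting facility the target micro-controller provides, the caller supplies the small random threshold offset, and a leaking constant k of one is assumed; the spike bit is written into a temporary copy of OUTPS, as required by the synchronous update discussed below.

    /* Count the active bits in one byte (portable fallback). */
    static int popcount8(uint8_t b)
    {
        int n = 0;
        while (b) { n += b & 1; b >>= 1; }
        return n;
    }

    /* Update neuron i of net, writing its spike bit into *new_outps
     * (the temporary copy of OUTPS used for synchronous update).   */
    void update_neuron(SpikingNetwork *net, int i, uint8_t *new_outps, int noise)
    {
        uint8_t mask = (uint8_t)(1u << i);

        if (!(net->outps & mask)) {   /* step a: skip integration if refractory */
            /* step b: input, excitatory and inhibitory contributions */
            int v = net->memb[i];
            v += popcount8(net->inps  &  net->iconn[i]);
            v += popcount8(net->outps &  net->sign & net->nconn[i]);
            v -= popcount8(net->outps & ~net->sign & net->nconn[i]);
            /* clamp so the stored byte stays within 0..255 */
            net->memb[i] = (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
        }
        /* step c: comparison against the noisy threshold */
        if (net->memb[i] >= net->thres + noise) {
            *new_outps |= mask;                      /* emit a spike */
            net->memb[i] = 0;
        } else {
            *new_outps &= (uint8_t)~mask;
            if (net->memb[i] >= 1)                   /* step d: leak, k = 1 assumed */
                net->memb[i] -= 1;
        }
    }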
The random value used in step c for increasing or decreasing the threshold value simulates noise in the process. Experiments have shown that, when the threshold value is set to be the same for all the neurons, the use of this random variable prevents the neural network from entering oscillatory phases which could be induced by the network's internal dynamics.
The state of the neurons 1 is preferably updated synchronously, so that each neuron 1 changes its state according to the state of all neurons computed at the previous cycle. Therefore, step c above updates only a temporary copy of the byte OUTPS, which is then moved into the byte OUTPS once all neurons 1 have been updated. Alternatively, one could update the network asynchronously by picking a neuron 1 at random and directly changing the byte OUTPS at step c.
Once the entire network has been updated, the value of the byte INPS is updated too, according to the input signals 21. If the input signals have not changed or are not available, the byte INPS can retain its previous value or be set to zero, depending on the type of control to be achieved by the spiking neural network.

According to the invention, the spiking neural network is adapted to be trained by means of a genetic algorithm. Preferably, only the sign of the neurons 1 and the presence or absence of synapses between the neurons 1 and the input neurons 2 are genetically encoded. Therefore, as explained above, the genetic string, or individual 3, of the spiking neural network consists of only seventeen bytes: one byte SIGN for the sign of the neurons, a block NCONN of eight bytes for its neural synapses and a block ICONN of eight bytes for its input synapses.
The memory constraints of micro-controllers put a severe limit on the number of individuals 3 maintained in the population. Therefore, the genetic algorithm is preferably adapted to small populations. In the preferred embodiment of the invention, the genetic algorithm is implemented in a computer program portion preferably stored in the ROM 510 and performing the following steps when run by the ALU 50:
a) Randomly generate a population of binary strings 3 and initialize their fitness value to zero.
b) Pick an individual 3 at random, mutate it, and measure its fitness.
c) If its fitness is equal to or larger than the fitness of the worst individual 3 in the population, write its genetic string over the genetic string of the worst individual 3; otherwise throw it away.
d) Go to step b.
Each individual 3 is preferably mutated at three locations by toggling the value of a randomly selected bit. The first mutation takes place in the byte SIGN that defines the signs of the neurons 1. The second mutation occurs at a random location of the block NCONN that defines the connectivity among neurons. The third mutation occurs at a random location of the block ICONN that defines the connectivity from the input devices 2. Mutations are for example performed by making an XOR operation between the byte to be mutated and a byte with a single 1 at a random location.
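The sketch below combines steps b) and c) with this mutation scheme in C. The population size and the evaluate() hook, which would run the configured network and measure its fitness, are assumptions introduced for the example.

    #include <stdint.h>
    #include <stdlib.h>

    #define GENOME_BYTES 17   /* 1 SIGN + 8 NCONN + 8 ICONN    */
    #define POP_SIZE      4   /* illustrative small population */

    typedef struct {
        long    fitness;
        uint8_t genome[GENOME_BYTES];
    } Individual;

    /* Toggle one randomly chosen bit in genome[first..first+count-1]
     * by XOR-ing with a single-bit mask, as described above.        */
    static void mutate_region(uint8_t *g, int first, int count)
    {
        int bit = rand() % (count * 8);
        g[first + bit / 8] ^= (uint8_t)(1u << (bit % 8));
    }

    static void mutate(uint8_t g[GENOME_BYTES])
    {
        mutate_region(g, 0, 1);   /* one bit in SIGN  (byte 0)    */
        mutate_region(g, 1, 8);   /* one bit in NCONN (bytes 1-8) */
        mutate_region(g, 9, 8);   /* one bit in ICONN (bytes 9-16)*/
    }

    /* One iteration of steps b) and c): clone a random individual,
     * mutate it, evaluate it, and overwrite the worst individual
     * if it does at least as well. evaluate() is a hypothetical
     * hook running the network and returning its fitness.         */
    extern long evaluate(const uint8_t genome[GENOME_BYTES]);

    void evolve_once(Individual pop[POP_SIZE])
    {
        Individual candidate = pop[rand() % POP_SIZE];   /* step b */
        mutate(candidate.genome);
        candidate.fitness = evaluate(candidate.genome);

        int worst = 0;                                   /* step c */
        for (int i = 1; i < POP_SIZE; i++)
            if (pop[i].fitness < pop[worst].fitness)
                worst = i;
        if (candidate.fitness >= pop[worst].fitness)
            pop[worst] = candidate;
        /* otherwise the candidate is simply thrown away */
    }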
The population is preferably stored in an EEPROM within the memory 51, because this type of memory can be read and written by the program just like the RAM, but in addition holds its contents when the micro-controller is not powered. Each individual preferably occupies a contiguous block of bytes, where the first byte is its fitness value and the remaining bytes represent its genetic string. The very first byte of the EEPROM records the number of population replacements made so far. Whenever the micro-controller is powered up, the main program reads the first byte of the EEPROM. If it is 0, the population is initialized (step a); otherwise it is incrementally evolved (step b and following).
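A minimal sketch of this storage layout and power-up logic, assuming a hypothetical eeprom_read() routine on the target micro-controller:

    #include <stdint.h>

    /* One individual as stored in EEPROM: fitness byte first,
     * then the 17-byte genetic string.                         */
    typedef struct {
        uint8_t fitness;
        uint8_t genome[17];
    } StoredIndividual;

    /* Byte 0 of the EEPROM counts population replacements so far;
     * the population records follow. eeprom_read() is a hypothetical
     * access routine of the target micro-controller.                 */
    extern uint8_t eeprom_read(unsigned addr);

    void on_power_up(void)
    {
        extern void initialize_population(void);   /* step a            */
        extern void resume_evolution(void);        /* step b and onward */

        if (eeprom_read(0) == 0)
            initialize_population();
        else
            resume_evolution();
    }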
EEPROM memories can be written only a limited number of times, and usage and temperature generate errors during reading and writing that require error-checking routines. The one skilled in the art will recognize that it is also possible to keep a copy of the entire population in the RAM part of memory 51, use this copy for training, and copy it to the EEPROM at predefined large intervals.
Micro-controllers are used for a wide range of smart devices, such as microwave ovens, telephones, washing machines, car odometers and credit cards. More than 3.5 billion micro-controller units are sold each year for embedded control, exceeding by more than an order of magnitude the number of microprocessor units sold for computers. The one skilled in the art will easily realize that the spiking neural network according to the invention, together with its training algorithm, can advantageously and easily be implemented in the micro-controller of these smart devices in order to improve their functionality and allow them to respond in a more appropriate manner to their environment or other external conditions.
In the above example, the weights of the synapses are chosen to be always equal to one or zero. As explained before, these weights could however take other values, which could also be optimized during the training of the spiking neural network.
The implementation of the inventive spiking neural network was described above using a simple spiking neuron model and training algorithm. The one skilled in the art will however understand that the inventive neural network can be implemented using any model of spiking neuron, including significantly more sophisticated models requiring analog integrated circuits or high computational power. The inventive spiking neural network, together with its associated training algorithm, can then for example be implemented in a personal computer or in a supercomputer, depending on its size and complexity.
The implementation of the genetic training algorithm can also be different from the implementation example described above. In particular, it can be chosen to be more complex, for instance to adapt to a more complex representation of the spiking neurons. The generation and mutation of new individuals can for example occur very differently.
Other parameters of the spiking neural network than the ones mentioned above could be included in the genetic string to be optimized. One could for example choose to individually optimize the threshold value or the length of the refractory period for each spiking neuron, the weight of the synapses, etc.

Claims
1. A device comprising storage means (51) for storing a genotypic representation (3) of a spiking neural network comprising spiking neurons (1) and input neurons (2) connected by synapses, and computer program portions for performing the steps of mutating said genotypic representation (3) and computing a fitness value associated to said mutated genotypic representation (3).
2. The device of claim 1, further comprising computer program portions for performing the step of generating a population of randomly generated genotypic representations (3).
3. The device of claim 1, further comprising computer program portions for performing the following steps:
comparing the fitness value of said mutated genotypic representation (3) with the fitness value of another genotypic representation (3),
replacing said another genotypic representation (3) by said mutated genotypic representation (3) if the fitness value of said mutated genotypic representation (3) is higher than the fitness value of said another genotypic representation (3),
disregarding said mutated genotypic representation (3) otherwise.
4. The device of claim 1, further comprising computer program portions for performing the behavior of a spiking neuron (1).
5. The device of claim 4, said spiking neuron (1) being excitatory or inhibitory.
6. The device of claim 4, said behavior comprising the steps of:
computing the contribution of incoming spikes to the value of a membrane potential (v) of said spiking neuron (1),
updating the value of said membrane potential (v) with said contribution,
emitting a spike and setting the value of said membrane potential (v) to its minimal value if the value of said membrane potential (v) after said step of updating is higher than a threshold value (θ).
7. The device of claim 6, said behavior further comprising the steps of:
checking if said spiking neuron (1) emitted a spike within a predefined refractory period (Δ),
not updating the value of said membrane potential (v) with said contribution if said spiking neuron (1) emitted a spike within said refractory period (Δ).
8. The device of claim 6, said behavior further comprising the step of decrementing the value of said membrane potential (v) by a leakage (k) value if said spiking neuron (1) does not emit a spike after said step of updating.
9. The device of claim 6, said threshold value (θ) being regularly incremented or decremented by a random value.
10. The device of claim 1, said genotypic representation (3) comprising values to be optimized of configuration parameters of said spiking neural network.
11. The device of claim 10, said configuration parameters comprising the presence or absence of said synapses (ICONN, NCONN) between said spiking neurons (1) and said input neurons (2).
12. The device of claim 11, said configuration parameters further comprising the polarity (SIGN) of said spiking neurons (1).
13. The device of claim 1 being a micro-controller.
14. The device of claim 1 being a computer.
15. The device of claim 1 being an analog integrated circuit.
16. The device of one of the preceding claims, said genotypic representation including a binary string (3) and at least some of said values to be optimized being each determined by the value of one bit in said binary string (3).
17. The device of claim 16, said at least some of said values to be optimized comprising the polarity of said spiking neurons (1).
18. The device of claim 16, said at least some of said values to be optimized comprising the presence or absence of said synapses.
19. The device of claim 16, said step of mutating being performed by modifying one or more bits of said binary string (3).
20. The device of claim 16, further comprising binary strings (MEMB, OUTPS, INPS) determining the state and/or activity of said spiking neurons (1) and of said input neurons (2), said step of computing the contribution of incoming spikes comprising performing a basic logical operation with at least part of the bits of said genotypic representation (3) and part of the bits of said binary strings (MEMB, OUTPS, INPS) determining the state and/or activity of said spiking neurons (1) and of said input neurons (2).
21. The device of claim 20, said step of emitting a spike being performed by toggling the value of one bit of one of said binary strings (OUTPS) determining the state and/or activity of said spiking neurons (1) and of said input neurons (2).
22. The device of claim 20, said step of checking if said spiking neuron (1) emitted a spike being performed by reading the value of one bit of one of said binary strings (OUTPS) determining the state and/or activity of said spiking neurons (1) and of said input neurons (2).
23. A method for training a spiking neural network, comprising the step of computing a fitness value and attributing said fitness value to a genotypic representation (3) of said spiking neural network.
24. The method of claim 23, further comprising the step of generating a population of randomly generated genotypic representations (3).
25. The method of claim 23, further comprising the following steps:
comparing the fitness value of said mutated genotypic representation (3) with the fitness value of another genotypic representation (3),
replacing said another genotypic representation (3) by said mutated genotypic representation (3) if the fitness value of said mutated genotypic representation (3) is higher than the fitness value of said another genotypic representation (3),
disregarding said mutated genotypic representation (3) otherwise.
26. The method of claim 23, said genotypic representation (3) comprising values to be optimized of configuration parameters of said spiking neural network.
27. The method of claim 26, said configuration parameters comprising the presence or absence of synapses (ICONN, NCONN) between spiking neurons (1) and input neurons (2) of said spiking neural network.
28. The method of claim 27, said configuration parameters further comprising the polarity (SIGN) of said spiking neurons (1).
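For illustration only, the neuron behavior recited in claims 6 to 8 can be summarized by the following sketch; the constant values are placeholders and this sketch is not part of the claims:

```c
#include <stdint.h>
#include <stdbool.h>

#define THETA  4   /* threshold value θ (placeholder) */
#define LEAK_K 1   /* leakage value k (placeholder)   */
#define V_MIN  0   /* minimal membrane potential      */

/* One update step of one spiking neuron: the membrane potential v is
   not updated during the refractory period (claim 7); a spike is
   emitted and v reset to its minimal value when v exceeds the
   threshold (claim 6); otherwise v is decremented by the leakage
   value (claim 8). Returns true when a spike is emitted. */
static bool update_neuron(int8_t *v, int8_t contribution, bool in_refractory)
{
    if (!in_refractory)
        *v += contribution;
    if (*v > THETA) {
        *v = V_MIN;
        return true;
    }
    if (*v > V_MIN)
        *v -= LEAK_K;
    return false;
}
```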
PCT/EP2002/010646 2002-09-20 2002-09-23 Spiking neural network device WO2004027704A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2002338754A AU2002338754A1 (en) 2002-09-20 2002-09-23 Spiking neural network device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US41231502P 2002-09-20 2002-09-20
US60/412,315 2002-09-20

Publications (1)

Publication Number Publication Date
WO2004027704A1 true WO2004027704A1 (en) 2004-04-01

Family

ID=32030848

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2002/010646 WO2004027704A1 (en) 2002-09-20 2002-09-23 Spiking neural network device

Country Status (2)

Country Link
AU (1) AU2002338754A1 (en)
WO (1) WO2004027704A1 (en)

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ATSUMI M: "Artificial neural development for pulsed neural network design-a simulation experiment on animat's cognitive map genesis", COMBINATIONS OF EVOLUTIONARY COMPUTATION AND NEURAL NETWORKS, 2000 IEEE SYMPOSIUM ON SAN ANTONIO, TX, USA 11-13 MAY 2000, PISCATAWAY, NJ, USA,IEEE, US, 11 May 2000 (2000-05-11), pages 188 - 198, XP010525039, ISBN: 0-7803-6572-0 *
RÜDIGER KOCH, MATT GROVER: "Amygdala - A spiking neural network library, version 0.2", ONLINE, April 2002 (2002-04-01), pages 1 - 17, XP002248881, Retrieved from the Internet <URL:http://amygdala.sourceforge.net/docs/amygdala.pdf> [retrieved on 20030722] *
T.GOMI: "Evolutionary Robotics. From Intelligent Robotics to Artificial Life. International Symposium, ER 2001, Tokyo, Japan, October 18-19, 2001. Proceedings", SPRINGER VERLAG, ISBN: 3540427376, XP002248882 *
WULFRAM GERSTNER, WERNER M. KISTLER: "Spiking Neuron Models", 15 August 2002, CAMBRIDGE UNIV PR, ISBN: 0521890799, XP002248883 *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1902376B1 (en) * 2005-07-11 2018-03-07 Aemea, Inc. Active element machine computation
WO2007008519A2 (en) 2005-07-11 2007-01-18 Fiske Software Llc Active element machine computation
WO2007071070A1 (en) * 2005-12-23 2007-06-28 Universite De Sherbrooke Spatio-temporal pattern recognition using a spiking neural network and processing thereof on a portable and/or distributed computer
US7579942B2 (en) 2006-10-09 2009-08-25 Toyota Motor Engineering & Manufacturing North America, Inc. Extra-vehicular threat predictor
JP2010146514A (en) * 2008-12-22 2010-07-01 Sharp Corp Information processor and neural network circuit using the same
US8510239B2 (en) 2010-10-29 2013-08-13 International Business Machines Corporation Compact cognitive synaptic computing circuits with crossbar arrays spatially in a staggered pattern
US10268843B2 (en) 2011-12-06 2019-04-23 AEMEA Inc. Non-deterministic secure active element machine
CN103164741B (en) * 2011-12-09 2017-04-12 三星电子株式会社 Neural working memory device
KR20150038334A (en) * 2012-07-27 2015-04-08 퀄컴 테크놀로지스, 인크. Apparatus and methods for efficient updates in spiking neuron networks
KR101626444B1 (en) 2012-07-27 2016-06-01 퀄컴 테크놀로지스, 인크. Apparatus and methods for efficient updates in spiking neuron networks
US9798751B2 (en) 2013-10-16 2017-10-24 University Of Tennessee Research Foundation Method and apparatus for constructing a neuroscience-inspired artificial neural network
US9753959B2 (en) 2013-10-16 2017-09-05 University Of Tennessee Research Foundation Method and apparatus for constructing a neuroscience-inspired artificial neural network with visualization of neural pathways
US10019470B2 (en) 2013-10-16 2018-07-10 University Of Tennessee Research Foundation Method and apparatus for constructing, using and reusing components and structures of an artifical neural network
US10055434B2 (en) 2013-10-16 2018-08-21 University Of Tennessee Research Foundation Method and apparatus for providing random selection and long-term potentiation and depression in an artificial network
US10095718B2 (en) 2013-10-16 2018-10-09 University Of Tennessee Research Foundation Method and apparatus for constructing a dynamic adaptive neural network array (DANNA)
US10248675B2 (en) 2013-10-16 2019-04-02 University Of Tennessee Research Foundation Method and apparatus for providing real-time monitoring of an artifical neural network
US10929745B2 (en) 2013-10-16 2021-02-23 University Of Tennessee Research Foundation Method and apparatus for constructing a neuroscience-inspired artificial neural network with visualization of neural pathways
US20200065674A1 (en) * 2016-01-26 2020-02-27 Samsung Electronics Co., Ltd. Recognition apparatus based on neural network and method of training neural network
US11669730B2 (en) 2016-01-26 2023-06-06 Samsung Electronics Co., Ltd. Recognition apparatus based on neural network and method of training neural network
CN108572648A (en) * 2018-04-24 2018-09-25 中南大学 A kind of automatic driving vehicle power supply multi-source fusion prediction technique and system
CN108572648B (en) * 2018-04-24 2020-08-25 中南大学 Unmanned vehicle power supply multi-source fusion prediction method and system

Also Published As

Publication number Publication date
AU2002338754A1 (en) 2004-04-08

Similar Documents

Publication Publication Date Title
US10248675B2 (en) Method and apparatus for providing real-time monitoring of an artifical neural network
JP7047062B2 (en) Neuromorphological processing device
US7904398B1 (en) Artificial synapse component using multiple distinct learning means with distinct predetermined learning acquisition times
Graupe Principles of artificial neural networks
JP5963315B2 (en) Methods, devices, and circuits for neuromorphic / synaptronic spiking neural networks with synaptic weights learned using simulation
Rao et al. Neural networks: algorithms and applications
Floreano et al. Neuroevolution: from architectures to learning
Tyrrell et al. Poetic tissue: An integrated architecture for bio-inspired hardware
Pearlmutter Gradient calculations for dynamic recurrent neural networks: A survey
KR20160076520A (en) Causal saliency time inference
Torresen A scalable approach to evolvable hardware
WO2004027704A1 (en) Spiking neural network device
Floreano et al. Evolution of spiking neural circuits in autonomous mobile robots
KR101825937B1 (en) Plastic synapse management
US6708159B2 (en) Finite-state automaton modeling biologic neuron
Floreano et al. Evolution of neural controllers with adaptive synapses and compact genetic encoding
Zamani et al. A bidirectional associative memory based on cortical spiking neurons using temporal coding
Potvin et al. Artificial neural networks for combinatorial optimization
Maass et al. Theory of the computational function of microcircuit dynamics
Torresen Two-step incremental evolution of a prosthetic hand controller based on digital logic gates
US11640520B2 (en) System and method for cognitive self-improvement of smart systems and devices without programming
Bogacz et al. Frequency-based error backpropagation in a cortical network
Gupta et al. The role of neurocomputational principles in skill savings
Downing Heterochronous neural baldwinism
Khan et al. Intelligent agents capable of developing memory of their environment

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP