WO2020049541A1 - A neuromorphic computing system - Google Patents

A neuromorphic computing system

Info

Publication number
WO2020049541A1
WO2020049541A1 (PCT/IB2019/057570)
Authority
WO
WIPO (PCT)
Prior art keywords
neurons
neuron
spikes
spiking
synapse
Prior art date
Application number
PCT/IB2019/057570
Other languages
French (fr)
Inventor
Mihir Ratnakar GORE
Original Assignee
Gore Mihir Ratnakar
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gore Mihir Ratnakar filed Critical Gore Mihir Ratnakar
Publication of WO2020049541A1 publication Critical patent/WO2020049541A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning

Definitions

  • the present disclosure relates to the field of neuromorphic computing system engineering. More particularly, the present disclosure relates to a system and method for updating a spiking neural network.
  • ANNs artificial neural network
  • CNN convolutional neural network
  • ConvNet convolutional neural network
  • SNN spiking neural network
  • the SNN uses still more biologically-realistic models of neurons to carry out computations.
  • the SNN is fundamentally different from the neural networks presently known.
  • the SNNs operate using spikes, which are discrete events that take place at points in time, rather than continuous values.
  • the occurrence of a spike is determined by differential equations that represent various biological processes, the most important of which is the membrane potential of the neuron. Essentially, once a neuron reaches a certain potential, it spikes, and the potential of the neuron is reset.
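  • As a hedged illustration of this threshold-and-reset behavior, a minimal leaky integrate-and-fire sketch in Python follows. It is not the model claimed in this disclosure (which avoids differential equations), and all constants are assumed values:

```python
# Minimal leaky integrate-and-fire (LIF) sketch -- illustrative only.
# tau, v_thresh, v_reset, and dt are assumed values, not parameters
# taken from this disclosure.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Integrate a current trace; emit a spike and reset on threshold."""
    v = v_reset
    spike_times = []
    for t, i_in in enumerate(input_current):
        # Euler step of tau * dv/dt = -v + i_in
        v += dt * (-v + i_in) / tau
        if v >= v_thresh:                 # membrane potential reached threshold
            spike_times.append(t * dt)    # record the spike time (ms)
            v = v_reset                   # reset the potential after the spike
    return spike_times

if __name__ == "__main__":
    current = [0.0] * 10 + [1.5] * 190    # step input switched on after 10 ms
    print(simulate_lif(current))          # regular spike times once driven
```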
  • An aspect of the present disclosure relates to a neuromorphic computing system comprising a plurality of spiking neurons, said system comprising: a processor communicatively coupled to a memory, the memory storing a set of instructions executable by the processor, wherein the processor upon execution of the set of instructions causes the system to: receive one or more data packets pertaining to a set of input spikes by at least one of the plurality of spiking neurons, the set of input spikes having one or more attributes related to spatial information and temporal information of the at least one of the plurality of spiking neurons; in response to the received one or more data packets, activate a set of output spikes by the at least one of the plurality of spiking neurons, where the plurality of the spiking neurons are operatively coupled to one another using a synapse; determine a change in performance of the synapse when the set of output spikes are generated based on the set of input spikes by the plurality of spiking neurons, wherein the determined change in performance of the synapse is indicative of an occurrence pattern of a plasticity modulation; and update the plurality of spiking neurons based on the determined occurrence pattern of the plasticity modulation.
  • the plasticity is one of a Hebbian plasticity or an anti-Hebbian plasticity.
  • the set of output spikes activated by the set of input spikes are categorized as being under a delayed response or a non-locked response.
  • the plurality of spiking neurons is combined to form a hierarchy.
  • a lowest layer performs spatial integration
  • a middle layer performs order and sequential integration
  • a highest layer performs temporal integration
  • a potentiation of the synapse occurs at firing of a post-synaptic neuron and a depression of the synapse occurs at firing of a pre-synaptic neuron.
  • the potentiation of the synapse occurs due to modulation of the Hebbian plasticity and the depression of the synapse occurs due to modulation of the anti-Hebbian plasticity.
  • Another aspect of the present disclosure relates to a method for training a neuromorphic network, said method comprising: receiving, by one or more processors, one or more data packets pertaining to a set of input spikes by at least one of the plurality of spiking neurons, the set of input spikes having one or more attributes related to spatial information and temporal information of the at least one of the plurality of spiking neurons; in response to the received one or more input spikes, activating, by the one or more processors, a set of output spikes by the at least one of the plurality of spiking neurons, where the plurality of the spiking neurons are operatively coupled to one another using a synapse; determining, by the one or more processors, a change in performance of the synapse when the set of output spikes are generated based on the set of input spikes by the plurality of spiking neurons, wherein the determined change in performance of the synapse is indicative of an occurrence pattern of a plasticity modulation; and updating, by the one or more processors, the plurality of spiking neurons based on the determined occurrence pattern of the plasticity modulation.
  • the neuromorphic network is a non-linear spiking neural network.
  • a mapping between a sensory neuron and a motor neuron of the plurality of spiking neurons results in producing reflex arcs in the neuromorphic network.
  • FIG. 1 illustrates exemplary functional components 100 of a neuromorphic computing system 102 in accordance with an embodiment of the present disclosure.
  • FIG. 2A illustrates a neuronal model 200, in accordance with an exemplary embodiment of the present disclosure.
  • FIG. 2B illustrates behavior of a neuron unit 220, in accordance with an exemplary embodiment of the present disclosure.
  • FIGs. 3A and 3B illustrate at 300 how neurons in the proposed model behave, in accordance with an exemplary embodiment of the present disclosure.
  • FIGs. 5A to 5C illustrate the "flytrap reflex" model 500 that can be implemented using the proposed system, in accordance with an exemplary embodiment of the present disclosure.
  • FIGs. 6A to 6C illustrate at 600 how an XOR logic function can be implemented, in accordance with an exemplary embodiment of the present disclosure.
  • FIG. 7A elaborates upon an approximate structure 700 of Hebbian and anti-Hebbian weight change of a prior art approach.
  • FIGs. 7B to 7D illustrate at 720 how Hebbian and anti-Hebbian plasticity is implemented in the proposed model for sequence characterization, in accordance with an exemplary embodiment of the present disclosure.
  • FIG. 8 illustrates oscillatory neurons, 800, in accordance with an exemplary embodiment of the present disclosure.
  • FIGs. 9A to 9D illustrate various examples 900 of oscillatory neurons, in accordance with an exemplary embodiment of the present disclosure.
  • FIGs. 10A to 10D illustrate at 1000 an architecture for implementation of a model for understanding human speech and results thereupon, in accordance with an exemplary embodiment of the present disclosure.
  • FIGs. 11A to 11C illustrate at 1100 single sensory neuron behavioral variation, in accordance with an exemplary embodiment of the present disclosure.
  • FIG. 12A illustrates behavior of a real muscle (prior art), while FIG. 12B illustrates at 1200 behavior of the muscle/effector unit of this model, in accordance with an exemplary embodiment of the present disclosure.
  • FIG. 12C illustrates muscle integration with multiple sensory inputs, and then tetanus stimulation (prior art), while FIG. 12D emulates the same using the proposed model, in accordance with an exemplary embodiment of the present disclosure.
  • FIG. 13A illustrates at 1300 a real muscle response to tetanic tension (75 Hz) at three levels (prior art), while FIG. 13B illustrates at 1302 the response achieved using the proposed model, in accordance with an exemplary embodiment of the present disclosure.
  • FIG. 14 illustrates at 1400 the values of data points as per the system.
  • FIG. 15 illustrates exemplary implementation, 1500 of fused trajectory optimization in accordance with an embodiment of the present disclosure.
  • FIG. 16 is a flow diagram, 1600 illustrating a method for optimizing a trajectory for a host vehicle for dynamic evasive maneuver in a drive passage to avoid collision of the host vehicle with a target vehicle in accordance with an embodiment of the present disclosure.
  • Embodiments of the present invention include various steps, which will be described below.
  • the steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the steps.
  • steps may be performed by a combination of hardware, software, and firmware and/or by human operators.
  • Various methods described herein may be practiced by combining one or more machine-readable storage media containing the code according to the present invention with appropriate standard computer hardware to execute the code contained therein.
  • An apparatus for practicing various embodiments of the present invention may involve one or more computers (or one or more processors within a single computer) and storage systems containing or having network access to computer program(s) coded in accordance with various methods described herein, and the method steps of the invention could be accomplished by modules, routines, subroutines, or subparts of a computer program product.
  • Embodiments of the present invention may be provided as a computer program product, which may include a machine-readable storage medium tangibly embodying thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process.
  • the terms "machine-readable storage medium" or "computer-readable storage medium" include, but are not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, and semiconductor memories, such as ROMs, random access memories (RAMs), programmable read-only memories (PROMs), erasable PROMs (EPROMs), electrically erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, or other types of media/machine-readable media suitable for storing electronic instructions (e.g., computer programming code, such as software or firmware).
  • a machine-readable medium may include a non-transitory medium in which data may be stored and that does not include carrier waves and/or transitory electronic signals propagating wirelessly or over wired connections.
  • examples of a non-transitory medium may include, but are not limited to, a magnetic disk or tape, optical storage media such as compact disk (CD) or digital versatile disk (DVD), flash memory, or memory devices.
  • a computer-program product may include code and/or machine-executable instructions that may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements.
  • a code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents.
  • Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.
  • the neuromorphic computing system 102 may comprise one or more processor(s) 104.
  • the one or more processor(s) 104 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions.
  • the one or more processor(s) 104 are configured to fetch and execute computer-readable instructions stored in a memory 106 of the system 102.
  • the memory 106 may store one or more computer-readable instructions or routines, which may be fetched and executed to create or share the data units over a network service.
  • the memory 106 may comprise any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
  • the system 102 may also comprise an interface(s) 108.
  • the interface(s) 108 may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like.
  • the interface(s) 108 may facilitate communication of the system 102 with various devices coupled to the system 102 such as an input unit and an output unit.
  • the interface(s) 108 may also provide a communication pathway for one or more components of the system 102. Examples of such components include, but are not limited to, processing engine(s) 110 and database 122.
  • the processing engine(s) 110 may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) 110.
  • programming for the processing engine(s) 110 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) 110 may comprise a processing resource (for example, one or more processors) to execute such instructions.
  • the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) 110.
  • system 102 may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system 102 and the processing resource.
  • processing engine(s) 110 may be implemented by electronic circuitry.
  • the processing engine(s) 110 may comprise an input spikes receiving unit 112, an output spikes activating unit 114, a plasticity modulation determination unit 116, a neuron updating unit 118, and other unit(s) 120.
  • a plurality of spiking neurons can be operatively coupled to one another using one or more of synapses.
  • the synapses of the spiking neurons can perform pre-processing of synaptic input signals, as defined by the so-called synaptic weights of the neuron.
  • the spiking neural network comprises neurons for outputting spikes in response to a synaptic current.
  • the spiking neural network can be formed of, or comprise, a plurality of neurons.
  • the plurality of neurons can be arranged in an array, in a linear structure, in a tree (branching) structure, or in any other configuration. Further, all or a subset of the plurality of neurons of the spiking neural network are spiking neurons.
  • each spiking neuron sums its inputs and outputs a spike (or spike signal) when the summed inputs reach or exceed a threshold value associated with that spiking neuron.
  • the threshold value may be a threshold voltage, a threshold current, a threshold charge, etc.
  • the type of the inputs into the spiking neuron may correspond to the threshold type.
  • the inputs into the spiking neurons can be currents, and the output can be a sharp change in the current, as may be represented by a delta function, for example.
  • the inputs into the spiking neuron do not need to match the output spikes; for example, the inputs can be currents while the spiking neuron outputs a voltage spike.
  • the input spikes receiving unit 112 can receive multiple data packets pertaining to a set of input spikes by a plurality of spiking neurons present in the neuromorphic computing system.
  • the neuromorphic computing system can be a spiking neural network.
  • the set of input spikes can have attributes related to spatial information and temporal information of the plurality of spiking neurons.
  • the input spikes receiving unit 112 in the spiking neural network can characterize patterns of input neurons in multiple domains, such as the spatial, temporal, and sequential domains.
  • an input dendrite in a neuron unit can make synapses with input neurons and an axon can make synapses with output neurons.
  • the synapses can have weights that define their connection to other neurons.
  • a signal can arrive at the input dendrite as an input spike that is delivered to the soma. If the soma fires, the axon is activated and sends signals, as output spikes, to the dendrites it is connected to.
  • the soma can be an asynchronous process and the neuron can be powered by the soma.
  • if the soma stops working, the neurons attached therewith also stop working.
  • the neuron can fire an input spike in the spiking neural network.
  • the neuron can be associated with a threshold value that determines whether or not the neuron should output an output spike.
  • (the term 'output a spike' is also referred to herein as 'firing' or 'firing a spike').
  • the spiking neural network can comprise a synapse coupled to a neuron.
  • the neuron can be a spiking neuron.
  • the synapse can receive one or more inputs from, for example, other neurons present in the spiking neural network.
  • the synapse outputs a synaptic signal (e.g. a synaptic current, a synaptic voltage, etc.), which may be received by neuron as an input signal.
  • the neuron comprises the threshold value such that when the inputs into the neuron reach or exceed the threshold value, the neuron outputs a spike.
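  • A hedged sketch of this synapse-to-neuron coupling follows: a weighted synapse converts presynaptic spikes into a synaptic current that drives a threshold neuron. The class names, leak mechanism, and constants are illustrative assumptions, not the specific units of this disclosure:

```python
# Illustrative synapse feeding a threshold neuron -- assumed structure.

class Synapse:
    def __init__(self, weight):
        self.weight = weight          # positive = excitatory, negative = inhibitory

    def transmit(self, pre_spike):
        # Convert a presynaptic spike (0/1) into a synaptic current.
        return self.weight * pre_spike

class ThresholdNeuron:
    def __init__(self, v_thresh=1.0, leak=0.9):
        self.v = 0.0
        self.v_thresh = v_thresh
        self.leak = leak              # simple multiplicative leak per step

    def step(self, current):
        self.v = self.v * self.leak + current
        if self.v >= self.v_thresh:   # summed input reached the threshold
            self.v = 0.0              # reset after firing
            return 1                  # output spike
        return 0

if __name__ == "__main__":
    syn = Synapse(weight=0.4)
    neuron = ThresholdNeuron()
    pre_spikes = [1, 1, 1, 0, 1, 1, 1, 1]
    out = [neuron.step(syn.transmit(s)) for s in pre_spikes]
    print(out)                        # [0, 0, 1, 0, 0, 0, 1, 0]
```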
  • a set of neurons can facilitate continuous beating of the human heart.
  • the neurons that facilitate continuous beating of the human heart also provide additional information to other sets of neurons for performing various activities in the human body.
  • One such activity can be the automatic blinking of the human eyes.
  • the plurality of neurons can provide one or more input spikes in the human body, leading to other sets of neurons and resulting in continuous beating of the human heart.
  • one or more additional neurons can be activated that generate a set of output spikes.
  • the set of output spikes can be correlated to one or more functions, such as automatic blinking of the human eyes or continuous breathing with respect to the ongoing heartbeat in the human body.
  • the output spikes activating unit 114 can receive a response from the input spikes. On receiving a response, a set of output spikes is activated.
  • the set of output spikes activated by the set of input spikes are categorized as being under a delayed response or a non-locked response.
  • the plurality of neurons can be arranged in an array, in a linear structure, in a tree (branching) structure, or in any other configurations. Further, when the plurality of neurons is combined in a tree structure a lowest layer performs spatial integration, a middle layer performs order and sequential integration, and a highest layer performs temporal integration.
  • the plasticity can be one of a Hebbian plasticity or an anti-Hebbian plasticity. Further, modulation of the Hebbian plasticity or the anti-Hebbian plasticity within the spiking neural network can be determined by examining reward-specific and/or punishment-specific modulatory neurons.
  • a type and magnitude of Hebbian and anti-Hebbian plasticity can be determined by the modulatory neurons.
  • a new-born baby bird can know how to eat and breathe, as this is shaped by evolution. But the bird has to observe and learn to fly, sing, and hunt. This kind of observational or associative learning can be performed by the system using Hebbian and anti-Hebbian plasticity-based sequence learning.
  • the weights of the synapse are increased/decreased accordingly in a shared fashion by both the pre-synaptic neuron A and the post-synaptic neuron B.
  • the above-discussed mechanism is exactly reversed when the modulation enables anti-Hebbian plasticity.
  • a process of generation of the output spike based on a set of input spikes is saved.
  • the sequence of input neurons whose input spikes lead to the generation of output spikes needs to be considered and noted.
  • the system can be updated based on this sequence, leading to continuous learning and updating of the system.
  • the neuromorphic computing system representing the spiking neural network may employ a spike-timing-dependent plasticity (STDP) learning.
  • a network of neural network elements can communicate via the spike messages sent from one element to another.
  • Each of the elements can implement some number of the neurons.
  • the neurons can operate as primitive nonlinear temporal computing elements.
  • upon a neuron's activation exceeding some threshold level, the neuron can generate a spike message (e.g., an input spike) that can be propagated to a set of additional neurons contained in destination cores to generate a resultant spike message (e.g., an output spike).
  • when the soma spikes, in addition to that spike propagating downstream to the other neurons, the spike also propagates backwards down through a dendritic tree, which can be beneficial for learning.
  • a synaptic plasticity at the synapse can be a function of when a postsynaptic neuron fires and when a presynaptic neuron is firing. It can be appreciated by one skilled in the art that in a hierarchical architecture, once the soma fires, there are other elements that know that the neuron has fired in order to support learning. The learning may be based on implementation of the STDP. The learning can then be communicated to the synapses accordingly.
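  • For context on the STDP mentioned above, a minimal sketch of the classic exponential STDP window follows. This is the textbook rule, not the specific shared-synapse rule of this disclosure, and the amplitudes and time constant are assumed values:

```python
import math

# Classic STDP window -- a textbook illustration. a_plus, a_minus, and
# tau are assumptions; this is not the plasticity rule claimed here.

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change as a function of pre/post spike timing (ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post: potentiation
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:  # post fires before pre: depression
        return -a_minus * math.exp(dt / tau)
    return 0.0

if __name__ == "__main__":
    for dt in (-40, -10, -1, 1, 10, 40):
        print(f"dt={dt:+4d} ms -> dw={stdp_dw(0, dt):+.4f}")
```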
  • other units 120 implement functionalities that supplement applications or functions performed by the system 102, one or more processors 104 or the processing engine(s) 110.
  • FIG. 2A illustrates a neuronal model 200, in accordance with an exemplary embodiment of the present disclosure.
  • FIG. 2B illustrates behavior of a neuron unit 220, in accordance with an exemplary embodiment of the present disclosure.
  • the proposed invention uses intelligent units based on adaptive non-linear spiking neurons.
  • the spiking neuron used is unlike the Leaky Integrate-and-Fire model because, in this model, no differential equations are used. Instead, the design is more like the Spike Response Model, in which activation and spikes occur in real time.
  • This model is also unlike the Hodgkin-Huxley model which, although it accurately models ion channels in real neurons, is too complex for creating multiple units and functional neural network simulations.
  • the proposed neural model has non-linear activations so as to enrich the computation capabilities.
  • the neuron is also adaptive, meaning it can adjust its firing rate on the fly, gaining a capability akin to short term memory.
  • each neuron unit 202 consists of input dendrites 204, a soma 206, and an output axon 208.
  • the input dendrites 204 make synapses with input neurons and the axon 208 makes synapses with output neurons.
  • the synapses can have "weights" that define their connection to other neurons. These synapses can be excitatory or inhibitory depending upon the weights.
  • FIG. 2B illustrates stimulation of a neuron and its behavior thereupon. If a single neuron is stimulated with insufficient strength, as shown at 222, its potential first rises and then falls back. When enough stimulation is given to the neuron, as shown at 224, its potential increases until the neuron fires robustly to the stimulus. Further, every neuron firing provides a signal to the neuron's axon, which connects to the dendrites of other neurons. The processing of the internal state takes place in the soma. A signal can arrive at the dendrites, which is delivered to the soma. If the soma fires, the axon is activated, which sends signals to the dendrites it is further connected to. If the soma does not fire, the neuron does not fire, the axon is not engaged, and no signal is sent to other neurons.
  • the synapse is the joining point of the axon of one neuron and dendrite of another. It can be represented by a weight, which defines the type of connection.
  • the adaptive and non-linear nature of the neuron can be contained in the soma.
  • the soma is an asynchronous process and can power the neuron. If the soma stops, the neuron stops functioning.
  • the neurons can be of various types and can be based on the neuronal unit given above.
  • the sensory neurons can have receptors instead of dendrites to detect stimulus in real-time, whereas the output/muscle neurons can only have positive connections and longer potential durations, and do not have an axon to connect to other neurons.
  • the potential is not reset manually. Instead the internal dynamics of the neurons can introduce the same two periods as in real neurons: an absolute refractory period where the neuron cannot fire at all, and a relative refractory period where the neuron resists firing but can fire given enough activation.
  • an absolute refractory period where the neuron cannot fire at all
  • a relative refractory period where the neuron resists firing but can fire given enough activation.
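  • A hedged sketch of these two refractory periods follows. The disclosure states they emerge from the internal dynamics of its neurons; here they are imitated with an assumed explicit mechanism (a hold-off timer plus a decaying elevated threshold), with arbitrary constants:

```python
# Absolute and relative refractory periods -- illustrative mechanism only.

def simulate(stimulus, dt=1.0, v_thresh=1.0, abs_ref=3.0, rel_decay=0.7):
    v, threshold, ref_timer = 0.0, v_thresh, 0.0
    spikes = []
    for t, s in enumerate(stimulus):
        if ref_timer > 0:                 # absolute refractory: cannot fire
            ref_timer -= dt
            v = 0.0
            continue
        v = 0.8 * v + s
        # relative refractory: elevated threshold decays back to baseline
        threshold = v_thresh + (threshold - v_thresh) * rel_decay
        if v >= threshold:
            spikes.append(t * dt)
            v = 0.0
            ref_timer = abs_ref           # enter absolute refractory period
            threshold = 3.0 * v_thresh    # raise threshold (relative period)
    return spikes

if __name__ == "__main__":
    # Constant drive; note spikes are spaced out by both refractory effects.
    print(simulate([0.6] * 40))
```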
  • FIGs. 3A and 3B illustrate at 300 how neurons in the proposed model behave in accordance with an exemplary embodiment of the present disclosure.
  • FIG. 3A illustrates two neurons with different inherent sensitivities (obtained by Neuroevolution). Both of these neurons can be given a low intensity stimulus. As can be seen, neuron 1 with higher sensitivity responds rapidly whereas neuron 2, being less sensitive, responds sparsely to the same stimulus.
  • FIG. 3B illustrates the situation when both neurons are stimulated with the same very-high-strength stimulus.
  • the less sensitive neuron 2 starts firing robustly along with the sensitive neuron 1, and the maximum firing frequency for both is the same, despite their different sensitivities. This is due to the absolute and relative refractory periods emergent in the model. This behavior is only possible due to the internal dynamics of the neuronal units themselves.
  • FIG. 4 illustrates at 400 how spatial and temporal processing is present in prior art representing biological life.
  • FIG. 4 examines the most primitive natural intelligence using neurons. Such intelligence can be present in plants as well since, although plants do not have neurons like animals do, there are certainly cells that act like neurons. Their tasks include information processing and signaling.
  • An example of the venus flytrap is elaborated herein to appreciate how biological systems use techniques similar to those that can be implemented using computational techniques.
  • the plant can live in nutrient-poor habitats but has been able to overcome the limitations of its surroundings by evolving a carnivorous lifestyle, particularly by modifying its leaves into active traps to catch animals. When flies, ants, or other small animals touch mechano- sensitive hairs protruding from the inner surface of the bi-lobed trap, it shuts within a fraction of a second.
  • processing and signaling akin to neural processes is utilized here.
  • the spatial summation system integrates touch events over various trigger hairs, while the temporal summation system integrates touch events over a period of time.
  • the decision of trap closure relies on the output of this integration. This kind of processing is very akin to neurons.
  • the plant cells involved can be considered a very different kind of primitive neuronal unit, while the decision system regarding the trap closure can be analogized by interactions between such neuronal units.
  • the trap and its closure itself can be analogized by a muscle’s contraction due to neuronal stimulation.
  • connection and "weights" of the connection between these "neurons" can be defined by evolution by natural selection. Over time, the weights fluctuated and settled on what maximized the effort of closure and trapping insects without responding to false alarms.
  • the fitness function here is nothing but the tradeoffs between sensitivity of the trap and energy wasted in false alarms like raindrops or wind.
  • a corresponding neural model can be created for the action of the venus flytrap plant.
  • the model can consist of the sensory neuron responding on every touch, while a motor neuron can integrate touches over space (multiple receptors) and time (multiple touches) to generate output activation and trap closure.
  • the entire model can be crafted by neuroevolution by artificial selection, wherein the weights can be evolved according to the fitness function which defines positive fitness value for output activation for touches by insects and negative fitness value for false alarms.
  • FIGs. 5A to 5C illustrate the "flytrap reflex" model 500 that can be implemented using the proposed system, in accordance with an exemplary embodiment of the present disclosure.
  • the touch receptors 502 can stimulate the sensory unit N1 (504), which can stimulate the motor unit N2 (508) using a plastic synapse (506). If the motor unit N2 (508) fires, it can stimulate the effector unit N3 (510) that acts as the trap. If motor unit N2 (508) does not fire, no action will be performed by the effector unit N3 (510).
  • the most crucial weight here is the synapse 506 between sensory unit (504) and motor unit (508) which determines whether or not to activate the effector unit N3 (510), while the weight between motor unit N2 (508) and effector unit N3 (510) controls how much the effector is activated.
  • the value of the effector unit (526) can represent trap closure. If the value of the effector unit is near 1, the trap will close. If it is near 0, the trap remains open. This is like the contraction of a muscle in animals. Further, in FIG. 5B, the sensory unit can be triggered 350 ms apart to simulate two touches made after a very long time, such as by wind gusts (a false alarm). The motor unit does not fire, and the effector unit is not activated.
  • the touch stimulus can be presented only with a gap of 30 ms to simulate an insect crawling (True Stimulus).
  • the model can "count" and "remember" how far apart the touches were made, and the motor unit fired, which caused an action by the effector unit. How fast the 'trap' closed can also be observed.
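  • A hedged sketch of the temporal summation behind this reflex: a motor unit that fires only when two touches arrive within a short window, distinguishing the 30 ms insect stimulus from the 350 ms false alarm. The decay constant and threshold are assumed values, not ones given in the disclosure:

```python
import math

# Flytrap-style temporal summation -- illustrative assumption: each touch
# adds to a decaying trace, so only touches close together in time push
# the motor unit over threshold. tau and threshold are assumed values.

def motor_fires(touch_times_ms, tau=100.0, threshold=1.5):
    trace, last_t = 0.0, None
    for t in touch_times_ms:
        if last_t is not None:
            trace *= math.exp(-(t - last_t) / tau)  # decay since last touch
        trace += 1.0                                # each touch adds one unit
        last_t = t
        if trace >= threshold:                      # temporal summation succeeded
            return True
    return False

if __name__ == "__main__":
    print(motor_fires([0, 350]))  # False -> 350 ms apart: wind, trap stays open
    print(motor_fires([0, 30]))   # True  -> 30 ms apart: insect, trap closes
```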
  • the flytrap reflex model can be extended to trigger digestion upon "counting" five such input touch stimuli.
  • the flytrap reflex model as elaborated above forms the basis of every reflex action, even in animals.
  • a simple monosynaptic reflex arc can be a patellar reflex, also known as knee-jerk response, which can be used by doctors to test the reflex arc in patients.
  • This reflex involves interneurons that inhibit firing of the opposite muscular motor neurons.
  • Such reflex arcs do not need voluntary control signal from the brain. Some of these reflexes may be present for a short time in the body even after death of a person.
  • These reflex arcs combine both the spatial processing aspect and the temporal processing aspect to generate input phase-locked responses. While some reflex actions are complex like running away, a few of them are very basic like muscle stretching, and share characteristics with the flytrap reflex model.
  • FIGs. 6A to 6C illustrate at 600 how a XOR logic function can be implemented in accordance with an exemplary embodiment of the present disclosure.
  • an XOR function gives an output (say '0') when both of its inputs have the same value; else it gives another output (say '1'). This is illustrated in FIG. 6A.
  • inputs 1 and 2 are the input nodes S1 and S2, along with the bias unit (to maintain a default potential) into the network.
  • FIG. 6C illustrates test results pertaining to activations of neuron A and neuron B as elaborated above.
  • initially, only one input is high, so neuron A is active and neuron B is not engaged. But when both inputs are turned high, neuron B is engaged. Neuron B inhibits neuron A, and so neuron A goes inactive.
  • FIG. 6C illustrates that the neuroevolutionary process has evolved weights such that neuron A (602) can fire only when either input is high, while neuron B (604) fires only when both inputs are high. Neuron B inhibits neuron A, thus achieving the output logic.
  • neuron A can be the output neuron, which can be active only when either one input is high, but not when both are high.
  • This implements the XOR logic gate with just two active neurons, instead of the multilayered architecture needed for second-generation neural networks, with or without back propagation.
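  • A hedged sketch of this two-neuron XOR follows. The weights below are hand-picked assumptions standing in for the neuroevolved weights described above; neuron B acts as an AND detector that inhibits neuron A:

```python
# Two-neuron XOR sketch -- weights are illustrative assumptions, not the
# evolved weights of the disclosure.

def step(v, threshold=1.0):
    return 1 if v >= threshold else 0

def xor(s1, s2):
    # Neuron B: fires only when both inputs are high (acts like AND).
    b = step(0.6 * s1 + 0.6 * s2)
    # Neuron A: fires when either input is high, but B inhibits it.
    a = step(1.0 * s1 + 1.0 * s2 - 2.0 * b)
    return a

if __name__ == "__main__":
    for s1 in (0, 1):
        for s2 in (0, 1):
            print(f"XOR({s1}, {s2}) = {xor(s1, s2)}")
```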
  • the neuroevolution process can also provide a successful mechanism for training real-time multilayered adaptive non-linear spiking neural networks.
  • a sequence is when neurons fire one after the other in a certain fashion. For instance, a neuron A may fire 10 ms before a neuron B. In another instance, B may fire 10 ms before A. Since both A and B are firing, there is no spatial difference (both are engaged in the sequence). Similarly, since in both cases the time difference is 10 ms, there is no temporal difference. Thus, the only differentiating factor in these two cases is the sequence, or the order in which they fired.
  • sequential/order learning is hard to achieve at the evolutionary level. This is because sequential patterns are transient, fast, and short-lived in nature, which makes them hard to learn through evolution by natural selection.
  • the way to integrate sequence-specific information along with the spatial and temporal information is by utilizing modulated plasticity, which is also done in nature. For example, a new-born baby bird knows how to eat and breathe. This is shaped by evolution. But the bird then has to observe and learn to fly, sing, and hunt. This kind of observational learning, or associative learning, is beyond the scope of neuroevolution, except for a few primitive sequence patterns like breathing. For this kind of sequence learning, Hebbian and anti-Hebbian plasticity is proposed by this model.
  • FIG. 7A elaborates upon an approximate structure of Hebbian and anti-Hebbian weight change, 700, of a prior art approach.
  • a type and magnitude of Hebbian/anti-Hebbian plasticity can be determined by the modulatory neurons.
  • the approximate structure of Hebbian and anti-Hebbian weight change is given in FIG. 7A. These values are decided on the fly based on the output of modulatory neurons, as elaborated further.
  • Hebbian learning is one of the oldest learning algorithms and is based in large part on the dynamics of biological systems. A synapse between two neurons is strengthened when the neurons on either side of the synapse (input and output) have highly correlated outputs. In essence, when an input neuron fires and such firing frequently leads to the firing of the output neuron, the synapse is strengthened. Following the analogy to an artificial system, the tap weight is increased with high correlation between two sequential neurons. As is stated, 'neurons that fire together, wire together.' Currently, Hebbian learning is not used for sequence processing.
  • FIGs. 7B to 7D illustrate, at 720, how Hebbian and anti-Hebbian plasticity is implemented in the proposed model for sequence characterization, in accordance with an exemplary embodiment of the present disclosure.
  • FIGs. 7B to 7D illustrate two neurons A and B connected in a feed-forward fashion. This makes neuron A the presynaptic neuron because it is before the synapse.
  • neuron B is the postsynaptic neuron because it is after the synapse; that is, the synapse feeds input into B.
  • Hebbian learning can be enabled here by modulation.
  • the model defines that the synapse is shared between the pre-synaptic and post-synaptic neurons. These two neurons independently adjust the weight of the shared synapse. This updating only occurs when that neuron is firing. So, for clarity, the way synapse strength will adjust is as shown in FIG. 7C.
  • when presynaptic neuron A fires and Hebbian plasticity is enabled, it will depress the weight, while when postsynaptic neuron B fires, it will potentiate the weight (as shown in FIG. 7D).
  • the degree of such change in the weight is defined by the timing difference, the intrinsic property values, and the modulatory signals from modulatory neurons.
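  • A hedged sketch of this shared-synapse rule as just described: each side adjusts the weight only when it fires, the modulatory signal selects Hebbian or anti-Hebbian sign, and the timing difference scales the magnitude. The learning rate and time constant are illustrative assumptions:

```python
import math

# Shared-synapse plasticity sketch following the rule described above:
# under Hebbian modulation a presynaptic spike depresses the weight and
# a postsynaptic spike potentiates it; anti-Hebbian modulation reverses
# the signs. lr and tau are illustrative assumptions.

def update_weight(w, who_fired, dt_ms, hebbian=True, lr=0.05, tau=20.0):
    """who_fired: 'pre' or 'post'; dt_ms: time since the other neuron fired."""
    magnitude = lr * math.exp(-abs(dt_ms) / tau)  # closer in time -> larger change
    delta = +magnitude if who_fired == "post" else -magnitude
    if not hebbian:
        delta = -delta                            # anti-Hebbian reverses the rule
    return w + delta

if __name__ == "__main__":
    w = 0.5
    w = update_weight(w, "pre", dt_ms=10)                  # pre fires: depress
    w = update_weight(w, "post", dt_ms=5)                  # post fires: potentiate
    w = update_weight(w, "post", dt_ms=5, hebbian=False)   # anti-Hebbian: depress
    print(round(w, 4))
```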
  • FIG. 8 illustrates oscillatory neurons, 800, in accordance with an exemplary embodiment of the present disclosure.
  • all the responses seen until now are stimulus-dependent; that is, a response occurs only when a certain stimulus is encountered.
  • intelligent beings can generate rhythmic activity intrinsically even in the absence of rhythmic input.
  • These neural circuits are commonly referred to as oscillators or central pattern generators.
  • FIGs. 9A to 9D illustrate various examples 900 of oscillatory neurons, in accordance with an exemplary embodiment of the present disclosure.
  • the mechanism by which simple oscillators work can be primarily reciprocal inhibition.
  • the inhibition is modulated by other neurons to create varying rhythmic patterns.
  • FIG. 9A illustrates output of a muscle/effector which is connected to such an oscillatory sub-network.
  • the entire network can remain the same; only the modulatory signals change, which alters the dynamics such that the oscillations differ.
  • Case 1, as shown in FIG. 9A, has all positive synapses and there is no oscillation since there is no inhibition.
  • FIG. 9B illustrates Case 2 of positive and negative synapses, non-modulated.
  • here the synapses are not modulated, so there are just random fluctuations due to the adaptive nature. There are no oscillations.
  • FIG. 9C illustrates Case 3 of positive and negative synapse, modulated, slow.
  • the synapses are modulated to give slow oscillations.
  • FIG. 9D illustrates Case 4 of positive and negative synapse, modulated fast.
  • the synapses are modulated to give fast oscillations.
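  • A hedged sketch of reciprocal inhibition, the mechanism named above: two tonically driven units inhibit each other, and an added adaptation variable (an assumption for this sketch) makes the active unit release its partner, producing alternating bursts. All constants are illustrative:

```python
# Half-center oscillator sketch -- reciprocal inhibition plus adaptation.
# drive, w_inhib, adapt_rate, and recover are assumed values.

def oscillate(steps=60, drive=1.0, w_inhib=1.2, adapt_rate=0.15, recover=0.05):
    a = [1.0, 0.0]        # activities of the two units
    fatigue = [0.0, 0.0]  # adaptation variables
    trace = []
    for _ in range(steps):
        new_a = []
        for i in (0, 1):
            # tonic drive, reciprocal inhibition, and self-adaptation
            x = drive - w_inhib * a[1 - i] - fatigue[i]
            new_a.append(max(0.0, min(1.0, x)))
        for i in (0, 1):
            # active units fatigue; inactive units recover
            fatigue[i] += adapt_rate * new_a[i] - recover * fatigue[i]
        a = new_a
        trace.append(1 if a[0] > a[1] else 2)  # which unit dominates
    return trace

if __name__ == "__main__":
    # Prints alternating runs of 1s and 2s: an intrinsic rhythm with no
    # rhythmic input, as described for central pattern generators.
    print("".join(str(u) for u in oscillate()))
```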
  • the Hebbian and the anti-Hebbian learning cannot always be engaged. That is, these kinds of learning occur only when modulated by specifically designated neurons called modulatory neurons. These neurons are inspired by the dopamine neurons in the brain, and are primarily used for indicating rewards/punishments and errors. Different dopamine neurons encode errors, reward, and punishment with variable strengths, and this determines whether Hebbian or anti-Hebbian learning will occur in a group of interconnected neurons.
  • the modulatory neurons can be connected directly to the group of neurons in question and enable synaptic modifications. These neurons can also act as error signals for a subgroup of neurons, indicating whether 'learning' a spatio-temporal sequence is advantageous or not. If advantageous, the synaptic strengths are potentiated; otherwise they are depressed and the neuronal subgroup tries to learn a different spatio-temporal sequence. Once the desired output is achieved, these modulatory neurons turn off so that no further rapid changes occur in the given neuronal subgroup. These neurons are designated in the architecture itself.
  • FIGs. 10A to 10D illustrate, at 1000 an architecture for implementation of a model for understanding human speech and results thereupon, in accordance with an exemplary embodiment of the present disclosure.
  • the system can be developed to understand human speech.
  • second-generation neural networks cannot be applied to human speech due to its continuous nature.
  • other spiking models are not capable of handling such a task, and so spiking neural networks are not used, because existing systems are simply not powerful enough.
  • audio processing is a sequential problem.
  • RNNs and LSTMs mainly use sequential processing over time. This means that long-term information has to travel sequentially through all cells before reaching the present processing cell. Hence it can easily be corrupted by being multiplied many times by small numbers less than 1. This is the cause of vanishing gradients.
  • a fundamental issue with these models is that RNNs and LSTMs combine only two types of processing (spatial and sequential) but fail to incorporate temporal processing. That is why they are inherently unsuitable for real-time applications.
  • RNNs and LSTMs are not hardware-friendly. It takes a lot of resources to train these networks fast and, similarly, to run them in the cloud. Given that the demand for speech-to-text is growing rapidly, the models do not scale well. While the LSTM model (which can be seen as multiple switch gates) can bypass units and thus remember for longer time steps, and so remove some of the vanishing gradients, there is still a sequential path from older past cells to the current one. In fact, the path is now even more complicated, because it has additive and forget branches attached to it.
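  • To make the vanishing-gradient point concrete, a small numeric demonstration (illustrative values only) shows how a quantity repeatedly scaled by a factor below one collapses toward zero over many steps:

```python
# Vanishing-gradient demonstration -- illustrative numbers only. A signal
# (or gradient) repeatedly multiplied by a factor below one shrinks
# geometrically, so very old inputs barely influence learning.

factor = 0.9  # assumed per-step scaling factor, e.g. a recurrent weight < 1
for steps in (10, 50, 100, 200):
    print(f"after {steps:3d} steps: {factor ** steps:.2e}")
# after  10 steps: 3.49e-01
# after  50 steps: 5.15e-03
# after 100 steps: 2.66e-05
# after 200 steps: 7.06e-10
```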
  • the proposed system avoids all these issues by rejecting the error-gradient-descent-based approach and instead adopting the more natural mechanisms of neuroevolution and modulated plasticity for processing spatial, temporal, and sequential data in an implementation that is light-weight, scalable, highly functional, and works in real time.
  • a prototype can be developed using the proposed model of spiking neural networks for all its computations.
  • an audio sound 1002 is first frequency-analyzed (similar to the function of the cochlea in the human ear), as shown at 1004.
  • the sound then passes through the proposed system 1006 of ascending sub-networks of spiking neurons and finally provides output/read-out neurons 1008.
  • the ascending sub-networks of spiking neurons can include sensory neurons 1010, integrating interneurons 1012, filtering interneurons 1014, and Hebbian/anti-Hebbian interneurons 1016.
  • the Hebbian/anti-Hebbian neurons can also receive inputs from oscillatory neurons 1018 and modulatory neurons 1020.
  • the sensory part is created by neuroevolution while the interneuronal layer is subject to modulated Hebbian and anti-Hebbian plasticity.
  • The structure disclosed can extract spatial, temporal, as well as sequential patterns and accurately identify human speech. All simulations are done on a regular consumer-grade laptop. The output is not phase-locked to the input stimulus. The oscillatory neurons are used for determining when to provide the output decision.
  • the proposed system can take as input raw sounds and train itself to a client specific vocabulary by extracting spatial, temporal as well as sequential patterns of the raw sound inputs in line with the client specific vocabulary.
  • the proposed system can find good use in speech recognition systems since it can extract spatial, temporal, and sequential patterns into distinct parts.
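  • A hedged sketch of the cochlea-like frontend described above: a short-time frequency analysis whose band energies are thresholded into input spikes. The frame size, band count, and threshold are assumptions, NumPy is assumed as the numerics library, and the disclosure's downstream sub-networks (interneurons, oscillators, modulators) are not reproduced:

```python
import numpy as np

# Cochlea-like frequency-analysis frontend -- illustrative assumptions only.

def audio_to_spikes(samples, frame=400, bands=8, thresh=0.1):
    """Return a (num_frames x bands) 0/1 array of input spikes."""
    spike_rows = []
    for start in range(0, len(samples) - frame, frame):
        spectrum = np.abs(np.fft.rfft(samples[start:start + frame]))
        # pool FFT bins into a few coarse frequency bands
        pooled = [band.mean() for band in np.array_split(spectrum, bands)]
        peak = max(pooled) or 1.0                  # avoid division by zero
        spike_rows.append([1 if p / peak > thresh else 0 for p in pooled])
    return np.array(spike_rows)

if __name__ == "__main__":
    t = np.linspace(0, 1, 16000, endpoint=False)   # 1 s at 16 kHz
    tone = np.sin(2 * np.pi * 440 * t)             # a 440 Hz test tone
    print(audio_to_spikes(tone)[:3])               # spikes concentrate in one band
```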
  • FIGs. 10B to 10D provide the results, including the spectrogram for frequency analysis and the output by the read-out neurons.
  • FIG. 10B is the spectrogram for audio saying "1, 2, 3, 4, 5".
  • FIG. 10C is the spectrogram for audio saying "5, 4, 3, 2, 1", and FIG. 10D is the spectrogram for audio saying "5, 1, 3".
  • FIGs. 11A to 11C illustrate at 1100 single sensory neuron behavioral variation, in accordance with an exemplary embodiment of the present disclosure.
  • FIG. 12A illustrates at 1200 the behavior of a real muscle (prior art), while FIG. 12B illustrates at 1202 the behavior of the muscle/effector unit of this model, in accordance with an exemplary embodiment of the present disclosure.
  • FIG. 12C illustrates at 1204 muscle integration with multiple sensory inputs, and then tetanus stimulation (prior art), while FIG. 12D emulates at 1206 the same using the proposed model, in accordance with an exemplary embodiment of the present disclosure.
  • FIG. 13A illustrates at 1300 a real muscle response to tetanic tension (75 Hz) at three levels (prior art), while FIG. 13B illustrates at 1302 response achieved using proposed model, in accordance with an exemplary embodiment of the present disclosure.
  • FIG. 13A shows the response at three levels, shown as 1302, 1304 and 1306, to tetanic tension (75 Hz).
  • series 1 response is indicated as 1322, series 2 as 1324 and series 3 as 1326.
  • FIG. 14 illustrates at 1400 the values of data points as per the system.
  • the normal distribution denotes performance of a biological or artificial set of neural networks that are randomly connected internally.
  • the z values can be drawn from a normal distribution N whose mean is set to 0 and whose variation is called the mutation step size.
  • Parents can be selected by a uniform random distribution whenever an operator needs one or more. Thus, ES parent selection is unbiased, as every individual has the same probability of being selected. In an ES, "parent" means a population member (in GAs, a population member selected to undergo variation).
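  • A hedged sketch of the evolution-strategy mechanics just described: parents are picked uniformly at random, and offspring are mutated by adding z ~ N(0, sigma), where sigma is the mutation step size. The sphere fitness function and survivor-selection scheme are placeholder assumptions, not taken from the disclosure:

```python
import random

# Evolution-strategy sketch -- uniform parent selection and Gaussian
# mutation z ~ N(0, sigma). Fitness function is an assumed placeholder.

def evolve(pop_size=20, genome_len=4, sigma=0.3, generations=200):
    fitness = lambda g: -sum(x * x for x in g)        # maximize -> approach 0
    pop = [[random.uniform(-2, 2) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        parent = random.choice(pop)                   # unbiased parent selection
        child = [x + random.gauss(0.0, sigma) for x in parent]  # z ~ N(0, sigma)
        # (mu + 1) survivor selection: replace the worst if the child is better
        worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
        if fitness(child) > fitness(pop[worst]):
            pop[worst] = child
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print([round(x, 3) for x in best])   # values drift toward zero
```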
  • At block 1502, receiving, by one or more processors, one or more data packets pertaining to a set of input spikes by at least one of the plurality of spiking neurons, the set of input spikes having one or more attributes related to spatial information and temporal information of the at least one of the plurality of spiking neurons.
  • At block 1504, in response to the received one or more input spikes, activating, by the one or more processors, a set of output spikes by the at least one of the plurality of spiking neurons, where the plurality of the spiking neurons are operatively coupled to one another using a synapse.
  • FIG. 16 is a flow diagram, 1600 illustrating a method for optimizing a trajectory for a host vehicle for dynamic evasive maneuver in a drive passage to avoid collision of the host vehicle with a target vehicle in accordance with an embodiment of the present disclosure.
  • the computer system includes an external storage device (1610), a bus (1620), a main memory (1630), a read-only memory (1640), a mass storage device (1650), a communication port (1660), and a processor (1670).
  • examples of processor (1670) include, but are not limited to, an Intel® Itanium® or Itanium 2 processor(s), or AMD® Opteron® or Athlon MP® processor(s), Motorola® lines of processors, FortiSOC™ system-on-a-chip processors or other future processors.
  • Processor (1670) may include various engines associated with embodiments of the present invention.
  • Memory (1630) can be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art.
  • Read-only memory (1640) can be any static storage device(s), e.g., but not limited to, Programmable Read Only Memory (PROM) chips for storing static information, e.g., start-up or BIOS instructions for processor (1670).
  • Mass storage (1650) may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), available from, e.g., Seagate (e.g., the Seagate Barracuda 7200 family) or Hitachi (e.g., the Hitachi Deskstar 7K1000); one or more optical discs; and Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks (e.g., SATA arrays), available from various vendors including Dot Hill Systems Corp., LaCie, Nexsan Technologies, Inc. and Enhance Technology, Inc.
  • Bus (1620) communicatively couples processor(s) (1670) with the other memory, storage and communication blocks.
  • Bus (1620) can be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), USB or the like, for connecting expansion cards, drives and other subsystems, as well as other buses, such as a front side bus (FSB), which connects processor (1670) to the software system.
  • operator and administrative interfaces, e.g., a display, keyboard, and a cursor control device, may also be coupled to bus (1620) to support direct operator interaction with the computer system.
  • Other operator and administrative interfaces can be provided through network connections connected through communication port (1660).
  • External storage device (1610) can be any kind of external hard drive, floppy drive, IOMEGA® Zip Drive, Compact Disc Read-Only Memory (CD-ROM), Compact Disc Re-Writable (CD-RW), or Digital Video Disk Read-Only Memory (DVD-ROM).
  • the present disclosure elaborates upon a model of real-time adaptive non-linear spiking neurons based upon the biologically inspired structure of neurons, including their axon, soma, and dendrites, and the way they interact with each other, that can work in a computationally efficient way.
  • the system disclosed uses a combination of sub-networks in a hierarchy, each performing its individual function: the lowest layer for spatial integration, the middle layer for order and sequential integration, and the highest layer for temporal integration. It implements various layers of neuronal groups including sensory neurons, interneurons, motor neurons, muscles/effectors, oscillators, and modulatory neurons.
  • the invention provides for neuronal dynamics that yield modes of activity of the neurons ranging from bursting, tonic firing, phasic firing, absolute refractory period, relative refractory period, post-inhibition rebound, and sensory adaptation. It discloses reflex arcs that arise from sensory-to-motor neuronal mapping as the basic unit of cognition in spiking neural networks.
  • the invention uses central pattern generators, or oscillators, that emerge from the activity of groups of regular neurons, which are heavily modulated by sensory input, and provides for an implementation of modulation of Hebbian and anti-Hebbian plasticity within the neuronal groups using reward-specific and punishment-specific modulatory neurons.
  • the system disclosed can enable delayed and non-phase-locked responses to input stimuli, enabled by a combination of stimulus-dependent processing and intrinsically oscillatory sub-networks. It enables characterization of patterns in the spatial, temporal, as well as sequential domains (order sensitivity) using spiking neural networks.
  • the system disclosed herein enables the most effective way of using the above-described third-generation spiking neural networks, while gaining significant performance improvement.
  • The proposed invention enables development of an SNN with far fewer layers. If nodes fire only in response to a spike (actually a train of spikes), then one spiking neuron could replace many hundreds of hidden units in a sigmoidal neural network.
  • Proposed system can handle continuous input data like live streaming, and is much more energy efficient than any other model/system offering similar functionalities.
  • the various spikes can be routed like data packets, further reducing layers, instead of the real-valued outputs of second-generation neural networks.
  • the invention proposes nature-inspired modulated Hebbian and anti-Hebbian plasticity as a non-gradient training method, while CNNs rely on gradient descent.
  • gradient descent, which looks at the performance of the overall network, can be led astray by unusual conditions at a layer, like a non-differentiable activation function.
  • other, more severe limitations of back propagation are: 1) the vanishing gradient in deep learning, and 2) a tendency to converge to a local minimum instead of the global optimum.
  • a very severe limitation of convolutional networks is that they are too constrained: they accept a fixed-size vector as input (e.g., an image) and produce a fixed-size vector as output (e.g., probabilities of different classes). Moreover, these models perform this mapping using a fixed number of computational steps (e.g., the number of layers in the model).
  • the proposed system enables SNNs that can learn from one source and apply knowledge learnt to another and can generalize about their environment.
  • the SNNs enabled can learn to distinguish minor differences and can specialize to input patterns. Further, SNNs so formed can remember, and tasks once learned can be recalled and applied to other data. They can learn from their environment unsupervised/semi-supervised and with very few examples or observations. That makes them quick learners.
  • the proposed system is very lightweight and can run hundreds of spiking neurons in real time, in parallel, on a conventional desktop PC or laptop. It needs very few data samples to learn; even two to three samples are enough in some cases to learn effectively. It can extract temporal sequences along with spatial sequences, as well as sequential and order-specific sequences.
  • the proposed system uses spike timing-based coding instead of rate -based coding. Spike timing-based coding has been decisively shown to be much more powerful and bio-realistic.
  • the proposed system enables an asynchronous units and modular structure. This is a great advantage. Since the neuronal units and subgroups are asynchronous, in a deep network the upper layers do not have to wait for signals from the lower layers. Each layer works autonomously and there are several cross connections. This enables much deeper networks with almost no lagtimes. Much deeper functional neural networks can be utilized as opposed to second-generation neural networks.
  • the proposed system enables novel behaviors and functionalities unlike existing artificial neural systems.
  • the present disclosure relates to the field of neuromorphic computing system engineering. More particularly, the present disclosure relates to a system and method for updating a spiking neural network.
  • the present disclosure provides a neuromorphic computing system that works on nature-inspired modulated hebbian and anti-hebbian plasticity.
  • the present disclosure provides a system and method for providing an efficient technique of using third generation neural networks, while providing significant performance improvement.
  • the present disclosure provides a system and method for providing a neuromorphic computing system that handles continuous input patterns, differentiates and learns between various input patterns and applies the input patterns to multiple operations.
  • the present disclosure provides a system and method for providing a neuromorphic computing system with multiple layers, where the layers work autonomously such that upper layers do not depend on inputs from lower layers, resulting in deeper networks with no lag times.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Feedback Control In General (AREA)

Abstract

A neuromorphic computing system comprising a plurality of spiking neurons is disclosed. A plurality of spiking neurons is provided, connected to each other using synapses. A set of input spikes is generated by the plurality of spiking neurons based on spatial and temporal information. A set of output spikes is generated based on the generated set of input spikes, and an occurrence pattern of a plasticity modulation is determined. The plurality of the spiking neurons is updated based on the determined occurrence pattern of the plasticity modulation.

Description

A NEUROMORPHIC COMPUTING SYSTEM
TECHNICAL FIELD
[0001] The present disclosure relates to the field of neuromorphic computing system engineering. More particularly, the present disclosure relates to a system and method for updating a spiking neural network.
BACKGROUND
[0002] Existing computing systems such as artificial neural networks (ANNs) and convolutional neural networks (CNNs, or ConvNets) suffer from various limitations. The limitations range from needing huge amounts of tagged training data, which is generally in short supply and expensive to produce, to the large number of tuning parameters that need to be used in the system. The tuning parameters need to be adjusted occasionally, making the computation process of the system lengthy and labor intensive. Additionally, these systems cannot handle continuous data inputs like live streaming, cannot extract meaning, and cannot remember obtained results, and hence are unsuitable for real-time processing.
[0003] The prevalent third generation of neural networks, termed spiking neural networks (SNNs), uses still more biologically realistic models of neurons to carry out computations. The SNN is fundamentally different from the neural networks presently known. SNNs operate using spikes, which are discrete events that take place at points in time, rather than continuous values. The occurrence of a spike is determined by differential equations that represent various biological processes, the most important of which is the membrane potential of the neuron. Essentially, once a neuron reaches a certain potential, it spikes, and the potential of the neuron is reset.
[0004] However, existing models of SNNs work on recorded pre-existing data and not on a real-time live data stream. Further, the SNNs lack a qualitative description of intelligence and hence are limited to theoretical discussions and are not used practically as real-time solutions, despite natural and biological intelligence being powered by spiking neurons in real life.
[0005] While it is generally accepted that third-generation neural networks can learn by some sort of spike-timing-dependent plasticity, the mechanism and efficient implementation of such plasticity may still not provide enough functionality to use these neural networks effectively.
[0006] Hence there is a need in the art for a system to overcome the existing limitations of ANNs, and particularly those of SNNs, as elaborated above.
OBJECTS OF THE PRESENT DISCLOSURE
[0007] The present disclosure relates to the field of neuromorphic computing system engineering. More particularly, the present disclosure relates to a system and method for updating a spiking neural network.
[0008] It is an object of the present disclosure to provide a neuromorphic computing system that works on nature-inspired modulated Hebbian and anti-Hebbian plasticity.
[0009] It is an object of the present disclosure to provide an efficient technique of using third generation neural networks, while providing significant performance improvement.
[0010] It is an object of the present disclosure to provide a neuromorphic computing system that handles continuous input patterns, differentiates and learns between various input patterns and applies the input patterns to multiple operations.
[0011] It is an object of the present disclosure to provide a neuromorphic computing system that manages multiple neurons in real time during parallel operations.
[0012] It is an object of the present disclosure to provide a neuromorphic computing system that learns from the environment without much supervision and with few observations.
[0013] It is an object of the present disclosure to provide a neuromorphic computing system that extracts sequential and order-specific sequences and uses spike-timing-based coding instead of rate-based coding for determining output spikes.
[0014] It is an object of the present disclosure to provide a neuromorphic computing system with multiple layers, where the layers work autonomously such that upper layers do not depend on inputs from lower layers, resulting in deeper networks with no lag times.
SUMMARY
[0015] The present disclosure relates to the field of neuromorphic computing system engineering. More particularly, the present disclosure relates to a system and method for updating a spiking neural network.
[0016] An aspect of the present disclosure relates to a neuromorphic computing system comprising a plurality of spiking neurons, said system comprising: a processor communicatively coupled to a memory, the memory storing a set of instructions executable by the processor, wherein the processor upon execution of the set of instructions causes the system to: receive one or more data packets pertaining to a set of input spikes by at least one of the plurality of spiking neurons, the set of input spikes having one or more attributes related to spatial information and temporal information of the at least one of the plurality of spiking neurons; in response to the received one or more data packets, activate a set of output spikes by the at least one of the plurality of spiking neurons, where the plurality of the spiking neurons are operatively coupled to one another using a synapse; determine a change in performance of the synapse when the set of output spikes are generated based on the set of input spikes by the plurality of spiking neurons, wherein the determined change in performance of the synapse is indicative of an occurrence pattern of a plasticity modulation within the plurality of spiking neurons; and update the neuromorphic computing system based on the indicated occurrence pattern of the plasticity modulation.
[0017] In an embodiment, the plasticity is one of a Hebbian plasticity or an anti-Hebbian plasticity.
[0018] In an embodiment, the set of output spikes activated by the set of input spikes are categorized as being under a delayed response or a non-locked response.
[0019] In an embodiment, the plurality of spiking neurons is combined to form a hierarchy.
[0020] In an embodiment, within the hierarchy a lowest layer performs spatial integration, a middle layer performs order and sequential integration, and a highest layer performs temporal integration.
[0021] In an embodiment, a potentiation of the synapse occurs at firing of a post-synaptic neuron and depression of the synapse occurs at firing by a pre-synaptic neuron.
[0022] In an embodiment, the potentiation of the synapse occurs due to modulation of the Hebbian plasticity and the depression of the synapse occurs due to modulation of the anti-Hebbian plasticity.
[0023] Another aspect of the present disclosure relates to a method for training a neuromorphic network, said method comprising: receiving, by one or more processors, one or more data packets pertaining to a set of input spikes by at least one of the plurality of spiking neurons, the set of input spikes having one or more attributes related to spatial information and temporal information of the at least one of the plurality of spiking neurons; in response to the received one or more input spikes, activating, by the one or more processors, a set of output spikes by the at least one of the plurality of spiking neurons, where the plurality of the spiking neurons are operatively coupled to one another using a synapse; determining, by the one or more processors, a change in performance of the synapse when the set of output spikes are generated based on the set of input spikes by the plurality of spiking neurons, wherein the determined change in performance of the synapse is indicative of an occurrence pattern of a plasticity modulation within the plurality of spiking neurons; and updating, by the one or more processors, the neuromorphic network based on the indicated occurrence pattern of the plasticity modulation.
[0024] In an embodiment, the neuromorphic network is a non-linear spiking neural network.
[0025] In an embodiment, a mapping between a sensory neuron and a motor neuron of the plurality of spiking neurons results in producing reflex arcs in the neuromorphic network.
BRIEF DESCRIPTION OF THE DRAWINGS
[0026] The accompanying drawings are included to provide a further understanding of the present disclosure, and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
[0027] The diagrams are for illustration only, which thus is not a limitation of the present disclosure, and wherein:
[0028] FIG. 1 illustrates exemplary functional components 100 of a neuromorphic computing system 102 in accordance with an embodiment of the present disclosure.
[0029] FIG. 2A illustrates a neuronal model 200, while FIG. 2B illustrates behavior of a neuron unit 220, in accordance with an exemplary embodiment of the present disclosure.
[0030] FIGs. 3A and 3B illustrate at 300 how neurons in the proposed model behave in accordance with an exemplary embodiment of the present disclosure.
[0031] FIG. 4 illustrates at 400 how spatial and temporal processing is present in prior art representing biological life.
[0032] FIGs. 5A to 5C illustrate the "flytrap reflex" model 500 that can be implemented using the proposed system in accordance with an exemplary embodiment of the present disclosure.
[0033] FIGs. 6A to 6C illustrate at 600 how a XOR logic function can be implemented in accordance with an exemplary embodiment of the present disclosure.
[0034] FIG. 7A elaborates upon an approximate structure 700 of Hebbian and anti-Hebbian weight change of a prior art approach.
[0035] FIGs. 7B to 7D illustrate, at 720, how Hebbian and anti-Hebbian plasticity is implemented in the proposed model for sequence characterization, in accordance with an exemplary embodiment of the present disclosure.
[0036] FIG. 8 illustrates oscillatory neurons, 800, in accordance with an exemplary embodiment of the present disclosure.
[0037] FIGs. 9A to 9D illustrate various examples 900 of oscillator neurons, in accordance with an exemplary embodiment of the present disclosure.
[0038] FIGs. 10A to 10D illustrate, at 1000, an architecture for implementation of a model for understanding human speech and results thereupon, in accordance with an exemplary embodiment of the present disclosure.
[0039] FIGs. 11A to 11C illustrate, at 1100, single sensor neuron behavioral variation in accordance with an exemplary embodiment of the present disclosure.
[0040] FIG. 12A illustrates behavior of a real muscle (prior art) while FIG. 12B illustrates at 1200 behavior of the muscle/effector unit of this model, in accordance with an exemplary embodiment of the present disclosure. FIG. 12C illustrates muscle integration with multiple sensory input, and then tetanus stimulation (prior art), while FIG. 12D emulates the same using the proposed model, in accordance with an exemplary embodiment of the present disclosure.
[0041] FIG. 13A illustrates at 1300 a real muscle response to tetanic tension (75 Hz) at three levels (prior art), while FIG. 13B illustrates at 1302 the response achieved using the proposed model, in accordance with an exemplary embodiment of the present disclosure.
[0042] FIG. 14 illustrates at 1400 the values of data points as per the system.
[0043] FIG. 15 illustrates exemplary implementation, 1500 of fused trajectory optimization in accordance with an embodiment of the present disclosure.
[0044] FIG. 16 is a flow diagram, 1600 illustrating a method for optimizing a trajectory for a host vehicle for dynamic evasive maneuver in a drive passage to avoid collision of the host vehicle with a target vehicle in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION
[0045] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims.
[0046] In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without some of these specific details.
[0047] Embodiments of the present invention include various steps, which will be described below. The steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special- purpose processor programmed with the instructions to perform the steps. Alternatively, steps may be performed by a combination of hardware, software, and firmware and/or by human operators.
[0048] Various methods described herein may be practiced by combining one or more machine-readable storage media containing the code according to the present invention with appropriate standard computer hardware to execute the code contained therein. An apparatus for practicing various embodiments of the present invention may involve one or more computers (or one or more processors within a single computer) and storage systems containing or having network access to computer program(s) coded in accordance with various methods described herein, and the method steps of the invention could be accomplished by modules, routines, subroutines, or subparts of a computer program product.
[0049] Thus, for example, it will be appreciated by those of ordinary skill in the art that the diagrams, schematics, illustrations, and the like represent conceptual views or processes illustrating systems and methods embodying this invention. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing associated software. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the entity implementing this invention. Those of ordinary skill in the art further understand that the exemplary hardware, software, processes, methods, and/or operating systems described herein are for illustrative purposes and, thus, are not intended to be limited to any particular named element.
[0050] Embodiments of the present invention may be provided as a computer program product, which may include a machine-readable storage medium tangibly embodying thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process. The term “machine-readable storage medium” or “computer-readable storage medium” includes, but is not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), and magneto-optical disks, semiconductor memories, such as ROMs, PROMs, random access memories (RAMs), programmable read-only memories (PROMs), erasable PROMs (EPROMs), electrically erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, or other type of media/machine-readable medium suitable for storing electronic instructions (e.g., computer programming code, such as software or firmware). A machine-readable medium may include a non-transitory medium in which data may be stored and that does not include carrier waves and/or transitory electronic signals propagating wirelessly or over wired connections. Examples of a non-transitory medium may include, but are not limited to, a magnetic disk or tape, optical storage media such as compact disk (CD) or digital versatile disk (DVD), flash memory, memory or memory devices. A computer-program product may include code and/or machine-executable instructions that may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.
[0051] All methods described herein may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided with respect to certain embodiments herein is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention otherwise claimed. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the invention.
[0052] The present disclosure relates to the field of neuromorphic computing system engineering. More particularly, the present disclosure relates to a system and method for updating a spiking neural network.
[0053] An aspect of the present disclosure relates to a neuromorphic computing system comprising a plurality of spiking neurons, said system comprising: a processor communicatively coupled to a memory, the memory storing a set of instructions executable by the processor, wherein the processor upon execution of the set of instructions causes the system to: receive one or more data packets pertaining to a set of input spikes by at least one of the plurality of spiking neurons, the set of input spikes having one or more attributes related to spatial information and temporal information of the at least one of the plurality of spiking neurons; in response to the received one or more data packets, activate a set of output spikes by the at least one of the plurality of spiking neurons, where the plurality of the spiking neurons are operatively coupled to one another using a synapse; determine a change in performance of the synapse when the set of output spikes are generated based on the set of input spikes by the plurality of spiking neurons, wherein the determined change in performance of the synapse is indicative of an occurrence pattern of a plasticity modulation within the plurality of spiking neurons; and update the neuromorphic computing system based on the indicated occurrence pattern of the plasticity modulation.
[0054] In an embodiment, the plasticity is one of a Hebbian plasticity or an anti-Hebbian plasticity.
[0055] In an embodiment, the set of output spikes activated by the set of input spikes are categorized as being under a delayed response or a non-locked response.
[0056] In an embodiment, the plurality of spiking neurons is combined to form a hierarchy.
[0057] In an embodiment, within the hierarchy a lowest layer performs spatial integration, a middle layer performs order and sequential integration, and a highest layer performs temporal integration.
[0058] In an embodiment, a potentiation of the synapse occurs at firing of a post-synaptic neuron and depression of the synapse occurs at firing by a pre-synaptic neuron.
[0059] In an embodiment, the potentiation of the synapse occurs due to modulation of the Hebbian plasticity and the depression of the synapse occurs due to modulation of the anti-Hebbian plasticity.
[0060] Another aspect of the present disclosure relates to a method for training a neuromorphic network, said method comprising: receiving, by one or more processors, one or more data packets pertaining to a set of input spikes by at least one of the plurality of spiking neurons, the set of input spikes having one or more attributes related to spatial information and temporal information of the at least one of the plurality of spiking neurons; in response to the received one or more input spikes, activating, by the one or more processors, a set of output spikes by the at least one of the plurality of spiking neurons, where the plurality of the spiking neurons are operatively coupled to one another using a synapse; determining, by the one or more processors, a change in performance of the synapse when the set of output spikes are generated based on the set of input spikes by the plurality of spiking neurons, wherein the determined change in performance of the synapse is indicative of an occurrence pattern of a plasticity modulation within the plurality of spiking neurons; and updating, by the one or more processors, the neuromorphic network based on the indicated occurrence pattern of the plasticity modulation.
[0061] In an embodiment, the neuromorphic network is a non-linear spiking neural network.
[0062] In an embodiment, a mapping between a sensory neuron and a motor neuron of the plurality of spiking neurons results in producing reflex arcs in the neuromorphic network.
[0063] In an embodiment, the system can enable an efficient way of using third-generation neural networks while gaining significant performance improvement. After a neuron fires, the system does not reset its potential manually. Instead, the system enables the internal dynamics of the neurons to introduce two periods as in real neurons: an absolute refractory period where the neuron cannot fire at all, and a relative refractory period where the neuron resists firing but can fire given enough activation. Further, the system enables generated reflex arcs to combine both the spatial processing aspect and the temporal processing aspect to generate input phase-locked responses. Furthermore, the system enables the neurons to behave more bio-realistically, such that higher stimulation leads to distinct spikes instead of a continuous value. The system can introduce a maximum firing frequency in neurons that is independent of the applied stimulation. Also, the system enables integration of sequence-specific information along with spatial and temporal information by utilizing modulated plasticity.
[0064] FIG. 1 illustrates exemplary functional components 100 of a neuromorphic computing system 102 in accordance with an embodiment of the present disclosure.
[0065] In an aspect, the neuromorphic computing system 102 may comprise one or more processor(s) 104. The one or more processor(s) 104 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions. Among other capabilities, the one or more processor(s) 104 are configured to fetch and execute computer-readable instructions stored in a memory 106 of the system 102. The memory 106 may store one or more computer-readable instructions or routines, which may be fetched and executed to create or share the data units over a network service. The memory 106 may comprise any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
[0066] The system 102 may also comprise an interface(s) 108. The interface(s) 108 may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) 108 may facilitate communication of the system 102 with various devices coupled to the system 102 such as an input unit and an output unit. The interface(s) 108 may also provide a communication pathway for one or more components of the system 102. Examples of such components include, but are not limited to, processing engine(s) 110 and database 122.
[0067] The processing engine(s) 110 may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) 110. In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) 110 may be processor executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) 110 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) 110. In such examples, the system 102 may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system 102 and the processing resource. In other examples, the processing engine(s) 110 may be implemented by electronic circuitry.
[0068] The database 122 may comprise data that is either stored or generated as a result of functionalities implemented by any of the components of the processing engine(s) 110.
[0069] In an exemplary embodiment, the processing engine(s) 110 may comprise an input spikes receiving unit 112, an output spikes activating unit 114, a plasticity modulation determination unit 116, a neuron updating unit 118, and other units(s) 120.
[0070] It would be appreciated that the units described are only exemplary units and any other unit or sub-unit may be included as part of the system 102. These units too may be merged or divided into super-units or sub-units as may be configured.
[0071] In an embodiment, the term spiking neuron (referred to interchangeably as neuron) can refer to a kind of neuron that produces a neuronal output in the form of a so-called spike train, preferably a temporal pattern of binary spikes. Multiple coding methods exist for interpreting the spike train as a real-valued number, relying either on the frequency of spikes or on the timing between spikes to encode information.
[0072] In an embodiment, a plurality of spiking neurons can be operatively coupled to one another using one or more synapses. The synapses of the spiking neurons can perform a pre-processing of synaptic input signals defined by so-called synaptic weights of the neuron.
[0073] In an embodiment, the system may provide simple and efficient architectures and methods for feature extraction directly in a spiking neural network. In contrast to prior art, the present invention uses a simpler and more scalable solution for feature learning by the spiking neural network.
[0074] In an embodiment, the spiking neural network comprises neurons for outputting spikes in response to a synaptic current. The spiking neural network can be formed of, or comprise, a plurality of neurons. The plurality of neurons can be arranged in an array, in a linear structure, in a tree (branching) structure, or in any other configuration. Further, all, or a subset, of the plurality of neurons of the spiking neural network are spiking neurons. Each spiking neuron sums its inputs and outputs a spike (or spike signal) when the summed inputs reach or exceed a threshold value associated with the spiking neuron.
[0075] In an embodiment, for the spiking neurons the threshold value may be a threshold voltage, a threshold current, a threshold charge, etc. The inputs into the spiking neuron may correspond to the threshold type. As an example, when the spiking neurons have a threshold current, the inputs into the spiking neurons are currents, and the output can be a sharp change in the current, as may be represented by a delta function, for example. As can be appreciated, the inputs into the spiking neuron do not need to match the output spikes; the inputs can be currents while the spiking neuron outputs a voltage spike.
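As a rough, non-limiting illustration of the threshold behaviour just described, the following Python sketch models a unit that sums its inputs and emits a spike once a threshold is reached. The class name, the leak factor and the threshold value are illustrative assumptions rather than values taken from the disclosure, and the hard reset shown here is a simplification that the refractory dynamics described later in this disclosure deliberately avoid.

```python
# Minimal sketch of a threshold spiking unit; all names and constants
# are illustrative assumptions, not values from the disclosure.

class SpikingUnit:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold  # may stand for a voltage, current or charge threshold
        self.leak = leak            # passive decay applied every time step
        self.potential = 0.0

    def step(self, inputs):
        """Sum the weighted inputs; return True when the unit fires."""
        self.potential = self.potential * self.leak + sum(inputs)
        if self.potential >= self.threshold:
            self.potential = 0.0    # simplistic hard reset, used here for brevity only
            return True
        return False

unit = SpikingUnit()
for t in range(8):
    print(t, unit.step([0.3]))      # a constant sub-threshold drive accumulates until firing
```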
Input spikes receiving unit 112
[0076] In an embodiment, the input spikes receiving unit 112 can receive multiple data packets pertaining to a set of input spikes by a plurality of spiking neurons present in the neuromorphic computing system. The neuromorphic computing system can be a spiking neural network. The set of input spikes can have attributes related to spatial information and temporal information of the plurality of spiking neurons.
[0077] In an embodiment, in the spiking neural network, the input spikes receiving unit 112 can characterize patterns of input neurons in multiple domains, such as the spatial, temporal and sequential domains.
[0078] In an embodiment, in a neuron unit, an input dendrite can make a synapse with input neurons and an axon can make synapses with output neurons. The synapses can have weights that define their connection to other neurons. A signal can arrive at the input dendrite as an input spike that is delivered to the soma. If the soma fires, the axon is activated and sends signals, as output spikes, to the dendrites it is connected to.
[0079] In an embodiment, when the soma does not fire, the neuron does not fire. The axon is then not engaged, and hence no output spikes are generated and transmitted to other neurons.
[0080] In an embodiment, the soma can be an asynchronous process and the neuron can be powered by the soma. When the soma stops working, the neuron attached therewith also stops working.
[0081] In an embodiment, the neuron can fire a spike in the spiking neural network. The neuron has a threshold value that determines whether or not the neuron should output an output spike. (The term ‘output a spike’ is also referred to herein as ‘firing’ and ‘firing a spike’.)
[0082] In an embodiment, the spiking neural network can comprise a synapse coupled to a neuron. The neuron can be a spiking neuron. The synapse can receive one or more inputs from, for example, other neurons present in the spiking neural network. The synapse outputs a synaptic signal (e.g. a synaptic current, a synaptic voltage, etc.), which may be received by the neuron as an input signal. The neuron has a threshold value such that when the inputs into the neuron reach or exceed the threshold value, the neuron outputs a spike.
[0083] In an exemplary embodiment, a set of neurons can facilitate the continuous beating of the human heart. The neurons that facilitate the continuous beating of the heart also provide additional information to other sets of neurons for performing various activities in the human body, one such activity being the automatic blinking of the eyes. As can be appreciated by a person skilled in the art, the plurality of neurons can provide one or more input spikes in the human body that lead to the continuous beating of the heart. Based on the provided input spikes, one or more additional neurons can be activated that generate a set of output spikes. The set of output spikes can be correlated to functions such as the automatic blinking of the eyes and continuous breathing in step with the ongoing heartbeat.
Output spikes activating unit 114
[0084] In an embodiment, the output spikes activating unit 114 can receive a response from the input spikes. On receiving a response, a set of output spikes is activated.
[0085] In an embodiment, the set of output spikes activated by the set of input spikes are categorized as being under a delayed response or a non-locked response.
[0086] In an embodiment, output neurons can have positive connections and longer potential durations and do not have an axon to connect to other neurons. The output neurons can generate output spikes based on the input spikes.
[0087] In an embodiment, the plurality of neurons can be arranged in an array, in a linear structure, in a tree (branching) structure, or in any other configuration. Further, when the plurality of neurons is combined in a tree structure, a lowest layer performs spatial integration, a middle layer performs order and sequential integration, and a highest layer performs temporal integration.
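As a loose illustration of this layered division of labor, the following sketch separates the three kinds of integration into toy functions. The function names, the signal representations and the window length are assumptions made for illustration; the disclosure does not prescribe them.

```python
# Toy sketch of the three integration levels in the hierarchy
# (names, representations and constants are assumed).

def spatial_integration(spikes_now):
    """Lowest layer: count how many input lines spike in the same time step."""
    return sum(spikes_now)

def order_integration(spike_times, first, second):
    """Middle layer: report whether `first` spiked before `second`."""
    return spike_times[first] < spike_times[second]

def temporal_integration(activity, window=5):
    """Highest layer: accumulate activity over the last `window` time steps."""
    return sum(activity[-window:])

print(spatial_integration([1, 0, 1, 1]))                    # 3 coincident spikes
print(order_integration({"A": 12.0, "B": 22.0}, "A", "B"))  # True: A fired before B
print(temporal_integration([1, 0, 2, 1, 0, 3]))             # 6 over the last 5 steps
```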
[0088] In an embodiment, the system enables the plurality of neurons to behave more bio-realistically, where even the highest stimulation leads to distinct spikes instead of a continuous value. This introduces a maximum firing frequency in the neurons that is independent of the applied stimulation.
[0089] In an exemplary embodiment, one or more touch receptors in a flytrap reflex model can stimulate a sensory unit (e.g. via input spikes), which can in turn stimulate a motor unit. When the motor unit fires, it can stimulate an effector unit (e.g. via output spikes), i.e. a trap. If the motor unit does not fire, no action will be performed by the effector unit. The weight here is the synapse between the sensory unit and the motor unit, which determines whether or not to activate the effector unit. A weight between the motor unit and the effector unit controls how much the effector unit is activated. The value of the effector unit (e.g. the output spikes) can represent trap closure.
Plasticity modulation determination unit 116
[0090] In an embodiment, a change in performance of the synapse is noticed when the set of output spikes are generated based on the set of input spikes. The determined change in performance of the synapse can indicate an occurrence pattern of a plasticity modulation within the plurality of spiking neurons.
[0091] In an embodiment, the plasticity can be one of a Hebbian plasticity or an anti-Hebbian plasticity. Further, modulation of the Hebbian plasticity or the anti-Hebbian plasticity within the spiking neural network can be determined by examining reward-specific and/or punishment-specific modulatory neurons.
[0092] In an embodiment, the type and magnitude of Hebbian and anti-Hebbian plasticity can be determined by the modulatory neurons. In an exemplary embodiment, a new-born baby bird knows how to eat and breathe, as this is shaped by evolution; but the bird has to observe and learn to fly, sing and hunt. This kind of observational or associative learning can be performed using the system by Hebbian and anti-Hebbian plasticity based sequence learning.
[0093] Further, in an exemplary embodiment, when a neuron A fires, it can cause a potential change in the activation of a neuron B. Upon Hebbian plasticity learning being enabled by modulation, the system defines that a synapse is shared between the pre-synaptic neuron and the post-synaptic neuron. The two neurons independently adjust the weight of the shared synapse, and hence the system is updated whenever either neuron fires. Further, when the presynaptic neuron fires and Hebbian plasticity is enabled, it depresses the weight, while when the postsynaptic neuron fires, it potentiates the weight. The degree of the weight change is defined by the timing difference, intrinsic property values and the modulatory signals from modulatory neurons. Additionally, depending on the order of the neuronal firing and the type and magnitude of plasticity enabled, the weights of the synapse are increased or decreased accordingly, in a shared fashion, by both the pre-synaptic neuron A and the post-synaptic neuron B.
[0094] In an embodiment, the above-discussed mechanism is exactly reversed when the modulation enables anti-Hebbian plasticity.
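A minimal sketch of this modulation-gated learning follows, using a conventional exponential spike-timing kernel as a stand-in for the weight-change curves of FIG. 7A. The learning rate, time constant and the single reward/punishment flags are illustrative assumptions; only the gating idea (reward-specific modulation enables the Hebbian rule, punishment-specific modulation reverses it) is taken from the text.

```python
import math

# Modulation-gated plasticity sketch; the kernel shape, learning rate and
# time constant are assumptions, not values from the disclosure.

def modulated_update(weight, dt, reward, punishment, lr=0.05, tau=20.0):
    """dt = t_post - t_pre in ms; positive dt means the pre-synaptic neuron fired first."""
    kernel = math.exp(-abs(dt) / tau)              # closer spike pairs change the weight more
    hebbian_delta = lr * (kernel if dt > 0 else -kernel)
    if reward:                                     # reward-specific modulation: Hebbian rule
        return weight + hebbian_delta
    if punishment:                                 # punishment-specific modulation: reversed rule
        return weight - hebbian_delta
    return weight                                  # no modulatory signal: no learning

w = 0.5
w = modulated_update(w, dt=+5.0, reward=True, punishment=False)   # causal pair is strengthened
w = modulated_update(w, dt=+5.0, reward=False, punishment=True)   # same pair weakened under punishment
print(round(w, 3))
```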
Neuron updating unit 118
[0095] In an embodiment, the process of generation of output spikes based on a set of input spikes is saved. The sequence of input neurons whose input spikes lead to the generation of output spikes needs to be considered and noted. The system can be updated based on this sequence, leading to continuous learning and updating of the system.
[0096] In an exemplary embodiment, the neuromorphic computing system representing the spiking neural network may employ spike-timing-dependent plasticity (STDP) learning. Here, a network of neural network elements can communicate via spike messages sent from one element to another. Each of the elements can implement some number of neurons, which operate as primitive nonlinear temporal computing elements. Upon a neuron's activation exceeding some threshold level, the neuron can generate a spike message (e.g. an input spike) that is propagated to a set of additional neurons contained in destination cores to generate a resultant spike message (e.g. an output spike).
[0097] In an embodiment, when the soma spikes, in addition to that spike propagating downstream to the other neurons, the spike also propagates backwards down through the dendritic tree, which can be beneficial for learning. The synaptic plasticity at the synapse is a function of when the postsynaptic neuron fires and when the presynaptic neuron fires. It can be appreciated by one skilled in the art that in a hierarchical architecture, once the soma fires, there are other elements that know the neuron has fired, in order to support learning. The learning may be based on an implementation of STDP. The learning can then be communicated to the synapses accordingly.
[0098] In an aspect, other units 120 implement functionalities that supplement applications or functions performed by the system 102, one or more processors 104 or the processing engine(s) 110.
[0099] FIG. 2A illustrates a neuronal model 200, while FIG. 2B illustrates behavior of a neuron unit 220, in accordance with an exemplary embodiment of the present disclosure.
[00100] In an embodiment, the proposed invention (interchangeably termed the model or system herein) uses intelligent units based on an adaptive non-linear spiking neuron. The spiking neuron used is unlike the Leaky Integrate-and-Fire model because, in this model, no differential equations are used. Instead, the design is more like the Spike Response Model, in which the activation and spikes occur in real time. This model is also unlike the Hodgkin-Huxley model which, although it accurately models ion channels in real neurons, is too complex for creating multiple units and functional neural network simulations.
[00101] In another embodiment, the proposed neural model has non-linear activations so as to enrich the computation capabilities. The neuron is also adaptive, meaning it can adjust its firing rate on the fly, gaining a capability akin to short term memory.
[00102] In another exemplary embodiment, the proposed system can be represented as in FIGs. 2A and 2B. As shown in FIG. 2A, each neuron unit 202 consists of input dendrites 204, a soma 206 and an output axon 208. The input dendrites 204 make synapses with input neurons and the axon 208 makes synapses with output neurons. The synapses can have “weights” that define their connection to other neurons. These synapses can be excitatory or inhibitory depending upon the weights.
[00103] FIG. 2B illustrates stimulation of a neuron and its behavior thereupon. If a single neuron is stimulated with insufficient strength, as shown at 222, its potential first rises and then falls back. When enough stimulation is given to the neuron, as shown at 224, its potential increases until the neuron fires robustly to the stimulus. Further, every neuron firing provides a signal to the neuron's axon, which connects to the dendrites of other neurons. The processing of the internal state takes place in the soma. A signal arrives at the dendrites and is delivered to the soma. If the soma fires, the axon is activated, which sends signals to the dendrites it is further connected to. If the soma does not fire, the neuron does not fire and the axon cannot be engaged, so no signal is sent to other neurons.
[00104] Further, in an embodiment, the synapse is the joining point of the axon of one neuron and dendrite of another. It can be represented by a weight, which defines the type of connection. The adaptive and non-linear nature of the neuron can be contained in the soma. The soma is an asynchronous process and can power the neuron. If the soma stops, the neuron stops functioning.
[00105] In an embodiment, the neurons can be of various types and can be based on the given above neuronal unit. The sensory neurons can have receptors instead of dendrites to detect stimulus in real-time, whereas the output/muscle neurons can only have positive connections and longer potential durations, and do not have an axon to connect to other neurons.
[00106] In conventional spiking neuron models, after the neuron has fired, the potential is reset in a hardcoded way and then computation resumes. This may not be a good implementation, since real neurons have two types of refractory period, an absolute refractory period and a relative refractory period, which also serve a function. This function is lost by such hardcoding.
[00107] In an embodiment of the proposed model, after firing, the potential is not reset manually. Instead, the internal dynamics of the neurons introduce the same two periods as in real neurons: an absolute refractory period where the neuron cannot fire at all, and a relative refractory period where the neuron resists firing but can fire given enough activation. One immediate advantage of this scheme is that the neurons elaborated herein behave more bio-realistically, in that even the highest stimulation will still lead to distinct spikes instead of a continuous value. It introduces a maximum firing frequency in neurons that is independent of the applied stimulation.
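To make the two periods concrete, here is a minimal Python sketch, under assumed constants, in which firing opens a short absolute refractory window and temporarily raises the threshold, which then decays back toward its resting value (the relative period). Every numeric value and name below is an illustrative assumption, not a parameter taken from the disclosure.

```python
# Sketch of absolute and relative refractory dynamics (all constants assumed).

class RefractoryNeuron:
    def __init__(self, threshold=1.0, leak=0.9, abs_steps=3, rel_boost=2.0, rel_decay=0.7):
        self.base_threshold = threshold
        self.threshold = threshold    # raised after a spike, decays back (relative period)
        self.leak = leak
        self.abs_steps = abs_steps    # steps during which the unit cannot fire at all
        self.rel_boost = rel_boost    # extra threshold added right after a spike
        self.rel_decay = rel_decay    # how quickly the raised threshold relaxes
        self.potential = 0.0
        self.refractory_left = 0

    def step(self, stimulus):
        if self.refractory_left > 0:  # absolute refractory period: no firing possible
            self.refractory_left -= 1
            return False
        # relative refractory period: the raised threshold decays toward baseline
        self.threshold = self.base_threshold + (self.threshold - self.base_threshold) * self.rel_decay
        self.potential = self.potential * self.leak + stimulus
        if self.potential >= self.threshold:
            self.potential = 0.0
            self.refractory_left = self.abs_steps
            self.threshold = self.base_threshold + self.rel_boost
            return True
        return False

n = RefractoryNeuron()
spikes = [n.step(100.0) for _ in range(40)]   # extremely strong drive
print(sum(spikes), "spikes in 40 steps")      # the rate saturates at a fixed ceiling
```

Because the spike spacing is set by the refractory dynamics rather than by the input, driving the unit harder cannot raise its rate past this ceiling, which is the behaviour FIG. 3B is described as showing.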
[00108] FIGs. 3A and 3B illustrate at 300 how neurons in the proposed model behave in accordance with an exemplary embodiment of the present disclosure.
[00109] In an embodiment, FIG. 3A illustrates two neurons with different inherent sensitivities (obtained by Neuroevolution). Both of these neurons can be given a low intensity stimulus. As can be seen, neuron 1 with higher sensitivity responds rapidly whereas neuron 2, being less sensitive, responds sparsely to the same stimulus.
[00110] In a further embodiment, FIG. 3B illustrates the situation when both neurons are stimulated with the same very high strength stimulus. As can be seen, the less sensitive neuron 2 starts firing robustly along with the sensitive neuron 1, and the maximum firing frequency for both is the same, despite their different sensitivities. This is due to the absolute and relative refractory periods emergent in the model. This behavior is only possible due to the internal dynamics of the neuronal units themselves.
[00111] FIG. 4 illustrates at 400 how spatial and temporal processing is present in prior art representing biological life.
[00112] In an embodiment, FIG. 4 examines the most primitive natural intelligence using neurons. Such intelligence can be present in plants as well since, although plants do not have neurons like animals do, there are certainly cells that act like neurons do. Their tasks include information processing and signaling. An example of the venus flytrap is elaborated herein to appreciate how biological systems use techniques similar to those that can be implemented using computational techniques. The plant can live in nutrient-poor habitats but has been able to overcome the limitations of its surroundings by evolving a carnivorous lifestyle, particularly by modifying its leaves into active traps to catch animals. When flies, ants, or other small animals touch mechano-sensitive hairs protruding from the inner surface of the bi-lobed trap, it shuts within a fraction of a second. An insect bending the trigger hairs electrically excites the trap and fosters the production of a touch hormone based upon oxo-phytodienoic acid (OPDA). Thus, visitors seem to convert formerly touch-insensitive trap sectors into mechano-sensitive ones. As insects struggle to escape, the resultant consecutive mechano-electrical stimulation increases the level of touch hormone and, as a result of hormone action, glands covering the inner epidermis of the closed trap release an acidic hydrolase cocktail. Exposed to this lytic medium inside the hermetically sealed trap, the prey is digested and nitrogen-rich compounds are absorbed by the same, initially secretory, glands. Compared to the human body, therefore, the flytrap serves as mouth, stomach, and intestine all together. Ants touching a single trigger hair twice, or more than one hair consecutively, induce fast closure of the trap.
[00113] However, there could be false alarms not requiring a closure. For instance, a trap should not snap shut when a drop of rain touches it. The process of shutting the trap and producing digestive enzymes consumes energy. Further, traps do not last forever; they fall off after several closures or partial closures and have to be regrown. With too many false alarms the plant may not have enough energy to survive, let alone grow or produce seeds. Hence it can be appreciated that the trap should close only when prey is inside.
[00114] Further, in an embodiment, scientists have figured out that it takes at least two taps, in rapid succession, on tiny sensor hairs within the trap to initiate closing. Some scientists have simulated the landing of an insect on the trap with an electrode and then monitored how the plants responded. After a first touch, the trap stayed open. At the second, it snapped shut. The next few touches simulated the prey struggling to escape the green “stomach” in which it had become entrapped. After a total of five taps, the glands on the inside of the trap received the message that they needed to start making enzymes to digest the meal. This suggests that, in a rudimentary sense, the plant can “count”.
[00115] In an embodiment, processing and signaling akin to neural processes is utilized here. The spatial summation system integrates touch events over various trigger hairs, while the temporal summation system integrates touch events over a period of time. The decision of trap closure relies on the output of this integration. This kind of processing is very akin to neurons. Comparing this with a primitive neural circuit, the plant's cells involved can be considered a very different kind of primitive neuronal unit, while the decision system regarding the trap closure can be analogized by interactions between such neuronal units. The trap and its closure itself can be analogized by a muscle's contraction due to neuronal stimulation.
[00116] In an embodiment, it can be appreciated that the connections and “weights” of the connections between these “neurons” are defined by evolution by natural selection. Over time, the weights fluctuated and settled on what maximized the effect of closure and trapping of insects without responding to false alarms. The fitness function here is nothing but the trade-off between the sensitivity of the trap and the energy wasted on false alarms like raindrops or wind.
[00117] Further, in an embodiment, a corresponding neural model can be created for the action of the venus flytrap plant. The model can consist of a sensory neuron responding to every touch, while a motor neuron integrates touches over space (multiple receptors) and time (multiple touches) to generate output activation and trap closure. The entire model can be crafted by neuroevolution by artificial selection, wherein the weights are evolved according to a fitness function which assigns a positive fitness value to output activation for touches by insects and a negative fitness value to false alarms.
[00118] FIGs. 5A to 5C illustrate the "flytrap reflex" model 500 that can be implemented using the proposed system in accordance with an exemplary embodiment of the present disclosure.
[00119] In an embodiment, as shown in FIG. 5A, the touch receptors 502 can stimulate the sensory unit N1 (504), which can stimulate the motor unit N2 (508) using a plastic synapse (506). If the motor unit N2 (508) fires, it can stimulate the effector unit N3 (510), which acts as the trap. If the motor unit N2 (508) does not fire, no action will be performed by the effector unit N3 (510). The most crucial weight here is the synapse 506 between the sensory unit (504) and the motor unit (508), which determines whether or not to activate the effector unit N3 (510), while the weight between the motor unit N2 (508) and the effector unit N3 (510) controls how much the effector is activated.
[00120] In an embodiment, the results of this model in simulation are given in FIGs. 5B and 5C. The value of the effector unit (526) can represent trap closure. If the value of the effector unit is near 1, the trap will close; if it is near 0, the trap remains open. This is like the contraction of a muscle in animals. Further, in FIG. 5B, the sensory unit is triggered 350 ms apart to simulate two touches made after a very long time, such as wind gusts (a false alarm). The motor unit does not fire, and the effector unit is not activated.
[00121] Further, in an embodiment, in FIG. 5C the touch stimulus is presented with a gap of only 30 ms, to simulate an insect crawling (a true stimulus). As can be observed, the model can “count” and “remember” how far apart the touches were made, and the motor unit fired, which caused an action by the effector unit. How fast the ‘trap’ closed can also be observed. The flytrap reflex model can be extended to trigger digestion upon “counting” five such input touch stimuli. The flytrap reflex model as elaborated above forms the basis of every reflex action, even in animals.
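The temporal summation at work in FIGs. 5B and 5C can be re-created in a few lines: each touch adds to a sensory trace that decays exponentially, so only touches arriving close together push the motor unit past threshold. The decay constant and threshold below are assumptions chosen so that the two cases above (touches 350 ms apart versus 30 ms apart) come out as in the figures; they are not values from the disclosure.

```python
import math

# Hedged sketch of the flytrap reflex's temporal summation (constants assumed).

def motor_fires(touch_times_ms, threshold=1.5, tau=60.0):
    trace = 0.0
    last = None
    for t in touch_times_ms:
        if last is not None:
            trace *= math.exp(-(t - last) / tau)  # the earlier touch fades with time
        trace += 1.0                              # each touch contributes one unit
        if trace >= threshold:
            return True                           # motor unit fires -> trap closes
        last = t
    return False

print(motor_fires([0, 350]))   # far apart: False (false alarm is ignored)
print(motor_fires([0, 30]))    # 30 ms apart: True (trap snaps shut)
```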
[00122] In an exemplary embodiment, a simple monosynaptic reflex arc is the patellar reflex, also known as the knee-jerk response, which is used by doctors to test the reflex arc in patients. This reflex involves interneurons that inhibit firing of the opposing muscular motor neurons. Such reflex arcs do not need a voluntary control signal from the brain. Some of these reflexes may be present for a short time in the body even after the death of a person. These reflex arcs combine both the spatial processing aspect and the temporal processing aspect to generate input phase-locked responses. While some reflex actions are complex, like running away, a few of them are very basic, like muscle stretching, and share characteristics with the flytrap reflex model.
[00123] In an embodiment, it is apparent that error gradient descent is useless for obtaining such functional models using such spiking neural networks. There are basically two complementary ways of obtaining such neural networks: 1) neuroevolution and 2) Hebbian plasticity. Neuroevolution is discussed below, while Hebbian plasticity is elaborated further hereafter. A basic computational test for multilayered neural networks in general is the XOR logic gate. So, using reflex arcs with an interneuron as a primitive model of neural network-based intelligence, a small network that can successfully learn the XOR function can be implemented. The network can be trained using the aforementioned neuroevolutionary process, using the required outputs as the fitness function.
[00124] FIGs. 6A to 6C illustrate at 600 how a XOR logic function can be implemented in accordance with an exemplary embodiment of the present disclosure.
[00125] As known, an XOR function gives one output (say ‘0’) when both of its inputs have the same value; otherwise, it gives another output (say ‘1’). This is illustrated in FIG. 6A.
[00126] In an embodiment, FIG. 6B illustrates an architecture to achieve the XOR function. In an exemplary embodiment, inputs 1 and 2 are the input nodes S1 and S2, along with the bias unit (to maintain a default potential) into the network. The processing units are A and B; both are modelled as interneurons. Neuron A is the logic output. When neither input is high, both S1 and S2 do not fire, so the bias unit activation is not enough to stimulate either Neuron A or Neuron B. [(0,0) => 0]
[00127] When either input node S1 or S2 is active and firing, it engages Neuron A by an excitatory connection, which causes Neuron A to start firing. [(0,1) => 1] and [(1,0) => 1].
[00128] When both S1 and S2 are active, their combined activation is enough to activate Neuron B. Neuron B has an inhibitory connection to Neuron A; thus, it inhibits Neuron A and Neuron A remains silent. [(1,1) => 0]
[00129] In an embodiment, FIG. 6C illustrates test results pertaining to the activations of neuron A and neuron B as elaborated above. Initially only one input is high, so neuron A is active and neuron B is not engaged. But when both inputs are turned high, neuron B is engaged; neuron B inhibits neuron A, and so neuron A goes inactive. These connections may be achieved through an appropriate neuroevolutionary process. Furthermore, FIG. 6C illustrates that the neuroevolutionary process has evolved weights such that neuron A (602) fires only when either input is high, while neuron B (604) fires only when both inputs are high. Neuron B inhibits neuron A, thus achieving the output logic. In this case, neuron A is the output neuron, which is active only when exactly one input is high, but not when both are high. This implements the XOR logic gate with just two active neurons, instead of the multilayered architecture needed for second-generation neural networks, with or without backpropagation. The neuroevolution process can also provide a successful mechanism for training real-time multilayered adaptive non-linear spiking neural networks.
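For intuition, the two-neuron circuit of FIG. 6B can be approximated with a single threshold test per unit, collapsing the spiking dynamics. The weights and bias below are illustrative stand-ins for values a neuroevolutionary process might find; they are not taken from the disclosure.

```python
# Rate-based approximation of the FIG. 6B circuit (weights are assumed stand-ins).

def xor_circuit(s1, s2, bias=0.2):
    drive_b = 0.6 * s1 + 0.6 * s2 + bias           # B needs both inputs to reach threshold
    b_fires = drive_b >= 1.0
    inhibition = 2.0 if b_fires else 0.0           # B's inhibitory connection silences A
    drive_a = 0.9 * s1 + 0.9 * s2 + bias - inhibition
    return int(drive_a >= 1.0)                     # A is the logic output

for s1 in (0, 1):
    for s2 in (0, 1):
        print(s1, s2, "->", xor_circuit(s1, s2))   # prints the XOR truth table
```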
[00130] A person skilled in the art will appreciate that while spatial and temporal patterns have been elaborated upon above, true intelligence requires detection of sequential patterns, that is, order-sensitive information processing.
[00131] In an exemplary embodiment, a sequence is when neurons fire one after the other in a certain fashion. For instance, a neuron A may fire 10 ms before a neuron B. In another instance, B may fire 10 ms before A. Since both A and B are firing, there is no spatial difference (both are engaged in the sequence). Similarly, since in both cases the time difference is 10 ms, there is no temporal difference. Thus, the only differentiating factor in these two cases is the sequence, or the order in which they fired.
[00132] In an exemplary embodiment, sequential/order learning is hard to achieve at the evolutionary level. This is because sequential patterns are transient, fast and short-lived in nature, which makes them hard to learn through evolution by natural selection. The way to integrate sequence-specific information along with the spatial and temporal information is by utilizing modulated plasticity, which is also done in nature. For example, a new-born baby bird knows how to eat and breathe. This is shaped by evolution. But the bird then has to observe and learn to fly, sing and hunt. This kind of observational, or associative, learning is beyond the scope of neuroevolution, except for a few primitive sequence patterns like breathing. For this kind of sequence learning, the model proposes Hebbian and anti-Hebbian plasticity.
[00133] FIG. 7A illustrates, at 700, the approximate structure of Hebbian and anti-Hebbian weight change in a prior-art approach.
[00134] In an embodiment, the type and magnitude of Hebbian/anti-Hebbian plasticity can be determined by the modulatory neurons. The approximate structure of Hebbian and anti-Hebbian weight change is given in FIG. 7A. These values are decided on the fly based on the output of the modulatory neurons, as elaborated further below.
[00135] Hebbian learning is one of the oldest learning algorithms and is based in large part on the dynamics of biological systems. A synapse between two neurons is strengthened when the neurons on either side of the synapse (input and output) have highly correlated outputs. In essence, when an input neuron fires, and such firing frequently leads to the firing of the output neuron, the synapse is strengthened. Following the analogy to an artificial system, the tap weight is increased with high correlation between two sequential neurons. As the saying goes, 'neurons that fire together, wire together'. Currently, Hebbian learning is not used for sequence processing.
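A minimal sketch of the classic rule just described follows; the learning rate and the synthetic activity traces are assumptions for demonstration, not parameters taken from the disclosure.

```python
import numpy as np

# Classic Hebbian update: the weight grows with the correlation of
# pre- and post-synaptic activity ("fire together, wire together").

rng = np.random.default_rng(0)
eta = 0.01                      # learning rate (assumed)
w = 0.1                         # initial synaptic weight (assumed)

pre = rng.integers(0, 2, 100)   # binary firing trace of the input neuron
post = pre.copy()               # perfectly correlated output firing

for x, y in zip(pre, post):
    w += eta * x * y            # strengthen only on coincident firing

print(f"weight after correlated activity: {w:.2f}")
```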
[00136] FIGs. 7B to 7D illustrate, at 720, how Hebbian and anti-Hebbian plasticity are implemented in the proposed model for sequence characterization, in accordance with an exemplary embodiment of the present disclosure.
[00137] In an embodiment, FIGs. 7B to 7D illustrate two neurons A and B connected in a feed-forward fashion. This makes neuron A the presynaptic neuron, because it is before the synapse. Neuron B is the postsynaptic neuron, because it is after the synapse, that is, the synapse feeds input into B. When neuron A fires, it causes a potential change in the activation of neuron B. Hebbian learning can be enabled here by modulation. The model defines that the synapse is shared between the pre-synaptic and post-synaptic neuron. These two neurons independently adjust the weight of the shared synapse, and each updates it only when that neuron is firing. For clarity, the way the synapse strength adjusts is as shown in FIG. 7C.
[00138] As can be seen, when presynaptic neuron A fires and Hebbian plasticity is enabled, it depresses the weight, while when postsynaptic neuron B fires, it potentiates the weight (as shown in FIG. 7D). The degree of such change in the weight is defined by the timing difference, the intrinsic property values and the modulatory signals from the modulatory neurons.
[00139] In this manner, depending on the order of the neuronal firing and the type and magnitude of plasticity enabled, the weights of the synapse are increased/decreased in a shared fashion by both the presynaptic neuron A and the postsynaptic neuron B. The mechanism is exactly reversed when modulation enables anti-Hebbian plasticity.
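A minimal sketch of this shared update is given below, assuming illustrative step sizes; the `mode` argument stands in for the modulatory signal elaborated further on, and none of the magnitudes are values disclosed herein.

```python
# Shared-synapse rule: when Hebbian plasticity is enabled, the pre-synaptic
# side depresses and the post-synaptic side potentiates the shared weight;
# the signs flip for anti-Hebbian plasticity, and nothing changes when the
# modulatory neurons are silent.

def update_shared_synapse(w, pre_fired, post_fired, mode, step=0.05):
    """mode: 'hebbian', 'anti-hebbian', or None (plasticity disabled)."""
    if mode is None:
        return w                 # modulatory neurons silent: no change
    sign = 1.0 if mode == "hebbian" else -1.0
    if pre_fired:
        w -= sign * step         # pre-synaptic firing depresses (Hebbian)
    if post_fired:
        w += sign * step         # post-synaptic firing potentiates (Hebbian)
    return w

w = 0.5
w = update_shared_synapse(w, pre_fired=True,  post_fired=False, mode="hebbian")
w = update_shared_synapse(w, pre_fired=False, post_fired=True,  mode="hebbian")
print(w)  # the firing order over time, not a single step, decides the trend
```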
[00140] FIG. 8 illustrates oscillatory neurons, 800, in accordance with an exemplary embodiment of the present disclosure.
[00141] In an embodiment, all the responses seen so far are stimulus-dependent, that is, a response occurs only when a certain stimulus is encountered. However, intelligent beings can generate rhythmic activity intrinsically, even in the absence of rhythmic input. These neural circuits are commonly referred to as oscillators or central pattern generators.
[00142] Further, some common functions of oscillators include breathing, heartbeat, etc. Other, more complex rhythmic activity, such as walking, is generated internally but heavily modulated by input stimuli. While walking is a self-generated rhythmic activity, several adjustments are made in the gait based on input from the sensory neurons. Thus, such activities are independent of sensory stimuli but heavily modulated by them.
[00143] FIGs. 9A to 9D illustrate various examples, 900, of oscillator neurons, in accordance with an exemplary embodiment of the present disclosure.
[00144] In an embodiment, the mechanism by which simple oscillators work is primarily reciprocal inhibition. The inhibition is modulated by other neurons to create varying rhythmic patterns. FIG. 9A illustrates the output of a muscle/effector connected to such an oscillatory sub-network. The entire network remains the same; only the modulatory signals change, which alters the dynamics so that the oscillations differ. Case 1, as shown in FIG. 9A, has all positive synapses, and there is no oscillation since there is no inhibition.
[00145] FIG. 9B illustrates Case 2: positive and negative synapses, non-modulated. The synapses are not modulated, so there are just random fluctuations due to the adaptive nature of the neurons. There are no oscillations.
[00146] FIG. 9C illustrates Case 3: positive and negative synapses, modulated, slow. Here the synapses are modulated to give slow oscillations.
[00147] FIG. 9D illustrates Case 4: positive and negative synapses, modulated, fast. Here the synapses are modulated to give fast oscillations.
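The reciprocal-inhibition mechanism behind Cases 3 and 4 can be sketched with a standard Matsuoka half-center oscillator; this choice of equations is an assumption (the disclosure does not specify oscillator equations), and all constants are illustrative. The time constants play the role of the modulatory signals: scaling them shifts the rhythm between slow and fast.

```python
import numpy as np

# Matsuoka half-center: two mutually inhibiting units with slow adaptation.
# w = cross-inhibition strength, b = self-adaptation gain, s = tonic drive.

def half_center(w=2.5, b=2.5, s=1.0, t1=0.25, t2=0.5, dt=0.01, steps=3000):
    x = np.array([0.1, 0.0])        # membrane states (asymmetric start)
    v = np.zeros(2)                 # slow adaptation variables
    outputs = []
    for _ in range(steps):
        y = np.maximum(x, 0.0)                    # rectified firing output
        x += dt / t1 * (-x + s - b * v - w * y[::-1])  # inhibition from peer
        v += dt / t2 * (-v + y)                   # adaptation lets roles swap
        outputs.append(y.copy())
    return np.array(outputs)

slow = half_center(t1=0.5, t2=1.0)   # larger time constants: slow rhythm
fast = half_center(t1=0.25, t2=0.5)  # smaller time constants: fast rhythm
```

The structure mirrors the figures: without the negative (inhibitory) coupling there is no alternation, and only the modulated parameters, not the wiring, differ between the slow and fast cases.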
[00148] In an embodiment, for stability, Hebbian and anti-Hebbian learning cannot always be engaged. That is, these kinds of learning occur only when modulated by specifically designated neurons called modulatory neurons. These neurons are inspired by the dopamine neurons in the brain, and are primarily used for indicating rewards/punishments and errors. Different dopamine neurons encode errors, reward and punishment with variable strengths, and this determines whether Hebbian or anti-Hebbian learning will occur in a group of interconnected neurons.
[00149] In an embodiment, the modulatory neurons can be connected directly to the group of neurons in question and enable synaptic modifications. These neurons can also act as error signals for a subgroup of neurons, indicating whether 'learning' a spatio-temporal sequence is advantageous or not. If advantageous, the synaptic strengths are potentiated; otherwise they are depressed and the neuronal subgroup tries to learn a different spatio-temporal sequence. Once the desired output is achieved, these modulatory neurons turn off so that no further rapid changes occur in the given neuronal subgroup. These neurons are designated in the architecture itself.

[00150] FIGs. 10A to 10D illustrate, at 1000, an architecture for implementation of a model for understanding human speech and results thereupon, in accordance with an exemplary embodiment of the present disclosure.
[00151] In an embodiment, the system can be developed to understand human speech. Generally, second-generation neural networks cannot be applied to human speech due to its continuous nature. Other spiking models are nowhere near capable of handling such a task, and so spiking neural networks have not been used, because existing systems simply are not powerful enough.
[00152] In an embodiment, audio processing is a sequential problem. There are two main neural network based models for computing sequences: 1) Recurrent Neural Networks (RNNs) and 2) Long Short-Term Memory (LSTM) networks. However, both RNN and LSTM (and their derivatives) mainly use sequential processing over time. This means that long-term information has to travel sequentially through all cells before reaching the present processing cell, and hence can easily be corrupted by being multiplied many times by small numbers less than 1. This is the cause of vanishing gradients. A fundamental issue with these models is that RNN and LSTM combine only two types of processing (spatial and sequential) but fail to incorporate temporal processing. That is why they are inherently unsuitable for real-time applications. Another issue with RNN and LSTM is that they are not hardware friendly: it takes a lot of resources to train these networks quickly, and similarly to run them in the cloud. Given that the demand for speech-to-text is growing rapidly, these models do not scale well. While the LSTM model (which can be seen as multiple switch gates) can bypass units and thus remember over longer time steps, removing some of the vanishing gradients, there is still a sequential path from older past cells to the current one. In fact, the path is now even more complicated, because it has additive and forget branches attached to it.
[00153] In an embodiment, the proposed system avoids all these issues by rejecting the error-gradient-descent-based approach and instead adopting the more natural mechanisms of neuroevolution and modulated plasticity for processing spatial, temporal and sequential data, in an implementation that is light-weight, scalable, highly functional and works in real-time.
[00154] In an embodiment, a prototype can be developed using the proposed model of spiking neural networks for all its computations. As illustrated in FIG. 10A, an audio sound 1002 is first frequency analyzed (similar to the function of the cochlea in the human ear), shown at 1004. The sound then passes through the proposed system 1006 of ascending sub-networks of spiking neurons and finally reaches the output/read-out neurons 1008. The ascending sub-networks of spiking neurons can include sensory neurons 1010, integrating interneurons 1012, filtering interneurons 1014 and Hebbian/anti-Hebbian interneurons 1016. The Hebbian/anti-Hebbian neurons can also receive inputs from oscillatory neurons 1018 and modulatory neurons 1020.
[00155] The sensory part is created by neuroevolution, while the interneuronal layer is subject to modulated Hebbian and anti-Hebbian plasticity. The disclosed structure can extract spatial, temporal as well as sequential patterns and accurately identify human speech. All simulations are done on a regular consumer-grade laptop. The output is not phase-locked to the input stimulus. The oscillatory neurons are used for determining when to provide the output decision.
[00156] In an exemplary embodiment, the proposed system can take raw sounds as input and train itself to a client-specific vocabulary by extracting spatial, temporal as well as sequential patterns of the raw sound inputs in line with that vocabulary. As such, the proposed system can find good use in speech recognition systems, since it can separate spatial, temporal and sequential patterns into distinct parts.
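To make the ascending structure of FIG. 10A concrete, a rate-based stand-in (not a spiking simulation) is sketched below; every function name, layer shape and weight is an assumption chosen only to show how the front end and the stacked neuron populations compose.

```python
import numpy as np

def frequency_analyze(audio, n_bands=32):
    """Stand-in for the cochlea-like front end (1004): band-energy summary."""
    spectrum = np.abs(np.fft.rfft(audio))
    return np.array([b.mean() for b in np.array_split(spectrum, n_bands)])

def run_pipeline(audio, weights):
    x = frequency_analyze(audio)
    x = np.tanh(weights["sensory"] @ x)     # sensory neurons (1010)
    x = np.tanh(weights["integrate"] @ x)   # integrating interneurons (1012)
    x = np.tanh(weights["filter"] @ x)      # filtering interneurons (1014)
    return weights["readout"] @ x           # read-out neurons (1008)

rng = np.random.default_rng(1)
weights = {                                 # shapes are illustrative
    "sensory":   rng.normal(0, 0.3, (64, 32)),
    "integrate": rng.normal(0, 0.3, (64, 64)),
    "filter":    rng.normal(0, 0.3, (32, 64)),
    "readout":   rng.normal(0, 0.3, (5, 32)),
}
print(run_pipeline(rng.normal(size=16000), weights).shape)  # (5,)
```

In the disclosed system the sensory weights would come from neuroevolution and the interneuronal weights from modulated Hebbian/anti-Hebbian plasticity, with oscillatory and modulatory inputs (1018, 1020) gating when the read-out is taken; none of that machinery is reproduced in this stand-in.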
[00157] FIGs. 10B to 10D provide results, including the spectrograms from the frequency analysis and the outputs of the read-out neurons.
[00158] In an exemplary embodiment, FIG. 10B is the spectrogram for audio saying "1, 2, 3, 4, 5", FIG. 10C the spectrogram for audio saying "5, 4, 3, 2, 1", and FIG. 10D the spectrogram for audio saying "5, 1, 3".
[00159] FIGs. 11A to 11C illustrate, at 1100, single sensory neuron behavioral variation in accordance with an exemplary embodiment of the present disclosure.
[00160] FIG. 11A illustrates a single neuron showing tonic firing first and burst firing later on. It is the same neuron; only modulatory effects come into the picture. FIG. 11B illustrates a single sensory neuron stimulated by a sinusoidal wave input. FIG. 11C illustrates a single sensory neuron's responses to sine inputs of different frequencies, wherein 1102 is the response to a low-frequency sine and 1104 is the response to a high-frequency sine.
[00161] FIG. 12A illustrates at 1200 the behavior of a real muscle (prior art), while FIG. 12B illustrates at 1202 the behavior of the muscle/effector unit of this model, in accordance with an exemplary embodiment of the present disclosure. FIG. 12C illustrates at 1204 muscle integration with multiple sensory inputs, followed by tetanic stimulation (prior art), while FIG. 12D emulates at 1206 the same using the proposed model, in accordance with an exemplary embodiment of the present disclosure.
[00162] FIG. 13A illustrates at 1300 a real muscle's response to tetanic tension (75 Hz) at three levels (prior art), while FIG. 13B illustrates at 1302 the response achieved using the proposed model, in accordance with an exemplary embodiment of the present disclosure.

[00163] FIG. 13A shows the response at three levels, shown as 1302, 1304 and 1306, to tetanic tension (75 Hz). In FIG. 13B, the series 1 response is indicated as 1322, series 2 as 1324 and series 3 as 1326.
[00164] FIG. 14 illustrates at 1400 the values of data points as per the system.
[00165] In an embodiment, the normal distribution denotes the performance of a biological or artificial set of neural networks that are randomly connected internally. The z values can be drawn from a normal distribution N whose mean is set to 0 and whose standard deviation is called the mutation step size. Parents are selected by a uniform random distribution whenever an operator needs one or more. Thus, ES parent selection is unbiased, as every individual has the same probability of being selected. In an ES, 'parent' means a population member (in GAs, a population member selected to undergo variation).
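A small sketch of the mutation and unbiased parent selection just described follows; the population size, genome length and step size sigma are assumed values for illustration.

```python
import numpy as np

# Evolution-strategy step: pick a parent uniformly at random (unbiased
# selection) and add Gaussian noise N(0, sigma) drawn per gene, where
# sigma is the mutation step size.

rng = np.random.default_rng(42)
sigma = 0.1                                   # mutation step size (assumed)
population = [rng.normal(size=8) for _ in range(20)]

def make_offspring(population):
    parent = population[rng.integers(len(population))]  # uniform choice
    return parent + rng.normal(0.0, sigma, size=parent.shape)

child = make_offspring(population)
```

In the neuroevolutionary training described earlier, the genome would encode synaptic weights, and offspring would be retained or discarded according to the fitness function (for example, the required XOR outputs).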
[00166] FIG. 15 illustrates, at 1500, an exemplary method flow for updating a neuromorphic network having a plurality of spiking neurons, in accordance with an embodiment of the present disclosure.
[00167] In an embodiment, the method includes, at block 1502, receiving one or more data packets pertaining to a set of input spikes by at least one of the plurality of spiking neurons, the set of input spikes having one or more attributes related to spatial information and temporal information of the at least one of the plurality of spiking neurons. At block 1504, the method includes, in response to the received input spikes, activating, by the one or more processors, a set of output spikes by the at least one of the plurality of spiking neurons, where the plurality of spiking neurons are operatively coupled to one another using a synapse. At block 1506, the method includes determining, by the one or more processors, a change in performance of the synapse when the set of output spikes are generated based on the set of input spikes by the plurality of spiking neurons, wherein the determined change in performance of the synapse is indicative of an occurrence pattern of a plasticity modulation within the plurality of spiking neurons. At block 1508, the method includes updating, by the one or more processors, the plurality of neurons based on the indicated occurrence pattern of the plasticity modulation.
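A procedural sketch of the four blocks follows, under assumed data structures; the disclosure does not fix concrete types, and the threshold, reset behavior and update magnitude below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Neuron:
    potential: float = 0.0
    threshold: float = 1.0

    def integrate(self, spikes):
        self.potential += 0.4 * len(spikes)   # each input spike adds potential
        fired = self.potential >= self.threshold
        if fired:
            self.potential = 0.0              # reset after firing
        return fired

def update_network(packets, neurons, synapse_weight):
    # Block 1502: receive data packets carrying the input spikes
    input_spikes = [p["spikes"] for p in packets]
    # Block 1504: activate output spikes in the coupled spiking neurons
    outputs = [n.integrate(s) for n, s in zip(neurons, input_spikes)]
    # Block 1506: the change in synapse performance indicates the
    # occurrence pattern of plasticity modulation (sign is illustrative)
    delta = 0.05 if any(outputs) else -0.05
    # Block 1508: update the neurons/synapse from the indicated pattern
    return synapse_weight + delta, outputs

neurons = [Neuron(), Neuron()]
w, out = update_network(
    [{"spikes": [1.0, 2.0, 3.0]}, {"spikes": []}], neurons, 0.5)
print(w, out)   # 0.55 [True, False]
```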
[00168] FIG. 16 is a block diagram, 1600, illustrating an exemplary computer system in which or with which embodiments of the present disclosure may be implemented.
[00169] As shown in FIG. 16, the computer system includes an external storage device (1610), a bus (1620), a main memory (1630), a read only memory (1640), a mass storage device (1650), a communication port (1660), and a processor (1670). A person skilled in the art will appreciate that the computer system may include more than one processor and communication ports. Examples of processor (1670) include, but are not limited to, an Intel® Itanium® or Itanium 2 processor(s), or AMD® Opteron® or Athlon MP® processor(s), Motorola® lines of processors, FortiSOC™ system on a chip processors or other future processors. Processor (1670) may include various engines associated with embodiments of the present invention. Communication port (1660) can be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. Communication port (1660) may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system connects.
[00170] Memory (1630) can be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art. Read only memory (1640) can be any static storage device(s), e.g., but not limited to, Programmable Read Only Memory (PROM) chips for storing static information, e.g., start-up or BIOS instructions for processor (1670). Mass storage (1650) may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), e.g. those available from Seagate (e.g., the Seagate Barracuda 7200 family) or Hitachi (e.g., the Hitachi Deskstar 7K1000), one or more optical discs, Redundant Array of Independent Disks (RAID) storage, e.g. an array of disks (e.g., SATA arrays), available from various vendors including Dot Hill Systems Corp., LaCie, Nexsan Technologies, Inc. and Enhance Technology, Inc.
[00171] Bus (1620) communicatively couples processor(s) (1670) with the other memory, storage and communication blocks. Bus (1620) can be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), USB or the like, for connecting expansion cards, drives and other subsystems, as well as other buses, such as a front side bus (FSB), which connects processor (1670) to the software system.
[00172] Optionally, operator and administrative interfaces, e.g. a display, keyboard, and a cursor control device, may also be coupled to bus (1620) to support direct operator interaction with computer system. Other operator and administrative interfaces can be provided through network connections connected through communication port (1660). External storage device (1610) can be any kind of external hard-drives, floppy drives, IOMEGA® Zip Drives, Compact Disc - Read Only Memory (CD-ROM), Compact Disc - Re-Writable (CD-RW), Digital Video Disk - Read Only Memory (DVD-ROM). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system limit the scope of the present disclosure.
[00173] In this manner, the present disclosure elaborates upon a model of real-time adaptive non-linear spiking neural networks based upon the biologically-inspired structure of neurons, including their axon, soma and dendrites, and the way they interact with each other, that can work in a computationally efficient way.
[00174] In an embodiment, the system disclosed combines sub-networks into a hierarchy in which each performs its individual function: the lowest layer for spatial integration, the middle layer for order and sequential integration, and the highest layer for temporal integration. It implements various layers of neuronal groups including sensory neurons, interneurons, motor neurons, muscles/effectors, oscillators and modulatory neurons.
[00175] Further, the invention elaborates upon a shared model of the synapse between the pre-synaptic and post-synaptic neurons, such that potentiation of a given synapse occurs at the firing of the post-synaptic neuron and depression occurs at the firing of the pre-synaptic neuron for Hebbian plasticity, and vice-versa for anti-Hebbian plasticity.
[00176] Further, the invention provides for neuronal dynamics that provide modes of activity of the neurons ranging from bursting, tonic firing and phasic firing to absolute refractory period, relative refractory period, post-inhibition rebound and sensory adaptation. It discloses reflex arcs that arise from sensory-to-motor neuronal mapping as the basic unit of cognition in spiking neural networks.
[00177] Further, the invention uses central pattern generators, or oscillators, that emerge from the activity of groups of regular neurons, which are heavily modulated by sensory input, and provides for an implementation of modulation of Hebbian and anti-Hebbian plasticity within the neuronal groups using reward-specific and punishment-specific modulatory neurons. The system disclosed can enable delayed and non-phase-locked responses to input stimuli, enabled by the combination of stimulus-dependent processing with intrinsically oscillatory sub-networks. It enables characterization of patterns in spatial, temporal as well as sequential domains (order sensitivity) using spiking neural networks.
[00178] Furthermore, the present invention emulates the most primitive neural network-based intelligence and extends it to cover complex intelligence. It is a vast implementation, with various complexities arising out of the real-time and noisy nature of the neural networks combined with the performance bottlenecks of regular consumer PCs and laptops. The proposed system is designed bottom-up from scratch, and no external library is used for any kind of computation within the system. The design has immense performance potential.
[00179] In an embodiment, the system disclosed herein enables the most effective way of using the above-described third-generation spiking neural networks, while gaining significant performance improvement. The proposed invention enables development of an SNN with far fewer layers. If nodes only fire in response to a spike (actually a train of spikes), then one spiking neuron could replace many hundreds of hidden units in a sigmoidal neural network. The proposed system can handle continuous input data like live streaming, and is much more energy efficient than any other model/system offering similar functionalities.
[00180] In an embodiment, the various spikes can be routed like data packets, instead of the real-valued outputs of second-generation neural networks, further reducing layers.
[00181] In an embodiment, the invention proposes nature-inspired modulated Hebbian and anti-Hebbian plasticity as a non-gradient training method, while CNNs rely on gradient descent. Gradient descent, which looks at the performance of the overall network, can be led astray by unusual conditions at a layer, such as a non-differentiable activation function. Other, more severe limitations of back propagation are: 1) vanishing gradients in deep learning, and 2) a tendency to converge to a local minimum instead of the global optimum. A very severe limitation of convolutional networks is that they are too constrained: they accept a fixed-size vector as input (e.g. an image) and produce a fixed-size vector as output (e.g. probabilities of different classes). Moreover, these models perform this mapping using a fixed number of computational steps (e.g. the number of layers in the model).
[00182] In an embodiment, the proposed system enables SNNs that can learn from one source, apply the knowledge learnt to another, and generalize about their environment. The SNNs enabled can learn to distinguish minor differences and can specialize to input patterns. Further, the SNNs so formed can remember: tasks once learned can be recalled and applied to other data. They can learn from their environment unsupervised/semi-supervised and with very few examples or observations. That makes them quick learners.
[00183] In an embodiment, the proposed system is very lightweight and can run hundreds of spiking neurons in real-time in parallel on a conventional desktop PC or laptop. It needs very few data samples to learn; even two or three samples are enough in some cases to learn effectively. It can extract temporal patterns along with spatial patterns, as well as sequential and order-specific patterns.

[00184] In an embodiment, the proposed system uses spike-timing-based coding instead of rate-based coding. Spike-timing-based coding has been decisively shown to be much more powerful and bio-realistic.
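For intuition, the two coding schemes can be contrasted as below; the encoding functions and constants are assumptions chosen only to show the distinction, not the coding actually used by the system.

```python
# Rate-based coding carries a stimulus intensity in a spike count over a
# window; spike-timing-based coding carries it in the latency of a single
# spike (stronger stimulus -> earlier spike), so one spike suffices.

def rate_code(intensity, window_ms=100.0, max_rate_hz=100.0):
    """Stronger stimulus -> more spikes in the window."""
    return int(intensity * max_rate_hz * window_ms / 1000.0)

def timing_code(intensity, t_max_ms=20.0):
    """Stronger stimulus -> earlier first spike (lower latency)."""
    return t_max_ms * (1.0 - intensity)

for s in (0.2, 0.9):
    print(f"intensity {s}: {rate_code(s)} spikes "
          f"vs first spike at {timing_code(s):.1f} ms")
```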
[00185] In an embodiment, the proposed system enables asynchronous units and a modular structure. This is a great advantage: since the neuronal units and subgroups are asynchronous, in a deep network the upper layers do not have to wait for signals from the lower layers. Each layer works autonomously and there are several cross connections. This enables much deeper networks with almost no lag times. Much deeper functional neural networks can be utilized as opposed to second-generation neural networks.
[00186] In an embodiment, the proposed system enables novel behaviors and functionalities unlike existing artificial neural systems.
[00187] While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.
ADVANTAGES OF THE INVENTION
[00188] The present disclosure relates to the field of neuromorphic computing system engineering. More particularly, the present disclosure relates to a system and method for updating a spiking neural network.
[00189] The present disclosure provides a neuromorphic computing system that works on nature-inspired modulated Hebbian and anti-Hebbian plasticity.
[00190] The present disclosure provides a system and method for providing an efficient technique of using third generation neural networks, while providing significant performance improvement.
[00191] The present disclosure provides a system and method for providing a neuromorphic computing system that handles continuous input patterns, differentiates and learns between various input patterns and applies the input patterns to multiple operations.
[00192] The present disclosure provides a system and method for providing a neuromorphic computing system that manages multiple neurons in real time during parallel operations.

[00193] The present disclosure provides a system and method for providing a neuromorphic computing system that learns from its environment with little supervision and few observations.
[00194] The present disclosure provides a system and method for providing a neuromorphic computing system that extracts sequential and order-specific sequences and uses spike-timing-based coding instead of rate-based coding for determining output spikes.
[00195] The present disclosure provides a system and method for providing a neuromorphic computing system with multiple layers, where the layers work autonomously such that upper layers do not depend on inputs from lower layers, resulting in deeper networks with no lag times.

Claims

I Claim:
1. A neuromorphic computing system having a plurality of spiking neurons, said system comprising:
a processor communicatively coupled to a memory, the memory storing a set of instructions executable by the processor, wherein the processor upon execution of the set of instructions causes the system to:
receive one or more data packets pertaining to a set of input spikes by at least one of the plurality of spiking neurons, the set of input spikes having one or more attributes related to spatial information and temporal information of the at least one of the plurality of spiking neurons;
in response to the received one or more data packets, activate a set of output spikes by the at least one of the plurality of spiking neurons, where the plurality of the spiking neurons are operatively coupled to one another using a synapse;
determine a change in performance of the synapse when the set of output spikes are generated based on the set of input spikes by the plurality of spiking neurons, wherein the determined change in performance of the synapse is indicative of an occurrence pattern of a plasticity modulation within the plurality of spiking neurons; and
update the plurality of spiking neurons based on the indicated occurrence pattern of the plasticity modulation.
2. The system as claimed in claim 1, wherein the plasticity is one of a Hebbian plasticity or an anti-Hebbian plasticity.
3. The system as claimed in claim 1, wherein the set of output spikes activated by the set of input spikes are categorized as being under a delayed response or a non-phase-locked response.
4. The system as claimed in claim 1, wherein the plurality of spiking neurons are combined to form a hierarchy.
5. The system as claimed in claim 4, wherein within the hierarchy a lowest layer performs spatial integration, a middle layer performs order and sequential integration, and a highest layer performs temporal integration.
6. The system as claimed in claim 1, wherein a potentiation of the synapse occurs at firing of a post-synaptic neuron and depression of the synapse occurs at firing by a pre- synaptic neuron.
7. The system as claimed in claim 6, wherein the potentiation of the synapse occurs due to modulation of the hebbian plasticity and the depression of the synapse occurs due to modulation of the anti-hebbian plasticity.
8. A method for updating a neuromorphic network having a plurality of spiking neurons, said method comprising:
receiving, by one or more processors, one or more data packets pertaining to a set of input spikes by at least one of the plurality of spiking neurons, the set of input spikes having one or more attributes related to spatial information and temporal information of the at least one of the plurality of spiking neurons;
in response to the received one or more input spikes, activating, by the one or more processors, a set of output spikes by the at least one of the plurality of spiking neurons, where the plurality of the spiking neurons are operatively coupled to one another using a synapse;
determining, by the one or more processors, a change in performance of the synapse when the set of output spikes are generated based on the set of input spikes by the plurality of spiking neurons, wherein the determined change in performance of the synapse is indicative of an occurrence pattern of a plasticity modulation within the plurality of spiking neurons; and
updating, by the one or more processors, the plurality of the neurons based on the indicated occurrence pattern of the plasticity modulation.
9. The method as claimed in claim 8, wherein the neuromorphic network is a non-linear spiking neural network.
10. The method as claimed in claim 9, wherein a mapping between a sensory neuron and a motor neuron of the plurality of spiking neurons results in producing reflex arcs in the neuromorphic network.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201821033883 2018-09-09
IN201821033883 2018-09-09

Publications (1)

Publication Number Publication Date
WO2020049541A1 true WO2020049541A1 (en) 2020-03-12

Family

ID=69721618

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2019/057570 WO2020049541A1 (en) 2018-09-09 2019-09-09 A neuromorphic computing system

Country Status (1)

Country Link
WO (1) WO2020049541A1 (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9111215B2 (en) * 2012-07-03 2015-08-18 Brain Corporation Conditional plasticity spiking neuron network apparatus and methods
WO2018057226A1 (en) * 2016-09-26 2018-03-29 Intel Corporation Programmable neuron core with on-chip learning and stochastic time step control
US20180174023A1 (en) * 2016-12-20 2018-06-21 Intel Corporation Autonomous navigation using spiking neuromorphic computers
US20190197391A1 (en) * 2017-12-27 2019-06-27 Intel Corporation Homeostatic plasticity control for spiking neural networks

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115169547A (en) * 2022-09-09 2022-10-11 深圳时识科技有限公司 Neuromorphic chip and electronic device


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19857486

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19857486

Country of ref document: EP

Kind code of ref document: A1