EP2973240A1 - Neural network and method of programming - Google Patents

Neural network and method of programming

Info

Publication number
EP2973240A1
EP2973240A1 (application EP13878741A)
Authority
EP
European Patent Office
Prior art keywords
array
neurons
neuron
dendrite
neural network
Prior art date
Legal status
Ceased
Application number
EP13878741.1A
Other languages
German (de)
French (fr)
Other versions
EP2973240A4 (en)
Inventor
Narayan Srinivasa
Youngkwan Cho
Current Assignee
HRL Laboratories LLC
Original Assignee
HRL Laboratories LLC
Priority date
Filing date
Publication date
Application filed by HRL Laboratories LLC filed Critical HRL Laboratories LLC
Priority claimed from PCT/US2013/057724 external-priority patent/WO2014149070A1/en
Publication of EP2973240A1 publication Critical patent/EP2973240A1/en
Publication of EP2973240A4 publication Critical patent/EP2973240A4/en

Classifications

    • G06N3/08 Learning methods (G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS › G06N3/00 Computing arrangements based on biological models › G06N3/02 Neural networks)
    • G06N3/088 Non-supervised learning, e.g. competitive learning (under G06N3/08 Learning methods)
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs (under G06N3/04 Architecture, e.g. interconnection topology)

Definitions

  • Brain architectures composed of assemblies of interacting neurons and synapses with STDP can solve complex tasks and exhibit complex behaviors in real time, with high precision and very low power.
  • Modeling such activity in a physical network, however, is complex.
  • Fig. 2 illustrates in schematic form the neural network 10 of Fig. 1 and shows input array/layer 12 fully connected to output array/layer 20 and training array/layer 16 connected one-to-one to output array/layer 20.
  • Fig. 3 schematizes a neural network 30 such as disclosed in the Wu et al. reference above.
  • The neural network 30 comprises a training layer 16 connected one-to-one to an output layer 20 as detailed in Fig. 1. Further, neural network 30 comprises two input layers 12 topographically connected to the input of a network layer 32, the network layer 32 being fully connected to the input of output layer 20.
  • the neural networks of Figs. 1-3 are not tolerant to faults such as a partial absence of sensory or motor input signals, introduced either from the beginning of the learning process or after some initial learning has taken place.
  • An embodiment of the present disclosure comprises a neural network having first and second neural network portions as described above, as well as a third array having a fourth number of neurons and a fifth number of interneurons distributed among the neurons of the third array, wherein the fifth number is smaller than the fourth number, wherein: the axon of each neuron of the third array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the third array; and the axon of each interneuron of the third array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the third array; wherein the axon of each neuron of the second array of the first neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of the third array; and wherein the axon of each neuron of the second array of the second neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of the third array.
  • the neural network further comprises: a third array having a fourth number of neurons and a fifth number of interneurons distributed among the neurons of the third array, wherein the fifth number is smaller than the fourth number, wherein: the axon of each neuron of the third array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the third array; and the axon of each interneuron of the third array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the third array; wherein the dendrite of each neuron of the third array forms an excitatory STDP synapse with the axon of each neuron of the second array.
  • the input signals to the first and second sub-arrays of neurons relate to variable parameters that are to be correlated to the input signals to the fourth array.
  • Another embodiment of the present disclosure comprises a method of programming a neural network, the method comprising: providing a first neural network portion comprising a first array having a first number of neurons and a second array having a second number of neurons, wherein the second number is smaller than the first number, the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of a plurality of neurons of the first array; the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of neighboring neurons of the second array; and providing to the dendrite of each neuron of the first array an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron.
  • the method further comprises providing the second array with a third number of interneurons distributed among the neurons of the second array, wherein the third number is smaller than the second number, wherein: the axon of each neuron of the second array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the second array; and the axon of each interneuron of the second array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the second array.
  • Another embodiment of the present disclosure comprises a method of decoding an output of a neural network having first and second neural network portions as detailed above; the method comprising: providing the first arrays of the first and second neural network portions with first and second input signals having a rate that increases when a measured parameter gets closer to a predetermined value assigned to the neurons of said first arrays; assigning to each neuron of the fourth array of neurons an incremental position value comprised between 1 and N, N being the number of neurons of the fourth array; at any given time, measuring the firing rate of each neuron of the fourth array; and estimating the output of the neural network, at said any given time, as corresponding to the neuron of the fourth array having a position value equal to a division of the sum of the position value of each neuron of the fourth array, weighted by its firing rate at said any given time, by the sum of the firing rates of each neuron of the fourth array at said any given time.
  • Another embodiment of the present disclosure comprises a method of decoding an output of a neural network having first and second sub-arrays of neurons as disclosed above; the method comprising: providing the first and second sub-arrays of neurons with first and second input signals having a rate that increases when a measured parameter gets closer to a predetermined value assigned to the neurons of said first and second sub-arrays of neurons; assigning to each neuron of the third array of neurons an incremental position value comprised between 1 and N, N being the number of neurons of the third array; at any given time, measuring the firing rate of each neuron of the third array; and estimating the output of the neural network, at said any given time, as corresponding to the neuron of the third array having a position value equal to a division of the sum of the position value of each neuron of the third array, weighted by its firing rate at said any given time, by the sum of the firing rates of each neuron of the third array at said any given time.
  • Fig. 1 illustrates a known neural network model.
  • Fig. 2 is a schematic of the model of Fig. 1.
  • Fig. 3 is a schematic of another known neural network model.
  • Fig. 4 illustrates a portion of a neural network model according to an embodiment of the present disclosure.
  • Fig. 5 illustrates a portion of a neural network model according to an embodiment of the present disclosure.
  • Fig. 6 illustrates a portion of a neural network model according to an embodiment of the present disclosure.
  • Fig. 9 illustrates the synaptic conductances between various layers during learning showing the emergence of topological organization of conductances in the neural network model of Fig. 8.
  • Figs. 10A-B illustrate the output of the output layer in response to inputs on layers L1θ1 and L1θ2 of the neural network model of Fig. 8.
  • Figs. 11A-C illustrate the incremental convergence of the neural network model of Fig. 8 as a function of learning.
  • Figs. 13A-D illustrate the performances of the neural network model of Fig. 8 for varying degrees of damaged neurons.
  • Fig. 14 is a schematic of a neural network model according to an embodiment of the present disclosure.
  • FIG. 15 is a schematic of a further embodiment of the neural network model of Fig. 14.
  • FIG. 16 is a schematic of a further embodiment of the neural network model of Fig. 14.
  • Devices, methods, and systems are hereby described for a neural network model, in particular a spiking model capable of learning arbitrary multiple transformations for a self-realizing network (SRN).
  • the described systems and methods may be used to develop self-organizing robotic platforms (SORB) that autonomously discover and extract key patterns during or from real world interactions. In some configurations, the interactions may occur without human intervention.
  • the described SRN may be configured for unmanned ground and air vehicles for intelligence, surveillance, and reconnaissance (ISR) applications.
  • The input signal sent to each neuron 14, relating to a measured parameter, has a rate that increases when the measured parameter gets closer to a predetermined value assigned to said neuron.
  • Fig. 4 shows the firing rate FR of the input signals at a given time, with respect to the position value PV of the neurons 14.
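The rate coding above can be sketched as follows; the Gaussian tuning shape, the rate bounds, and all names are illustrative assumptions rather than values from the disclosure:

```python
import numpy as np

def encode_rates(value, preferred_values, f_min=1.0, f_max=100.0, sigma=0.15):
    """Firing rate FR for each input neuron: maximal when the measured
    value matches the neuron's assigned (preferred) value PV, and falling
    off in a Gaussian fashion with distance (hypothetical parameters)."""
    d = value - np.asarray(preferred_values, dtype=float)
    return f_min + (f_max - f_min) * np.exp(-0.5 * (d / sigma) ** 2)

# 40 input neurons with preferred values evenly spaced over [0, pi]
prefs = np.linspace(0.0, np.pi, 40)
rates = encode_rates(1.0, prefs)

# the neuron whose preferred value is closest to the measurement fires fastest
best = int(np.abs(prefs - 1.0).argmin())
assert int(rates.argmax()) == best
```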
  • The neurons are integrate-and-fire neurons, or operate under a model of integrate-and-fire neurons, and the neural network or neural network model is a spiking neural network or spiking neural network model.
  • The portion of neural network model 40 comprises an intermediate array/layer 42 having a second number of neurons 44.
  • the second number is smaller than the first number.
  • the dendrite of each neuron 44 of the intermediate array forms an excitatory STDP synapse with the axon of a plurality of neurons 14 of the input array 12.
  • the dendrite of each neuron 44 of the intermediate array 42 can form STDP synapses with the axon of 100 to 200 neurons 14 of the input array.
  • the dendrite of each neuron 44 of the intermediate array 42 forms an excitatory STDP synapse 46 with the axon of neighboring neurons 44 of the intermediate array 42.
  • Neighboring neurons can be a predetermined number of closest neurons in both directions of the array.
  • the intermediate array 42 further comprises a third number of interneurons 48 distributed among the neurons 44, wherein the third number is smaller than the second number.
  • the third number can be about one fourth of the second number.
  • the interneurons 48 of an array are equally distributed among the neurons 44, for example according to a periodic or pseudorandom scheme.
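A periodic distribution of interneurons among the neurons of an array might be sketched as follows (the function name is hypothetical; the one-in-four ratio follows the text above):

```python
def place_interneurons(n_neurons, n_inter):
    """Return the indices at which interneurons are inserted among the
    neurons of an array, using a simple periodic scheme: one interneuron
    after every (n_neurons // n_inter) neurons."""
    period = n_neurons // n_inter
    return [i for i in range(n_neurons) if (i + 1) % period == 0][:n_inter]

# e.g. an array of 100 neurons with a number of interneurons equal to
# about one fourth of the number of neurons
slots = place_interneurons(100, 25)
assert len(slots) == 25
```

A pseudorandom scheme would instead draw the slot positions from a seeded random generator; the periodic variant is shown because it is easiest to verify.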
  • Fig. 5 illustrates a portion of a neural network or neural network model 60 according to an embodiment of the present disclosure.
  • the portion of neural network model 60 comprises two portions of neural model 40, 58 as described in relation with Fig. 4.
  • the portion of neural network model 60 further comprises a network array 62 having a fourth number of neurons 64 and a fifth number of interneurons 68 distributed among the neurons of the network array, wherein the fifth number is smaller than the fourth number.
  • the axon of each neuron 64 of the network array forms an excitatory STDP synapse 70 with the dendrite of a neighboring interneuron 68 of the network array 62.
  • the axon of each interneuron 68 of the network array 62 forms an inhibitory STDP synapse 72 with the dendrite of neighboring neurons 64 and interneurons 68 of the network array 62.
  • the axon of each neuron 44 of the intermediate array 42 of the first neural network portion 40 forms an excitatory STDP synapse 74 with the dendrite of a plurality of neurons 64 of the network array 62.
  • the axon of each neuron 44 of the second array 42 of the second neural network portion 58 forms an excitatory STDP synapse 76 with the dendrite of a plurality of neurons 64 of the network array.
  • The network array 62 comprises rows and columns of neurons 64; the axon of each neuron 44 of the second array 42 of the first neural network portion 40 forms an excitatory STDP synapse 74 with the dendrite of a plurality of neurons 64 of a row of the network array 62.
  • the axon of each neuron 44 of the second array 42 of the second neural network portion 58 then forms an excitatory STDP synapse 76 with the dendrite of a plurality of neurons 64 of a column of the network array 62.
  • the axon of each neuron 44 of the second array 42 of the first neural network portion 40 forms an excitatory STDP synapse 74 with the dendrite of a plurality of neurons 64 of a Gaussian neighborhood of neurons 64 of the network array 62; and the axon of each neuron 44 of the second array 42 of the second neural network portion 58 forms an excitatory STDP synapse 76 with the dendrite of a plurality of neurons 64 of a Gaussian neighborhood of neurons 64 of the network array 62.
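One way such a Gaussian-neighborhood fan-out could be generated is sketched below; the mapping from a pre-synaptic index to a centre position in the network array, and every parameter value, are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_fanout(pre_size, post_shape, n_syn=150, sigma=3.0):
    """For each pre-synaptic neuron, pick ~n_syn post-synaptic targets in
    a 2-D network array, drawn from a Gaussian neighborhood around a
    topographically corresponding centre (illustrative scheme)."""
    rows, cols = post_shape
    targets = []
    for i in range(pre_size):
        # centre row scales with the pre-synaptic neuron's index
        cr = i * (rows - 1) / max(pre_size - 1, 1)
        cc = cols / 2.0
        r = np.clip(np.round(rng.normal(cr, sigma, n_syn)), 0, rows - 1)
        c = np.clip(np.round(rng.normal(cc, sigma, n_syn)), 0, cols - 1)
        targets.append(np.unique((r * cols + c).astype(int)))
    return targets

syn = gaussian_fanout(pre_size=72, post_shape=(24, 24))
```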
  • Fig. 6 illustrates a portion of a neural network or neural network model 80 according to an embodiment of the present disclosure, comprising the portion of neural network 60 described in relation with Fig. 5. For clarity, portions 40 and 58 are not illustrated.
  • neural network 80 comprises a third neural network portion 82 as described in relation with Fig. 4, comprising an input array (not shown) arranged for receiving input signals, and an intermediate array 42 having neurons 44 and interneurons 48.
  • Neural network portion 82 is a training portion of neural network 80.
  • neural network 80 also comprises an output array 84 having a same number of neurons 86 as the intermediate array 42 of portion 82.
  • output array 84 comprises interneurons 88 distributed among the neurons 86.
  • Interneurons 88 can be in the same number as in intermediate array 42.
  • the axon of each neuron 86 of the output array forms an excitatory STDP synapse 90 with the dendrite of a neighboring interneuron 88; and the axon of each interneuron 88 of the output array forms an inhibitory STDP synapse 92 with the dendrite of neighboring neurons 86 and interneurons 88 of the output array.
  • the dendrite of each neuron 86 of the output array forms an excitatory STDP synapse 94 with the axon of a plurality of neurons 64 of the network array 62; and the dendrite of each neuron 86 of the output array 84 forms an excitatory non-STDP synapse 96 with the axon of a corresponding neuron 44 of the intermediary array 42 of the neural network portion 82.
  • the input signals to the neural network portions 40 and 58 relate to variable parameters that are to be correlated to input signals to training portion 82 during a training period.
  • input signals are no more sent to training portion 82, and the signals at the axon of neurons 86 of the output array provide the output of the neural network 80 to input signals provided to the neural network portions 40 and 58.
  • Fig. 7 schematizes a neural network 80 according to an embodiment of the present disclosure.
  • The neural network 80 comprises a training portion 82, comprising an input array/layer 12 connected to an intermediate array/layer 42, connected as detailed above to an output array/layer 84.
  • Neural network 80 further comprises two input portions 40 and 58, each having an input array/layer 12 connected to an intermediate layer 42; the intermediate layers being connected to a network layer 62, itself connected to output layer 84.
  • Input portions 40 and 58 can be of identical or different sizes. For example, an input portion having a large number of input neurons can be used to observe a parameter with increased precision, and vice versa.
  • neural network 80 can comprise more than one output layer 84 and more than one training portion such as training portion 82.
  • Neural network 80 comprises an additional output layer and one or more additional training portions, having sizes identical to, or different from, output layer 84 and training portion 82.
  • the additional output layers and training portions can be connected to network layer 62 consistently with output layer 84 and training portion 82.
  • the additional training portions will then receive in input additional parameters to be correlated with the parameters input to portions 40 and 58 during the training period, and the additional output layers will output said additional parameter in response to said parameters input to portions 40 and 58 after the training period.
  • neural network 80 can comprise only one input portion 40 or more input portions than the two input portions 40 and 58.
  • the neural network can then comprise more than one network layer 62, as well as intermediate network layers 62, if appropriate. Any number of input layers may be used depending on the application and the desired configuration. For example, the number of layers may reach 100 layers or more.
  • Fig. 8 illustrates the application of a neural network model 100 according to an embodiment of the present disclosure to a planar 2DL robotic arm 102.
  • the 2DL robotic arm 102 comprises a first arm 104 capable of making an angle ⁇ 1 with respect to a support 106 at a planar joint 108 arranged at a first end of arm 104.
  • the 2DL robotic arm 102 comprises a second arm 110 capable of making an angle ⁇ 2 in the same plane as ⁇ 1 with respect to the first arm 104 at a planar joint 112 arranged at a second end of arm 104.
  • Neural network model 100 comprises a first input layer L1θ1 coupled in a sparse feedforward configuration via STDP synapses to a first intermediate layer L2θ1, corresponding to arrays 12 and 42 of the first neural network portion 40 of Fig. 7.
  • Neural network model 100 comprises a second input layer L1θ2 and a second intermediate layer L2θ2, corresponding to arrays 12 and 42 of the second neural network portion 58 of Fig. 7.
  • Neural network model 100 comprises a network layer L3 corresponding to array 62 of Fig. 7 and connected to the first and second intermediate layers L2θ1 and L2θ2.
  • Neural network model 100 comprises a first training layer L1x and a first intermediate layer L2x, corresponding to arrays 12 and 42 of the training neural network portion 82 of Fig. 7.
  • Neural network model 100 comprises a second training layer L1y and a second intermediate layer L2y, corresponding to an additional training portion consistent with portion 82 of Fig. 7.
  • Neural network model 100 comprises a first output layer L4x corresponding to layer 84 of Fig. 7.
  • Neural network model 100 comprises a second output layer L4y corresponding to an additional (not shown in Fig. 7) output layer consistent with output layer 84 of Fig. 7.
  • Table (a) below illustrates the number of neurons that were used according to an embodiment of the present disclosure for the various layers/arrays of neural network model 100.
  • An electrical synapse may refer to a mathematical model of a synapse for use in applications including hardware, software, or a combination of both.
  • Input layer L1θ1 and input layer L1θ2 received input signals corresponding to the values of angles θ1 and θ2, having spiking rates for example between 1 Hz and 100 Hz.
  • The spiking rate of a neuron m of layer L1θ1 was high when the angle of joint 108 was close to an angular position θ1m associated with neuron m.
  • The spiking rates of the neighboring neurons responded in a Gaussian fashion, with lower spiking rates farther away from the neuron that spikes maximally.
  • The neurons may respond to a small range of values for the variable of interest (e.g., θ1 for L1θ1).
  • The signals corresponding to θ1 and θ2 were, for example, generated by proprioception, i.e., from the internal state of the robotic arm.
  • Training layers L1x and L1y received input signals corresponding to the position of the distal end of arm 110 in the plane of motion of the arm, in a coordinate system having x and y axes.
  • The signals corresponding to x and y were, for example, generated using the processing of an image capture of the robotic arm.
  • τAMPA and τGABA are the time constants for α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptors at excitatory synapses and γ-aminobutyric acid (GABA) receptors at inhibitory synapses.
  • The neuron model may be self-normalizing, in that the multiplicative effect of synaptic input acts on its own membrane voltage, an effect referred to as voltage shunting.
  • This neuron model may enable the self-regulation of its own excitation and is biologically consistent.
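A minimal sketch of such a shunting, conductance-based membrane update; the parameter values are illustrative, not the patent's constants:

```python
def step_voltage(v, g_ex, g_in, dt=1.0, g_leak=0.05,
                 e_ex=1.0, e_in=-0.5, v_rest=0.0):
    """One Euler step of a conductance-based membrane equation.
    Synaptic input enters multiplicatively (shunting): each conductance
    drives v toward its reversal potential, so excitation self-limits
    as v approaches e_ex. All values are illustrative."""
    dv = (g_leak * (v_rest - v)
          + g_ex * (e_ex - v)
          + g_in * (e_in - v)) * dt
    return v + dv

v = 0.0
for _ in range(100):
    v = step_voltage(v, g_ex=0.2, g_in=0.0)
# v settles near g_ex*e_ex/(g_leak+g_ex) = 0.8, below the excitatory
# reversal potential e_ex: the multiplicative input self-normalizes
assert v < 1.0
```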
  • The value of the excitatory synaptic conductance w_ex(t) (in equation 1) is determined by STDP. We will now outline the STDP learning rule.
  • a synapse may be represented by a junction between two interconnected neurons.
  • The synapse may include two terminals. One terminal may be associated with the axon of the neuron providing information (this neuron is referred to as the pre-synaptic neuron). The other terminal may be associated with the dendrite of the neuron receiving the information (referred to as the post-synaptic neuron).
  • For a synapse with a fixed synaptic conductance w, only the input and the output terminals may be required.
  • The conductance of the synapse may be internally adjusted according to a learning rule referred to as spike-timing dependent plasticity, or STDP.
  • The system may be configured with an STDP function that modulates the synaptic conductance w based on the timing difference (t_i^pre − t_j^post) between the action potentials of pre-synaptic neuron i and post-synaptic neuron j.
  • If the timing difference (t_i^pre − t_j^post) is positive, the synapse undergoes depression. If the timing difference is negative, the synapse may undergo potentiation. If the timing difference is too large in either direction, there is no change in the synaptic conductance. In one embodiment, the maximum timing difference may be 80 ms.
  • The STDP function may include four parameters (A+, A−, τ+ and τ−) that control the shape of the function.
  • A+ and A− correspond to the maximum change in synaptic conductance for potentiation and depression, respectively.
  • More than one pre- or post-synaptic spike may occur within the time windows for potentiation or depression. Accounting for these multiple spikes may be performed using an additive STDP model in which the dynamics of potentiation P and depression D at a synapse are governed by exponentially decaying traces. Whenever a post-synaptic neuron fires a spike, D is decremented by an amount A− relative to the value governed by equation (6). Similarly, every time a synapse receives a spike from a pre-synaptic neuron, P is incremented by an amount A+.
  • The final effective change to the synaptic conductance w due to STDP may be expressed as the accumulated sum of these potentiation and depression updates.
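The trace-based additive rule outlined above can be sketched as follows; the class name, the constants, and the exact update ordering are assumptions for illustration:

```python
class STDPSynapse:
    """Trace-based (additive) STDP sketch. a_plus/a_minus play the role
    of A+/A-, and tau_p/tau_d are the time constants of the potentiation
    (P) and depression (D) traces. Values are illustrative, not the
    patent's constants."""
    def __init__(self, w=0.5, a_plus=0.01, a_minus=0.012,
                 tau_p=20.0, tau_d=20.0, w_max=1.0):
        self.w, self.a_plus, self.a_minus = w, a_plus, a_minus
        self.tau_p, self.tau_d, self.w_max = tau_p, tau_d, w_max
        self.P = 0.0   # potentiation trace, raised by pre-synaptic spikes
        self.D = 0.0   # depression trace, lowered by post-synaptic spikes

    def step(self, dt, pre_spike, post_spike):
        # traces decay exponentially between spikes
        self.P -= self.P / self.tau_p * dt
        self.D -= self.D / self.tau_d * dt
        if pre_spike:          # pre fires: raise P, apply depression D
            self.P += self.a_plus
            self.w = max(0.0, self.w + self.D)
        if post_spike:         # post fires: lower D, apply potentiation P
            self.D -= self.a_minus
            self.w = min(self.w_max, self.w + self.P)

syn = STDPSynapse()
# pre fires just before post (negative t_pre - t_post) -> potentiation
for t in range(30):
    syn.step(1.0, pre_spike=(t == 10), post_spike=(t == 12))
assert syn.w > 0.5
```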
  • a spiking model may be configured to learn multiple transformations from a fixed set of input spike trains.
  • several prediction layer neurons 624 may be coupled or connected to their own training outputs 614.
  • a prediction layer may refer to an output set of neurons 622 that predict the position of a robotic arm.
  • The model in Fig. 6 may function similarly to the model described in Fig. 1.
  • the system 600 described below may simultaneously learn multiple outputs or transformations of the input spike trains.
  • The same input angles (θ1, θ2) may be used by the spiking model to generate multiple outputs using equations 10 and 11 below.
  • A model as illustrated in Fig. 8 may be configured to learn several types of functions, including anticipation, association, prediction, and inverse transformations.
  • the system may be configured to use multiple possible pathways for input-to-output transformations.
  • the model is also fault tolerant.
  • Fig. 10A illustrates the output of the output layer at a given time t in response to inputs on layers L1θ1 and L1θ2, after the training period of the neural network 100 has been completed.
  • The diameter of the circles on the y, θ1 and θ2 axes increases with the firing rate of the neurons forming each axis. Only the neurons that fire are illustrated.
  • decoding the output of a neural network 80 as illustrated in Fig. 7 comprises:
  • y_P(i, j, t) being the evaluated output position at a given time t, for given values i, j of θ1 and θ2;
  • f_ijk(t) being the firing rate of a neuron k, at time t, for given values i, j of θ1 and θ2;
  • y(i, j, k, t) being the position value of a neuron k at time t, for given values i, j of θ1 and θ2.
  • Fig. 10B illustrates the output of the output layer at a given time t in response to inputs on layers L1θ1 and L1θ2, after the training period of the neural network 100 has been completed, where the output on the y axis wraps around the end of the output array.
  • the method of measuring the output of array 84 comprises, if the neurons of the middle of the output array 84 have null firing rates, assigning to the neurons of lower position value a position value increased by the value N, N being the number of neurons of the output array 84.
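The decoding described in relation with Figs. 10A-B, including the wrap-around correction, can be sketched as follows; the silent-middle test and the final modulo step are illustrative choices:

```python
import numpy as np

def decode_position(rates):
    """Population decode: the estimated output is the firing-rate-weighted
    mean of the neurons' position values (1..N), divided by the sum of
    the firing rates."""
    rates = np.asarray(rates, dtype=float)
    pos = np.arange(1, len(rates) + 1, dtype=float)
    return float((pos * rates).sum() / rates.sum())

def decode_with_wrap(rates):
    """If activity wraps around the ends of the array (middle neurons
    silent while both ends fire), shift the lower position values up by
    N before averaging, then map the estimate back into 1..N."""
    rates = np.asarray(rates, dtype=float)
    n = len(rates)
    pos = np.arange(1, n + 1, dtype=float)
    mid = rates[n // 3: 2 * n // 3]
    if np.all(mid == 0) and rates[0] > 0 and rates[-1] > 0:
        pos[: n // 2] += n
    est = (pos * rates).sum() / rates.sum()
    return float(est if est <= n else est - n)

# a single bump centred on neuron 5 of 9 decodes to position 5
assert decode_position([0, 0, 1, 2, 4, 2, 1, 0, 0]) == 5.0
```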
  • the method described in relation with Figs. 10A and 10B can also be used to decode the output of for example layer 84 of Fig. 14, described hereafter.
  • Figs. 11A-C illustrate the incremental convergence of the neural network model of Fig. 8 as a function of learning.
  • Fig. 11A illustrates the x and y output of the neural network model of Fig. 8 after a training period of 300 seconds.
  • Fig. 11B illustrates the x and y output of the neural network model of Fig. 8 after a training period of 600 seconds.
  • Fig. 11C illustrates the x and y output of the neural network model of Fig. 8 after a training period of 1500 seconds.
  • The real values of x, y corresponding to the inputs used for Figs. 11A-C follow the pretzel-shaped trajectory shown in a darker shade.
  • Figs. 12A-B illustrate the incremental convergence of the neural network model of Fig. 8 when a Gaussian sparse connectivity is used between the neurons 44 of the intermediate arrays 42 and the network array 62.
  • Figs. 12C-D illustrate the incremental convergence of the neural network model of Fig. 8 when a random sparse connectivity is used between the neurons 44 of the intermediate arrays 42 and the network array 62.
  • Figs. 13A-D illustrate the performances of the neural network model of Fig. 8 for varying degrees of damaged neurons.
  • Fig. 13A(a) shows the neural activity, or cortical coding, of the synapses between L1θ1 and L2θ1 for a network having 5% of neurons damaged.
  • Fig. 13A(b) shows the neural activity, or cortical coding, of the synapses within L2θ1 for a network having 5% of neurons damaged.
  • Fig. 13A(c) shows the output x, y of a network having 5% of neurons damaged, compared to the real values of x, y (darker circle) corresponding to the inputs used for producing the output.
  • Figs. 13B(a)(b)(c) show the same data as Figs. 13A(a)(b)(c) for a network having 8% of neurons damaged.
  • Figs. 13C(a)(b)(c) show the same data as Figs. 13A(a)(b)(c) for a network having 12% of neurons damaged.
  • Figs. 13D(a)(b)(c) show the same data as Figs. 13A(a)(b)(c) for a network having 16% of neurons damaged.
  • a neural network according to an embodiment of the present disclosure is robust to neuron damage, and produces satisfactory output even with significant neuron damage.
  • Fig. 14 illustrates a portion of a neural network or neural network model 118 according to an embodiment of the present disclosure, comprising an input array 12 of neurons 14 coupled to an intermediate array 42 of neurons 44 and interneurons 48.
  • the input array/layer 12 comprises first and second sub-arrays 120 and 122 of neurons 14.
  • the neurons 14 of the first sub-array 120 are provided for receiving input signals related to a first measured parameter.
  • the neurons 14 of the second sub-array 122 are provided for receiving input signals related to a second measured parameter.
  • the intermediate array 42 comprises rows and columns of neurons 44; the interneurons 48 being distributed among the neurons, wherein the axon of each neuron 14 of the first sub-array of neurons 120 forms an excitatory STDP synapse with the dendrite of a plurality of neurons 44 of a row of the intermediate array 42; and wherein the axon of each neuron 14 of the second sub- array of neurons 122 forms an excitatory STDP synapse with the dendrite of a plurality of neurons 44 of a column of the intermediate array 42.
  • The neurons 44 of the intermediate array 42 can be arranged according to another scheme not comprising rows and columns; or the neurons of the first and second sub-arrays 120, 122 can be connected to the neurons of intermediate array 42 according to a scheme, for example a sparse and random connection scheme, not following rows and columns in intermediate array 42.
  • The dendrite of each neuron 44 of the intermediate array 42 can form STDP synapses with the axons of 100 to 200 neurons 14 of the input array.
  • Sub-arrays 120, 122 can each comprise 1000 neurons, and the intermediate array can comprise 2000 neurons.
  • a neural network wherein a portion of the neural network comprises:
  • a first array having a first number of neurons, wherein the dendrite of each neuron of the first array is provided for receiving an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron; a second array having a second number of neurons, the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of a plurality of neurons of the first array; and the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of neighboring neurons of the second array.
  • the axon of each neuron of the second array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the second array; and the axon of each interneuron of the second array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the second array.
  • a neural network comprising first and second neural network portions according to any of Concepts 1 to 4.
  • a third array having a fourth number of neurons and a fifth number of interneurons distributed among the neurons of the third array, wherein the fifth number is smaller than the fourth number, wherein:
  • each neuron of the second array of the first neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of the third array
  • a first neural network portion comprising a first array having a first number of neurons and a second array having a second number of neurons, the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of a plurality of neurons of the first array; the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of neighboring neurons of the second array; and
  • each neuron of the second array of the first neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of the third array
  • a fourth array having a second number of neurons and a third number of interneurons distributed among the neurons of the fourth array, wherein: the axon of each neuron of the fourth array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the fourth array;
  • each neuron of the fourth array forms an excitatory non-STDP synapse with the axon of a corresponding neuron of the second array of the third neural network
  • the input signals to the first and second neural network portions relate to variable parameters that are to be correlated to the input signals to the third neural network portion.
  • said providing to the dendrite of each neuron of the first array an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron comprises: providing to the dendrite of each neuron of a first subset of neurons of the first array an input signal indicating that a first measured parameter gets closer to a predetermined value assigned to said neuron;
  • said providing a second array having a second number of neurons comprises providing a second array having rows and columns of neurons
  • each neuron of the first subset of neurons of the first array forms an excitatory STDP synapse with the dendrite of a plurality of neurons of a row of the second array
  • a third array having a fourth number of neurons and a fifth number of interneurons distributed among the neurons of the third array, wherein the fifth number is smaller than the fourth number, wherein the axon of each neuron of the third array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the third array; and the axon of each interneuron of the third array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the third array; wherein the dendrite of each neuron of the third array forms an excitatory STDP synapse with the axon of each neuron of the second array; and
  • a fourth array comprising as many neurons as the third array of neurons, wherein the dendrite of each neuron of the fourth array is provided for receiving an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron; and wherein the axon of each neuron of the fourth array forms an excitatory non-STDP synapse with the dendrite of a corresponding neuron of the third array;
  • Concept 25 A method of decoding an output of a neural network according to Concept 8; the method comprising: providing the first arrays of the first and second neural network portions with first and second input signals having a rate that increases when a measured parameter gets closer to a predetermined value assigned to the neurons of said first arrays;
  • Concept 26 The method of Concept 25 comprising, if the neurons of the middle of the fourth array have null firing rates, assigning to the neurons of lower position value a position value increased by the value N.
  • first and second sub-arrays of neurons with first and second input signals having a rate that increases when a measured parameter gets closer to a predetermined value assigned to the neurons of said first and second sub-arrays of neurons; assigning to each neuron of the third array of neurons an incremental position value comprised between 1 and N, N being the number of neurons of the third array; at any given time, measuring the firing rate of each neuron of the third array; and
  • Concept 28 The method of Concept 27 comprising, if the neurons of the middle of the third array have null firing rates, assigning to the neurons of lower position value a position value increased by the value N.

Abstract

A neural network, wherein a portion of the neural network comprises: a first array having a first number of neurons, wherein the dendrite of each neuron of the first array is provided for receiving an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron; and a second array having a second number of neurons, wherein the second number is smaller than the first number, the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of a plurality of neurons of the first array; the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of neighboring neurons of the second array.

Description

NEURAL NETWORK AND METHOD OF PROGRAMMING
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH
[001]. This invention was made with support from the United States Government under contract number HR0011-09-C-0001 (SyNAPSE) awarded by the Defense Advanced Research Projects Agency (DARPA). The United States Government has certain rights in the invention.
CROSS REFERENCE TO RELATED APPLICATIONS
[002]. This application is related to and claims priority to U.S. Provisional Application Serial No. 61/799,883, filed March 15, 2013, and to U.S. Non-Provisional Application Serial No. 14/015,001, filed August 30, 2013, each of which is incorporated herein as though set forth in full.
BACKGROUND OF THE DISCLOSURE
[003]. Field of the disclosure: The present disclosure relates to neural networks, for example computer-implemented neural networks, as well as to methods for programming such networks. In particular, the present disclosure relates to a fault tolerant neural network capable of learning arbitrary multiple transformations, and a method of programming said neural network.
[004]. Background: Sensory perception and action are interdependent. In humans and other species, a behavior may be triggered by an ongoing situation and reflects a being's immediate environmental conditions. This type of behavior is often referred to as a stimulus-response reflex. The interdependency between stimulus and response creates an action-perception cycle, in which a novel stimulus triggers actions that lead to a better perception of the being itself or of its immediate environmental conditions, and the cycle continues.
[005]. Human behavior is far more flexible than exclusive control by stimulus-response cycles would allow. One attribute of intelligent-based systems is the ability to learn new relations between environmental conditions and appropriate behavior during action-perception cycles. The primary mode of communication between neurons in the brain is encoded in the form of impulses, action potentials or spikes. The brain is composed of billions of neural cells, which are noisy, imprecise and unreliable analog devices. The neurons are complex adaptive structures that make connections between each other via synapses. A synapse has a presynaptic portion, comprising the axon of a neuron inputting a spike into the synapse, and a postsynaptic portion, comprising the dendrite of a neuron sensitive to the spike being received in the synapse. The synapses may change their function dramatically depending upon the spiking activity of the neurons on either side of the synapse. The synapse includes an adaptation mechanism that adjusts the weight, or gain, of the synapse according to a spike timing dependent plasticity (STDP) learning rule.
[006] Under the STDP rule, if an input spike to a neuron tends, on average, to occur immediately before that neuron's output spike, then that particular input is made somewhat stronger. On the other hand, if an input spike tends, on average, to occur immediately after an output spike, then that particular input is made somewhat weaker; hence "spike-timing-dependent plasticity". Thus, inputs that might be the cause of the post-synaptic neuron's excitation are made even more likely to contribute in the future, whereas inputs that are not the cause of the post-synaptic spike are made less likely to contribute in the future. The process continues until a subset of the initial set of connections remains, while the influence of all others is reduced to zero. Since a neuron produces an output spike when many of its inputs occur within a brief period, the subset of inputs that remain are those that tended to be correlated in time. In addition, since the inputs that occur before the output are strengthened, the inputs that provide the earliest indication of correlation eventually become the final inputs to the neuron.
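For illustration, the pair-based update that paragraph [006] describes can be sketched in a few lines of Python. The learning rates and time constant below are assumptions chosen for the example, not values taken from the disclosure:

```python
import math

# Illustrative pair-based STDP update.  The amplitudes (a_plus, a_minus)
# and time constant (tau, in ms) are assumptions, not disclosed values.
def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    dt = t_post - t_pre          # time between pre- and post-synaptic spikes
    if dt > 0:                   # pre fired just BEFORE post: strengthen
        return a_plus * math.exp(-dt / tau)
    if dt < 0:                   # pre fired just AFTER post: weaken
        return -a_minus * math.exp(dt / tau)
    return 0.0

w = 0.5
w += stdp_delta_w(t_pre=10.0, t_post=15.0)   # causal pairing: w increases
w += stdp_delta_w(t_pre=30.0, t_post=25.0)   # anti-causal pairing: w decreases
```

Making the depression amplitude slightly larger than the potentiation amplitude is a common convention that keeps weights from growing without bound; the disclosure does not prescribe particular amplitudes.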
[007] Brain architectures composed of assemblies of interacting neurons and synapses with STDP can solve complex tasks and exhibit complex behaviors in real time, with high precision and with very low power. However, modeling such activity in a physical network is complex.
[008] Neural networks using analog and digital circuitry, as well as computer-implemented methods, have been discussed to implement a STDP learning rule. However, current models do not have the capacity to be tolerant to faults (i.e., to a partial absence of sensory or motor input signals) introduced either from the beginning of the learning process or after some initial learning has taken place. Accordingly, the known systems that implement a STDP learning rule are incapable of learning, for example, arbitrary multiple transformations in a fault-tolerant fashion.
[009] References relevant to the above-described issues include: T. P. Vogels, K. Rajan and L. F. Abbott, "Neural Network Dynamics," Annual Review of Neuroscience, vol. 28, pp. 357-376, 2005; W. Gerstner and W. Kistler, Spiking Neuron Models - Single Neurons, Populations, Plasticity, Cambridge University Press, 2002; H. Markram, J. Lubke, M. Frotscher and B. Sakmann, "Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs," Science, vol. 275, pp. 213-215, 1997; G. Q. Bi and M. Poo, "Activity-induced synaptic modifications in hippocampal culture: dependence on spike timing, synaptic strength and cell type," Journal of Neuroscience, vol. 18, pp. 10464-10472, 1998; J. C. Magee and D. Johnston, "A synaptically controlled, associative signal for Hebbian plasticity in hippocampal neurons," Science, vol. 275, pp. 209-213, 1997; S. Song, K. D. Miller and L. F. Abbott, "Competitive Hebbian Learning Through Spike-Timing Dependent Synaptic Plasticity," Nature Neuroscience, vol. 3, pp. 919-926, 2000; A. P. Davison and Y. Fregnac, "Learning Cross-Modal Spatial Transformations through Spike-Timing Dependent Plasticity," Journal of Neuroscience, vol. 26, no. 2, pp. 5604-5615, 2006; Q. X. Wu, T. M. McGinnity, L. P. Maguire, A. Belatreche and B. Glackin, "2D co-ordinate transformation based on a spike-timing dependent plasticity learning mechanism," Neural Networks, vol. 21, pp. 1318-1327, 2008; and Q. X. Wu, T. M. McGinnity, L. P. Maguire, A. Belatreche and B. Glackin, "Processing visual stimuli using hierarchical spiking neural networks," International Journal of Neurocomputing, vol. 71, no. 10, pp. 2055-2068, 2008. Each of the above references is hereby incorporated by reference in its entirety.
[010] Fig. 1 illustrates a network model described in the above reference entitled "Learning Cross-Modal Spatial Transformations through Spike-Timing Dependent Plasticity". Fig. 1 shows a neural network that receives as input the angle Θ at the joint of an arm with one degree of freedom (df) and the position x of the end of the arm, in a vision-centered frame of reference. After a learning phase the neural network becomes capable of outputting x based on the angle Θ at the joint. The neural network 10 comprises a first one-dimensional array 12 of input neurons 14 that each generate spikes having a firing rate that increases as a function of the angle Θ getting closer to an angle assigned to the neuron. Fig. 1 illustrates the firing rate FR of all the neurons 14 of array 12 for a given value of the angle Θ. The neural network 10 further comprises a second one-dimensional array 16 of input neurons 18 that each generate spikes having a firing rate that increases as a function of the position x getting closer to a predetermined value assigned to the neuron. Fig. 1 illustrates the firing rate FR of all the neurons 18 of array 16 for a given value of the position x. The neural network 10 comprises a third one-dimensional array 20 of neurons 22.
[011] Connections are initially all-to-all (full connection) from the neurons 14 to the neurons 22, and the strength of the connections is subject to modification by STDP. Connections from the neurons 18 to the neurons 22 are one-to-one. The strength of these non-STDP (or non-plastic) connections is fixed.
[012] After a learning phase in which stimuli corresponding to random angles Θ and their equivalent positions x are sent to array 20, array 16 ceases to provide input to the array 20, and array 20 outputs a position x based on the angle Θ at the joint. Fig. 1 illustrates the firing rate FR output by the neurons 22 of array 20 in response to a given value of the angle Θ after the learning phase.
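The train-then-test protocol of Figs. 1-2 can be caricatured with a rate-based sketch. This is a deliberate simplification of the spiking model: the array size, tuning width, learning rate and the identity transform between angle and position are all assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 30                                  # neurons per array (assumption)

def tuning(value, n=N, width=2.0):
    """Firing rate rises as `value` nears each neuron's preferred index."""
    prefs = np.arange(n)
    return np.exp(-(prefs - value) ** 2 / (2.0 * width ** 2))

W = np.zeros((N, N))                    # plastic all-to-all weights, angle -> output
for _ in range(2000):                   # learning phase
    theta = rng.uniform(0, N - 1)       # random joint angle
    x = theta                           # matching arm position (identity, for illustration)
    pre = tuning(theta)                 # rates of the input array (array 12)
    post = tuning(x)                    # teacher drive via the fixed one-to-one array 16
    W += 0.01 * np.outer(post, pre)     # Hebbian proxy for STDP strengthening

# Test phase: the training array is silent; output is driven by theta alone.
out = W @ tuning(12.0)
peak = int(np.argmax(out))              # peak lands near the trained position
```

The outer-product update stands in for STDP only in the loose sense that co-active pre- and post-synaptic neurons are strengthened; the disclosure's actual mechanism is spike-timing based.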
[013] Fig. 2 illustrates in schematic form the neural network 10 of Fig. 1, and shows input array/layer 12 fully connected to output array/layer 20, and training array/layer 16 connected one-to-one to output array/layer 20.
[014] Fig. 3 schematizes a neural network 30 such as disclosed in the Wu et al. reference above. The neural network 30 comprises a training layer 16 connected one-to-one to an output layer 20 as detailed in Fig. 1. Further, neural network 30 comprises two input layers 12 topographically connected to the input of a network layer 32, the network layer 32 being fully connected to the input of output layer 20. As outlined above, the neural networks of Figs. 1-3 are not tolerant to faults such as a partial absence of sensory or motor input signals, introduced either from the beginning of the learning process or after some initial learning has taken place.
[015] There exists a need for neural networks that are tolerant to faults.
SUMMARY
[016] Narrowly, this writing presents a spiking model to learn arbitrary multiple transformations for a self-realizing network.
[017] An embodiment of the present disclosure comprises a neural network, wherein a portion of the neural network comprises: a first array having a first number of neurons, wherein the dendrite of each neuron of the first array is provided for receiving an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron; a second array having a second number of neurons, the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of a plurality of neurons of the first array; and the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of neighboring neurons of the second array.
[018] According to an embodiment of the present disclosure, the second number is smaller than the first number.
[019] According to an embodiment of the present disclosure, the second array further comprises a third number of interneurons distributed among the neurons of the second array, wherein the third number is smaller than the second number, wherein: the axon of each neuron of the second array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the second array; and the axon of each interneuron of the second array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the second array.
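The interneuron arrangement of paragraph [019] can be sketched as a connectivity-building routine. The one-interneuron-per-four-neurons spacing and the neighborhood radius below are illustrative assumptions; the disclosure only requires that there be fewer interneurons than neurons:

```python
# Connectivity sketch: interneurons interspersed among the neurons of the
# second array.  Neurons excite neighboring interneurons; interneurons
# inhibit neighboring neurons and interneurons.  The spacing and radius
# are assumptions made for illustration.
def build_array(n_neurons=20, inter_every=4, radius=2):
    neurons = [("N", i) for i in range(n_neurons)]
    inters = [("I", i) for i in range(0, n_neurons, inter_every)]
    synapses = []
    # Each excitatory neuron drives its neighboring interneurons (STDP).
    for _, ni in neurons:
        for _, ii in inters:
            if abs(ni - ii) <= radius:
                synapses.append((("N", ni), ("I", ii), "excitatory-STDP"))
    # Each interneuron inhibits neighboring neurons and interneurons (STDP).
    for _, ii in inters:
        for cell in neurons + inters:
            if cell != ("I", ii) and abs(cell[1] - ii) <= radius:
                synapses.append((("I", ii), cell, "inhibitory-STDP"))
    return neurons, inters, synapses

neurons, inters, synapses = build_array()
assert len(inters) < len(neurons)       # the "third number" is smaller
```

This local excitation-inhibition loop is what gives the array its winner-take-all flavor: an active neuron recruits nearby interneurons, which in turn suppress its neighbors.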
[020] According to an embodiment of the present disclosure, the dendrite of each neuron of the first array is provided for receiving an input signal having a rate that increases when a measured parameter gets closer to a predetermined value assigned to said neuron.
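One way to realize the rate-coded input of paragraph [020] is a Poisson-like spike generator whose rate peaks at the neuron's assigned value. The Gaussian tuning shape and all constants below are assumptions, not requirements of the disclosure:

```python
import numpy as np

rng = np.random.default_rng(1)

def input_rate(measured, assigned, max_rate=60.0, width=0.1):
    """Rate (Hz) that grows as `measured` nears the neuron's `assigned`
    value; the Gaussian shape and constants are illustrative assumptions."""
    return max_rate * np.exp(-(measured - assigned) ** 2 / (2.0 * width ** 2))

def poisson_spikes(rate_hz, duration_s=1.0, dt=0.001):
    """Bernoulli approximation of a Poisson spike train at `rate_hz`."""
    return rng.random(int(duration_s / dt)) < rate_hz * dt

# A neuron assigned the value 0.5 fires briskly when the measured
# parameter is near 0.5 and stays nearly silent when it is far away.
near = int(poisson_spikes(input_rate(0.5, 0.5)).sum())
far = int(poisson_spikes(input_rate(0.9, 0.5)).sum())
```

An array of such neurons with evenly spaced assigned values turns a scalar measurement into the population firing pattern the first array expects.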
[021] An embodiment of the present disclosure comprises a neural network having a first and a second neural network portions as described above, as well as a third array having a fourth number of neurons and a fifth number of interneurons distributed among the neurons of the third array, wherein the fifth number is smaller than the fourth number, wherein: the axon of each neuron of the third array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the third array; and the axon of each interneuron of the third array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the third array; wherein the axon of each neuron of the second array of the first neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of the third array; and wherein the axon of each neuron of the second array of the second neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of the third array.
[022] According to an embodiment of the present disclosure, the third array comprises rows and columns of neurons, wherein the axon of each neuron of the second array of the first neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of a row of the third array; and wherein the axon of each neuron of the second array of the second neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of a column of the third array.
[023] According to an embodiment of the present disclosure, the neural network comprises a third neural network portion as described above, as well as a fourth array having a second number of neurons and a third number of interneurons distributed among the neurons of the fourth array, wherein: the axon of each neuron of the fourth array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the fourth array; and the axon of each interneuron of the fourth array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the fourth array; wherein the dendrite of each neuron of the fourth array forms an excitatory STDP synapse with the axon of a plurality of neurons of the third array; and wherein the dendrite of each neuron of the fourth array forms an excitatory non-STDP synapse with the axon of a corresponding neuron of the second array of the third neural network.
[024] According to an embodiment of the present disclosure, the input signals to the first and second neural network portions relate to variable parameters that are to be correlated to the input signals to the third neural network.
[025] According to an embodiment of the present disclosure, the first array of neurons comprises first and second sub-arrays of neurons provided for receiving input signals related to first and second measured parameters, respectively.
[026] According to an embodiment of the present disclosure, the second array comprises rows and columns of neurons; wherein the axon of each neuron of the first sub-array of neurons forms an excitatory STDP synapse with the dendrite of a plurality of neurons of a row of the second array; and wherein the axon of each neuron of the second sub-array of neurons forms an excitatory STDP synapse with the dendrite of a plurality of neurons of a column of the second array.
[027] According to an embodiment of the present disclosure, the second array further comprises a third number of interneurons distributed among the neurons of the second array, wherein the third number is smaller than the second number, wherein the axon of each neuron of the second array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the second array; and the axon of each interneuron of the second array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the second array.
[028] According to an embodiment of the present disclosure, the neural network further comprises: a third array having a fourth number of neurons and a fifth number of interneurons distributed among the neurons of the third array, wherein the fifth number is smaller than the fourth number, wherein: the axon of each neuron of the third array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the third array; and the axon of each interneuron of the third array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the third array; wherein the dendrite of each neuron of the third array forms an excitatory STDP synapse with the axon of each neuron of the second array.
[029] According to an embodiment of the present disclosure, the neural network comprises a fourth array comprising as many neurons as the third array of neurons, wherein the dendrite of each neuron of the fourth array is provided for receiving an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron; wherein the axon of each neuron of the fourth array forms an excitatory non-STDP synapse with the dendrite of a corresponding neuron of the third array.
[030] According to an embodiment of the present disclosure, the input signals to the first and second sub-arrays of neurons relate to variable parameters that are to be correlated to the input signals to the fourth array.
[031] According to an embodiment of the present disclosure, the fourth array of neurons is a sub-array of neurons of a further neural network as described above.
[032] Another embodiment of the present disclosure comprises a method of programming a neural network, the method comprising: providing a first neural network portion comprising a first array having a first number of neurons and a second array having a second number of neurons, wherein the second number is smaller than the first number, the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of a plurality of neurons of the first array; the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of neighboring neurons of the second array; and providing to the dendrite of each neuron of the first array an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron.
[033] According to an embodiment of the present disclosure, the method further comprises providing the second array with a third number of interneurons distributed among the neurons of the second array, wherein the third number is smaller than the second number, wherein: the axon of each neuron of the second array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the second array; and the axon of each interneuron of the second array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the second array.
[034] According to an embodiment of the present disclosure, the method comprises providing the dendrite of each neuron of the first array with an input signal having a rate that increases when a measured parameter gets closer to a predetermined value assigned to said neuron.
[035] According to an embodiment of the present disclosure, the method comprises: providing a second neural network portion having the same structure as the first neural network portion; and providing a third array having a fourth number of neurons and a fifth number of interneurons distributed among the neurons of the third array, wherein the fifth number is smaller than the fourth number, wherein: the axon of each neuron of the third array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the third array; and the axon of each interneuron of the third array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the third array; wherein the axon of each neuron of the second array of the first neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of the third array; and wherein the axon of each neuron of the second array of the second neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of the third array; and providing to the dendrite of each neuron of the first array of the second neural network portion an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron.
[036] According to an embodiment of the present disclosure, the method comprises: providing a third neural network portion having the same structure as the first neural network portion; providing a fourth array having a second number of neurons and a third number of interneurons distributed among the neurons of the fourth array, wherein: the axon of each neuron of the fourth array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the fourth array; and the axon of each interneuron of the fourth array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the fourth array; wherein the dendrite of each neuron of the fourth array forms an excitatory STDP synapse with the axon of a plurality of neurons of the third array; and wherein the dendrite of each neuron of the fourth array forms an excitatory non-STDP synapse with the axon of a corresponding neuron of the second array of the third neural network; and providing to the dendrite of each neuron of the first array of the third neural network portion an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron.
[037] According to an embodiment of the present disclosure, the input signals to the first and second neural network portions relate to variable parameters that are to be correlated to the input signals to the third neural network portion.
[038] According to an embodiment of the present disclosure, said providing to the dendrite of each neuron of the first array an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron comprises: providing to the dendrite of each neuron of a first subset of neurons of the first array an input signal indicating that a first measured parameter gets closer to a predetermined value assigned to said neuron; providing to the dendrite of each neuron of a second subset of neurons of the first array an input signal indicating that a second measured parameter gets closer to a predetermined value assigned to said neuron.
[039] According to an embodiment of the present disclosure, said providing a second array having a second number of neurons comprises providing a second array having rows and columns of neurons, wherein the axon of each neuron of the first subset of neurons of the first array forms an excitatory STDP synapse with the dendrite of a plurality of neurons of a row of the second array; and wherein the axon of each neuron of the second subset of neurons of the first array forms an excitatory STDP synapse with the dendrite of a plurality of neurons of a column of the second array.
[040] According to an embodiment of the present disclosure, the method further comprises providing the second array with a third number of interneurons distributed among the neurons of the second array, wherein the third number is smaller than the second number, wherein the axon of each neuron of the second array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the second array; and the axon of each interneuron of the second array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the second array.
[041] According to an embodiment of the present disclosure, the method comprises: providing a third array having a fourth number of neurons and a fifth number of interneurons distributed among the neurons of the third array, wherein the fifth number is smaller than the fourth number, wherein the axon of each neuron of the third array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the third array; and the axon of each interneuron of the third array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the third array; wherein the dendrite of each neuron of the third array forms an excitatory STDP synapse with the axon of each neuron of the second array; and providing a fourth array comprising as many neurons as the third array of neurons, wherein the dendrite of each neuron of the fourth array is provided for receiving an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron; and wherein the axon of each neuron of the fourth array forms an excitatory non-STDP synapse with the dendrite of a corresponding neuron of the third array; the method further comprising providing to the dendrite of each neuron of the fourth array an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron; wherein the input signals to the first and second subset of neurons relate to variable parameters that are to be correlated to the input signals to the fourth array.
[042] Another embodiment of the present disclosure comprises a method of decoding an output of a neural network having first and second neural network portions as detailed above; the method comprising: providing the first arrays of the first and second neural network portions with first and second input signals having a rate that increases when a measured parameter gets closer to a predetermined value assigned to the neurons of said first arrays; assigning to each neuron of the fourth array of neurons an incremental position value comprised between 1 and N, N being the number of neurons of the fourth array; at any given time, measuring the firing rate of each neuron of the fourth array; and estimating the output of the neural network, at said any given time, as corresponding to the neuron of the fourth array having a position value equal to a division of the sum of the position value of each neuron of the fourth array, weighted by its firing rate at said any given time, by the sum of the firing rates of each neuron of the fourth array at said any given time.
[043] According to an embodiment of the present disclosure, the method comprises, if the neurons of the middle of the fourth array have null firing rates, assigning to the neurons of lower position value a position value increased by the value N.
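Concretely, the decoding of paragraphs [042]-[043] amounts to a circular population-vector average. The sketch below treats the central half of the array as "the middle", which is an assumption; the disclosure does not pin down which neurons count as the middle:

```python
import numpy as np

def decode(rates):
    """Firing-rate-weighted mean of the position values 1..N.  If the
    middle of the array is silent while the edges fire (activity has
    wrapped around), the lower-half positions are shifted up by N before
    averaging, and the estimate is wrapped back into range.  Treating the
    central half of the array as "the middle" is an assumption."""
    rates = np.asarray(rates, dtype=float)
    n = len(rates)
    pos = np.arange(1, n + 1, dtype=float)
    middle = rates[n // 4 : n - n // 4]
    if np.all(middle == 0) and rates.sum() > 0:
        pos[: n // 2] += n                 # unwrap the lower half
    est = (pos * rates).sum() / rates.sum()
    return est - n if est > n else est

# A single bump decodes to its center; a bump split across the two edges
# decodes to a position between neuron N and neuron 1.
center = decode([0, 1, 2, 1, 0, 0, 0, 0])   # -> 3.0
wrapped = decode([2, 1, 0, 0, 0, 0, 1, 2])  # -> 0.5
```

Without the shift, the split bump would wrongly average to the middle of the array; the wrap-around rule of paragraph [043] is what keeps the estimate on the correct edge.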
[044] Another embodiment of the present disclosure comprises a method of decoding an output of a neural network having first and second sub-arrays of neurons as disclosed above; the method comprising: providing the first and second sub-arrays of neurons with first and second input signals having a rate that increases when a measured parameter gets closer to a predetermined value assigned to the neurons of said first and second sub-arrays of neurons; assigning to each neuron of the third array of neurons an incremental position value comprised between 1 and N, N being the number of neurons of the third array; at any given time, measuring the firing rate of each neuron of the third array; and estimating the output of the neural network, at said any given time, as corresponding to the neuron of the third array having a position value equal to a division of the sum of the position value of each neuron of the third array, weighted by its firing rate at said any given time, by the sum of the firing rates of each neuron of the third array at said any given time.
[045] According to an embodiment of the present disclosure, the method comprises, if the neurons of the middle of the third array have null firing rates, assigning to the neurons of lower position value a position value increased by the value N.
[046] An embodiment of the present disclosure comprises a neural network that includes a plurality of input channels; an intermediate layer of neurons including a plurality of recurrent connections between a plurality of the neurons; a plurality of inhibitor interneurons connected to the intermediate layer of neurons; a plurality of first connections configured to connect the intermediate layer of neurons to a prediction layer; and a plurality of second connections configured to connect the prediction layer to an output layer.
[047] According to an embodiment of the present disclosure, the output layer is configured to be connected to a further layer of neurons, and the further layer of neurons may be connected to one or more additional prediction layers by one or more connections. The one or more additional prediction layers may be configured to be connected to one or more additional circuits. The intermediate layer of neurons may be connected to the plurality of inhibitor interneurons by a plurality of electrical synapses. The input channels may provide a spike train to the first layer of neurons.
[048] An embodiment of the present disclosure comprises a non-transitory computer-useable storage medium for signal delivery in a system including multiple circuits, said medium having a computer-readable program, wherein the program upon being processed on a computer causes the computer to implement the steps of: receiving at a first layer of neurons a spike train; transferring a plurality of inhibitor interneurons to the first layer of neurons; passing the first layer of neurons, by a plurality of first connections, to a prediction layer; and coupling the prediction layer to an output circuit by a plurality of second connections.
[049] An embodiment of the present disclosure comprises a method of signal delivery in a system including a plurality of input channels including receiving at a first layer of neurons a spike train; transferring a plurality of inhibitor interneurons to the first layer of neurons; passing the first layer of neurons, by a plurality of first connections, to a prediction layer; and coupling the prediction layer to an output circuit by a plurality of second connections.
BRIEF DESCRIPTION OF THE FIGURES
[050] The disclosure may be better understood by referring to the following figures. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the disclosure. In the figures, like reference numerals designate corresponding parts throughout the different views.
[051] Fig. 1 illustrates a known neural network model.
[052] Fig. 2 is a schematic of the model of Fig. 1.
[053] Fig. 3 is a schematic of another known neural network model.
[054] Fig. 4 illustrates a portion of a neural network model according to an embodiment of the present disclosure.
[055] Fig. 5 illustrates a portion of a neural network model according to an embodiment of the present disclosure.
[056] Fig. 6 illustrates a portion of a neural network model according to an embodiment of the present disclosure.
[057] Fig. 7 is a schematic of a neural network model according to an embodiment of the present disclosure.
[058] Fig. 8 illustrates the application of a neural network model according to an embodiment of the present disclosure to a 2DL robotic arm.
[059] Fig. 9 illustrates the synaptic conductances between various layers during learning showing the emergence of topological organization of conductances in the neural network model of Fig. 8.
[060] Figs. 10A-B illustrate the output of layer in response to inputs on layers L1θ1 and L1θ2 of the neural network model of Fig. 8.
[061] Figs. 11A-C illustrate the incremental convergence of the neural network model of Fig. 8 as a function of learning.
[062] Figs. 12A-D illustrate the incremental convergence of the neural network model of Fig. 8 for Gaussian sparse connectivity and random sparse connectivity.
[063] Figs. 13A-D illustrate the performances of the neural network model of Fig. 8 for varying degrees of damaged neurons.
[064] Fig. 14 is a schematic of a neural network model according to an embodiment of the present disclosure.
[065] FIG. 15 is a schematic of a further embodiment of the neural network model of Fig. 14.
[066] FIG. 16 is a schematic of a further embodiment of the neural network model of Fig. 14.
DETAILED DESCRIPTION
[067] Each of the additional features and teachings disclosed below can be utilized separately or in conjunction with other features and teachings to provide a computer-implemented device, system, and/or method for a neural network model to learn arbitrary multiple transformations for a self-realizing network. Representative examples of embodiments of the present disclosure, which examples utilize many of these additional features and teachings both separately and in combination, will now be described in further detail with reference to the attached drawings. The present detailed description is merely intended to teach a person of skill in the art further details for practicing preferred aspects of the present teachings and is not intended to limit the scope of the disclosure. Therefore, combinations of features and steps disclosed in the following detailed description may not be necessary to practice embodiments of the present disclosure in the broadest sense, and are instead taught merely to particularly describe representative examples of the present teachings.
[068] The following are expressly incorporated by reference in their entirety herein: "Self-Organizing Spiking Neural Model for Learning Fault-Tolerant Spatio-Motor Transformations," IEEE Transactions on Neural Networks and Learning Systems, Vol. 23, No. 10, October 2012; U.S. patent application no. 13/679,727, filed 11/16/2012, and entitled "Spike Domain Neuron Circuit with Programmable Kinetic Dynamics, Homeostatic Plasticity and Axonal Delays;" U.S. patent application no. 13/415,812, filed on 3/8/2012, and entitled "Spike Timing Dependent Plasticity Apparatus, System and Method;" and U.S. patent application no. 13/708,823, filed on 12/7/2012, and entitled "Cortical Neuromorphic Network System and Method."
[069] Devices, methods, and systems are hereby described for a neural network model; in particular a spiking model capable of learning arbitrary multiple transformations for a self-realizing network (SRN). The described systems and methods may be used to develop self-organizing robotic platforms (SORB) that autonomously discover and extract key patterns during or from real world interactions. In some configurations, the interactions may occur without human intervention. The described SRN may be configured for unmanned ground and air vehicles for intelligence, surveillance, and reconnaissance (ISR) applications.
[070] Fig. 4 illustrates a portion of a neural network or neural network model 40 according to an embodiment of the present disclosure. According to an embodiment of the present disclosure, an input array/layer 12 comprises a first number of neurons 14. The dendrite of each neuron 14 of the input array 12 is provided for receiving an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron.
[071] According to an embodiment of the present disclosure, the input signal sent to each neuron 14, relating to a measured parameter, has a rate that increases when the measured parameter gets closer to a predetermined value assigned to said neuron. Fig. 4 shows the firing rate FR of the input signals at a given time, with respect to the position value PV of the neurons 14. According to an embodiment of the present disclosure, the neurons are integrate and fire neurons, or operate under a model of integrate and fire neurons, and the neural network or neural network model is a spiking neural network or spiking neural network model.
[072] According to an embodiment of the present disclosure, the portion of neural network model 40 comprises an intermediate array/layer 42 having a second number of neurons 44. According to an embodiment of the present disclosure, the second number is smaller than the first number. According to an embodiment of the present disclosure, the dendrite of each neuron 44 of the intermediate array forms an excitatory STDP synapse with the axon of a plurality of neurons 14 of the input array 12. According to an embodiment of the present disclosure, the dendrite of each neuron 44 of the intermediate array 42 can form STDP synapses with the axon of 100 to 200 neurons 14 of the input array.
[073] According to an embodiment of the present disclosure, the dendrite of each neuron 44 of the intermediate array 42 forms an excitatory STDP synapse 46 with the axon of neighboring neurons 44 of the intermediate array 42. According to an embodiment of the present disclosure, neighboring neurons can be a predetermined number of closest neurons in both directions of the array. According to an embodiment of the present disclosure, the intermediate array 42 further comprises a third number of interneurons 48 distributed among the neurons 44, wherein the third number is smaller than the second number. According to an embodiment of the present disclosure, the third number can be about one fourth of the second number. According to an embodiment of the present disclosure, the interneurons 48 of an array are equally distributed among the neurons 44, for example according to a periodic or pseudorandom scheme. According to an embodiment of the present disclosure, the axon of each neuron 44 of the intermediate array 42 forms an excitatory STDP synapse 50 with the dendrite of a neighboring interneuron 48 of the intermediate array 42; and the axon of each interneuron 48 of the intermediate array 42 forms an inhibitory STDP synapse 52 with the dendrite of neighboring neurons 44 and interneurons 48 of the intermediate array 42. The recurrence in the intermediate layer enables a neural network or neural network model according to an embodiment of the present disclosure to be fault-tolerant. This is because neurons in the intermediate layer that do not receive inputs from the input layer neurons may receive inputs from within the neurons in the intermediate layer. This allows the structure to interpolate the network activity despite the absence of feedforward inputs.
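The one-dimensional wiring pattern described above can be sketched as follows. The array size, neighborhood radius, and interneuron spacing used here are illustrative assumptions (the text specifies only that about one fourth of the cells are interneurons, distributed for example periodically), and the function names are hypothetical:

```python
def intermediate_array_synapses(n=20, radius=2, inh_spacing=4):
    """Sketch of the intermediate-array wiring described above.

    Cells are indexed 0..n-1 along the array; roughly one cell per
    inh_spacing is an interneuron (periodic scheme). Excitatory STDP
    synapses run from each neuron to its neighbors (neurons and
    interneurons alike); inhibitory synapses run from each interneuron
    to its neighbors. Returns a list of (pre, post, kind) tuples.
    """
    interneurons = set(range(inh_spacing // 2, n, inh_spacing))
    synapses = []
    for i in range(n):
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            if j == i:
                continue
            # interneuron axons inhibit neighbors; neuron axons excite them
            kind = "inhibitory" if i in interneurons else "excitatory"
            synapses.append((i, j, kind))
    return synapses
```

This captures only the topology; synaptic conductances and STDP dynamics would be layered on top of these connection lists.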
[074] Fig. 5 illustrates a portion of a neural network or neural network model 60 according to an embodiment of the present disclosure. According to an embodiment of the present disclosure, the portion of neural network model 60 comprises two portions of neural model 40, 58 as described in relation with Fig. 4.
[075] According to an embodiment of the present disclosure, the portion of neural network model 60 further comprises a network array 62 having a fourth number of neurons 64 and a fifth number of interneurons 68 distributed among the neurons of the network array, wherein the fifth number is smaller than the fourth number. According to an embodiment of the present disclosure, the axon of each neuron 64 of the network array forms an excitatory STDP synapse 70 with the dendrite of a neighboring interneuron 68 of the network array 62. According to an embodiment of the present disclosure, the axon of each interneuron 68 of the network array 62 forms an inhibitory STDP synapse 72 with the dendrite of neighboring neurons 64 and interneurons 68 of the network array 62. According to an embodiment of the present disclosure, the axon of each neuron 44 of the intermediate array 42 of the first neural network portion 40 forms an excitatory STDP synapse 74 with the dendrite of a plurality of neurons 64 of the network array 62. According to an embodiment of the present disclosure, the axon of each neuron 44 of the second array 42 of the second neural network portion 58 forms an excitatory STDP synapse 76 with the dendrite of a plurality of neurons 64 of the network array.
[076] According to an embodiment of the present disclosure, the network array 62 comprises rows and columns of neurons 64; the axon of each neuron 44 of the second array 42 of the first neural network portion 40 forms an excitatory STDP synapse 74 with the dendrite of a plurality of neurons 64 of a row of the network array 62. The axon of each neuron 44 of the second array 42 of the second neural network portion 58 then forms an excitatory STDP synapse 76 with the dendrite of a plurality of neurons 64 of a column of the network array 62.
[077] According to an embodiment of the present disclosure, the axon of each neuron 44 of the second array 42 of the first neural network portion 40 forms an excitatory STDP synapse 74 with the dendrite of a plurality of neurons 64 of a Gaussian neighborhood of neurons 64 of the network array 62; and the axon of each neuron 44 of the second array 42 of the second neural network portion 58 forms an excitatory STDP synapse 76 with the dendrite of a plurality of neurons 64 of a Gaussian neighborhood of neurons 64 of the network array 62.
[078] According to an embodiment of the present disclosure, the axon of each neuron 44 of the second array 42 of the first neural network portion 40 forms an excitatory STDP synapse 74 with the dendrite of a plurality of random neurons 64 of the network array; and the axon of each neuron 44 of the second array 42 of the second neural network portion 58 forms an excitatory STDP synapse 76 with the dendrite of a plurality of random neurons 64 of the network array 62.
[079] Fig. 6 illustrates a portion of a neural network or neural network model 80 according to an embodiment of the present disclosure, comprising the portion of neural network 60 described in relation with Fig. 5. For clarity, portions 40 and 58 are not illustrated. According to an embodiment of the present disclosure, neural network 80 comprises a third neural network portion 82 as described in relation with Fig. 4, comprising an input array (not shown) arranged for receiving input signals, and an intermediate array 42 having neurons 44 and interneurons 48. Neural network portion 82 is a training portion of neural network 80. According to an embodiment of the present disclosure, neural network 80 also comprises an output array 84 having a same number of neurons 86 as the intermediate array 42 of portion 82. According to an embodiment of the present disclosure, output array 84 comprises interneurons 88 distributed among the neurons 86. Interneurons 88 can be in the same number as in intermediate array 42. According to an embodiment of the present disclosure, the axon of each neuron 86 of the output array forms an excitatory STDP synapse 90 with the dendrite of a neighboring interneuron 88; and the axon of each interneuron 88 of the output array forms an inhibitory STDP synapse 92 with the dendrite of neighboring neurons 86 and interneurons 88 of the output array. According to an embodiment of the present disclosure, the dendrite of each neuron 86 of the output array forms an excitatory STDP synapse 94 with the axon of a plurality of neurons 64 of the network array 62; and the dendrite of each neuron 86 of the output array 84 forms an excitatory non-STDP synapse 96 with the axon of a corresponding neuron 44 of the intermediate array 42 of the neural network portion 82.
[080] According to an embodiment of the present disclosure, the input signals to the neural network portions 40 and 58 relate to variable parameters that are to be correlated to input signals to training portion 82 during a training period.
[081] According to an embodiment of the present disclosure, after a training period, input signals are no longer sent to training portion 82, and the signals at the axon of neurons 86 of the output array provide the output of the neural network 80 to input signals provided to the neural network portions 40 and 58.
[082] Fig. 7 schematizes a neural network 80 according to an embodiment of the present disclosure. The neural network 80 comprises a training portion 82, comprising an input array/layer 12 connected to an intermediate array/layer 42, connected as detailed above to an output array/layer 36. Neural network 80 further comprises two input portions 40 and 58, each having an input array/layer 12 connected to an intermediate layer 42; the intermediate layers being connected to a network layer 62, itself connected to output layer 84. According to an embodiment of the present disclosure, input portions 40 and 58 can be of identical or different sizes. For example, an input portion having a large number of input neurons can be used to observe a parameter with increased precision, and conversely.
[083] According to an embodiment of the present disclosure, neural network 80 can comprise more than one output layer 84 and more than one training portion such as training portion 82. Where neural network 80 comprises an additional output layer and one or more additional training portions, having sizes identical to, or different from, output layer 84 and training portion 82, the additional output layers and training portions can be connected to network layer 62 consistently with output layer 84 and training portion 82. The additional training portions will then receive as input additional parameters to be correlated with the parameters input to portions 40 and 58 during the training period, and the additional output layers will output said additional parameters in response to said parameters input to portions 40 and 58 after the training period.
[084] According to an embodiment of the present disclosure, neural network 80 can comprise only one input portion 40 or more input portions than the two input portions 40 and 58. The neural network can then comprise more than one network layer 62, as well as intermediate network layers 62, if appropriate. Any number of input layers may be used depending on the application and the desired configuration. For example, the number of layers may reach 100 layers or more.
[085] Fig. 8 illustrates the application of a neural network model 100 according to an embodiment of the present disclosure to a planar 2DL robotic arm 102. According to an embodiment of the present disclosure, the 2DL robotic arm 102 comprises a first arm 104 capable of making an angle Θ1 with respect to a support 106 at a planar joint 108 arranged at a first end of arm 104. According to an embodiment of the present disclosure, the 2DL robotic arm 102 comprises a second arm 110 capable of making an angle Θ2, in the same plane as Θ1, with respect to the first arm 104 at a planar joint 112 arranged at a second end of arm 104.
[086] According to an embodiment of the present disclosure, neural network model 100 comprises a first input layer L1θ1 coupled in a sparse feedforward configuration via STDP synapses to a first intermediate layer L2θ1, corresponding to arrays 12 and 42 of the first neural network portion 40 of Fig. 7. According to an embodiment of the present disclosure, neural network model 100 comprises a second input layer L1θ2 and a second intermediate layer L2θ2, corresponding to arrays 12 and 42 of the second neural network portion 58 of Fig. 7.
[087] According to an embodiment of the present disclosure, neural network model 100 comprises a network layer L3 corresponding to array 62 of Fig. 7 and connected to the first and second intermediate layers L2θ1, L2θ2.
[088] According to an embodiment of the present disclosure, neural network model 100 comprises a first training layer L1x and a first intermediate layer L2x, corresponding to arrays 12 and 42 of the training neural network portion 82 of Fig. 7. According to an embodiment of the present disclosure, neural network model 100 comprises a second training layer corresponding to arrays 12 and 42 of an additional (not shown in Fig. 7) training portion consistent with training portion 82 of Fig. 7.
[089] According to an embodiment of the present disclosure, neural network model 100 comprises a first output layer L4x corresponding to layer 84 of Fig. 7. According to an embodiment of the present disclosure, neural network model 100 comprises a second output layer corresponding to an additional (not shown in Fig. 7) output layer consistent with output layer 84 of Fig. 7.
[090] Table (a) below illustrates the number of neurons that were used according to an embodiment of the present disclosure for the various layers/arrays of neural network model 100.
[091] Further, table (b) below illustrates the type and number of synapses existing between the neurons of the various layers of neural network model 100. According to embodiments of the present disclosure, an electrical synapse may refer to a mathematical model of a synapse for use in applications including hardware, software, or a combination of both.
[092] According to an embodiment of the present disclosure, input layer L1θ1 and input layer L1θ2 received input signals corresponding to the values of angles Θ1 and Θ2, having spiking rates for example comprised between 1 Hz and 100 Hz. For example, the spiking rate of a neuron m of layer L1θ1 was high when the angle of joint 108 was close to an angular position θ1m associated with neuron m. According to an embodiment of the present disclosure, the spiking rates of the neighboring neurons (m−1, m+1, etc.) responded in a Gaussian fashion, with lower spiking rates farther away from the neuron that spikes maximally. It is noted that according to an embodiment of the present disclosure, the neurons may respond to a small range of values for the variable of interest (e.g., Θ1 for L1θ1). The signals corresponding to Θ1 and Θ2 were for example generated by proprioception, i.e., from the internal state of the robotic arm.
[093] According to an embodiment of the present disclosure, training layer L1x and training layer L1y received input signals corresponding to the position of the distal end of arm 110 in the plane of motion of the arm, in a coordinate system having x and y axes. The signals corresponding to x and y were for example generated using the processing of an image capture of the robotic arm, with:
x = l1 cos(θ1) + l2 cos(θ1 + θ2)

y = l1 sin(θ1) + l2 sin(θ1 + θ2)

[094] where l1 and l2 are the lengths of the two arms 104, 110 of the robot. In one embodiment, the joint angles (θ1, θ2) ranged from 0° to 360° while x and y ranged from -1 to 1.
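The forward kinematics above can be sketched directly. The segment lengths below are illustrative defaults (chosen so that x and y stay within the [-1, 1] range mentioned in the text), not values from the disclosure:

```python
import math

def forward_kinematics(theta1, theta2, l1=0.5, l2=0.5):
    """End-effector position of the planar 2-link arm (equations above).

    theta1, theta2 are joint angles in radians; l1, l2 are the segment
    lengths (0.5/0.5 are illustrative so that x, y lie in [-1, 1]).
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

With both joints at zero the arm is fully extended along the x axis, giving (l1 + l2, 0).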
[095] According to an embodiment of the present disclosure, the firing rate over time of the input and training signals can be represented by a cosine or similar curve. The firing rate r of a neuron n, when the encoded value is centered on neuron m, may be expressed as follows:

r = R0 + R1 (e^(−(n−m)²/(2σ²)) + e^(−(n−m−N)²/(2σ²)) + e^(−(n−m+N)²/(2σ²)))

where R0 is a minimum firing rate, R1 is a maximum firing rate, σ represents a standard deviation of neuron location that is used in the Gaussian function to weight the firing rate depending upon the neuron location, and N is a total number of neurons in an input layer. The three exponential terms wrap the Gaussian around the two ends of the array.
[096] In one embodiment, the firing rate can be comprised between 1 Hz and 100 Hz, preferably between 10 Hz and 80 Hz, and σ may be 5.
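The Gaussian tuning of paragraphs [092] and [095] can be sketched as follows, assuming (as reconstructed above) that the three exponential terms wrap the Gaussian around the array ends. The R0 and R1 defaults are illustrative values chosen to keep rates within the 10-80 Hz range; only σ = 5 is stated in the text:

```python
import math

def firing_rate(n, m, N, r0=10.0, r1=70.0, sigma=5.0):
    """Firing rate (Hz) of neuron n when the encoded value peaks at neuron m.

    N is the number of neurons in the input layer; the -N and +N shifted
    Gaussian terms make the tuning curve wrap around the array ends.
    """
    d = n - m
    g = sum(math.exp(-(d + k) ** 2 / (2.0 * sigma ** 2)) for k in (0, -N, N))
    return r0 + r1 * g
```

A neuron at the peak fires near the maximum rate, and a neuron at position 0 still responds strongly when the peak sits at position N−1, because the tuning wraps around.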
[097] According to an embodiment of the present disclosure, to compensate for variable synaptic path lengths between the input layers from joint angle space to L3 and between input layers from position space to L3 (the position space having shorter path lengths than the joint angle space pathways to layer L3), a delay d in the feedback pathways (i.e., L1x to L2x) may be used. In biological systems, this feedback may be similar to a delay in the proprioceptive feedback either from a visual system or through additional processing in the sensory cortex.
[098] According to an embodiment of the present disclosure, a leaky integrate and fire neuron model can be used in which a neuron receives multiple excitatory input current signals (I1, I2, I3...) and produces a single output spike signal. The output information can be encoded into the timing of these spikes (t1, t2...). The potential, V, of the leaky integrate and fire model can be determined using the membrane equation as:

τm dV/dt = (Vrest − V) + Σ wex(t)(Eex − V) − win(t)(V − Einh)     (1)

with Eex = 0 mV. When the membrane potential reaches a threshold voltage Vthr, the neuron fires an action potential, and the membrane potential is reset to Vrest.
[099] According to an embodiment of the present disclosure, an integrate and fire neural cell provides several different variables to control its membrane voltage, including synaptic conductance w (both inhibitory and excitatory), membrane time constant τm, the various constants for potentials (e.g., Eex), and threshold for firing.
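As a rough illustration of the membrane equation above, the following sketch steps a leaky integrate-and-fire neuron forward with Euler integration. The inhibitory term is omitted for brevity, and all numeric constants (time step, τm, rest and threshold voltages) are illustrative assumptions, not values from the disclosure:

```python
def simulate_lif(w_ex_trace, dt=1.0, tau_m=20.0, v_rest=-60.0,
                 v_thr=-50.0, e_ex=0.0):
    """Excitatory-only leaky integrate-and-fire sketch of equation (1).

    w_ex_trace is a list of excitatory conductance samples w_ex(t), one
    per time step of dt milliseconds. Returns the spike times in ms.
    """
    v = v_rest
    spikes = []
    for step, w_ex in enumerate(w_ex_trace):
        # tau_m * dV/dt = (V_rest - V) + w_ex(t) * (E_ex - V)
        dv = ((v_rest - v) + w_ex * (e_ex - v)) / tau_m
        v += dv * dt
        if v >= v_thr:            # threshold crossing: spike and reset
            spikes.append(step * dt)
            v = v_rest
    return spikes
```

With a constant conductance drive the neuron charges toward a steady state above threshold, so it fires periodically; with zero drive it stays at rest and never spikes.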
[0100] Synaptic inputs to the neuron may be configured as conductance changes with instantaneous rise times and exponential decays, so that a single pre-synaptic spike at time t generates a synaptic conductance for excitatory and inhibitory synapses as follows:

wex(t) = w e^(−t/τAMPA)     (2)

win(t) = w e^(−t/τGABA)     (3)
[0101] where τAMPA and τGABA are the time constants for α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptors for excitatory synapses and gamma-aminobutyric acid (GABA) receptors for inhibitory synapses.
[0102] In this configuration, the neuron model may be self-normalizing, in which the multiplicative effect of synaptic input occurs on its own membrane voltage, referred to as voltage shunting. This neuron model may enable the self-regulation of its own excitation and is biologically consistent. The value of the excitatory synaptic conductance wex(t) (in equation 1) is determined by STDP. We will now outline the STDP learning rule.
[0103] In one example, a synapse may be represented by a junction between two interconnected neurons. The synapse may include two terminals. One terminal may be associated with the axon of the neuron providing information (this neuron is referred to as the pre-synaptic neuron). The other terminal may be associated with the dendrite of the neuron receiving the information (this is referred to as the post-synaptic neuron).
[0104] For a synapse with a fixed synaptic conductance, w, only the input and the output terminals may be required. In one example, the conductance of the synapse may be internally adjusted according to a learning rule referred to as spike-timing dependent plasticity or STDP.
[0105] The system may be configured with a STDP function that modulates the synaptic conductance w based on the timing difference (tipre − tjpost) between the action potentials of pre-synaptic neuron i and post-synaptic neuron j. There are two possibilities for the modulation of synaptic conductance. If the timing difference (tipre − tjpost) is positive, then the synapse undergoes depression. If the timing difference (tipre − tjpost) is negative, then the synapse may undergo potentiation. If the timing difference is too large in either direction, there is no change in the synaptic conductance. In one embodiment, the maximum timing difference may be 80 ms.
[0106] The STDP function may include four parameters (A+, A−, τ+ and τ−) that control the shape of the function. A+ and A− correspond to the maximum change in synaptic conductance for potentiation and depression respectively. The time constants τ+ and τ− control the rate of decay for the potentiation and depression portions of the curve as shown in Fig. 5(a).
[0107] In one method, more than one pre- or post-synaptic spike within the time windows for potentiation or depression may occur. Accounting for these multiple spikes may be performed using an additive STDP model where the dynamics of potentiation P and depression D at a synapse are governed by exponential decays with the time constants τ+ and τ−.
[0108] Whenever a post-synaptic neuron fires a spike, D is decremented by an amount A− relative to the value governed by equation (6). Similarly, every time a synapse receives a spike from a pre-synaptic neuron, P is incremented by an amount A+ relative to the value governed by equation (7). These changes may be summarized as:

D = D + A−     (6)

P = P + A+     (7)
[0109] These changes to P and D may affect the change in synaptic conductance.
If the post-synaptic neuron fires a spike, then the value of P at that time, P*, is used to increment Δw for the duration of that spike. Similarly, if the pre-synaptic neuron fires a spike that is seen by the synapse, then the value of D at that time, D*, is used to decrement Δw for the duration of that spike. Thus, the net change Δw is given by:

Δw = P* − D*     (8)
[0110] The final effective change to the synaptic conductance w due to STDP may be expressed as:

w = w + Δw     (9)
[0111] In one embodiment as shown in Fig. 8, a spiking model may be configured to learn multiple transformations from a fixed set of input spike trains. As shown in Fig. 6, several prediction layer neurons 624 may be coupled or connected to their own training outputs 614. A prediction layer may refer to an output set of neurons 622 that predict the position of a robotic arm. In one embodiment, the model in Fig. 6 may function similarly to the model described in Fig. 1.
[0112] In another embodiment, the system 600 described below may simultaneously learn multiple outputs or transformations of the input spike trains. In one example, the same input angles (Θ1, Θ2) may be used by the spiking model to generate multiple outputs using equations 10 and 11 below.
[0113] The inventors have shown that a model as illustrated in Fig. 8 may be configured to learn several types of functions, including anticipation, association, prediction, and inverse transformations. In one embodiment, the system may be configured to use multiple possible pathways for input-to-output transformations. As discussed hereafter, the model is also fault tolerant.
[0114] Fig. 9 illustrates the synaptic conductances between various layers of neural network model 100 during learning showing the emergence of topological organization of conductances in the neural network model of Fig. 8.
[0115] Fig. 10A illustrates the output of layer at a given time t in response to inputs on layers L1θ1 and L1θ2, after the training period of the neural network 100 has been completed. The diameter of the circles on the y, Θ1 and Θ2 axes increases with the firing rate of the neurons forming each axis. Only the neurons that fire are illustrated.
[0116] According to an embodiment of the disclosure, decoding the output of a neural network 80 as illustrated in Fig. 7 comprises:
a/ providing the first arrays 12 of the first and second neural network portions 40, 58 with first and second input signals having a rate that increases when a measured parameter gets closer to a predetermined value assigned to the neurons of said first arrays;
b/ assigning to each neuron of the output array of neurons 84 an incremental position value comprised between 1 and N, N being the number of neurons of the output array 84;
c/ at any given time, measuring the firing rate of each neuron of the output array 84; and
d/ estimating the output of the neural network, at said any given time, as corresponding to the neuron of the output array 84 having a position value equal to a division of the sum of the position value of each neuron of the output array, weighted by its firing rate at said any given time, by the sum of the firing rates of each neuron of the output array at said any given time.
[0117] In other terms,
yp(i, j, t) = ( Σk=1..N fijk(t) · y(i, j, k, t) ) / ( Σk=1..N fijk(t) )
with yP(i, j, t) the evaluated output position at a given time t, for given values i, j of Θ1 and Θ2; fijk(t) being the firing rate for a neuron k, at time t, for given values i, j of Θ1 and Θ2; and y(i, j, k, t) being the position value of a neuron k at time t, for given values i, j of Θ1 and Θ2.
Fig. 10B illustrates the output of layer at a given time t in response to inputs on layers L1θ1 and L1θ2, after the training period of the neural network 100 has been completed, where the output on the y axis wraps around the end of the output array. According to an embodiment of the present disclosure, the method of measuring the output of array 84 comprises, if the neurons of the middle of the output array 84 have null firing rates, assigning to the neurons of lower position value a position value increased by the value N, N being the number of neurons of the output array 84. According to an embodiment of the present disclosure, the method described in relation with Figs. 10A and 10B can also be used to decode the output of, for example, layer 84 of Fig. 14, described hereafter.
[0118] Figs. 11A-C illustrate the incremental convergence of the neural network model of Fig. 8 as a function of learning. In particular, Fig. 11A illustrates the x and y output of the neural network model of Fig. 8 after a training period of 300 seconds; Fig. 11B illustrates the x and y output after a training period of 600 seconds; and Fig. 11C illustrates the x and y output after a training period of 1500 seconds. The real values of x, y corresponding to the inputs used for Figs. 11A-C follow the pretzel-shaped trajectory shown in a darker shade.
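The population decoding of steps a/-d/ above, including the wrap-around re-indexing described for Fig. 10B, can be sketched as follows; the array size and rates in the usage note are illustrative:

```python
def decode_position(rates):
    """Population-vector decode of an output array, per the equation above.

    rates[k] is the firing rate of the neuron with position value k+1
    (values 1..N). If the middle of the array is silent, the activity is
    assumed to wrap around the ends, so low-position neurons are
    re-indexed by +N before the firing-rate-weighted average is taken.
    """
    n = len(rates)
    total = sum(rates)
    if total == 0:
        raise ValueError("no activity to decode")
    positions = list(range(1, n + 1))
    # wrap-around handling (Fig. 10B): shift the low half up by N
    mid = n // 2
    if rates[mid] == 0:
        positions = [p + n if p <= mid else p for p in positions]
    est = sum(p * r for p, r in zip(positions, rates)) / total
    # map a shifted estimate back into the 1..N range
    return est if est <= n else est - n
```

For a 6-neuron array with equal activity on positions 3 and 4, the decoded value is 3.5; with equal activity on positions 1 and 6 (wrapped), it is 0.5, i.e. midway across the array boundary.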
[0119] Figs. 12A-B illustrate the incremental convergence of the neural network model of Fig. 8 when a Gaussian sparse connectivity is used between the neurons 44 of the intermediate arrays 42 and the network array 62.
[0120] Figs. 12C-D illustrate the incremental convergence of the neural network model of Fig. 8 when a random sparse connectivity is used between the neurons 44 of the intermediate arrays 42 and the network array 62.
[0121] Figs. 13A-D illustrate the performance of the neural network model of Fig. 8 for varying degrees of damaged neurons. Fig. 13A(a) shows the neural activity, or cortical coding, of the synapses between L1Θ1 and L2Θ1 for a network having 5% of neurons damaged. Fig. 13A(b) shows the neural activity, or cortical coding, of the synapses within L2Θ1 for a network having 5% of neurons damaged. Fig. 13A(c) shows the output x, y of a network having 5% of neurons damaged, compared to the real values of x, y (darker circle) corresponding to the inputs used for producing the output.

[0122] Figs. 13B(a)(b)(c) show the same data as Figs. 13A(a)(b)(c) for a network having 8% of neurons damaged.

[0123] Figs. 13C(a)(b)(c) show the same data as Figs. 13A(a)(b)(c) for a network having 12% of neurons damaged.

[0124] Figs. 13D(a)(b)(c) show the same data as Figs. 13A(a)(b)(c) for a network having 16% of neurons damaged.
[0125] As illustrated by Figs. 13A-D, a neural network according to an embodiment of the present disclosure is robust to neuron damage, and produces satisfactory output even with significant neuron damage.
[0126] Fig. 14 illustrates a portion of a neural network or neural network model 118 according to an embodiment of the present disclosure, comprising an input array 12 of neurons 14 coupled to an intermediate array 42 of neurons 44 and interneurons 48. According to an embodiment of the present disclosure, the input array/layer 12 comprises first and second sub-arrays 120 and 122 of neurons 14. The neurons 14 of the first sub-array 120 are provided for receiving input signals related to a first measured parameter. The neurons 14 of the second sub-array 122 are provided for receiving input signals related to a second measured parameter. According to an embodiment of the present disclosure, the intermediate array 42 comprises rows and columns of neurons 44, the interneurons 48 being distributed among the neurons, wherein the axon of each neuron 14 of the first sub-array of neurons 120 forms an excitatory STDP synapse with the dendrite of a plurality of neurons 44 of a row of the intermediate array 42; and wherein the axon of each neuron 14 of the second sub-array of neurons 122 forms an excitatory STDP synapse with the dendrite of a plurality of neurons 44 of a column of the intermediate array 42.

[0127] According to an embodiment of the present disclosure, the neurons 44 of the intermediate array 42 can be arranged according to another scheme not comprising rows and columns; or the neurons of the first and second sub-arrays 120, 122 can be connected to the neurons of intermediate array 42 according to a scheme, for example a sparse and random connection scheme, not following rows and columns in intermediate array 42. According to an embodiment of the present disclosure, one dendrite of a neuron 44 of the intermediate array 42 can form STDP synapses with the axons of 100 to 200 neurons 14 of the input array. According to an embodiment of the present disclosure, sub-arrays 120, 122 can each comprise 1000 neurons and the intermediate array can comprise 2000 neurons.
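The row/column connectivity described above can be sketched as boolean connection matrices; the grid dimensions and the 80% sparsity below are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def grid_connectivity(n_rows, n_cols, density=0.8, seed=0):
    """Build the excitatory connections as boolean matrices:
    source neuron i of the first sub-array projects to a random
    subset of row i of the intermediate grid; source neuron j of
    the second sub-array projects to a subset of column j."""
    rng = np.random.default_rng(seed)
    n_grid = n_rows * n_cols
    w1 = np.zeros((n_rows, n_grid), dtype=bool)   # sub-array 1 -> grid rows
    w2 = np.zeros((n_cols, n_grid), dtype=bool)   # sub-array 2 -> grid columns
    grid = np.arange(n_grid).reshape(n_rows, n_cols)
    for i in range(n_rows):
        targets = rng.choice(grid[i], size=int(density * n_cols), replace=False)
        w1[i, targets] = True
    for j in range(n_cols):
        targets = rng.choice(grid[:, j], size=int(density * n_rows), replace=False)
        w2[j, targets] = True
    return w1, w2

w1, w2 = grid_connectivity(40, 50)
print(w1.shape, w2.shape)  # (40, 2000) (50, 2000)
```

A neuron of the 2000-neuron grid thus receives input from at most one row source and one column source here; the disclosure's denser fan-in (100 to 200 input neurons per dendrite) would use many source neurons per row and column.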
[0128] According to an embodiment of the present disclosure, input array 12 can comprise a number N of sub-arrays of neurons such as 120, 122, respectively provided for receiving input signals related to a number N of associated measured parameters. According to an embodiment of the present disclosure, each neuron 14 of each sub-array is provided for receiving an input signal indicating that the measured parameter associated with the sub-array gets closer to a predetermined value assigned to said neuron. For example, the rate of the signal sent to a neuron can increase when the measured parameter gets closer to a predetermined value assigned to said neuron, and decrease when it moves away from that value. The number of neurons in each sub-array can be identical or different.
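A minimal sketch of such rate coding, assuming a Gaussian tuning curve (the disclosure only requires the rate to increase as the parameter approaches a neuron's assigned value; the curve shape and constants are illustrative):

```python
import math

def input_rates(x, preferred, max_rate=100.0, sigma=0.1):
    """Rate-coding sketch: each input neuron fires faster as the
    measured parameter x approaches its assigned (preferred) value,
    here via a Gaussian tuning curve of width sigma."""
    return [max_rate * math.exp(-((x - p) ** 2) / (2 * sigma ** 2))
            for p in preferred]

# 5 neurons with assigned values evenly spread over [0, 1]:
prefs = [0.0, 0.25, 0.5, 0.75, 1.0]
rates = input_rates(0.5, prefs)
print([round(r, 1) for r in rates])  # [0.0, 4.4, 100.0, 4.4, 0.0]
```

The neuron whose assigned value matches the measurement fires at the maximum rate; its neighbours fire progressively less, which is what lets the decoding scheme of paragraph [0117] recover the parameter from the population.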
[0129] According to an embodiment of the present disclosure, the neurons are integrate-and-fire neurons, or operate under a model of integrate-and-fire neurons, and the neural network or neural network model is a spiking neural network or spiking neural network model.
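A leaky integrate-and-fire neuron of the kind referred to above can be sketched as follows; the time constant, threshold, and drive current are illustrative values, not parameters from the disclosure:

```python
def lif_run(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire sketch: the membrane potential v
    integrates the input current with leak time constant tau, and
    the neuron emits a spike (then resets) when v crosses threshold."""
    v = v_reset
    spikes = []
    for t, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)     # leaky integration
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset                 # fire and reset
    return spikes

# A constant drive produces regular spiking:
print(lif_run([0.08] * 100))  # [19, 39, 59, 79, 99]
```

Stronger input drives the potential to threshold sooner, so the firing rate grows with the input, which is the property the rate-coded inputs and outputs of the network rely on.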
[0130] According to an embodiment of the present disclosure, neural network 118 comprises an output array 84 having neurons 86 and interneurons 88 distributed among the neurons 86. According to an embodiment of the present disclosure, output array 84 can comprise one interneuron 88 for four neurons 86. According to an embodiment of the present disclosure, the axon of each neuron 86 of the output array forms an excitatory STDP synapse 90 with the dendrite of the neighboring interneurons 88; and the axon of each interneuron 88 of the output array forms an inhibitory STDP synapse 92 with the dendrite of the neighboring neurons 86 and interneurons 88 of the output array.
[0131] According to an embodiment of the present disclosure, the dendrite of each neuron 86 of the output array 84 forms an excitatory STDP synapse with the axon of each neuron 44 of the intermediate array 42.
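The excitatory STDP synapses mentioned throughout can be modelled with a pair-based spike-timing-dependent plasticity rule; the exponential window and the constants below are common modelling choices, not values from the disclosure:

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP sketch: the synaptic weight change depends on
    the relative timing of pre- and postsynaptic spikes. Pre before
    post potentiates the synapse; post before pre depresses it."""
    dt = t_post - t_pre
    if dt > 0:      # pre before post: potentiation
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:    # post before pre: depression
        return -a_minus * math.exp(dt / tau)
    return 0.0

print(stdp_dw(10.0, 15.0) > 0, stdp_dw(15.0, 10.0) < 0)  # True True
```

Non-STDP synapses, such as those from the training array, simply keep a fixed weight instead of applying this update.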
[0132] According to an embodiment of the present disclosure, neural network 118 comprises a training array 124 comprising as many neurons 126 as the output array 84.
[0133] According to an embodiment of the present disclosure, the dendrite of each neuron 126 is provided for receiving an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron. According to an embodiment of the present disclosure, the axon of each neuron 126 of the training array 124 forms an excitatory non-STDP synapse with the dendrite of a corresponding neuron of the output array 84.
[0134] According to an embodiment of the present disclosure, the input signals to the first and second sub-arrays 120, 122 relate to variable parameters that are to be correlated by the neural network to the parameter that relates to the input signals to the training array 124. According to an embodiment of the present disclosure, the parameter signals are sent to first and second sub-arrays 120, 122 as well as to training array 124 during a training period. The signals sent to first and second sub-arrays 120, 122 can for example correspond to two angles measured for a two-degree-of-freedom robot arm such as shown in Fig. 8, for random positions of the arm, whereas the signal sent to training array 124 can for example correspond to an x or y coordinate of the position of an end of said robot arm, as measured for each of said random positions.
[0135] After the training period, input signals are no longer sent to training array 124, and the signals at the axons of neurons 86 of the output array provide the output of the neural network 118 in response to input signals provided to input sub-arrays 120, 122.
[0136] Fig. 15 illustrates the portion of a neural network or neural network model 118 of Fig. 14, comprising additional output layers 128, 130 connected to intermediate layer 42 in the same way as output layer 84. According to an embodiment of the present disclosure, output layers 84, 128 and 130 can comprise the same number of neurons, or different numbers of neurons. According to an embodiment of the present disclosure, additional output layers 128, 130 are connected to training layers 132, 134 in the same way output layer 84 is connected to training layer 124. According to an embodiment of the present disclosure, neural network 118 can comprise any number of additional output layers, each output layer being connected to a training layer as detailed above. The training periods for each output layer can have the same length and be simultaneous, or they can have different lengths and/or happen at different times.
[0137] Fig. 16 illustrates a portion of a neural network or neural network model 150 comprising the neural network portion 118 of Fig. 15. According to an embodiment of the present disclosure, network 150 comprises additional neural network portions 152, 154, 156 similar to neural network portion 118, wherein the training arrays 134, 124, 132 of neural network portion 118 also form an input layer or input sub-array of neural network portions 152, 154, 156. According to an embodiment of the present disclosure, a training array 158 of neural network portion 152 forms an input sub-array of neural network portion 154. Neural network portions 118, 152, 154, 156 can be of the same size or of different sizes. Network 150 can comprise any number of neural network portions such as neural network portion 118.
[0138] In embodiments of the present disclosure, the neural network may be implemented using a shared processing device, individual processing devices, or a plurality of processing devices. Such a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on operational instructions.
[0139] The present disclosure or any part(s) or function(s) thereof, may be implemented using hardware, software, or a combination thereof, and may be implemented in one or more computer systems or other processing systems. A computer system for performing the operations of the present disclosure and capable of carrying out the functionality described herein can include one or more processors connected to a communications infrastructure (e.g., a communications bus, a cross-over bar, or a network). Various software embodiments are described in terms of such an exemplary computer system. After reading this description, it will become apparent to a person skilled in the relevant art(s) how to implement the disclosure using other computer systems and/or architectures.
[0140] The foregoing description of the preferred embodiments of the present disclosure has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise form or to the exemplary embodiments disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in this art. Similarly, any process steps described might be interchangeable with other steps in order to achieve the same result. The embodiment was chosen and described in order to best explain the principles of the disclosure and its best mode of practical application, thereby to enable others skilled in the art to understand the disclosure for various embodiments and with various modifications as are suited to the particular use or implementation contemplated. It is intended that the scope of the disclosure be defined by the claims appended hereto and their equivalents. Reference to an element in the singular is not intended to mean "one and only one" unless explicitly so stated, but rather means "one or more." Moreover, no element, component, or method step in the present disclosure is intended to be dedicated to the public regardless of whether the element, component, or method step is explicitly recited in the following claims. No claim element herein is to be construed under the provisions of 35 U.S.C. Sec. 112, sixth paragraph, unless the element is expressly recited using the phrase "means for."
[0141] It should be understood that the figures illustrated in the attachments, which highlight the functionality and advantages of the present disclosure, are presented for example purposes only. The architecture of the present disclosure is sufficiently flexible and configurable, such that it may be utilized (and navigated) in ways other than that shown in the accompanying figures.
[0142] Furthermore, the purpose of the foregoing Abstract is to enable the U.S. Patent and Trademark Office and the public generally, and especially the scientists, engineers and practitioners in the art who are not familiar with patent or legal terms or phraseology, to determine quickly from a cursory inspection the nature and essence of the technical disclosure of the application. The Abstract is not intended to be limiting as to the scope of the present disclosure in any way. It is also to be understood that the steps and processes recited in the claims need not be performed in the order presented.
[0143] The various features of the present disclosure can be implemented in different systems without departing from the present disclosure. It should be noted that the foregoing embodiments are merely examples and are not to be construed as limiting the present disclosure. The description of the embodiments is intended to be illustrative, and not to limit the scope of the claims. As such, the present teachings can be readily applied to other types of apparatuses and many alternatives, modifications, and variations will be apparent to those skilled in the art.
[0144] All elements, parts and steps described herein are preferably included. It is to be understood that any of these elements, parts and steps may be replaced by other elements, parts and steps or deleted altogether as will be obvious to those skilled in the art.
CONCEPTS
This writing discloses at least the following concepts.
Concept 1. A neural network, wherein a portion of the neural network comprises:
a first array having a first number of neurons, wherein the dendrite of each neuron of the first array is provided for receiving an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron; a second array having a second number of neurons, the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of a plurality of neurons of the first array; and the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of neighboring neurons of the second array.
Concept 2. The neural network of Concept 1, wherein the second number is smaller than the first number.
Concept 3. The neural network of Concept 1 or 2, wherein the second array further comprises a third number of interneurons distributed among the neurons of the second array, wherein the third number is smaller than the second number, wherein:
the axon of each neuron of the second array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the second array; and the axon of each interneuron of the second array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the second array.
Concept 4. The neural network of any of Concepts 1 to 3, wherein the dendrite of each neuron of the first array is provided for receiving an input signal having a rate that increases when a measured parameter gets closer to a predetermined value assigned to said neuron.
Concept 5. A neural network comprising first and second neural network portions according to any of Concepts 1 to 4; and
a third array having a fourth number of neurons and a fifth number of interneurons distributed among the neurons of the third array, wherein the fifth number is smaller than the fourth number, wherein:
the axon of each neuron of the third array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the third array; and
the axon of each interneuron of the third array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the third array;
wherein the axon of each neuron of the second array of the first neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of the third array; and
wherein the axon of each neuron of the second array of the second neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of the third array.

Concept 6. The neural network of Concept 5, wherein the third array comprises rows and columns of neurons,
wherein the axon of each neuron of the second array of the first neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of a row of the third array; and
wherein the axon of each neuron of the second array of the second neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of a column of the third array.
Concept 7. The neural network of Concept 5 or 6, comprising a third neural network portion according to Concept 1, as well as a fourth array having a second number of neurons and a third number of interneurons distributed among the neurons of the fourth array, wherein:
the axon of each neuron of the fourth array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the fourth array; and
the axon of each interneuron of the fourth array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the fourth array;
wherein the dendrite of each neuron of the fourth array forms an excitatory STDP synapse with the axon of a plurality of neurons of the third array; and
wherein the dendrite of each neuron of the fourth array forms an excitatory non-STDP synapse with the axon of a corresponding neuron of the second array of the third neural network portion.

Concept 8. The neural network of Concept 7, wherein the input signals to the first and second neural network portions relate to variable parameters that are to be correlated to the input signals to the third neural network portion.
Concept 9. The neural network of any of Concepts 1 to 4, wherein said first array of neurons comprises first and second sub-arrays of neurons provided for receiving input signals related to first and second measured parameters, respectively.
Concept 10. The neural network of Concept 9, wherein the second array comprises rows and columns of neurons;
wherein the axon of each neuron of the first sub-array of neurons forms an excitatory STDP synapse with the dendrite of a plurality of neurons of a row of the second array; and
wherein the axon of each neuron of the second sub-array of neurons forms an excitatory STDP synapse with the dendrite of a plurality of neurons of a column of the second array.
Concept 11. The neural network of Concept 9 or 10, wherein the second array further comprises a third number of interneurons distributed among the neurons of the second array, wherein the third number is smaller than the second number, wherein:
the axon of each neuron of the second array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the second array; and the axon of each interneuron of the second array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the second array.

Concept 12. The neural network of any of Concepts 9 to 11, further comprising:
a third array having a fourth number of neurons and a fifth number of interneurons distributed among the neurons of the third array, wherein the fifth number is smaller than the fourth number, wherein:
the axon of each neuron of the third array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the third array; and
the axon of each interneuron of the third array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the third array;
wherein the dendrite of each neuron of the third array forms an excitatory STDP synapse with the axon of each neuron of the second array.
Concept 13. The neural network of Concept 12, comprising a fourth array comprising as many neurons as the third array of neurons, wherein the dendrite of each neuron of the fourth array is provided for receiving an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron;
wherein the axon of each neuron of the fourth array forms an excitatory non- STDP synapse with the dendrite of a corresponding neuron of the third array.
Concept 14. The neural network of Concept 13, wherein the input signals to the first and second sub-arrays of neurons relate to variable parameters that are to be correlated to the input signals to the fourth array.

Concept 15. The neural network of Concept 13, wherein the fourth array of neurons is a sub-array of neurons of a further neural network according to Concept 9.
Concept 16. A method of programming a neural network, comprising:
providing a first neural network portion comprising a first array having a first number of neurons and a second array having a second number of neurons, the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of a plurality of neurons of the first array; the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of neighboring neurons of the second array; and
providing to the dendrite of each neuron of the first array an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron.
Concept 17. The method of Concept 16, further comprising providing the second array with a third number of interneurons distributed among the neurons of the second array, wherein the third number is smaller than the second number, wherein:
the axon of each neuron of the second array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the second array; and the axon of each interneuron of the second array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the second array.

Concept 18. The method of Concept 16 or 17, comprising providing the dendrite of each neuron of the first array with an input signal having a rate that increases when a measured parameter gets closer to a predetermined value assigned to said neuron.
Concept 19. The method of any of Concepts 16 to 18, comprising:
providing a second neural network portion having the same structure as the first neural network portion; and
providing a third array having a fourth number of neurons and a fifth number of interneurons distributed among the neurons of the third array, wherein the fifth number is smaller than the fourth number, wherein:
the axon of each neuron of the third array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the third array; and
the axon of each interneuron of the third array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the third array;
wherein the axon of each neuron of the second array of the first neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of the third array; and
wherein the axon of each neuron of the second array of the second neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of the third array; and
providing to the dendrite of each neuron of the first array of the second neural network portion an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron.

Concept 20. The method of Concept 19, comprising:
providing a third neural network portion having the same structure as the first neural network portion;
providing a fourth array having a second number of neurons and a third number of interneurons distributed among the neurons of the fourth array, wherein: the axon of each neuron of the fourth array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the fourth array; and
the axon of each interneuron of the fourth array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the fourth array;
wherein the dendrite of each neuron of the fourth array forms an excitatory STDP synapse with the axon of a plurality of neurons of the third array; and
wherein the dendrite of each neuron of the fourth array forms an excitatory non-STDP synapse with the axon of a corresponding neuron of the second array of the third neural network portion; and
providing to the dendrite of each neuron of the first array of the third neural network portion an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron; wherein the input signals to the first and second neural network portions relate to variable parameters that are to be correlated to the input signals to the third neural network portion.
Concept 21. The method of Concept 16, wherein:
said providing to the dendrite of each neuron of the first array an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron comprises: providing to the dendrite of each neuron of a first subset of neurons of the first array an input signal indicating that a first measured parameter gets closer to a predetermined value assigned to said neuron;
providing to the dendrite of each neuron of a second subset of neurons of the first array an input signal indicating that a second measured parameter gets closer to a predetermined value assigned to said neuron.
Concept 22. The method of Concept 21, wherein:
said providing a second array having a second number of neurons comprises providing a second array having rows and columns of neurons,
wherein the axon of each neuron of the first subset of neurons of the first array forms an excitatory STDP synapse with the dendrite of a plurality of neurons of a row of the second array; and
wherein the axon of each neuron of the second subset of neurons of the first array forms an excitatory STDP synapse with the dendrite of a plurality of neurons of a column of the second array.
Concept 23. The method of Concept 21 or 22, further comprising providing the second array with a third number of interneurons distributed among the neurons of the second array, wherein the third number is smaller than the second number, wherein:
the axon of each neuron of the second array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the second array; and the axon of each interneuron of the second array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the second array.

Concept 24. The method of Concept 23, comprising:
providing a third array having a fourth number of neurons and a fifth number of interneurons distributed among the neurons of the third array, wherein the fifth number is smaller than the fourth number, wherein the axon of each neuron of the third array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the third array; and the axon of each interneuron of the third array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the third array; wherein the dendrite of each neuron of the third array forms an excitatory STDP synapse with the axon of each neuron of the second array; and
providing a fourth array comprising as many neurons as the third array of neurons, wherein the dendrite of each neuron of the fourth array is provided for receiving an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron; and wherein the axon of each neuron of the fourth array forms an excitatory non-STDP synapse with the dendrite of a corresponding neuron of the third array; and
providing to the dendrite of each neuron of the fourth array an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron; wherein the input signals to the first and second subset of neurons relate to variable parameters that are to be correlated to the input signals to the fourth array.
Concept 25. A method of decoding an output of a neural network according to Concept 8, the method comprising:

providing the first arrays of the first and second neural network portions with first and second input signals having a rate that increases when a measured parameter gets closer to a predetermined value assigned to the neurons of said first arrays;
assigning to each neuron of the fourth array of neurons an incremental position value comprised between 1 and N, N being the number of neurons of the fourth array;
at any given time, measuring the firing rate of each neuron of the fourth array; and
estimating the output of the neural network, at said any given time, as corresponding to the neuron of the fourth array having a position value equal to a division of the sum of the position value of each neuron of the fourth array, weighted by its firing rate at said any given time, by the sum of the firing rates of each neuron of the fourth array at said any given time.
Concept 26. The method of Concept 25 comprising, if the neurons of the middle of the fourth array have null firing rates, assigning to the neurons of lower position value a position value increased by the value N.
Concept 27. A method of decoding an output of a neural network according to Concept 14; the method comprising:
providing the first and second sub-arrays of neurons with first and second input signals having a rate that increases when a measured parameter gets closer to a predetermined value assigned to the neurons of said first and second sub-arrays of neurons;

assigning to each neuron of the third array of neurons an incremental position value comprised between 1 and N, N being the number of neurons of the third array;

at any given time, measuring the firing rate of each neuron of the third array; and
estimating the output of the neural network, at said any given time, as corresponding to the neuron of the third array having a position value equal to a division of the sum of the position value of each neuron of the third array, weighted by its firing rate at said any given time, by the sum of the firing rates of each neuron of the third array at said any given time.
Concept 28. The method of Concept 27 comprising, if the neurons of the middle of the third array have null firing rates, assigning to the neurons of lower position value a position value increased by the value N.

Claims

What is claimed is:
1. A neural network, wherein a portion of the neural network comprises:
a first array having a first number of neurons, wherein the dendrite of each neuron of the first array is provided for receiving an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron; a second array having a second number of neurons, the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of a plurality of neurons of the first array; and the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of neighboring neurons of the second array.
2. The neural network of claim 1, wherein the second number is smaller than the first number.
3. The neural network of claim 1, wherein the second array further comprises a third number of interneurons distributed among the neurons of the second array, wherein the third number is smaller than the second number, wherein:
the axon of each neuron of the second array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the second array; and the axon of each interneuron of the second array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the second array.
4. The neural network of claim 1, wherein the dendrite of each neuron of the first array is provided for receiving an input signal having a rate that increases when a measured parameter gets closer to a predetermined value assigned to said neuron.
5. A neural network comprising a first and a second neural network portions according to claim 1; and
a third array having a fourth number of neurons and a fifth number of interneurons distributed among the neurons of the third array, wherein the fifth number is smaller than the fourth number, wherein:
the axon of each neuron of the third array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the third array; and
the axon of each interneuron of the third array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the third array;
wherein the axon of each neuron of the second array of the first neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of the third array; and
wherein the axon of each neuron of the second array of the second neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of the third array.
6. The neural network of claim 5, wherein the third array comprises rows and columns of neurons,
wherein the axon of each neuron of the second array of the first neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of a row of the third array; and wherein the axon of each neuron of the second array of the second neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of a column of the third array.
7. The neural network of claim 5, comprising a third neural network portion according to claim 1, as well as a fourth array having a second number of neurons and a third number of interneurons distributed among the neurons of the fourth array, wherein:
the axon of each neuron of the fourth array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the fourth array; and
the axon of each interneuron of the fourth array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the fourth array;
wherein the dendrite of each neuron of the fourth array forms an excitatory STDP synapse with the axon of a plurality of neurons of the third array; and
wherein the dendrite of each neuron of the fourth array forms an excitatory non-STDP synapse with the axon of a corresponding neuron of the second array of the third neural network portion.
8. The neural network of claim 7, wherein the input signals to the first and second neural network portions relate to variable parameters that are to be correlated to the input signals to the third neural network portion.
9. The neural network of claim 1, wherein said first array of neurons comprises first and second sub-arrays of neurons provided for receiving input signals related to first and second measured parameters, respectively.
10. The neural network of claim 9, wherein the second array comprises rows and columns of neurons;
wherein the axon of each neuron of the first sub-array of neurons forms an excitatory STDP synapse with the dendrite of a plurality of neurons of a row of the second array; and
wherein the axon of each neuron of the second sub-array of neurons forms an excitatory STDP synapse with the dendrite of a plurality of neurons of a column of the second array.
11. The neural network of claim 10, wherein the second array further comprises a third number of interneurons distributed among the neurons of the second array, wherein the third number is smaller than the second number, wherein:
the axon of each neuron of the second array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the second array; and
the axon of each interneuron of the second array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the second array.
12. The neural network of claim 11, further comprising:
a third array having a fourth number of neurons and a fifth number of interneurons distributed among the neurons of the third array, wherein the fifth number is smaller than the fourth number, wherein:
the axon of each neuron of the third array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the third array; and
the axon of each interneuron of the third array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the third array;
wherein the dendrite of each neuron of the third array forms an excitatory STDP synapse with the axon of each neuron of the second array.
13. The neural network of claim 12, comprising a fourth array comprising as many neurons as the third array of neurons, wherein the dendrite of each neuron of the fourth array is provided for receiving an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron;
wherein the axon of each neuron of the fourth array forms an excitatory non-STDP synapse with the dendrite of a corresponding neuron of the third array.
14. The neural network of claim 13, wherein the input signals to the first and second sub-arrays of neurons relate to variable parameters that are to be correlated to the input signals to the fourth array.
15. The neural network of claim 13, wherein the fourth array of neurons is a sub-array of neurons of a further neural network according to claim 9.
16. A method of programming a neural network, comprising:
providing a first neural network portion comprising a first array having a first number of neurons and a second array having a second number of neurons, the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of a plurality of neurons of the first array;
the dendrite of each neuron of the second array forming an excitatory STDP synapse with the axon of neighboring neurons of the second array; and
providing to the dendrite of each neuron of the first array an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron.
17. The method of claim 16, further comprising providing the second array with a third number of interneurons distributed among the neurons of the second array, wherein the third number is smaller than the second number, wherein:
the axon of each neuron of the second array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the second array; and
the axon of each interneuron of the second array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the second array.
18. The method of claim 17, comprising providing the dendrite of each neuron of the first array with an input signal having a rate that increases when a measured parameter gets closer to a predetermined value assigned to said neuron.
19. The method of claim 17, comprising:
providing a second neural network portion having the same structure as the first neural network portion; and
providing a third array having a fourth number of neurons and a fifth number of interneurons distributed among the neurons of the third array, wherein the fifth number is smaller than the fourth number, wherein:
the axon of each neuron of the third array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the third array; and
the axon of each interneuron of the third array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the third array;
wherein the axon of each neuron of the second array of the first neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of the third array; and
wherein the axon of each neuron of the second array of the second neural network portion forms an excitatory STDP synapse with the dendrite of a plurality of neurons of the third array; and
providing to the dendrite of each neuron of the first array of the second neural network portion an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron.
20. The method of claim 19, comprising:
providing a third neural network portion having the same structure as the first neural network portion;
providing a fourth array having a second number of neurons and a third number of interneurons distributed among the neurons of the fourth array, wherein:
the axon of each neuron of the fourth array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the fourth array; and
the axon of each interneuron of the fourth array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the fourth array;
wherein the dendrite of each neuron of the fourth array forms an excitatory STDP synapse with the axon of a plurality of neurons of the third array; and
wherein the dendrite of each neuron of the fourth array forms an excitatory non-STDP synapse with the axon of a corresponding neuron of the second array of the third neural network portion; and
providing to the dendrite of each neuron of the first array of the third neural network portion an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron; wherein the input signals to the first and second neural network portions relate to variable parameters that are to be correlated to the input signals to the third neural network portion.
21. The method of claim 16, wherein:
said providing to the dendrite of each neuron of the first array an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron comprises:
providing to the dendrite of each neuron of a first subset of neurons of the first array an input signal indicating that a first measured parameter gets closer to a predetermined value assigned to said neuron;
providing to the dendrite of each neuron of a second subset of neurons of the first array an input signal indicating that a second measured parameter gets closer to a predetermined value assigned to said neuron.
22. The method of claim 21, wherein:
said providing a second array having a second number of neurons comprises providing a second array having rows and columns of neurons, wherein the axon of each neuron of the first subset of neurons of the first array forms an excitatory STDP synapse with the dendrite of a plurality of neurons of a row of the second array; and
wherein the axon of each neuron of the second subset of neurons of the first array forms an excitatory STDP synapse with the dendrite of a plurality of neurons of a column of the second array.
23. The method of claim 21, further comprising providing the second array with a third number of interneurons distributed among the neurons of the second array, wherein the third number is smaller than the second number, wherein:
the axon of each neuron of the second array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the second array; and
the axon of each interneuron of the second array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the second array.
24. The method of claim 23, comprising:
providing a third array having a fourth number of neurons and a fifth number of interneurons distributed among the neurons of the third array, wherein the fifth number is smaller than the fourth number, wherein the axon of each neuron of the third array forms an excitatory STDP synapse with the dendrite of the neighboring interneurons of the third array; and the axon of each interneuron of the third array forms an inhibitory STDP synapse with the dendrite of the neighboring neurons and interneurons of the third array;
wherein the dendrite of each neuron of the third array forms an excitatory STDP synapse with the axon of each neuron of the second array; and
providing a fourth array comprising as many neurons as the third array of neurons, wherein the dendrite of each neuron of the fourth array is provided for receiving an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron; and
wherein the axon of each neuron of the fourth array forms an excitatory non-STDP synapse with the dendrite of a corresponding neuron of the third array; and
providing to the dendrite of each neuron of the fourth array an input signal indicating that a measured parameter gets closer to a predetermined value assigned to said neuron; wherein the input signals to the first and second subset of neurons relate to variable parameters that are to be correlated to the input signals to the fourth array.
25. A method of decoding an output of a neural network according to claim 8, the method comprising:
providing the first arrays of the first and second neural network portions with first and second input signals having a rate that increases when a measured parameter gets closer to a predetermined value assigned to the neurons of said first arrays;
assigning to each neuron of the fourth array of neurons an incremental position value between 1 and N, N being the number of neurons of the fourth array;
at any given time, measuring the firing rate of each neuron of the fourth array; and
estimating the output of the neural network, at said any given time, as corresponding to the neuron of the fourth array having a position value equal to a division of the sum of the position value of each neuron of the fourth array, weighted by its firing rate at said any given time, by the sum of the firing rates of each neuron of the fourth array at said any given time.
26. The method of claim 25, comprising, if the neurons in the middle of the fourth array have null firing rates, assigning to the neurons of lower position value a position value increased by the value N.
27. A method of decoding an output of a neural network according to claim 14, the method comprising:
providing the first and second sub-arrays of neurons with first and second input signals having a rate that increases when a measured parameter gets closer to a predetermined value assigned to the neurons of said first and second sub-arrays of neurons;
assigning to each neuron of the third array of neurons an incremental position value between 1 and N, N being the number of neurons of the third array;
at any given time, measuring the firing rate of each neuron of the third array; and
estimating the output of the neural network, at said any given time, as corresponding to the neuron of the third array having a position value equal to a division of the sum of the position value of each neuron of the third array, weighted by its firing rate at said any given time, by the sum of the firing rates of each neuron of the third array at said any given time.
28. The method of claim 27, comprising, if the neurons in the middle of the third array have null firing rates, assigning to the neurons of lower position value a position value increased by the value N.
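The claims distinguish STDP synapses from non-STDP (fixed-weight) synapses. For readers unfamiliar with the rule, a minimal pair-based STDP weight update can be sketched as follows; this is a conventional illustration, and the learning rates and time constants are assumed values, not values taken from the application.

```python
import math

def stdp_weight_update(w, dt, a_plus=0.01, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0,
                       w_min=0.0, w_max=1.0):
    """Pair-based STDP update, with dt = t_post - t_pre in milliseconds.

    A presynaptic spike that precedes the postsynaptic spike (dt > 0)
    potentiates the synapse; the reverse order (dt < 0) depresses it.
    The magnitude of the change decays exponentially with the spike-time
    difference, and the weight is clipped to [w_min, w_max].
    """
    if dt > 0:
        w += a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:
        w -= a_minus * math.exp(dt / tau_minus)
    return min(max(w, w_min), w_max)
```

Applied to an excitatory synapse, repeated pre-before-post pairings drive the weight toward w_max; for the inhibitory interneuron synapses recited in the claims, the same rule can be applied to the magnitude of a negative weight.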
EP13878741.1A 2013-03-15 2013-08-30 Neural network and method of programming Ceased EP2973240A4 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361799883P 2013-03-15 2013-03-15
PCT/US2013/057724 WO2014149070A1 (en) 2013-03-15 2013-08-30 Neural network and method of programming

Publications (2)

Publication Number Publication Date
EP2973240A1 true EP2973240A1 (en) 2016-01-20
EP2973240A4 EP2973240A4 (en) 2017-09-27

Family

ID=54668477

Family Applications (1)

Application Number Title Priority Date Filing Date
EP13878741.1A Ceased EP2973240A4 (en) 2013-03-15 2013-08-30 Neural network and method of programming

Country Status (2)

Country Link
EP (1) EP2973240A4 (en)
CN (1) CN105122278B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018137411A1 (en) * 2017-01-25 2018-08-02 清华大学 Neural network information conversion method and system, and computer device
CN107092959B (en) * 2017-04-07 2020-04-10 武汉大学 Pulse neural network model construction method based on STDP unsupervised learning algorithm
US10956814B2 (en) * 2018-08-27 2021-03-23 Silicon Storage Technology, Inc. Configurable analog neural memory system for deep learning neural network
US11443195B2 (en) * 2019-02-19 2022-09-13 Volodymyr Bykov Domain-based dendral network
CN110135557B (en) * 2019-04-11 2023-06-02 上海集成电路研发中心有限公司 Neural network topology architecture of image processing system
CN113313240B (en) * 2021-08-02 2021-10-15 成都时识科技有限公司 Computing device and electronic device

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US8156057B2 (en) * 2003-03-27 2012-04-10 Knowm Tech, Llc Adaptive neural network utilizing nanotechnology-based components
US8311965B2 (en) * 2009-11-18 2012-11-13 International Business Machines Corporation Area efficient neuromorphic circuits using field effect transistors (FET) and variable resistance material
US9665822B2 (en) * 2010-06-30 2017-05-30 International Business Machines Corporation Canonical spiking neuron network for spatiotemporal associative memory
US8433665B2 (en) * 2010-07-07 2013-04-30 Qualcomm Incorporated Methods and systems for three-memristor synapse with STDP and dopamine signaling
US8510239B2 (en) * 2010-10-29 2013-08-13 International Business Machines Corporation Compact cognitive synaptic computing circuits with crossbar arrays spatially in a staggered pattern
US8856055B2 (en) * 2011-04-08 2014-10-07 International Business Machines Corporation Reconfigurable and customizable general-purpose circuits for neural networks
CN103019656B (en) * 2012-12-04 2016-04-27 中国科学院半导体研究所 The multistage parallel single instruction multiple data array processing system of dynamic reconstruct

Non-Patent Citations (1)

Title
See references of WO2014149070A1 *

Also Published As

Publication number Publication date
EP2973240A4 (en) 2017-09-27
CN105122278B (en) 2017-03-22
CN105122278A (en) 2015-12-02


Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20150903

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

RIN1 Information on inventor provided before grant (corrected)

Inventor name: SRINIVASA, NARAYAN

Inventor name: CHO, YOUNGKWAN

DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20170825

RIC1 Information provided on ipc code assigned before grant

Ipc: G06N 3/04 20060101AFI20170821BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20190531

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

REG Reference to a national code

Ref country code: DE

Ref legal event code: R003

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20211006