EP3871153A1 - Retro-propagation d'erreurs sous forme impulsionnelle dans un reseau de neurones impulsionnels - Google Patents

Back-propagation of errors in spike form in a spiking neural network

Info

Publication number
EP3871153A1
Authority
EP
European Patent Office
Prior art keywords
neuron
error
phase
layer
artificial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP19787001.7A
Other languages
German (de)
English (en)
French (fr)
Inventor
Johannes THIELE
Olivier Bichler
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Commissariat a l'Energie Atomique et aux Energies Alternatives CEA
Original Assignee
Commissariat a l'Energie Atomique CEA
Commissariat a l'Energie Atomique et aux Energies Alternatives CEA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Commissariat a l'Energie Atomique CEA, Commissariat a l'Energie Atomique et aux Energies Alternatives CEA
Publication of EP3871153A1
Current legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N3/065Analogue means

Definitions

  • The invention relates to the field of artificial neural networks, and more specifically to that of spiking neural networks.
  • Artificial neural networks are essentially composed of neurons interconnected by synapses, which are conventionally implemented by digital memories but can also be implemented by resistive components whose conductance varies as a function of the voltage applied across their terminals.
  • Spiking neural networks are generally optimized by implementing supervised or unsupervised learning methods.
  • These methods comprise a first phase in which data presented at the input of the neural network are propagated to its output layer, followed by a second phase in which errors are propagated backwards from the output layer to the input layer.
  • During this second phase, the synapses are updated from errors computed locally by each neuron, as a function of the errors back-propagated from the previous layer of the neural network.
  • The invention relates to a spiking neuron and a spiking neural network designed to implement an error back-propagation algorithm in which the errors take the form of signed or unsigned pulses, i.e. binary or ternary data.
  • Spiking neural networks have the advantage of allowing implementation on computers with constrained resources, because the processing carried out during the propagation phase of learning, or during a classification phase, does not require any floating-point multiplication.
  • The data are coded in the form of pulses (signed or unsigned), and the processing executed by each neuron can therefore be implemented using only accumulators and comparators.
  • The use of floating-point multiplication operators is thus avoided, which is a definite advantage for a digital or analog implementation on devices with limited resources, as illustrated by the sketch below.
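
As a minimal illustration of this property, the following Python sketch (not part of the patent text; the function and variable names are assumptions chosen for readability) shows that integrating ternary pulses s in {-1, 0, 1} weighted by synaptic coefficients reduces to signed accumulations and comparisons, with no floating-point multiplication:

```python
# Minimal sketch: ternary-pulse integration using only an accumulator and
# comparators; all names (integrate_pulse, V, theta_ff) are illustrative.

def integrate_pulse(V, w, s, theta_ff):
    """Accumulate one incoming pulse s in {-1, 0, 1} weighted by the synaptic
    coefficient w, then apply the threshold test. Only additions and
    comparisons are used; two floats are never multiplied."""
    if s == 1:            # signed pulse: add or subtract the stored weight
        V += w
    elif s == -1:
        V -= w
    out = 0
    if V > theta_ff:      # positive activation threshold
        out = 1
        V -= theta_ff     # reset by subtraction of the threshold
    elif V < -theta_ff:   # negative activation threshold (ternary model)
        out = -1
        V += theta_ff
    return V, out
```

The neuron's integration variable V is threaded through successive calls, and the generated pulse out is itself ternary, so it can be propagated onward in exactly the same form.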
  • However, the back-propagation algorithm conventionally used to update the values of the synapses during a learning phase requires floating-point multiplications to compute the errors local to each neuron. Furthermore, it also requires the synchronous propagation of these errors, in the form of floating-point numbers, between each layer of neurons of the network.
  • By contrast, spiking neural networks are based on an asynchronous propagation logic, in which data travel in the form of pulses.
  • In other words, the back-propagation algorithm is generally not implemented within a spiking neural network in a way that takes the hardware constraints of such a network into account.
  • The invention proposes a new implementation of an error back-propagation algorithm which is adapted to the hardware constraints of a device implementing a spiking neural network.
  • In particular, the invention uses a binary or ternary coding of the errors computed during the back-propagation phase, in order to adapt the implementation to the constraints of the network and thus avoid resorting to floating-point multiplication operators.
  • The invention proposes a global adaptation of the back-propagation algorithm to the specific constraints of a spiking neural network.
  • The invention makes it possible to use the same propagation infrastructure for the propagation of the data and for the backward propagation of the errors during the learning phase.
  • The invention provides a generic implementation of a spiking neuron which is suitable for implementing all types of spiking neural networks, in particular convolutional networks.
  • The invention thus relates to an artificial spiking neuron belonging to an intermediate layer of several neurons, the intermediate layer belonging to a neural network comprising several successive layers, the neural network being configured to execute a learning mechanism comprising a first phase of propagation of data from an input layer to an output layer and a second phase of back-propagation of errors from the output layer to the input layer, the artificial spiking neuron comprising, for the execution of the second back-propagation phase:
  • a first input/output interface able to receive binary or ternary error signals weighted by synaptic coefficients,
  • an error calculation module configured to compute a binary or ternary local error signal from a binary or ternary intermediate signal generated by the neuron in response to the error signals received and from an estimate of the derivative of an equivalent activation function implemented by the neuron during the first phase of data propagation,
  • a second input/output interface able to propagate the local binary or ternary error signal to several synapses in the form of pulses.
  • According to a particular aspect, the first input/output interface is able to transmit binary or ternary signals to several synapses in the form of pulses during the first phase of data propagation, and the second input/output interface is able to receive binary or ternary signals weighted by synaptic coefficients during the first phase of data propagation.
  • According to a particular aspect, the artificial spiking neuron comprises:
  • at least one comparator for comparing the cumulative error with at least one activation threshold among a positive activation threshold and a negative activation threshold,
  • an activation module configured to generate the binary or ternary intermediate signal depending on the result of the at least one comparator.
  • The invention also relates to an artificial spiking neuron belonging to an intermediate layer of several neurons, the intermediate layer belonging to a neural network comprising several successive layers, the neural network being configured to execute a learning mechanism comprising a first phase of propagation of data from an input layer to an output layer and a second phase of error back-propagation from the output layer to the input layer, the artificial spiking neuron comprising, for the execution of the second back-propagation phase:
  • a first input/output interface able to receive binary or ternary error signals,
  • an error calculation module configured to compute a binary or ternary local error signal from a binary or ternary intermediate signal generated by the neuron in response to the error signals received and from an estimate of the derivative of an equivalent activation function implemented by the neuron during the first phase of data propagation,
  • a second input/output interface able to propagate the local binary or ternary error signal to the neurons of the next layer.
  • According to a particular aspect, the first input/output interface is able to transmit binary or ternary signals to the neurons of the next layer during the first phase of data propagation, and the second input/output interface is able to receive binary or ternary signals during the first phase of data propagation.
  • According to a particular aspect, the artificial spiking neuron comprises:
  • an activation module configured to generate the binary or ternary intermediate signal depending on the result of at least one comparator.
  • According to a particular aspect, the activation module is configured to generate a positive intermediate signal when the cumulative error is greater than the positive activation threshold, and a negative intermediate signal when the cumulative error is less than the negative activation threshold.
  • The artificial spiking neuron according to any one of the embodiments of the invention further comprises a subtractor for subtracting from the cumulative error the value of the positive activation threshold when a positive intermediate signal is generated, and for subtracting from the cumulative error the value of the negative activation threshold when a negative intermediate signal is generated.
  • The artificial spiking neuron according to any one of the embodiments of the invention further comprises a module for calculating an update of the synaptic coefficients from the local error and from a result of the equivalent activation function.
  • The result of the equivalent activation function is calculated by a neuron during the data propagation phase.
  • The module for calculating an update of the synaptic coefficients is activated after the propagation of the local error.
  • The module for calculating a local error signal is configured to compute a product of the intermediate signal and the estimate of the derivative of the equivalent activation function.
  • The equivalent activation function is a function of integration of the pulses generated by the neuron, weighted by a learning rate parameter of the neural network.
  • The artificial spiking neuron comprises a derivative calculation module configured to compute the estimate of the derivative of the equivalent activation function from a result of the equivalent activation function implemented by the neuron during the first phase of data propagation and from an integration variable of the neuron during the first phase of data propagation.
  • For example, the estimate of the derivative of the equivalent activation function is equal to 1 when the result of said function is strictly positive or the integration variable is strictly positive, and is equal to 0 otherwise (see the sketch below).
  • The derivative calculation module is activated during the data propagation phase of the neuron or during the error back-propagation phase of the neuron.
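
As a minimal sketch of this rule (the function name and arguments are our assumptions, not the patent's), the binary derivative estimate can be written:

```python
# Binary estimate a' of the derivative of the equivalent activation function,
# following the rule stated above; names are illustrative assumptions.

def derivative_estimate(V, x):
    """Return 1 if the last integration variable V or the accumulated,
    rate-weighted activation x is strictly positive, and 0 otherwise."""
    return 1 if (V > 0 or x > 0) else 0
```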
  • The invention also relates to an artificial spiking neural network configured to execute a learning mechanism comprising a first phase of propagation of data from an input layer to an output layer and a second phase of back-propagation of errors from the output layer to the input layer, the neural network comprising several layers of artificial spiking neurons according to any one of the embodiments of the invention, each neuron being connected to at least one neuron of a next layer or to a neuron of a previous layer via a synapse.
  • The network comprises an input layer and an output layer, the neurons of the input layer being configured to receive data to be propagated in binary or ternary form, and the neurons of the output layer being configured to compute, from the data propagated during the first phase, an error between the result obtained and a target result.
  • Each neuron of the output layer comprises an encoder configured to code the error into a set of at least one binary or ternary signal.
  • The encoder is configured to quantize the error on two or three quantization levels so as to generate a binary or ternary signal.
  • Alternatively, the encoder is configured to code the error as a group of successive binary signals or a group of successive ternary signals.
  • The invention also relates to an artificial neural network configured to execute a learning mechanism comprising a first phase of data propagation from an input layer to an output layer and a second phase of back-propagation of errors from the output layer to the input layer, the neural network comprising several layers of artificial spiking neurons according to the second embodiment of the invention, each neuron being connected to at least one neuron of a next layer or to a neuron of a previous layer via a synapse having a synaptic weight, the synapses being implemented in the form of digital memories, memristive devices or analog circuits.
  • Each synapse is configured to update its synaptic weight in response to a binary or ternary error signal received from a neuron of a next layer and to a signal representative of the result of the equivalent activation function received from a neuron of a previous layer.
  • FIG. 1, a general diagram of a network of artificial spiking neurons;
  • FIG. 2, a diagram of an artificial spiking neuron, according to a first embodiment of the invention, executing the first phase of data propagation of a learning mechanism;
  • FIG. 3, a diagram of an artificial spiking neuron, according to a second embodiment of the invention, executing the first phase of data propagation of a learning mechanism;
  • FIG. 4, a diagram of an output neuron comprising an encoder for coding the computed error in the form of pulses;
  • FIG. 5, a diagram of an artificial spiking neuron, according to the first embodiment of the invention, executing the second phase of error back-propagation of a learning mechanism;
  • FIG. 6, a diagram of an artificial spiking neuron, according to the second embodiment of the invention, executing the second phase of error back-propagation of a learning mechanism.
  • FIG. 1 represents a general diagram of a network of artificial spiking neurons.
  • A neural network is conventionally composed of several layers C_e, C_l, C_{l+1}, C_s of interconnected spiking neurons.
  • The network comprises at least one input layer C_e, one output layer C_s and at least one intermediate layer C_l.
  • The neurons N_{i,e} of the input layer C_e each receive an input datum 101.
  • The input data can be of different natures depending on the intended application.
  • For example, the inputs of a neural network may be pixels of an image, audio or text data, or more generally any type of data that can be coded in the form of pulses.
  • The applications of a neural network notably include the classification and detection of objects in an image or a video, for devices on board autonomous vehicles or for video-surveillance devices associated with video-surveillance cameras.
  • A neural network is, for example, used in the field of image classification or image recognition, or more generally for the recognition of characteristics which can be visual, audio or both.
  • Each neuron of a layer is connected, by its input and/or its output, to all the neurons of the previous or next layer. More generally, a neuron may be connected to only part of the neurons of another layer, in particular in the case of a convolutional network.
  • The connections 102, 103, 104 between two neurons are made through artificial synapses S_1, S_2, S_3, which can be realized, in particular, by digital memories or by memristive devices.
  • The coefficients of the synapses can be optimized thanks to a learning mechanism of the neural network. This mechanism comprises two distinct phases: a first phase of propagation of data from the input layer to the output layer, and a second phase of back-propagation of errors from the output layer to the input layer with, for each layer, an update of the weights of the synapses.
  • During the first phase, training data, for example images or sequences of images, are supplied at the input of the neurons of the input layer and propagated through the network.
  • The data are coded as asynchronous pulses.
  • The pulses correspond to binary or ternary signals; in other words, they can be unsigned or signed pulses.
  • Each neuron implements, during this first phase, a function of integration of the pulses it receives from the neurons of the previous layer (or of the pulses coded from the input data, for the neurons of the input layer).
  • The integration function essentially consists of an accumulation of pulses weighted by the weights of the artificial synapses.
  • Each neuron also implements an activation function which consists, based on a comparison of the integration variable with one or two activation threshold(s), in generating and propagating a pulse towards the neurons of the next layer.
  • The integration function and the activation function may vary depending on the neuron model chosen.
  • In particular, a leakage current can be implemented by the neuron to attenuate the integration variable over time when no pulse is received.
  • The neurons N_{i,s} of the output layer C_s perform additional processing, in that they compute an error between an integration result of the pulses received by the neuron N_{i,s} and an expected or target value, which corresponds to the final state that one wishes the output neuron to reach for the training data presented at the input.
  • During the second phase, the neurons of the output layer C_s transmit the computed errors to the neurons of the previous layer C_{l+1}, which compute a local error from the back-propagated errors and in turn transmit this local error to the previous layer C_l.
  • In parallel, each neuron computes, from its local error, an update value for the weights of the synapses to which it is connected, and updates these synapses. The process continues for each layer of neurons down to the penultimate layer, which is responsible for updating the weights of the synapses connecting it to the input layer C_e. The overall flow of the two phases is summarized in the sketch below.
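
The following high-level Python sketch summarizes the two phases of the learning mechanism as just described; the Layer object and its method names (propagate, encode_error, backprop, update_weights) are assumptions introduced purely for illustration:

```python
# High-level sketch of the two-phase learning mechanism; all object and
# method names are illustrative assumptions, not the patent's terminology.

def training_step(layers, coded_input, target):
    # Phase 1: data propagation, input layer -> output layer; everything
    # travels as binary/ternary pulses.
    pulses = coded_input
    for layer in layers:
        pulses = layer.propagate(pulses)

    # Output layer: compare the obtained result with the target and encode
    # the error as binary/ternary pulses.
    error_pulses = layers[-1].encode_error(target)

    # Phase 2: error back-propagation, output layer -> input layer, with a
    # synaptic weight update per layer.
    for layer in reversed(layers[:-1]):
        error_pulses = layer.backprop(error_pulses)   # local ternary errors
        layer.update_weights()
```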
  • An object of the invention is to propose a particular implementation of the error back-propagation phase which is adapted to the implementation and hardware constraints of spiking neurons.
  • FIG. 2 describes an example of a spiking neuron according to a first embodiment of the invention and its operation during the data propagation phase.
  • In FIG. 2 is shown a neuron N_{i,l} belonging to an intermediate layer C_l.
  • The neuron N_{i,l} is connected downstream to the neurons of a next layer C_{l+1} by means of synapses W_{1,l+1}, W_{2,l+1}, ..., W_{K,l+1}.
  • In this first embodiment, the synapses are realized by memristive devices, memristors or any equivalent analog circuit.
  • The neuron N_{i,l} is connected upstream to the neurons of a previous layer C_{l-1} via synapses W_{1,l}, W_{2,l}, ..., W_{K,l}.
  • The neuron N_{i,l} receives, via an input/output interface I/O_2, pulses emitted by the neurons of the previous layer and weighted by the weights of the synapses W_{1,l}, W_{2,l}, ..., W_{K,l}.
  • Synapses realized by memristive devices receive a signed or unsigned pulse of constant amplitude emitted by a neuron and emit at their output a pulse amplified by a value representative of the weight of the synapse.
  • Thus, the signals received by the neuron N_{i,l} correspond to binary or ternary signals weighted by the weights of the synapses.
  • The signals received are integrated by an integration module INT, which performs an integration function that depends on the nature of the neuron.
  • Typically, the integration function consists in summing or integrating, over time, the signals received.
  • Optionally, the integration function comprises an attenuation or leakage function to decrease the integration variable over time when no signal is received by the neuron.
  • The integration variable V_{i,l} obtained at the output of the integration module INT is then compared with one or more activation thresholds via a comparator COMP. According to a first variant, a single positive activation threshold θ_ff is used.
  • When the integration variable V_{i,l} exceeds the positive activation threshold θ_ff, an activation module ACT generates a positive pulse and the integration variable V_{i,l} is reduced by the value of the threshold θ_ff.
  • According to a second variant, a negative threshold -θ_ff is used in addition to the positive threshold θ_ff.
  • When the integration variable V_{i,l} decreases below the negative activation threshold -θ_ff, the activation module ACT generates a negative pulse and the integration variable V_{i,l} is reduced by the value of the threshold -θ_ff (or, equivalently, increased by the value θ_ff).
  • The absolute values of the two thresholds can be equal or different.
  • The pulses generated by the activation module ACT are transmitted to an input/output interface I/O_1 to be propagated to the synapses W_{1,l+1}, W_{2,l+1}, ..., W_{K,l+1} connected between the neuron N_{i,l} and the neurons of the next layer C_{l+1}.
  • The integration variable V_{i,l}(t) computed by the integration module INT over time can be represented by the following relation: V_{i,l}(t) = V_{i,l}(t-1) + Σ_j W_{ij,l}·s_{j,l-1}(t) - θ_ff·s_{i,l}(t), where the s_{j,l-1}(t) are the pulses received from the previous layer.
  • s_{i,l}(t) represents the pulse generated by the neuron, whose value is given by the following relation in the case of a ternary signal: s_{i,l}(t) = 1 if V_{i,l}(t) > θ_ff, s_{i,l}(t) = -1 if V_{i,l}(t) < -θ_ff, and s_{i,l}(t) = 0 otherwise.
  • The integration variable V_{i,l} can be initialized to a value other than 0 at the start of the data propagation phase.
  • The processing described above and implemented by the neuron N_{i,l} is based only on accumulations and comparisons and does not require any floating-point multiplication. The neuron N_{i,l} additionally performs two further calculation functions to compute variables which will be used during the error back-propagation phase.
  • First, a second integration module FAE is used to accumulate the pulses s_{i,l} generated by the neuron over time, the accumulation being weighted by a learning rate η_l which is a parameter of the neural network. This learning rate η_l can be different for each layer of the network.
  • The variable x_{i,l} obtained at the output of this second integration module is represented by the following relation: x_{i,l}(t) = η_l·Σ_{t'≤t} s_{i,l}(t').
  • This variable can also be represented by the relation x_{i,l}(t) = η_l·a_{i,l}(t), where a_{i,l}(t) corresponds to the accumulation over time of the pulses generated by the neuron and is called the equivalent activation function of the neuron.
  • In other words, the variable x_{i,l}(t) corresponds to the equivalent activation function weighted by the learning rate parameter η_l.
  • The calculation of the variable x_{i,l} does not require any multiplication either, because the pulses s_{i,l} take the values 1, 0 or -1.
  • Indeed, the calculation of the variable x_{i,l} consists of an accumulation of the value η_l, as shown in the sketch below.
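
A minimal sketch of one simulation time step of this forward phase is given below, combining the relations for V_{i,l}(t), s_{i,l}(t) and x_{i,l}(t); the names are our assumptions, not those of the patent:

```python
# One forward time step of a spiking neuron (first embodiment), combining
# the relations above; names are illustrative assumptions.

def forward_step(V, x, a, weights, in_pulses, theta_ff, eta):
    """in_pulses[j] in {-1, 0, 1}; weights[j] is the synaptic weight W_{ij,l}.
    Returns the updated (V, x, a) and the generated pulse s in {-1, 0, 1}."""
    for w, s_in in zip(weights, in_pulses):
        if s_in == 1:      # weighted accumulation by signed additions only
            V += w
        elif s_in == -1:
            V -= w
    s = 0
    if V > theta_ff:
        s, V = 1, V - theta_ff
    elif V < -theta_ff:
        s, V = -1, V + theta_ff
    if s != 0:
        a += s                           # equivalent activation a_{i,l}(t)
        x += eta if s == 1 else -eta     # accumulation of the value eta
    return V, x, a, s
```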
  • A derivative calculation module DER is also used to compute an estimate a'_{i,l} of the derivative of the equivalent activation function a_{i,l} of the neuron. This estimate is used during the error back-propagation phase.
  • For example, the estimate a'_{i,l} is equal to 1 if the integration variable V_{i,l} is greater than 0 or if the variable x_{i,l} is greater than 0, and equals 0 otherwise.
  • Other estimates a'_{i,l} of the derivative of the equivalent activation function can be determined, so long as they produce a binary variable {0; 1} or a ternary variable {-1; 0; 1}.
  • Indeed, a feature of the invention is that the estimate a'_{i,l} is a binary or ternary variable, so as to avoid calculations involving floating-point multiplications.
  • The values V_{i,l}(t) and x_{i,l}(t) used to compute a'_{i,l} are the last updated values computed by the neuron during the data propagation phase, for a learning sequence presented at the input of the neural network.
  • The calculation of the estimate a'_{i,l} of the derivative of the equivalent activation function a_{i,l} of the neuron can be carried out during the data propagation phase, in which case the value of a'_{i,l} is saved in a memory to be used during the error back-propagation phase.
  • The calculation of the estimate a'_{i,l} can also be carried out during the error back-propagation phase, from the last values of V_{i,l}(t) and x_{i,l}(t) saved by the neuron.
  • According to a variant, the activation function implemented by the neuron can be modified so that a negative pulse is generated only when the integration variable falls below the negative threshold and the accumulated activation a_{i,l}(t) is strictly positive.
  • In this way, the sum of the pulses generated by the neuron, represented by the variable a_{i,l}(t), always remains greater than or equal to 0, as sketched below.
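
The following fragment sketches this modified activation rule, under the assumption (ours, for illustration) that negative pulses are simply suppressed while the accumulated activation a is zero:

```python
# Modified activation keeping the equivalent activation a >= 0; the exact
# gating condition is an illustrative reading of the description above.

def activation_step(V, a, theta_ff):
    s = 0
    if V > theta_ff:
        s, V = 1, V - theta_ff
    elif V < -theta_ff and a > 0:   # negative pulse only if a stays >= 0
        s, V = -1, V + theta_ff
    return V, a + s, s
```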
  • FIG. 3 represents a second embodiment of the spiking neuron according to the invention.
  • In this second embodiment, the neuron is implemented by a digital device.
  • Binary or ternary pulse signals are encoded as binary or ternary digital signals and are transmitted between neurons via a digital communication infrastructure.
  • In this case, the synapses are no longer realized by active devices positioned on the connections between two neurons.
  • Instead, the weights of the synapses are stored in a digital memory MEM_W.
  • The signals received by the input interface I/O_2 are binary or ternary signals (depending on the neuron model chosen).
  • The integration module INT is modified to compute the sum of the received signals weighted by the weights of the synapses, which are read from the memory MEM_W. In other words, the weighting of the signals by the weights of the synapses is carried out by the neuron and not by the synapses, as in the first embodiment.
  • Each neuron of the output layer is configured to compute a variable and to compare it with a desired target value for that variable.
  • The variable used can be the integration variable V_{i,s} computed by the integration module INT, the pulses s_{i,s} generated by the activation module ACT, the result of the equivalent activation function a_{i,s}, or any combination of one or more of these variables or of other variables computed by the output neuron.
  • The target value is chosen according to the application. For example, if the neural network is used for object classification, the target value corresponds to the object that each output neuron is supposed to detect.
  • More generally, each output neuron can compute a cost function depending on one or more of the computed variables, or on a combination of one or more of these variables, and on a target or desired value.
  • The error computed by the output neuron is then equal to the derivative of the cost function with respect to each variable used.
  • For example, the cost function used can depend only on the equivalent activation function a_{i,s}(t), and the computed error will then depend on the derivative of this equivalent activation function.
  • Let δ_{i,s} denote the error computed by the output neuron N_{i,s}. This error is then coded, using an encoder included in the output neuron, in the form of pulses or digital data. Different types of coding are possible.
  • According to a first variant, the pulses can be coded on three levels, as ternary signals.
  • In this case, the error δ_{i,s}, which is a floating-point number, is quantized on three levels -1, 0, 1 and transmitted to the neurons of the preceding layer via a digital signal or a ternary pulse.
  • According to a second variant, the error δ_{i,s} is decomposed into a sum of the values -1, 0 and 1 and is coded by a group of digital data or of ternary pulses.
  • For example, the value 5.3 can be coded (after rounding) by five successive positive pulses,
  • and the value -3.2 can be coded by three successive negative pulses.
  • The pulses can also be coded on two levels, as binary signals, according to the two coding variants presented above (see the sketch below).
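
A possible reading of these two coding variants is sketched below; the rounding convention and the function names are assumptions made for illustration only:

```python
# Two illustrative codings of a floating-point output error into ternary form.

def encode_sign(delta):
    """First variant: quantize the error onto the three levels -1, 0, 1."""
    return 1 if delta > 0 else (-1 if delta < 0 else 0)

def encode_pulse_train(delta):
    """Second variant: decompose the (rounded) error into a train of unit
    pulses, e.g. 5.3 -> [1, 1, 1, 1, 1] and -3.2 -> [-1, -1, -1]."""
    n = round(abs(delta))
    sign = 1 if delta > 0 else -1
    return [sign] * n
```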
  • FIG. 4 represents an example of implementation of an output neuron N_{i,s}. It mainly includes an input/output interface I/O_2, an integration module INT, an error calculation module CALC_ER and an encoder COD for coding the computed error into pulses, which are then back-propagated to the previous layer via the input/output interface I/O_2.
  • The output neuron N_{i,s} can also include a comparator COMP and an activation module ACT when these are necessary for the calculation of the variables used for the computation of the error.
  • FIG. 5 describes, according to the first embodiment of the invention, the operation of a spiking neuron N_{i,l} of an intermediate layer C_l during the back-propagation phase of the errors computed by the neurons of the output layer.
  • In this first embodiment, the synapses are realized by memristive devices, memristors or any equivalent analog circuit, as explained in the paragraph relating to FIG. 2.
  • In FIG. 5, only the calculation modules and operators of the neuron which intervene during the back-propagation phase are represented. In a real implementation, the neuron comprises both the modules and operators described in FIG. 5, which are activated during the back-propagation phase, and the modules and operators described in FIG. 2, which are activated during the data propagation phase.
  • The synapses W_{1,l+1}, W_{2,l+1}, ..., W_{K,l+1} receive the errors computed by the neurons of the previous layer C_{l+1} (previous in the direction of back-propagation) in the form of binary or ternary pulses. Each synapse sends in response a signal corresponding to the received pulse weighted by the weight of the synapse. These weighted error signals are received by the input/output interface I/O_1 and then processed by an integration module INT_ER, which performs an accumulation of the received signals.
  • The integration module INT_ER activated during the back-propagation phase performs the same function as the integration module INT activated during the data propagation phase. They can be implemented by two separate modules or by the same module.
  • The integration variable U_{i,l} obtained at the output of the integration module INT_ER is then compared with one or two activation thresholds. For example, when the signals received are ternary signals, a positive activation threshold θ_bp and a negative activation threshold -θ_bp are used, via a comparator COMP_ER, which can be realized by the same component as the comparator COMP described in FIG. 2.
  • When the integration variable U_{i,l} exceeds the positive activation threshold θ_bp, an activation module ACT_ER generates a positive pulse and the integration variable U_{i,l} is reduced by the value of the threshold θ_bp.
  • When the integration variable U_{i,l} decreases below the negative activation threshold -θ_bp, the activation module ACT_ER generates a negative pulse and the integration variable U_{i,l} is reduced by the value of the threshold -θ_bp.
  • The activation module ACT_ER can be realized by the same component as the activation module ACT described in FIG. 2.
  • The integration variable of the neuron during the error back-propagation phase is given by the following relation: U_{i,l}(t) = U_{i,l}(t-1) + Σ_j W_{ij,l+1}·δ_{j,l+1}(t), where the δ_{j,l+1}(t) are the back-propagated error pulses.
  • The signal generated by the activation module ACT_ER is an intermediate pulse signal z_{i,l}. It can be represented by the following relation: z_{i,l}(t) = 1 if U_{i,l}(t) > θ_bp, z_{i,l}(t) = -1 if U_{i,l}(t) < -θ_bp, and z_{i,l}(t) = 0 otherwise.
  • More generally, the positive activation threshold can be replaced by T + θ_bp and the negative activation threshold by T - θ_bp, with T a positive, negative or zero constant.
  • In this case, the intermediate pulse signal z_{i,l} can be represented by the relation: z_{i,l}(t) = 1 if U_{i,l}(t) > T + θ_bp, z_{i,l}(t) = -1 if U_{i,l}(t) < T - θ_bp, and z_{i,l}(t) = 0 otherwise.
  • According to a variant, a single activation threshold θ_bp is used instead of two thresholds.
  • In this case, when a pulse is generated, the integration variable U_{i,l} is reduced by a predetermined value, which can be equal to the threshold θ_bp or to a value different from the threshold θ_bp. A sketch of the ternary error-integration step is given below.
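
The sketch below implements this error-integration step for the ternary case, mirroring the forward step shown earlier; the names remain illustrative assumptions:

```python
# Error-integration step during back-propagation (ternary case, thresholds
# +theta_bp and -theta_bp); names are illustrative assumptions.

def backprop_step(U, weights_next, err_pulses, theta_bp):
    """err_pulses[j] in {-1, 0, 1} are the errors received from the next
    layer (the previous one in the back-propagation direction);
    weights_next[j] is the synaptic weight W_{ij,l+1}."""
    for w, d in zip(weights_next, err_pulses):
        if d == 1:
            U += w
        elif d == -1:
            U -= w
    z = 0
    if U > theta_bp:
        z, U = 1, U - theta_bp
    elif U < -theta_bp:
        z, U = -1, U + theta_bp
    return U, z
```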
  • According to a variant embodiment, the back-propagation algorithm executed by the neural network is implemented in two successive sub-phases using binary intermediate signals.
  • In a first sub-phase, the back-propagation algorithm is executed by considering a binary intermediate pulse signal z_{i,l} represented by the following relation, with θ_bp a positive activation threshold: z_{i,l}(t) = 1 if U_{i,l}(t) > θ_bp, and z_{i,l}(t) = 0 otherwise.
  • When the integration variable U_{i,l} exceeds the activation threshold θ_bp, the activation module ACT_ER generates a positive pulse and the integration variable U_{i,l} is reduced by the value of the threshold θ_bp.
  • In a second sub-phase, the back-propagation algorithm is executed by considering a binary intermediate pulse signal z_{i,l} represented by the following relation, with -θ_bp a negative activation threshold: z_{i,l}(t) = -1 if U_{i,l}(t) < -θ_bp, and z_{i,l}(t) = 0 otherwise.
  • When the integration variable U_{i,l} goes below the activation threshold -θ_bp, the activation module ACT_ER generates a negative pulse and the integration variable U_{i,l} is reduced by the value of the threshold -θ_bp (or, equivalently, increased by the value θ_bp).
  • This intermediate signal is then used to generate a local error in the same pulse form.
  • The local error is computed by the calculation module ER_LOC from the intermediate signal z_{i,l} and from the estimate a'_{i,l} of the derivative of the equivalent activation function of the neuron.
  • The estimate a'_{i,l} was either computed by the neuron during the data propagation phase (see FIG. 2), or is computed during the error back-propagation phase from the last values of the variables x_{i,l} and V_{i,l} saved at the end of the data propagation phase.
  • The local error is computed as the product of the intermediate signal z_{i,l} and the estimate a'_{i,l}, which is a binary or ternary variable according to the model of the estimate a'_{i,l} chosen: δ_{i,l}(t) = z_{i,l}(t)·a'_{i,l}.
  • Thus, the computation of the local error does not require any floating-point multiplication, and its result is a ternary variable (taking the values 1, 0 or -1) or a binary variable, as shown in the sketch below.
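
Expressed as code under the same assumptions as the earlier sketches, the local-error computation is a pure gating operation:

```python
# Ternary local error delta = z * a', computed without any multiplication;
# names are illustrative assumptions.

def local_error(z, a_prime):
    """z in {-1, 0, 1} and a_prime in {0, 1}: the product reduces to a gate."""
    return z if a_prime == 1 else 0
```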
  • The local error δ_{i,l}(t) is then propagated to the synapses of the next layer C_{l-1} (next in the direction of back-propagation) via the input/output interface I/O_2.
  • The neuron also includes a module MAJ for calculating an update of the weights of the synapses. This update is computed from the local error and from the variable x_{j,l-1} transmitted by each neuron of the layer C_{l-1} to which the neuron N_{i,l} is connected.
  • The variable x_{j,l-1} was previously computed during the data propagation phase.
  • The weights of the synapses are preferably updated after the propagation of the errors to the next layer; however, it is also possible to reverse the order of these two actions.
  • For example, the variable x_{j,l-1} is stored in a memory MEM_X accessible to the neurons of two consecutive layers, as shown in FIG. 5.
  • According to a variant, the module MAJ for calculating an update of the weights of the synapses is eliminated, and the updating of the synaptic weights is directly carried out by the memristive devices which realize the synapses.
  • In this case, the error signals δ_{i,l}(t) are propagated from the neurons of the layer C_l to the synapses W_{1,l}, W_{2,l}, ..., W_{K,l}, and the variables x_{j,l-1} are propagated in the form of signals from the neurons of the layer C_{l-1} to the same synapses W_{1,l}, W_{2,l}, ..., W_{K,l}.
  • Each synapse then updates its synaptic weight directly from the interaction of the two signals δ_{i,l}(t) and x_{j,l-1}(t), by analyzing the difference in the potentials of the two signals so as to update its weight with a value representative of the term δ_{i,l}(t)·x_{j,l-1}(t), as sketched below.
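
Since δ_{i,l}(t) is ternary, this weight update again reduces to a signed addition of x_{j,l-1}; a minimal illustrative version (the sign convention is an assumption):

```python
# Synaptic weight update from a ternary local error; names and sign
# convention are illustrative assumptions.

def update_weight(w, delta, x_prev):
    """w += delta * x_prev with delta in {-1, 0, 1}: an add or subtract
    of the stored value x_prev, never a floating-point multiplication."""
    if delta == 1:
        w += x_prev
    elif delta == -1:
        w -= x_prev
    return w
```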
  • FIG. 6 describes the operation of a spiking neuron N_{i,l} of an intermediate layer C_l during the back-propagation phase, according to the second embodiment of the invention described in FIG. 3.
  • In this second embodiment, the neuron is implemented by a digital device.
  • Binary or ternary error signals are coded as binary or ternary digital signals and are transmitted between neurons via the same communication infrastructure as the signals propagated during the data propagation phase.
  • The weights of the synapses are stored in a digital memory MEM_W.
  • The integration module INT_ER is modified to compute the sum of the received signals weighted by the weights of the synapses, which are read from the memory MEM_W.
  • The memory storing the weights of the synapses of the layer C_l is directly updated by the calculation module MAJ for updating the weights of the synapses.
  • Different architectures are possible for storing the weights of the synapses.
  • In particular, the integration module INT_ER is configured to access the memory in which the weights of the synapses of the previous layer C_{l+1} (previous in the direction of back-propagation) are stored.
  • The signals exchanged between the neurons can be implemented by two separate buses.
  • A first data bus is used to transmit the generated pulses via a value 1, in the case of binary signals, or a value 1 or -1, in the case of ternary signals.
  • A second, asynchronous signaling bus is used to signal to a neuron the reception (or transmission) of a data item.
  • In other words, the second asynchronous bus is used to transmit the information that a value other than 0 is present on the data bus.
  • The second asynchronous bus can be, for example, a bus of the AER ("Address Event Representation") type.
  • The assembly formed by the data bus and the asynchronous bus is capable of transmitting a binary digital signal or a ternary digital signal.
  • For the binary signal, the reader will understand that the value "1" is indicated by the data bus and the value "0" by the asynchronous bus.
  • For the ternary signal, the values "1" and "-1" are indicated by the data bus and the value "0" by the asynchronous bus, as sketched below.
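
One way to picture this dual-bus encoding is as a stream of (address, sign) events in which silent neurons simply emit nothing; the representation below is our assumption for illustration, not the patent's AER format:

```python
# Illustrative event-based view of the dual-bus encoding: only non-zero
# pulses produce an event; the absence of an event encodes the value 0.

from typing import List, NamedTuple

class SpikeEvent(NamedTuple):
    address: int   # identifies the emitting neuron (AER-style address)
    sign: int      # +1 or -1 on the data bus (always +1 for binary signals)

def encode_layer(pulses: List[int]) -> List[SpikeEvent]:
    """Turn a vector of ternary pulses into a sparse event list."""
    return [SpikeEvent(i, s) for i, s in enumerate(pulses) if s != 0]
```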
  • The invention has the advantage of using a binary or ternary representation of the local errors computed during the error back-propagation phase, so that no floating-point multiplication is necessary to implement the error calculations.
  • Furthermore, the same communication infrastructure can be used both for data propagation and for error back-propagation, since the two types of signals are coded in the same way.
  • Certain calculation modules and operators can be shared between the data propagation phase and the error back-propagation phase.
  • For example, the integration modules INT and INT_ER, the comparators COMP and COMP_ER and the activation modules ACT and ACT_ER can each be realized by a single component.
  • More generally, the invention makes it possible to use the same type of device or circuit to carry out the data propagation phase and the error back-propagation phase, since the signals propagated in the two phases are of similar natures and the processing applied to these signals is limited, in both phases, to accumulations and comparisons.
  • The invention can be implemented using hardware and/or software components.
  • The software elements may be available as a computer program product on a computer-readable medium, which may be electronic, magnetic, optical or electromagnetic.
  • The hardware elements may be available, in whole or in part, in particular as dedicated integrated circuits (ASIC) and/or configurable integrated circuits (FPGA) and/or as neural circuits according to the invention, or as a digital signal processor (DSP) and/or a graphics processor (GPU) and/or a microcontroller and/or a general-purpose processor, for example.
  • The neural network according to the invention can be implemented by one or more digital devices comprising at least one digital memory and a communication infrastructure for propagating binary or ternary signals between the neurons.
  • The neural network according to the invention can also be implemented by one or more analog devices comprising at least one memristive device and a communication infrastructure able to propagate analog signals in the form of signed or unsigned pulses.
  • The synapses can be realized in the form of memristive devices or memristors, for example devices of the PCM (Phase-Change Memory) type, RAM or OXRAM memories, or any other equivalent analog device or circuit.
  • Alternatively, a synapse can be implemented by an analog circuit based on at least one capacitance or at least one capacitor, the charge of the capacitance or of the capacitor making it possible to store the value of a synaptic weight.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)
  • Memory System (AREA)
  • Semiconductor Memories (AREA)
EP19787001.7A 2018-10-23 2019-10-22 Retro-propagation d'erreurs sous forme impulsionnelle dans un reseau de neurones impulsionnels Pending EP3871153A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR1859760A FR3087560A1 (fr) 2018-10-23 2018-10-23 Retro-propagation d'erreurs sous forme impulsionnelle dans un reseau de neurones impulsionnels
PCT/EP2019/078669 WO2020083880A1 (fr) 2018-10-23 2019-10-22 Retro-propagation d'erreurs sous forme impulsionnelle dans un reseau de neurones impulsionnels

Publications (1)

Publication Number Publication Date
EP3871153A1 true EP3871153A1 (fr) 2021-09-01

Family

ID=66166046

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19787001.7A Pending EP3871153A1 (fr) 2018-10-23 2019-10-22 Retro-propagation d'erreurs sous forme impulsionnelle dans un reseau de neurones impulsionnels

Country Status (5)

Country Link
US (1) US20210397968A1 (ja)
EP (1) EP3871153A1 (ja)
JP (1) JP7433307B2 (ja)
FR (1) FR3087560A1 (ja)
WO (1) WO2020083880A1 (ja)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200076083A (ko) * 2018-12-19 2020-06-29 에스케이하이닉스 주식회사 오류 역전파를 이용하여 지도 학습을 수행하는 뉴로모픽 시스템
US20200293860A1 (en) * 2019-03-11 2020-09-17 Infineon Technologies Ag Classifying information using spiking neural network
KR102474053B1 (ko) * 2020-06-22 2022-12-06 주식회사 퓨리오사에이아이 뉴럴네트워크 프로세서
US11837281B2 (en) * 2021-08-31 2023-12-05 Integrated Circuit, Interface Circuit And Method Integrated circuit, interface circuit and method
CN114781633B (zh) * 2022-06-17 2022-10-14 电子科技大学 一种融合人工神经网络与脉冲神经网络的处理器
CN115392443B (zh) * 2022-10-27 2023-03-10 之江实验室 类脑计算机操作系统的脉冲神经网络应用表示方法及装置

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150269485A1 (en) * 2014-03-24 2015-09-24 Qualcomm Incorporated Cold neuron spike timing back-propagation
US20170228646A1 (en) * 2016-02-04 2017-08-10 Qualcomm Incorporated Spiking multi-layer perceptron

Also Published As

Publication number Publication date
FR3087560A1 (fr) 2020-04-24
US20210397968A1 (en) 2021-12-23
WO2020083880A1 (fr) 2020-04-30
JP7433307B2 (ja) 2024-02-19
JP2022504942A (ja) 2022-01-13

Similar Documents

Publication Publication Date Title
WO2020083880A1 (fr) Retro-propagation d'erreurs sous forme impulsionnelle dans un reseau de neurones impulsionnels
EP3449423B1 (fr) Dispositif et procede de calcul de convolution d'un reseau de neurones convolutionnel
FR3025344A1 (fr) Reseau de neurones convolutionnels
WO2019020384A9 (fr) Calculateur pour reseau de neurones impulsionnel avec agregation maximale
CA2992036A1 (fr) Dispositif de traitement de donnees avec representation de valeurs par des intervalles de temps entre evenements
CN113537455B (zh) 突触权重训练方法、电子设备和计算机可读介质
WO2022008605A1 (fr) Dispositif électronique et procédé de traitement de données à base de réseaux génératifs inversibles, système électronique de détection et programme d'ordinateur associés
FR2716279A1 (fr) Réseau neuronal chaotique récurrent et algorithme d'apprentissage pour celui-ci.
EP3202044B1 (fr) Procede de codage d'un signal reel en un signal quantifie
EP4078817A1 (fr) Procede et dispositif de codage additif de signaux pour implementer des operations mac numeriques a precision dynamique
Babaeizadeh et al. A simple yet effective method to prune dense layers of neural networks
EP4315170A1 (fr) Autoencodeur multimodal a fusion de donnees latente amelioree
US11526735B2 (en) Neuromorphic neuron apparatus for artificial neural networks
WO2022171632A1 (fr) Circuit neuromorphique et procédé d'entraînement associé
US20210073620A1 (en) Neuromorphic spike integrator apparatus
EP4078816A1 (fr) Procede et dispositif de codage binaire de signaux pour implementer des operations mac numeriques a precision dynamique
US11727252B2 (en) Adaptive neuromorphic neuron apparatus for artificial neural networks
US20220121910A1 (en) Neural apparatus for a neural network system
EP4187445A1 (fr) Procédé d'apprentissage de valeurs de poids synaptique d'un réseau de neurones, procédé de traitement de données, programme d'ordinateur, calculateur et système de traitement associés
WO2021245227A1 (fr) Procédé de génération d'un système d'aide à la décision et systèmes associés
Nayak et al. Replication/NeurIPS 2019 Reproducibility Challenge
EP4322061A1 (fr) Dispositif électronique et procédé de traitement de données comportant au moins un modèle d'intelligence artificielle auto-adaptatif avec apprentissage local, système électronique et programme d'ordinateur associés
KR20220071091A (ko) 스파이킹 뉴럴 네트워크를 최적화하는 방법 및 장치
FR3134363A1 (fr) Procédé de prédiction de trajectoires de piétons pour le renforcement de la sécurité de la conduite autonome d’un véhicule, véhicule muni de moyens pour la mise en œuvre de ce procédé

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20210412

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20240102