EP4078817A1 - Procede et dispositif de codage additif de signaux pour implementer des operations mac numeriques a precision dynamique - Google Patents
Procede et dispositif de codage additif de signaux pour implementer des operations mac numeriques a precision dynamique
- Publication number
- EP4078817A1 EP4078817A1 EP20819793.9A EP20819793A EP4078817A1 EP 4078817 A1 EP4078817 A1 EP 4078817A1 EP 20819793 A EP20819793 A EP 20819793A EP 4078817 A1 EP4078817 A1 EP 4078817A1
- Authority
- EP
- European Patent Office
- Prior art keywords
- signal
- coded
- encoding
- coding method
- mac
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 47
- 239000000654 additive Substances 0.000 title description 2
- 230000000996 additive effect Effects 0.000 title description 2
- 230000010354 integration Effects 0.000 claims abstract description 45
- 238000000354 decomposition reaction Methods 0.000 claims abstract description 3
- 210000002569 neuron Anatomy 0.000 claims description 43
- 238000013528 artificial neural network Methods 0.000 claims description 24
- 238000004364 calculation method Methods 0.000 claims description 17
- 238000009825 accumulation Methods 0.000 claims description 13
- 230000000946 synaptic effect Effects 0.000 claims description 11
- 238000005265 energy consumption Methods 0.000 claims description 9
- 238000004088 simulation Methods 0.000 claims description 3
- 230000006870 function Effects 0.000 description 21
- 230000000644 propagated effect Effects 0.000 description 18
- 238000010801 machine learning Methods 0.000 description 13
- 230000004913 activation Effects 0.000 description 9
- 238000001994 activation Methods 0.000 description 9
- 238000013459 approach Methods 0.000 description 7
- 230000015654 memory Effects 0.000 description 5
- 238000010586 diagram Methods 0.000 description 4
- 238000013139 quantization Methods 0.000 description 3
- 238000013178 mathematical model Methods 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 210000000225 synapse Anatomy 0.000 description 2
- 230000006399 behavior Effects 0.000 description 1
- 238000013529 biological neural network Methods 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 238000004590 computer program Methods 0.000 description 1
- 239000012141 concentrate Substances 0.000 description 1
- 230000000593 degrading effect Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 230000001537 neural effect Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000011002 quantification Methods 0.000 description 1
- 230000033764 rhythmic process Effects 0.000 description 1
- 238000004513 sizing Methods 0.000 description 1
- 230000005236 sound signal Effects 0.000 description 1
- 238000012549 training Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
- H03M7/02—Conversion to or from weighted codes, i.e. the weight given to a digit depending on the position of the digit within the block or code word
- H03M7/04—Conversion to or from weighted codes, i.e. the weight given to a digit depending on the position of the digit within the block or code word the radix thereof being two
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F7/38—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
- G06F7/48—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
- G06F7/544—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices for evaluating functions by calculation
- G06F7/5443—Sum of products
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
- H03M7/28—Programmable structures, i.e. where the code converter contains apparatus which is operator-changeable to modify the conversion process
Definitions
- the invention relates to the field of computing architectures for machine learning models, in particular artificial neural networks, and relates to a method and a device for coding and integrating digital signals with a dynamic precision adapted to the signals propagated in an artificial neural network.
- the invention is applicable to any computing architecture implementing multiply-then-accumulate (MAC) operations.
- Artificial neural networks constitute calculation models imitating the functioning of biological neural networks.
- Artificial neural networks include neurons interconnected with each other by synapses, which are conventionally implemented by digital memories. Synapses can also be implemented by resistive components whose conductance varies according to the voltage applied to their terminals.
- Artificial neural networks are used in various fields of signal processing (visual, sound, or other) such as for example in the field of image classification or image recognition.
- a general problem for computing architectures implementing an artificial neural network concerns the overall energy consumption of the circuit implementing the network.
- the basic operation implemented by an artificial neuron is a multiply-then-accumulate (MAC) operation.
- the number of MAC operations per unit of time required for real-time operation becomes constraining.
- a drawback of this method is that it does not take into account the nature of the signals propagated in a digital computer implementing a learning function such as an artificial neural network.
- the invention provides a dynamic precision coding method which makes it possible to take into account the nature of the signals to be coded, in particular the variability of the dynamics of the values of the signals.
- the invention makes it possible to optimize the coding of the signals propagated in a neural network so as to limit the number and the complexity of the MAC operations carried out, and thus limit the power consumption of the circuit or computer implementing the network.
- the invention relates to a method, implemented by computer, for encoding a digital signal quantized on a given number N d of bits and intended to be processed by a digital calculation system, the signal being encoded over a predetermined number N p of bits strictly less than N d , the method comprising the steps of:
- the method comprises a step of determining the size N p of the coded signal as a function of a statistical distribution of the values of the digital signal.
- the size N p of the encoded signal is configured so as to minimize the energy consumption of a digital calculation system in which the processed signals are encoded by means of said encoding method.
- the energy consumption is estimated by simulation or from an empirical model.
- the digital calculation system implements an artificial neural network.
- the size N p of the coded signal is parameterized independently for each layer of the artificial neural network.
- the invention also relates to an encoding device comprising an encoder configured to perform the encoding method according to the invention.
- the invention also relates to an integration device configured to perform a MAC multiplication then accumulation operation between a first number encoded by means of the encoding method according to the invention and a weighting coefficient, the device comprising a multiplier for multiplying the weighting coefficient with the encoded number, an adder and an accumulation register for accumulating the output signal of the multiplier.
- the invention also relates to an artificial neuron, implemented by a digital calculation system, comprising an integration device according to the invention to perform a multiply-then-accumulate (MAC) operation between a received signal and a synaptic coefficient, and a coding device according to the invention for coding the output signal of the integration device, the artificial neuron being configured to propagate the coded signal to another artificial neuron.
- the invention also relates to an artificial neuron, implemented by a computer, comprising an integration device according to the invention to perform a multiply-then-accumulate (MAC) operation between an error signal received from another artificial neuron and a synaptic coefficient, a local error calculation module configured to calculate a local error signal from the output signal of the integration device, and a coding device according to the invention for coding the local error signal, the artificial neuron being configured to back-propagate the local error signal to another artificial neuron.
- the subject of the invention is also an artificial neural network comprising a plurality of artificial neurons according to the invention.
- FIG. 1 is a flowchart illustrating the steps for implementing the coding method according to the invention
- FIG. 2 represents a diagram of an encoder according to an embodiment of the invention
- FIG. 3 shows a diagram of an integration module for performing a MAC type operation for numbers quantized using the encoding method of Figure 1,
- FIG. 4 represents a functional diagram of an example of an artificial neuron comprising an integration module of the type of FIG. 3 for operation during a data propagation phase,
- FIG. 5 represents a functional diagram of an example of an artificial neuron comprising an integration module of the type of FIG. 3 for operation during a data back-propagation phase.
- FIG. 1 represents, in a flowchart, the steps of implementing a coding method according to one embodiment of the invention.
- An objective of the method is to encode a quantized number over N d bits into a group of values which can be transmitted (or propagated) separately in the form of events.
- the first step 101 of the method consists in receiving a number y quantized over N d bits, N d being an integer.
- the number y is, typically, a quantized sample of a signal, for example an image signal, an audio signal or a data signal intrinsically comprising information.
- the number N d is typically equal to 8, 16, 32 or 64 bits. It is dimensioned in particular according to the dynamic range of the signal, that is to say the difference between the minimum and maximum values of a sample of the signal.
- the number N d is generally chosen so as to take this dynamic range into account so as not to saturate or clip the high or low values of the samples of the signal. This can lead to choosing a high value for N d , which leads to over-sizing of the computation operators, which must carry out operations on samples quantized in this way.
- the invention therefore aims to provide a signal coding method which adapts the size (in number of bits) of the transmitted samples as a function of their actual value, so that operations can be carried out on samples quantized on a lower number of bits.
- in a second step 102, the number of bits N p on which the coded samples to be transmitted are quantized is chosen. N p is strictly less than N d .
- the number y is decomposed in the form y = K · v max + v r , where:
- K is a positive or zero integer,
- v r is a residual value strictly less than v max ,
- v max is the maximum value of a number quantized over N p bits.
- the sample y is then coded by the succession of K values v max and of the residual value v r , which are transmitted successively.
- the end or the start of a new sample can be identified by the reception of a value different from the maximum value v max .
- the next value received then corresponds to a new sample.
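The decomposition described above can be sketched in a few lines of Python (a minimal illustration for non-negative integers; the function name is ours, not the patent's):

```python
def encode(y, n_p):
    """Encode a non-negative integer y as a succession of events of n_p bits.

    y is decomposed as y = k * v_max + v_r (with v_r < v_max) and is
    transmitted as k copies of v_max followed by the residual v_r.
    """
    v_max = (1 << n_p) - 1        # maximum value quantized over n_p bits
    k, v_r = divmod(y, v_max)     # k copies of v_max, plus the residual
    return [v_max] * k + [v_r]
```

For example, with N p = 5 (so v max = 31), the value 50 is encoded as the two events 0b11111 and 0b10011 transmitted at successive instants, consistent with the two words shown for the encoder of FIG. 2.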
- the coded signals are transmitted, for example via a data bus of appropriate size to a MAC operator with a view to performing a multiplication then accumulation operation.
- the proposed coding method makes it possible to reduce the size of the operators (which are designed to perform operations on N p bits) while preserving the full dynamic range of the signals. Indeed, samples of high value (greater than v max ) are coded by several successive values, while samples of low value (less than v max ) are transmitted directly.
- this method does not require addressing to identify the coded values belonging to the same sample, since a value less than v max signals the end or the start of a sample.
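Because any event strictly below v max closes the current sample, a receiver can rebuild the samples from the flat event stream without any addressing information — a sketch (hypothetical names):

```python
def decode_stream(events, n_p):
    """Rebuild the original samples from a flat stream of n_p-bit events.

    Events are summed until a value strictly below v_max arrives: that
    event carries the residual and marks the end of the current sample.
    """
    v_max = (1 << n_p) - 1
    samples, acc = [], 0
    for v in events:
        acc += v
        if v < v_max:             # end-of-sample marker (value != v_max)
            samples.append(acc)
            acc = 0
    return samples
```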
- FIG. 2 represents, in schematic form, an example of an encoder 200 configured to encode an input value y by applying the method described in FIG. 1.
- the values ⁇ 11111 ⁇ and ⁇ 10011 ⁇ are transmitted at two successive instants.
- the order of transmission is chosen by convention.
- An advantage of the proposed coding method is that it makes it possible to limit the size of the coded data transmitted to N p bits.
- Another advantage lies in its dynamic aspect because the parameter N p can be adapted according to the nature of the data to be coded or according to the dimensioning constraints of the operators used to perform calculations on the coded data.
- FIG. 3 schematically represents an integration module 300 configured to perform a multiply-then-accumulate (MAC) operation.
- the integration module 300 described in FIG. 3 is optimized to process data encoded using the method according to the invention.
- the integration module 300 implements a MAC operation between an input data item p, encoded via the encoding method according to the invention, and a weighting coefficient w which corresponds to a parameter learned by a machine learning model.
- the coefficient w corresponds for example to a synaptic weight in an artificial neural network.
- An integration module 300 of the type described in FIG. 3 can be duplicated to perform MAC operations in parallel between several input values p and several coefficients w.
- one and the same integration module can be activated sequentially to perform several successive MAC operations.
- the integration module 300 includes a MUL multiplier, an ADD adder and an RAC accumulation register.
- when a new sample is signaled, for example by the reception of a value other than v max , the RAC register is reset to zero.
- the operators MUL, ADD of the device are dimensioned for numbers quantized on N p bits, which makes it possible to reduce the overall complexity of the device.
- the size of the RAC register must be greater than the sum of the maximum sizes of the values w and p. Typically, it will be of size N d + N w , which is the maximum size of a MAC operation between words of sizes N d and N w .
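The event-driven operation of the integration module can be mimicked in software to check the arithmetic: each incoming event is multiplied by the weight and added to the accumulation register, which is read out and reset when an event below v max signals the end of a sample (a behavioural sketch, not the hardware design):

```python
def mac_integrate(events, w, n_p):
    """Behavioural model of the integration module: MUL + ADD + register.

    The multiplier only ever sees n_p-bit event values; after the last
    event of a sample y, the register holds w * y.
    """
    v_max = (1 << n_p) - 1
    results, acc = [], 0          # acc models the accumulation register RAC
    for v in events:
        acc += w * v              # multiply the event by the weight, accumulate
        if v < v_max:             # new sample signalled: read out, reset RAC
            results.append(acc)
            acc = 0
    return results
```

Feeding it the encoded events [31, 19] (the value 50 at N p = 5) with w = 3 yields 150, i.e. exactly 3 × 50, while never multiplying operands wider than 5 bits.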
- a sign management module (not shown in detail in FIG. 3) is also necessary.
- the integration module 300 according to the invention can be advantageously used to implement an artificial neural network as illustrated in Figures 4 and 5.
- the function implemented by a machine learning model consists of an integration of the signals received as input and weighted by coefficients.
- the coefficients are called synaptic weights and the weighted sum is followed by the application of an activation function which, depending on the result of the integration, generates a signal to be propagated at the output of the neuron.
- the artificial neuron N comprises a first integration module 401 of the type of FIG. 3 to produce the product y·w, with y a value encoded via the method according to the invention in the form of several events successively propagated between two neurons, and w the value of a synaptic weight.
- a second conventional integration module 402 is then used to integrate the products y·w over time.
- an artificial neuron N can include several integration modules to perform MAC operations in parallel for several input data and weighting coefficients.
- the activation function a is, for example, defined by the generation of a signal when the integration of the received signals is complete.
- the activation signal is then encoded via an encoder 403 according to the invention (as described in FIG. 2), which encodes the value into several events which are propagated successively to one or more other neurons.
- the output value y_j^l of the activation function a of a neuron j of a layer of index l is given by the following relation: y_j^l = a(s_j^l), with s_j^l = b_j^l + Σ_i w_{i,j}^l · y_i^{l-1}, where:
- s_j^l is the output value of the second integration module 402,
- b_j^l represents a bias value which is the initial value of the accumulator in the second integration module 402,
- w_{i,j}^l represents a synaptic coefficient,
- the output value y_j^l is then encoded via an encoder 403 according to the invention (as described in FIG. 2), which encodes it into several events which are propagated successively to one or more other neurons.
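Numerically, this forward pass amounts to summing the events of each encoded input (recovering y_i), weighting by the synaptic coefficients, adding the bias, and applying the activation — sketched below with a ReLU as a stand-in activation (the patent does not fix the activation function):

```python
def relu(x):
    return max(0, x)

def neuron_forward(encoded_inputs, weights, bias, activation=relu):
    """One neuron's forward pass on event-encoded inputs.

    encoded_inputs[i] is the event list emitted by input neuron i; summing
    its events recovers the original value y_i, so the weighted sum equals
    bias + sum_i(w_i * y_i).
    """
    s = bias                              # accumulator initialised to the bias
    for events, w in zip(encoded_inputs, weights):
        s += w * sum(events)              # first + second integration modules
    return activation(s)
```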
- the various operations implemented successively in a neuron N can be carried out at different rates, that is to say with different time scales or clocks.
- the first integration device 401 operates at a faster rate than the second integration device 402 which itself operates at a faster rate than the operator performing the activation function.
- the error signals retro-propagated during the back-propagation phase can also be encoded by means of the encoding method according to the invention.
- an integration module according to the invention is implemented in each neuron to carry out the weighting of the received coded error signals with synaptic coefficients, as illustrated in FIG. 5, which represents an artificial neuron configured to process and back-propagate error signals from layer l+1 to layer l.
- a′(s_j^l) is the value of the derivative of the activation function.
- the neuron described in FIG. 5 comprises a first integration module 501 of the type of FIG. 3 to perform the calculation of the product δ_k^{l+1} · w_{j,k}^{l+1}, with δ_k^{l+1} the error signal received from a neuron of layer l+1, coded by means of the coding method according to the invention, and w_{j,k}^{l+1} the value of a synaptic coefficient.
- a second conventional integration module 502 is then used to integrate the results of the first module 501 over time.
- the neuron N comprises other specific operators necessary to calculate a local error δ_j^l, which is then encoded via an encoder 503 according to the invention, which codes the error in the form of several events which are then back-propagated to the previous layer l-1.
- the neuron N also comprises a module 504 for updating the synaptic weights as a function of the calculated local error.
- the different operators of the neuron can operate at different rates or time scales.
- the first integration module 501 operates at the fastest rate.
- the second integration module 502 operates at a slower rate than the first module 501.
- the operators used to calculate the local error operate at a slower rate than the second module 502.
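Under the standard backpropagation equations, the local error of a neuron is the derivative of its activation, evaluated at the integration result, times the weighted sum of the errors received from layer l+1. With event-encoded error signals this reads as follows (a standard-backprop sketch with hypothetical names, not the patent's exact operator set):

```python
def neuron_backward(encoded_errors, weights, s, act_deriv):
    """Local error of one neuron during back-propagation.

    encoded_errors[k] is the event-encoded error received from neuron k of
    layer l+1; summing its events recovers the error value delta_k, so the
    local error is a'(s) * sum_k(w_k * delta_k).
    """
    weighted = sum(w * sum(events)
                   for events, w in zip(encoded_errors, weights))
    return act_deriv(s) * weighted
```

The local error returned here is what encoder 503 would then re-encode into events before back-propagation to layer l-1.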
- the invention proposes a means of adapting the calculation operators of a digital calculation architecture according to the data received. It is particularly advantageous for architectures implementing machine learning models, in which the distribution of the data to be processed varies greatly depending on the input received.
- the invention has particular advantages when the propagated signals include a large number of low values or, more generally, when the signal has a wide dynamic range with a large variation in values. Indeed, in this case, the low values can be quantized directly on a limited number of bits, while the higher values are coded by several successive events, each quantized on the same number of bits.
- another approach consists in coding the values on a fixed number of bits while adjusting the dynamic range so as not to clip the maximum values.
- this second approach has the disadvantage of modifying the value of the low-value data, which are very numerous.
- the coding method according to the invention is particularly suited to the statistical profile of the values propagated in a machine learning model, because it makes it possible to take into account the full dynamic range of the values without using a high fixed number of bits to quantize the whole set of values.
- the operators used for the implementation of a MAC operator can be scaled to handle smaller data size.
- One of the advantages of the invention is that the size N p of the coded samples is a parameter of the coding method.
- this parameter can be optimized as a function of the statistical properties of the data to be coded, so as to minimize the overall power consumption of the computer or circuit implementing the machine learning model.
- the coding parameters influence the values which are propagated in the machine learning model and therefore the size of the operators performing the MAC operations.
- a first approach for optimizing the coding parameters consists in simulating the behavior of a machine learning model for a training data set and in simulating its energy consumption as a function of the number and the size of the operations carried out.
- a second approach consists in determining a mathematical model to express the energy consumed by the machine learning model or more generally the targeted computer, as a function of the coding parameter N p .
- the coding parameter N p may be different depending on the layer of the network. Indeed, the statistical properties of the propagated values can depend on the network layer. The deeper the layer, the more the information tends to concentrate on a few particular neurons. Conversely, in the first layers, the distribution of information depends on the input data of the neuron and can be more random.
- the energy E^l consumed by a layer l of the network depends on the energy E_int^l consumed by the integration of an event (a received value) by a neuron and the energy E_enc^{l-1}(N_p) consumed by the encoding of this event by the preceding layer, where:
- n_int^l is the number of neurons of layer l,
- N_evt^{l-1}(N_p) is the number of events transmitted by layer l-1. This number depends on the coding parameter N_p and on the distribution of the data.
- the functions can be determined from models or empirical functions by means of simulations or from actual measurements.
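As a toy version of this approach, candidate values of N p can be swept through an empirical cost model. The event count follows directly from the coding (each value y costs y // v_max + 1 events), while the per-event energies below are purely hypothetical placeholders standing in for the simulated or measured functions:

```python
def num_events(samples, n_p):
    """Events emitted for a set of sample values at coding parameter n_p."""
    v_max = (1 << n_p) - 1
    return sum(y // v_max + 1 for y in samples)

def best_n_p(samples, e_int=1.0, e_enc_per_bit=0.2, candidates=range(2, 9)):
    """Choose the n_p minimising a simple (hypothetical) energy model:
    total energy = number of events * (integration cost + encoding cost)."""
    def energy(n_p):
        return num_events(samples, n_p) * (e_int + e_enc_per_bit * n_p)
    return min(candidates, key=energy)
```

Run per layer on representative activation statistics, this kind of sweep reflects the per-layer parameterization of N p described above.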
- An advantage of the invention is that it makes it possible to parameterize the value of N_p independently for each layer l of the network, which makes it possible to finely take into account the statistical profile of the data propagated for each layer.
- the invention can also be applied to optimize the encoding of the backpropagated error values during a backpropagation phase of the gradient.
- the encoding parameters can be optimized independently for the propagation phase and the backpropagation phase.
- the values of the activations in the neural network can be constrained so as to promote a larger distribution of low values.
- This property can be obtained by acting on the cost function implemented in the last layer of the network. By adding a term to this cost function which depends on the values of the propagated signals, one can penalize the large values in the cost function and thus constrain the activations in the network to lower values.
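The penalty described above can be implemented by augmenting the task loss with a term on the propagated values; a minimal sketch (the squared penalty and the weight lam are our illustrative choices, not fixed by the source):

```python
def loss_with_activation_penalty(task_loss, activations, lam=1e-3):
    """Task loss plus a penalty on large propagated values.

    Penalising large activations biases the network towards low values,
    which encode into fewer events under the additive coding scheme.
    """
    penalty = lam * sum(a * a for a in activations)   # L2-style penalty term
    return task_loss + penalty
```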
- the coding method according to the invention can be advantageously applied to the coding of data propagated in a computer implementing a machine learning function, for example an artificial neural network function for classifying data according to a learning function.
- the coding method according to the invention can also be applied to the input data of the neural network, in other words the data produced at the input of the first layer of the network.
- the statistical profile of the data is used to best encode the information.
- the data to be encoded can correspond to pixels of the image, to groups of pixels, or to differences between pixels of two consecutive images in a sequence of images (video).
- the computer according to the invention can be implemented using hardware and / or software components.
- the software elements may be available as a computer program product on a computer readable medium, which medium may be electronic, magnetic, optical or electromagnetic.
- the hardware elements can be available, in whole or in part, in particular as dedicated integrated circuits (ASIC) and/or configurable integrated circuits (FPGA) and/or as neural circuits according to the invention or as a digital signal processor (DSP) and/or as a GPU graphics processor and/or as a microcontroller and/or as a general-purpose processor, for example.
- the computer according to the invention also comprises one or more memories which can be registers, shift registers, RAM memory, ROM memory or any other type of memory suitable for implementing the invention.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Computational Mathematics (AREA)
- Mathematical Analysis (AREA)
- Pure & Applied Mathematics (AREA)
- Mathematical Optimization (AREA)
- Neurology (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
Abstract
Description
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1914706A FR3105660B1 (fr) | 2019-12-18 | 2019-12-18 | Procédé et dispositif de codage additif de signaux pour implémenter des opérations MAC numériques à précision dynamique |
PCT/EP2020/085417 WO2021122261A1 (fr) | 2019-12-18 | 2020-12-10 | Procede et dispositif de codage additif de signaux pour implementer des operations mac numeriques a precision dynamique |
Publications (1)
Publication Number | Publication Date |
---|---|
EP4078817A1 true EP4078817A1 (fr) | 2022-10-26 |
Family
ID=69811268
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP20819793.9A Pending EP4078817A1 (fr) | 2019-12-18 | 2020-12-10 | Procede et dispositif de codage additif de signaux pour implementer des operations mac numeriques a precision dynamique |
Country Status (4)
Country | Link |
---|---|
US (1) | US20230004351A1 (fr) |
EP (1) | EP4078817A1 (fr) |
FR (1) | FR3105660B1 (fr) |
WO (1) | WO2021122261A1 (fr) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR3127603B1 (fr) * | 2021-09-27 | 2024-05-03 | Commissariat Energie Atomique | Procédé d’optimisation du fonctionnement d’un calculateur implémentant un réseau de neurones |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2511493B (en) * | 2013-03-01 | 2017-04-05 | Gurulogic Microsystems Oy | Entropy modifier and method |
FR3026905B1 (fr) | 2014-10-03 | 2016-11-11 | Commissariat Energie Atomique | Procede de codage d'un signal reel en un signal quantifie |
-
2019
- 2019-12-18 FR FR1914706A patent/FR3105660B1/fr active Active
-
2020
- 2020-12-10 WO PCT/EP2020/085417 patent/WO2021122261A1/fr unknown
- 2020-12-10 US US17/784,656 patent/US20230004351A1/en active Pending
- 2020-12-10 EP EP20819793.9A patent/EP4078817A1/fr active Pending
Also Published As
Publication number | Publication date |
---|---|
WO2021122261A1 (fr) | 2021-06-24 |
FR3105660A1 (fr) | 2021-06-25 |
US20230004351A1 (en) | 2023-01-05 |
FR3105660B1 (fr) | 2022-10-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11574183B2 (en) | Efficient generation of stochastic spike patterns in core-based neuromorphic systems | |
CN112789625A (zh) | 承诺信息速率变分自编码器 | |
WO2020083880A1 (fr) | Retro-propagation d'erreurs sous forme impulsionnelle dans un reseau de neurones impulsionnels | |
WO2018212946A1 (fr) | Réseaux à dérivées de position sigma-delta | |
EP3323090A1 (fr) | Dispositif de traitement de données avec représentation de valeurs par des intervalles de temps entre événements | |
CN114402596A (zh) | 神经网络模型压缩 | |
EP4078817A1 (fr) | Procede et dispositif de codage additif de signaux pour implementer des operations mac numeriques a precision dynamique | |
EP4278303A1 (fr) | Compression de débit binaire variable faisant intervenir des modèles de réseaux de neurones artificiels | |
CN113424200A (zh) | 用于视频编码和视频解码的方法、装置和计算机程序产品 | |
EP3202044B1 (fr) | Procede de codage d'un signal reel en un signal quantifie | |
EP4078816A1 (fr) | Procede et dispositif de codage binaire de signaux pour implementer des operations mac numeriques a precision dynamique | |
EP4315170A1 (fr) | Autoencodeur multimodal a fusion de donnees latente amelioree | |
Chidambaram et al. | Poet-bin: Power efficient tiny binary neurons | |
US11727252B2 (en) | Adaptive neuromorphic neuron apparatus for artificial neural networks | |
Nayak et al. | A comprehensive study on binary optimizer and its applicability | |
US20230139347A1 (en) | Per-embedding-group activation quantization | |
Nayak et al. | Replication/NeurIPS 2019 Reproducibility Challenge | |
CN117436490A (zh) | 一种基于fpga脉冲神经网络的神经元硬件实现系统 | |
WO2022129156A1 (fr) | Mise a profit de la faible densite de donnees ou de poids non-nuls dans un calculateur de somme ponderee | |
Liu et al. | Rerise: An adversarial example restoration system for neuromorphic computing security | |
EP4195061A1 (fr) | Calculateur d'algorithme réalisé à partir de mémoires à technologies mixtes | |
KR20220071091A (ko) | 스파이킹 뉴럴 네트워크를 최적화하는 방법 및 장치 | |
FR2650926A1 (fr) | Procede et dispositif de quantification vectorielle de donnees numeriques | |
WO2022084765A1 (fr) | Appareil neuronal pour un système de réseau neuronal | |
CN118140229A (zh) | 按嵌入群激活量化 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: UNKNOWN |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20220610 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20240507 |