CN116562344A - Deep pulse neural network model and deep SNN on-chip real-time learning processor - Google Patents

Deep pulse neural network model and deep SNN on-chip real-time learning processor

Info

Publication number
CN116562344A
CN116562344A (application CN202310616007.5A)
Authority
CN
China
Prior art keywords
pulse
layer
aer
deep
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310616007.5A
Other languages
Chinese (zh)
Inventor
石匆
张靖雅
田敏
王腾霄
何俊贤
喻剑依
高灏然
王海冰
陈乐毅
陈思豪
庹云鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN202310616007.5A priority Critical patent/CN116562344A/en
Publication of CN116562344A publication Critical patent/CN116562344A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep pulse neural network model and a deep SNN on-chip real-time learning processor. The model comprises an input layer, L−1 hidden layers and an output layer, where each layer is fully connected to the previous layer through synaptic weights. The synaptic weights are trained by directly projecting the output-layer error vector to each hidden layer through a fixed random feedback matrix. During processing of the pulse sequence of a training sample, whenever an output-layer IF neuron j emits a pulse at time step t but its label does not match the training-sample label, a negative error immediately triggers an update of the synaptic weights of the neurons of every layer. After all pulses of the training sample have been processed, every output-layer IF neuron that never fired but whose label matches the input-sample label has a positive error, again triggering an update of the synaptic weights of every layer. The hardware is event-driven and adopts a heterogeneous dual-core parallel array and pipeline circuit design. The invention achieves high recognition accuracy and realizes fast on-chip learning.

Description

Deep pulse neural network model and deep SNN on-chip real-time learning processor
Technical Field
The invention relates to artificial intelligence and brain-like intelligent chips, in particular to a deep pulse neural network model supporting on-chip real-time DFA-ErrorTrigger learning and a deep SNN on-chip real-time learning processor.
Background
With the rapid development of artificial intelligence technology, intelligent edge systems place ever higher demands on cost, processing speed and energy efficiency. However, the currently mainstream artificial neural network (Artificial Neural Network, ANN) model requires a large number of neurons to participate in synchronous, dense operations; even with dedicated acceleration chips, real-time processing speed and energy efficiency remain serious bottlenecks. In contrast, when performing complex computations such as learning and cognition, the human brain consumes only about 20 W, giving it extremely high energy efficiency. In recent years, interest in biomimetic impulse neural network models and neuromorphic systems has grown steadily. An impulse neural network (Spiking Neural Network, SNN) processor communicates and processes sensory data with sparse binary pulse sequences, making it well suited to low-cost, high-speed and energy-efficient intelligent edge systems.
Existing large-scale general-purpose neuromorphic processors can be assembled into large chip arrays to simulate millions of neurons and up to billions of synapses. These chips are mainly used for neuroscience research and data-center computing; their high area cost and power consumption make them unsuitable for edge intelligent systems. To accommodate today's multi-purpose edge computing, some small neuromorphic application-specific integrated circuit (Application Specific Integrated Circuit, ASIC) systems have been proposed, but they can only configure a limited number of neurons and synapses and offer poor flexibility in terms of neuron model, network topology and on-chip learning. In image recognition tasks, the on-chip recognition accuracy of these small chips is also low. Another branch of edge neuromorphic hardware research uses field programmable gate array (Field Programmable Gate Array, FPGA) devices for rapid exploration and verification of different neuron models, various on-chip learning algorithms and realizable hardware architectures. However, many of these processors have low on-chip recognition accuracy (near or below 90%), because they use the unsupervised spike-timing-dependent plasticity (STDP) algorithm, which is very amenable to hardware implementation but has limited learning ability. Recently, the team of researcher Shi Cong at Chongqing University fused three brain-inspired SNN algorithms and realized a single-layer SNN processor with 256 neurons on a Zynq-7045 FPGA. The processor achieves 95.1% recognition accuracy on the MNIST dataset at a processing speed of 1350 frames/second, but its fully connected synaptic weights require 1024 (input nodes) × 256 (neurons) × 16 bits (synaptic weight precision) = 512 KB of on-chip memory, and its power consumption is about 1 W. This hinders its deployment in edge computing scenarios where hardware resource cost and power consumption are critical.
In order to improve the edge neuromorphic system design as a whole in terms of hardware resource cost, data throughput, processing delay and on-chip learning accuracy, the invention provides a deep pulse neural network model supporting on-chip real-time DFA-ErrorTrigger learning and a deep SNN on-chip real-time learning processor.
Disclosure of Invention
The invention provides a deep pulse neural network model supporting on-chip real-time DFA-ErrorTrigger learning and a deep SNN on-chip real-time learning processor, which achieve high recognition accuracy and realize fast on-chip learning.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
in a first aspect, the deep pulse neural network model of the invention comprises an input layer, L−1 hidden layers and an output layer, wherein each layer consists of a plurality of IF neurons, and each layer is fully connected to the previous layer through synaptic weights W(l);
the synaptic weights W(l) are trained by directly projecting the output-layer error vector e(L) to each hidden layer through a fixed random feedback matrix B(l), where 1 ≤ l ≤ L−1;
during processing of the pulse sequence of a training sample, whenever an IF neuron j of the output layer emits a pulse at time step t but its label does not match the training-sample label, the error polarity p(L, j) = −1 of neuron j in output layer L immediately triggers an update of the synaptic weights W(l) of the neurons of each layer;
after all pulses of the training sample have been processed, each IF neuron of the output layer that never fired but whose label matches the input-sample label has a positive error, i.e. p(L, j) = +1, triggering an update of the synaptic weights W(l) of the IF neurons of each layer.
Optionally, the input pulse coding mode is: rate encoding based on poisson distribution.
Optionally, the output pulse decoding works as follows: count the total number of pulses emitted by all the IF neurons belonging to each class; the class with the largest pulse total is taken as the classification result of the input sample. If two or more classes share the same maximal pulse count, the classification result is judged to be unknown.
In a second aspect, the invention provides a deep SNN on-chip real-time learning processor comprising M1 heterogeneous multi-core arrays and an error management unit; the arrays are connected in sequence, and the last array is connected to the error management unit;
the arrays compute in parallel; each array is driven by pulse events and processes them in AER form, where an AER contains the source address of the pulse event;
each array comprises M2 neural computation cores, an input AER FIFO, and an arbiter that sequentially sends out the output AERs of the neural computation cores; the input AER FIFO is connected to each neural computation core, and the arbiter is connected to each neural computation core; the output AERs of the current array are placed into the input FIFO of the next array, and after the next array finishes processing one AER, it fetches the next one from its input AER FIFO;
the real-time DFA-ErrorTrigger learning algorithm model disclosed by the invention is integrated in the deep SNN on-chip real-time learning processor.
Optionally, the neural computation core comprises a local controller, an IF neuron module, a learning engine module, an output AER FIFO, a membrane potential memory and a synaptic weight memory; the local controller is connected to the IF neuron module, the learning engine module, the output AER FIFO, the membrane potential memory and the synaptic weight memory; the IF neuron module is connected to the learning engine module, the output AER FIFO, the membrane potential memory and the synaptic weight memory; and the synaptic weight memory is also connected to the learning engine module.
Optionally, the IF neuron module adopts a two-stage pipeline, and in the first stage pipeline, when an input pulse AER from a previous layer exists, the membrane potential of all IF neurons is updated in series; the second stage pipeline is used to decide whether to transmit the output pulse AER.
Optionally, the learning engine module adopts a two-stage pipeline; the error p_j of IF neuron j from the error management unit triggers the learning engine module to serially update the synaptic weights W(l) of the IF neurons, and the learning engine module is not started during inference.
The invention has the beneficial effects that:
(1) The invention provides a deep pulse neural network model supporting on-chip real-time DFA-ErrorTrigger learning. Based on the direct feedback alignment (DFA) theoretical framework, it achieves high recognition accuracy through an error-triggering mechanism and facilitates intelligent edge systems with strict real-time requirements.
(2) The hardware architecture is based on event driving, adopts heterogeneous dual-core parallel array and pipeline circuit design, and can greatly improve throughput performance.
(3) The hardware architecture provided by the invention has good expandability: aiming at different application scene requirements, the flexible configuration of the hardware architecture can be realized through parameter configuration.
Drawings
Fig. 1 is a deep pulse neural network model of the present embodiment.
Fig. 2 is a hardware architecture of the deep SNN on-chip real-time learning processor of the present embodiment.
Fig. 3 is a circuit diagram of the IF neuron module in the present embodiment.
Fig. 4 is a circuit diagram of the learning engine module in the present embodiment.
Detailed Description
As shown in fig. 1, this embodiment provides a deep pulse neural network model supporting on-chip real-time DFA-ErrorTrigger learning. Based on the direct feedback alignment (DFA) theoretical framework, it achieves high recognition accuracy through an error-triggering mechanism and facilitates the realization of intelligent edge systems with strict real-time requirements. The designed hardware is event-driven, and its heterogeneous dual-core parallel array and pipeline circuit design greatly improve throughput.
1. Design of deep pulse neural network model:
in recent years, the direct feedback alignment theoretical framework projects the errors of the output layer of the neural network to each hidden layer independently through a fixed random feedback matrix, replaces the error layer-by-layer propagation in standard error Back Propagation (BP), and remarkably reduces the computational complexity and delay. The Errortister learning rule is used for updating the weight based on the number of input pulses accumulated on synapses, is simple to operate and is beneficial to hardware realization.
In this embodiment, a deep pulse neural network model is designed based on the DFA theoretical framework in combination with the ErrorTrigger learning rule. Each layer in the deep pulse neural network model consists of a plurality of IF neurons, and each layer is fully connected to the previous layer through a synaptic weight matrix W(l).
2. On-chip learning rule design:
under the DFA mechanism, in order to train the synaptic weight W (L), the output layer error vector e (L) can be directly projected to each hidden layer (1.ltoreq.l.ltoreq.L-1) through a fixed random feedback matrix B (L), and for the IF neuron k in the hidden layer L, the generalized error polarity p is defined as follows:
wherein the method comprises the steps ofB(l)(k,j)For the elements in matrix B (L) (for each hidden layer, the elements in B (L) are randomly determined, for output layer B (L) equal to cell matrix I), p (L, j) is the error polarity of IF neuron j in output layer L, p (L, j) is three-valued (i.e. can be +1/0/-1). During processing of the pulse sequence of training samples, whenever an output layer IF neuron j emits a pulse at time step t but its tag does not match the training sample tag, p (L, j) = -1, the synaptic weight W (L) update of the IF neurons of each layer is triggered immediately as follows:
Δw_uv(t) = η · C_u(t) · p_v(t)    (2)
where η is the learning rate, C_u(t) is the number of input pulses accumulated up to the current time step t on the synapse between IF neuron v of layer l and IF neuron u of the previous layer (1 ≤ l ≤ L, with L the number of layers of the deep pulse neural network model), and p_v(t) is the error of IF neuron v in layer l. In addition, after all pulses of the training sample have been processed, every output-layer IF neuron that never fired but whose label matches the input-sample label has a positive error, p(L, j) = +1, which likewise triggers an update of the synaptic weights W(l) of the IF neurons of each layer in the form of equation (2).
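As an illustration, the error projection and the error-triggered weight update of equation (2) can be sketched in NumPy as follows. Layer sizes, the learning rate, and all variable names here are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_error_polarity(B_l, p_out):
    """Project output-layer error polarities through the fixed random
    feedback matrix B(l) and take the sign, yielding +1/0/-1 per neuron."""
    return np.sign(B_l @ p_out).astype(int)

def error_triggered_update(W, C, p, eta=0.01):
    """dW[u, v] = eta * C[u] * p[v]; columns with p[v] == 0 are unchanged."""
    return W + eta * np.outer(C, p)

# toy dimensions: 4 inputs -> 3 hidden neurons -> 2 output neurons
B1 = rng.standard_normal((3, 2))    # fixed random feedback for hidden layer 1
p_out = np.array([-1, 0])           # output neuron 0 fired with the wrong label
p_hid = hidden_error_polarity(B1, p_out)

C = np.array([2.0, 0.0, 1.0, 3.0])  # pulses accumulated per input synapse
W = np.zeros((4, 3))
W = error_triggered_update(W, C, p_hid)
```

Because the update only fires when an error polarity is nonzero, most time steps perform no weight writes at all, which is the property the hardware exploits.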
3. Input pulse coding and output pulse decoding modes:
input pulse coding: rate encoding based on poisson distribution.
Output pulse decoding: and counting the total number of the transmitted pulses of all the IF neurons contained in each class, wherein the class with the maximum corresponding pulse total number is used as the classification result of the input sample. If there are as many pulses as there are two or more categories, the classification result is judged to be unknown.
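The decoding rule can be sketched as follows (a hypothetical helper; with the 200-neuron output layer evaluated later and 10 MNIST classes, neurons_per_class would be 20):

```python
import numpy as np

def decode(spike_counts, neurons_per_class):
    """Sum the pulses emitted by all IF neurons assigned to each class;
    the class with the highest total wins. A tie among the top classes
    yields None, i.e. the 'unknown' result."""
    counts = np.asarray(spike_counts).reshape(-1, neurons_per_class).sum(axis=1)
    winners = np.flatnonzero(counts == counts.max())
    return int(winners[0]) if winners.size == 1 else None

# 2 classes x 2 neurons each
result = decode([3, 1, 0, 2], neurons_per_class=2)   # class totals: [4, 2]
tie = decode([1, 1, 2, 0], neurons_per_class=2)      # class totals: [2, 2]
```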
4. Software fixed-point conversion, and evaluation of the recognition accuracy of the proposed deep pulse neural network model on the classical MNIST dataset:
Based on the C# programming language, the software code was converted to fixed point using Visual Studio so that software and hardware data can be compared in subsequent experiments. With a 784-500-500-500-200 deep pulse neural network model, i.e. an input layer of 784 IF neurons, three hidden layers of 500 IF neurons each, and an output layer of 200 IF neurons, the recognition accuracy on the MNIST dataset is 96%.
In this embodiment, as shown in fig. 2, the deep SNN on-chip real-time learning processor is an edge-side digital neuromorphic brain processor supporting on-chip real-time DFA-ErrorTrigger learning.
1. Hardware architecture design
As shown in FIG. 2, the present embodiment comprises M1 heterogeneous multi-core arrays and an error management unit; the arrays are connected in sequence, and the last array is connected to the error management unit. The arrays compute in parallel; each array is event-driven and processes its pulse events in the form of address-event representations (AER), where an AER contains the source address of the pulse event.
As shown in FIG. 2, each array comprises M2 neural computation cores, an input AER FIFO, and an arbiter that sequentially sends out the output AERs of the neural computation cores; the input AER FIFO is connected to each neural computation core, and the arbiter is connected to each neural computation core. The output AERs of the current array are placed into the input FIFO of the next array; after the next array finishes processing one AER, it fetches the next one from its input AER FIFO. The real-time DFA-ErrorTrigger learning algorithm model described in this embodiment is integrated in the deep SNN on-chip real-time learning processor.
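A behavioural sketch of this inter-array AER flow (the array functions and event values are toy assumptions): each array drains its input FIFO one AER at a time and pushes its output AERs into the next array's FIFO.

```python
from collections import deque

def run_pipeline(arrays, input_events):
    """Each 'array' is modelled as a function mapping one input AER
    (a source address) to a list of output AERs; FIFOs decouple the
    successive arrays, as in the hardware."""
    fifos = [deque(input_events)] + [deque() for _ in arrays]
    for array, fifo_in, fifo_out in zip(arrays, fifos, fifos[1:]):
        while fifo_in:                 # event-driven: the array idles when no AER is pending
            aer = fifo_in.popleft()
            fifo_out.extend(array(aer))
    return list(fifos[-1])

# toy arrays: the first fans each event out to two addresses, the second relabels
out = run_pipeline([lambda a: [a, a + 1], lambda a: [a * 10]], [0, 5])
```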
In this embodiment, each neural computation core serially performs the operations of at most M = 128 IF neurons. The neural computation core comprises a local controller, an IF neuron module performing the membrane-potential computation, a learning engine module performing the update of equation (2), an output AER FIFO, a membrane potential memory and a synaptic weight memory. The local controller is connected to the IF neuron module, the learning engine module, the output AER FIFO, the membrane potential memory and the synaptic weight memory; the IF neuron module is connected to the learning engine module, the output AER FIFO, the membrane potential memory and the synaptic weight memory; and the synaptic weight memory is also connected to the learning engine module. The proposed architecture is scalable and can be flexibly configured to trade off processing speed, recognition accuracy and resource cost for different applications.
2. Key computing module circuit design
Each computation module is designed and optimized with pipeline circuits to guarantee a high processing frame rate. Once pulse events arrive, the associated neural-synaptic operations complete at a rate of up to one neuron per clock cycle. In the pipeline circuit of each computation module, one stage corresponds to one clock cycle.
As shown in fig. 3, in the present embodiment, the circuit structure of the IF neuron module adopts a two-stage pipeline in the calculation process, and in the first stage pipeline, when there is an input pulse AER from the previous layer, the membrane potential of all IF neurons is updated in series. The second stage pipeline decides whether to transmit the output pulse AER.
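The two pipeline stages can be sketched behaviourally as follows (threshold, weights, and names are illustrative assumptions): stage 1 integrates the synaptic weights of the incoming AER's source address into every membrane potential in series, and stage 2 emits output AERs for the neurons that crossed the threshold.

```python
import numpy as np

def if_step(V, W_row, threshold=1.0):
    """Stage 1: on an input pulse AER from source u, add the synaptic
    weights W[u, :] to every neuron's membrane potential.
    Stage 2: neurons whose potential reaches the threshold emit an
    output AER (their index) and are reset."""
    V = V + W_row                        # integrate
    fired = np.flatnonzero(V >= threshold)
    V[fired] = 0.0                       # reset after firing
    return V, fired.tolist()             # fired indices stand in for output AERs

V = np.zeros(3)
W = np.array([[0.6, 1.2, 0.3]])          # weights from input 0 to 3 neurons
V, out_aer = if_step(V, W[0])
```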
In this embodiment, as shown in fig. 4, the circuit of the learning engine module adopts a two-stage pipeline in the calculation process. The error p_j of IF neuron j from the error management unit triggers the learning engine module to serially update the synaptic weights of the IF neurons. In this embodiment, 16 bits are used to calculate the exact weight change Δw_ij of equation (2), but the weights w_ij are stored in 8-bit form to save storage space. Δw_ij is stochastically rounded before being added to w_ij. Exploiting the properties of two's-complement representation, the least significant 8 bits of Δw_ij serve as the probability P ∈ [0, 1−2^−8] ≈ [0, 1] for rounding, and are compared with an unsigned 8-bit random number R generated by a linear feedback shift register (LFSR). If P > R, Δw_ij is rounded up; otherwise the fractional part is discarded. The learning engine module is not started during inference.
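The stochastic rounding step can be sketched as follows (a behavioural model for non-negative Δw; the two's-complement handling of negative values is omitted for brevity): the low 8 bits of the 16-bit Δw act as the probability P, compared against an 8-bit pseudorandom value standing in for the LFSR output.

```python
def stochastic_round(dw_16bit, rand_8bit):
    """Split a 16-bit fixed-point weight change into an 8-bit integer part
    and an 8-bit fractional part; round up with probability equal to the
    fraction by comparing it against an 8-bit pseudorandom number."""
    integer_part = dw_16bit >> 8
    fraction = dw_16bit & 0xFF           # probability P, in units of 1/256
    return integer_part + (1 if fraction > rand_8bit else 0)

# fraction 0x80 corresponds to 0.5: rounds up when the random draw is below it
up = stochastic_round(0x0180, rand_8bit=0x10)    # 0x180/256 = 1.5 -> rounds to 2
down = stochastic_round(0x0180, rand_8bit=0xF0)  # same 1.5 -> rounds to 1
```

Averaged over many updates, the rounded value is unbiased, which is why the 8-bit weight store does not accumulate systematic quantization drift.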
In this embodiment, "IF neuron" means that each layer of the network uses the Integrate-and-Fire (IF) neuron model. The IF neuron module is the circuit module that updates the neuron membrane potentials and determines whether a neuron emits a pulse (it fires if its membrane potential exceeds a threshold).

Claims (7)

1. A deep pulse neural network model, characterized in that: it comprises an input layer, L−1 hidden layers and an output layer, wherein each layer consists of a plurality of IF neurons, and each layer is fully connected to the previous layer through synaptic weights W(l);
wherein the synaptic weights W(l) are trained by directly projecting the output-layer error vector e(L) to each hidden layer through a fixed random feedback matrix B(l), where 1 ≤ l ≤ L−1;
during processing of the pulse sequence of a training sample, whenever an IF neuron j of the output layer emits a pulse at time step t but its label does not match the training-sample label, the error polarity p(L, j) = −1 of neuron j in output layer L immediately triggers an update of the synaptic weights W(l) of the neurons of each layer;
after all pulses of the training sample have been processed, each IF neuron of the output layer that never fired but whose label matches the label of the input sample has a positive error, i.e. p(L, j) = +1, triggering an update of the synaptic weights W(l) of the IF neurons of each layer.
2. The deep pulse neural network model of claim 1, wherein: the input pulse coding mode is as follows: rate encoding based on poisson distribution.
3. The deep pulse neural network model of claim 1 or 2, wherein: the output pulse decoding works as follows: count the total number of pulses emitted by all the IF neurons belonging to each class; the class with the largest pulse total is taken as the classification result of the input sample; if two or more classes share the same maximal pulse count, the classification result is judged to be unknown.
4. A deep SNN on-chip real-time learning processor, characterized in that: it comprises M1 heterogeneous multi-core arrays and an error management unit; the arrays are connected in sequence, and the last array is connected to the error management unit;
the arrays compute in parallel; each array is driven by pulse events and processes them in AER form, where an AER contains the source address of the pulse event;
each array comprises M2 neural computation cores, an input AER FIFO, and an arbiter that sequentially sends out the output AERs of the neural computation cores; the input AER FIFO is connected to each neural computation core, and the arbiter is connected to each neural computation core; the output AERs of the current array are placed into the input FIFO of the next array, and after the next array finishes processing one AER, it fetches the next one from its input AER FIFO;
the real-time DFA-ErrorTrigger learning algorithm model as set forth in any one of claims 1 to 3 is integrated in the deep SNN on-chip real-time learning processor.
5. The deep SNN on-chip real-time learning processor of claim 4, wherein: the neural computation core comprises a local controller, an IF neuron module, a learning engine module, an output AER FIFO, a membrane potential memory and a synaptic weight memory; the local controller is connected to the IF neuron module, the learning engine module, the output AER FIFO, the membrane potential memory and the synaptic weight memory; the IF neuron module is connected to the learning engine module, the output AER FIFO, the membrane potential memory and the synaptic weight memory; and the synaptic weight memory is also connected to the learning engine module.
6. The deep SNN on-chip real-time learning processor of claim 4, wherein: the IF neuron module adopts a two-stage pipeline, and in the first stage pipeline, when an input pulse AER from a previous layer exists, the membrane potential of all the IF neurons is updated in series; the second stage pipeline is used to decide whether to transmit the output pulse AER.
7. The deep SNN on-chip real-time learning processor of claim 4, wherein: the learning engine module adopts a two-stage pipeline; the error p_j of IF neuron j from the error management unit triggers the learning engine module to serially update the synaptic weights W(l) of the IF neurons, and the learning engine module is not started during inference.
CN202310616007.5A 2023-05-29 2023-05-29 Deep pulse neural network model and deep SNN on-chip real-time learning processor Pending CN116562344A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310616007.5A CN116562344A (en) 2023-05-29 2023-05-29 Deep pulse neural network model and deep SNN on-chip real-time learning processor


Publications (1)

Publication Number Publication Date
CN116562344A true CN116562344A (en) 2023-08-08

Family

ID=87496363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310616007.5A Pending CN116562344A (en) 2023-05-29 2023-05-29 Deep pulse neural network model and deep SNN on-chip real-time learning processor

Country Status (1)

Country Link
CN (1) CN116562344A (en)

Similar Documents

Publication Publication Date Title
US11657257B2 (en) Spiking neural network
CN107092959B (en) Pulse neural network model construction method based on STDP unsupervised learning algorithm
Wang et al. Neuromorphic hardware architecture using the neural engineering framework for pattern recognition
Liu et al. Time series prediction based on temporal convolutional network
US11017288B2 (en) Spike timing dependent plasticity in neuromorphic hardware
Tirumala Implementation of evolutionary algorithms for deep architectures
CN112149815A (en) Population clustering and population routing method for large-scale brain-like computing network
CN111340194B (en) Pulse convolution neural network neural morphology hardware and image identification method thereof
Fang et al. An event-driven neuromorphic system with biologically plausible temporal dynamics
Zhang et al. Synthesis of sigma-pi neural networks by the breeder genetic programming
Pu et al. Block-based spiking neural network hardware with deme genetic algorithm
CN116562344A (en) Deep pulse neural network model and deep SNN on-chip real-time learning processor
CN112598119B (en) On-chip storage compression method of neuromorphic processor facing liquid state machine
Guo et al. Efficient hardware implementation for online local learning in spiking neural networks
Cofiño et al. Evolving modular networks with genetic algorithms: application to nonlinear time series
Das et al. Study of spiking neural network architecture for neuromorphic computing
Shi et al. TEDOP: A Tiny Event-Driven Neural Network Hardware Core Enabling On-Chip Spike-Driven Synaptic Plasticity
Chen et al. Conversion of artificial neural network to spiking neural network for hardware implementation
US20230351165A1 (en) Method for operating neural network
Fahey et al. SwiftSpike: An efficient software framework for the development of spiking neural networks
AU2022287647B2 (en) An improved spiking neural network
Theodoridis et al. A Tour to Deep Learning: From the Origins to Cutting Edge Research and Open Challenges
Yasunaga et al. Parallel back-propagation using genetic algorithm: real-time BP learning on the massively parallel computer CP-PACS
CN117350345A (en) Reconfigurable processor fusing nerve morphology and general calculation and execution method
CN116663627A (en) Digital nerve morphology calculation processor and calculation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination