CN112163672A - Crossbar-array spiking neural network hardware system based on a WTA learning mechanism - Google Patents
Crossbar-array spiking neural network hardware system based on a WTA learning mechanism
- Publication number
- CN112163672A
- Authority
- CN
- China
- Prior art keywords
- array
- synapses
- output
- input
- output layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Theoretical Computer Science (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Neurology (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a crossbar-array spiking neural network hardware system based on a WTA (winner-take-all) learning mechanism. The system realizes a truly parallel spiking neural network that uses WTA as its learning mechanism and can efficiently complete learning and classification tasks on models such as images, speech, and text. It comprises a data preprocessing module, input-layer neurons, a synapse array, and output-layer neurons, wherein the data preprocessing module is connected to the inputs of a plurality of input-layer neurons, the outputs of the input-layer neurons are connected to the inputs of the synapse array, and the outputs of the synapse array are connected to the inputs of the output-layer neurons. The substantial effects of the invention are as follows: the hardware architecture of a crossbar-array spiking neural network based on a WTA learning mechanism can construct a spiking neural network for large-scale parallel computation and realize a high-speed, low-power brain-like chip.
Description
Technical Field
The invention relates to the field of brain-inspired spiking neural networks, and in particular to a hardware architecture for a crossbar-array spiking neural network based on a WTA learning mechanism.
Background Art
In artificial intelligence fields such as image recognition, biological neural systems hold remarkable advantages in speed and energy consumption, whereas conventional neural network systems face significant energy-consumption and processing-speed problems when handling large volumes of transient image information. A spiking neuromorphic hardware architecture that departs from the existing von Neumann computer architecture can effectively emulate the working mode of a biological nervous system and address the energy-consumption and efficiency problems of present-day artificial neural networks. When a conventional artificial neural network processes image problems, it requires a large amount of complex computation and iteration, which is unfriendly to traditional hardware systems; effectively avoiding large-scale complex computation and finding an efficient learning mechanism are therefore problems that spiking neural network hardware urgently needs to solve. With the neural hardware architecture simplified at the level of a single module, large-scale integration becomes feasible.
Disclosure of Invention
To address the deficiencies of the prior art, the invention provides a crossbar-array spiking neural network hardware system based on a WTA learning mechanism.
The hardware system comprises a data preprocessing module, input-layer neurons, a synapse array, and output-layer neurons. The data preprocessing module is connected to the inputs of a plurality of input-layer neurons, the outputs of the input-layer neurons are connected to the inputs of the synapse array, and the outputs of the synapse array are connected to the inputs of a plurality of output-layer neurons. In the field of image processing, the data preprocessing module converts an input image model into the signal form required by the input-layer neurons; the input-layer neurons send pre-synaptic pulses to the synapses in the synapse array; the synapses transmit inhibitory or excitatory signals to the output-layer neurons; and each output-layer neuron characterizes the recognition effect on different input images according to the stimulation signals sent by the synapses, reflected in the output frequency of its axon. The synapse array is a crossbar array. In the initialization phase the synapses are in a randomized state: some synapses can transmit the stimulation signal of an input-layer neuron to an output-layer neuron, while the remaining synapses cannot. In the learning phase, different output-layer neurons receive pulse stimulation signals of different strengths, and during the learning of each model the membrane potential of one and only one output-layer neuron is the first to exceed the threshold. This unique output-layer neuron sends a post-synaptic pulse signal and a lateral inhibitory signal; the post-synaptic pulse locks the synapse column connected to that neuron, where synapses that receive the post-synaptic pulse first grow into inhibitory synapses, and synapses that receive the pre-synaptic pulse first and the post-synaptic pulse afterwards grow into excitatory synapses.
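For illustration only, this winner-take-all learning step can be sketched as behavioral Python code. The claimed system is hardware; the array sizes, threshold value, single-time-step integration, and all identifiers below are assumptions of this sketch, not parameters taken from the patent:

```python
# Behavioral sketch of the WTA learning step (assumed sizes and threshold;
# this is a functional model, not the claimed circuit).
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_OUT = 64, 10      # input-layer / output-layer neuron counts (assumed)
THRESHOLD = 8.0           # membrane-potential firing threshold (assumed)

# Synapse states: +1 excitatory, -1 inhibitory, 0 non-transmitting.
# Initialization phase: states are randomized from pseudo-random numbers.
synapses = rng.choice([0, 1], size=(N_IN, N_OUT))

def learn_one_pattern(pre_spikes):
    """One learning phase; pre_spikes is a 0/1 vector of pre-synaptic pulses."""
    v = pre_spikes @ (synapses == 1).astype(int)   # excitatory stimulation only
    winner = int(np.argmax(v))                     # first neuron past threshold
    if v[winner] < THRESHOLD:
        return None                                # no neuron fired this phase
    # Lateral inhibition: the winner attenuates all other membrane potentials.
    v[np.arange(N_OUT) != winner] = 0.0
    # The winner's post-synaptic pulse locks its synapse column: synapses that
    # saw a pre-synaptic pulse first grow excitatory, the rest inhibitory.
    synapses[:, winner] = np.where(pre_spikes > 0, 1, -1)
    return winner
```

Calling learn_one_pattern once per presented model reproduces, at a functional level, the one-winner-per-model behavior described above.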
Preferably, the lateral inhibitory signal sent by an output-layer neuron inhibits the other output-layer neurons, attenuating their membrane potentials so that they lose the competition during the learning phase of the model.
preferably, there are two types of output ports for the synapse nodes in the crossbar array, namely an excitatory port and an inhibitory port, and when a synapse receives a pulse signal sent by an output layer neuron, the signal output by the synapse at the excitatory port or the inhibitory port is transmitted to the output layer neuron along with a state jump.
Preferably, the crossbar array is an MTJ-based MRAM, a PCRAM based on a phase-change storage medium, or an RRAM based on metal-oxide devices. Its basic structure is that a plurality of row lines are connected to the outputs of the input-layer neurons, a plurality of column lines are connected to the inputs of the output-layer neurons, and at least one memory cell with a variable resistance value lies between each row line and each column line.
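As a sketch of the read-out such a resistive crossbar performs, each variable-resistance cell weights its row-line voltage by its conductance, and the column-line currents sum per Kirchhoff's current law. The conductance range, voltage level, and sizes below are illustrative assumptions, not device parameters from the patent:

```python
# Crossbar read-out sketch: column currents are conductance-weighted sums
# of row voltages (illustrative values only).
import numpy as np

G = np.random.default_rng(1).uniform(1e-6, 1e-4, size=(4, 3))  # cell conductances (S)
v_rows = np.array([0.2, 0.0, 0.2, 0.2])   # row-line voltages: spike = 0.2 V (assumed)
i_cols = v_rows @ G                       # column-line currents (A), one per output neuron
```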
Preferably, in the initialization phase, the randomized states of the different synapses are obtained by inputting pseudo-random numbers into the synapse array.
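A minimal sketch of such an initialization follows, assuming a 16-bit Galois LFSR as the external pseudo-random number generator. An LFSR is a common hardware choice, but the tap polynomial, width, and seed here are assumptions, not taken from the patent:

```python
# LFSR-based initialization sketch: one pseudo-random bit per synapse decides
# whether it starts in a transmitting (excitatory) or non-transmitting state.
def lfsr16(seed: int):
    """Yield pseudo-random bits from a 16-bit Galois LFSR (taps 16,14,13,11)."""
    state = seed & 0xFFFF
    while True:
        bit = state & 1
        state >>= 1
        if bit:
            state ^= 0xB400  # tap mask for x^16 + x^14 + x^13 + x^11 + 1
        yield bit

bits = lfsr16(seed=0xACE1)                                          # arbitrary seed
init_states = [[next(bits) for _ in range(10)] for _ in range(64)]  # 64x10 synapse array
```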
A method for realizing a lightweight crossbar-array spiking neural network hardware system based on a WTA learning mechanism comprises the following specific steps:
the pulse neural network hardware architecture can be briefly described as three stages in the process of realizing the learning and classifying tasks of the image model, namely an initialization stage, a learning stage and an identification stage. In the initialization phase, the state of synapses is randomized, and exhibits an excitatory state or a non-excitatory state, and when synapses are in an excitatory state, output layer neurons receive a stimulation signal transmitted from input layer neurons, and when synapses are in a non-excitatory state, output layer neurons do not receive a stimulation signal transmitted by input layer neurons. In the learning stage, the input layer neurons convert information transmitted by the data preprocessing module into pulse signals and transmit the pulse signals to synapses, at the moment, excitatory synapses transmit stimulation signals to output layer neurons, and because synapses are random in states presented in the process, different output layer neurons receive different stimulation signals, the output layer neurons receiving the most stimulation signals win in the learning stage and send inhibition signals to other output layer neurons to guarantee competitive advantages, meanwhile, the winning output layer neurons learn the input image model, and the synapses grow into inhibitory synapses or excitatory synapses according to the pulse signals of the input layer neurons and the output layer neurons in the learning stage. In the identification stage, the learned image model is input into the framework of the impulse neural network, the synapse array of the learned image model outputs more excitatory signals to the connected output layer neurons, the output layer neurons also obtain competitive advantages to inhibit other output layer neurons, and the axons of the output layer neurons output high-frequency impulse signals.
Compared with the prior art, the invention has the following effects: the system is suitable for storage networks that store data in a crossbar-array format; every part of the architecture can be realized in hardware, and a parallelized spiking neural network hardware system can be built from the architecture; and in the recognition of images, speech, text, and other fields, the architecture identifies a learned model through the high-frequency pulse signal output by an output-layer neuron. The spiking neural network architecture of this design is realized on an FPGA (field-programmable gate array), which offers high flexibility and reusability.
Drawings
FIG. 1 is a schematic diagram of the spiking neural network architecture;
FIG. 2 is a schematic diagram of the synapse interface;
FIG. 3 is a state transition diagram of a synapse;
FIG. 4 is a block diagram of an output-layer neuron.
Wherein: 1. input-layer neuron; 2. output-layer neuron; 3. synapse; 4. pre-synaptic pulse terminal; 5. post-synaptic pulse terminal; 6. lateral inhibition connection line between output-layer neurons; 7. pre-synaptic pulse output terminal of an input-layer neuron; 8. post-synaptic pulse output terminal of an output-layer neuron; 9. synapse array; 10. data preprocessing module; 11. random number input terminal; 12. learning enable terminal; 13. excitatory signal output terminal; 14. inhibitory signal output terminal; 15. synaptic excitatory signal input terminal; 16. lateral inhibition signal input terminal; 17. synaptic inhibitory signal input terminal; 18. lateral inhibition signal output terminal; 19. axon pulse signal output terminal; 20. membrane potential register of an output-layer neuron; 21. membrane potential subtraction circuit of an output-layer neuron; 22. membrane potential addition circuit of an output-layer neuron; 23. post-synaptic pulse generation circuit; 24. lateral inhibition signal generation circuit; 25. axon pulse signal generation circuit.
State S0: the random state;
State S1: the intermediate state entered when the synapse receives a pulse signal sent by an input-layer neuron;
State S2: the synapse has grown into an inhibitory synapse;
State S3: the synapse has grown into an excitatory synapse;
State S4: a temporary holding state;
State S5: a temporary holding state.
Condition (1): the learning enable terminal is at a high level and a post-synaptic pulse signal is present;
Condition (2): the learning enable terminal is at a high level and a pre-synaptic pulse signal is present;
Condition (3): the learning enable terminal is at a high level;
Condition (4): the learning enable terminal is at a low level.
Detailed description of the invention
The present application will be described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to illustrate the relevant invention, not to limit it. It should also be noted that, for convenience of description, only the portions related to the invention are shown in the drawings, and that the embodiments in the present application, and the features within them, may be combined with one another provided no conflict arises.
The first embodiment is as follows:
a hardware system of a cross-array impulse neural network based on a WTA learning mechanism is shown in fig. 1, and the system includes an input layer neuron 1, an output layer neuron 2, a synapse array 9, and a data preprocessing module 10.
In the first embodiment, the data preprocessing module 10 converts the input model information into the signal form required by the input-layer neurons; an input-layer neuron 1 sends a pre-synaptic pulse to a synapse 3; the synapse 3 transmits an inhibitory or excitatory signal to an output-layer neuron 2; and the output-layer neurons 2 characterize the recognition effect on different input images according to the stimulation signals sent by the synapses 3, reflected in the pulse frequency at the axon pulse signal output terminal 19.
FIG. 2 is a block diagram of a synapse 3 in the first embodiment. The random number input terminal 11 is connected to an external pseudo-random number generator; the learning enable terminal 12 is connected to an external control signal; the pre-synaptic pulse terminal 4 is connected to the pre-synaptic pulse output terminal 7 of an input-layer neuron; and the post-synaptic pulse terminal 5 is connected to the post-synaptic pulse output terminal 8 of an output-layer neuron. The excitatory signal output terminal 13 is connected to the synaptic excitatory signal input terminal 15 of an output-layer neuron, and the inhibitory signal output terminal 14 is connected to the synaptic inhibitory signal input terminal 17 of an output-layer neuron.
FIG. 3 is a state transition diagram of a synapse in the first embodiment.
State S0: the random state. The synapse is inactive; it enters state S1 under condition (2) (learning enable high with a pre-synaptic pulse signal) and enters state S2 under condition (1) (learning enable high with a post-synaptic pulse signal).
State S1: the synapse enters this intermediate state upon receiving a pulse signal from an input-layer neuron, and enters state S3 under condition (1) (learning enable high with a post-synaptic pulse signal).
State S2: the inhibitory synapse state. In this state, when the synapse receives a pulse signal from an input-layer neuron, the inhibitory signal output terminal 14 outputs a pulse signal. In the next learning phase, so as not to affect the learning of the input model by the rest of the synapse array, the synapse enters state S4 under condition (3) (learning enable high).
State S3: the excitatory synapse state. In this state, when the synapse receives a pulse signal from an input-layer neuron, the excitatory signal output terminal 13 outputs a pulse signal. In the next learning phase, so as not to affect the learning of the input model by the rest of the synapse array, the synapse enters state S5 under condition (3) (learning enable high).
State S4: a temporary holding state, in which the inhibitory signal output terminal 14 does not output a pulse signal. The synapse returns to state S2 under condition (4) (learning enable low).
State S5: a temporary holding state, in which the excitatory signal output terminal 13 does not output a pulse signal. The synapse returns to state S3 under condition (4) (learning enable low).
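The state machine of FIG. 3 can be transcribed directly into a behavioral model; the Python sketch below does so. The hardware realizes this as sequential logic, and two details are interpretive assumptions of the sketch: the pseudo-random bit gating whether an S0 synapse forwards stimulation, and the tie-break when pre- and post-synaptic pulses arrive in the same step:

```python
class Synapse:
    """Behavioral model of one synapse node (ports of FIG. 2, states of FIG. 3)."""

    def __init__(self, random_bit: int):
        self.state = "S0"             # random state at initialization
        self.random_bit = random_bit  # assumed: gates transmission while in S0

    def step(self, learn_en: bool, pre_pulse: int, post_pulse: int):
        s = self.state
        if learn_en:
            if s == "S0" and post_pulse:   s = "S2"  # condition (1): inhibitory
            elif s == "S0" and pre_pulse:  s = "S1"  # condition (2): intermediate
            elif s == "S1" and post_pulse: s = "S3"  # condition (1): excitatory
            elif s == "S2":                s = "S4"  # condition (3): parked
            elif s == "S3":                s = "S5"  # condition (3): parked
        else:
            if s == "S4":   s = "S2"                 # condition (4)
            elif s == "S5": s = "S3"                 # condition (4)
        self.state = s
        # Output ports (13 excitatory / 14 inhibitory): a pre-synaptic pulse is
        # forwarded excitatorily in S3 (or randomly in S0), inhibitorily in S2.
        exc = int(bool(pre_pulse) and (s == "S3" or (s == "S0" and self.random_bit)))
        inh = int(bool(pre_pulse) and s == "S2")
        return exc, inh
```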
FIG. 4 is a block diagram of an output-layer neuron 2 in the first embodiment. An excitatory signal sent by a synapse is transmitted to the synaptic excitatory signal input terminal 15 and triggers the membrane potential addition circuit 22, which makes the membrane potential register 20 of the output-layer neuron perform an accumulation operation; an inhibitory signal sent by a synapse, or a lateral inhibition signal sent by another output-layer neuron, triggers the membrane potential subtraction circuit 21, which makes the membrane potential register 20 perform a subtraction operation. The membrane potential register 20 drives the lateral inhibition signal generation circuit 24, the axon pulse signal generation circuit 25, and the post-synaptic pulse generation circuit 23, and thereby governs the signals at the lateral inhibition signal output terminal 18, the axon pulse signal output terminal 19, and the post-synaptic pulse output terminal 8. The synaptic excitatory signal input terminal 15 is a bus port connected to the single-bit excitatory signal output terminals 13 of the synapses in the vertical direction; the synaptic inhibitory signal input terminal 17 is a bus port connected to the single-bit inhibitory signal output terminals 14 of the synapses in the vertical direction. The lateral inhibition signal output terminal 18 is connected through the lateral inhibition connection lines 6 to the lateral inhibition signal input terminals 16 of the other two output-layer neurons, forming a mutual-inhibition relationship. The post-synaptic pulse output terminal 8 is connected to the post-synaptic pulse terminals 5 of the synapses in the same column. The axon pulse signal output terminal 19 is an output port of the system, used to observe the recognition effect of the designed spiking neural network hardware architecture on the model.
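A behavioral sketch of this output-layer neuron in Python follows. The threshold, the size of each inhibitory decrement, and the reset-to-zero behavior after firing are assumptions of the sketch; the FIG. 4 reference numerals appear in the comments:

```python
class OutputNeuron:
    """Behavioral model of an output-layer neuron (FIG. 4)."""
    THRESHOLD = 8        # firing threshold (assumed value)
    INHIBIT_STEP = 4     # decrement per inhibitory event (assumed value)

    def __init__(self):
        self.v = 0       # membrane-potential register (20)

    def step(self, n_exc: int, n_inh: int, lateral_in: int):
        self.v += n_exc                                  # addition circuit (22)
        self.v = max(0, self.v - self.INHIBIT_STEP * (n_inh + lateral_in))  # subtraction circuit (21)
        fired = self.v >= self.THRESHOLD
        if fired:
            self.v = 0   # reset after firing (assumed)
        # Generation circuits 23, 24, 25 drive the post-synaptic pulse (8),
        # lateral inhibition (18), and axon pulse (19) outputs on firing.
        return fired, fired, fired
```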
The technical solutions of the present invention have thus been described with reference to the preferred embodiments shown in the drawings, but those skilled in the art will readily understand that the protection scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of the related technical features may be made without departing from the principle of the present invention, and the technical solutions after such changes or substitutions fall within the protection scope of the present invention.
Claims (5)
1. A crossbar-array spiking neural network hardware system based on a WTA learning mechanism, characterized in that: the spiking neural network system comprises a data preprocessing module, input-layer neurons, a synapse array, and output-layer neurons, wherein the data preprocessing module is connected to the inputs of a plurality of input-layer neurons, the outputs of the input-layer neurons are connected to the inputs of the synapse array, and the outputs of the synapse array are connected to the inputs of the output-layer neurons; the data preprocessing module converts input model information into the signal form required by the input-layer neurons; the input-layer neurons send pre-synaptic pulses to the synapses in the synapse array; the synapses transmit inhibitory or excitatory signals to the output-layer neurons; each output-layer neuron characterizes the recognition effect on different input models according to the stimulation signals sent by the synapses, reflected in the output frequency of its axon; the synapse array is a crossbar array; in an initialization phase the synapses are in a randomized state, in which some synapses can transmit the stimulation signal of an input-layer neuron to an output-layer neuron while the remaining synapses cannot; in a learning phase different output-layer neurons receive pulse stimulation signals of different strengths, and during the learning of each model the membrane potential of one and only one output-layer neuron is the first to exceed a threshold; this unique output-layer neuron sends a post-synaptic pulse signal and a lateral inhibitory signal, the post-synaptic pulse signal locks the synapse column connected to the neuron, and synapses that receive the post-synaptic pulse first grow into inhibitory synapses, while synapses that receive the pre-synaptic pulse first and the post-synaptic pulse afterwards grow into excitatory synapses.
2. The crossbar-array spiking neural network hardware system based on a WTA learning mechanism according to claim 1, characterized in that: the lateral inhibition signal sent by an output-layer neuron inhibits the other output-layer neurons, attenuating their membrane potentials so that those output-layer neurons lose the competition during the learning phase of the model.
3. The crossbar-array spiking neural network hardware system based on a WTA learning mechanism according to claim 1, characterized in that: the synapse nodes in the crossbar array have two types of output ports, an excitatory port and an inhibitory port; when a synapse receives a pulse signal sent by an output-layer neuron, its state jumps, and the state determines whether the signal output by the synapse to the output-layer neuron is transmitted at the excitatory port or at the inhibitory port.
4. The crossbar-array spiking neural network hardware system based on a WTA learning mechanism according to claim 1, characterized in that: the crossbar array is an MTJ-based MRAM, a PCRAM based on a phase-change storage medium, or an RRAM based on metal-oxide devices, and its basic structure is that a plurality of row lines are connected to the outputs of the input-layer neurons, a plurality of column lines are connected to the inputs of the output-layer neurons, and at least one memory cell with a variable resistance value exists between each row line and each column line.
5. The crossbar-array spiking neural network hardware system based on a WTA learning mechanism according to claim 1, characterized in that: in the initialization phase, the randomized states of the different synapses are obtained by inputting pseudo-random numbers into the synapse array.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010933121.7A CN112163672B (en) | 2020-09-08 | 2020-09-08 | Crossbar-array spiking neural network hardware system based on a WTA learning mechanism
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010933121.7A CN112163672B (en) | 2020-09-08 | 2020-09-08 | Crossbar-array spiking neural network hardware system based on a WTA learning mechanism
Publications (2)
Publication Number | Publication Date |
---|---|
CN112163672A (en) | 2021-01-01
CN112163672B (en) | 2024-02-20
Family
ID=73857932
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010933121.7A Active CN112163672B (en) | 2020-09-08 | 2020-09-08 | Crossbar-array spiking neural network hardware system based on a WTA learning mechanism
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112163672B (en) |
2020
- 2020-09-08 CN CN202010933121.7A patent/CN112163672B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120084240A1 (en) * | 2010-09-30 | 2012-04-05 | International Business Machines Corporation | Phase change memory synaptronic circuit for spiking computation, association and recall |
CN107092959A (en) * | 2017-04-07 | 2017-08-25 | 武汉大学 | Hardware friendly impulsive neural networks model based on STDP unsupervised-learning algorithms |
Non-Patent Citations (1)
Title |
---|
Ruan Chengmei; Liu Chibiao; Qiu Jinming: "Stability of the STDP learning algorithm in spiking neural networks", Journal of Yulin University, no. 06, 15 November 2017 (2017-11-15) *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113033782A (en) * | 2021-03-31 | 2021-06-25 | 广东工业大学 | Method and system for training handwritten number recognition model |
CN113033782B (en) * | 2021-03-31 | 2023-07-07 | 广东工业大学 | Training method and system for handwriting digital recognition model |
CN113408714A (en) * | 2021-05-14 | 2021-09-17 | 杭州电子科技大学 | Full-digital pulse neural network hardware system and method based on STDP rule |
CN113688978A (en) * | 2021-08-12 | 2021-11-23 | 华东师范大学 | Association learning neural network array based on three-terminal synapse device |
Also Published As
Publication number | Publication date |
---|---|
CN112163672B (en) | 2024-02-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10628732B2 (en) | Reconfigurable and customizable general-purpose circuits for neural networks | |
CN112163672B (en) | Crossbar-array spiking neural network hardware system based on a WTA learning mechanism | |
US8515885B2 (en) | Neuromorphic and synaptronic spiking neural network with synaptic weights learned using simulation | |
US9489622B2 (en) | Event-driven universal neural network circuit | |
Yu et al. | Stock market forecasting research based on neural network and pattern matching | |
CN112598119B (en) | On-chip storage compression method of neuromorphic processor facing liquid state machine | |
Qiao et al. | A neuromorphic-hardware oriented bio-plausible online-learning spiking neural network model | |
WO2015047589A2 (en) | Methods and apparatus for implementation of group tags for neural models | |
Huang et al. | Memristor neural network design | |
Sun et al. | Low-consumption neuromorphic memristor architecture based on convolutional neural networks | |
Ogbodo et al. | Light-weight spiking neuron processing core for large-scale 3D-NoC based spiking neural network processing systems | |
CN113408714B (en) | Full-digital pulse neural network hardware system and method based on STDP rule | |
Jiang et al. | Circuit design of RRAM-based neuromorphic hardware systems for classification and modified Hebbian learning | |
Zhang et al. | Neuromorphic architecture for small-scale neocortical network emulation | |
CN110111234B (en) | Image processing system architecture based on neural network | |
Evanusa et al. | Deep reservoir networks with learned hidden reservoir weights using direct feedback alignment | |
Nwagbo et al. | REVIEW OF NEURONAL MULTIPLEXERS WITH BACK PROPAGATION ALGORITHM | |
Phung et al. | Designing a Compact Spiking Neural Network for Learning and Recognizing Digits on 180nm CMOS Process | |
Islam et al. | Pattern Recognition Using Neuromorphic Computing | |
Yarushev et al. | Time Series Prediction based on Hybrid Neural Networks. | |
Yan | A Mixed Signal 65nm CMOS Implementation of a Spiking Neural Network | |
Roy et al. | Online unsupervised structural plasticity algorithm for multi-layer Winner-Take-All with binary synapses | |
Domen et al. | Implementation of Massive Artificial Neural Networks with Field-programmable Gate Arrays | |
Bakó et al. | Hardware spiking neural networks: parallel implementations using FPGAs | |
Koutník et al. | Neural Network Generating Hidden Markov Chain |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||