CN105719000A - Neuron hardware structure and method of simulating pulse neural network by adopting neuron hardware structure - Google Patents

Neuron hardware structure and method of simulating pulse neural network by adopting neuron hardware structure Download PDF

Info

Publication number
CN105719000A
CN105719000A
Authority
CN
China
Prior art keywords
neuron
synapse
layer
pulse
packet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610039384.7A
Other languages
Chinese (zh)
Other versions
CN105719000B (en)
Inventor
罗玉玲
万雷
丘森辉
莫家玲
岑明灿
刘俊秀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangxi Normal University
Original Assignee
Guangxi Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangxi Normal University filed Critical Guangxi Normal University
Priority to CN201610039384.7A priority Critical patent/CN105719000B/en
Publication of CN105719000A publication Critical patent/CN105719000A/en
Application granted granted Critical
Publication of CN105719000B publication Critical patent/CN105719000B/en
Expired - Fee Related
Anticipated expiration

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/06 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N3/065 - Analogue means
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a neuron hardware structure and a method of simulating a spiking (pulse) neural network with it. The neuron hardware structure provides a neural network comprising multiple neuron layers; each neuron layer contains multiple neurons, each neuron contains a synapse layer, and each synapse layer contains multiple synapses. The method comprises: 1) determining the synapse model; 2) simulating the synapses; 3) simulating the neurons; 4) simulating the neuron layers; 5) simulating the neural network. The neuron hardware structure reduces the hardware resources occupied by a single neuron node. Simulating a spiking neural network with this structure gives short simulation time and good scalability, reduces the hardware resources occupied by the network, and thereby increases the number of neurons a hardware device can accommodate.

Description

Neuron hardware structure and method of simulating a spiking neural network with this structure
Technical field
The present invention relates to large-scale spiking neural network technology, and specifically to a neuron hardware structure and a method of simulating a spiking neural network with it.
Background art
The rapid development of neuroscience has accumulated a great deal of knowledge about the structure and function of the human brain. Research shows that the brain is composed of a dense, complex interconnection of neurons and exhibits many remarkable capabilities, such as pattern recognition and decision control. The current understanding of biological neurons is that they transmit information and perform computation through the timing of pulses (spikes). Researchers have therefore proposed the spiking neural network computational model, which simulates behaviors such as information transmission between neurons and signal processing inside neurons. Many fields now use computation based on spiking neural networks, for example prediction, image processing, pattern recognition, and models of the human visual system. These applications require a large number of interconnected neurons to form a spiking neural network system, so an efficient architecture is needed to build spiking neural network hardware systems.
At present, the common way to realize a spiking neural network is software simulation. Software simulation is easier to implement and has a short development cycle, but software generally runs on a serial von Neumann execution framework, so for large-scale neural networks software simulation requires a large amount of simulation time and scales poorly.
Other implementation approaches use application-specific integrated circuits (ASICs), for example monolithic VLSI (Very Large Scale Integration) neuron ASIC systems and wafer-scale systems. Compared with software simulation, these greatly improve execution speed. FPGAs (Field Programmable Gate Arrays) can also be used: an FPGA realizes a highly parallel digital system whose configuration can be changed through a bitstream, giving good flexibility. When spiking neural networks are implemented with ASICs or FPGAs, the hardware structure of the neuron node is very important: if each neuron node occupies fewer hardware resources, the hardware device can accommodate more neurons, which is especially beneficial for realizing large-scale neural network hardware systems.
Therefore, a spiking neural network implementation must consider system scalability, low computing resource consumption, and high execution speed.
Summary of the invention
The object of the invention is to address the deficiencies of the prior art by providing a neuron hardware structure and a method of simulating a spiking neural network with it. The neuron hardware structure reduces the hardware resources occupied by a single neuron node. Simulating a spiking neural network with this structure gives short simulation time and good scalability, reduces the hardware resources occupied by the network, and thereby increases the number of neurons a hardware device can accommodate.
The technical scheme realizing the invention is as follows:
A neuron hardware structure comprises a neural network; the neural network includes multiple neuron layers, each neuron layer contains multiple neurons, each neuron contains a synapse layer, and each synapse layer contains multiple synapses.
Each synapse is an IP core whose input and output signal ports include a pulse input port, a configuration information input port, input/output ports for the amount of resources in the recovered state, in the active state, and in the inactive state, a synaptic current input/output port, an input/output port for the utilization of synaptic efficacy, and input/output handshake ports.
The synapse layer is a synaptic network formed by connecting multiple synapses in parallel.
Each neuron includes a neuron computation core, a data packet decoder, a parameter memory, a pulse buffer, a cell controller, a pulse generation controller, a topology information memory, and a communication interface module. The communication interface module, data packet decoder, pulse buffer, neuron computation core, cell controller, and pulse generation controller are connected in sequence; the pulse generation controller is connected to the communication interface module; the parameter memory is connected to the data packet decoder, the neuron computation core, and the pulse generation controller; the topology information memory is connected to the data packet decoder and the pulse generation controller.
The communication interface module is connected to the layer controller of the neuron layer.
The neuron layer is a neuronal network formed by connecting multiple neurons in parallel.
The neuron layer includes a layer data packet decoder, a memory, a neuron computing block, a layer controller, a layer data packet generator, and a layer communication interface module connected in sequence; the layer communication interface module is also connected to the layer data packet decoder.
The layer communication interface module is connected to the global communication module.
A method of simulating a spiking neural network with the above neuron hardware structure comprises the following steps:
1) Determine the synapse model: use a neuron model with dynamic synapse characteristics to simulate the dynamics of biological synapses;
2) Simulate the synapses: based on the neuron model with dynamic synapse characteristics, use one IP core to simulate the function of a single synapse, and use this IP core as a computation module shared by multiple synapses;
3) Simulate the neurons: within one neuron, multiple synapses share the computation module (IP core) of a single synapse; the multiple virtual synapses, together with the data packet decoder, parameter memory, pulse buffer, cell controller, pulse generation controller, topology information memory, and communication interface module, simulate the function of one neuron;
4) Simulate the neuron layers: within one layer of the spiking neural network, multiple neurons share the computation module of a single neuron; the multiple virtual neurons, together with the layer data packet decoder, memory, neuron computing block, layer controller, layer data packet generator, and layer communication interface module, simulate multiple neurons;
5) Simulate the neural network: interconnect multiple neuron layers using data packets of a unified format, communicate within each neuron layer and between neuron layers by data packets, and let the global communication module schedule all data packets, thereby building a large-scale spiking neural network.
The unified packet format means that each data packet includes four parts: a neuron node address, a synapse address, a packet type, and a payload.
In the packet, the neuron node address indicates the address of the destination neuron node;
the synapse address identifies the address of a synapse within a neuron;
the packet type field marks the type of the packet, of which there are two: configuration packets and pulse packets.
The neuron model with dynamic synapse characteristics is the Tsodyks dynamic synapse model. The neuron behavior described by this model is as follows: the neuron receives information from other neurons through its synapses; it has multiple synaptic inputs and a single output; each input synapse produces an excitatory or inhibitory postsynaptic potential that enters the neuron cell body and changes the cell membrane potential; if the cell membrane potential exceeds the threshold, the neuron outputs a pulse, otherwise no pulse is output.
The working principle of synapse-layer sharing within the hardware structure of a single neuron is as follows:
When the communication interface module receives a data packet, the data packet decoder decodes it. There are two packet types: configuration packets and pulse packets. After decoding, if the received packet is a configuration packet, the configuration information it carries, such as weights, decay coefficients, thresholds, and topology information, is stored in the parameter memory and the topology information memory respectively; these parameters and topology configuration are used for synapse computation and pulse generation. When a pulse packet arrives, the configuration parameters and the initial cell membrane potential are first read from the parameter memory, and the neuron computation core then computes the excitatory or inhibitory postsynaptic potential of each synapse in turn and feeds it to the pulse generation controller for processing. After all synapse computations are complete, the pulse generation controller generates the corresponding pulse packet according to the communication parameters read from the topology information memory and sends it to the destination neurons and synapses through the communication interface module.
Sharing within a neuron layer: a spiking neural network system usually consists of different layers. In a hardware spiking neural network system, in addition to the sharing mechanism inside a single neuron, and to provide further design optimization, the neurons in the same layer also share the computation module of a single neuron. The sharing architecture of a neuron layer uses only one neuron computation module and additionally includes a layer data packet decoder, a memory, a neuron computing block, a layer controller, a layer data packet generator, and a layer communication interface module. The sharing mechanism within a neuron layer is similar to the single-neuron sharing architecture described above: the single-neuron computation module acts as the core module and is shared by the different neurons of the same layer; a random-access data memory stores the variables of the different neurons, with each neuron's parameters and variables stored in a designated memory region; the layer controller manages the workflow of the neuron layer.
Neuron layers transmit information by data packets, each of which includes four parts: a neuron node address, a synapse address, a packet type, and a payload. In the packet, the neuron node address indicates the address of the destination neuron node; the synapse address identifies the address of a synapse within a neuron; the packet type field marks the type of the packet (configuration packet or pulse packet). Configuration packets are used to configure neurons and synapses, for example for parameter initialization and topology configuration; pulse packets are used to exchange information between different neurons in the spiking neural network system.
The neuron hardware structure of the invention is based on a two-level computation-module sharing mechanism at the synapse layer and the neuron layer. In the synapse layer, the multiple synapses within one neuron cell share the computation module of a single synapse; in the neuron layer, the multiple neurons of the same layer of the spiking neural network system share the computation module of a single neuron.
This neuron hardware structure reduces the hardware resources occupied by a single neuron node. Simulating a spiking neural network with this structure gives short simulation time and good scalability, reduces the hardware resources occupied by the network, and thereby increases the number of neurons a hardware device can accommodate.
Brief description of the drawings
Fig. 1 is the spiking neural network hardware architecture diagram of the embodiment;
Fig. 2 is a flow diagram of the method of the embodiment;
Fig. 3 shows the input and output signals of the single-synapse computation module of the embodiment;
Fig. 4 shows the workflow of multiple synapses in the embodiment;
Fig. 5 shows the internal structure of a single neuron in the embodiment;
Fig. 6 shows the internal structure of a neuron layer in the embodiment;
Fig. 7 shows the format of the data packets used for communication within a neuron layer and between neuron layers in the embodiment;
Fig. 8 shows the interconnection of multiple neuron layers in the embodiment.
Detailed description of the invention
The invention is further described below in conjunction with the drawings and an embodiment, which does not limit the invention.
Embodiment:
Referring to Fig. 1, a neuron hardware structure comprises a neural network; the neural network includes multiple neuron layers, each neuron layer contains multiple neurons, each neuron contains a synapse layer, and each synapse layer contains multiple synapses.
Referring to Fig. 3, each synapse is an IP core whose input and output signal ports include a pulse input port, a configuration information input port, input/output ports for the amount of resources in the recovered state, in the active state, and in the inactive state, a synaptic current input/output port, an input/output port for the utilization of synaptic efficacy, and input/output handshake ports.
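As an illustration only, the port list above can be mirrored by a small record type; the field names below are assumptions made for readability and are not signal names taken from the patent.

    from dataclasses import dataclass

    @dataclass
    class SynapseCoreIO:
        """Hypothetical software mirror of the synapse IP core ports of Fig. 3."""
        spike_in: bool        # pulse input port
        config_in: int        # configuration information input port
        r_recovered: float    # resources in the recovered state (input/output)
        r_active: float       # resources in the active state (input/output)
        r_inactive: float     # resources in the inactive state (input/output)
        i_syn: float          # synaptic current (input/output)
        u_efficacy: float     # utilization of synaptic efficacy (input/output)
        valid: bool = False   # input/output handshake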
The synapse layer is a synaptic network formed by connecting multiple synapses in parallel.
Referring to Fig. 5, each neuron includes a neuron computation core, a data packet decoder, a parameter memory, a pulse buffer, a cell controller, a pulse generation controller, a topology information memory, and a communication interface module. The communication interface module, data packet decoder, pulse buffer, neuron computation core, cell controller, and pulse generation controller are connected in sequence; the pulse generation controller is connected to the communication interface module; the parameter memory is connected to the data packet decoder, the neuron computation core, and the pulse generation controller; the topology information memory is connected to the data packet decoder and the pulse generation controller.
The communication interface module is connected to the layer controller of the neuron layer.
The neuron layer is a neuronal network formed by connecting multiple neurons in parallel.
Referring to Fig. 6, the neuron layer includes a layer data packet decoder, a memory, a neuron computing block, a layer controller, a layer data packet generator, and a layer communication interface module connected in sequence; the layer communication interface module is also connected to the layer data packet decoder.
The layer communication interface module is connected to the global communication module.
The global communication module is a router that enforces the packet scheduling rules.
Referring to Fig. 2, the method of simulating a spiking neural network with the above neuron hardware structure comprises the following steps:
1) Determine the synapse model: use a neuron model with dynamic synapse characteristics to simulate the dynamics of biological synapses;
2) Simulate the synapses: based on the neuron model with dynamic synapse characteristics, use one IP core to simulate the function of a single synapse, and use this IP core as a computation module shared by multiple synapses;
3) Simulate the neurons: within one neuron, multiple synapses share the computation module (IP core) of a single synapse; the multiple virtual synapses, together with the data packet decoder, parameter memory, pulse buffer, cell controller, pulse generation controller, topology information memory, and communication interface module, simulate the function of one neuron;
4) Simulate the neuron layers: within one layer of the spiking neural network, multiple neurons share the computation module of a single neuron; the multiple virtual neurons, together with the layer data packet decoder, memory, neuron computing block, layer controller, layer data packet generator, and layer communication interface module, simulate multiple neurons;
5) Simulate the neural network: interconnect multiple neuron layers using data packets of a unified format, communicate within each neuron layer and between neuron layers by data packets, and let the global communication module schedule all data packets, thereby building a large-scale spiking neural network.
The unified packet format means that each data packet includes four parts: a neuron node address, a synapse address, a packet type, and a payload.
In the packet, the neuron node address indicates the address of the destination neuron node;
the synapse address identifies the address of a synapse within a neuron;
the packet type field marks the type of the packet, of which there are two: configuration packets and pulse packets.
The neuron model with dynamic synapse characteristics is the Tsodyks dynamic synapse model. The neuron behavior described by this model is as follows: the neuron receives information from other neurons through its synapses; it has multiple synaptic inputs and a single output; each input synapse produces an excitatory or inhibitory postsynaptic potential that enters the neuron cell body and changes the cell membrane potential; if the cell membrane potential exceeds the threshold of 0.009 mV, the neuron outputs a pulse, otherwise no pulse is output.
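For illustration, the following is a minimal discrete-time sketch of the Tsodyks dynamic synapse, assuming a forward Euler update; the time constants, the facilitation parameter U, the efficacy A, and the function name tsodyks_step are illustrative choices, not values specified by the patent. The state variables correspond to the ports of the synapse IP core: resources in the recovered, active, and inactive states, the utilization of synaptic efficacy, and the synaptic current.

    def tsodyks_step(state, spike, dt=1e-3,
                     tau_rec=0.8, tau_in=3e-3, tau_facil=0.5, U=0.5, A=1.0):
        """Advance one dynamic synapse by one time step.

        state = (x, y, z, u): resources in the recovered (x), active (y) and
        inactive (z) states, and the utilization of synaptic efficacy (u).
        Returns the new state and the synaptic current.
        """
        x, y, z, u = state
        # relaxation between pulses (forward Euler, derivatives from old values)
        dx = z / tau_rec
        dy = -y / tau_in
        dz = y / tau_in - z / tau_rec
        du = -u / tau_facil
        x, y, z, u = x + dt * dx, y + dt * dy, z + dt * dz, u + dt * du
        if spike:                      # a presynaptic pulse arrives
            u += U * (1.0 - u)         # facilitation of the utilization variable
            released = u * x           # recovered resources that become active
            x -= released
            y += released
        i_syn = A * y                  # synaptic current driven by active resources
        return (x, y, z, u), i_syn

A neuron would accumulate the currents of all its synapses into a membrane potential and emit a pulse when that potential exceeds the threshold, as described above.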
As shown in Fig. 4, the workflow by which multiple synapses share the neuron computation core component is as follows:
Step 1: Initialize the configuration. After the neuron computation core module starts, it accepts parameter initialization in order to configure the synapses, for example the initial weights, the initial cell membrane potential, and other variables;
Step 2: Update the configuration parameters. If new configuration parameters are received, they replace the old parameter values in the random-access data memory;
Step 3: Check for pulse arrival. Determine whether the received data packet is a pulse packet; if it is, go to Step 4; if not, continue waiting for a pulse packet;
Step 4: Read the configuration parameters. When a pulse packet arrives, parse the packet to obtain the destination neuron address, then read the corresponding configuration information from the random-access data memory;
Step 5: Compute the voltage. Compute the output voltage of the corresponding synapse;
Step 6: Check whether all synapses have been computed. After the voltage of each synapse is computed, determine whether all synapses of this neuron have been computed; if not, go to Step 3 and continue waiting for pulses; if all synapse computations are complete, go to Step 7;
Step 7: Compute the potential output. Compare the voltage output by Step 6 with the threshold; if it exceeds the threshold, produce an output pulse, otherwise output no pulse;
Step 8: Repeat the above steps.
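A minimal software sketch of this shared-core workflow follows, assuming a dictionary-based packet and a placeholder per-synapse computation; the class and method names (SharedSynapseCore, on_packet, compute_psp) are illustrative, and the real design time-multiplexes one hardware IP core rather than Python objects.

    class SharedSynapseCore:
        """One computation routine serves all synapses of a neuron (Steps 1-8)."""

        def __init__(self, n_synapses, threshold=0.009):
            # Step 1: per-synapse configuration held in a shared data memory
            self.params = [dict(weight=0.0, decay=1.0) for _ in range(n_synapses)]
            self.threshold = threshold

        def configure(self, synapse_id, **kwargs):
            # Step 2: new configuration replaces the old values
            self.params[synapse_id].update(kwargs)

        def on_packet(self, packet):
            # Step 3: only pulse packets trigger computation
            if packet["type"] == "config":
                self.configure(packet["synapse"], **packet["payload"])
                return None
            # Steps 4-6: read each synapse's parameters and accumulate its voltage
            v = 0.0
            for syn_id, p in enumerate(self.params):
                v += self.compute_psp(p, packet["payload"].get(syn_id, 0.0))
            # Step 7: threshold comparison decides whether a pulse is produced
            return 1 if v > self.threshold else 0

        @staticmethod
        def compute_psp(p, spike_in):
            # Placeholder postsynaptic contribution; the patent uses the
            # Tsodyks dynamic synapse sketched earlier.
            return p["weight"] * p["decay"] * spike_in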
The working principle of synapse-layer sharing within the hardware structure of a single neuron is as follows:
Referring to Fig. 5, when the communication interface module receives a data packet, the data packet decoder decodes it. There are two packet types: configuration packets and pulse packets. After decoding, if the received packet is a configuration packet, the configuration information it carries, such as weights, decay coefficients, thresholds, and topology information, is stored in the parameter memory and the topology information memory respectively; these parameters and topology configuration are used for synapse computation and pulse generation. When a pulse packet arrives, it is stored in the pulse buffer; when the neuron computation core reads the pulse packet from the pulse buffer, it first reads, according to the packet header, the configuration parameters and the initial cell membrane potential from the parameter memory, and then computes the excitatory or inhibitory postsynaptic potential of each synapse in turn and feeds it to the pulse generation controller for processing. After all synapse computations are complete, the pulse generation controller generates the corresponding pulse packet according to the communication parameters read from the topology information memory and sends it to the destination neurons and synapses through the communication interface module.
Referring to Fig. 6, sharing within a neuron layer: a spiking neural network system usually consists of different layers. In a hardware spiking neural network system, in addition to the sharing mechanism inside a single neuron, and to provide further design optimization, the neurons in the same layer also share the single-neuron computation module shown in Fig. 5, namely the neuron computing block in the figure. The sharing architecture of a neuron layer uses only one neuron computation module and additionally includes a layer data packet decoder, a memory, a neuron computing block, a layer controller, a layer data packet generator, and a layer communication interface module. The sharing mechanism within a neuron layer is similar to the single-neuron sharing architecture described above: the single-neuron computation module acts as the core module and is shared by the different neurons of the same layer; the memory stores the variables of the different neurons, with each neuron's parameters and variables stored in a designated region of the random-access data memory; the layer controller manages the workflow of the neuron layer.
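A compact way to picture this layer-level sharing is a single compute routine applied to per-neuron state regions held in a shared memory. The sketch below uses hypothetical names (NeuronLayer, state_ram, core_factory); core_factory builds one per-neuron state record, for example the SharedSynapseCore sketched above, and the listing only illustrates the time-multiplexing idea.

    class NeuronLayer:
        """One neuron computing block serves every virtual neuron of the layer."""

        def __init__(self, n_neurons, core_factory):
            # Each virtual neuron's parameters and variables live in a
            # dedicated region of the layer memory (modelled here as a list).
            self.state_ram = [core_factory() for _ in range(n_neurons)]

        def on_packet(self, packet):
            # The layer controller selects the state region by the neuron
            # node address carried in the packet, then runs the shared core.
            neuron = self.state_ram[packet["neuron"]]
            out = neuron.on_packet(packet)
            if out:
                return {"type": "pulse", "neuron": packet["neuron"], "payload": out}
            return None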
Referring to Fig. 7, neuron layers transmit information by data packets, each of which includes four parts: a neuron node address, a synapse address, a packet type, and a payload. In the packet, the neuron node address indicates the address of the destination neuron node; the synapse address identifies the address of a synapse within a neuron; the packet type field marks the type of the packet (configuration packet or pulse packet). Configuration packets are used to configure neurons and synapses, for example for parameter initialization and topology configuration; pulse packets are used to exchange information between different neurons in the spiking neural network system.
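For illustration, the four fields of Fig. 7 can be packed into a single word as follows; the field widths (8-bit neuron node address, 8-bit synapse address, 1-bit packet type, 16-bit payload) are assumptions, since the patent fixes only the four fields and their order.

    CONFIG_PACKET, PULSE_PACKET = 0, 1

    def pack_packet(neuron_addr, synapse_addr, ptype, payload):
        """Assemble a 33-bit packet word: [neuron | synapse | type | payload]."""
        return ((neuron_addr & 0xFF) << 25) | ((synapse_addr & 0xFF) << 17) \
               | ((ptype & 0x1) << 16) | (payload & 0xFFFF)

    def unpack_packet(word):
        """Split a packet word back into its four fields."""
        return ((word >> 25) & 0xFF,   # neuron node address
                (word >> 17) & 0xFF,   # synapse address
                (word >> 16) & 0x1,    # packet type
                word & 0xFFFF)         # payload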
Referring to Fig. 8, the packet scheduling rule enforced by the global communication module is as follows: packets output by neuron layer #1 are sent to neuron layer #2, packets output by neuron layer #2 are sent to neuron layer #3, and so on, until packets output by neuron layer #N-1 are sent to neuron layer #N.
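The feed-forward dispatch rule of Fig. 8 amounts to the routing function sketched below; the function name and the way the source layer is identified are assumptions made for the example.

    def route(source_layer, n_layers):
        """Destination layer for a packet emitted by layer #source_layer
        (layers numbered 1..n_layers); the last layer has no downstream
        destination under the layer-to-next-layer scheduling rule."""
        return source_layer + 1 if source_layer < n_layers else None

For example, with n_layers = 3, route(1, 3) returns 2, route(2, 3) returns 3, and route(3, 3) returns None.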

Claims (9)

1. A neuron hardware structure, characterized in that it comprises a neural network, wherein the neural network includes multiple neuron layers, each neuron layer contains multiple neurons, each neuron contains a synapse layer, and each synapse layer contains multiple synapses.
2. The neuron hardware structure according to claim 1, characterized in that each synapse is an IP core whose input and output signal ports include a pulse input port, a configuration information input port, input/output ports for the amount of resources in the recovered state, in the active state, and in the inactive state, a synaptic current input/output port, an input/output port for the utilization of synaptic efficacy, and input/output handshake ports.
3. The neuron hardware structure according to claim 1, characterized in that the synapse layer is a synaptic network formed by connecting multiple synapses in parallel.
4. The neuron hardware structure according to claim 1, characterized in that each neuron includes a neuron computation core, a data packet decoder, a parameter memory, a pulse buffer, a cell controller, a pulse generation controller, a topology information memory, and a communication interface module; the communication interface module, data packet decoder, pulse buffer, neuron computation core, cell controller, and pulse generation controller are connected in sequence; the pulse generation controller is connected to the communication interface module; the parameter memory is connected to the data packet decoder, the neuron computation core, and the pulse generation controller; the topology information memory is connected to the data packet decoder and the pulse generation controller;
the communication interface module is connected to the layer controller of the neuron layer.
5. The neuron hardware structure according to claim 1, characterized in that the neuron layer is a neuronal network formed by connecting multiple neurons in parallel.
6. The neuron hardware structure according to claim 1, characterized in that the neuron layer includes a layer data packet decoder, a memory, a neuron computing block, a layer controller, a layer data packet generator, and a layer communication interface module connected in sequence, the layer communication interface module also being connected to the layer data packet decoder;
the layer communication interface module is connected to the global communication module.
7. A method of simulating a spiking neural network with the neuron hardware structure according to any one of claims 1-6, characterized in that it comprises the following steps:
1) Determine the synapse model: use a neuron model with dynamic synapse characteristics to simulate the dynamics of biological synapses;
2) Simulate the synapses: based on the neuron model with dynamic synapse characteristics, use one IP core to simulate the function of a single synapse, and use this IP core as a computation module shared by multiple synapses;
3) Simulate the neurons: within one neuron, multiple synapses share the computation module (IP core) of a single synapse; the multiple virtual synapses, together with the data packet decoder, parameter memory, pulse buffer, cell controller, pulse generation controller, topology information memory, and communication interface module, simulate the function of one neuron;
4) Simulate the neuron layers: within one layer of the spiking neural network, multiple neurons share the computation module of a single neuron; the multiple virtual neurons, together with the layer data packet decoder, memory, neuron computing block, layer controller, layer data packet generator, and layer communication interface module, simulate multiple neurons;
5) Simulate the neural network: interconnect multiple neuron layers using data packets of a unified format, communicate within each neuron layer and between neuron layers by data packets, and let the global communication module schedule all data packets, thereby building a large-scale spiking neural network.
8. The method of simulating a spiking neural network according to claim 7, characterized in that the unified packet format means that each data packet includes four parts: a neuron node address, a synapse address, a packet type, and a payload;
in the packet, the neuron node address indicates the address of the destination neuron node;
the synapse address identifies the address of a synapse within a neuron;
the packet type field marks the type of the packet, of which there are two: configuration packets and pulse packets.
9. The method of simulating a spiking neural network according to claim 7, characterized in that the neuron model with dynamic synapse characteristics is the Tsodyks dynamic synapse model, and the neuron behavior described by this model is as follows: the neuron receives information from other neurons through its synapses; it has multiple synaptic inputs and a single output; each input synapse produces an excitatory or inhibitory postsynaptic potential that enters the neuron cell body and changes the cell membrane potential; if the cell membrane potential exceeds the threshold, the neuron outputs a pulse, otherwise no pulse is output.
CN201610039384.7A 2016-01-21 2016-01-21 Neuron hardware structure and method of simulating a spiking neural network with the structure Expired - Fee Related CN105719000B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610039384.7A CN105719000B (en) 2016-01-21 2016-01-21 Neuron hardware structure and method of simulating a spiking neural network with the structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610039384.7A CN105719000B (en) 2016-01-21 2016-01-21 Neuron hardware structure and method of simulating a spiking neural network with the structure

Publications (2)

Publication Number Publication Date
CN105719000A true CN105719000A (en) 2016-06-29
CN105719000B CN105719000B (en) 2018-02-16

Family

ID=56153681

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610039384.7A Expired - Fee Related CN105719000B (en) 2016-01-21 2016-01-21 Neuron hardware structure and method of simulating a spiking neural network with the structure

Country Status (1)

Country Link
CN (1) CN105719000B (en)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090313195A1 (en) * 2008-06-17 2009-12-17 University Of Ulster Artificial neural network architecture
CN101625735A (en) * 2009-08-13 2010-01-13 西安理工大学 FPGA implementation method based on LS-SVM classification and recurrence learning recurrence neural network
CN101997538A (en) * 2009-08-19 2011-03-30 中国科学院半导体研究所 Pulse coupling based silicon-nanowire complementary metal oxide semiconductors (CMOS) neuronal circuit
CN101639901A (en) * 2009-09-03 2010-02-03 王连明 Feedforward neural network hardware realization method based on multicore technology
CN105095966A (en) * 2015-07-16 2015-11-25 清华大学 Hybrid computing system of artificial neural network and impulsive neural network
CN105095961A (en) * 2015-07-16 2015-11-25 清华大学 Mixing system with artificial neural network and impulsive neural network
CN105095967A (en) * 2015-07-16 2015-11-25 清华大学 Multi-mode neural morphological network core
CN105095965A (en) * 2015-07-16 2015-11-25 清华大学 Hybrid communication method of artificial neural network and impulsive neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
S. CARRILLO, J. HARKIN, L.J. MCDAID: "Hierarchical Network-on-Chip and Traffic Compression for Spiking Neural Network Implementations", 2012 Sixth IEEE/ACM International Symposium on Networks-on-Chip *
LIU PEILONG: "Research and Design of FPGA-Based Hardware Implementation of Neural Networks", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109643383B (en) * 2016-07-28 2023-09-01 谷歌有限责任公司 Domain split neural network
CN109643383A (en) * 2016-07-28 2019-04-16 谷歌有限责任公司 Domain separates neural network
CN106372721B (en) * 2016-08-29 2018-08-21 中国传媒大学 The 3D method for visualizing of Large Scale Neural Networks
CN106372721A (en) * 2016-08-29 2017-02-01 中国传媒大学 Large-scale nerve network 3D visualization method
CN106650922A (en) * 2016-09-29 2017-05-10 清华大学 Hardware neural network conversion method, computing device, compiling method and neural network software and hardware collaboration system
CN106650922B (en) * 2016-09-29 2019-05-03 清华大学 Hardware neural network conversion method, computing device, software and hardware cooperative system
CN108154225A (en) * 2016-12-06 2018-06-12 上海磁宇信息科技有限公司 A kind of neural network chip calculated using simulation
CN108154225B (en) * 2016-12-06 2021-09-03 上海磁宇信息科技有限公司 Neural network chip using analog computation
CN106934457B (en) * 2017-03-08 2019-12-06 杭州领芯电子有限公司 Pulse neuron implementation framework capable of realizing flexible time division multiplexing
CN106934457A (en) * 2017-03-08 2017-07-07 杭州领芯电子有限公司 One kind flexibly can realize framework by time-multiplexed spiking neuron
CN108694694A (en) * 2017-04-10 2018-10-23 英特尔公司 Abstraction library for allowing for scalable distributed machine learning
CN108694694B (en) * 2017-04-10 2024-03-19 英特尔公司 Abstract library for enabling scalable distributed machine learning
CN108446762A (en) * 2018-03-30 2018-08-24 广西师范大学 A kind of hardware circuit of the analog pulse neuron based on MOS field-effect transistors and its application
CN108673534A (en) * 2018-04-20 2018-10-19 江苏大学 A kind of software manipulator for realizing intelligent sorting using artificial synapse network system
CN109800851A (en) * 2018-12-29 2019-05-24 中国人民解放军陆军工程大学 Neurosynaptic circuit and spiking neural network circuit
CN109800851B (en) * 2018-12-29 2024-03-01 中国人民解放军陆军工程大学 Neural synapse circuit and impulse neural network circuit
CN110111234A (en) * 2019-04-11 2019-08-09 上海集成电路研发中心有限公司 A kind of image processing system framework neural network based
CN110111234B (en) * 2019-04-11 2023-12-15 上海集成电路研发中心有限公司 Image processing system architecture based on neural network
CN113767402A (en) * 2019-04-29 2021-12-07 ams国际有限公司 Computationally efficient implementation of simulated neurons
CN110287858A (en) * 2019-06-21 2019-09-27 天津大学 Bionical impulsive neural networks visual identifying system based on FPGA
CN111476354A (en) * 2020-04-11 2020-07-31 复旦大学 Pulse neural network based on flexible material
CN111476354B (en) * 2020-04-11 2022-10-11 复旦大学 Pulse neural network circuit based on flexible material
CN111756352B (en) * 2020-05-18 2022-08-19 北京大学 Pulse array time domain filtering method, device, equipment and storage medium
CN111756352A (en) * 2020-05-18 2020-10-09 北京大学 Pulse array time domain filtering method, device, equipment and storage medium
CN112101517A (en) * 2020-08-04 2020-12-18 西北师范大学 FPGA implementation method based on piecewise linear pulse neuron network
CN112101517B (en) * 2020-08-04 2024-03-08 西北师范大学 FPGA implementation method based on piecewise linear impulse neuron network
WO2022078334A1 (en) * 2020-10-13 2022-04-21 北京灵汐科技有限公司 Processing method for processing signals using neuron model and network, medium and device
WO2022099559A1 (en) * 2020-11-11 2022-05-19 浙江大学 Brain-like computer supporting hundred million neurons
CN114611686A (en) * 2022-05-12 2022-06-10 之江实验室 Synapse delay implementation system and method based on programmable neural mimicry core
CN114611686B (en) * 2022-05-12 2022-08-30 之江实验室 Synapse delay implementation system and method based on programmable neural mimicry core

Also Published As

Publication number Publication date
CN105719000B (en) 2018-02-16

Similar Documents

Publication Publication Date Title
CN105719000A (en) Neuron hardware structure and method of simulating pulse neural network by adopting neuron hardware structure
US11055609B2 (en) Single router shared by a plurality of chip structures
US10885424B2 (en) Structural plasticity in spiking neural networks with symmetric dual of an electronic neuron
US10521714B2 (en) Multi-compartment neurons with neural cores
US10891544B2 (en) Event-driven universal neural network circuit
US20190377997A1 (en) Synaptic, dendritic, somatic, and axonal plasticity in a network of neural cores using a plastic multi-stage crossbar switching
JP5963315B2 (en) Methods, devices, and circuits for neuromorphic / synaptronic spiking neural networks with synaptic weights learned using simulation
US8874498B2 (en) Unsupervised, supervised, and reinforced learning via spiking computation
Ogbodo et al. Light-weight spiking neuron processing core for large-scale 3D-NoC based spiking neural network processing systems
US8983886B2 (en) Self-evolvable logic fabric
WO2022108704A1 (en) Routing spike messages in spiking neural networks
Ying et al. A scalable hardware architecture for multi-layer spiking neural networks
Martin Scalable interconnect strategies for neuro-glia networks using Networks-on-Chip.
Mehrabi et al. FPGA-Based Spiking Neural Networks
Trefzer et al. Hierarchical networks-on-chip architecture for neuromorphic hardware

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180216

Termination date: 20210121

CF01 Termination of patent right due to non-payment of annual fee