CN110909869B - Brain-like computing chip based on spiking neural network - Google Patents

Brain-like computing chip based on spiking neural network

Info

Publication number
CN110909869B
Authority
CN
China
Prior art keywords
synapse
neuron
neurons
data packet
node
Prior art date
Legal status
Active
Application number
CN201911148787.5A
Other languages
Chinese (zh)
Other versions
CN110909869A (en)
Inventor
马德
李一涛
吴叶倩
戴书画
段会康
潘纲
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN201911148787.5A
Publication of CN110909869A
Priority to PCT/CN2020/128470
Application granted
Publication of CN110909869B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/06 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/061 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using biological neurons, e.g. biological neurons connected to an integrated circuit
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/049 - Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a brain-like computing chip based on a spiking neural network, which belongs to the technical field of artificial neural networks and consists of a plurality of nodes that integrate storage and computation. Each node comprises a plurality of neurons connected through synapses, and the neurons of the same node share a synapse memory, which is divided into a linked list area and a synapse data packet area. The linked list area contains a linked list for each neuron, used to store the addresses of that neuron's synapse data packets; the synapse data packet area contains the synapse data packets of each neuron, used to store the synapse information of each neuron in the node. To address the large differences in the number of synaptic connections among different neurons, the invention stores synapse information in shared memory and dynamically partitions the synapse memory, so that storage can be allocated to each neuron on demand and the storage space is fully utilized.

Description

Brain-like computing chip based on spiking neural network
Technical Field
The invention relates to the technical field of artificial neural networks, and in particular to a brain-like computing chip based on a spiking neural network.
Background
In recent years, the "memory wall" and "power wall" effects have become more and more serious, and the von Neumann architecture followed by traditional computers is facing great challenges. In the post-Moore era, the semiconductor industry needs new architectures and methods to meet the electronics industry's demands for ever-increasing computing performance at extremely low power consumption. With the development of brain science, it has gradually become clear that the human brain is a computer of extremely high energy efficiency. Brain-like computing has emerged accordingly: it merges memory and computing units into one, fundamentally avoiding the memory-wall problem of the classic von Neumann architecture. The basic idea of brain-like computing is to apply concepts from biological neural networks to computer system design in order to improve performance and reduce power consumption for specific intelligent information processing applications.
As the third generation of neural networks, spiking neural networks offer high biological fidelity and unique advantages in real-world learning tasks, so they have rapidly become a research hotspot for brain-like computing chips, and industry has released a number of brain-like chips based on spiking neural networks.
In 2015, IBM released the TrueNorth brain-like chip, which supports a million neurons at extremely low operating power consumption, and in 2016 released a TrueNorth-based brain-like supercomputing platform. In 2017, Intel released the brain-like chip Loihi, which supports online autonomous learning. In 2019, Tsinghua University's "Tianjic" chip appeared on the cover of Nature, integrating the two main research directions of intelligence, one based on computer science and one based on neuroscience.
Application publication No. CN 109901878A discloses a brain-like computing chip and a computing device. The brain-like computing chip is a many-core system composed of one or more functional cores, with data transmitted between the functional cores through a network-on-chip. Each functional core comprises at least one programmable neuron processor for computing a plurality of neuron models and at least one coprocessor coupled to the neuron processor for performing integration and/or multiply-add operations; the neuron processor can call the coprocessor to execute multiply-add operations.
Application publication No. CN 107368888A discloses a brain-like computing system and its synapses. The synapses are MTJ synapses comprising a memory MTJ and a reference MTJ. The output of the MTJ synapse is connected to a charge-receiving neuron of the brain-like computing system, its input is connected to a charge-outputting neuron, and the output of the MTJ synapse is held at a reference potential. The memory MTJ receives a first pulse from the input of the MTJ synapse and the reference MTJ receives a second pulse from the same input; the two pulses are transmitted at the same time and have the same shape and opposite sign. The MTJ synapse further comprises a first gating device connected to the memory MTJ and a second gating device connected to the reference MTJ; the two gating devices are arranged on different current paths between the charge-outputting neuron and the charge-receiving neuron, with opposite current conduction directions.
Because a single neuron has limited capability, only millions of neurons working cooperatively can show unique advantages in specific intelligent information processing, so scaling up the number of neurons on a spiking neural network chip remains a core problem. Synapses express the connection relations of a spiking neural network, including the connection topology, weights, delays and other information, so an efficient synapse implementation is essential both for flexible reconfiguration of the network topology and for a flexibly scalable neuron count.
Disclosure of Invention
The invention provides a brain-like computing chip based on a spiking neural network that supports flexible configuration of the neural network topology.
A brain-like computing chip based on a spiking neural network is composed of a plurality of nodes that integrate storage and computation; each node comprises a plurality of neurons; the neurons are connected through synapses, the neurons of the same node share a synapse memory, and the synapse memory is divided into a linked list area containing a linked list for each neuron and a synapse data packet area containing the synapse data packets of each neuron.
To keep the number of synaptic connections of each neuron in a node flexible, the synapse memory can flexibly shift the boundary between the linked list area and the synapse data packet area.
The linked list and the synapse data packets form a two-level storage organization that is dynamically allocated to the different neurons in a node, so that the number of neurons supported by the node and the number of synaptic connections of each neuron can be flexibly configured.
The linked list is used to store the storage addresses of each neuron's synapse data packets; the span between linked-list entries corresponds to the address range of the synapse data packets of a neuron.
Preferably, during storage the difference between the start address of one neuron's synapse data packets and the start address of the next neuron's synapse data packets is the address range of the current neuron's synapse data packets; for the last neuron, both the start address and the end address of its synapse data packets must be provided.
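To make this addressing rule concrete, the following is a minimal sketch (not part of the patent; the function name, variable names, and example addresses are illustrative assumptions) that derives each neuron's packet address range from consecutive start addresses in the linked list area, with the last neuron carrying an explicit end address.

```python
# Illustrative sketch only: derive each neuron's synapse-packet address range
# from the consecutive start addresses held in the linked list area.
def packet_ranges(start_addrs, last_end_addr):
    """start_addrs[i] is the start address of neuron i's synapse data packets;
    last_end_addr is the explicit end address stored for the last neuron."""
    ranges = []
    for i, start in enumerate(start_addrs):
        # The next neuron's start address bounds the current neuron's packets;
        # only the last neuron needs an explicit end address.
        end = start_addrs[i + 1] if i + 1 < len(start_addrs) else last_end_addr
        ranges.append((start, end))
    return ranges

# Example with three connected neurons sharing one synapse memory.
print(packet_ranges([0x100, 0x140, 0x1A0], last_end_addr=0x200))
# [(256, 320), (320, 416), (416, 512)]
```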
The synapse data packets are used to store the synapse information of each neuron in the node; the length of a synapse data packet is determined by the number of connected target neurons; each synapse data packet is composed of the synapse information of connections that target the same node.
A synapse data packet comprises a packet header and a payload; the packet header contains the target node address and the packet length.
The packet length can be represented in a fixed-length mode or a variable-length mode.
Each node is configured with a length lookup table. In the fixed-length mode, 2 bits of information select a packet length value from the preconfigured 8-bit-wide packet lengths; when the packet length is one of these preconfigured 8-bit values, the storage occupied by the header information is effectively reduced. The bit width of the packet length can be configured according to application requirements. When the packet length is not one of the preconfigured 8-bit-wide packet lengths, an additional 8 bits of information are appended to indicate the actual length.
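A minimal sketch of this length encoding follows, assuming for illustration a three-entry per-node length lookup table and a 2-bit selector whose remaining code escapes to an explicit 8-bit length; the preset values and helper names are not taken from the patent.

```python
# Hedged sketch of the fixed/variable packet-length encoding: 2 selector bits
# either pick a preconfigured length or escape to an explicit 8-bit length.
LENGTH_LUT = {0b00: 16, 0b01: 32, 0b10: 64}  # per-node preconfigured lengths (example values)

def encode_length(length):
    """Return (selector_bits, extra_8bit_length_or_None)."""
    for code, preset in LENGTH_LUT.items():
        if preset == length:
            return code, None        # fixed-length mode: 2 header bits suffice
    assert length < 256, "actual length must fit in 8 bits"
    return 0b11, length              # variable-length mode: 2 + 8 header bits

def decode_length(selector, extra=None):
    return LENGTH_LUT[selector] if selector != 0b11 else extra

print(encode_length(32))   # (1, None) -> only the 2-bit selector is stored
print(encode_length(40))   # (3, 40)   -> escape code plus the 8-bit actual length
```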
The payload contains the target neuron numbers and synaptic connection weights specified by the packet length. The synaptic connection weight supports a linear mode and a nonlinear mode: in the linear mode, the stored weight is the actual weight of the synapse connecting the two neurons; in the nonlinear mode, the stored weight is used as an index into a high-precision weight lookup table to retrieve a weight value of higher precision.
In the brain-like computing chip, a delay information lookup table is established in each target node. Delay information is obtained from this lookup table according to the length of the transmission path between the source node and the target node of a synapse data packet, and different synaptic delays are established between neurons in the target node according to the obtained delay information. Because a different delay lookup table can be established in each target node, the user can set the delay value corresponding to a given range of transmission distances per target node, so that neurons on target nodes with the same transmission path length can still have different synaptic delays.
The brain-like computing chip sets a reference origin for each node; when the target node address is stored in the synapse data packet header, only the address relative to the reference origin needs to be stored, which reduces the bit width of the stored address.
Preferably, the target node address stored in the synapse data packet header may be either the absolute address or a relative address of the target node. When a relative address is used, the node must be assigned a reference origin, and the absolute address of the target node is the reference origin plus the relative address. Thus, when the target nodes connected to a node's neurons are far away, choosing a suitable reference origin for those neurons reduces the bit width of the target node address, improves memory utilization, and allows more synapses to be stored in the same storage space.
Compared with the prior art, the invention has the following effects:
(1) To address the huge differences in the number of connections of different neurons, the brain-like chip of the invention stores synapse information in a shared memory and dynamically divides the synapse memory into a linked list area and a synapse data packet area, so that storage can be allocated to each neuron on demand and the storage space is fully utilized.
(2) When the packet length of a synapse data packet is one of the preconfigured 8-bit-wide packet lengths, only 2 bits of information are needed to select the packet length value instead of a wider field representing the full value, which effectively reduces the storage space occupied by the header information.
(3) By establishing a delay information lookup table in each target node, the brain-like chip of the invention keeps the delay information in the target node's lookup table rather than occupying bit width in each synapse data packet, which effectively reduces the storage space occupied by each synapse data packet and therefore allows more synaptic connections to be stored.
(4) The brain-like chip of the invention sets a reference origin for each node and stores target node addresses as relative addresses, which effectively reduces the bit width occupied by the target node address and further reduces the storage space occupied by the header information.
Drawings
FIG. 1 is a schematic diagram of the shared synapse memory structure of the brain-like chip according to the present invention.
FIG. 2 is an example of the neuromorphic brain-like computing chip architecture according to the present invention.
Detailed Description
The invention is further illustrated by the following specific examples, which are intended to facilitate a better understanding of the contents of the invention.
As shown in FIG. 1, a brain-like computing chip based on a spiking neural network is composed of a plurality of nodes that integrate storage and computation; each node comprises a plurality of neurons connected through synapses. A node contains M neurons, of which n are connected (M ≥ n) and the rest are unconnected; the synapses of the n connected neurons share one on-chip memory. The synapse memory is divided into two parts, a linked list area and a synapse data packet area, and the boundary between them can change dynamically with the number n of connected neurons (boundary address N).
Each unit of the linked list holds the start address of the synapse data packets of one neuron; the difference between the address in the current unit and the address in the next unit is the address space occupied by all synapse data packets of that neuron. When the synapse data packets of the last neuron are stored, both the start address and the end address must be provided.
The synapse data packets store the synapse information of each neuron; each synapse data packet is composed of the synapse information of connections that target the same node. The m0 synapse data packets of neuron 0 in FIG. 1 indicate that neuron 0 is connected to m0 nodes.
A synapse data packet is divided into a packet header and a payload. The header information comprises the target node address and the packet length, the node address consisting of a 3-bit X coordinate and a 3-bit Y coordinate. The packet length can be represented in a fixed-length mode or a variable-length mode, and a length lookup table is configured for each node. In the fixed-length mode, 2 bits of information select a packet length value from three common packet lengths in the length lookup table; as shown in FIG. 1, the values 00, 01 and 10 correspond to the three packet lengths N0, N1 and N2, which can be configured according to application requirements, while the value 11 indicates that an additional 8 bits of information follow to represent the actual length.
The payload contains the target neuron numbers and synaptic connection weights specified by the packet length, with the neuron number and the synapse weight each occupying 8 bits. The weight supports a linear mode and a nonlinear mode. In the linear mode, the weight value in the payload is the actual weight of the synapse connecting the two neurons, i.e., the actual weight precision is 8 bits. In the nonlinear mode, the weight value in the payload is used as an index into the high-precision weight lookup table (marked 3 in FIG. 1): an 8-bit weight indexes a 16-bit weight value in the lookup table, so that high-precision weights can be obtained while storing less data.
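The weight handling can be illustrated with the short sketch below; the 8-bit field width, the 16-bit lookup precision, and the linear/nonlinear switch follow the description above, while the table contents and function names are invented for illustration.

```python
# Sketch of the linear vs. nonlinear weight modes: in linear mode the 8-bit
# field in the payload is the weight itself; in nonlinear mode it indexes a
# per-node high-precision (16-bit) weight lookup table.
HIGH_PRECISION_LUT = [i * 257 for i in range(256)]  # 256 entries, 16-bit values (example table)

def resolve_weight(field_8bit, nonlinear_mode):
    if nonlinear_mode:
        return HIGH_PRECISION_LUT[field_8bit]  # 8-bit index -> 16-bit weight
    return field_8bit                          # 8-bit field is the actual weight

print(resolve_weight(3, nonlinear_mode=False))  # 3
print(resolve_weight(3, nonlinear_mode=True))   # 771 (0x0303 in this example table)
```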
As shown in FIG. 2, in the architecture of a neuromorphic brain-like computing chip, 256 neurons form one neural node, the nodes are interconnected through a network-on-chip, and the whole network supports 8 × 8 neural nodes. For the distance-based synapse delay scheme of the invention, assume that synapses support 4 different delays of 1, 2, 3 and 4 delay units. The transmission distance in FIG. 2 ranges from 0 to 14 and is divided into 4 equal ranges: the delay for transmission distances 0 to 2 is set to 1; the delay for transmission distances 3 to 6 is set to 2; the delay for transmission distances 7 to 10 is set to 3; and the delay for transmission distances 11 to 14 is set to 4. The delay values corresponding to the transmission distance ranges can be set in each target node by the user, so that neurons on target nodes with the same transmission path length can have different synaptic delays.
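A hedged sketch of this distance-to-delay mapping follows; the Manhattan-distance measure and the four equal distance bins mirror the example above, while the table layout and function names are assumptions rather than the chip's actual implementation.

```python
# Sketch of the distance-based synapse delay: the target node maps the
# transmission path length from the source node, via its own configurable
# lookup table, to one of four delay units.
DELAY_LUT = {range(0, 3): 1, range(3, 7): 2, range(7, 11): 3, range(11, 15): 4}

def synapse_delay(src, dst):
    distance = abs(src[0] - dst[0]) + abs(src[1] - dst[1])  # path length in the 8x8 mesh
    for span, delay in DELAY_LUT.items():
        if distance in span:
            return delay
    raise ValueError("distance outside the configured ranges")

print(synapse_delay((0, 0), (1, 1)))  # distance 2  -> delay 1
print(synapse_delay((0, 0), (7, 7)))  # distance 14 -> delay 4
```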
In the reference-origin-based connection representation of the brain-like chip, the target node address stored in the synapse data packet header is the X and Y coordinate of the header in FIG. 1 and can be either an absolute address or a relative address. When a relative address is used, the node must specify a reference origin: if the reference origin specified by node (0,1) is (2,5) and the destination coordinate in the header is (4,1), the actually connected target node is (6,6). When an absolute address is used, the coordinate in the header is relative to (0,0); that is, if the coordinate in the header is (4,1), the connected target node is also (4,1). The reference-origin connection mode enlarges the range of network connections for the same synapse storage: if the coordinate bit width in the synapse data packet header is 3 bits, an absolute address can only support an 8x8 network, whereas adding a reference origin whose coordinates are also 3 bits wide extends the reachable network to 16x16. Viewed the other way, for a network of a given scale, the reference-origin mode reduces the bit width of the target node address, improves memory utilization, and supports more synapses in the same storage space.
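The reference-origin addressing can be summarized in the small sketch below; the 3-bit field widths and the (0,1)/(2,5)/(4,1) example follow the text, while the helper function itself is only an illustrative assumption.

```python
# Sketch of reference-origin addressing: a node stores one reference origin,
# and each packet header carries only a 3-bit relative X and Y coordinate.
# Absolute target = origin + relative offset, so a 3-bit field can reach a
# 16x16 mesh instead of only 8x8.
def absolute_target(reference_origin, relative_xy):
    rx, ry = relative_xy
    assert 0 <= rx < 8 and 0 <= ry < 8, "relative coordinates are 3-bit fields"
    ox, oy = reference_origin
    return (ox + rx, oy + ry)

# Example from the embodiment: node (0,1) uses reference origin (2,5); a header
# coordinate of (4,1) therefore addresses the absolute node (6,6).
print(absolute_target((2, 5), (4, 1)))  # (6, 6)
```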
In summary, to cope with the huge differences in the number of connections of different neurons, the invention makes full use of the storage space through the shared-storage synapse information scheme, the distance-based synapse delay scheme, and the reference-origin-based synapse connection representation, by dynamically partitioning the synapse memory, reducing the bit width of the data packets, and similar measures.

Claims (6)

1. A brain-like computing chip based on a spiking neural network, composed of a plurality of nodes that integrate storage and computation; each node comprises a plurality of neurons; the neurons are connected through synapses, and the neurons of the same node share a synapse memory, which is divided into a linked list area containing a linked list for each neuron and a synapse data packet area containing the synapse data packets of each neuron; the neuron linked list is used to store the storage addresses of each neuron's synapse data packets; the neuron synapse data packets are used to store the synapse information of the neurons;
the synapse data packet comprises header information and a payload; the header information comprises a target node address and a packet length; the payload comprises the target neuron numbers and synaptic connection weights specified by the packet length;
wherein the synaptic connection weight supports a linear mode and a nonlinear mode; in the linear mode, the synaptic connection weight value is the actual weight of the synapse connecting the two neurons; in the nonlinear mode, the synaptic connection weight value is used as an index into a lookup table to retrieve the target weight value.
2. The brain-like computing chip based on a spiking neural network according to claim 1, wherein the packet length can be represented in a fixed-length mode or a variable-length mode; each node is configured with a length lookup table, and in the fixed-length mode 2 bits of information select a packet length value from the preconfigured 8-bit-wide packet lengths; when the packet length is not one of the preconfigured 8-bit-wide packet lengths, an additional 8 bits of information are added to represent the actual length.
3. The brain-like computing chip based on a spiking neural network according to claim 1, wherein the linked list is used to store the start address and the end address of each neuron's synapse data packets.
4. The brain-like computing chip based on a spiking neural network according to any one of claims 1-3, wherein the synapse memory flexibly configures the number of neurons in the same node and the number of synaptic connections of each neuron by dividing the boundary between the linked list area and the synapse data packet area.
5. The brain-like computing chip according to claim 1, wherein each node establishes a delay information lookup table; after receiving a synapse data packet, the target node calculates the transmission path length between the source node and the target node, obtains the delay information from the delay lookup table, and establishes different synaptic delays between neurons in the target node according to the obtained delay information.
6. The brain-like computing chip based on a spiking neural network according to claim 1, wherein when a node is assigned a reference origin, only the address relative to the reference origin needs to be stored when the target node address is stored in the header information; the absolute address of the target node is the reference origin plus the relative address.
CN201911148787.5A 2019-11-21 2019-11-21 Brain-like computing chip based on impulse neural network Active CN110909869B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911148787.5A CN110909869B (en) 2019-11-21 2019-11-21 Brain-like computing chip based on impulse neural network
PCT/CN2020/128470 WO2021098588A1 (en) 2019-11-21 2020-11-12 Brain-inspired computing chip based on spiking neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911148787.5A CN110909869B (en) 2019-11-21 2019-11-21 Brain-like computing chip based on impulse neural network

Publications (2)

Publication Number Publication Date
CN110909869A CN110909869A (en) 2020-03-24
CN110909869B true CN110909869B (en) 2022-08-23

Family

ID=69818306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911148787.5A Active CN110909869B (en) 2019-11-21 2019-11-21 Brain-like computing chip based on impulse neural network

Country Status (2)

Country Link
CN (1) CN110909869B (en)
WO (1) WO2021098588A1 (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110909869B (en) * 2019-11-21 2022-08-23 浙江大学 Brain-like computing chip based on impulse neural network
CN112270406B (en) * 2020-11-11 2023-05-23 浙江大学 Nerve information visualization method of brain-like computer operating system
CN112270407B (en) * 2020-11-11 2022-09-13 浙江大学 Brain-like computer supporting hundred-million neurons
CN112269606B (en) * 2020-11-12 2021-12-07 浙江大学 Application processing program dynamic loading method of brain-like computer operating system
CN112269751B (en) * 2020-11-12 2022-08-23 浙江大学 Chip expansion method for hundred million-level neuron brain computer
CN112434800B (en) * 2020-11-20 2024-02-20 清华大学 Control device and brain-like computing system
CN112468401B (en) * 2020-11-26 2022-05-20 中国人民解放军国防科技大学 Network-on-chip routing communication method for brain-like processor and network-on-chip
CN112651504B (en) * 2020-12-16 2023-08-25 中山大学 Acceleration method for brain-like simulation compiling based on parallelization
CN112784972B (en) * 2021-01-15 2022-10-11 之江实验室 Synapse implementation architecture for on-chip neural network
CN112561042B (en) * 2021-03-01 2021-06-29 浙江大学 Neural model mapping method of brain-like computer operating system
CN113313240B (en) * 2021-08-02 2021-10-15 成都时识科技有限公司 Computing device and electronic device
CN114202068B (en) * 2022-02-17 2022-06-28 浙江大学 Self-learning implementation system for brain-like computing chip
CN114399033B (en) * 2022-03-25 2022-07-19 浙江大学 Brain-like computing system and method based on neuron instruction coding
CN114611686B (en) * 2022-05-12 2022-08-30 之江实验室 Synapse delay implementation system and method based on programmable neural mimicry core
CN117634550A (en) * 2024-01-25 2024-03-01 之江实验室 Time synchronization method and device for multi-class brain chip cascade system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095966A (en) * 2015-07-16 2015-11-25 清华大学 Hybrid computing system of artificial neural network and impulsive neural network
CN106934457A (en) * 2017-03-08 2017-07-07 杭州领芯电子有限公司 One kind flexibly can realize framework by time-multiplexed spiking neuron
CN107589700A (en) * 2017-10-16 2018-01-16 浙江大学 A kind of EEG signals simulation generator
CN109121435A (en) * 2017-04-19 2019-01-01 上海寒武纪信息科技有限公司 Processing unit and processing method
CN109816102A (en) * 2017-11-22 2019-05-28 英特尔公司 Reconfigurable nerve synapse core for spike neural network
CN110121722A (en) * 2016-12-28 2019-08-13 英特尔公司 For storing and generating the Neuromorphic circuit of connectivity information

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11301753B2 (en) * 2017-11-06 2022-04-12 Samsung Electronics Co., Ltd. Neuron circuit, system, and method with synapse weight learning
US11593623B2 (en) * 2017-12-22 2023-02-28 Intel Corporation Spiking neural network accelerator using external memory
US11763139B2 (en) * 2018-01-19 2023-09-19 International Business Machines Corporation Neuromorphic chip for updating precise synaptic weight values
CN108985447B (en) * 2018-06-15 2020-10-16 华中科技大学 Hardware pulse neural network system
CN110059812B (en) * 2019-01-26 2021-09-14 中国科学院计算技术研究所 Pulse neural network operation chip and related operation method
CN109901878B (en) * 2019-02-25 2021-07-23 北京灵汐科技有限公司 Brain-like computing chip and computing equipment
CN110909869B (en) * 2019-11-21 2022-08-23 浙江大学 Brain-like computing chip based on impulse neural network

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105095966A (en) * 2015-07-16 2015-11-25 清华大学 Hybrid computing system of artificial neural network and impulsive neural network
CN110121722A (en) * 2016-12-28 2019-08-13 英特尔公司 For storing and generating the Neuromorphic circuit of connectivity information
CN106934457A (en) * 2017-03-08 2017-07-07 杭州领芯电子有限公司 One kind flexibly can realize framework by time-multiplexed spiking neuron
CN109121435A (en) * 2017-04-19 2019-01-01 上海寒武纪信息科技有限公司 Processing unit and processing method
CN107589700A (en) * 2017-10-16 2018-01-16 浙江大学 A kind of EEG signals simulation generator
CN109816102A (en) * 2017-11-22 2019-05-28 英特尔公司 Reconfigurable nerve synapse core for spike neural network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A Reconfigurable and Biologically Inspired Paradigm for Computation Using Network-On-Chip and Spiking Neural Networks; Jim Harkin et al.; International Journal of Reconfigurable Computing; 20091231; 1-13 *
Research on image recognition algorithms of spiking neural networks based on STDP; 柯成仁; China Masters' Theses Full-text Database, Information Science and Technology; 20190215 (No. 02, 2019); I138-2093 *
Large-scale brain-like computing system BiCoSS: architecture, implementation and applications; 杨双鸣 et al.; Acta Automatica Sinica (自动化学报); 2019; vol. 47, no. 9; 2154-2169 *
Research on the development environment for the brain-inspired computing chip "Darwin"; 喻富豪; https://d.wanfangdata.com.cn/thesis/ChJUaGVzaXNOZXdTMjAyMjA1MjYSCFkzNTYzNDE2Ggh1OGo3aGhqZQ%3D%3D; 20190822; abstract on pages 1-7, sections 2.3.1 and 3.2.2.1, FIG. 2.5 *
Neuromorphic chips for brain-computer interfaces; 马德 et al.; Artificial Intelligence (人工智能); 20180410; 131-136 *

Also Published As

Publication number Publication date
WO2021098588A1 (en) 2021-05-27
CN110909869A (en) 2020-03-24

Similar Documents

Publication Publication Date Title
CN110909869B (en) Brain-like computing chip based on impulse neural network
CN110222308B (en) Matrix multiplication matrix operation method and device
CN107578095B (en) Neural computing device and processor comprising the computing device
CN107918794A (en) Neural network processor based on computing array
CN107301456B (en) Deep neural network multi-core acceleration implementation method based on vector processor
CN107729990A (en) Support the device and method for being used to perform artificial neural network forward operation that discrete data represents
CN109492187A (en) For executing the method and system of simulation complex vector matrix multiplication
CN108510064A (en) The processing system and method for artificial neural network including multiple cores processing module
CN110728364A (en) Arithmetic device and arithmetic method
CN109740739A (en) Neural computing device, neural computing method and Related product
CN107122490A (en) The data processing method and system of aggregate function in a kind of Querying by group
KR102610842B1 (en) Processing element and operating method thereof in neural network
CN110163362A (en) A kind of computing device and method
CN108304926B (en) Pooling computing device and method suitable for neural network
Jang et al. Line limit preserving power system equivalent
JP2666830B2 (en) Triangle Scalable Neural Array Processor
CN112101517A (en) FPGA implementation method based on piecewise linear pulse neuron network
CN109993301A (en) Neural metwork training device and Related product
CN108334944A (en) A kind of device and method of artificial neural network operation
CN112149047A (en) Data processing method and device, storage medium and electronic device
CN109615061B (en) Convolution operation method and device
CN114970831A (en) Digital-analog hybrid storage and calculation integrated equipment
CN111260070B (en) Operation method, device and related product
CN108805271A (en) A kind of arithmetic unit and method
CN111258641B (en) Operation method, device and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant