CN112308221A - Working memory hardware implementation method based on reserve pool calculation

Working memory hardware implementation method based on reserve pool calculation

Info

Publication number
CN112308221A
CN112308221A
Authority
CN
China
Prior art keywords
reserve pool
network
working memory
calculation
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011095450.5A
Other languages
Chinese (zh)
Other versions
CN112308221B (en)
Inventor
俞德军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Deep Creatic Technologies Co ltd
Original Assignee
Deep Creatic Technologies Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Deep Creatic Technologies Co ltd filed Critical Deep Creatic Technologies Co ltd
Priority to CN202011095450.5A priority Critical patent/CN112308221B/en
Publication of CN112308221A publication Critical patent/CN112308221A/en
Application granted granted Critical
Publication of CN112308221B publication Critical patent/CN112308221B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The invention discloses a working memory hardware implementation method based on reserve pool (reservoir) computing, belonging to the technical fields of artificial intelligence and life science. The method is implemented by a system comprising a reserve pool computing module and a readout computing module: the reserve pool computing module comprises a reserve pool input part, a reserve pool computation part and a reserve pool output part; the readout computing module comprises a neural excitation network, a neural inhibition network, an activation function part and a fully connected layer part. The invention combines the reserve pool computing method with the concept of working memory. Memory cells with specific properties are used to build a reserve pool computing network that has a memory function for input information, so that past input information can be associated with current input information and the relationship between neuron input stimuli is better captured. Inhibition and excitation of the neuron stimuli are performed by the readout computing module, realizing the firing characteristics of working memory and achieving a better hardware emulation of working memory.

Description

Working memory hardware implementation method based on reserve pool calculation
Technical Field
The invention relates to the technical fields of artificial intelligence and life science, and in particular to a working memory hardware implementation method based on reserve pool computing.
Background
In 2001, Jaeger modified the traditional recurrent neural network, simulating the nonlinear state of neurons with a nonlinear Sigmoid function, and named the improved network the echo state network. In the same year, Maass proposed the liquid state machine. The liquid state machine adopts the same underlying idea as the echo state network, but the liquid state machine is grounded in neural computation whereas the echo state network is grounded in machine learning. Reserve pool (reservoir) computing is a general neuromorphic computing method that developed out of the echo state network and the liquid state machine. Compared with either of them, the reserve pool computing algorithm is more convenient and easier to use: when a reserve pool computing system is trained, only the weights of the output part need to be modified, while the weights of the input part and the computation part remain fixed. This property makes the algorithm convenient for hardware implementation and saves a large amount of storage resources, making a low-power time-series neuromorphic computing hardware accelerator feasible. At the same time, new semiconductor devices are being developed, some of which naturally record past input information, which allows reserve pool computing to be better applied and further developed.
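The key training property described above (only the output weights are learned, while input and recurrent weights stay fixed and random) can be sketched with a minimal echo state network. All sizes, weight scalings, and the toy delayed-recall task are illustrative assumptions, not parameters from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

# Input and recurrent weights are fixed and random; they are never trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))  # spectral radius < 1 for fading memory

def run_reservoir(u):
    """Collect the reservoir state for each step of an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for t in range(len(u)):
        x = np.tanh(W_in @ u[t] + W_res @ x)  # fixed nonlinear state update
        states.append(x.copy())
    return np.array(states)

# Only the readout is trained, here by ridge regression: the cheap step
# that makes reservoir computing convenient for hardware.
u = rng.uniform(-1.0, 1.0, (500, n_in))
y = np.roll(u, 3, axis=0)  # toy target: recall the input from 3 steps ago
X = run_reservoir(u)
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print(W_out.shape)
```

Because only `W_out` is solved for, retraining the system touches a single small matrix, which is the storage saving the text refers to.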
Working memory is the memory system by which humans maintain task-relevant information over a short time while executing a task. It plays a key role in high-level cognitive processes such as language understanding, learning, reasoning and thinking, and consists of four main internal systems: the central executive, the phonological loop, the visuospatial sketchpad and the episodic buffer. Through these internal systems, people store information from the environment and eventually make use of it. However, in the life sciences today, technologies such as MEMS probes and CMOS nanoelectrodes still only acquire data from within and between biological neurons; the firing characteristics of a neuron cannot yet be simulated directly in hardware from its input stimulation. How to simulate the working memory characteristics of neurons through software algorithms and hardware is therefore a difficult problem that remains to be solved.
Disclosure of Invention
The aim of the invention: in view of the problems and deficiencies of the prior art, and to solve the technical problem that working memory is difficult to realize in hardware, the invention provides a working memory hardware implementation method based on reserve pool computing. The method combines reserve pool computing with working memory: a reserve pool computing circuit is built from novel semiconductor memory devices, which saves a large amount of storage resources while associating a neuron's current input stimulation with its past input stimulation to realize a memory function. In the subsequent readout computing module, a neural excitation network and a neural inhibition network respectively excite and inhibit the neurons in the hardware network, realizing the firing characteristics of working memory and achieving a better hardware emulation of working memory.
The technical scheme adopted by the invention is as follows: a working memory hardware implementation method based on reserve pool computing, implemented by a system comprising two parts. The reserve pool computing module mainly realizes the association between input stimuli at different moments and comprises a reserve pool input part, a reserve pool computation part and a reserve pool output part. The readout computing module mainly realizes the neural excitation and neural inhibition functions of working memory and comprises a neural excitation network, a neural inhibition network, an activation function part and a fully connected layer part.
Further, the reserve pool input part is responsible for receiving externally supplied neuron stimulation signals and distributing them to each memory cell in the reserve pool computation part according to a weight distribution principle.
Further, the reserve pool computation part is composed of memory cells with memory characteristics; the cells include, but are not limited to, memristor device cells. The memory cells are interconnected: after receiving information from the reserve pool input part, they associate the current signal with past input signals and map the signals into a high-dimensional space, after which the signals are output by the subsequent reserve pool output part.
Further, the reserve pool output part is responsible for outputting the computation result of the reserve pool, transmitting it to the readout computing module according to a weight distribution principle.
Furthermore, the memory cells in the reserve pool computing module are built from novel semiconductor memory devices and have a memory function for time-domain input signals. When no input stimulus is applied for a long time, a cell's storage characteristic stays in a low state; once an input stimulus is applied, the characteristic changes rapidly within a short time and enters a high state; after the characteristic reaches its peak, it decays continuously and slowly over time until the default state is restored.
Furthermore, the neural excitation network is responsible for the excitatory expression of working memory. It is composed of a neural network whose neurons in each layer are densely connected with short logical distances; through complex computation it can identify the working memory regions repeatedly mentioned in successive input stimuli and express them excitatorily, thereby raising the firing-rate characteristics of the neurons.
Furthermore, the neural inhibition network is responsible for the inhibitory expression of working memory. It is composed of a neural network whose neurons in each layer are sparsely connected with long logical distances; through complex computation it can identify where the current input stimulation differs from past input stimulation and extract the parts that were excited in past stimulation. If those excited regions are not expressed in the current input, the corresponding working memory regions are expressed inhibitorily, thereby reducing the firing characteristics of the neurons.
Furthermore, the activation function part performs a nonlinear mapping on the computational results of the neural excitation network and the neural inhibition network, facilitating output by the subsequent fully connected layer part.
Further, the fully connected layer part is responsible for outputting the firing characteristics of the working memory.
A working memory hardware implementation method based on reserve pool computing proceeds as follows:
S1: global reset; all modules in the system are initialized;
S2: the weight information and network structure information of the trained neural network model are stored into the readout computing module;
S3: a neuron stimulation signal is input to the working memory hardware;
S4: the reserve pool computing module collects the stimulation signal and distributes it to the reserve pool computation part; memory cells with a memory function compute on the stimulation signal, associating past stimulation information with the current stimulation information, and the computation result is transmitted to the readout computing module through the reserve pool output part;
S5: the readout computing module receives the signals from the reserve pool computing module and passes them to the neural excitation network and the neural inhibition network respectively; the two networks analyze the signal data, perform the excitatory and inhibitory expression of working memory, and pass the expression results to the subsequent activation function part;
S6: after receiving these signals, the activation function part applies a nonlinear mapping to the data, facilitating output by the subsequent fully connected layer;
S7: the fully connected layer part expresses and outputs the neuron working memory firing characteristics from the activation function's results; once expression of the input stimulus is complete, the process returns to step S3 for a new neuron stimulation input, until the whole working memory hardware is shut down.
Compared with the prior art, the invention has the following beneficial effects:
(1) The invention provides a working memory hardware implementation method based on reserve pool computing. It combines reserve pool computing with working memory: a reserve pool computing circuit is built from novel semiconductor memory devices, saving a large amount of storage resources while associating a neuron's current input stimulation with its past input stimulation to realize a memory function. In the subsequent readout computing module, a neural excitation network and a neural inhibition network respectively excite and inhibit the neurons in the hardware network, realizing the firing characteristics of working memory and achieving a better hardware emulation of working memory.
Drawings
FIG. 1 is a hardware implementation diagram of the working memory hardware implementation method based on reserve pool computing according to the present invention;
FIG. 2 is a schematic diagram of the characteristics of a memory cell in the working memory hardware implementation method based on reserve pool computing according to the present invention;
FIG. 3 is a schematic diagram of the characteristics of a memory cell under different input pulse conditions in the working memory hardware implementation method based on reserve pool computing;
FIG. 4 is a schematic structural diagram of the reserve pool computing module of the working memory hardware implementation method based on reserve pool computing according to the present invention;
FIG. 5 is a schematic structural diagram of the readout computing module of the working memory hardware implementation method based on reserve pool computing according to the present invention;
FIG. 6 is a schematic diagram of the workflow of the working memory hardware implementation method based on reserve pool computing according to the present invention.
Detailed Description
The present invention will be described in further detail in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the detailed description and specific examples, while indicating the preferred embodiment of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, which is a schematic diagram of the working memory hardware implementation based on reserve pool computing according to the present invention, the system mainly comprises two parts. The reserve pool computing module comprises the reserve pool input part, the reserve pool computation part and the reserve pool output part; it mainly realizes the association between input stimuli at different moments, receiving externally supplied neuron input stimuli, associating past input stimulus information with current input stimulus information through the internal reserve pool computing network, realizing a short-term memory function for successive stimuli, and transmitting the processed information to the readout computing module. The readout computing module comprises the neural excitation network, the neural inhibition network, the activation function part and the fully connected layer part; it mainly realizes the neural excitation and neural inhibition functions of working memory, performing excitatory and inhibitory processing of the neurons according to the processed stimulus information to obtain the neurons' firing characteristics, and finally realizing the simulation of the firing characteristics of neuron working memory.
The reserve pool input part of the reserve pool computing module is responsible for receiving externally supplied neuron stimulation signals and distributing them to each memory cell in the reserve pool computation part according to a weight distribution principle. The reserve pool computation part is composed of interconnected memory cells with memory characteristics, including but not limited to memristor device cells; after receiving information from the reserve pool input part, the cells associate the current signal with past input signals and map the signals into a high-dimensional space, after which the signals are output by the subsequent reserve pool output part. The reserve pool output part is responsible for outputting the computation result of the reserve pool, transmitting it to the readout computing module according to a weight distribution principle.
The characteristics of a memory cell in the reserve pool computing module of the invention are shown schematically in fig. 2. The memory cell is a novel semiconductor device with a memory function for time-domain input signals. When no input stimulus is applied for a long time, the cell's storage characteristic stays in a low state; once an input stimulus is applied, the characteristic changes rapidly within a short time and enters a high state; after the characteristic reaches its peak, it decays continuously and slowly over time until the default state is restored.
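The rise-and-decay behavior just described can be sketched as a simple state update. The rise rate, decay rate and pulse pattern below are illustrative assumptions; they stand in for the unspecified device physics.

```python
# Toy model of the memory cell: a state that jumps quickly toward a high
# value on each stimulus pulse and otherwise decays slowly back toward the
# low default state. The rates are illustrative, not device parameters.
def cell_response(pulses, rise=0.8, decay=0.05, high=1.0):
    state, trace = 0.0, []
    for p in pulses:
        if p:
            state += rise * (high - state)  # fast rise toward the high state
        else:
            state -= decay * state          # slow decay toward the default state
        trace.append(state)
    return trace

# Three consecutive pulses drive the state near the high state,
# then it slowly decays during the silent period.
trace = cell_response([1, 1, 1] + [0] * 20)
print(round(trace[2], 3), round(trace[-1], 3))
```

Because the decayed state still reflects how recently pulses arrived, a network of such cells can associate past stimuli with the current one, which is the memory function the text attributes to the device.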
The specific structure of the reserve pool computing module is shown in fig. 4. The module is developed and improved from the echo state network and the liquid state machine and comprises a reserve pool input layer, a reserve pool computation layer and a reserve pool output layer. The number of neurons in each layer must be chosen according to the actual application; in particular, the number of memory cells in the computation layer should not be too large, otherwise overfitting can occur. The input layer is connected to the computation layer through a weight matrix, and the computation layer is connected to the output layer through another weight matrix. After receiving an input stimulus, the reserve pool computing module multiplies the input signal by the input weights and distributes it to the different memory cells in the computation layer. The memory cells are randomly interconnected; after receiving the input signal, they reflect the computation result of the stimulation signal through their storage characteristics. The computation result is then multiplied by the output matrix, and the final signal processing result is output to the subsequent readout computing module through the output layer.
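The three-layer structure above can be sketched as follows; the sparsity level, layer sizes and uniform weight ranges are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_res, n_out = 2, 80, 3

W_in = rng.uniform(-1.0, 1.0, (n_res, n_in))    # input layer -> computation layer
mask = rng.random((n_res, n_res)) < 0.1         # random, sparse interconnections
W_res = np.where(mask, rng.uniform(-1.0, 1.0, (n_res, n_res)), 0.0)
W_out = rng.uniform(-1.0, 1.0, (n_out, n_res))  # computation layer -> output layer

# Two update steps: the input is multiplied by the input weights, the cells'
# states reflect the stimulus, and the result is multiplied by the output matrix.
u = rng.uniform(-1.0, 1.0, (n_in,))
x = np.tanh(W_in @ u)
x = np.tanh(W_in @ u + W_res @ x)
y = W_out @ x
print(y.shape)
```

Keeping `mask` sparse keeps the number of active interconnections small, in line with the warning that an oversized computation layer invites overfitting.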
The detailed structure of the readout computing module is shown in fig. 5. The module is based on neural networks and comprises a neural inhibition network, a neural excitation network, an activation function layer and a fully connected layer. The neural inhibition network is responsible for the inhibitory expression of working memory; it is a neural network whose neurons are sparsely connected with long logical distances. Through complex computation it can identify where the current input stimulation differs from past input stimulation, extract the parts that were excited in past stimulation, and, if those excited regions are not expressed in the current input, express the corresponding working memory regions inhibitorily, reducing the neurons' firing characteristics and thereby deliberately forgetting working memory regions that have not been mentioned for a long time. The neural excitation network is responsible for the excitatory expression of working memory; it is a neural network whose neurons are densely connected with short logical distances. Through complex computation it can identify the working memory regions repeatedly mentioned in successive input stimuli and express them excitatorily, raising the neurons' firing-rate characteristics.
After receiving data from the reserve pool computing module, the readout computing module processes it through the neural inhibition network and the neural excitation network respectively, analyzing the regions of the data to be inhibited and excited: regions of working memory that need inhibition are expressed inhibitorily, and regions that need excitation are expressed excitatorily. The expression results are passed to the activation function layer, which applies a nonlinear transformation; the supported nonlinear activation functions include, but are not limited to, Tanh, Sigmoid, ReLU and Leaky-ReLU. The processed data is multiplied by the output matrix, and the fully connected layer finally outputs the firing characteristics of the working memory.
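Minimal reference definitions of the four activation functions named above make the activation layer's role concrete; the patent does not fix which one is used, and the Leaky-ReLU slope here is a common default, not a value from the source.

```python
import numpy as np

# The four nonlinearities named above; each maps the excitation/inhibition
# analysis results elementwise before the fully connected output layer.
def tanh(x):
    return np.tanh(x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(x, 0.0)

def leaky_relu(x, alpha=0.01):  # alpha=0.01 is an assumed default slope
    return np.where(x >= 0, x, alpha * x)

x = np.array([-2.0, 0.0, 2.0])
print(relu(x), leaky_relu(x))
```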
Fig. 3 shows the characteristics of a memory cell in the reserve pool computing module under different input pulses. First, 3 consecutive stimuli are input to the memory cell, and its characteristic is rapidly excited from the default state into the high state. Then, with no stimulus input, the characteristic gradually decays. Next, 3 discrete stimuli are input, and the characteristic first rises and then decays. Finally, continuous stimulation is input and the characteristic rises rapidly. Using this functional behavior of the memory cell, past input stimuli can be associated with the current input stimulus, realizing a memory function over the input stimulation.
A working memory hardware implementation method based on reserve pool computing proceeds as follows:
S1: global reset; all modules in the system are initialized;
S2: the weight information and network structure information of the trained neural network model are stored into the readout computing module;
S3: a neuron stimulation signal is input to the working memory hardware;
S4: the reserve pool computing module collects the stimulation signal and distributes it to the reserve pool computation part; memory cells with a memory function compute on the stimulation signal, associating past stimulation information with the current stimulation information, and the computation result is transmitted to the readout computing module through the reserve pool output part;
S5: the readout computing module receives the signals from the reserve pool computing module and passes them to the neural excitation network and the neural inhibition network respectively; the two networks analyze the signal data, perform the excitatory and inhibitory expression of working memory, and pass the expression results to the subsequent activation function part;
S6: after receiving these signals, the activation function part applies a nonlinear mapping to the data, facilitating output by the subsequent fully connected layer;
S7: the fully connected layer part expresses and outputs the neuron working memory firing characteristics from the activation function's results; once expression of the input stimulus is complete, the process returns to step S3 for a new neuron stimulation input, until the whole working memory hardware is shut down.
The above embodiments express only specific implementations of the present application; their description is relatively specific and detailed, but it should not be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several changes and modifications without departing from the technical idea of the present application, and all of these fall within the protection scope of the application.

Claims (7)

1. A working memory hardware implementation method based on reserve pool computing, characterized in that the method is implemented by a system comprising two parts: a reserve pool computing module and a readout computing module, wherein the reserve pool computing module comprises a reserve pool input part, a reserve pool computation part and a reserve pool output part; the readout computing module comprises a neural excitation network, a neural inhibition network, an activation function part and a fully connected layer part; the reserve pool computation part is composed of memory cells with memory characteristics, the memory cells are interconnected, and the memory cells are composed of novel semiconductor memory devices.
2. The working memory hardware implementation method based on reserve pool computing according to claim 1, wherein the memory cells comprise memristor device cells.
3. The working memory hardware implementation method based on reserve pool computing according to claim 1, characterized in that when no input stimulus is applied for a long time, a memory cell's storage characteristic stays in a low state; once an input stimulus is applied, the characteristic changes rapidly within a short time and the cell enters a high state; after the characteristic reaches its peak, it decays gradually over time until the cell returns to the default state.
4. The working memory hardware implementation method based on reserve pool computing according to claim 1, characterized in that the neural excitation network is composed of a neural network whose neurons are densely connected with short logical distances, so that the working memory regions repeatedly mentioned in successive input stimuli can be identified through complex computation and expressed excitatorily, thereby raising the firing-rate characteristics of the neurons.
5. The working memory hardware implementation method based on reserve pool computing according to claim 1, characterized in that the neural inhibition network is composed of a neural network whose neurons in each layer are sparsely connected with long logical distances, so that the parts where the current input stimulation differs from past input stimulation can be identified through complex computation and the parts excited in past stimulation extracted; if those excited regions are not expressed in the current input, the corresponding working memory regions are expressed inhibitorily, reducing the firing characteristics of the neurons.
6. The working memory hardware implementation method based on reserve pool computing according to claim 1, characterized in that the activation function part performs a nonlinear mapping on the computational results of the neural excitation network and the neural inhibition network, the nonlinear activation functions comprising Tanh, Sigmoid, ReLU and Leaky-ReLU.
7. The working memory hardware implementation method based on reserve pool computing according to claim 1, characterized in that the method comprises the following steps:
S1: global reset; all modules in the system are initialized;
S2: the weight information and network structure information of the trained neural network model are stored into the readout computing module;
S3: a neuron stimulation signal is input to the reserve pool computing module;
S4: the reserve pool computing module collects the stimulation signal and distributes it to the reserve pool computation part; memory cells with a memory function compute on the stimulation signal, associating past stimulation information with the current stimulation information, and the computation result is transmitted to the readout computing module through the reserve pool output part;
S5: the readout computing module receives the signals from the reserve pool computing module and passes them to the neural excitation network and the neural inhibition network respectively; the two networks analyze the signal data, perform the excitatory and inhibitory expression of working memory, and pass the expression results to the subsequent activation function part;
S6: after receiving these signals, the activation function part applies a nonlinear mapping to the data, facilitating output by the subsequent fully connected layer;
S7: the fully connected layer part expresses and outputs the neuron working memory firing characteristics from the activation function's results; once expression of the input stimulus is complete, the process returns to step S3 for a new neuron stimulation input, until the whole working memory hardware is shut down.
CN202011095450.5A 2020-10-14 2020-10-14 Working memory hardware implementation method based on reserve pool calculation Active CN112308221B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011095450.5A CN112308221B (en) 2020-10-14 2020-10-14 Working memory hardware implementation method based on reserve pool calculation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011095450.5A CN112308221B (en) 2020-10-14 2020-10-14 Working memory hardware implementation method based on reserve pool calculation

Publications (2)

Publication Number Publication Date
CN112308221A true CN112308221A (en) 2021-02-02
CN112308221B CN112308221B (en) 2024-02-27

Family

ID=74489682

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011095450.5A Active CN112308221B (en) 2020-10-14 2020-10-14 Working memory hardware implementation method based on reserve pool calculation

Country Status (1)

Country Link
CN (1) CN112308221B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110874629A (en) * 2019-09-25 2020-03-10 天津医科大学 Structure optimization method of reserve pool network based on excitability and inhibition STDP
CN111553415A (en) * 2020-04-28 2020-08-18 哈尔滨理工大学 Memristor-based ESN neural network image classification processing method

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
XINMIN LI et al.: "Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity", Physica A: Statistical Mechanics and its Applications *
Li Jun; Li Qing: "Medium-term power load forecasting based on CEEMDAN-permutation entropy and leaky-integrator ESN", Electric Machines and Control, no. 08
Li Da: "Motor imagery classification based on echo state networks", China Masters' Theses Full-text Database, Basic Sciences *
Wang Xiuqing; Hou Zengguang; Pan Shiying; Tan Min; Wang Yongji; Zeng Hui: "Corridor scene recognition for mobile robots based on multi-ultrasonic-sensor information and NeuCube", Journal of Computer Applications, no. 10
Luo Xiong; Li Jiang; Sun Zengqi: "Research progress on echo state networks", Journal of University of Science and Technology Beijing
Lin Xianghong; Wang Xiangwen; Zhang Ning; Ma Huifang: "A survey of supervised learning algorithms for spiking neural networks", Acta Electronica Sinica, no. 03
Chen Guoqin; Zhan Renhui: "Target signal enhancement in acoustic environments based on echo state networks", Journal of Fujian Normal University (Natural Science Edition), no. 02

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819143A (en) * 2021-02-04 2021-05-18 成都市深思创芯科技有限公司 Work memory computing system and method based on graph neural network
CN112819142A (en) * 2021-02-04 2021-05-18 成都市深思创芯科技有限公司 Short-time synaptic plasticity working memory computing system and method
CN112819142B (en) * 2021-02-04 2024-01-19 成都市深思创芯科技有限公司 Short-time synaptic plasticity work memory computing system and method
CN112819143B (en) * 2021-02-04 2024-02-27 成都市深思创芯科技有限公司 Working memory computing system and method based on graph neural network
WO2023130725A1 (en) * 2022-01-04 2023-07-13 中国科学院微电子研究所 Hardware implementation method and apparatus for reservoir computing model based on random resistor array, and electronic device

Also Published As

Publication number Publication date
CN112308221B (en) 2024-02-27

Similar Documents

Publication Publication Date Title
CN112308221B (en) Working memory hardware implementation method based on reserve pool calculation
Wang et al. Water quality prediction method based on LSTM neural network
CN108805270B (en) Convolutional neural network system based on memory
CN106295799B Implementation method of a deep-learning multilayer neural network
CN112163465A (en) Fine-grained image classification method, fine-grained image classification system, computer equipment and storage medium
CN112365885B (en) Training method and device of wake-up model and computer equipment
Mehrtash et al. Synaptic plasticity in spiking neural networks (SP²INN): a system approach
WO2015053864A1 (en) Compiling network descriptions to multiple platforms
CN112101535B (en) Signal processing method of impulse neuron and related device
CN104680236B FPGA implementation method of a kernel-function extreme learning machine classifier
Herrmann et al. A neural model of the dynamic activation of memory
CN113570039B (en) Block chain system based on reinforcement learning optimization consensus
CN105701540A (en) Self-generated neural network construction method
Feng et al. One-dimensional VGGNet for high-dimensional data
CN114202068B (en) Self-learning implementation system for brain-like computing chip
CN111382840B (en) HTM design method based on cyclic learning unit and oriented to natural language processing
CN112101418A (en) Method, system, medium and equipment for identifying breast tumor type
Soltani et al. Optimized echo state network based on PSO and gradient descent for chaotic time series prediction
CN108470212A Efficient LSTM design method that can utilize event duration
CN115456149B (en) Impulse neural network accelerator learning method, device, terminal and storage medium
CN112819143B (en) Working memory computing system and method based on graph neural network
CN113628615B (en) Voice recognition method and device, electronic equipment and storage medium
CN115358375A (en) Pulse neural network reserve pool calculation model construction method and device
CN112862173B Lake and reservoir cyanobacterial bloom prediction method based on a self-organizing deep belief echo state network
CN112183848B (en) Power load probability prediction method based on DWT-SVQR integration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant