CN114118383A - Multi-synaptic plasticity pulse neural network-based fast memory coding method and device
- Publication number: CN114118383A
- Application number: CN202111497073.2A
- Authority: CN (China)
- Prior art keywords: layer, neurons, input, output layer, neural network
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06N3/063: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons, using electronic means
- G06N3/049: Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
Abstract
Description
Technical Field

The invention belongs to the field of brain-inspired intelligence and artificial intelligence, and relates to a fast memory encoding method and device based on a spiking neural network with multi-synaptic plasticity.

Background Art

Spiking neural networks (SNNs) emulate the information-processing mechanism of biological nervous systems and drive computation with discrete spike events. Compared with traditional artificial neural networks, they offer lower power consumption and stronger information-representation capability, and they are a powerful tool for analyzing and simulating the cognitive functions of the brain. Memory, the cognitive ability to register, retain and reproduce what has been experienced, is one of the core components of brain intelligence.

Synaptic plasticity has long been regarded as the basis of learning and memory, and neuroscience research shows that biological brains rely on the cooperation of multiple forms of synaptic plasticity to guarantee the reliable execution of cognitive tasks. However, owing to the limited understanding of the interactions between different forms of synaptic plasticity rules and to the structural and dynamical complexity of neural networks, most existing spiking memory models use only unsupervised synaptic plasticity rules, such as spike-timing-dependent plasticity (STDP), to train a recurrent network. By adjusting the synaptic weights between activated neurons in the recurrent network, a recurrent sub-network with strengthened connections is formed to store memories, and the memory of an external input pattern is encoded by the collective co-firing activity of a neural population. However, memory models based on unsupervised plasticity rules learn slowly: a perceptual stimulus must be presented repeatedly before a neural population expressing the memory is generated, so memory efficiency is low. In addition, such models cannot guarantee the stable generation of neural populations; during learning, the populations responding to different input patterns may interfere with one another and corrupt the stored memories.

Summary of the Invention

To solve the problems of low encoding efficiency and instability in existing spike-based memory models built on unsupervised plasticity, the present invention proposes a fast memory encoding method and device based on a spiking neural network with multi-synaptic plasticity. The specific technical solution is as follows:

A fast memory encoding method for a spiking neural network based on multi-synaptic plasticity, comprising the following steps:

Step 1: convert external stimuli into input spike trains based on a hierarchical encoding strategy;

Step 2: after the spiking neural network receives the input spikes, update the membrane potentials of the output-layer neurons based on an improved SRM (spike response model);

Step 3: use a supervised group Tempotron rule to update the synaptic weights from the input layer to the output layer, activating output-layer neurons to memorize the input;

Step 4: after the output-layer neurons are activated, use unsupervised STDP to update the synaptic weights between activated neurons within the layer, forming a strengthened recurrent sub-network that stores the memory;

Step 5: while performing Step 4, use unsupervised inhibitory synaptic plasticity to update the synaptic weights from the inhibitory layer to the output layer; the inhibitory feedback guarantees temporal separation between the firing of the neural populations that memorize different inputs.
Preferably, Step 1 is specifically as follows:

The external stimuli are seven images arbitrarily selected, one from each of the seven classes '0', '1', '2', '3', '4', '5' and '6' of the MNIST handwritten digit set. Each image is 28×28 pixels and is evenly partitioned into 49 receptive fields of 4×4 pixels.

The hierarchical encoding strategy uses a two-layer structure, consisting of an S layer of simple cells and a C layer of complex cells, to extract features from the external stimulus, and then encodes the extracted features by a delay encoding method, where:

S layer: extracts the edge-orientation features of the image. The filter bank of the S layer consists of Gabor filters at four orientations; each filter emulates a receptive field of the visual cortex and filters the image block of each receptive field, yielding 49×4 feature maps of size 4×4 pixels after filtering;

C layer: first, the entries of each feature map output by the S layer are summed, giving a feature map of size 49×4; linear normalization maps the values into the interval [0, 1]; then a row-wise max competition is performed, in which each row keeps only the strongest feature as the best-matching orientation feature at that position while the feature values of the other orientations are set to 0, yielding a sparse representation of the 49×4 feature map.
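For illustration, the S/C feature-extraction stage can be sketched in Python as follows. This is a minimal sketch, not the patented implementation: the four Gabor orientations (0°, 45°, 90°, 135°) and the filter parameters `sigma`, `lam` and `gamma` are assumptions, since their values are not fixed in this text.

```python
import numpy as np
from scipy.signal import correlate2d

def gabor_kernel(theta, size=4, sigma=1.5, lam=3.0, gamma=0.5):
    """Real part of a Gabor filter at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:size - half, -half:size - half]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def s_layer(image, thetas):
    """Filter each 4x4 receptive field with each Gabor kernel: (49, 4, 4, 4) maps."""
    blocks = image.reshape(7, 4, 7, 4).transpose(0, 2, 1, 3).reshape(49, 4, 4)
    return np.stack([[correlate2d(b, gabor_kernel(t), mode='same') for t in thetas]
                     for b in blocks])

def c_layer(s_maps):
    """Sum each map, normalize to [0, 1], then keep only the row-wise winner."""
    f = s_maps.sum(axis=(2, 3))                        # (49, 4) summed responses
    f = (f - f.min()) / (f.max() - f.min() + 1e-12)    # linear normalization
    sparse = np.zeros_like(f)
    rows = np.arange(f.shape[0])
    sparse[rows, f.argmax(axis=1)] = f[rows, f.argmax(axis=1)]
    return sparse                                      # sparse 49x4 feature map

features = c_layer(s_layer(np.random.rand(28, 28),
                           thetas=np.deg2rad([0.0, 45.0, 90.0, 135.0])))
```

Applied to a 28×28 image, `s_layer` produces the 49×4 grid of 4×4 filtered maps and `c_layer` reduces it to the sparse 49×4 feature matrix described above.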
The obtained features are converted into input spike trains by delay encoding, computed as

$$t_i = (1 - I_i)\,T,$$

where $t_i$ is the firing time of the $i$-th neuron, $T$ is the length of the encoding window, and $I_i$ is the intensity value of the $i$-th feature, so that stronger features fire earlier.
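A minimal Python sketch of this encoding step, under the linear-mapping form $t_i = (1 - I_i)T$ above; whether a zero-intensity feature spikes at time $T$ or stays silent is an implementation choice not fixed here.

```python
import numpy as np

def latency_encode(features, T=20e-3):
    """Map feature intensities in [0, 1] to one spike time per input neuron.

    Stronger features fire earlier: intensity 1 fires at t = 0 and
    intensity 0 fires at the end of the encoding window T.
    """
    return (1.0 - np.asarray(features).ravel()) * T

spike_times = latency_encode(np.random.rand(49, 4))   # 196 spike times per image
```

A 49×4 feature matrix thus yields 196 spike times, one per input neuron of the corresponding image channel.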
Preferably, Step 2 is specifically as follows:

The improved SRM model introduces an after-depolarization potential into the refractory kernel and injects an oscillation into the output-layer neurons as an external input current. In the spiking neural network, when an output-layer neuron fires a spike, the spike is transmitted to the other neurons of the same layer and to the inhibitory-layer neurons, so the membrane potential of an output-layer neuron receives excitatory input from the input layer and from the other neurons of the same layer as well as inhibitory feedback from the inhibitory neurons. The membrane potential of an output-layer neuron in the model is computed as

$$V_i(t) = I_i^{\mathrm{ff}}(t) + I_i^{\mathrm{rec}}(t) + I_i^{\mathrm{inh}}(t) + H_i(t - t_i^f) + I^{\mathrm{osc}}(t),$$

where $V_i(t)$ is the membrane potential of neuron $i$; $I_i^{\mathrm{ff}}(t)$ is the total input from the input-layer neurons; $I_i^{\mathrm{rec}}(t)$ is the total input from neurons of the same layer; $I_i^{\mathrm{inh}}(t)$ is the inhibitory feedback; $H_i(t - t_i^f)$ is the after-depolarization of the membrane potential following a spike of neuron $i$ at time $t_i^f$; $I^{\mathrm{osc}}(t)$ is the oscillation source; $w_{ij}^{\mathrm{ff}}$, $w_{ij}^{\mathrm{rec}}$ and $w_{ij}^{\mathrm{inh}}$ are the synaptic weights from input-layer neurons to output-layer neurons, between neurons within the output layer, and from inhibitory neurons to output-layer neurons, respectively; $V_0$ is a normalization factor; $A_{\mathrm{inh}}$, $A_{\mathrm{adp}}$ and $A_{\mathrm{osc}}$ are the amplitudes of the inhibitory feedback, the after-depolarization and the oscillation, respectively; $t_j^f$ and $t_i^f$ are the spike times of presynaptic neuron $j$ and postsynaptic neuron $i$, respectively; $\tau_m$, $\tau_s$, $\tau_{\mathrm{adp}}$ and $\tau_{\mathrm{inh}}$ are time constants; $f$ is the oscillation frequency; $t$ is the time variable; and $\varphi$ is the initial phase. The individual input terms are given in the detailed embodiment below.
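The five-term membrane potential can be sketched as follows. This is a minimal sketch under stated assumptions: the double-exponential PSP kernel matches the feedforward and recurrent input formulas of the detailed embodiment, while the saturating-rise form of the after-depolarization term and all numeric parameters are placeholders.

```python
import numpy as np

def psp_kernel(s, tau_m=10e-3, tau_s=2.5e-3, v0=1.0):
    """Causal double-exponential PSP kernel; zero for s <= 0."""
    s = np.maximum(np.asarray(s, dtype=float), 0.0)
    return v0 * (np.exp(-s / tau_m) - np.exp(-s / tau_s))

def membrane_potential(t, w_ff_i, in_spikes, w_rec_i, rec_spikes,
                       w_inh_i, inh_spikes, own_spikes,
                       a_adp=0.2, tau_adp=0.2, a_inh=0.5, tau_inh=5e-3,
                       a_osc=0.3, f_osc=8.0, phi=0.0):
    """V_i(t) = feedforward + recurrent + inhibition + after-depolarization + oscillation."""
    ff = sum(w * psp_kernel(t - s).sum() for w, s in zip(w_ff_i, in_spikes))
    rec = sum(w * psp_kernel(t - s).sum() for w, s in zip(w_rec_i, rec_spikes))
    inh = -sum(w * (a_inh * np.exp(-(t - s[s < t]) / tau_inh)).sum()
               for w, s in zip(w_inh_i, inh_spikes))
    past = own_spikes[own_spikes < t]
    # placeholder ADP kernel: slow saturating rise after the neuron's last spike
    adp = a_adp * (1.0 - np.exp(-(t - past.max()) / tau_adp)) if past.size else 0.0
    osc = a_osc * np.cos(2.0 * np.pi * f_osc * t + phi)
    return ff + rec + inh + adp + osc
```

Here `in_spikes`, `rec_spikes` and `inh_spikes` are lists of NumPy arrays of spike times, one array per presynaptic neuron, and `own_spikes` holds the neuron's own past spike times.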
Preferably, the supervised group Tempotron in Step 3 updates the synaptic weights between the input layer and the output layer as follows:

$$\Delta w_{ij}^{\mathrm{ff}} = \lambda\, d \sum_{t_j^f < t_{\max}^k} \kappa\big(t_{\max}^k - t_j^f\big), \qquad k = 1, \dots, K,$$

where $\Delta w_{ij}^{\mathrm{ff}}$ is the update to the connection weight from the input layer to the output layer; $\lambda$ is the learning rate of the inter-layer weights; $d$ is the desired output label, taking the value 0 or 1; $t_j^f$ is a spike time of presynaptic neuron $j$; $K$ is the number of neurons that must still be activated to reach the preset neural population size; $\kappa(\cdot)$ is the postsynaptic potential kernel; and $t_{\max}^k$ is the time of the maximum membrane potential of the $k$-th most active postsynaptic neuron.
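A minimal sketch of the group Tempotron step for one input pattern; `psp_kernel` is the kernel from the membrane-potential sketch above, the learning rate is a placeholder, and the depression branch for d = 0 (erroneously firing neurons) is an assumption following the standard Tempotron.

```python
import numpy as np

def group_tempotron_update(w_ff, in_spikes, fired, v_peak, t_peak, K, lr=1e-4, d=1):
    """Update feedforward weights toward recruiting a neural population of size K.

    w_ff:      (n_out, n_in) feedforward weights, modified in place
    in_spikes: list of spike-time arrays, one per input-layer neuron
    fired:     (n_out,) boolean mask of output neurons that already fired
    v_peak:    (n_out,) peak membrane potential of each output neuron
    t_peak:    (n_out,) time at which each peak occurred
    """
    if d == 1:                                   # recruit the K most active silent neurons
        pool, sign = np.flatnonzero(~fired), 1.0
    else:                                        # assumed symmetric branch: depress firing ones
        pool, sign = np.flatnonzero(fired), -1.0
    top_k = pool[np.argsort(v_peak[pool])[::-1][:K]]
    for i in top_k:
        for j, s in enumerate(in_spikes):
            # each presynaptic spike before the peak contributes via the PSP kernel
            w_ff[i, j] += sign * lr * psp_kernel(t_peak[i] - s).sum()
    return w_ff
```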
Preferably, the unsupervised STDP in Step 4 updates the synaptic weights between activated neurons within the layer as follows:

$$\Delta w_{ij}^{\mathrm{rec}} = \begin{cases} A_{+}\, e^{-(t_i^f - t_j^f)/\tau_{+}}, & t_j^f \le t_i^f, \\ -A_{-}\, e^{-(t_j^f - t_i^f)/\tau_{-}}, & t_j^f > t_i^f, \end{cases}$$

where $\Delta w_{ij}^{\mathrm{rec}}$ is the update to the connection weights within the output layer; $A_{+}$ and $A_{-}$ are the potentiation and depression amplitudes, respectively; $t_j^f$ and $t_i^f$ are the spike times of presynaptic neuron $j$ and postsynaptic neuron $i$, respectively; $\tau_{+}$ and $\tau_{-}$ are time constants; and $\tau_{\mathrm{NMDA}}$ denotes the decay time constant of NMDA receptors, which bounds the window within which a spike pair triggers an update.
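A minimal sketch of the pair-based STDP update for one recurrent synapse; the amplitudes, time constants and the hard NMDA-bounded eligibility window are placeholder assumptions.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=5e-6, a_minus=5e-6,
                tau_plus=20e-3, tau_minus=20e-3, tau_nmda=100e-3, w_max=1e-4):
    """Pair-based STDP for one recurrent synapse j -> i, applied per spike pair."""
    dt = t_post - t_pre
    if abs(dt) > tau_nmda:            # outside the eligibility window: no update
        return w
    if dt >= 0.0:                     # pre fires before post: potentiate
        w += a_plus * np.exp(-dt / tau_plus)
    else:                             # pre fires after post: depress
        w -= a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w, 0.0, w_max))
```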
Preferably, the inhibitory synaptic plasticity in Step 5 weakens the inhibitory weight onto an output-layer neuron that has fired recently, where $\Delta w_{ij}^{\mathrm{inh}}$ is the update to the connection weight from the inhibitory layer to the output layer and $t_i^f$ is the spike time of postsynaptic neuron $i$.
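A minimal sketch of the inhibitory-plasticity step; the fixed decrement `eta` and the recency window are illustrative assumptions, since the text specifies only that inhibition onto recently fired output neurons is weakened.

```python
import numpy as np

def inhibitory_update(w_inh2out, t_now, last_post_spike, eta=1e-5, window=25e-3):
    """Weaken inhibition onto output neurons that fired within `window` of t_now.

    w_inh2out:       (n_out, n_inh) inhibition-to-output weights, zero diagonal
    last_post_spike: (n_out,) last spike time per output neuron (-inf if none yet)
    """
    recent = (t_now - last_post_spike) <= window
    w_inh2out[recent, :] = np.maximum(w_inh2out[recent, :] - eta, 0.0)
    np.fill_diagonal(w_inh2out, 0.0)   # the no-self-inhibition structure stays fixed
    return w_inh2out
```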
A fast memory encoding device based on a spiking neural network with multi-synaptic plasticity, comprising one or more processors configured to implement the fast memory encoding method based on a spiking neural network with multi-synaptic plasticity.

A computer-readable storage medium on which a program is stored; when the program is executed by a processor, the fast memory encoding method based on a spiking neural network with multi-synaptic plasticity is implemented.

Beneficial effects of the present invention:

The cooperation of the supervised group Tempotron, unsupervised STDP and unsupervised inhibitory plasticity proposed by the present invention provides a feasible and efficient way for a spiking neural network to rapidly memorize external stimuli. Compared with existing spike-based memory models, the cooperation of supervised and unsupervised plasticity is more biologically interpretable and effectively improves the encoding speed and stability of memory.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of the computational framework of the fast memory encoding method based on a multi-synaptic-plasticity spiking neural network according to an embodiment of the present invention;

FIG. 2 is a schematic flowchart of the fast memory encoding method based on a multi-synaptic-plasticity spiking neural network according to an embodiment of the present invention;

FIG. 3 is a schematic diagram of the external-stimulus encoding process;

FIG. 4 is a schematic diagram of the group Tempotron learning strategy;

FIG. 5(a) is a schematic diagram showing the encoded input spike train of each image being fed into the network at the troughs of different oscillation cycles in the order '0', '1', '2', '3', '4', '5', '6' according to an embodiment of the present invention;

FIG. 5(b) is a schematic diagram of the network activity corresponding to '0', '1', '2', '3', '4', '5', '6' and to the accumulated sequences '0', '01', '012', '0123', '01234', '012345', '0123456' according to an embodiment of the present invention;

FIG. 6(a) shows the evolution of the inter-layer synaptic weights; FIG. 6(b) shows the evolution of the intra-layer synaptic weights;

FIG. 7 is a schematic diagram of the convergence speed of the inter-layer synaptic weights;

FIG. 8 is a structural block diagram of the fast memory encoding device based on a multi-synaptic-plasticity spiking neural network provided by the present invention.
Detailed Description

To make the objectives, technical solutions and technical effects of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments.

To solve or mitigate the above technical problems, the present invention constructs a spiking neural network model with full connections between layers and recurrent connections within the layer, based on the supervised group Tempotron together with unsupervised STDP and inhibitory synaptic plasticity, and verifies the function of the model on a digit-sequence memory task. The results show that the invention can effectively improve the encoding speed and stability of memory.
As shown in FIG. 1, the computational framework of the fast memory encoding method based on a multi-synaptic-plasticity spiking neural network comprises an input layer, an output layer and an inhibitory layer. The input-layer neurons are fully connected to the output-layer neurons; input spikes are transmitted to the output layer via the input layer, and the synaptic weights from the input layer to the output layer are updated by the supervised group Tempotron. Recurrent connections exist within the output layer, and the synaptic weights within the output layer are updated by unsupervised STDP. Bidirectional connections exist between the inhibitory layer and the output layer: the output layer connects to the inhibitory layer one-to-one, and the synaptic weights of these connections are fixed and do not participate in weight training, while the inhibitory layer connects to the output layer in an all-to-all fashion without diagonal (self) connections, and these synaptic weights are updated by the unsupervised inhibitory plasticity rule.
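A minimal sketch of this connectivity in Python; the layer sizes and initialization statistics are taken from the embodiment below, while the variable names are introduced here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_inh = 1372, 100, 100   # sizes from the embodiment below

# input -> output: fully connected, trained by the supervised group Tempotron
w_ff = rng.normal(loc=0.001, scale=0.002, size=(n_out, n_in))

# output -> output: recurrent, trained by unsupervised STDP, no self-connections
w_rec = rng.uniform(0.0, 1e-5, size=(n_out, n_out))
np.fill_diagonal(w_rec, 0.0)

# output -> inhibition: one-to-one, fixed, excluded from training
w_out2inh = np.eye(n_inh)

# inhibition -> output: all-to-all with zero diagonal, trained by inhibitory plasticity
w_inh2out = 1.0 - np.eye(n_out)
```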
As shown in FIG. 2, the fast memory encoding method based on a multi-synaptic-plasticity spiking neural network specifically comprises the following steps:

Step 1: convert external stimuli into input spike trains based on the hierarchical encoding strategy;

The real-valued data of the external stimuli are first encoded into spike patterns before being input into the spiking neural network. In this embodiment, the external stimuli are seven images arbitrarily selected, one from each of the seven MNIST handwritten-digit classes '0', '1', '2', '3', '4', '5' and '6'; each image is 28×28 pixels and is partitioned into 49 receptive fields of 4×4 pixels. FIG. 3 shows the encoding process for the digit '0': features are first extracted by the two-layer structure consisting of the S layer of simple cells and the C layer of complex cells, and the extracted features are then encoded by the delay encoding method, where:

S layer: extracts the edge-orientation features of the image. The filter bank of the S layer consists of four Gabor filters at different orientations, which emulate receptive fields of the visual cortex and filter the receptive-field image blocks, yielding 49×4 feature maps of size 4×4 pixels after filtering;

C layer: first, the entries of each feature map output by the S layer are summed to obtain a feature map of size 49×4; linear normalization maps the data into the interval [0, 1]; then a row-wise max competition is performed, i.e., each row keeps only the strongest feature as the best-matching orientation feature at that position while the feature values of the other orientations are set to 0, yielding a sparse representation of the 49×4 feature map.
The obtained features are converted into input spike trains by delay encoding, computed as

$$t_i = (1 - I_i)\,T,$$

where $t_i$ is the firing time of the $i$-th neuron, $T = 20 \times 10^{-3}$ s is the length of the encoding window, and $I_i$ is the intensity value of the $i$-th feature.

Each image yields 49×4 = 196 spike trains, and different images are represented by different spike trains. The encoded spike trains enter the network via the input layer; the total number of input-layer neurons is 7×49×4 = 1372.
Step 2: after receiving the input spikes, update the membrane potentials of the output-layer neurons based on the improved SRM model;

In this embodiment, the neurons of the spiking neural network adopt an improved SRM model, which provides a concise description of spiking-neuron dynamics through continuous kernel functions and clearly exposes the contribution of multiple input sources to neuron firing. Compared with the original SRM model, the improved SRM model incorporates more physiological detail and introduces into the refractory kernel an after-depolarization potential $H_i(t - t_i^f)$ that describes the slow rise of the membrane potential after the neuron fires a spike, where $A_{\mathrm{adp}}$ is the after-depolarization amplitude, $t_i^f$ is the spike time of neuron $i$, and $\tau_{\mathrm{adp}}$ is the time constant of the after-depolarization.
An oscillation, an important brain rhythm of synchronized neural activity, is additionally introduced and injected into the output neurons as an external input current; it is modeled as a cosine wave:

$$I^{\mathrm{osc}}(t) = A_{\mathrm{osc}} \cos(2\pi f t + \varphi),$$

where $I^{\mathrm{osc}}(t)$ is the oscillation source, $A_{\mathrm{osc}}$ is the oscillation amplitude, $f$ is the oscillation frequency, and $\varphi$ is the initial phase. In the spiking neural network, the encoded input spike train of each image is fed into the network at the troughs of different oscillation cycles in the order '0', '1', '2', '3', '4', '5', '6', as shown in FIG. 5(a). The excitatory input from the input layer received by an output-layer neuron is expressed as

$$I_i^{\mathrm{ff}}(t) = \sum_j w_{ij}^{\mathrm{ff}} \sum_{t_j^f} V_0 \left[ \exp\!\left(-\frac{t - t_j^f}{\tau_m}\right) - \exp\!\left(-\frac{t - t_j^f}{\tau_s}\right) \right],$$

where $I_i^{\mathrm{ff}}(t)$ is the total input from the input-layer neurons, $w_{ij}^{\mathrm{ff}}$ is the synaptic weight from input-layer neuron $j$ to output-layer neuron $i$, $V_0$ is a normalization factor, $t_j^f$ is a spike time of presynaptic neuron $j$, and $\tau_m$ and $\tau_s$ are time constants.
When an output-layer neuron fires a spike, the spike is transmitted to the other neurons of the same layer as well as to the inhibitory-layer neurons. The excitatory input received by an output-layer neuron from the other neurons of the same layer is expressed as

$$I_i^{\mathrm{rec}}(t) = \sum_{j \ne i} w_{ij}^{\mathrm{rec}} \sum_{t_j^f} V_0 \left[ \exp\!\left(-\frac{t - t_j^f}{\tau_m}\right) - \exp\!\left(-\frac{t - t_j^f}{\tau_s}\right) \right],$$

where $I_i^{\mathrm{rec}}(t)$ is the total input from neurons of the same layer and $w_{ij}^{\mathrm{rec}}$ is the synaptic weight between presynaptic neuron $j$ and postsynaptic neuron $i$ within the output layer.
The inhibitory feedback received from the inhibitory neurons is expressed as

$$I_i^{\mathrm{inh}}(t) = -\sum_j w_{ij}^{\mathrm{inh}} \sum_{t_j^f} A_{\mathrm{inh}} \exp\!\left(-\frac{t - t_j^f}{\tau_{\mathrm{inh}}}\right),$$

where $I_i^{\mathrm{inh}}(t)$ is the inhibitory feedback, $w_{ij}^{\mathrm{inh}}$ is the synaptic weight from an inhibitory neuron to an output-layer neuron, $A_{\mathrm{inh}}$ is the amplitude of the inhibitory feedback, $t_j^f$ is a spike time of the corresponding output-layer neuron (which drives its paired inhibitory neuron one-to-one), and $\tau_{\mathrm{inh}}$ is a time constant.
In summary, the membrane potential of an output-layer neuron in the model is computed as

$$V_i(t) = I_i^{\mathrm{ff}}(t) + I_i^{\mathrm{rec}}(t) + I_i^{\mathrm{inh}}(t) + H_i(t - t_i^f) + I^{\mathrm{osc}}(t).$$

In this embodiment, the kernel time constants and amplitudes, the simulation time step and the duration of one simulation iteration are set to fixed values, and the number of iterations is set to 20.
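The overall simulation can be organized as a fixed-step loop over learning iterations, sketched below. Only the iteration count of 20 comes from the embodiment; the time step, trial duration, threshold and the stand-in potential `v_of` are placeholders for the full model above.

```python
import numpy as np

dt, t_end, n_iters, v_thresh, n_out = 1e-3, 1.0, 20, 1.0, 100
rng = np.random.default_rng(1)
spikes = [[] for _ in range(n_out)]          # per-neuron output spike times

def v_of(i, t):
    # stand-in for the full five-term membrane potential of the SRM sketch
    return 0.3 * np.cos(2.0 * np.pi * 8.0 * t) + 0.8 * rng.random()

for it in range(n_iters):                    # 20 learning iterations
    for t in np.arange(0.0, t_end, dt):      # one simulated trial
        for i in range(n_out):
            if v_of(i, t) >= v_thresh:       # threshold crossing emits a spike; it
                spikes[i].append(t)          # reaches the recurrent and inhibitory
                                             # layers at subsequent time steps
```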
Step 3: use the supervised group Tempotron to update the synaptic weights from the input layer to the output layer, rapidly activating a population of output neurons to memorize the input;
FIG. 4 is a schematic diagram of the group Tempotron learning strategy. In each learning iteration, for each input spike pattern, if the number of output-layer neurons responding to the input pattern is still K short of the preset neural population size, the synaptic weights from the firing neurons of the input layer to the K unfired output-layer neurons with the largest maximum membrane potentials are strengthened. The inter-layer weight update is

$$\Delta w_{ij}^{\mathrm{ff}} = \lambda\, d \sum_{t_j^f < t_{\max}^k} \kappa\big(t_{\max}^k - t_j^f\big), \qquad k = 1, \dots, K,$$

where $\Delta w_{ij}^{\mathrm{ff}}$ is the update to the connection weight from the input layer to the output layer, $\lambda$ is the learning rate of the inter-layer weights, $d$ is the desired output label taking the value 0 or 1, $t_j^f$ is a spike time of presynaptic neuron $j$, $K$ is the number of neurons that must still be activated to reach the neural population size, $\kappa(\cdot)$ is the postsynaptic potential kernel, and $t_{\max}^k$ is the time of the maximum membrane potential of the $k$-th most active postsynaptic neuron.
In this embodiment, the number of output neurons is 100 and the number of neurons in each neural population is 12. Before learning, the initial inter-layer weights are randomly initialized from a normal distribution with mean 0.001 and standard deviation 0.002. As shown in FIG. 6(a), after learning, for each input pattern the weights between the firing neurons of the input layer and the responding neural population of the output layer are strengthened. Owing to the strengthened inter-layer synaptic weights, near the moment at which an input spike pattern is presented, the output layer exhibits population firing activity that encodes the input, as shown by the activity corresponding to '0', '1', '2', '3', '4', '5', '6' in FIG. 5(b).
Step 4: after the output-layer neurons are activated, use unsupervised STDP to update the synaptic weights between activated neurons within the layer, forming a strengthened recurrent sub-network that stores the memory;
After the output-layer neurons are activated, a synaptic weight is updated whenever the difference between the firing times of the presynaptic and the postsynaptic neuron falls within the STDP window. When the presynaptic neuron fires before the postsynaptic neuron, the synaptic weight is strengthened; when the presynaptic neuron fires after the postsynaptic neuron, the synaptic weight is weakened. The intra-layer weight update is

$$\Delta w_{ij}^{\mathrm{rec}} = \begin{cases} A_{+}\, e^{-(t_i^f - t_j^f)/\tau_{+}}, & t_j^f \le t_i^f, \\ -A_{-}\, e^{-(t_j^f - t_i^f)/\tau_{-}}, & t_j^f > t_i^f, \end{cases}$$

where $\Delta w_{ij}^{\mathrm{rec}}$ is the update to the connection weights within the output layer, $A_{+}$ and $A_{-}$ are the potentiation and depression amplitudes, respectively, $t_j^f$ and $t_i^f$ are the spike times of presynaptic neuron $j$ and postsynaptic neuron $i$, respectively, $\tau_{+}$ and $\tau_{-}$ are time constants, and $\tau_{\mathrm{NMDA}}$ denotes the decay time constant of NMDA receptors, which bounds the update window.
In this embodiment, before learning, the weights of the recurrent connections within the layer are randomly initialized in the interval [0, 1e-5]; after learning, the internal connections of the neuron population responding to each input pattern are strengthened, forming a recurrent sub-network, as shown in FIG. 6(b). The auto-associative memory is stored in the strengthened recurrent connection weights within the neural population.
Step 5: while performing Step 4, use unsupervised inhibitory plasticity to update the synaptic weights from the inhibitory layer to the output layer; the inhibitory feedback guarantees temporal separation between the firing of the neural populations that memorize different inputs.
The output layer connects to the inhibitory layer one-to-one; the synaptic weights of these connections are fixed and do not participate in weight training. The inhibitory layer connects to the output layer in an all-to-all fashion without diagonal connections: the values on the diagonal of the weight matrix are 0 and all remaining values are 1. When the difference between the current time $t$ and the most recent spike of an output neuron falls within the update window, the inhibitory weights onto the output-layer neurons that provided input to the inhibitory neurons are decreased, while the inhibitory weights onto neurons that provided no input to the inhibitory neurons remain unchanged; here $\Delta w_{ij}^{\mathrm{inh}}$ denotes the update to the connection weight from the inhibitory layer to the output layer and $t_i^f$ the spike time of postsynaptic neuron $i$. After an output-layer neural population fires in response to a stimulus, the excitatory input from within the same-layer population, the after-depolarization current following discharge, and the external oscillatory input enable the population to keep firing in subsequent oscillation cycles without further stimulus input, thereby maintaining short-term memory. After the output-layer neurons fire, the inhibitory feedback from the inhibitory-layer neurons prevents a neural population from firing continuously after its discharge, avoiding mutual interference between different memory items. As shown by the activity corresponding to '0', '01', '012', '0123', '01234', '012345', '0123456' in FIG. 5(b), near the peaks of the oscillation cycles following the appearance of the input stimuli, the neural populations responding to different input stimuli fire sequentially at different moments, and the inhibitory feedback guarantees the temporal separation of the firing of the neural populations that memorize different inputs.
In this embodiment, the number of inhibitory neurons is 100, the same as the number of output neurons.
FIG. 7 shows the convergence speed of the inter-layer synaptic weights. Under the group Tempotron learning strategy, a population of output neurons is rapidly activated to encode the input into memory, and the inter-layer synaptic weights converge after only 4 learning passes. The experimental results show that the fast memory encoding method based on a multi-synaptic-plasticity spiking neural network proposed by the present invention can effectively improve the encoding speed and stability of memory.
Corresponding to the foregoing embodiments of the fast memory encoding method based on a multi-synaptic-plasticity spiking neural network, the present invention further provides embodiments of a fast memory encoding device based on a multi-synaptic-plasticity spiking neural network.

Referring to FIG. 8, an embodiment of the present invention provides a fast memory encoding device based on a multi-synaptic-plasticity spiking neural network, comprising one or more processors configured to implement the fast memory encoding method based on a multi-synaptic-plasticity spiking neural network of the above embodiments.

The embodiment of the fast memory encoding device based on a multi-synaptic-plasticity spiking neural network of the present invention can be applied to any device with data-processing capability, such as a computer. The device embodiment may be implemented by software, or by hardware or a combination of hardware and software. Taking a software implementation as an example, the device in the logical sense is formed by the processor of the data-processing device on which it runs reading the corresponding computer program instructions from non-volatile memory into memory for execution. At the hardware level, FIG. 8 shows a hardware structure diagram of a data-processing device on which the fast memory encoding device based on a multi-synaptic-plasticity spiking neural network of the present invention resides; in addition to the processor, memory, network interface and non-volatile memory shown in FIG. 8, the data-processing device in the embodiment may also include other hardware according to its actual function, which is not described again here.

For the implementation of the functions and effects of each unit of the above device, refer to the implementation of the corresponding steps of the above method, which is not repeated here.

Since the device embodiments substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant details. The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present invention. A person of ordinary skill in the art can understand and implement the solution without creative effort.

An embodiment of the present invention further provides a computer-readable storage medium on which a program is stored; when the program is executed by a processor, the fast memory encoding method based on a multi-synaptic-plasticity spiking neural network of the above embodiments is implemented.

The computer-readable storage medium may be an internal storage unit of any data-processing device described in any of the foregoing embodiments, such as a hard disk or memory. It may also be an external storage device of that device, such as a plug-in hard disk, smart media card (SMC), SD card or flash card provided on the device. Further, the computer-readable storage medium may include both an internal storage unit and an external storage device of a data-processing device. The computer-readable storage medium is used to store the computer program and the other programs and data required by the data-processing device, and may also be used to temporarily store data that has been or will be output.

The above are merely preferred embodiments of the present invention and do not limit the present invention in any form. Although the implementation of the present invention has been described in detail above, those skilled in the art may still modify the technical solutions described in the foregoing examples or substitute equivalents for some of their technical features. Any modification, equivalent replacement or the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (8)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111497073.2A (granted as CN114118383B) | 2021-12-09 | 2021-12-09 | Fast memory encoding method and device based on multi-synaptic plasticity spiking neural network |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN114118383A | 2022-03-01 |
| CN114118383B | 2025-02-28 |
Family ID: 80364420

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202111497073.2A (active; granted as CN114118383B) | Fast memory encoding method and device based on multi-synaptic plasticity spiking neural network | 2021-12-09 | 2021-12-09 |

Country: CN | Granted publication: CN114118383B
Patent Citations (4)

| Publication | Priority Date | Publication Date |
|---|---|---|
| US20150269483A1 * | 2014-03-18 | 2015-09-24 |
| CN108985447A * | 2018-06-15 | 2018-12-11 |
| CN111882064A * | 2020-08-03 | 2020-11-03 |
| CN113298242A * | 2021-06-08 | 2021-08-24 |

\* Cited by examiner
Non-Patent Citations (1)

| Title |
|---|
| Kuang Zaibo; Wang Jiang: "Research on recognition of power transmission line components based on a brain-inspired visual neuron network" (基于脑启发视觉神经元网络输电线路部件识别的研究), 电力系统及其自动化学报, vol. 32, no. 4, 3 February 2020 * |

\* Cited by examiner
Cited By (10)

| Publication | Priority Date | Publication Date | Title |
|---|---|---|---|
| CN115327373A * | 2022-04-20 | 2022-11-11 | Hemodialysis equipment fault diagnosis method based on BP neural network and storage medium |
| CN114611686A * | 2022-05-12 | 2022-06-10 | Synapse delay implementation system and method based on programmable neural mimicry core |
| CN115429293A * | 2022-11-04 | 2022-12-06 | Sleep type classification method and device based on impulse neural network |
| CN115429293B | 2022-11-04 | 2023-04-07 | Sleep type classification method and device based on impulse neural network |
| CN116080688A * | 2023-03-03 | 2023-05-09 | Brain-inspiring-like intelligent driving vision assisting method, device and storage medium |
| WO2024216856A1 * | 2023-04-17 | 2024-10-24 | Brain-like synaptic learning method, and neuromorphic hardware system based on brain-like technique |
| CN116542291A * | 2023-06-27 | 2023-08-04 | A method and system for generating impulse memory images inspired by memory loops |
| CN116542291B | 2023-06-27 | 2023-11-21 | Pulse memory image generation method and system for memory loop inspiring |
| CN117456577A * | 2023-10-30 | 2024-01-26 | System and method for expression recognition based on optical pulse neural network |
| CN117456577B | 2023-10-30 | 2024-04-26 | System and method for facial expression recognition based on optical pulse neural network |

\* Cited by examiner
Also Published As

| Publication Number | Publication Date |
|---|---|
| CN114118383B | 2025-02-28 |
Similar Documents

| Publication | Title |
|---|---|
| CN114118383A | Multi-synaptic plasticity pulse neural network-based fast memory coding method and device |
| Yu et al. | Spike timing or rate? Neurons learn to make decisions for both through threshold-driven plasticity |
| Hunsberger et al. | Spiking deep networks with LIF neurons |
| CN103201610B | For providing the Method and circuits of neuromorphic-synapse device system |
| Tsuda et al. | Memory dynamics in asynchronous neural networks |
| CN113272828A | Elastic neural network |
| WO2015020802A2 | Computed synapses for neuromorphic systems |
| KR20170031695A | Decomposing convolution operation in neural networks |
| TW201543382A | Neural network adaptation to current computational resources |
| EP3050004A2 | Methods and apparatus for implementation of group tags for neural models |
| Ivancevic et al. | Quantum neural computation |
| Ma et al. | A memristive neural network model with associative memory for modeling affections |
| CN112101535A | Signal processing method of pulse neuron and related device |
| CN114118378A | Hardware-friendly STDP learning method and system based on threshold adaptive neuron |
| Zheng et al. | An introductory review of spiking neural network and artificial neural network: From biological intelligence to artificial intelligence |
| Ma et al. | Double layers self-organized spiking neural P systems with anti-spikes for fingerprint recognition |
| CN107798384B | Iris florida classification method and device based on evolvable pulse neural network |
| Ravichandran et al. | Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks |
| CN109635942B | Brain excitation state and inhibition state imitation working state neural network circuit structure and method |
| Thorpe | Timing, Spikes, and the Brain |
| KR102535635B1 | Neuromorphic computing device |
| Li et al. | A review on synergistic learning |
| Xue et al. | Improving liquid state machine with hybrid plasticity |
| US11289175B1 | Method of modeling functions of orientation and adaptation on visual cortex |
| Lacko | From perceptrons to deep neural networks |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |