CN114118383A - Multi-synaptic plasticity pulse neural network-based fast memory coding method and device - Google Patents

Multi-synaptic plasticity pulse neural network-based fast memory coding method and device Download PDF

Info

Publication number
CN114118383A
CN114118383A CN202111497073.2A CN202111497073A CN114118383A CN 114118383 A CN114118383 A CN 114118383A CN 202111497073 A CN202111497073 A CN 202111497073A CN 114118383 A CN114118383 A CN 114118383A
Authority
CN
China
Prior art keywords
layer
neurons
input
output layer
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111497073.2A
Other languages
Chinese (zh)
Other versions
CN114118383B (en
Inventor
袁孟雯
唐华锦
王笑
陆宇婧
张梦骁
洪朝飞
黄恒
赵文一
燕锐
潘纲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Zhejiang Lab
Original Assignee
Zhejiang University ZJU
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU, Zhejiang Lab filed Critical Zhejiang University ZJU
Priority to CN202111497073.2A priority Critical patent/CN114118383B/en
Publication of CN114118383A publication Critical patent/CN114118383A/en
Application granted granted Critical
Publication of CN114118383B publication Critical patent/CN114118383B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a quick memory coding method based on a multi-synaptic plasticity pulse neural network, which comprises the following steps: the method comprises the following steps: converting the external stimulus into an input pulse sequence based on a hierarchical coding strategy; step two: after receiving the input pulse, the pulse neural network updates the membrane potential of the neuron on the output layer based on the improved SRM model; step three: updating synapse weight values input to an output layer by using a supervision group Tempotron, and activating neuron memory input of the output layer; step four: after neurons in an output layer are activated, updating synaptic weights among the activated neurons in the layer by using unsupervised STDP to form enhanced storage memory of a circulation sub-network; step five: and when the step four is executed, unsupervised inhibition synapse plasticity is used, synapse weights from an inhibition layer to an output layer are updated, and separation of release time of different input neural populations of feedback guarantee memory is inhibited. The invention also provides a quick memory coding device based on the multi-synaptic plasticity pulse neural network. The invention effectively improves the coding speed and stability of memory.

Description

基于多突触可塑性脉冲神经网络快速记忆编码方法和装置Fast memory coding method and device based on polysynaptic plasticity spiking neural network

技术领域technical field

本发明属于类脑智能和人工智能领域,涉及基于多突触可塑性脉冲神经网络快速记忆编码方法和装置。The invention belongs to the field of brain-like intelligence and artificial intelligence, and relates to a fast memory coding method and device based on a polysynaptic plasticity impulse neural network.

背景技术Background technique

脉冲神经网络(Spiking Neural Network,SNN)模拟了生物神经系统的信息处理机制,通过离散脉冲事件驱动计算,相比于传统人工神经网络,具有低功耗以及更强的信息表达能力等优势,是分析和仿真大脑认知功能的强有力工具。记忆作为一种对经历过的事物的识记、保持和再现的认知能力,是大脑智能最核心的组成部分之一。The Spiking Neural Network (SNN) simulates the information processing mechanism of the biological nervous system. It drives computing through discrete pulse events. Compared with traditional artificial neural networks, it has the advantages of low power consumption and stronger information expression capabilities. A powerful tool for analyzing and simulating the cognitive functions of the brain. Memory, as a cognitive ability to remember, maintain and reproduce what has been experienced, is one of the core components of brain intelligence.

突触可塑性长期以来被认为是学习和记忆的基础,神经科学研究表明生物大脑依赖于多种突触可塑性协同来保障认知任务的可靠实现。但是由于缺乏对不同形式的突触可塑性规则之间的相互作用的理解以及神经网络的结构和动力学的复杂性,现有的脉冲记忆模型大多数仅使用无监督突触可塑性规则(如脉冲时间依赖可塑性(Spike-Timing-Dependent-Plasticity,STDP))训练循环网络,通过调整循环网络中激活神经元之间的突触权值,形成加强连接的循环子网络存储记忆,对外部输入模式的记忆就由神经群体的集体共发放活动编码。然而,基于无监督可塑性规则的记忆模型学习过程缓慢,需要多次重复地输入感知刺激才能生成神经群体表达记忆,记忆效率低。此外,基于无监督可塑性规则的记忆模型无法保障神经群体的稳定生成,在学习过程中响应不同输入模式的神经群体之间可能会产生干扰从而破坏记忆。Synaptic plasticity has long been recognized as the basis of learning and memory, and neuroscience research has shown that biological brains rely on the synaptic plasticity of multiple synapses to ensure the reliable realization of cognitive tasks. However, due to the lack of understanding of the interplay between different forms of synaptic plasticity rules and the complexity of the structure and dynamics of neural networks, most existing spiking memory models only use unsupervised synaptic plasticity rules (such as spike timing Dependent plasticity (Spike-Timing-Dependent-Plasticity, STDP)) training a recurrent network, by adjusting the synaptic weights between the activated neurons in the recurrent network, a recurrent sub-network that strengthens connections is formed to store memory and memory for external input patterns is encoded by the collective co-firing activity of the neural population. However, the learning process of memory models based on unsupervised plasticity rules is slow, requiring repeated input of perceptual stimuli to generate neural population expressive memory, and the memory efficiency is low. In addition, memory models based on unsupervised plasticity rules cannot guarantee the stable generation of neural populations, and there may be interference between neural populations responding to different input patterns during the learning process, thereby destroying memory.

发明内容SUMMARY OF THE INVENTION

为了解决现有技术中存在的当前基于无监督可塑性的脉冲记忆模型记忆编码效率低、不稳定的问题,本发明提出一种基于多突触可塑性脉冲神经网络快速记忆编码方法和装置,其具体技术方案如下:In order to solve the problem of low memory coding efficiency and instability in the current pulse memory model based on unsupervised plasticity in the prior art, the present invention proposes a method and device for fast memory coding based on a multi-synaptic plasticity pulse neural network. The plan is as follows:

一种基于多突触可塑性的脉冲神经网络快速记忆编码方法,包括以下步骤:A method for fast memory coding of spiking neural network based on polysynaptic plasticity, comprising the following steps:

步骤一:基于层级编码策略将外部刺激转换为输入脉冲序列;Step 1: Convert external stimuli into input pulse trains based on a hierarchical coding strategy;

步骤二:脉冲神经网络收到输入脉冲后,基于改进的SRM模型更新输出层神经元的膜电位;Step 2: After receiving the input pulse, the spiking neural network updates the membrane potential of the neurons in the output layer based on the improved SRM model;

步骤三:使用监督群体Tempotron更新输入到输出层间的突触权值,激活输出层神经元记忆输入;Step 3: Use the supervised group Tempotron to update the synaptic weights between the input and the output layer, and activate the output layer neurons to memorize the input;

步骤四:输出层神经元激活后,使用无监督STDP更新层内激活神经元间的突触权值,形成增强的循环子网络存储记忆;Step 4: After the neurons in the output layer are activated, use unsupervised STDP to update the synaptic weights between the activated neurons in the layer to form an enhanced recurrent sub-network storage memory;

步骤五:在执行步骤四的同时,使用无监督抑制突触可塑性,更新抑制层到输出层间的突触权值,抑制反馈保障记忆不同输入的神经群体发放时间上的分离。Step 5: While performing Step 4, use unsupervised inhibition of synaptic plasticity to update the synaptic weights between the inhibition layer and the output layer, and the inhibition feedback ensures the separation of the firing time of neural groups that remember different inputs.

优选的,所述步骤一具体为:Preferably, the step one is specifically:

所述外部刺激为从MNIST手写体数字集的‘0’、‘1’、‘2’、‘3’、‘4’、‘5’、‘6’七类中任意选取的七张图像,每张图像的尺寸为28pixel*28pixel,将每张图像均匀切分,获得49个4pixel*4pixel大小的接收域;The external stimuli are seven images arbitrarily selected from seven categories of '0', '1', '2', '3', '4', '5', and '6' in the MNIST handwritten digit set. The size of the image is 28pixel*28pixel, and each image is evenly divided to obtain 49 receptive fields of 4pixel*4pixel size;

所述层级编码策略使用由简单细胞Simple Cell的S层和复杂细胞Complex Cell的C层构成的两层结构对外部刺激进行特征提取,然后通过延迟编码方法对提取到的特征进行编码,其中:The hierarchical encoding strategy uses a two-layer structure composed of the S layer of the simple cell and the C layer of the complex cell to extract the features of the external stimulus, and then encode the extracted features through the delay encoding method, where:

S层:提取图像的边缘方向特征,S层的滤波核由4个方向分别为

Figure 827069DEST_PATH_IMAGE001
Figure 275367DEST_PATH_IMAGE002
Figure 461629DEST_PATH_IMAGE003
Figure 387997DEST_PATH_IMAGE004
的 Gabor滤波器组成,4个方向的所述Gabor滤波器均模拟视觉皮层的感受野对每个接收域对 应的图像块分别执行滤波操作,滤波后获得49*4个尺寸为4pixel *4 pixel的特征图; S layer: Extract the edge direction features of the image. The filter kernel of the S layer consists of four directions:
Figure 827069DEST_PATH_IMAGE001
,
Figure 275367DEST_PATH_IMAGE002
,
Figure 461629DEST_PATH_IMAGE003
,
Figure 387997DEST_PATH_IMAGE004
The Gabor filters in the four directions all simulate the receptive field of the visual cortex and perform filtering operations on the image blocks corresponding to each receptive field. feature map;

C层:先对S层输出的每个特征图执行相加操作获得尺寸为49 pixel *4 pixel的特征图,对特征图执行线性归一化将数据分布到[0,1]的区间内,然后按行执行最大竞争操作,所述按行执行最大竞争操作包括:每行保留强度最强的特征作为该位置上最匹配的方向特征,其它方向的特征值被置为0,获得尺寸为4 pixel 9*4 pixel的特征图的稀疏化表达;Layer C: First perform an addition operation on each feature map output by layer S to obtain a feature map with a size of 49 pixel * 4 pixel, and perform linear normalization on the feature map to distribute the data into the interval [0, 1]. Then perform the maximum competition operation by row, the performing maximum competition operation by row includes: each row retains the feature with the strongest strength as the most matching directional feature at the position, the feature values of other directions are set to 0, and the obtained size is 4 The sparse expression of the feature map of pixel 9*4 pixel;

获得的特征通过延迟编码转换为输入脉冲序列,按照以下公式计算:The obtained features are transformed into the input pulse train by delay encoding, which is calculated according to the following formula:

Figure 858161DEST_PATH_IMAGE005
Figure 858161DEST_PATH_IMAGE005

其中,

Figure 211782DEST_PATH_IMAGE006
是第i个神经元的发放时间,T是编码窗口的长度,
Figure 150919DEST_PATH_IMAGE007
是第i个特征的强度 值。 in,
Figure 211782DEST_PATH_IMAGE006
is the firing time of the ith neuron, T is the length of the encoding window,
Figure 150919DEST_PATH_IMAGE007
is the intensity value of the ith feature.

优选的,所述步骤二具体为:Preferably, the step 2 is specifically:

所述改进的SRM模型在不应核中引入后去极化电位,同时引入

Figure 880978DEST_PATH_IMAGE008
振荡作为外部输入 电流注入到输出层神经元中,在所述的脉冲神经网络中,当输出层其一神经元发放脉冲后, 该脉冲会传递给同层的其他神经元以及抑制层神经元,模型中输出层神经元的膜电位收到 来自输入层以及同层的其他神经元的兴奋输入和来自抑制神经元的抑制反馈,则模型中输 出层神经元的膜电位按如下公式计算: The improved SRM model introduces a post-depolarization potential in the refractory nucleus while introducing
Figure 880978DEST_PATH_IMAGE008
Oscillation is injected into the neurons in the output layer as external input currents. In the above-mentioned spiking neural network, when one neuron in the output layer emits a pulse, the pulse will be transmitted to other neurons in the same layer and neurons in the inhibitory layer. The membrane potential of the output layer neurons in the model receives the excitatory input from the input layer and other neurons in the same layer and the inhibitory feedback from the inhibitory neurons, then the membrane potential of the output layer neurons in the model is calculated according to the following formula:

Figure 205649DEST_PATH_IMAGE009
Figure 205649DEST_PATH_IMAGE009

Figure 871117DEST_PATH_IMAGE010
Figure 871117DEST_PATH_IMAGE010

Figure 156605DEST_PATH_IMAGE011
Figure 156605DEST_PATH_IMAGE011

Figure 546479DEST_PATH_IMAGE012
Figure 546479DEST_PATH_IMAGE012

Figure 866602DEST_PATH_IMAGE013
Figure 866602DEST_PATH_IMAGE013

Figure 702971DEST_PATH_IMAGE014
Figure 702971DEST_PATH_IMAGE014

其中,

Figure 475755DEST_PATH_IMAGE015
是神经元i的膜电位,
Figure 406671DEST_PATH_IMAGE016
是输入层神经元的总输入,
Figure 581300DEST_PATH_IMAGE017
是同层神经元 的总输入,
Figure 588570DEST_PATH_IMAGE018
是抑制反馈,
Figure 238863DEST_PATH_IMAGE019
是神经元i在
Figure 989782DEST_PATH_IMAGE006
处发放脉冲后膜电位的后去极化过程,
Figure 18918DEST_PATH_IMAGE020
Figure 446357DEST_PATH_IMAGE008
振荡源,
Figure 928154DEST_PATH_IMAGE021
Figure 217184DEST_PATH_IMAGE022
Figure 366405DEST_PATH_IMAGE023
分别是输入层神经元到输出层神经元、输出层内神经元、抑 制神经元到输出层神经元的突触权重,
Figure 233255DEST_PATH_IMAGE024
是归一化因子,
Figure 77714DEST_PATH_IMAGE025
Figure 29489DEST_PATH_IMAGE026
Figure 423430DEST_PATH_IMAGE027
分别是抑制反馈、后 去极化、
Figure 68038DEST_PATH_IMAGE008
振荡的幅值,
Figure 399794DEST_PATH_IMAGE028
Figure 155260DEST_PATH_IMAGE006
分别是突触前神经元j、突触后神经元i的脉冲发放时刻,
Figure 138128DEST_PATH_IMAGE029
Figure 829004DEST_PATH_IMAGE030
Figure 38268DEST_PATH_IMAGE031
Figure 722059DEST_PATH_IMAGE032
是时间常量,f是
Figure 434801DEST_PATH_IMAGE008
振荡的频率,t是时间变量,
Figure 30998DEST_PATH_IMAGE033
是初始相位。 in,
Figure 475755DEST_PATH_IMAGE015
is the membrane potential of neuron i,
Figure 406671DEST_PATH_IMAGE016
is the total input of neurons in the input layer,
Figure 581300DEST_PATH_IMAGE017
is the total input of neurons in the same layer,
Figure 588570DEST_PATH_IMAGE018
is the inhibitory feedback,
Figure 238863DEST_PATH_IMAGE019
is the neuron i in
Figure 989782DEST_PATH_IMAGE006
The post-depolarization process of the membrane potential after the pulse is fired,
Figure 18918DEST_PATH_IMAGE020
Yes
Figure 446357DEST_PATH_IMAGE008
oscillator source,
Figure 928154DEST_PATH_IMAGE021
,
Figure 217184DEST_PATH_IMAGE022
,
Figure 366405DEST_PATH_IMAGE023
are the synaptic weights from the input layer neurons to the output layer neurons, the neurons in the output layer, and the inhibitory neurons to the output layer neurons,
Figure 233255DEST_PATH_IMAGE024
is the normalization factor,
Figure 77714DEST_PATH_IMAGE025
,
Figure 29489DEST_PATH_IMAGE026
,
Figure 423430DEST_PATH_IMAGE027
Inhibitory feedback, post-depolarization,
Figure 68038DEST_PATH_IMAGE008
the amplitude of the oscillation,
Figure 399794DEST_PATH_IMAGE028
,
Figure 155260DEST_PATH_IMAGE006
are the impulse firing times of presynaptic neuron j and postsynaptic neuron i, respectively,
Figure 138128DEST_PATH_IMAGE029
,
Figure 829004DEST_PATH_IMAGE030
,
Figure 38268DEST_PATH_IMAGE031
,
Figure 722059DEST_PATH_IMAGE032
is the time constant, f is
Figure 434801DEST_PATH_IMAGE008
frequency of oscillation, t is the time variable,
Figure 30998DEST_PATH_IMAGE033
is the initial phase.

优选的,所述步骤三中的监督群体Tempotron更新输入层和输出层间突触权值,其更新方法为:Preferably, the supervised group Tempotron in the third step updates the synaptic weights between the input layer and the output layer, and the update method is as follows:

Figure 849263DEST_PATH_IMAGE034
Figure 849263DEST_PATH_IMAGE034

其中,

Figure 946532DEST_PATH_IMAGE035
是输入层到输出层间连接的权值,
Figure 654725DEST_PATH_IMAGE036
是层间权值的学习率,d是期 望的输出标记包括0 或1,
Figure 546457DEST_PATH_IMAGE037
是突触前神经元j的脉冲发放时刻,K是距离达到神经群体大小 还需激活的神经元数目,
Figure 589369DEST_PATH_IMAGE038
是第k个最活跃突触后神经元最大膜电位对应时刻。 in,
Figure 946532DEST_PATH_IMAGE035
is the weight of the connection between the input layer and the output layer,
Figure 654725DEST_PATH_IMAGE036
is the learning rate of the weights between layers, d is the expected output label including 0 or 1,
Figure 546457DEST_PATH_IMAGE037
is the pulse firing time of the presynaptic neuron j, K is the number of neurons that need to be activated before reaching the size of the neural population,
Figure 589369DEST_PATH_IMAGE038
is the time corresponding to the maximum membrane potential of the kth most active postsynaptic neuron.

优选的,所述步骤四中的无监督STDP更新层内激活神经元间的突触权值,其更新方法为:Preferably, the unsupervised STDP in the step 4 updates the synaptic weights between the activated neurons in the layer, and the update method is as follows:

Figure 490329DEST_PATH_IMAGE039
Figure 490329DEST_PATH_IMAGE039

其中,

Figure 53028DEST_PATH_IMAGE040
是输出层内连接权值的更新量,
Figure 381241DEST_PATH_IMAGE041
和分别表示增强和减弱的幅值,
Figure 645869DEST_PATH_IMAGE037
Figure 350520DEST_PATH_IMAGE006
分别表示突触前神经元j和突触后神经元i脉冲发放时刻,
Figure 33305DEST_PATH_IMAGE042
Figure 657054DEST_PATH_IMAGE043
是时间常数,
Figure 284344DEST_PATH_IMAGE044
表 示NMDA受体的衰变时间常量。 in,
Figure 53028DEST_PATH_IMAGE040
is the update amount of the connection weights in the output layer,
Figure 381241DEST_PATH_IMAGE041
and represent the magnitudes of enhancement and weakening, respectively,
Figure 645869DEST_PATH_IMAGE037
and
Figure 350520DEST_PATH_IMAGE006
are the pulse firing times of presynaptic neuron j and postsynaptic neuron i, respectively,
Figure 33305DEST_PATH_IMAGE042
,
Figure 657054DEST_PATH_IMAGE043
is the time constant,
Figure 284344DEST_PATH_IMAGE044
represents the decay time constant of NMDA receptors.

优选的,所述步骤五中的抑制突触可塑性的更新方法为:Preferably, the updating method for inhibiting synaptic plasticity in the step 5 is:

Figure 402473DEST_PATH_IMAGE045
Figure 402473DEST_PATH_IMAGE045

其中,

Figure 64398DEST_PATH_IMAGE046
是抑制层到输出层间连接权值的更新量,
Figure 861977DEST_PATH_IMAGE006
表示突触后神经元i的 脉冲发放时刻。 in,
Figure 64398DEST_PATH_IMAGE046
is the update amount of the connection weight between the suppression layer and the output layer,
Figure 861977DEST_PATH_IMAGE006
represents the timing of the firing of the postsynaptic neuron i.

一种基于多突触可塑性脉冲神经网络快速记忆编码装置,包括一个或多个处理器,用于实现所述的基于多突触可塑性脉冲神经网络快速记忆编码方法。A fast memory coding device based on polysynaptic plasticity spiking neural network, comprising one or more processors for implementing the fast memory coding method based on polysynaptic plasticity spiking neural network.

一种计算机可读存储介质,其上存储有程序,该程序被处理器执行时,实现所述的基于多突触可塑性脉冲神经网络快速记忆编码方法。A computer-readable storage medium on which a program is stored, and when the program is executed by a processor, realizes the fast memory coding method based on the polysynaptic plasticity impulse neural network.

本发明的有益效果:Beneficial effects of the present invention:

本发明提出的有监督群体Tempotron、无监督STDP以及无监督抑制可塑性的协同,为脉冲神经网络快速记忆外部刺激提供了一种可行且高效的方式。与现有的脉冲记忆模型相比,有监督和无监督可塑性的协同,更具生物可解释性,并且有效地提升了记忆的编码速度与稳定性。The synergy of supervised group Tempotron, unsupervised STDP and unsupervised inhibitory plasticity proposed in the present invention provides a feasible and efficient way for the spiking neural network to quickly memorize external stimuli. Compared with existing impulse memory models, the synergy of supervised and unsupervised plasticity is more biologically interpretable, and effectively improves the encoding speed and stability of memory.

附图说明Description of drawings

图1是本发明一种实施例的基于多突触可塑性脉冲神经网络快速记忆编码方法的计算框架示意图;1 is a schematic diagram of a computational framework of a fast memory coding method based on a polysynaptic plasticity spiking neural network according to an embodiment of the present invention;

图2是本发明一种实施例的基于多突触可塑性脉冲神经网络快速记忆编码方法的流程示意图;2 is a schematic flowchart of a method for fast memory encoding based on a polysynaptic plasticity spiking neural network according to an embodiment of the present invention;

图3是外部刺激编码过程示意图;Figure 3 is a schematic diagram of an external stimulus encoding process;

图4是群体Tempotron学习策略的示意图;Figure 4 is a schematic diagram of a group Tempotron learning strategy;

图5(a)是本发明实施例的在脉冲神经网络中,每张图像编码后的输入脉冲序列按 照‘0’、‘1’、‘2’、‘3’、‘4’、‘5’、‘6’的顺序分别在

Figure 976564DEST_PATH_IMAGE008
振荡不同周期的波谷处输入网络的示意 图; Fig. 5(a) is the spiking neural network according to the embodiment of the present invention, the input pulse sequence after each image is encoded according to '0', '1', '2', '3', '4', '5' , '6' in the order of
Figure 976564DEST_PATH_IMAGE008
Schematic diagram of the input network at the troughs of different periods of oscillation;

图5(b)是本发明实施例的‘0’、‘1’、‘2’、‘3’、‘4’、‘5’、‘6’、‘0’、‘01’、‘012’、‘0123’、‘01234’、‘012345’、‘0123456’对应的活动示意图;Figure 5(b) shows '0', '1', '2', '3', '4', '5', '6', '0', '01', '012' of the embodiment of the present invention , '0123', '01234', '012345', '0123456' corresponding activity diagrams;

图6(a)是层间突触权值变化图;图6(b)是层内突触权值变化图;Figure 6(a) is the change diagram of synaptic weights between layers; Figure 6(b) is the change diagram of synaptic weights within layers;

图7是层间突触权值的收敛速度示意图;FIG. 7 is a schematic diagram of the convergence speed of the synaptic weights between layers;

图8是本发明提供的一种基于多突触可塑性脉冲神经网络快速记忆编码装置的结构框图。FIG. 8 is a structural block diagram of a fast memory coding device based on a multi-synaptic plasticity impulse neural network provided by the present invention.

具体实施方式Detailed ways

为了使本发明的目的、技术方案和技术效果更加清楚明白,以下结合说明书附图和实施例,对本发明作进一步详细说明。In order to make the objectives, technical solutions and technical effects of the present invention clearer, the present invention will be further described in detail below with reference to the accompanying drawings and embodiments of the description.

为了解决/缓解上述的技术问题,本发明基于有监督群体Tempotron以及无监督STDP、抑制突触可塑性,构建了一个层间全连接,层内具有循环连接的脉冲神经网络模型,并在数字序列记忆任务上对模型功能进行验证,结果表明,该发明可以有效地提升记忆的编码速度与稳定性。In order to solve/mitigate the above technical problems, the present invention constructs a spiking neural network model with full connection between layers and recurrent connections in layers based on supervised group Tempotron and unsupervised STDP, suppressing synaptic plasticity, and memory in digital sequence. The function of the model is verified on the task, and the results show that the invention can effectively improve the encoding speed and stability of memory.

如图1所示,是基于多突触可塑性的脉冲神经网络快速记忆编码方法的计算框架,该计算框架包含输入层、输出层以及抑制层。输入层神经元全连接到输出层神经元,输入脉冲经由输入层传递到输出层,输入层到输出层之间的突触权值由有监督群体Tempotron更新;输出层内存在循环连接,输出层内的突触权值由无监督STDP更新;抑制层与输出层之间存在双向连接,其中,输出层到抑制层采用一对一的连接方式,这部分连接的突触权值固定,不参与权值训练,而抑制层到输出层采用对角无连接的全连接方式,突触权值由无监督抑制可塑性规则更新。As shown in Figure 1, it is a computational framework of the fast memory coding method of spiking neural network based on polysynaptic plasticity, which includes an input layer, an output layer and an inhibition layer. The neurons in the input layer are fully connected to the neurons in the output layer, the input pulse is transmitted to the output layer through the input layer, and the synaptic weights between the input layer and the output layer are updated by the supervised group Tempotron; there are cyclic connections in the output layer, and the output layer The synaptic weights within are updated by unsupervised STDP; there is a bidirectional connection between the inhibitory layer and the output layer, in which the output layer to the inhibitory layer adopts a one-to-one connection, and the synaptic weights of this part of the connection are fixed and do not participate in Weight training, while the full connection from the inhibition layer to the output layer is diagonally disconnected, and the synaptic weights are updated by the unsupervised inhibitory plasticity rule.

所述基于多突触可塑性脉冲神经网络快速记忆编码方法,如图2所示,具体包括以下步骤:The fast memory coding method based on the polysynaptic plasticity spiking neural network, as shown in Figure 2, specifically includes the following steps:

步骤一:基于层级编码策略将外部刺激转换为输入脉冲序列;Step 1: Convert external stimuli into input pulse trains based on a hierarchical coding strategy;

外部刺激的实值数据在输入到脉冲神经网络前首先被编码成脉冲模式,本实施例的外部刺激为从MNIST手写体数字集的‘0’、‘1’、‘2’、‘3’、‘4’、‘5’、‘6’七类中任意选取的七张图像,图像尺寸为28 pixel *28 pixel,切分为49个4 pixel *4 pixel大小的接收域。如图3展示了数字‘0’的编码过程,首先经由简单细胞Simple Cell,S层和复杂细胞ComplexCell,C层构成的两层结构对其进行特征提取,然后通过延迟编码方法对提取到的特征进行编码,其中:The real-valued data of external stimuli are first encoded into a spiking pattern before being input to the spiking neural network. The external stimuli in this embodiment are '0', '1', '2', '3', ' from the MNIST handwritten digit set Seven images randomly selected from the seven categories of 4', '5', and '6', the image size is 28 pixel * 28 pixel, and is divided into 49 receptive fields of 4 pixel * 4 pixel size. Figure 3 shows the encoding process of the number '0'. First, the features are extracted through the two-layer structure composed of the simple cell, the S layer and the complex cell, the C layer, and then the extracted features are extracted by the delay encoding method. to encode, where:

S层:提取图像的边缘方向特征,S层的滤波核由方向分别为

Figure 632804DEST_PATH_IMAGE001
Figure 539449DEST_PATH_IMAGE002
Figure 380366DEST_PATH_IMAGE003
Figure 123195DEST_PATH_IMAGE004
的4个 Gabor滤波器组成,其模拟视觉皮层的感受野对接收域图像执行滤波操作,滤波后获得49*4 个尺寸为4 pixel *4 pixel的特征图; S layer: extract the edge direction features of the image, the filter kernel of the S layer is divided by the direction
Figure 632804DEST_PATH_IMAGE001
,
Figure 539449DEST_PATH_IMAGE002
,
Figure 380366DEST_PATH_IMAGE003
,
Figure 123195DEST_PATH_IMAGE004
It is composed of 4 Gabor filters, which simulate the receptive field of the visual cortex to perform filtering operations on the receptive field image, and obtain 49*4 feature maps with a size of 4 pixel * 4 pixel after filtering;

C层:先对S层输出的每个特征图执行相加操作获得尺寸为49*4的特征图,对特征图执行线性归一化将数据分布到[0,1]的区间内,然后按行执行最大竞争操作,即每行只保留强度最强的特征作为该位置上最匹配的方向特征,其它方向的特征值被置为0,获得尺寸为49*4的特征图的稀疏化表达。Layer C: First perform an addition operation on each feature map output by layer S to obtain a feature map with a size of 49*4, perform linear normalization on the feature map to distribute the data into the interval [0, 1], and then press The row performs the maximum competition operation, that is, each row only retains the feature with the strongest strength as the most matching directional feature at the position, and the eigenvalues of other directions are set to 0 to obtain a sparse representation of a feature map with a size of 49*4.

获得的特征通过延迟编码转换为输入脉冲序列,按照以下公式计算:The obtained features are transformed into the input pulse train by delay encoding, which is calculated according to the following formula:

Figure 97973DEST_PATH_IMAGE047
Figure 97973DEST_PATH_IMAGE047

其中,

Figure 468911DEST_PATH_IMAGE006
是第i个神经元的发放时间,T=20e-3是编码窗口的长度,
Figure 356096DEST_PATH_IMAGE007
是第i个特征的 强度值。 in,
Figure 468911DEST_PATH_IMAGE006
is the firing time of the ith neuron, T=20e-3 is the length of the coding window,
Figure 356096DEST_PATH_IMAGE007
is the intensity value of the ith feature.

每张图像的脉冲序列为49*4=196个,不同的图像由不同的脉冲序列表示,编码后的脉冲序列经由输入层进入网络,输入层神经元总数量为7*49*4=1372个。The pulse sequence of each image is 49*4=196, and different images are represented by different pulse sequences. The encoded pulse sequence enters the network through the input layer, and the total number of neurons in the input layer is 7*49*4=1372 .

步骤二:收到输入脉冲后,基于改进的SRM模型更新输出层神经元的膜电位;Step 2: After receiving the input pulse, update the membrane potential of the neurons in the output layer based on the improved SRM model;

本实施例中,脉冲神经网络的神经元采用改进的SRM模型,通过连续的核函数提供了对脉冲神经元动力学的简明描述,有利于清晰展示多个输入源对神经元发放的贡献。相比于原始的SRM模型,改进的SRM模型考虑了更多的生理细节,在不应核中引入后去极化电位来描述神经元发放脉冲后膜电位缓慢上升的过程:In this embodiment, the neurons of the spiking neural network adopt an improved SRM model, which provides a concise description of the dynamics of the spiking neurons through a continuous kernel function, which is conducive to clearly showing the contributions of multiple input sources to neuron firing. Compared with the original SRM model, the improved SRM model considers more physiological details, and introduces a post-depolarization potential in the refractory nucleus to describe the process of the slow rise of the membrane potential after the neuron fires a pulse:

Figure 179695DEST_PATH_IMAGE048
Figure 179695DEST_PATH_IMAGE048

Figure 692585DEST_PATH_IMAGE049
是神经元i在
Figure 918030DEST_PATH_IMAGE006
处发放脉冲后膜电位的后去极化过程,
Figure 241695DEST_PATH_IMAGE026
是后去极化的 幅值,
Figure 818170DEST_PATH_IMAGE006
是神经元i的脉冲发放时刻,
Figure 889679DEST_PATH_IMAGE032
是时间常量。
Figure 692585DEST_PATH_IMAGE049
is the neuron i in
Figure 918030DEST_PATH_IMAGE006
The post-depolarization process of the membrane potential after the pulse is fired,
Figure 241695DEST_PATH_IMAGE026
is the magnitude of the post-depolarization,
Figure 818170DEST_PATH_IMAGE006
is the pulse firing time of neuron i,
Figure 889679DEST_PATH_IMAGE032
is a time constant.

引入了

Figure 969631DEST_PATH_IMAGE008
振荡这种同步神经活动的重要脑电波,将其作为外部输入电流注入到输 出神经元中,被建模为余弦波。 introduced
Figure 969631DEST_PATH_IMAGE008
The vital brain waves that oscillate this synchronized neural activity, injected as external input currents into output neurons, are modeled as cosine waves.

Figure 464197DEST_PATH_IMAGE050
Figure 464197DEST_PATH_IMAGE050

其中,

Figure 652602DEST_PATH_IMAGE020
Figure 382660DEST_PATH_IMAGE008
振荡源,
Figure 192485DEST_PATH_IMAGE027
Figure 248165DEST_PATH_IMAGE008
振荡的幅值,
Figure 658287DEST_PATH_IMAGE051
Figure 926457DEST_PATH_IMAGE008
振荡的频率,
Figure 121946DEST_PATH_IMAGE033
是初始相位。在所述 的脉冲神经网络中,每张图像编码后的输入脉冲序列按照‘0’、‘1’、‘2’、‘3’、‘4’、‘5’、‘6’ 的顺序分别在
Figure 207583DEST_PATH_IMAGE008
振荡不同周期的波谷处输入网络,如图5(a)所示。网络中输出层神经元接收 的来自输入层的兴奋性输入表达如下: in,
Figure 652602DEST_PATH_IMAGE020
Yes
Figure 382660DEST_PATH_IMAGE008
oscillator source,
Figure 192485DEST_PATH_IMAGE027
Yes
Figure 248165DEST_PATH_IMAGE008
the amplitude of the oscillation,
Figure 658287DEST_PATH_IMAGE051
Yes
Figure 926457DEST_PATH_IMAGE008
the frequency of oscillation,
Figure 121946DEST_PATH_IMAGE033
is the initial phase. In the described spiking neural network, the encoded input pulse sequence of each image is in the order of '0', '1', '2', '3', '4', '5', '6', respectively.
Figure 207583DEST_PATH_IMAGE008
The input network is oscillated at the troughs of different periods, as shown in Fig. 5(a). The excitatory input from the input layer received by the output layer neurons in the network is expressed as follows:

Figure 714788DEST_PATH_IMAGE010
Figure 714788DEST_PATH_IMAGE010

其中,

Figure 662015DEST_PATH_IMAGE016
是输入层神经元的总输入,
Figure 836644DEST_PATH_IMAGE021
是输入层神经元j到输出层神经元i之间 的突触权值,
Figure 96112DEST_PATH_IMAGE024
是归一化因子,
Figure 90613DEST_PATH_IMAGE028
是突触前神经元j的脉冲发放时刻,
Figure 841531DEST_PATH_IMAGE029
Figure 870667DEST_PATH_IMAGE030
是时间常量。 in,
Figure 662015DEST_PATH_IMAGE016
is the total input of neurons in the input layer,
Figure 836644DEST_PATH_IMAGE021
is the synaptic weight between the input layer neuron j to the output layer neuron i,
Figure 96112DEST_PATH_IMAGE024
is the normalization factor,
Figure 90613DEST_PATH_IMAGE028
is the pulse firing time of the presynaptic neuron j,
Figure 841531DEST_PATH_IMAGE029
,
Figure 870667DEST_PATH_IMAGE030
is a time constant.

当输出层神经元发放脉冲后,该脉冲会传递给同层的其他神经元,以及抑制层神经元。输出层神经元收到的来自同层的其他神经元的兴奋输入表达如下:When a neuron in the output layer emits a pulse, the pulse will be transmitted to other neurons in the same layer, as well as neurons in the inhibitory layer. The excitation input received by neurons in the output layer from other neurons in the same layer is expressed as follows:

Figure 32527DEST_PATH_IMAGE011
Figure 32527DEST_PATH_IMAGE011

Figure 779903DEST_PATH_IMAGE017
是同层神经元的总输入,
Figure 334512DEST_PATH_IMAGE022
是输出层内突触前神经元j与突触后神经元i 之间的突触权值。
Figure 779903DEST_PATH_IMAGE017
is the total input of neurons in the same layer,
Figure 334512DEST_PATH_IMAGE022
is the synaptic weight between the presynaptic neuron j and the postsynaptic neuron i in the output layer.

收到的来自抑制神经元的抑制反馈表达如下:The inhibitory feedback received from inhibitory neurons is expressed as follows:

Figure 342788DEST_PATH_IMAGE012
Figure 342788DEST_PATH_IMAGE012

Figure 816495DEST_PATH_IMAGE018
是抑制反馈,
Figure 660954DEST_PATH_IMAGE023
是抑制神经元到输出层神经元之间的突触权重,
Figure 878309DEST_PATH_IMAGE025
是抑 制反馈的幅值,
Figure 6671DEST_PATH_IMAGE006
是输出层神经元的脉冲发放时刻,
Figure 385700DEST_PATH_IMAGE031
是时间常量。
Figure 816495DEST_PATH_IMAGE018
is the inhibitory feedback,
Figure 660954DEST_PATH_IMAGE023
is the synaptic weight between the inhibitory neuron and the output layer neuron,
Figure 878309DEST_PATH_IMAGE025
is the magnitude of the suppression feedback,
Figure 6671DEST_PATH_IMAGE006
is the firing moment of the output layer neuron,
Figure 385700DEST_PATH_IMAGE031
is a time constant.

综上所述,模型中输出层神经元的膜电位按如下公式计算:In summary, the membrane potential of the output layer neurons in the model is calculated according to the following formula:

Figure 983034DEST_PATH_IMAGE052
Figure 983034DEST_PATH_IMAGE052

本实施例中,

Figure 125784DEST_PATH_IMAGE053
3,
Figure 984019DEST_PATH_IMAGE054
Figure 674894DEST_PATH_IMAGE055
Figure 618579DEST_PATH_IMAGE056
,,
Figure 302370DEST_PATH_IMAGE057
Figure 890478DEST_PATH_IMAGE058
Figure 876888DEST_PATH_IMAGE059
Figure 698083DEST_PATH_IMAGE060
,仿真时间间隔
Figure 936297DEST_PATH_IMAGE061
,一次仿真迭代时长
Figure 503545DEST_PATH_IMAGE062
,迭代次数设置为20。 In this embodiment,
Figure 125784DEST_PATH_IMAGE053
3,
Figure 984019DEST_PATH_IMAGE054
,
Figure 674894DEST_PATH_IMAGE055
,
Figure 618579DEST_PATH_IMAGE056
,,
Figure 302370DEST_PATH_IMAGE057
,
Figure 890478DEST_PATH_IMAGE058
,
Figure 876888DEST_PATH_IMAGE059
,
Figure 698083DEST_PATH_IMAGE060
, the simulation time interval
Figure 936297DEST_PATH_IMAGE061
, the duration of one simulation iteration
Figure 503545DEST_PATH_IMAGE062
, the number of iterations is set to 20.

步骤三:使用监督群体Tempotron更新输入到输出层间的突触权值,快速激活一群输出神经元记忆输入;Step 3: Use the supervised group Tempotron to update the synaptic weights between the input and output layers, and quickly activate a group of output neurons to memorize the input;

如图4所示,是群体Tempotron学习策略的示意图。在每次迭代学习过程中,对于每个输入脉冲模式,若输出层响应该输入模式的神经元个数还差K个才达到预设的神经群体大小,将加强从输入层的发放神经元到输出层前K个最大膜电位对应的未发放神经元之间的突触权值,层间权值更新公式如下:As shown in Figure 4, it is a schematic diagram of the group Tempotron learning strategy. In each iterative learning process, for each input pulse pattern, if the number of neurons in the output layer responding to the input pattern is less than K before reaching the preset neural population size, it will strengthen the firing neurons from the input layer to the The synaptic weight between the unfired neurons corresponding to the first K maximum membrane potentials in the output layer, the update formula of the weight between the layers is as follows:

Figure 785490DEST_PATH_IMAGE063
Figure 785490DEST_PATH_IMAGE063

其中,

Figure 969347DEST_PATH_IMAGE035
是输入层到输出层间连接权值的更新量,
Figure 480094DEST_PATH_IMAGE064
是层间权值的学习 率,d是期望的输出标记包括:0 或1,
Figure 167427DEST_PATH_IMAGE037
是突触前神经元j脉冲发放时刻,K是距离达到神经 群体大小还需激活的神经元数目,
Figure 623204DEST_PATH_IMAGE038
是第k个最活跃突触后神经元最大膜电位对应时刻。 in,
Figure 969347DEST_PATH_IMAGE035
is the update amount of the connection weight between the input layer and the output layer,
Figure 480094DEST_PATH_IMAGE064
is the learning rate of the weights between layers, d is the desired output label including: 0 or 1,
Figure 167427DEST_PATH_IMAGE037
is the pulse firing time of the presynaptic neuron j, K is the number of neurons that need to be activated until the distance reaches the size of the neural population,
Figure 623204DEST_PATH_IMAGE038
is the time corresponding to the maximum membrane potential of the kth most active postsynaptic neuron.

本实施例中,

Figure 763198DEST_PATH_IMAGE065
,输出神经元的数目为100,神经群体内的神经元个数为 12。学习前,层间初始权值服从正态分布随机初始化,均值为0.001,标准差为0.002。如图6 (a)所示,学习后,对于每个输入模式,输入层的发放神经元与输出层的响应神经群体间的 权值被增强。由于层间突触权值的增强,在输入脉冲模式呈现时刻附近,输出层呈现神经群 体发放活动编码输入,如图5(b)的‘0’、‘1’、‘2’、‘3’、‘4’、‘5’、‘6’对应的活动所示。 In this embodiment,
Figure 763198DEST_PATH_IMAGE065
, the number of output neurons is 100, and the number of neurons in the neural population is 12. Before learning, the initial weights between layers are randomly initialized from a normal distribution, with a mean of 0.001 and a standard deviation of 0.002. As shown in Fig. 6(a), after learning, for each input pattern, the weights between the firing neurons in the input layer and the responding neural population in the output layer are enhanced. Due to the enhancement of synaptic weights between layers, near the moment when the input pulse pattern is presented, the output layer presents the neural group firing activity encoding input, as shown in Figure 5(b) '0', '1', '2', '3' , '4', '5', '6' corresponding activities are shown.

步骤四:输出层神经元激活后,使用无监督STDP更新层内激活神经元间的突触权值,形成增强的循环子网络存储记忆;Step 4: After the neurons in the output layer are activated, use unsupervised STDP to update the synaptic weights between the activated neurons in the layer to form an enhanced recurrent sub-network storage memory;

输出层神经元激活后,当突触前神经元与突触后神经元的发放时间差在[

Figure 343215DEST_PATH_IMAGE066
]区间内,突触权值将被更新。突触前神经元先于突触后神经元发放时,突触权 值被增强;突触前神经元晚于突触后神经元发放时,突触权值被减弱。层内权值更新公式如 下: After the output layer neurons are activated, when the firing time difference between the presynaptic neuron and the postsynaptic neuron is [
Figure 343215DEST_PATH_IMAGE066
] interval, the synaptic weights will be updated. When the presynaptic neuron fires before the postsynaptic neuron, the synaptic weight is enhanced; when the presynaptic neuron fires later than the postsynaptic neuron, the synaptic weight is weakened. The weight update formula in the layer is as follows:

Figure 885055DEST_PATH_IMAGE039
Figure 885055DEST_PATH_IMAGE039

其中,

Figure 508803DEST_PATH_IMAGE040
是输出层内连接权值的更新量,
Figure 136093DEST_PATH_IMAGE041
Figure 988643DEST_PATH_IMAGE067
分别表示增强和减弱的幅值,
Figure 650568DEST_PATH_IMAGE037
Figure 445218DEST_PATH_IMAGE006
分别表示突触前神经元j和突触后神经元i脉冲发放时刻,
Figure 559804DEST_PATH_IMAGE042
Figure 481624DEST_PATH_IMAGE043
是时间常数,
Figure 122690DEST_PATH_IMAGE044
表示NMDA受体的衰变时间常量。 in,
Figure 508803DEST_PATH_IMAGE040
is the update amount of the connection weights in the output layer,
Figure 136093DEST_PATH_IMAGE041
and
Figure 988643DEST_PATH_IMAGE067
represent the enhancement and weakening amplitudes, respectively,
Figure 650568DEST_PATH_IMAGE037
and
Figure 445218DEST_PATH_IMAGE006
are the pulse firing times of presynaptic neuron j and postsynaptic neuron i, respectively,
Figure 559804DEST_PATH_IMAGE042
,
Figure 481624DEST_PATH_IMAGE043
is the time constant,
Figure 122690DEST_PATH_IMAGE044
represents the decay time constant of NMDA receptors.

本实施例中,

Figure 229186DEST_PATH_IMAGE068
Figure 706435DEST_PATH_IMAGE069
。学习前,层内循环连 接的权值随机初始化,初始权值在[0,1e-5]区间内;学习后,响应输入层每个输入模式的神 经元群体的内部连接被增强,形成循环子网络,如图6(b)所示。自联想记忆存储在神经群体 内增强的循环连接权值中。 In this embodiment,
Figure 229186DEST_PATH_IMAGE068
,
Figure 706435DEST_PATH_IMAGE069
. Before learning, the weights of the cyclic connections in the layer are randomly initialized, and the initial weights are in the interval [0, 1e-5]; after learning, the internal connections of the neuron population responding to each input pattern of the input layer are enhanced to form a cyclic sub-layer. network, as shown in Figure 6(b). Auto-associative memories are stored in the weights of recurrent connections that are enhanced within the neural population.

步骤五:在执行步骤四的同时,使用无监督抑制可塑性更新抑制层到输出层间的突触权值,抑制反馈保障记忆不同输入的神经群体发放时间上的分离。Step 5: While performing Step 4, use unsupervised inhibitory plasticity to update the synaptic weights between the inhibitory layer and the output layer, and the inhibitory feedback ensures the separation of the firing time of neural groups that remember different inputs.

输出层到抑制层采用一对一的连接方式,这部分连接的突触权值固定,不参与权 值训练。抑制层到输出层采用对角无连接的全连接方式,权值矩阵对角线上的值为0,其余 的值均为1。当前时刻t与输出神经元的最近一次发放的时间差在[

Figure RE-689912DEST_PATH_IMAGE070
]区间内,输出层 对抑制神经元有输入的神经元的抑制权值将被减小,而对抑制神经元没有输入的抑制权值 不变。抑制突触可塑性的更新公式如下: The output layer to the inhibition layer adopts a one-to-one connection. The synaptic weights of this part of the connection are fixed and do not participate in weight training. From the suppression layer to the output layer, the diagonally unconnected full connection method is adopted. The value on the diagonal of the weight matrix is 0, and the rest of the values are 1. The time difference between the current time t and the last firing of the output neuron is [
Figure RE-689912DEST_PATH_IMAGE070
] interval, the inhibitory weight of the output layer to the neuron with input to the inhibitory neuron will be reduced, while the inhibitory weight of the inhibitory neuron without input will remain unchanged. The updated formula for inhibiting synaptic plasticity is as follows:

Figure RE-476602DEST_PATH_IMAGE045
Figure RE-476602DEST_PATH_IMAGE045

其中,

Figure RE-726318DEST_PATH_IMAGE046
是抑制层到输出层间连接权值的更新量,
Figure RE-180433DEST_PATH_IMAGE006
表示突触后神经元i的脉 冲发放时刻。输出层神经群体在受到刺激发放脉冲后,来自同层的神经群体内的兴奋性输 入、放电后的去极化电流以及外部
Figure RE-224612DEST_PATH_IMAGE008
振荡输入使得神经群体在后续没有刺激输入的情况下 能够在随后的
Figure RE-498599DEST_PATH_IMAGE008
振荡周期中持续发放,从而维持短期记忆。在输出层神经元发放后,来自抑 制层神经元的抑制反馈,阻止神经群体在发放后连续发放,避免了不同记忆项之间的互相 干扰,如图5(b)‘0’、‘01’、‘012’、‘0123’、‘01234’、‘012345’、‘0123456’对应的活动所示, 在输入刺激出现的随后的
Figure RE-20847DEST_PATH_IMAGE008
振荡周期的波峰附近,响应不同输入刺激的神经群体按顺序在 不同的时刻发放,抑制反馈保障记忆不同输入的神经群体发放时间上的分离。 in,
Figure RE-726318DEST_PATH_IMAGE046
is the update amount of the connection weight between the suppression layer and the output layer,
Figure RE-180433DEST_PATH_IMAGE006
represents the timing of the firing of the postsynaptic neuron i. After the neural population in the output layer is stimulated to emit pulses, the excitatory input from the neural population in the same layer, the post-discharge depolarization current, and the external
Figure RE-224612DEST_PATH_IMAGE008
Oscillating input enables neural populations to respond to subsequent
Figure RE-498599DEST_PATH_IMAGE008
Continuous firing during oscillation cycles, thereby maintaining short-term memory. After the neurons in the output layer are fired, the inhibitory feedback from the neurons in the inhibitory layer prevents the neural group from firing continuously after firing, avoiding the mutual interference between different memory items, as shown in Figure 5(b) '0', '01' , '012', '0123', '01234', '012345', '0123456' corresponding to the activities shown in the following
Figure RE-20847DEST_PATH_IMAGE008
Near the peak of the oscillation cycle, the neural populations responding to different input stimuli fire in sequence at different times, and the inhibitory feedback ensures the separation of the firing time of the neural populations that remember different inputs.

本实施例中,抑制神经元的数目为100,与输出神经元的数目一样,

Figure 788694DEST_PATH_IMAGE071
。 In this embodiment, the number of inhibitory neurons is 100, which is the same as the number of output neurons.
Figure 788694DEST_PATH_IMAGE071
.

如图7所示的是层间突触权值的收敛速度,在群体Tempotron 学习策略下,快速激活了一群输出神经元对输入进行记忆编码,层间突触权值只经过4次学习就收敛了。实验结果表明本发明提出的基于多突触可塑性的脉冲神经网络快速记忆编码方法,能够有效地提升记忆的编码速度与稳定性。Figure 7 shows the convergence speed of the synaptic weights between layers. Under the group Tempotron learning strategy, a group of output neurons are quickly activated to encode the input in memory, and the synaptic weights between layers converge after only 4 times of learning. . The experimental results show that the fast memory coding method based on the multi-synaptic plasticity of the spiking neural network proposed by the present invention can effectively improve the coding speed and stability of memory.

与前述基于多突触可塑性脉冲神经网络快速记忆编码方法的实施例相对应,本发明还提供了基于多突触可塑性脉冲神经网络快速记忆编码装置的实施例。Corresponding to the foregoing embodiments of the fast memory coding method based on the polysynaptic plasticity spiking neural network, the present invention also provides an embodiment of the fast memory coding device based on the polysynaptic plasticity spiking neural network.

参见图8,本发明实施例提供的一种基于多突触可塑性脉冲神经网络快速记忆编码装置,包括一个或多个处理器,用于实现上述实施例中的基于多突触可塑性脉冲神经网络快速记忆编码方法。Referring to FIG. 8 , an embodiment of the present invention provides a device for fast memory coding based on a spiking neural network with polysynaptic plasticity, including one or more processors for implementing the fast memory coding based on a spiking neural network with polysynaptic plasticity in the above embodiment Memory coding method.

本发明基于多突触可塑性脉冲神经网络快速记忆编码装置的实施例可以应用在任意具备数据处理能力的设备上,该任意具备数据处理能力的设备可以为诸如计算机等设备或装置。装置实施例可以通过软件实现,也可以通过硬件或者软硬件结合的方式实现。以软件实现为例,作为一个逻辑意义上的装置,是通过其所在任意具备数据处理能力的设备的处理器将非易失性存储器中对应的计算机程序指令读取到内存中运行形成的。从硬件层面而言,如图8所示,为本发明基于多突触可塑性脉冲神经网络快速记忆编码装置所在任意具备数据处理能力的设备的一种硬件结构图,除了图8所示的处理器、内存、网络接口、以及非易失性存储器之外,实施例中装置所在的任意具备数据处理能力的设备通常根据该任意具备数据处理能力的设备的实际功能,还可以包括其他硬件,对此不再赘述。The embodiments of the present invention based on the multi-synaptic plasticity spiking neural network fast memory coding device can be applied to any device with data processing capability, which can be a device or device such as a computer. The apparatus embodiment may be implemented by software, or may be implemented by hardware or a combination of software and hardware. Taking software implementation as an example, a device in a logical sense is formed by reading the corresponding computer program instructions in the non-volatile memory into the memory through the processor of any device with data processing capability where it is located. From the perspective of hardware, as shown in FIG. 8 , it is a hardware structure diagram of any device with data processing capability where the multi-synaptic plasticity spiking neural network fast memory coding device of the present invention is located, except for the processor shown in FIG. 8 , memory, network interface, and non-volatile memory, any device with data processing capability where the device is located in the embodiment may also include other hardware, usually according to the actual function of any device with data processing capability. No longer.

上述装置中各个单元的功能和作用的实现过程具体详见上述方法中对应步骤的实现过程,在此不再赘述。For details of the implementation process of the functions and functions of each unit in the above device, please refer to the implementation process of the corresponding steps in the above method, which will not be repeated here.

对于装置实施例而言,由于其基本对应于方法实施例,所以相关之处参见方法实施例的部分说明即可。以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本发明方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。For the apparatus embodiments, since they basically correspond to the method embodiments, reference may be made to the partial descriptions of the method embodiments for related parts. The device embodiments described above are only illustrative, wherein the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in One place, or it can be distributed over multiple network elements. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of the present invention. Those of ordinary skill in the art can understand and implement it without creative effort.

本发明实施例还提供一种计算机可读存储介质,其上存储有程序,该程序被处理器执行时,实现上述实施例中的基于多突触可塑性脉冲神经网络快速记忆编码方法。Embodiments of the present invention further provide a computer-readable storage medium on which a program is stored, and when the program is executed by a processor, implements the method for fast memory coding based on a multi-synaptic plasticity spiking neural network in the above embodiment.

所述计算机可读存储介质可以是前述任一实施例所述的任意具备数据处理能力的设备的内部存储单元,例如硬盘或内存。所述计算机可读存储介质也可以是风力发电机的外部存储设备,例如所述设备上配备的插接式硬盘、智能存储卡(Smart Media Card,SMC)、SD卡、闪存卡(Flash Card)等。进一步的,所述计算机可读存储介质还可以既包括任意具备数据处理能力的设备的内部存储单元也包括外部存储设备。所述计算机可读存储介质用于存储所述计算机程序以及所述任意具备数据处理能力的设备所需的其他程序和数据,还可以用于暂时地存储已经输出或者将要输出的数据。The computer-readable storage medium may be an internal storage unit of any device with data processing capability described in any of the foregoing embodiments, such as a hard disk or a memory. The computer-readable storage medium may also be an external storage device of the wind turbine, such as a plug-in hard disk, a smart memory card (Smart Media Card, SMC), an SD card, and a flash memory card (Flash Card) equipped on the device. Wait. Further, the computer-readable storage medium may also include both an internal storage unit of any device with data processing capability and an external storage device. The computer-readable storage medium is used to store the computer program and other programs and data required by the device with data processing capability, and can also be used to temporarily store data that has been output or will be output.

以上所述,仅为本发明的优选实施案例,并非对本发明做任何形式上的限制。虽然前文对本发明的实施过程进行了详细说明,对于熟悉本领域的人员来说,其依然可以对前述各实例记载的技术方案进行修改,或者对其中部分技术特征进行同等替换。凡在本发明精神和原则之内所做修改、同等替换等,均应包含在本发明的保护范围之内。The above descriptions are only preferred implementation examples of the present invention, and do not limit the present invention in any form. Although the implementation process of the present invention has been described in detail above, those skilled in the art can still modify the technical solutions described in the foregoing examples, or perform equivalent replacements for some of the technical features. All modifications, equivalent replacements, etc. made within the spirit and principle of the present invention shall be included within the protection scope of the present invention.

Claims (8)

1.基于多突触可塑性脉冲神经网络快速记忆编码方法,其特征在于,包括以下步骤:1. based on polysynaptic plasticity impulse neural network fast memory coding method, is characterized in that, comprises the following steps: 步骤一:基于层级编码策略将外部刺激转换为输入脉冲序列;Step 1: Convert external stimuli into input pulse trains based on a hierarchical coding strategy; 步骤二:脉冲神经网络收到输入脉冲后,基于改进的SRM模型更新输出层神经元的膜电位;Step 2: After receiving the input pulse, the spiking neural network updates the membrane potential of the neurons in the output layer based on the improved SRM model; 步骤三:使用监督群体Tempotron更新输入到输出层间的突触权值,激活输出层神经元记忆输入;Step 3: Use the supervised group Tempotron to update the synaptic weights between the input and the output layer, and activate the output layer neurons to memorize the input; 步骤四:输出层神经元激活后,使用无监督STDP更新层内激活神经元间的突触权值,形成增强的循环子网络存储记忆;Step 4: After the neurons in the output layer are activated, use unsupervised STDP to update the synaptic weights between the activated neurons in the layer to form an enhanced recurrent sub-network storage memory; 步骤五:在执行步骤四的同时,使用无监督抑制突触可塑性,更新抑制层到输出层间的突触权值,抑制反馈保障记忆不同输入的神经群体发放时间上的分离。Step 5: While performing Step 4, use unsupervised inhibition of synaptic plasticity to update the synaptic weights between the inhibition layer and the output layer, and the inhibition feedback ensures the separation of the firing time of neural groups that remember different inputs. 2.根据权利要求1所述的基于多突触可塑性脉冲神经网络快速记忆编码方法,其特征在于,所述步骤一具体为:2. the fast memory coding method based on polysynaptic plasticity impulse neural network according to claim 1, is characterized in that, described step one is specifically: 所述外部刺激为从MNIST手写体数字集的‘0’、‘1’、‘2’、‘3’、‘4’、‘5’、‘6’七类中任意选取的七张图像,每张图像的尺寸为28pixel*28pixel,将每张图像均匀切分,获得49个4pixel*4pixel大小的接收域;The external stimuli are seven images arbitrarily selected from seven categories of '0', '1', '2', '3', '4', '5', and '6' in the MNIST handwritten digit set. The size of the image is 28pixel*28pixel, and each image is evenly divided to obtain 49 receptive fields of 4pixel*4pixel size; 所述层级编码策略使用由简单细胞Simple Cell的S层和复杂细胞Complex Cell的C层构成的两层结构对外部刺激进行特征提取,然后通过延迟编码方法对提取到的特征进行编码,其中:The hierarchical encoding strategy uses a two-layer structure composed of the S layer of the simple cell and the C layer of the complex cell to extract the features of the external stimulus, and then encode the extracted features through the delay encoding method, where: S层:提取图像的边缘方向特征,S层的滤波核由4个方向分别为
Figure 117039DEST_PATH_IMAGE001
Figure 72357DEST_PATH_IMAGE002
Figure 691557DEST_PATH_IMAGE003
Figure 414050DEST_PATH_IMAGE004
的Gabor滤 波器组成,4个方向的所述Gabor滤波器均模拟视觉皮层的感受野对每个接收域对应的图像 块分别执行滤波操作,滤波后获得49*4个尺寸为4pixel *4 pixel的特征图;
S layer: Extract the edge direction features of the image. The filter kernel of the S layer consists of four directions:
Figure 117039DEST_PATH_IMAGE001
,
Figure 72357DEST_PATH_IMAGE002
,
Figure 691557DEST_PATH_IMAGE003
,
Figure 414050DEST_PATH_IMAGE004
The Gabor filters in the four directions all simulate the receptive field of the visual cortex and perform filtering operations on the image blocks corresponding to each receptive field. feature map;
C层:先对S层输出的每个特征图执行相加操作获得尺寸为49 pixel *4 pixel的特征图,对特征图执行线性归一化将数据分布到[0,1]的区间内,然后按行执行最大竞争操作,所述按行执行最大竞争操作包括:每行保留强度最强的特征作为该位置上最匹配的方向特征,其它方向的特征值被置为0,获得尺寸为4 pixel 9*4 pixel的特征图的稀疏化表达;Layer C: First perform an addition operation on each feature map output by layer S to obtain a feature map with a size of 49 pixel * 4 pixel, and perform linear normalization on the feature map to distribute the data into the interval [0, 1]. Then perform the maximum competition operation by row, the performing maximum competition operation by row includes: each row retains the feature with the strongest strength as the most matching directional feature at the position, the feature values of other directions are set to 0, and the obtained size is 4 The sparse expression of the feature map of pixel 9*4 pixel; 获得的特征通过延迟编码转换为输入脉冲序列,按照以下公式计算:The obtained features are transformed into the input pulse train by delay encoding, which is calculated according to the following formula:
Figure 753895DEST_PATH_IMAGE005
Figure 753895DEST_PATH_IMAGE005
其中,
Figure 953932DEST_PATH_IMAGE006
是第i个神经元的发放时间,T是编码窗口的长度,
Figure 868668DEST_PATH_IMAGE007
是第i个特征的强度值。
in,
Figure 953932DEST_PATH_IMAGE006
is the firing time of the ith neuron, T is the length of the encoding window,
Figure 868668DEST_PATH_IMAGE007
is the intensity value of the ith feature.
3.根据权利要求1所述的基于多突触可塑性脉冲神经网络快速记忆编码方法,其特征在于,所述步骤二具体为:3. the fast memory coding method based on polysynaptic plasticity impulse neural network according to claim 1, is characterized in that, described step 2 is specifically: 所述改进的SRM模型在不应核中引入后去极化电位,同时引入
Figure 888576DEST_PATH_IMAGE008
振荡作为外部输入电流 注入到输出层神经元中,在所述的脉冲神经网络中,当输出层其一神经元发放脉冲后,该脉 冲会传递给同层的其他神经元以及抑制层神经元,模型中输出层神经元的膜电位收到来自 输入层以及同层的其他神经元的兴奋输入和来自抑制神经元的抑制反馈,则模型中输出层 神经元的膜电位按如下公式计算:
The improved SRM model introduces a post-depolarization potential in the refractory nucleus while introducing
Figure 888576DEST_PATH_IMAGE008
Oscillation is injected into the neurons of the output layer as external input currents. In the above-mentioned spiking neural network, when a neuron in the output layer emits a pulse, the pulse will be transmitted to other neurons in the same layer and neurons in the inhibitory layer. The membrane potential of the output layer neurons in the model receives the excitatory input from the input layer and other neurons in the same layer and the inhibitory feedback from the inhibitory neurons, then the membrane potential of the output layer neurons in the model is calculated according to the following formula:
Figure 32113DEST_PATH_IMAGE009
Figure 32113DEST_PATH_IMAGE009
Figure 86657DEST_PATH_IMAGE010
Figure 86657DEST_PATH_IMAGE010
Figure 906714DEST_PATH_IMAGE011
Figure 906714DEST_PATH_IMAGE011
Figure 679498DEST_PATH_IMAGE012
Figure 679498DEST_PATH_IMAGE012
Figure 626725DEST_PATH_IMAGE013
Figure 626725DEST_PATH_IMAGE013
Figure 535775DEST_PATH_IMAGE014
Figure 535775DEST_PATH_IMAGE014
其中,
Figure 792313DEST_PATH_IMAGE015
是神经元i的膜电位,
Figure 786814DEST_PATH_IMAGE016
是输入层神经元的总输入,
Figure 537732DEST_PATH_IMAGE017
是同层神经元的总 输入,
Figure 485310DEST_PATH_IMAGE018
是抑制反馈,
Figure 522536DEST_PATH_IMAGE019
是神经元i在
Figure 145279DEST_PATH_IMAGE006
处发放脉冲后膜电位的后去极化过程,
Figure 558942DEST_PATH_IMAGE020
Figure 567218DEST_PATH_IMAGE008
振荡源,
Figure 509767DEST_PATH_IMAGE021
Figure 354226DEST_PATH_IMAGE022
Figure 492952DEST_PATH_IMAGE023
分别是输入层神经元到输出层神经元、输出层内神经元、抑制神 经元到输出层神经元的突触权重,
Figure 231101DEST_PATH_IMAGE024
是归一化因子,
Figure 751075DEST_PATH_IMAGE025
Figure 941885DEST_PATH_IMAGE026
Figure 821985DEST_PATH_IMAGE027
分别是抑制反馈、后去极 化、
Figure 680220DEST_PATH_IMAGE008
振荡的幅值,
Figure 371095DEST_PATH_IMAGE028
Figure 973502DEST_PATH_IMAGE006
分别是突触前神经元j、突触后神经元i的脉冲发放时刻,
Figure 532660DEST_PATH_IMAGE029
Figure 855188DEST_PATH_IMAGE030
Figure 841598DEST_PATH_IMAGE031
Figure 397214DEST_PATH_IMAGE032
是时间常量,f是
Figure 166586DEST_PATH_IMAGE008
振荡的频率,t是时间变量,
Figure 733834DEST_PATH_IMAGE033
是初始相位。
in,
Figure 792313DEST_PATH_IMAGE015
is the membrane potential of neuron i,
Figure 786814DEST_PATH_IMAGE016
is the total input of neurons in the input layer,
Figure 537732DEST_PATH_IMAGE017
is the total input of neurons in the same layer,
Figure 485310DEST_PATH_IMAGE018
is the inhibitory feedback,
Figure 522536DEST_PATH_IMAGE019
is the neuron i in
Figure 145279DEST_PATH_IMAGE006
The post-depolarization process of the membrane potential after the pulse is fired,
Figure 558942DEST_PATH_IMAGE020
Yes
Figure 567218DEST_PATH_IMAGE008
oscillator source,
Figure 509767DEST_PATH_IMAGE021
,
Figure 354226DEST_PATH_IMAGE022
,
Figure 492952DEST_PATH_IMAGE023
are the synaptic weights from the input layer neurons to the output layer neurons, the neurons in the output layer, and the inhibitory neurons to the output layer neurons,
Figure 231101DEST_PATH_IMAGE024
is the normalization factor,
Figure 751075DEST_PATH_IMAGE025
,
Figure 941885DEST_PATH_IMAGE026
,
Figure 821985DEST_PATH_IMAGE027
Inhibitory feedback, post-depolarization,
Figure 680220DEST_PATH_IMAGE008
the amplitude of the oscillation,
Figure 371095DEST_PATH_IMAGE028
,
Figure 973502DEST_PATH_IMAGE006
are the impulse firing times of presynaptic neuron j and postsynaptic neuron i, respectively,
Figure 532660DEST_PATH_IMAGE029
,
Figure 855188DEST_PATH_IMAGE030
,
Figure 841598DEST_PATH_IMAGE031
,
Figure 397214DEST_PATH_IMAGE032
is the time constant, f is
Figure 166586DEST_PATH_IMAGE008
frequency of oscillation, t is the time variable,
Figure 733834DEST_PATH_IMAGE033
is the initial phase.
4.根据权利要求1所述的基于多突触可塑性脉冲神经网络快速记忆编码方法,其特征在于,所述步骤三中的监督群体Tempotron更新输入层和输出层间突触权值,其更新方法为:4. The method for fast memory coding based on polysynaptic plasticity spiking neural network according to claim 1, wherein the supervised group Tempotron in the step 3 updates the synaptic weights between the input layer and the output layer, and its updating method for:
Figure 15780DEST_PATH_IMAGE034
Figure 15780DEST_PATH_IMAGE034
其中,
Figure 402899DEST_PATH_IMAGE035
是输入层到输出层间连接的权值,
Figure 179225DEST_PATH_IMAGE036
是层间权值的学习率,d是期望的 输出标记包括0 或1,
Figure 866558DEST_PATH_IMAGE037
是突触前神经元j的脉冲发放时刻,K是距离达到神经群体大小还需 激活的神经元数目,
Figure 116143DEST_PATH_IMAGE038
是第k个最活跃突触后神经元最大膜电位对应时刻。
in,
Figure 402899DEST_PATH_IMAGE035
is the weight of the connection between the input layer and the output layer,
Figure 179225DEST_PATH_IMAGE036
is the learning rate of the weights between layers, d is the expected output label including 0 or 1,
Figure 866558DEST_PATH_IMAGE037
is the pulse firing time of the presynaptic neuron j, K is the number of neurons that need to be activated before reaching the size of the neural population,
Figure 116143DEST_PATH_IMAGE038
is the time corresponding to the maximum membrane potential of the kth most active postsynaptic neuron.
5.根据权利要求1所述的基于多突触可塑性脉冲神经网络快速记忆编码方法,其特征在于,所述步骤四中的无监督STDP更新层内激活神经元间的突触权值,其更新方法为:5. The method for fast memory coding based on polysynaptic plasticity spiking neural network according to claim 1, wherein the unsupervised STDP in the step 4 updates the synaptic weights between the activated neurons in the layer, which updates The method is:
Figure 131503DEST_PATH_IMAGE039
Figure 131503DEST_PATH_IMAGE039
其中,
Figure 304996DEST_PATH_IMAGE040
是输出层内连接权值的更新量,
Figure 991977DEST_PATH_IMAGE041
Figure 163195DEST_PATH_IMAGE042
分别表示增强和减弱的幅值,
Figure 790486DEST_PATH_IMAGE037
Figure 157882DEST_PATH_IMAGE006
分别表示突触前神经元j和突触后神经元i脉冲发放时刻,
Figure 554228DEST_PATH_IMAGE043
Figure 99610DEST_PATH_IMAGE044
是时间常数,
Figure 869989DEST_PATH_IMAGE045
表 示NMDA受体的衰变时间常量。
in,
Figure 304996DEST_PATH_IMAGE040
is the update amount of the connection weights in the output layer,
Figure 991977DEST_PATH_IMAGE041
and
Figure 163195DEST_PATH_IMAGE042
represent the enhancement and weakening amplitudes, respectively,
Figure 790486DEST_PATH_IMAGE037
and
Figure 157882DEST_PATH_IMAGE006
are the pulse firing times of presynaptic neuron j and postsynaptic neuron i, respectively,
Figure 554228DEST_PATH_IMAGE043
,
Figure 99610DEST_PATH_IMAGE044
is the time constant,
Figure 869989DEST_PATH_IMAGE045
represents the decay time constant of NMDA receptors.
6.根据权利要求1所述的基于多突触可塑性脉冲神经网络快速记忆编码方法,其特征在于,所述步骤五中的抑制突触可塑性的更新方法为:6. The method for fast memory coding based on polysynaptic plasticity impulse neural network according to claim 1, wherein the updating method for inhibiting synaptic plasticity in the step 5 is:
Figure 650863DEST_PATH_IMAGE046
Figure 650863DEST_PATH_IMAGE046
其中,
Figure 42661DEST_PATH_IMAGE047
是抑制层到输出层间连接权值的更新量,
Figure 804950DEST_PATH_IMAGE006
表示突触后神经元i的脉冲 发放时刻。
in,
Figure 42661DEST_PATH_IMAGE047
is the update amount of the connection weight between the suppression layer and the output layer,
Figure 804950DEST_PATH_IMAGE006
represents the timing of the firing of the postsynaptic neuron i.
7.一种基于多突触可塑性脉冲神经网络快速记忆编码装置,其特征在于,包括一个或多个处理器,用于实现权利要求1-6中任一项所述的基于多突触可塑性脉冲神经网络快速记忆编码方法。7. A fast memory coding device based on polysynaptic plasticity impulse neural network, characterized in that it comprises one or more processors for realizing the polysynaptic plasticity impulse-based according to any one of claims 1-6 Neural network fast memory coding method. 8.一种计算机可读存储介质,其特征在于,其上存储有程序,该程序被处理器执行时,实现权利要求1-6中任一项所述的基于多突触可塑性脉冲神经网络快速记忆编码方法。8. A computer-readable storage medium, characterized in that, a program is stored thereon, and when the program is executed by a processor, the fast neural network based on polysynaptic plasticity according to any one of claims 1-6 is realized. Memory coding method.
CN202111497073.2A 2021-12-09 2021-12-09 Fast memory encoding method and device based on multi-synaptic plasticity spiking neural network Active CN114118383B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111497073.2A CN114118383B (en) 2021-12-09 2021-12-09 Fast memory encoding method and device based on multi-synaptic plasticity spiking neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111497073.2A CN114118383B (en) 2021-12-09 2021-12-09 Fast memory encoding method and device based on multi-synaptic plasticity spiking neural network

Publications (2)

Publication Number Publication Date
CN114118383A true CN114118383A (en) 2022-03-01
CN114118383B CN114118383B (en) 2025-02-28

Family

ID=80364420

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111497073.2A Active CN114118383B (en) 2021-12-09 2021-12-09 Fast memory encoding method and device based on multi-synaptic plasticity spiking neural network

Country Status (1)

Country Link
CN (1) CN114118383B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114611686A (en) * 2022-05-12 2022-06-10 之江实验室 Synapse delay implementation system and method based on programmable neural mimicry core
CN115327373A (en) * 2022-04-20 2022-11-11 岱特智能科技(上海)有限公司 Hemodialysis equipment fault diagnosis method based on BP neural network and storage medium
CN115429293A (en) * 2022-11-04 2022-12-06 之江实验室 Sleep type classification method and device based on impulse neural network
CN116080688A (en) * 2023-03-03 2023-05-09 北京航空航天大学 Brain-inspiring-like intelligent driving vision assisting method, device and storage medium
CN116542291A (en) * 2023-06-27 2023-08-04 北京航空航天大学 A method and system for generating impulse memory images inspired by memory loops
CN117456577A (en) * 2023-10-30 2024-01-26 苏州大学 System and method for expression recognition based on optical pulse neural network
WO2024216856A1 (en) * 2023-04-17 2024-10-24 北京大学 Brain-like synaptic learning method, and neuromorphic hardware system based on brain-like technique

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150269483A1 (en) * 2014-03-18 2015-09-24 Panasonic Intellectual Property Management Co., Ltd. Neural network circuit and learning method for neural network circuit
CN108985447A (en) * 2018-06-15 2018-12-11 华中科技大学 A kind of hardware pulse nerve network system
CN111882064A (en) * 2020-08-03 2020-11-03 中国人民解放军国防科技大学 Method and system for implementing competitive learning mechanism of spiking neural network based on memristor
CN113298242A (en) * 2021-06-08 2021-08-24 浙江大学 Brain-computer interface decoding method based on impulse neural network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150269483A1 (en) * 2014-03-18 2015-09-24 Panasonic Intellectual Property Management Co., Ltd. Neural network circuit and learning method for neural network circuit
CN108985447A (en) * 2018-06-15 2018-12-11 华中科技大学 A kind of hardware pulse nerve network system
CN111882064A (en) * 2020-08-03 2020-11-03 中国人民解放军国防科技大学 Method and system for implementing competitive learning mechanism of spiking neural network based on memristor
CN113298242A (en) * 2021-06-08 2021-08-24 浙江大学 Brain-computer interface decoding method based on impulse neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
匡载波;王江;: "基于脑启发视觉神经元网络输电线路部件识别的研究", 电力系统及其自动化学报, vol. 32, no. 4, 3 February 2020 (2020-02-03) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115327373A (en) * 2022-04-20 2022-11-11 岱特智能科技(上海)有限公司 Hemodialysis equipment fault diagnosis method based on BP neural network and storage medium
CN114611686A (en) * 2022-05-12 2022-06-10 之江实验室 Synapse delay implementation system and method based on programmable neural mimicry core
CN115429293A (en) * 2022-11-04 2022-12-06 之江实验室 Sleep type classification method and device based on impulse neural network
CN115429293B (en) * 2022-11-04 2023-04-07 之江实验室 Sleep type classification method and device based on impulse neural network
CN116080688A (en) * 2023-03-03 2023-05-09 北京航空航天大学 Brain-inspiring-like intelligent driving vision assisting method, device and storage medium
WO2024216856A1 (en) * 2023-04-17 2024-10-24 北京大学 Brain-like synaptic learning method, and neuromorphic hardware system based on brain-like technique
CN116542291A (en) * 2023-06-27 2023-08-04 北京航空航天大学 A method and system for generating impulse memory images inspired by memory loops
CN116542291B (en) * 2023-06-27 2023-11-21 北京航空航天大学 Pulse memory image generation method and system for memory loop inspiring
CN117456577A (en) * 2023-10-30 2024-01-26 苏州大学 System and method for expression recognition based on optical pulse neural network
CN117456577B (en) * 2023-10-30 2024-04-26 苏州大学 System and method for facial expression recognition based on optical pulse neural network

Also Published As

Publication number Publication date
CN114118383B (en) 2025-02-28

Similar Documents

Publication Publication Date Title
CN114118383A (en) Multi-synaptic plasticity pulse neural network-based fast memory coding method and device
Yu et al. Spike timing or rate? Neurons learn to make decisions for both through threshold-driven plasticity
Hunsberger et al. Spiking deep networks with LIF neurons
CN103201610B (en) For providing the Method and circuits of neuromorphic-synapse device system
Tsuda et al. Memory dynamics in asynchronous neural networks
CN113272828A (en) Elastic neural network
WO2015020802A2 (en) Computed synapses for neuromorphic systems
KR20170031695A (en) Decomposing convolution operation in neural networks
TW201543382A (en) Neural network adaptation to current computational resources
EP3050004A2 (en) Methods and apparatus for implementation of group tags for neural models
Ivancevic et al. Quantum neural computation
Ma et al. A memristive neural network model with associative memory for modeling affections
CN112101535A (en) Signal processing method of pulse neuron and related device
CN114118378A (en) Hardware-friendly STDP learning method and system based on threshold adaptive neuron
Zheng et al. An introductory review of spiking neural network and artificial neural network: From biological intelligence to artificial intelligence
Ma et al. Double layers self-organized spiking neural P systems with anti-spikes for fingerprint recognition
CN107798384B (en) Iris florida classification method and device based on evolvable pulse neural network
Ravichandran et al. Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks
CN109635942B (en) Brain excitation state and inhibition state imitation working state neural network circuit structure and method
Thorpe Timing, Spikes, and the Brain
KR102535635B1 (en) Neuromorphic computing device
Li et al. A review on synergistic learning
Xue et al. Improving liquid state machine with hybrid plasticity
US11289175B1 (en) Method of modeling functions of orientation and adaptation on visual cortex
Lacko From perceptrons to deep neural networks

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant