CN114118383A - Multi-synaptic plasticity pulse neural network-based fast memory coding method and device - Google Patents
Multi-synaptic plasticity pulse neural network-based fast memory coding method and device Download PDFInfo
- Publication number
- CN114118383A CN114118383A CN202111497073.2A CN202111497073A CN114118383A CN 114118383 A CN114118383 A CN 114118383A CN 202111497073 A CN202111497073 A CN 202111497073A CN 114118383 A CN114118383 A CN 114118383A
- Authority
- CN
- China
- Prior art keywords
- layer
- pulse
- neurons
- input
- neuron
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 230000015654 memory Effects 0.000 title claims abstract description 69
- 238000000034 method Methods 0.000 title claims abstract description 51
- 238000013528 artificial neural network Methods 0.000 title claims abstract description 44
- 210000002569 neuron Anatomy 0.000 claims abstract description 109
- 230000000946 synaptic effect Effects 0.000 claims abstract description 26
- 230000005764 inhibitory process Effects 0.000 claims abstract description 24
- 239000012528 membrane Substances 0.000 claims abstract description 19
- 230000001537 neural effect Effects 0.000 claims abstract description 15
- 210000000225 synapse Anatomy 0.000 claims abstract description 14
- 238000001208 nuclear magnetic resonance pulse sequence Methods 0.000 claims abstract description 11
- 238000000926 separation method Methods 0.000 claims abstract description 5
- 230000003213 activating effect Effects 0.000 claims abstract description 4
- 239000010410 layer Substances 0.000 claims description 143
- 238000010304 firing Methods 0.000 claims description 21
- 230000002401 inhibitory effect Effects 0.000 claims description 15
- 230000001242 postsynaptic effect Effects 0.000 claims description 15
- 210000005215 presynaptic neuron Anatomy 0.000 claims description 13
- 239000011229 interlayer Substances 0.000 claims description 11
- 230000010355 oscillation Effects 0.000 claims description 11
- 230000008569 process Effects 0.000 claims description 11
- 238000001914 filtration Methods 0.000 claims description 8
- 210000005036 nerve Anatomy 0.000 claims description 8
- 230000006870 function Effects 0.000 claims description 7
- 230000003956 synaptic plasticity Effects 0.000 claims description 7
- 210000004027 cell Anatomy 0.000 claims description 6
- 238000010606 normalization Methods 0.000 claims description 6
- 230000001629 suppression Effects 0.000 claims description 4
- 102000004868 N-Methyl-D-Aspartate Receptors Human genes 0.000 claims description 3
- 108090001041 N-Methyl-D-Aspartate Receptors Proteins 0.000 claims description 3
- 238000000605 extraction Methods 0.000 claims description 3
- 210000000857 visual cortex Anatomy 0.000 claims description 3
- 230000003111 delayed effect Effects 0.000 claims description 2
- 230000005284 excitation Effects 0.000 claims description 2
- 238000010586 diagram Methods 0.000 description 10
- 125000004122 cyclic group Chemical group 0.000 description 7
- 210000004205 output neuron Anatomy 0.000 description 6
- 238000012545 processing Methods 0.000 description 6
- 230000000694 effects Effects 0.000 description 5
- 230000000638 stimulation Effects 0.000 description 5
- 210000004556 brain Anatomy 0.000 description 4
- 230000002964 excitative effect Effects 0.000 description 3
- 238000012421 spiking Methods 0.000 description 3
- 230000009286 beneficial effect Effects 0.000 description 2
- 230000008859 change Effects 0.000 description 2
- 238000004590 computer program Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 238000004088 simulation Methods 0.000 description 2
- 238000012549 training Methods 0.000 description 2
- 230000009471 action Effects 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 230000002457 bidirectional effect Effects 0.000 description 1
- 238000011953 bioanalysis Methods 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000003930 cognitive ability Effects 0.000 description 1
- 230000003920 cognitive function Effects 0.000 description 1
- 230000036992 cognitive tasks Effects 0.000 description 1
- 230000010365 information processing Effects 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 230000028161 membrane depolarization Effects 0.000 description 1
- 230000032611 negative regulation of synaptic plasticity Effects 0.000 description 1
- 210000000653 nervous system Anatomy 0.000 description 1
- 238000003062 neural network model Methods 0.000 description 1
- 230000003534 oscillatory effect Effects 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 230000001953 sensory effect Effects 0.000 description 1
- 230000006403 short-term memory Effects 0.000 description 1
- 230000001360 synchronised effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Computation (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Neurology (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a quick memory coding method based on a multi-synaptic plasticity pulse neural network, which comprises the following steps: the method comprises the following steps: converting the external stimulus into an input pulse sequence based on a hierarchical coding strategy; step two: after receiving the input pulse, the pulse neural network updates the membrane potential of the neuron on the output layer based on the improved SRM model; step three: updating synapse weight values input to an output layer by using a supervision group Tempotron, and activating neuron memory input of the output layer; step four: after neurons in an output layer are activated, updating synaptic weights among the activated neurons in the layer by using unsupervised STDP to form enhanced storage memory of a circulation sub-network; step five: and when the step four is executed, unsupervised inhibition synapse plasticity is used, synapse weights from an inhibition layer to an output layer are updated, and separation of release time of different input neural populations of feedback guarantee memory is inhibited. The invention also provides a quick memory coding device based on the multi-synaptic plasticity pulse neural network. The invention effectively improves the coding speed and stability of memory.
Description
Technical Field
The invention belongs to the field of brain-like intelligence and artificial intelligence, and relates to a quick memory coding method and device based on a multi-synaptic plasticity pulse neural network.
Background
The Spiking Neural Network (SNN) simulates an information processing mechanism of a biological nervous system, and has the advantages of low power consumption, stronger information expression capability and the like compared with the traditional artificial Neural Network through discrete pulse event driven calculation, thereby being a powerful tool for analyzing and simulating the cognitive function of the brain. Memory, a cognitive ability to recognize, maintain, and reproduce things that have gone through, is one of the most central components of brain intelligence.
Synaptic plasticity has long been recognized as the basis for learning and memory, and neuroscience research indicates that biological brains rely on the synergy of multiple synaptic plasticity to ensure the reliable implementation of cognitive tasks. However, due to the lack of understanding of the interaction between different forms of synaptic Plasticity rules and the complexity of the structure and dynamics of the neural network, most of the existing pulse memory models only use unsupervised synaptic Plasticity rules (such as Spike-Timing-Dependent-Plasticity (STDP)) to train the cyclic network, and by adjusting the synaptic weights between the activated neurons in the cyclic network, a cyclic sub-network memory with strengthened connection is formed, and the memory of the external input mode is actively coded by the collective co-emission of the neural population. However, the memory model based on the unsupervised plasticity rule has a slow learning process, requires multiple repeated input of sensory stimulation to generate neural population expression memory, and has low memory efficiency. In addition, a memory model based on unsupervised plasticity rules cannot guarantee stable generation of neural populations, and interference may occur between neural populations responding to different input patterns in the learning process to destroy memory.
Disclosure of Invention
In order to solve the problems of low memory coding efficiency and instability of the current pulse memory model based on unsupervised plasticity in the prior art, the invention provides a quick memory coding method and device based on a multi-synaptic plasticity pulse neural network, and the specific technical scheme is as follows:
a pulse neural network fast memory coding method based on multi-synaptic plasticity comprises the following steps:
the method comprises the following steps: converting the external stimulus into an input pulse sequence based on a hierarchical coding strategy;
step two: after receiving the input pulse, the pulse neural network updates the membrane potential of the neuron on the output layer based on the improved SRM model;
step three: updating synapse weight values input to an output layer by using a supervision group Tempotron, and activating neuron memory input of the output layer;
step four: after neurons in an output layer are activated, updating synaptic weights among the activated neurons in the layer by using unsupervised STDP to form enhanced storage memory of a circulation sub-network;
step five: and when the step four is executed, unsupervised inhibition synapse plasticity is used, synapse weights from an inhibition layer to an output layer are updated, and separation of release time of different input neural populations of feedback guarantee memory is inhibited.
Preferably, the first step is specifically:
the external stimulus is seven images selected from seven types of '0', '1', '2', '3', '4', '5' and '6' of MNIST handwriting digit sets, the size of each image is 28 pixels and 28 pixels, each image is evenly divided, and 49 receiving domains with the size of 4 pixels and 4 pixels are obtained;
the hierarchical coding strategy uses a two-layer structure consisting of the S-layer of Simple cells and the C-layer of Complex cells to perform feature extraction on external stimuli, and then encodes the extracted features by a delayed coding method, wherein:
and (2) S layer: extracting edge direction characteristics of the image, wherein the filtering kernels of the S layer are respectively divided into 4 directions、、、The Gabor filters in 4 directions simulate the receptive field of the visual cortex, and respectively perform filtering operation on image blocks corresponding to each receiving domain, and 49 × 4 feature graphs with the size of 4pixel × 4 pixels are obtained after filtering;
layer C: firstly, performing an adding operation on each feature graph output by the S layer to obtain a feature graph with the size of 49 pixels by 4 pixels, performing linear normalization on the feature graphs to distribute data into an interval of [0, 1], and then performing maximum competition operation in rows, wherein the performing of the maximum competition operation in rows comprises: keeping the feature with the strongest intensity in each row as the most matched direction feature at the position, setting the feature values of other directions as 0, and obtaining the sparse expression of the feature map with the size of 4 pixels 9 by 4 pixels;
the obtained characteristics are converted into an input pulse sequence through delay coding and calculated according to the following formula:
wherein,is the firing time of the ith neuron, T is the length of the coding window,is the intensity value of the ith feature.
Preferably, the second step is specifically:
the improved SRM model introduces a post-depolarization potential in the refractory nucleus and simultaneously introducesIn the pulse neural network, after a neuron of an output layer emits a pulse, the pulse is transmitted to other neurons and inhibitory layer neurons in the same layer, and the membrane potential of the neuron of the output layer in the model receives excitation input from the input layer and other neurons in the same layer and inhibition feedback from the inhibitory neurons, so that the membrane potential of the neuron of the output layer in the model is calculated according to the following formula:
wherein,is the membrane potential of the neuron i,is the total input to the input layer neurons,is the total input of the neurons of the same layer,is to suppress the feedback of the feedback signal,is a neuron i inA post-depolarization process of the membrane potential after the firing pulse,is thatThe source of the oscillation is set to be,、、synaptic weights from input layer neurons to output layer neurons, from output layer neurons, and from inhibitory neurons to output layer neurons, respectively,is a normalization factor that is a function of,、、respectively inhibiting feedback and post-depolarization,The amplitude of the oscillations is such that,、the pulse firing times of the pre-synaptic neuron j and the post-synaptic neuron i respectively,、、、is a time constant, f isThe frequency of the oscillation, t is a time variable,is the initial phase.
Preferably, the supervised group Tempotron in step three updates the synaptic weights between the input layer and the output layer, and the updating method is as follows:
wherein,is the weight of the input layer to output layer connection,is the learning rate of the inter-layer weights, d is the expected output flag includes 0 or 1,is the pulse emitting time of the presynaptic neuron j, K is the number of neurons which need to be activated when the distance reaches the size of the nerve population,is the time corresponding to the maximal membrane potential of the kth most active post-synaptic neuron.
Preferably, the unsupervised STDP in step four updates the synaptic weights among the activated neurons in the layer, and the updating method is as follows:
wherein,is the update quantity of the connection weight in the output layer,and the magnitude of the enhancement and the attenuation respectively,andrespectively representing the moment of pulse firing for pre-synaptic neuron j and post-synaptic neuron i,、is the time constant of the time at which,represents the decay time constant of the NMDA receptor.
Preferably, the updating method for inhibiting synaptic plasticity in the fifth step is as follows:
wherein,is the update amount of the connection weight between the suppression layer and the output layer,representing the pulse firing instant of the post-synaptic neuron i.
A multi-synaptic plasticity pulse neural network-based fast memory coding device comprises one or more processors and is used for realizing the multi-synaptic plasticity pulse neural network-based fast memory coding method.
A computer readable storage medium, on which a program is stored, which, when executed by a processor, implements the multi-synaptic plasticity pulse neural network-based fast memory coding method.
The invention has the beneficial effects that:
the cooperation of the supervised population Tempotron, the unsupervised STDP and the unsupervised plasticity inhibition provided by the invention provides a feasible and efficient way for the pulse neural network to quickly memorize the external stimulation. Compared with the existing pulse memory model, the pulse memory model has the synergy of supervised and unsupervised plasticity, has higher bioanalysis, and effectively improves the coding speed and the stability of memory.
Drawings
FIG. 1 is a block diagram of a computation framework of a multi-synaptic plasticity pulse neural network-based fast memory coding method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating a multi-synaptic plasticity pulse neural network-based fast memory coding method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an external stimulus encoding process;
FIG. 4 is a schematic diagram of a population Tempotron learning strategy;
FIG. 5(a) shows the input pulse sequence after each image is encoded in the order of '0', '1', '2', '3', '4', '5' and '6' in the impulse neural network according to the embodiment of the present inventionOscillating a schematic diagram of the input network at the trough of different periods;
FIG. 5(b) is an activity diagram of an embodiment of the present invention corresponding to '0', '1', '2', '3', '4', '5', '6', '0', '01', '012', '0123', '01234', '012345' and '0123456';
FIG. 6(a) is a graph of inter-layer synaptic weight change; FIG. 6(b) is a diagram of synaptic weight change in layers;
FIG. 7 is a diagram illustrating the convergence speed of inter-layer synaptic weights;
FIG. 8 is a block diagram of a fast memory coding device based on a multi-synaptic plasticity pulse neural network according to the present invention.
Detailed Description
In order to make the objects, technical solutions and technical effects of the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and examples.
In order to solve/relieve the technical problems, the invention constructs a pulse neural network model with interlayer full connection and cyclic connection in layers based on a supervised population Tempotron, an unsupervised STDP and synaptic plasticity inhibition, and verifies the function of the model on a digital sequence memory task, and the result shows that the invention can effectively improve the coding speed and the stability of memory.
As shown in fig. 1, the computing framework of the pulse neural network fast memory coding method based on multi-synaptic plasticity includes an input layer, an output layer, and a suppression layer. The input layer neuron is fully connected to the output layer neuron, the input pulse is transmitted to the output layer through the input layer, and the synaptic weight value between the input layer and the output layer is updated by the supervised population Tempotron; the output layer has a circular connection, and the synapse weight value in the output layer is updated by the unsupervised STDP; bidirectional connection exists between the inhibition layer and the output layer, wherein the output layer and the inhibition layer adopt a one-to-one connection mode, synapse weights of the connection are fixed and do not participate in weight training, the inhibition layer and the output layer adopt a diagonal connectionless full connection mode, and the synapse weights are updated by an unsupervised inhibition plasticity rule.
The multi-synaptic plasticity pulse neural network-based fast memory coding method specifically comprises the following steps as shown in fig. 2:
the method comprises the following steps: converting the external stimulus into an input pulse sequence based on a hierarchical coding strategy;
the real-valued data of the external stimulus, which is seven images arbitrarily selected from seven of '0', '1', '2', '3', '4', '5', '6' of the MNIST handwriting digital set, with an image size of 28 pixels, is first encoded into a pulse pattern before being input to the pulse neural network, and is divided into 49 receiving fields with a size of 4 pixels. As shown in fig. 3, the encoding process of the number '0' firstly performs feature extraction through a two-layer structure composed of a Simple Cell, an S-layer and a Complex Cell, a C-layer, and then encodes the extracted features by a delay encoding method, wherein:
and (2) S layer: extracting edge direction characteristics of the image, wherein the filter kernels of the S layers are respectively in the directions、、、The 4 Gabor filters simulate the receptive field of visual cortex to perform filtering operation on the received domain image, and 49 × 4 characteristic graphs with the size of 4pixel × 4pixel are obtained after filtering;
layer C: firstly, adding operation is carried out on each feature map output by the S layer to obtain a feature map with the size of 49 x 4, linear normalization is carried out on the feature maps to distribute data into intervals of [0, 1], then maximum competition operation is carried out according to rows, namely, only the feature with the strongest intensity is reserved in each row to be used as the most matched direction feature at the position, feature values in other directions are set to be 0, and sparse expression of the feature map with the size of 49 x 4 is obtained.
The obtained characteristics are converted into an input pulse sequence through delay coding and calculated according to the following formula:
wherein,is the firing time of the ith neuron, T =20e-3 is the length of the coding window,is the intensity value of the ith feature.
The pulse sequence for each image was 49 × 4=196, the different images were represented by different pulse sequences, the encoded pulse sequences entered the network via the input layer, and the total number of neurons in the input layer was 7 × 49 × 4= 1372.
Step two: after receiving the input pulse, updating the membrane potential of the neuron in the output layer based on the improved SRM model;
in this embodiment, the neurons of the spiking neural network adopt the improved SRM model, and a concise description of the dynamics of the spiking neurons is provided by a continuous kernel function, which is beneficial to clearly showing the contribution of a plurality of input sources to neuron firing. Compared with the original SRM model, the improved SRM model considers more physiological details, and introduces post-depolarization potential in the refractory nucleus to describe the process of slow rise of membrane potential after the neuron sends out pulse:
is a neuron i inA post-depolarization process of the membrane potential after the firing pulse,is the magnitude of the back depolarization(s),is the pulse firing time of the neuron i,is a time constant.
Is introduced intoThe brain waves, which are important for the synchronous neural activity, are oscillated and injected as an external input currentInto the output neuron, it is modeled as a cosine wave.
Wherein,is thatThe source of the oscillation is set to be,is thatThe amplitude of the oscillations is such that,is thatThe frequency of the oscillations is such that,is the initial phase. In the impulse neural network, the input impulse sequence after each image coding is respectively arranged in the sequence of ' 0 ', ' 1 ', ' 2 ', ' 3 ', ' 4 ', ' 5 ' and ' 6The input network is oscillated at the troughs of different periods as shown in fig. 5 (a). Excitatory inputs from the input layer received by neurons in the output layer of the network are expressed as follows:
wherein,is the total input to the input layer neurons,is the synaptic weight between input layer neuron j to output layer neuron i,is a normalization factor that is a function of,is the pulse firing instant of the pre-synaptic neuron j,、is a time constant.
When an output layer neuron fires a pulse, the pulse is transmitted to other neurons in the same layer, as well as to the inhibitory layer neurons. Excitatory inputs received by neurons in the output layer from other neurons in the same layer are expressed as follows:
is the total input of the neurons of the same layer,is the synaptic weight between the pre-synaptic neuron j and the post-synaptic neuron i in the output layer.
The inhibitory feedback received from the inhibitory neurons is expressed as follows:
is to suppress the feedback of the feedback signal,are synaptic weights between inhibitory neurons to output layer neurons,is the magnitude of the suppressed feedback and,is the pulse firing time of the output layer neurons,is a time constant.
In summary, the membrane potential of the neurons in the output layer in the model is calculated as follows:
in the present embodiment, the first and second electrodes are,3,,,,,,,,simulation time intervalOne time simulation iteration durationThe number of iterations is set to 20.
Step three: updating synapse weight values input to an output layer by using a supervision group Tempotron, and quickly activating a group of output neurons to memorize and input;
as shown in fig. 4, is a schematic diagram of the population Tempotron learning strategy. In each iterative learning process, for each input pulse mode, if the number of neurons responding to the input mode by the output layer is different by K to reach the size of a preset nerve population, synaptic weights between the input layer's firing neurons and the output layer's non-firing neurons corresponding to the first K maximum membrane potentials are strengthened, and an interlayer weight updating formula is as follows:
wherein,is the update quantity of the connection weight between the input layer and the output layer,is the learning rate of the inter-layer weights, and d is the expected output flag including: a value of 0 or 1, or a combination thereof,is the pulse emitting time of the presynaptic neuron j, K is the number of neurons which need to be activated when the distance reaches the size of the nerve population,is the time corresponding to the maximal membrane potential of the kth most active post-synaptic neuron.
In the present embodiment, the first and second electrodes are,the number of output neurons was 100, and the number of neurons in the neural population was 12. Before learning, the interlayer initial weight is randomly initialized according to normal distribution, the mean value is 0.001, and the standard deviation is 0.002. As shown in fig. 6(a), after learning, for each input pattern, the weight between the firing neurons of the input layer and the responding neural population of the output layer is enhanced. Due to the enhancement of the inter-layer synaptic weight, near the time of input pulse pattern presentation, the output layer presents the neural population issuing activity encoding input, as shown by the activities corresponding to '0', '1', '2', '3', '4', '5', and '6' of fig. 5 (b).
Step four: after neurons in an output layer are activated, updating synaptic weights among the activated neurons in the layer by using unsupervised STDP to form enhanced storage memory of a circulation sub-network;
after the neuron on the output layer is activated, when the difference between the release times of the pre-synaptic neuron and the post-synaptic neuron is in [ 2 ]]Within the interval, the synaptic weight will be updated. When the presynaptic neuron is issued before the postsynaptic neuron, the synaptic weight is enhanced; the synaptic weight is weakened when the pre-synaptic neuron fires later than the post-synaptic neuron. The intra-layer weight value updating formula is as follows:
wherein,is the update quantity of the connection weight in the output layer,andthe magnitude of the enhancement and the attenuation are indicated separately,andrespectively representing the moment of pulse firing for pre-synaptic neuron j and post-synaptic neuron i,、is the time constant of the time at which,represents the decay time constant of the NMDA receptor.
In the present embodiment, the first and second electrodes are,,. Before learning, the weight of the intra-layer cyclic connection is initialized randomly, and the initial weight is [0, 1e-5 ]]Within the interval; after learning, the internal connections of the neuron population in response to each input pattern of the input layer are enhanced to form a cyclic subnetwork, as shown in fig. 6 (b). Self-associative memory is stored in enhanced cyclic connection weights within the neural population.
Step five: and D, while the step four is executed, updating the synaptic weights between the inhibition layer and the output layer by using unsupervised inhibition plasticity, and inhibiting the separation of the release time of the neural populations with different inputs of feedback guarantee memory.
The output layer and the inhibition layer adopt a one-to-one connection mode, and synapse weights of the connection are fixed and do not participate in the weightsAnd (5) training. The inhibition layer and the output layer adopt a full connection mode without connection of diagonal, the value on the diagonal of the weight matrix is 0, and the rest values are 1. The time difference between the current time t and the latest issuance of the output neuron is set to]Within the interval, the inhibition weight of the output layer to the neuron with input to the inhibition neuron is reduced, and the inhibition weight to the neuron without input to the inhibition neuron is not changed. The updated formula for inhibition of synaptic plasticity is as follows:
wherein,is the update amount of the connection weight between the suppression layer and the output layer,representing the pulse firing instant of the post-synaptic neuron i. After the nerve population of the output layer is subjected to the stimulation and the pulse, the excitatory input, the depolarized current after discharge and the external part from the nerve population of the same layerThe oscillatory input enables a neural population to be followed without a subsequent stimulation inputThe oscillation period is continuously issued, thereby maintaining short-term memory. After the neurons in the output layer are released, the inhibition feedback from the neurons in the inhibition layer prevents the neural population from being continuously released after being released, and avoids mutual interference among different memory items, as shown in the activities corresponding to fig. 5(b), 0 ', 01 ', 012 ', 0123 ', 01234 ', 012345 ' and 0123456 ', the subsequent input stimulation appearsNear the peak of the oscillation period, the nerve groups responding to different input stimuli are issued at different moments in sequence, and the separation of the issuing time of the nerve groups with different inputs is restrained and guaranteed to be memorized in a feedback mode.
as shown in fig. 7, the convergence speed of the inter-layer synaptic weights is the convergence speed, and under the group Tempotron learning strategy, a group of output neurons is activated rapidly to perform memory coding on the input, and the inter-layer synaptic weights converge after only 4 times of learning. Experimental results show that the pulse neural network rapid memory coding method based on multi-synapse plasticity can effectively improve the coding speed and stability of memory.
Corresponding to the foregoing embodiments of the multi-synaptic plasticity pulse neural network-based fast memory coding method, the invention also provides embodiments of a multi-synaptic plasticity pulse neural network-based fast memory coding device.
Referring to fig. 8, an apparatus for fast memory coding based on a multi-synaptic plasticity pulse neural network according to an embodiment of the present invention includes one or more processors, and is configured to implement the method for fast memory coding based on a multi-synaptic plasticity pulse neural network according to the above embodiment.
The embodiment of the invention based on the multi-synapse plasticity pulse neural network fast memory coding device can be applied to any data processing-capable equipment, such as computers and other equipment or devices. The device embodiments may be implemented by software, or by hardware, or by a combination of hardware and software. The software implementation is taken as an example, and as a logical device, the device is formed by reading corresponding computer program instructions in the nonvolatile memory into the memory for running through the processor of any device with data processing capability. From a hardware aspect, as shown in fig. 8, a hardware structure diagram of any device with data processing capability based on the multi-synaptic plasticity pulse neural network fast memory coding device of the present invention is shown, except for the processor, the memory, the network interface, and the nonvolatile memory shown in fig. 8, in an embodiment, any device with data processing capability that the device is located may generally include other hardware according to the actual function of the any device with data processing capability, which is not described again.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the invention. One of ordinary skill in the art can understand and implement it without inventive effort.
The embodiment of the present invention further provides a computer-readable storage medium, on which a program is stored, and when the program is executed by a processor, the method for implementing the fast memory coding method based on the multi-synaptic plasticity pulse neural network in the above embodiments is implemented.
The computer readable storage medium may be an internal storage unit, such as a hard disk or a memory, of any data processing capability device described in any of the foregoing embodiments. The computer readable storage medium may also be an external storage device of the wind turbine, such as a plug-in hard disk, a Smart Media Card (SMC), an SD Card, a Flash memory Card (Flash Card), and the like, provided on the device. Further, the computer readable storage medium may include both an internal storage unit and an external storage device of any data processing capable device. The computer-readable storage medium is used for storing the computer program and other programs and data required by the arbitrary data processing-capable device, and may also be used for temporarily storing data that has been output or is to be output.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way. Although the foregoing has described the practice of the present invention in detail, it will be apparent to those skilled in the art that modifications may be made to the practice of the invention as described in the foregoing examples, or that certain features may be substituted in the practice of the invention. All changes, equivalents and modifications which come within the spirit and scope of the invention are desired to be protected.
Claims (8)
1. The quick memory coding method based on the multi-synaptic plasticity pulse neural network is characterized by comprising the following steps of:
the method comprises the following steps: converting the external stimulus into an input pulse sequence based on a hierarchical coding strategy;
step two: after receiving the input pulse, the pulse neural network updates the membrane potential of the neuron on the output layer based on the improved SRM model;
step three: updating synapse weight values input to an output layer by using a supervision group Tempotron, and activating neuron memory input of the output layer;
step four: after neurons in an output layer are activated, updating synaptic weights among the activated neurons in the layer by using unsupervised STDP to form enhanced storage memory of a circulation sub-network;
step five: and when the step four is executed, unsupervised inhibition synapse plasticity is used, synapse weights from an inhibition layer to an output layer are updated, and separation of release time of different input neural populations of feedback guarantee memory is inhibited.
2. The multi-synaptic plasticity pulse neural network based fast memory coding method according to claim 1, wherein the first step is specifically:
the external stimulus is seven images selected from seven types of '0', '1', '2', '3', '4', '5' and '6' of MNIST handwriting digit sets, the size of each image is 28 pixels and 28 pixels, each image is evenly divided, and 49 receiving domains with the size of 4 pixels and 4 pixels are obtained;
the hierarchical coding strategy uses a two-layer structure consisting of the S-layer of Simple cells and the C-layer of Complex cells to perform feature extraction on external stimuli, and then encodes the extracted features by a delayed coding method, wherein:
and (2) S layer: extracting edge direction characteristics of the image, wherein the filtering kernels of the S layer are respectively divided into 4 directions、、、The Gabor filters in 4 directions simulate the receptive field of the visual cortex, and respectively perform filtering operation on image blocks corresponding to each receiving domain, and 49 × 4 feature graphs with the size of 4pixel × 4 pixels are obtained after filtering;
layer C: firstly, performing an adding operation on each feature graph output by the S layer to obtain a feature graph with the size of 49 pixels by 4 pixels, performing linear normalization on the feature graphs to distribute data into an interval of [0, 1], and then performing maximum competition operation in rows, wherein the performing of the maximum competition operation in rows comprises: keeping the feature with the strongest intensity in each row as the most matched direction feature at the position, setting the feature values of other directions as 0, and obtaining the sparse expression of the feature map with the size of 4 pixels 9 by 4 pixels;
the obtained characteristics are converted into an input pulse sequence through delay coding and calculated according to the following formula:
3. The multi-synaptic plasticity pulse neural network-based fast memory coding method according to claim 1, wherein the second step is specifically:
the improved SRM model introduces a post-depolarization potential in the refractory nucleus and simultaneously introducesIn the pulse neural network, after a neuron of an output layer emits a pulse, the pulse is transmitted to other neurons and inhibitory layer neurons in the same layer, and the membrane potential of the neuron of the output layer in the model receives excitation input from the input layer and other neurons in the same layer and inhibition feedback from the inhibitory neurons, so that the membrane potential of the neuron of the output layer in the model is calculated according to the following formula:
wherein,is the membrane potential of the neuron i,is the total input to the input layer neurons,is the total input of the neurons of the same layer,is to suppress the feedback of the feedback signal,is a neuron i inA post-depolarization process of the membrane potential after the firing pulse,is thatThe source of the oscillation is set to be,、、synaptic weights from input layer neurons to output layer neurons, from output layer neurons, and from inhibitory neurons to output layer neurons, respectively,is a normalization factor that is a function of,、、respectively inhibiting feedback and post-depolarization,The amplitude of the oscillations is such that,、the pulse firing times of the pre-synaptic neuron j and the post-synaptic neuron i respectively,、、、is a time constant, f isThe frequency of the oscillation, t is a time variable,is the initial phase.
4. The multi-synaptic plasticity pulse neural network flash memory coding method according to claim 1, wherein the supervised group Tempotron in the third step updates synaptic weights between the input layer and the output layer, and the updating method is as follows:
wherein,is the weight of the input layer to output layer connection,is the learning rate of the inter-layer weights, d is the expected output flag includes 0 or 1,is the pulse emitting time of the presynaptic neuron j, K is the number of neurons which need to be activated when the distance reaches the size of the nerve population,is the time corresponding to the maximal membrane potential of the kth most active post-synaptic neuron.
5. The method of claim 1, wherein the unsupervised STDP in step four updates synaptic weights between active neurons in the intra-layer by:
wherein,is the update quantity of the connection weight in the output layer,andthe magnitude of the enhancement and the attenuation are indicated separately,andrespectively representing the moment of pulse firing for pre-synaptic neuron j and post-synaptic neuron i,、is the time constant of the time at which,represents the decay time constant of the NMDA receptor.
6. The multi-synaptic plasticity pulse neural network based fast memory coding method according to claim 1, wherein the updating method for suppressing synaptic plasticity in the fifth step is:
7. A multi-synaptic plasticity pulse neural network-based fast memory coding device, comprising one or more processors, and being used for implementing the multi-synaptic plasticity pulse neural network-based fast memory coding method according to any one of claims 1 to 6.
8. A computer-readable storage medium, on which a program is stored, which, when executed by a processor, implements the multi-synaptic plasticity pulse neural network-based fast memory coding method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111497073.2A CN114118383A (en) | 2021-12-09 | 2021-12-09 | Multi-synaptic plasticity pulse neural network-based fast memory coding method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111497073.2A CN114118383A (en) | 2021-12-09 | 2021-12-09 | Multi-synaptic plasticity pulse neural network-based fast memory coding method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114118383A true CN114118383A (en) | 2022-03-01 |
Family
ID=80364420
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111497073.2A Pending CN114118383A (en) | 2021-12-09 | 2021-12-09 | Multi-synaptic plasticity pulse neural network-based fast memory coding method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114118383A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114611686A (en) * | 2022-05-12 | 2022-06-10 | 之江实验室 | Synapse delay implementation system and method based on programmable neural mimicry core |
CN115327373A (en) * | 2022-04-20 | 2022-11-11 | 岱特智能科技(上海)有限公司 | Hemodialysis equipment fault diagnosis method based on BP neural network and storage medium |
CN115429293A (en) * | 2022-11-04 | 2022-12-06 | 之江实验室 | Sleep type classification method and device based on impulse neural network |
CN116080688A (en) * | 2023-03-03 | 2023-05-09 | 北京航空航天大学 | Brain-inspiring-like intelligent driving vision assisting method, device and storage medium |
CN116542291A (en) * | 2023-06-27 | 2023-08-04 | 北京航空航天大学 | Pulse memory image generation method and system for memory loop inspiring |
CN117456577A (en) * | 2023-10-30 | 2024-01-26 | 苏州大学 | System and method for realizing expression recognition based on optical pulse neural network |
-
2021
- 2021-12-09 CN CN202111497073.2A patent/CN114118383A/en active Pending
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115327373A (en) * | 2022-04-20 | 2022-11-11 | 岱特智能科技(上海)有限公司 | Hemodialysis equipment fault diagnosis method based on BP neural network and storage medium |
CN114611686A (en) * | 2022-05-12 | 2022-06-10 | 之江实验室 | Synapse delay implementation system and method based on programmable neural mimicry core |
CN115429293A (en) * | 2022-11-04 | 2022-12-06 | 之江实验室 | Sleep type classification method and device based on impulse neural network |
CN115429293B (en) * | 2022-11-04 | 2023-04-07 | 之江实验室 | Sleep type classification method and device based on impulse neural network |
CN116080688A (en) * | 2023-03-03 | 2023-05-09 | 北京航空航天大学 | Brain-inspiring-like intelligent driving vision assisting method, device and storage medium |
CN116542291A (en) * | 2023-06-27 | 2023-08-04 | 北京航空航天大学 | Pulse memory image generation method and system for memory loop inspiring |
CN116542291B (en) * | 2023-06-27 | 2023-11-21 | 北京航空航天大学 | Pulse memory image generation method and system for memory loop inspiring |
CN117456577A (en) * | 2023-10-30 | 2024-01-26 | 苏州大学 | System and method for realizing expression recognition based on optical pulse neural network |
CN117456577B (en) * | 2023-10-30 | 2024-04-26 | 苏州大学 | System and method for realizing expression recognition based on optical pulse neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114118383A (en) | Multi-synaptic plasticity pulse neural network-based fast memory coding method and device | |
Rabinovich et al. | Dynamical principles in neuroscience | |
Taherkhani et al. | A review of learning in biologically plausible spiking neural networks | |
Diehl et al. | Unsupervised learning of digit recognition using spike-timing-dependent plasticity | |
Yu et al. | Spike timing or rate? Neurons learn to make decisions for both through threshold-driven plasticity | |
Demin et al. | Recurrent spiking neural network learning based on a competitive maximization of neuronal activity | |
Yu et al. | Rapid feedforward computation by temporal encoding and learning with spiking neurons | |
WO2015020802A2 (en) | Computed synapses for neuromorphic systems | |
TW201543382A (en) | Neural network adaptation to current computational resources | |
Xu et al. | Spike trains encoding and threshold rescaling method for deep spiking neural networks | |
TW201523463A (en) | Methods and apparatus for implementation of group tags for neural models | |
Shaw | Donald hebb: The organization of behavior | |
Ma et al. | A memristive neural network model with associative memory for modeling affections | |
CN112101535A (en) | Signal processing method of pulse neuron and related device | |
Zheng et al. | An introductory review of spiking neural network and artificial neural network: from biological intelligence to artificial intelligence | |
Varier et al. | Establishing, versus maintaining, brain function: a neuro-computational model of cortical reorganization after injury to the immature brain | |
CN110378469A (en) | SCNN inference device based on asynchronous circuit, PE unit, processor and computer equipment thereof | |
Li et al. | A review on synergistic learning | |
Zhao et al. | A Feed-Forward Neural Network for Increasing the Hopfield-Network Storage Capacity | |
US11289175B1 (en) | Method of modeling functions of orientation and adaptation on visual cortex | |
Pashaie et al. | Self-organization in a parametrically coupled logistic map network: A model for information processing in the visual cortex | |
Song et al. | The spiking neural network based on fMRI for speech recognition | |
Vaila et al. | Spiking CNNs with PYNN and NEURON | |
Viswanathan et al. | A Study of Prefrontal Cortex Task Switching Using Spiking Neural Networks | |
Cutsuridis | Computational models of memory formation in healthy and diseased microcircuits of the hippocampus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |