CN114118383A - Multi-synaptic plasticity pulse neural network-based fast memory coding method and device - Google Patents

Multi-synaptic plasticity pulse neural network-based fast memory coding method and device Download PDF

Info

Publication number
CN114118383A
CN114118383A CN202111497073.2A CN202111497073A CN114118383A CN 114118383 A CN114118383 A CN 114118383A CN 202111497073 A CN202111497073 A CN 202111497073A CN 114118383 A CN114118383 A CN 114118383A
Authority
CN
China
Prior art keywords
layer
pulse
neurons
input
neuron
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111497073.2A
Other languages
Chinese (zh)
Inventor
袁孟雯
唐华锦
王笑
陆宇婧
张梦骁
洪朝飞
黄恒
赵文一
燕锐
潘纲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Zhejiang Lab
Original Assignee
Zhejiang University ZJU
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU, Zhejiang Lab filed Critical Zhejiang University ZJU
Priority to CN202111497073.2A priority Critical patent/CN114118383A/en
Publication of CN114118383A publication Critical patent/CN114118383A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a quick memory coding method based on a multi-synaptic plasticity pulse neural network, which comprises the following steps: the method comprises the following steps: converting the external stimulus into an input pulse sequence based on a hierarchical coding strategy; step two: after receiving the input pulse, the pulse neural network updates the membrane potential of the neuron on the output layer based on the improved SRM model; step three: updating synapse weight values input to an output layer by using a supervision group Tempotron, and activating neuron memory input of the output layer; step four: after neurons in an output layer are activated, updating synaptic weights among the activated neurons in the layer by using unsupervised STDP to form enhanced storage memory of a circulation sub-network; step five: and when the step four is executed, unsupervised inhibition synapse plasticity is used, synapse weights from an inhibition layer to an output layer are updated, and separation of release time of different input neural populations of feedback guarantee memory is inhibited. The invention also provides a quick memory coding device based on the multi-synaptic plasticity pulse neural network. The invention effectively improves the coding speed and stability of memory.

Description

Multi-synaptic plasticity pulse neural network-based fast memory coding method and device
Technical Field
The invention belongs to the field of brain-like intelligence and artificial intelligence, and relates to a quick memory coding method and device based on a multi-synaptic plasticity pulse neural network.
Background
The Spiking Neural Network (SNN) simulates an information processing mechanism of a biological nervous system, and has the advantages of low power consumption, stronger information expression capability and the like compared with the traditional artificial Neural Network through discrete pulse event driven calculation, thereby being a powerful tool for analyzing and simulating the cognitive function of the brain. Memory, a cognitive ability to recognize, maintain, and reproduce things that have gone through, is one of the most central components of brain intelligence.
Synaptic plasticity has long been recognized as the basis for learning and memory, and neuroscience research indicates that biological brains rely on the synergy of multiple synaptic plasticity to ensure the reliable implementation of cognitive tasks. However, due to the lack of understanding of the interaction between different forms of synaptic Plasticity rules and the complexity of the structure and dynamics of the neural network, most of the existing pulse memory models only use unsupervised synaptic Plasticity rules (such as Spike-Timing-Dependent-Plasticity (STDP)) to train the cyclic network, and by adjusting the synaptic weights between the activated neurons in the cyclic network, a cyclic sub-network memory with strengthened connection is formed, and the memory of the external input mode is actively coded by the collective co-emission of the neural population. However, the memory model based on the unsupervised plasticity rule has a slow learning process, requires multiple repeated input of sensory stimulation to generate neural population expression memory, and has low memory efficiency. In addition, a memory model based on unsupervised plasticity rules cannot guarantee stable generation of neural populations, and interference may occur between neural populations responding to different input patterns in the learning process to destroy memory.
Disclosure of Invention
In order to solve the problems of low memory coding efficiency and instability of the current pulse memory model based on unsupervised plasticity in the prior art, the invention provides a quick memory coding method and device based on a multi-synaptic plasticity pulse neural network, and the specific technical scheme is as follows:
a pulse neural network fast memory coding method based on multi-synaptic plasticity comprises the following steps:
the method comprises the following steps: converting the external stimulus into an input pulse sequence based on a hierarchical coding strategy;
step two: after receiving the input pulse, the pulse neural network updates the membrane potential of the neuron on the output layer based on the improved SRM model;
step three: updating synapse weight values input to an output layer by using a supervision group Tempotron, and activating neuron memory input of the output layer;
step four: after neurons in an output layer are activated, updating synaptic weights among the activated neurons in the layer by using unsupervised STDP to form enhanced storage memory of a circulation sub-network;
step five: and when the step four is executed, unsupervised inhibition synapse plasticity is used, synapse weights from an inhibition layer to an output layer are updated, and separation of release time of different input neural populations of feedback guarantee memory is inhibited.
Preferably, the first step is specifically:
the external stimulus is seven images selected from seven types of '0', '1', '2', '3', '4', '5' and '6' of MNIST handwriting digit sets, the size of each image is 28 pixels and 28 pixels, each image is evenly divided, and 49 receiving domains with the size of 4 pixels and 4 pixels are obtained;
the hierarchical coding strategy uses a two-layer structure consisting of the S-layer of Simple cells and the C-layer of Complex cells to perform feature extraction on external stimuli, and then encodes the extracted features by a delayed coding method, wherein:
and (2) S layer: extracting edge direction characteristics of the image, wherein the filtering kernels of the S layer are respectively divided into 4 directions
Figure 827069DEST_PATH_IMAGE001
Figure 275367DEST_PATH_IMAGE002
Figure 461629DEST_PATH_IMAGE003
Figure 387997DEST_PATH_IMAGE004
The Gabor filters in 4 directions simulate the receptive field of the visual cortex, and respectively perform filtering operation on image blocks corresponding to each receiving domain, and 49 × 4 feature graphs with the size of 4pixel × 4 pixels are obtained after filtering;
layer C: firstly, performing an adding operation on each feature graph output by the S layer to obtain a feature graph with the size of 49 pixels by 4 pixels, performing linear normalization on the feature graphs to distribute data into an interval of [0, 1], and then performing maximum competition operation in rows, wherein the performing of the maximum competition operation in rows comprises: keeping the feature with the strongest intensity in each row as the most matched direction feature at the position, setting the feature values of other directions as 0, and obtaining the sparse expression of the feature map with the size of 4 pixels 9 by 4 pixels;
the obtained characteristics are converted into an input pulse sequence through delay coding and calculated according to the following formula:
Figure 858161DEST_PATH_IMAGE005
wherein,
Figure 211782DEST_PATH_IMAGE006
is the firing time of the ith neuron, T is the length of the coding window,
Figure 150919DEST_PATH_IMAGE007
is the intensity value of the ith feature.
Preferably, the second step is specifically:
the improved SRM model introduces a post-depolarization potential in the refractory nucleus and simultaneously introduces
Figure 880978DEST_PATH_IMAGE008
In the pulse neural network, after a neuron of an output layer emits a pulse, the pulse is transmitted to other neurons and inhibitory layer neurons in the same layer, and the membrane potential of the neuron of the output layer in the model receives excitation input from the input layer and other neurons in the same layer and inhibition feedback from the inhibitory neurons, so that the membrane potential of the neuron of the output layer in the model is calculated according to the following formula:
Figure 205649DEST_PATH_IMAGE009
Figure 871117DEST_PATH_IMAGE010
Figure 156605DEST_PATH_IMAGE011
Figure 546479DEST_PATH_IMAGE012
Figure 866602DEST_PATH_IMAGE013
Figure 702971DEST_PATH_IMAGE014
wherein,
Figure 475755DEST_PATH_IMAGE015
is the membrane potential of the neuron i,
Figure 406671DEST_PATH_IMAGE016
is the total input to the input layer neurons,
Figure 581300DEST_PATH_IMAGE017
is the total input of the neurons of the same layer,
Figure 588570DEST_PATH_IMAGE018
is to suppress the feedback of the feedback signal,
Figure 238863DEST_PATH_IMAGE019
is a neuron i in
Figure 989782DEST_PATH_IMAGE006
A post-depolarization process of the membrane potential after the firing pulse,
Figure 18918DEST_PATH_IMAGE020
is that
Figure 446357DEST_PATH_IMAGE008
The source of the oscillation is set to be,
Figure 928154DEST_PATH_IMAGE021
Figure 217184DEST_PATH_IMAGE022
Figure 366405DEST_PATH_IMAGE023
synaptic weights from input layer neurons to output layer neurons, from output layer neurons, and from inhibitory neurons to output layer neurons, respectively,
Figure 233255DEST_PATH_IMAGE024
is a normalization factor that is a function of,
Figure 77714DEST_PATH_IMAGE025
Figure 29489DEST_PATH_IMAGE026
Figure 423430DEST_PATH_IMAGE027
respectively inhibiting feedback and post-depolarization,
Figure 68038DEST_PATH_IMAGE008
The amplitude of the oscillations is such that,
Figure 399794DEST_PATH_IMAGE028
Figure 155260DEST_PATH_IMAGE006
the pulse firing times of the pre-synaptic neuron j and the post-synaptic neuron i respectively,
Figure 138128DEST_PATH_IMAGE029
Figure 829004DEST_PATH_IMAGE030
Figure 38268DEST_PATH_IMAGE031
Figure 722059DEST_PATH_IMAGE032
is a time constant, f is
Figure 434801DEST_PATH_IMAGE008
The frequency of the oscillation, t is a time variable,
Figure 30998DEST_PATH_IMAGE033
is the initial phase.
Preferably, the supervised group Tempotron in step three updates the synaptic weights between the input layer and the output layer, and the updating method is as follows:
Figure 849263DEST_PATH_IMAGE034
wherein,
Figure 946532DEST_PATH_IMAGE035
is the weight of the input layer to output layer connection,
Figure 654725DEST_PATH_IMAGE036
is the learning rate of the inter-layer weights, d is the expected output flag includes 0 or 1,
Figure 546457DEST_PATH_IMAGE037
is the pulse emitting time of the presynaptic neuron j, K is the number of neurons which need to be activated when the distance reaches the size of the nerve population,
Figure 589369DEST_PATH_IMAGE038
is the time corresponding to the maximal membrane potential of the kth most active post-synaptic neuron.
Preferably, the unsupervised STDP in step four updates the synaptic weights among the activated neurons in the layer, and the updating method is as follows:
Figure 490329DEST_PATH_IMAGE039
wherein,
Figure 53028DEST_PATH_IMAGE040
is the update quantity of the connection weight in the output layer,
Figure 381241DEST_PATH_IMAGE041
and the magnitude of the enhancement and the attenuation respectively,
Figure 645869DEST_PATH_IMAGE037
and
Figure 350520DEST_PATH_IMAGE006
respectively representing the moment of pulse firing for pre-synaptic neuron j and post-synaptic neuron i,
Figure 33305DEST_PATH_IMAGE042
Figure 657054DEST_PATH_IMAGE043
is the time constant of the time at which,
Figure 284344DEST_PATH_IMAGE044
represents the decay time constant of the NMDA receptor.
Preferably, the updating method for inhibiting synaptic plasticity in the fifth step is as follows:
Figure 402473DEST_PATH_IMAGE045
wherein,
Figure 64398DEST_PATH_IMAGE046
is the update amount of the connection weight between the suppression layer and the output layer,
Figure 861977DEST_PATH_IMAGE006
representing the pulse firing instant of the post-synaptic neuron i.
A multi-synaptic plasticity pulse neural network-based fast memory coding device comprises one or more processors and is used for realizing the multi-synaptic plasticity pulse neural network-based fast memory coding method.
A computer readable storage medium, on which a program is stored, which, when executed by a processor, implements the multi-synaptic plasticity pulse neural network-based fast memory coding method.
The invention has the beneficial effects that:
the cooperation of the supervised population Tempotron, the unsupervised STDP and the unsupervised plasticity inhibition provided by the invention provides a feasible and efficient way for the pulse neural network to quickly memorize the external stimulation. Compared with the existing pulse memory model, the pulse memory model has the synergy of supervised and unsupervised plasticity, has higher bioanalysis, and effectively improves the coding speed and the stability of memory.
Drawings
FIG. 1 is a block diagram of a computation framework of a multi-synaptic plasticity pulse neural network-based fast memory coding method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating a multi-synaptic plasticity pulse neural network-based fast memory coding method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an external stimulus encoding process;
FIG. 4 is a schematic diagram of a population Tempotron learning strategy;
FIG. 5(a) shows the input pulse sequence after each image is encoded in the order of '0', '1', '2', '3', '4', '5' and '6' in the impulse neural network according to the embodiment of the present invention
Figure 976564DEST_PATH_IMAGE008
Oscillating a schematic diagram of the input network at the trough of different periods;
FIG. 5(b) is an activity diagram of an embodiment of the present invention corresponding to '0', '1', '2', '3', '4', '5', '6', '0', '01', '012', '0123', '01234', '012345' and '0123456';
FIG. 6(a) is a graph of inter-layer synaptic weight change; FIG. 6(b) is a diagram of synaptic weight change in layers;
FIG. 7 is a diagram illustrating the convergence speed of inter-layer synaptic weights;
FIG. 8 is a block diagram of a fast memory coding device based on a multi-synaptic plasticity pulse neural network according to the present invention.
Detailed Description
In order to make the objects, technical solutions and technical effects of the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and examples.
In order to solve/relieve the technical problems, the invention constructs a pulse neural network model with interlayer full connection and cyclic connection in layers based on a supervised population Tempotron, an unsupervised STDP and synaptic plasticity inhibition, and verifies the function of the model on a digital sequence memory task, and the result shows that the invention can effectively improve the coding speed and the stability of memory.
As shown in fig. 1, the computing framework of the pulse neural network fast memory coding method based on multi-synaptic plasticity includes an input layer, an output layer, and a suppression layer. The input layer neuron is fully connected to the output layer neuron, the input pulse is transmitted to the output layer through the input layer, and the synaptic weight value between the input layer and the output layer is updated by the supervised population Tempotron; the output layer has a circular connection, and the synapse weight value in the output layer is updated by the unsupervised STDP; bidirectional connection exists between the inhibition layer and the output layer, wherein the output layer and the inhibition layer adopt a one-to-one connection mode, synapse weights of the connection are fixed and do not participate in weight training, the inhibition layer and the output layer adopt a diagonal connectionless full connection mode, and the synapse weights are updated by an unsupervised inhibition plasticity rule.
The multi-synaptic plasticity pulse neural network-based fast memory coding method specifically comprises the following steps as shown in fig. 2:
the method comprises the following steps: converting the external stimulus into an input pulse sequence based on a hierarchical coding strategy;
the real-valued data of the external stimulus, which is seven images arbitrarily selected from seven of '0', '1', '2', '3', '4', '5', '6' of the MNIST handwriting digital set, with an image size of 28 pixels, is first encoded into a pulse pattern before being input to the pulse neural network, and is divided into 49 receiving fields with a size of 4 pixels. As shown in fig. 3, the encoding process of the number '0' firstly performs feature extraction through a two-layer structure composed of a Simple Cell, an S-layer and a Complex Cell, a C-layer, and then encodes the extracted features by a delay encoding method, wherein:
and (2) S layer: extracting edge direction characteristics of the image, wherein the filter kernels of the S layers are respectively in the directions
Figure 632804DEST_PATH_IMAGE001
Figure 539449DEST_PATH_IMAGE002
Figure 380366DEST_PATH_IMAGE003
Figure 123195DEST_PATH_IMAGE004
The 4 Gabor filters simulate the receptive field of visual cortex to perform filtering operation on the received domain image, and 49 × 4 characteristic graphs with the size of 4pixel × 4pixel are obtained after filtering;
layer C: firstly, adding operation is carried out on each feature map output by the S layer to obtain a feature map with the size of 49 x 4, linear normalization is carried out on the feature maps to distribute data into intervals of [0, 1], then maximum competition operation is carried out according to rows, namely, only the feature with the strongest intensity is reserved in each row to be used as the most matched direction feature at the position, feature values in other directions are set to be 0, and sparse expression of the feature map with the size of 49 x 4 is obtained.
The obtained characteristics are converted into an input pulse sequence through delay coding and calculated according to the following formula:
Figure 97973DEST_PATH_IMAGE047
wherein,
Figure 468911DEST_PATH_IMAGE006
is the firing time of the ith neuron, T =20e-3 is the length of the coding window,
Figure 356096DEST_PATH_IMAGE007
is the intensity value of the ith feature.
The pulse sequence for each image was 49 × 4=196, the different images were represented by different pulse sequences, the encoded pulse sequences entered the network via the input layer, and the total number of neurons in the input layer was 7 × 49 × 4= 1372.
Step two: after receiving the input pulse, updating the membrane potential of the neuron in the output layer based on the improved SRM model;
in this embodiment, the neurons of the spiking neural network adopt the improved SRM model, and a concise description of the dynamics of the spiking neurons is provided by a continuous kernel function, which is beneficial to clearly showing the contribution of a plurality of input sources to neuron firing. Compared with the original SRM model, the improved SRM model considers more physiological details, and introduces post-depolarization potential in the refractory nucleus to describe the process of slow rise of membrane potential after the neuron sends out pulse:
Figure 179695DEST_PATH_IMAGE048
Figure 692585DEST_PATH_IMAGE049
is a neuron i in
Figure 918030DEST_PATH_IMAGE006
A post-depolarization process of the membrane potential after the firing pulse,
Figure 241695DEST_PATH_IMAGE026
is the magnitude of the back depolarization(s),
Figure 818170DEST_PATH_IMAGE006
is the pulse firing time of the neuron i,
Figure 889679DEST_PATH_IMAGE032
is a time constant.
Is introduced into
Figure 969631DEST_PATH_IMAGE008
The brain waves, which are important for the synchronous neural activity, are oscillated and injected as an external input currentInto the output neuron, it is modeled as a cosine wave.
Figure 464197DEST_PATH_IMAGE050
Wherein,
Figure 652602DEST_PATH_IMAGE020
is that
Figure 382660DEST_PATH_IMAGE008
The source of the oscillation is set to be,
Figure 192485DEST_PATH_IMAGE027
is that
Figure 248165DEST_PATH_IMAGE008
The amplitude of the oscillations is such that,
Figure 658287DEST_PATH_IMAGE051
is that
Figure 926457DEST_PATH_IMAGE008
The frequency of the oscillations is such that,
Figure 121946DEST_PATH_IMAGE033
is the initial phase. In the impulse neural network, the input impulse sequence after each image coding is respectively arranged in the sequence of ' 0 ', ' 1 ', ' 2 ', ' 3 ', ' 4 ', ' 5 ' and ' 6
Figure 207583DEST_PATH_IMAGE008
The input network is oscillated at the troughs of different periods as shown in fig. 5 (a). Excitatory inputs from the input layer received by neurons in the output layer of the network are expressed as follows:
Figure 714788DEST_PATH_IMAGE010
wherein,
Figure 662015DEST_PATH_IMAGE016
is the total input to the input layer neurons,
Figure 836644DEST_PATH_IMAGE021
is the synaptic weight between input layer neuron j to output layer neuron i,
Figure 96112DEST_PATH_IMAGE024
is a normalization factor that is a function of,
Figure 90613DEST_PATH_IMAGE028
is the pulse firing instant of the pre-synaptic neuron j,
Figure 841531DEST_PATH_IMAGE029
Figure 870667DEST_PATH_IMAGE030
is a time constant.
When an output layer neuron fires a pulse, the pulse is transmitted to other neurons in the same layer, as well as to the inhibitory layer neurons. Excitatory inputs received by neurons in the output layer from other neurons in the same layer are expressed as follows:
Figure 32527DEST_PATH_IMAGE011
Figure 779903DEST_PATH_IMAGE017
is the total input of the neurons of the same layer,
Figure 334512DEST_PATH_IMAGE022
is the synaptic weight between the pre-synaptic neuron j and the post-synaptic neuron i in the output layer.
The inhibitory feedback received from the inhibitory neurons is expressed as follows:
Figure 342788DEST_PATH_IMAGE012
Figure 816495DEST_PATH_IMAGE018
is to suppress the feedback of the feedback signal,
Figure 660954DEST_PATH_IMAGE023
are synaptic weights between inhibitory neurons to output layer neurons,
Figure 878309DEST_PATH_IMAGE025
is the magnitude of the suppressed feedback and,
Figure 6671DEST_PATH_IMAGE006
is the pulse firing time of the output layer neurons,
Figure 385700DEST_PATH_IMAGE031
is a time constant.
In summary, the membrane potential of the neurons in the output layer in the model is calculated as follows:
Figure 983034DEST_PATH_IMAGE052
in the present embodiment, the first and second electrodes are,
Figure 125784DEST_PATH_IMAGE053
3,
Figure 984019DEST_PATH_IMAGE054
Figure 674894DEST_PATH_IMAGE055
Figure 618579DEST_PATH_IMAGE056
,,
Figure 302370DEST_PATH_IMAGE057
Figure 890478DEST_PATH_IMAGE058
Figure 876888DEST_PATH_IMAGE059
Figure 698083DEST_PATH_IMAGE060
simulation time interval
Figure 936297DEST_PATH_IMAGE061
One time simulation iteration duration
Figure 503545DEST_PATH_IMAGE062
The number of iterations is set to 20.
Step three: updating synapse weight values input to an output layer by using a supervision group Tempotron, and quickly activating a group of output neurons to memorize and input;
as shown in fig. 4, is a schematic diagram of the population Tempotron learning strategy. In each iterative learning process, for each input pulse mode, if the number of neurons responding to the input mode by the output layer is different by K to reach the size of a preset nerve population, synaptic weights between the input layer's firing neurons and the output layer's non-firing neurons corresponding to the first K maximum membrane potentials are strengthened, and an interlayer weight updating formula is as follows:
Figure 785490DEST_PATH_IMAGE063
wherein,
Figure 969347DEST_PATH_IMAGE035
is the update quantity of the connection weight between the input layer and the output layer,
Figure 480094DEST_PATH_IMAGE064
is the learning rate of the inter-layer weights, and d is the expected output flag including: a value of 0 or 1, or a combination thereof,
Figure 167427DEST_PATH_IMAGE037
is the pulse emitting time of the presynaptic neuron j, K is the number of neurons which need to be activated when the distance reaches the size of the nerve population,
Figure 623204DEST_PATH_IMAGE038
is the time corresponding to the maximal membrane potential of the kth most active post-synaptic neuron.
In the present embodiment, the first and second electrodes are,
Figure 763198DEST_PATH_IMAGE065
the number of output neurons was 100, and the number of neurons in the neural population was 12. Before learning, the interlayer initial weight is randomly initialized according to normal distribution, the mean value is 0.001, and the standard deviation is 0.002. As shown in fig. 6(a), after learning, for each input pattern, the weight between the firing neurons of the input layer and the responding neural population of the output layer is enhanced. Due to the enhancement of the inter-layer synaptic weight, near the time of input pulse pattern presentation, the output layer presents the neural population issuing activity encoding input, as shown by the activities corresponding to '0', '1', '2', '3', '4', '5', and '6' of fig. 5 (b).
Step four: after neurons in an output layer are activated, updating synaptic weights among the activated neurons in the layer by using unsupervised STDP to form enhanced storage memory of a circulation sub-network;
after the neuron on the output layer is activated, when the difference between the release times of the pre-synaptic neuron and the post-synaptic neuron is in [ 2 ]
Figure 343215DEST_PATH_IMAGE066
]Within the interval, the synaptic weight will be updated. When the presynaptic neuron is issued before the postsynaptic neuron, the synaptic weight is enhanced; the synaptic weight is weakened when the pre-synaptic neuron fires later than the post-synaptic neuron. The intra-layer weight value updating formula is as follows:
Figure 885055DEST_PATH_IMAGE039
wherein,
Figure 508803DEST_PATH_IMAGE040
is the update quantity of the connection weight in the output layer,
Figure 136093DEST_PATH_IMAGE041
and
Figure 988643DEST_PATH_IMAGE067
the magnitude of the enhancement and the attenuation are indicated separately,
Figure 650568DEST_PATH_IMAGE037
and
Figure 445218DEST_PATH_IMAGE006
respectively representing the moment of pulse firing for pre-synaptic neuron j and post-synaptic neuron i,
Figure 559804DEST_PATH_IMAGE042
Figure 481624DEST_PATH_IMAGE043
is the time constant of the time at which,
Figure 122690DEST_PATH_IMAGE044
represents the decay time constant of the NMDA receptor.
In the present embodiment, the first and second electrodes are,
Figure 229186DEST_PATH_IMAGE068
Figure 706435DEST_PATH_IMAGE069
. Before learning, the weight of the intra-layer cyclic connection is initialized randomly, and the initial weight is [0, 1e-5 ]]Within the interval; after learning, the internal connections of the neuron population in response to each input pattern of the input layer are enhanced to form a cyclic subnetwork, as shown in fig. 6 (b). Self-associative memory is stored in enhanced cyclic connection weights within the neural population.
Step five: and D, while the step four is executed, updating the synaptic weights between the inhibition layer and the output layer by using unsupervised inhibition plasticity, and inhibiting the separation of the release time of the neural populations with different inputs of feedback guarantee memory.
The output layer and the inhibition layer adopt a one-to-one connection mode, and synapse weights of the connection are fixed and do not participate in the weightsAnd (5) training. The inhibition layer and the output layer adopt a full connection mode without connection of diagonal, the value on the diagonal of the weight matrix is 0, and the rest values are 1. The time difference between the current time t and the latest issuance of the output neuron is set to
Figure RE-689912DEST_PATH_IMAGE070
]Within the interval, the inhibition weight of the output layer to the neuron with input to the inhibition neuron is reduced, and the inhibition weight to the neuron without input to the inhibition neuron is not changed. The updated formula for inhibition of synaptic plasticity is as follows:
Figure RE-476602DEST_PATH_IMAGE045
wherein,
Figure RE-726318DEST_PATH_IMAGE046
is the update amount of the connection weight between the suppression layer and the output layer,
Figure RE-180433DEST_PATH_IMAGE006
representing the pulse firing instant of the post-synaptic neuron i. After the nerve population of the output layer is subjected to the stimulation and the pulse, the excitatory input, the depolarized current after discharge and the external part from the nerve population of the same layer
Figure RE-224612DEST_PATH_IMAGE008
The oscillatory input enables a neural population to be followed without a subsequent stimulation input
Figure RE-498599DEST_PATH_IMAGE008
The oscillation period is continuously issued, thereby maintaining short-term memory. After the neurons in the output layer are released, the inhibition feedback from the neurons in the inhibition layer prevents the neural population from being continuously released after being released, and avoids mutual interference among different memory items, as shown in the activities corresponding to fig. 5(b), 0 ', 01 ', 012 ', 0123 ', 01234 ', 012345 ' and 0123456 ', the subsequent input stimulation appears
Figure RE-20847DEST_PATH_IMAGE008
Near the peak of the oscillation period, the nerve groups responding to different input stimuli are issued at different moments in sequence, and the separation of the issuing time of the nerve groups with different inputs is restrained and guaranteed to be memorized in a feedback mode.
In this example, the number of inhibitory neurons is 100, as is the number of output neurons,
Figure 788694DEST_PATH_IMAGE071
as shown in fig. 7, the convergence speed of the inter-layer synaptic weights is the convergence speed, and under the group Tempotron learning strategy, a group of output neurons is activated rapidly to perform memory coding on the input, and the inter-layer synaptic weights converge after only 4 times of learning. Experimental results show that the pulse neural network rapid memory coding method based on multi-synapse plasticity can effectively improve the coding speed and stability of memory.
Corresponding to the foregoing embodiments of the multi-synaptic plasticity pulse neural network-based fast memory coding method, the invention also provides embodiments of a multi-synaptic plasticity pulse neural network-based fast memory coding device.
Referring to fig. 8, an apparatus for fast memory coding based on a multi-synaptic plasticity pulse neural network according to an embodiment of the present invention includes one or more processors, and is configured to implement the method for fast memory coding based on a multi-synaptic plasticity pulse neural network according to the above embodiment.
The embodiment of the invention based on the multi-synapse plasticity pulse neural network fast memory coding device can be applied to any data processing-capable equipment, such as computers and other equipment or devices. The device embodiments may be implemented by software, or by hardware, or by a combination of hardware and software. The software implementation is taken as an example, and as a logical device, the device is formed by reading corresponding computer program instructions in the nonvolatile memory into the memory for running through the processor of any device with data processing capability. From a hardware aspect, as shown in fig. 8, a hardware structure diagram of any device with data processing capability based on the multi-synaptic plasticity pulse neural network fast memory coding device of the present invention is shown, except for the processor, the memory, the network interface, and the nonvolatile memory shown in fig. 8, in an embodiment, any device with data processing capability that the device is located may generally include other hardware according to the actual function of the any device with data processing capability, which is not described again.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the invention. One of ordinary skill in the art can understand and implement it without inventive effort.
The embodiment of the present invention further provides a computer-readable storage medium, on which a program is stored, and when the program is executed by a processor, the method for implementing the fast memory coding method based on the multi-synaptic plasticity pulse neural network in the above embodiments is implemented.
The computer readable storage medium may be an internal storage unit, such as a hard disk or a memory, of any data processing capability device described in any of the foregoing embodiments. The computer readable storage medium may also be an external storage device of the wind turbine, such as a plug-in hard disk, a Smart Media Card (SMC), an SD Card, a Flash memory Card (Flash Card), and the like, provided on the device. Further, the computer readable storage medium may include both an internal storage unit and an external storage device of any data processing capable device. The computer-readable storage medium is used for storing the computer program and other programs and data required by the arbitrary data processing-capable device, and may also be used for temporarily storing data that has been output or is to be output.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way. Although the foregoing has described the practice of the present invention in detail, it will be apparent to those skilled in the art that modifications may be made to the practice of the invention as described in the foregoing examples, or that certain features may be substituted in the practice of the invention. All changes, equivalents and modifications which come within the spirit and scope of the invention are desired to be protected.

Claims (8)

1. The quick memory coding method based on the multi-synaptic plasticity pulse neural network is characterized by comprising the following steps of:
the method comprises the following steps: converting the external stimulus into an input pulse sequence based on a hierarchical coding strategy;
step two: after receiving the input pulse, the pulse neural network updates the membrane potential of the neuron on the output layer based on the improved SRM model;
step three: updating synapse weight values input to an output layer by using a supervision group Tempotron, and activating neuron memory input of the output layer;
step four: after neurons in an output layer are activated, updating synaptic weights among the activated neurons in the layer by using unsupervised STDP to form enhanced storage memory of a circulation sub-network;
step five: and when the step four is executed, unsupervised inhibition synapse plasticity is used, synapse weights from an inhibition layer to an output layer are updated, and separation of release time of different input neural populations of feedback guarantee memory is inhibited.
2. The multi-synaptic plasticity pulse neural network based fast memory coding method according to claim 1, wherein the first step is specifically:
the external stimulus is seven images selected from seven types of '0', '1', '2', '3', '4', '5' and '6' of MNIST handwriting digit sets, the size of each image is 28 pixels and 28 pixels, each image is evenly divided, and 49 receiving domains with the size of 4 pixels and 4 pixels are obtained;
the hierarchical coding strategy uses a two-layer structure consisting of the S-layer of Simple cells and the C-layer of Complex cells to perform feature extraction on external stimuli, and then encodes the extracted features by a delayed coding method, wherein:
and (2) S layer: extracting edge direction characteristics of the image, wherein the filtering kernels of the S layer are respectively divided into 4 directions
Figure 117039DEST_PATH_IMAGE001
Figure 72357DEST_PATH_IMAGE002
Figure 691557DEST_PATH_IMAGE003
Figure 414050DEST_PATH_IMAGE004
The Gabor filters in 4 directions simulate the receptive field of the visual cortex, and respectively perform filtering operation on image blocks corresponding to each receiving domain, and 49 × 4 feature graphs with the size of 4pixel × 4 pixels are obtained after filtering;
layer C: firstly, performing an adding operation on each feature graph output by the S layer to obtain a feature graph with the size of 49 pixels by 4 pixels, performing linear normalization on the feature graphs to distribute data into an interval of [0, 1], and then performing maximum competition operation in rows, wherein the performing of the maximum competition operation in rows comprises: keeping the feature with the strongest intensity in each row as the most matched direction feature at the position, setting the feature values of other directions as 0, and obtaining the sparse expression of the feature map with the size of 4 pixels 9 by 4 pixels;
the obtained characteristics are converted into an input pulse sequence through delay coding and calculated according to the following formula:
Figure 753895DEST_PATH_IMAGE005
wherein,
Figure 953932DEST_PATH_IMAGE006
is the firing time of the ith neuron, T is the length of the coding window,
Figure 868668DEST_PATH_IMAGE007
is the intensity value of the ith feature.
3. The multi-synaptic plasticity pulse neural network-based fast memory coding method according to claim 1, wherein the second step is specifically:
the improved SRM model introduces a post-depolarization potential in the refractory nucleus and simultaneously introduces
Figure 888576DEST_PATH_IMAGE008
In the pulse neural network, after a neuron of an output layer emits a pulse, the pulse is transmitted to other neurons and inhibitory layer neurons in the same layer, and the membrane potential of the neuron of the output layer in the model receives excitation input from the input layer and other neurons in the same layer and inhibition feedback from the inhibitory neurons, so that the membrane potential of the neuron of the output layer in the model is calculated according to the following formula:
Figure 32113DEST_PATH_IMAGE009
Figure 86657DEST_PATH_IMAGE010
Figure 906714DEST_PATH_IMAGE011
Figure 679498DEST_PATH_IMAGE012
Figure 626725DEST_PATH_IMAGE013
Figure 535775DEST_PATH_IMAGE014
wherein,
Figure 792313DEST_PATH_IMAGE015
is the membrane potential of the neuron i,
Figure 786814DEST_PATH_IMAGE016
is the total input to the input layer neurons,
Figure 537732DEST_PATH_IMAGE017
is the total input of the neurons of the same layer,
Figure 485310DEST_PATH_IMAGE018
is to suppress the feedback of the feedback signal,
Figure 522536DEST_PATH_IMAGE019
is a neuron i in
Figure 145279DEST_PATH_IMAGE006
A post-depolarization process of the membrane potential after the firing pulse,
Figure 558942DEST_PATH_IMAGE020
is that
Figure 567218DEST_PATH_IMAGE008
The source of the oscillation is set to be,
Figure 509767DEST_PATH_IMAGE021
Figure 354226DEST_PATH_IMAGE022
Figure 492952DEST_PATH_IMAGE023
synaptic weights from input layer neurons to output layer neurons, from output layer neurons, and from inhibitory neurons to output layer neurons, respectively,
Figure 231101DEST_PATH_IMAGE024
is a normalization factor that is a function of,
Figure 751075DEST_PATH_IMAGE025
Figure 941885DEST_PATH_IMAGE026
Figure 821985DEST_PATH_IMAGE027
respectively inhibiting feedback and post-depolarization,
Figure 680220DEST_PATH_IMAGE008
The amplitude of the oscillations is such that,
Figure 371095DEST_PATH_IMAGE028
Figure 973502DEST_PATH_IMAGE006
the pulse firing times of the pre-synaptic neuron j and the post-synaptic neuron i respectively,
Figure 532660DEST_PATH_IMAGE029
Figure 855188DEST_PATH_IMAGE030
Figure 841598DEST_PATH_IMAGE031
Figure 397214DEST_PATH_IMAGE032
is a time constant, f is
Figure 166586DEST_PATH_IMAGE008
The frequency of the oscillation, t is a time variable,
Figure 733834DEST_PATH_IMAGE033
is the initial phase.
4. The multi-synaptic plasticity pulse neural network flash memory coding method according to claim 1, wherein the supervised group Tempotron in the third step updates synaptic weights between the input layer and the output layer, and the updating method is as follows:
Figure 15780DEST_PATH_IMAGE034
wherein,
Figure 402899DEST_PATH_IMAGE035
is the weight of the input layer to output layer connection,
Figure 179225DEST_PATH_IMAGE036
is the learning rate of the inter-layer weights, d is the expected output flag includes 0 or 1,
Figure 866558DEST_PATH_IMAGE037
is the pulse emitting time of the presynaptic neuron j, K is the number of neurons which need to be activated when the distance reaches the size of the nerve population,
Figure 116143DEST_PATH_IMAGE038
is the time corresponding to the maximal membrane potential of the kth most active post-synaptic neuron.
5. The method of claim 1, wherein the unsupervised STDP in step four updates synaptic weights between active neurons in the intra-layer by:
Figure 131503DEST_PATH_IMAGE039
wherein,
Figure 304996DEST_PATH_IMAGE040
is the update quantity of the connection weight in the output layer,
Figure 991977DEST_PATH_IMAGE041
and
Figure 163195DEST_PATH_IMAGE042
the magnitude of the enhancement and the attenuation are indicated separately,
Figure 790486DEST_PATH_IMAGE037
and
Figure 157882DEST_PATH_IMAGE006
respectively representing the moment of pulse firing for pre-synaptic neuron j and post-synaptic neuron i,
Figure 554228DEST_PATH_IMAGE043
Figure 99610DEST_PATH_IMAGE044
is the time constant of the time at which,
Figure 869989DEST_PATH_IMAGE045
represents the decay time constant of the NMDA receptor.
6. The multi-synaptic plasticity pulse neural network based fast memory coding method according to claim 1, wherein the updating method for suppressing synaptic plasticity in the fifth step is:
Figure 650863DEST_PATH_IMAGE046
wherein,
Figure 42661DEST_PATH_IMAGE047
is the update amount of the connection weight between the suppression layer and the output layer,
Figure 804950DEST_PATH_IMAGE006
representing the pulse firing instant of the post-synaptic neuron i.
7. A multi-synaptic plasticity pulse neural network-based fast memory coding device, comprising one or more processors, and being used for implementing the multi-synaptic plasticity pulse neural network-based fast memory coding method according to any one of claims 1 to 6.
8. A computer-readable storage medium, on which a program is stored, which, when executed by a processor, implements the multi-synaptic plasticity pulse neural network-based fast memory coding method of any one of claims 1-6.
CN202111497073.2A 2021-12-09 2021-12-09 Multi-synaptic plasticity pulse neural network-based fast memory coding method and device Pending CN114118383A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111497073.2A CN114118383A (en) 2021-12-09 2021-12-09 Multi-synaptic plasticity pulse neural network-based fast memory coding method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111497073.2A CN114118383A (en) 2021-12-09 2021-12-09 Multi-synaptic plasticity pulse neural network-based fast memory coding method and device

Publications (1)

Publication Number Publication Date
CN114118383A true CN114118383A (en) 2022-03-01

Family

ID=80364420

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111497073.2A Pending CN114118383A (en) 2021-12-09 2021-12-09 Multi-synaptic plasticity pulse neural network-based fast memory coding method and device

Country Status (1)

Country Link
CN (1) CN114118383A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114611686A (en) * 2022-05-12 2022-06-10 之江实验室 Synapse delay implementation system and method based on programmable neural mimicry core
CN115327373A (en) * 2022-04-20 2022-11-11 岱特智能科技(上海)有限公司 Hemodialysis equipment fault diagnosis method based on BP neural network and storage medium
CN115429293A (en) * 2022-11-04 2022-12-06 之江实验室 Sleep type classification method and device based on impulse neural network
CN116080688A (en) * 2023-03-03 2023-05-09 北京航空航天大学 Brain-inspiring-like intelligent driving vision assisting method, device and storage medium
CN116542291A (en) * 2023-06-27 2023-08-04 北京航空航天大学 Pulse memory image generation method and system for memory loop inspiring
CN117456577A (en) * 2023-10-30 2024-01-26 苏州大学 System and method for realizing expression recognition based on optical pulse neural network

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115327373A (en) * 2022-04-20 2022-11-11 岱特智能科技(上海)有限公司 Hemodialysis equipment fault diagnosis method based on BP neural network and storage medium
CN114611686A (en) * 2022-05-12 2022-06-10 之江实验室 Synapse delay implementation system and method based on programmable neural mimicry core
CN115429293A (en) * 2022-11-04 2022-12-06 之江实验室 Sleep type classification method and device based on impulse neural network
CN115429293B (en) * 2022-11-04 2023-04-07 之江实验室 Sleep type classification method and device based on impulse neural network
CN116080688A (en) * 2023-03-03 2023-05-09 北京航空航天大学 Brain-inspiring-like intelligent driving vision assisting method, device and storage medium
CN116542291A (en) * 2023-06-27 2023-08-04 北京航空航天大学 Pulse memory image generation method and system for memory loop inspiring
CN116542291B (en) * 2023-06-27 2023-11-21 北京航空航天大学 Pulse memory image generation method and system for memory loop inspiring
CN117456577A (en) * 2023-10-30 2024-01-26 苏州大学 System and method for realizing expression recognition based on optical pulse neural network
CN117456577B (en) * 2023-10-30 2024-04-26 苏州大学 System and method for realizing expression recognition based on optical pulse neural network

Similar Documents

Publication Publication Date Title
CN114118383A (en) Multi-synaptic plasticity pulse neural network-based fast memory coding method and device
Rabinovich et al. Dynamical principles in neuroscience
Taherkhani et al. A review of learning in biologically plausible spiking neural networks
Diehl et al. Unsupervised learning of digit recognition using spike-timing-dependent plasticity
Yu et al. Spike timing or rate? Neurons learn to make decisions for both through threshold-driven plasticity
Demin et al. Recurrent spiking neural network learning based on a competitive maximization of neuronal activity
Yu et al. Rapid feedforward computation by temporal encoding and learning with spiking neurons
WO2015020802A2 (en) Computed synapses for neuromorphic systems
TW201543382A (en) Neural network adaptation to current computational resources
Xu et al. Spike trains encoding and threshold rescaling method for deep spiking neural networks
TW201523463A (en) Methods and apparatus for implementation of group tags for neural models
Shaw Donald hebb: The organization of behavior
Ma et al. A memristive neural network model with associative memory for modeling affections
CN112101535A (en) Signal processing method of pulse neuron and related device
Zheng et al. An introductory review of spiking neural network and artificial neural network: from biological intelligence to artificial intelligence
Varier et al. Establishing, versus maintaining, brain function: a neuro-computational model of cortical reorganization after injury to the immature brain
CN110378469A (en) SCNN inference device based on asynchronous circuit, PE unit, processor and computer equipment thereof
Li et al. A review on synergistic learning
Zhao et al. A Feed-Forward Neural Network for Increasing the Hopfield-Network Storage Capacity
US11289175B1 (en) Method of modeling functions of orientation and adaptation on visual cortex
Pashaie et al. Self-organization in a parametrically coupled logistic map network: A model for information processing in the visual cortex
Song et al. The spiking neural network based on fMRI for speech recognition
Vaila et al. Spiking CNNs with PYNN and NEURON
Viswanathan et al. A Study of Prefrontal Cortex Task Switching Using Spiking Neural Networks
Cutsuridis Computational models of memory formation in healthy and diseased microcircuits of the hippocampus

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination