CN113723594A - Impulse neural network target identification method - Google Patents

Impulse neural network target identification method

Info

Publication number
CN113723594A
Authority
CN
China
Prior art keywords
neuron
layer
neurons
weight
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111019432.3A
Other languages
Chinese (zh)
Other versions
CN113723594B (en)
Inventor
牛立业
魏颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaoxing Beida Information Technology Innovation Center
Original Assignee
Shaoxing Beida Information Technology Innovation Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaoxing Beida Information Technology Innovation Center filed Critical Shaoxing Beida Information Technology Innovation Center
Priority to CN202111019432.3A priority Critical patent/CN113723594B/en
Publication of CN113723594A publication Critical patent/CN113723594A/en
Application granted granted Critical
Publication of CN113723594B publication Critical patent/CN113723594B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
      • G06 - COMPUTING; CALCULATING OR COUNTING
        • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N 3/00 - Computing arrangements based on biological models
            • G06N 3/02 - Neural networks
              • G06N 3/04 - Architecture, e.g. interconnection topology
                • G06N 3/048 - Activation functions
                • G06N 3/049 - Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
              • G06N 3/08 - Learning methods
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
      • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
        • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
          • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Feedback Control In General (AREA)

Abstract

The invention provides a pulse neural network (spiking neural network) target identification method. By constructing a proportional-attenuation characteristic in the spiking neuron model, the method avoids the loss of neuron membrane potential, maintains the firing rate of the neurons, and resolves the under-activation of neuron pulse firing; by constructing an adaptive-threshold characteristic in the spiking neuron model, it effectively reduces the over-activation of the neuron firing rate.

Description

Impulse neural network target identification method
Technical Field
The invention belongs to the field of artificial intelligence image processing, and particularly relates to a pulse neural network target identification method.
Background
In recent years, Spiking Neural Network (SNN) algorithm models built from biologically interpretable spiking neurons have become an indispensable research tool for studying brain-like cognition, learning, and memory mechanisms. An SNN is a neural network driven by sparse events; it is hardware friendly and energy efficient, so research on deep SNN algorithms has gradually gained favor among researchers. In training a traditional neural network, information is carried by high-precision floating-point numbers, whereas in the brain information is transmitted as the action potentials of the biological system, i.e., electrical impulses. SNNs are inspired by biological systems and process discrete spike-train information. Training an Artificial Neural Network (ANN) relies mainly on the gradient-descent-based back-propagation algorithm, but the discrete spike trains processed by an SNN cannot be differentiated to back-propagate errors and thereby train the network. Learning and training of SNN algorithms therefore rely mainly on the synaptic plasticity produced by the stimulation of pre- and post-synaptic neurons. To build large-scale, usable neural network models and neuromorphic computing platforms that are closer to biology, deep spiking neural network algorithms are generally not trained directly.
At present, general deep networks adopt an ANN-to-SNN conversion approach: a conventional ANN is fully trained, and a series of operations then converts the rate-based network into a deep SNN target recognition method based on a spiking neuron model. The prior art cannot convert an ANN into an SNN directly and equivalently, and the conversion process causes a loss of accuracy in the resulting network target identification method.
Summary of the invention
The invention aims to solve the technical problem of the accuracy loss incurred by a network target identification method in the process of converting an ANN into an SNN.
The invention provides a pulse neural network target identification method, which comprises the following steps:
S1, inputting a data set, selecting an artificial neural network according to the number of input-layer neurons and the number of output-layer neurons, setting the activation function of the artificial neural network's neurons, and removing the bias terms;
S2, training the artificial neural network output by S1 and saving the weights;
S3, selecting an LIF pulse neuron as the basic neuron: if the membrane potential U(t) of the basic neuron at time t exceeds the firing threshold, a pulse is fired at the current time; the number of front-layer neurons that fire pulses at the current time and the number of pulses received by the neuron at the current time are counted; the current attenuation coefficient and the firing threshold are calculated from these counts; and the current attenuation coefficient and firing threshold are added to the current LIF neuron to construct an adaptive neuron;
S4, replacing the activation function of S1 with the adaptive neuron to form a pulse neural network, normalizing the weights of S2, and transplanting the normalized weights into the pulse neural network.
Preferably, in S1, the selection conditions are: the number of neurons in the input layer is equal to the number of image pixels in the data set, and the number of neurons in the output layer is equal to the number of image types in the data set.
Preferably, in S1, the activation function is the rectified linear unit (ReLU).
Preferably, in S3, the membrane potential U(t) of the basic neuron at time t can be calculated by either of the following formulas:
U(t) = Urest + RI0 + (U(t0) - Urest - RI0) * e^(-(t-t0)/τm)
or
U(t) = Urest + RI0 + (U(t0) - Urest - RI0) * e^(-k(t-t0)/τm)
where U(t0) denotes the membrane potential of the neuron at the initial time t0 and is used to calculate the membrane potential at any time t: if the neuron receives no external stimulus, U(t0) = Urest; if the neuron has already received another signal input before the current external signal input, U(t0) = Urest + RI0.
Here RI0 is the input to the neuron at t0; Urest denotes the resting potential of the neuron membrane; τm is the time constant of the neuron membrane potential, defined from the equivalent circuit of the spiking neuron model as τm = RC, where R is the membrane resistance and C is the membrane capacitance of the equivalent circuit; and k is the current attenuation coefficient.
Preferably, in S3, the firing threshold is calculated by the formula:
Vthr = Vthr0 + n(S)*Vplus
where Vthr0 denotes the initial threshold of the spiking neuron, Vthr denotes the current threshold of the neuron after pulses have been fired, n(S) denotes the number of pulses the neuron has fired, and Vplus denotes the amount by which the threshold increases each time the membrane potential exceeds the firing threshold.
Preferably, in S4, the normalization method is:
a, initializing the network parameters: the maximum activation value max_activation of the neurons in every layer except the input layer is set to 0, the maximum weight max_weight of each layer is set to 0, and the normalization factor norm_factor is set to 0;
b, searching the weights of each layer of the network: weight = max(Wij), where Wij denotes the weight connecting neuron j in layer l-1 to neuron i in layer l, and max_weight = max(weight);
c, recording the neuron activation values neural_activation of each layer and setting max_activation = max(neural_activation), then calculating the normalization factor of each layer of the network as norm_factor = max(max_weight, max_activation);
d, performing the weight normalization of the network: Wij = Wij/norm_factor.
Preferably, the formula of the ReLU is
Xi^l = max(0, Σj Wij * Xj^(l-1))
where Xi^l denotes the activation value of neuron i in layer l; Wij denotes the weight connecting neuron j in layer l-1 to neuron i in layer l; and Xj^(l-1) denotes the activation value of neuron j in layer l-1.
Preferably, in S3, the current attenuation coefficient is calculated by the formula:
k = N(Σj[Sj(t-1) = 1]) / N
where N denotes the number of neurons in the previous layer, and N(Σj[Sj(t-1) = 1]) denotes the number of previous-layer neurons connected to neuron i of the current layer that fired pulses at time t-1; Sj(t-1) = 1 if neuron j fired a pulse at time t-1, otherwise Sj(t-1) = 0.
Compared with the prior art, the invention has the following advantages and effects:
the invention can effectively reduce the loss of the membrane potential of the neuron, solve the problem of the short activation of the pulse emission of the neuron and solve the problem of the over activation of the neuron. The pulse release rate of the neurons is effectively adjusted, and the network performance loss in the process of converting the artificial neural network into the pulse neural network is reduced.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention.
Fig. 1 is a general flow chart of the present invention.
Fig. 2 is a flow chart of the ANN network preprocessing of the present invention.
FIG. 3 is a flow chart of the construction of a spiking neuron model according to the present invention.
FIG. 4 is a diagram of the weight normalization algorithm of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1:
As shown in FIG. 1 to FIG. 3, a spiking neural network target identification method includes the following steps:
s1, inputting the data set, as shown in figure 2, selecting the artificial neural network according to the input layer neuron number and the output layer neuron number, setting the activation function of the artificial neural network neurons and removing the bias term. Preferably, all bias terms of the ANN network are removed. Preferably, the ANN input layer neuron number is equal to the input image pixel number, and assuming that the input image size is n × n, then the input layer neuron number is n2(ii) a The number of neurons in the output layer is equal to the number of image types in the data set, and assuming that the data set images share m types, the number of neurons in the output layer is m. Preferably, the activation function of the ANN neuron is set to a linear correction unit ReLU, the functional expression of which can be given by the following equation:
xi^l = max(0, Σj ωij * xj^(l-1))
where the left-hand side xi^l denotes the activation value of neuron i in layer l; ωij denotes the weight connecting neuron j in layer l-1 to neuron i in layer l; and xj^(l-1) denotes the activation value of neuron j in layer l-1.
S2, fully training the ANN with the above constraints and saving the weights, in preparation for conversion to the SNN algorithm.
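By way of illustration of steps S1 and S2, the following is a minimal sketch in Python, assuming a PyTorch implementation; the 28 × 28 input size, the 10 output classes, the single hidden layer of 128 units, and the training hyper-parameters are illustrative assumptions and are not prescribed by the patent.

import torch
import torch.nn as nn

class BiasFreeANN(nn.Module):
    """Fully connected ANN with ReLU activations and all bias terms removed (step S1)."""
    def __init__(self, n_pixels=28 * 28, n_classes=10, n_hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_pixels, n_hidden, bias=False),   # bias term removed
            nn.ReLU(),
            nn.Linear(n_hidden, n_classes, bias=False),  # bias term removed
        )

    def forward(self, x):
        return self.net(x)

def train_and_save(model, train_loader, epochs=10, lr=1e-3, path="ann_weights.pt"):
    """Step S2: fully train the constrained ANN and save its weights for later conversion."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
    torch.save(model.state_dict(), path)  # the saved weights are reused when building the SNN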
S3, as shown in fig. 3, the construction process of the pulse neuron model is as follows:
3.1, a leaky integrate-and-fire (LIF) spiking neuron is selected as the basic neuron for building the proportional-attenuation, firing-threshold spiking neuron model. The LIF neuron keeps accumulating membrane potential as it receives external input; when the membrane potential ui(t) of the neuron exceeds its firing threshold Vthr, the neuron fires a pulse, Si(t) = 1, the membrane potential of the spiking neuron is then reset to the resting potential urest, and the threshold of the neuron increases by Vplus for each pulse it fires.
The neuron membrane potential u(t) at time t can be calculated by the following equation:
u(t) = urest + RI0 + (u(t0) - urest - RI0) * e^(-(t-t0)/τm)
where RI0 is the input to the neuron at t0; urest denotes the resting potential of the neuron membrane; τm = RC is the time constant of the neuron membrane potential; and u(t0) = urest + Δu is the neuron membrane potential at t0. When the neuron receives no external stimulus, its membrane potential decays to the resting potential, u0 = urest.
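As a numeric illustration of the solution above (a small sketch; the resting potential of -70 mV, the constant input R*I0 = 20 mV, and the time constant of 20 ms are illustrative values, not values taken from the patent):

import math

def lif_membrane_potential(t, t0=0.0, u0=-70.0, u_rest=-70.0, ri0=20.0, tau_m=20.0):
    """Analytic LIF solution: u(t) = u_rest + R*I0 + (u(t0) - u_rest - R*I0) * exp(-(t - t0) / tau_m)."""
    return u_rest + ri0 + (u0 - u_rest - ri0) * math.exp(-(t - t0) / tau_m)

# With a constant input R*I0 = 20 mV the potential relaxes from the resting value of -70 mV toward -50 mV:
print(lif_membrane_potential(0.0))    # -70.0
print(lif_membrane_potential(20.0))   # about -57.4, one membrane time constant later
print(lif_membrane_potential(200.0))  # about -50.0, the steady-state value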
Alternatively, since the SNN neuron membrane potential is calculated and updated at every simulation time step dt, the decay coefficient of the neuron membrane potential at each time step is
λ = e^(-dt/τm)
so that the membrane potential decays per step according to u(t) = urest + (u(t - dt) - urest) * e^(-dt/τm).
The above equation indicates that the membrane potential decay coefficient is constant at every time step. However, changes in neuronal membrane potential are affected by a variety of neurotransmitters, which are released at different times under different stimuli, so the decay of the neuronal membrane potential should not be a constant at every time step: when the neuron receives a relatively strong stimulus at the present moment, the number of neurotransmitter molecules it releases increases correspondingly. Based on the above analysis, a proportional-attenuation leaky integrate-and-fire neuron is constructed, and the membrane potential change can also be calculated from the following equation:
u(t) = urest + RI0 + (u(t0) - urest - RI0) * e^(-k(t-t0)/τm)
In the equation, k is the current attenuation coefficient, whose magnitude is proportional to the stimulation received by the neuron at the current time: the stronger the received stimulation, the larger k, and the greater the corresponding attenuation of the neuron membrane potential. The membrane potential attenuation coefficient at each time step is thus determined by the magnitude of the stimulus the neuron receives at that simulation time step. Since the spiking neural network processes spike-train information, the membrane potential ui(t) of the neuron can equivalently be updated by an equation that operates directly on the spike train:
ui(t) = e^(-k*dt/τm) * ui(t-1) + Σj Wj,i * Sj(t-1)
In the equation, ui(t-1) denotes the membrane potential of neuron i at time t-1, and Wj,i denotes the weight connecting neuron i of the current layer with neuron j of the previous layer.
3.2, counting the total number of front-layer neurons connected to the current neuron in the SNN; N denotes the number of previous-layer neurons connected to the current neuron.
3.3, counting the number of pulses received by the current neuron from the connected front-layer neurons at the current time. Sj denotes the spike train of the presynaptic neuron j: if neuron j fired a pulse at time t-1, then Sj(t-1) = 1, otherwise Sj(t-1) = 0. N(Σj[Sj(t-1) = 1]) denotes the number of front-layer neurons connected to neuron i of the current layer that fired pulses at time t-1.
3.4, calculating the membrane potential attenuation coefficient of the current neuron from the number of connected neurons and the number of pulses received at the current time, and calculating the neuron's threshold at the current time from the number of pulses it has fired.
The current attenuation coefficient is calculated by the formula:
k = N(Σj[Sj(t-1) = 1]) / N
In the equation, N denotes the number of neurons in the previous layer.
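As a concrete reading of this formula, a minimal sketch follows; the tensor name pre_spikes_prev is an assumption introduced only for illustration.

import torch

def attenuation_coefficient(pre_spikes_prev):
    """k = N(sum_j [S_j(t-1) = 1]) / N: the fraction of connected front-layer neurons that fired at t-1.
    pre_spikes_prev is a 0/1 tensor of shape (N,) holding S_j(t-1)."""
    n_total = pre_spikes_prev.numel()        # N, the number of front-layer neurons
    n_fired = pre_spikes_prev.sum().item()   # N(sum_j [S_j(t-1) = 1]), the number that fired
    return n_fired / n_total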
The firing threshold can be expressed as:
Vthr = Vthr0 + n(S)*Vplus
where Vthr0 denotes the initial threshold of the spiking neuron, Vthr denotes the current threshold of the neuron after pulses have been fired, n(S) denotes the number of pulses the neuron has fired, and Vplus denotes the amount by which the threshold increases each time the membrane potential exceeds the firing threshold. The more pulses the neuron has fired, the larger its threshold becomes, and the more input it must accumulate before it can fire the next pulse. Vplus is a constant, preferably 0.05.
3.5, adding the current attenuation coefficient and the adaptive neuron threshold to the LIF neuron yields the proportional-attenuation, adaptive-threshold neuron.
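Combining steps 3.1 to 3.5, the sketch below (Python, assuming PyTorch) implements one layer of the proportional-attenuation, adaptive-threshold neuron. The attenuation coefficient k, the threshold rule Vthr = Vthr0 + n(S)*Vplus, and the reset to the resting potential follow the text above; the discrete decay form e^(-k*dt/τm) and the numeric constants (urest = 0, Vthr0 = 1, τm = 20, dt = 1) are assumptions made for illustration, not values stated in the patent.

import torch

class ProportionalDecayAdaptiveLIF:
    """One layer of proportional-attenuation, adaptive-threshold LIF neurons (steps 3.1-3.5)."""

    def __init__(self, weight, u_rest=0.0, v_thr0=1.0, v_plus=0.05, tau_m=20.0, dt=1.0):
        self.W = weight                                    # (n_out, n_in), taken from the converted ANN layer
        self.u_rest, self.v_thr0, self.v_plus = u_rest, v_thr0, v_plus
        self.tau_m, self.dt = tau_m, dt
        self.u = torch.full((weight.shape[0],), u_rest)    # membrane potentials u_i(t)
        self.n_spikes = torch.zeros(weight.shape[0])       # n(S): pulses fired so far per neuron

    def step(self, pre_spikes):
        """Advance one time step. pre_spikes is the 0/1 vector S_j(t-1) of the previous layer."""
        k = pre_spikes.sum() / pre_spikes.numel()          # attenuation coefficient: fraction of presynaptic spikes
        decay = torch.exp(-k * self.dt / self.tau_m)       # proportional decay (assumed form e^(-k*dt/tau_m))
        self.u = self.u_rest + decay * (self.u - self.u_rest) + self.W @ pre_spikes
        v_thr = self.v_thr0 + self.n_spikes * self.v_plus  # adaptive threshold Vthr = Vthr0 + n(S)*Vplus
        spikes = (self.u > v_thr).float()                  # fire where the membrane potential exceeds the threshold
        self.n_spikes += spikes
        self.u = torch.where(spikes.bool(), torch.full_like(self.u, self.u_rest), self.u)  # reset fired neurons
        return spikes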
S4, the ReLU units of the original network are replaced with the proportional-attenuation, adaptive-threshold spiking neurons constructed above to form the SNN. As shown in FIG. 4, the weights saved from the fully trained ANN are normalized:
firstly, the network parameters are initialized: the maximum activation value max_activation of the neurons in every layer except the input layer is set to 0, the maximum weight max_weight of each layer is set to 0, and the normalization factor norm_factor is set to 0;
then, the weights of each layer of the network are searched: weight = max(Wij) and max_weight = max(weight); the maximum neuron activation value of each layer is recorded as max_activation, and the normalization factor of each layer of the network is calculated as norm_factor = max(max_weight, max_activation);
finally, the weight normalization of the network is performed: Wij = Wij/norm_factor.
S5, the normalized weights are directly transplanted into the ANN whose activation units have been replaced by the proportional-attenuation, adaptive-threshold spiking neurons, completing the construction of the SNN algorithm.
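To make steps S4 and S5 concrete, the sketch below normalizes the saved ANN weights layer by layer and then runs the resulting SNN on a single image. It reuses the ProportionalDecayAdaptiveLIF layer sketched earlier; the dictionary of recorded per-layer maximum activations, the rate-coded input, the simulation length T, and the spike-count readout are illustrative assumptions rather than requirements stated in the patent.

import torch

def normalize_weights(state_dict, max_activations):
    """Layer-wise normalization: norm_factor = max(max_weight, max_activation), Wij = Wij / norm_factor."""
    normalized = {}
    for name, W in state_dict.items():
        max_weight = W.max().item()                    # largest weight of this layer
        max_activation = max_activations[name]         # largest ReLU activation recorded for this layer
        norm_factor = max(max_weight, max_activation)  # per-layer normalization factor
        normalized[name] = W / norm_factor
    return normalized

def classify(image, layers, T=100):
    """Run the converted SNN for T time steps and return the predicted class index.
    image: float tensor with values in [0, 1]; layers: list of ProportionalDecayAdaptiveLIF objects."""
    rates = image.flatten().clamp(0.0, 1.0)                 # pixel intensities used as firing probabilities
    spike_counts = torch.zeros(layers[-1].W.shape[0])
    for _ in range(T):
        spikes = (torch.rand_like(rates) < rates).float()   # rate-coded input spikes
        for layer in layers:
            spikes = layer.step(spikes)
        spike_counts += spikes
    return int(spike_counts.argmax())                       # class whose output neuron fired most often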
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. A pulse neural network target identification method, characterized by comprising the following steps:
S1, inputting a data set, selecting an artificial neural network according to the data set, and setting the activation function of the neurons in the artificial neural network;
S2, training the artificial neural network output by S1 and saving the weights;
S3, selecting an LIF pulse neuron as the basic neuron: if the membrane potential U(t) of the basic neuron at time t exceeds the firing threshold, a pulse is fired at the current time; the number of front-layer neurons that fire pulses at the current time and the number of pulses received by the neuron at the current time are counted; the current attenuation coefficient and the firing threshold are calculated from these counts; and the current attenuation coefficient and firing threshold are added to the current LIF neuron to construct an adaptive neuron;
S4, replacing the activation function of S1 with the adaptive neuron to form a pulse neural network, normalizing the weights of S2, and transplanting the normalized weights into the pulse neural network.
2. The method for identifying an object in a spiking neural network according to claim 1, wherein in S1 the selection conditions are: the number of input-layer neurons of the artificial neural network is equal to the number of image pixels in the data set, and the number of output-layer neurons is equal to the number of image types in the data set.
3. The method for identifying the target in the impulse neural network as claimed in claim 1, wherein the step S1 further comprises removing bias terms from the artificial neural network.
4. The method according to claim 1, wherein in S3 the membrane potential U(t) of the basic neuron at time t can be calculated by either of the following formulas:
U(t) = Urest + RI0 + (U(t0) - Urest - RI0) * e^(-(t-t0)/τm)
or
U(t) = Urest + RI0 + (U(t0) - Urest - RI0) * e^(-k(t-t0)/τm)
where U(t0) denotes the membrane potential of the neuron at the initial time t0 and is used to calculate the membrane potential at any time t: if the neuron receives no external stimulus, U(t0) = Urest; if the neuron has already received another signal input before the current external signal input, U(t0) = Urest + RI0;
RI0 is the input to the neuron at t0; Urest denotes the resting potential of the neuron membrane; τm is the time constant of the neuron membrane potential, defined from the equivalent circuit of the spiking neuron model as τm = RC, where R is the membrane resistance and C is the membrane capacitance of the equivalent circuit; and k is the current attenuation coefficient.
5. The method for identifying the target of the impulse neural network as claimed in claim 1, wherein in S3 the firing threshold is calculated by the formula:
Vthr = Vthr0 + n(S)*Vplus
where Vthr0 denotes the initial threshold of the spiking neuron, Vthr denotes the current threshold of the neuron after pulses have been fired, n(S) denotes the number of pulses the neuron has fired, and Vplus denotes the amount by which the threshold increases each time the membrane potential exceeds the firing threshold.
6. The method for identifying the target of the spiking neural network according to claim 1, wherein in S4 the normalization method is:
a, initializing the network parameters: the maximum activation value max_activation of the neurons in every layer except the input layer is set to 0, the maximum weight max_weight of each layer is set to 0, and the normalization factor norm_factor is set to 0;
b, searching the weights of each layer of the network: weight = max(Wij), where Wij denotes the weight connecting neuron j in layer l-1 to neuron i in layer l, and max_weight = max(weight);
c, recording the neuron activation values neural_activation of each layer and setting max_activation = max(neural_activation), then calculating the normalization factor of each layer of the network as norm_factor = max(max_weight, max_activation);
d, performing the weight normalization of the network: Wij = Wij/norm_factor.
7. The method of claim 3, wherein the activation function is the rectified linear unit (ReLU), whose formula is
Xi^l = max(0, Σj Wij * Xj^(l-1))
where Xi^l denotes the activation value of neuron i in layer l; Wij denotes the weight connecting neuron j in layer l-1 to neuron i in layer l; and Xj^(l-1) denotes the activation value of neuron j in layer l-1.
8. The method of claim 4, wherein the current attenuation coefficient is calculated by the formula:
k = N(Σj[Sj(t-1) = 1]) / N
where N denotes the number of neurons in the previous layer, and N(Σj[Sj(t-1) = 1]) denotes the number of previous-layer neurons connected to neuron i of the current layer that fired pulses at time t-1; Sj(t-1) = 1 if neuron j fired a pulse at time t-1, otherwise Sj(t-1) = 0.
9. The method of claim 5, wherein Vplus is a constant greater than 0.
CN202111019432.3A 2021-08-31 2021-08-31 Pulse neural network target identification method Active CN113723594B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111019432.3A CN113723594B (en) 2021-08-31 2021-08-31 Pulse neural network target identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111019432.3A CN113723594B (en) 2021-08-31 2021-08-31 Pulse neural network target identification method

Publications (2)

Publication Number Publication Date
CN113723594A true CN113723594A (en) 2021-11-30
CN113723594B CN113723594B (en) 2023-12-05

Family

ID=78680510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111019432.3A Active CN113723594B (en) 2021-08-31 2021-08-31 Pulse neural network target identification method

Country Status (1)

Country Link
CN (1) CN113723594B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103700118A (en) * 2013-12-27 2014-04-02 东北大学 Moving target detection method on basis of pulse coupled neural network
CN105095961A (en) * 2015-07-16 2015-11-25 清华大学 Mixing system with artificial neural network and impulsive neural network
CN108985447A (en) * 2018-06-15 2018-12-11 华中科技大学 A kind of hardware pulse nerve network system
WO2020244370A1 (en) * 2019-06-05 2020-12-10 北京灵汐科技有限公司 Heterogeneous cooperative system and communication method therefor
CN111340181A (en) * 2020-02-11 2020-06-26 天津大学 Deep double-threshold pulse neural network conversion training method based on enhanced pulse
CN113298231A (en) * 2021-05-19 2021-08-24 复旦大学 Graph representation space-time back propagation algorithm for impulse neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LI YE NIU et al.: "High accuracy spiking neural network for objective recognition based on proportional attenuating neuron", Neural Processing Letters, pages 1-19 *
蔡荣太; 吴庆祥: "Infrared target extraction based on spiking neural network" (基于脉冲神经网络的红外目标提取), Journal of Computer Applications (计算机应用), vol. 30, no. 12, pages 3327-3330 *
黄庆坤; 陈云华; 张灵; 兰浩鑫: "Dynamic vision sensor data classification based on precise temporal region-of-interest localization and multi-kernel membrane potential adjustment" (时域感兴趣区域精确定位与膜电位多核调整的动态视觉传感器数据分类), Control Theory & Applications (控制理论与应用), vol. 37, no. 08, pages 1837-1845 *

Also Published As

Publication number Publication date
CN113723594B (en) 2023-12-05

Similar Documents

Publication Publication Date Title
CN110210563B (en) Image pulse data space-time information learning and identification method based on Spike cube SNN
CN111858989B (en) Pulse convolution neural network image classification method based on attention mechanism
CN112633497B (en) Convolutional impulse neural network training method based on re-weighted membrane voltage
CN113449864B (en) Feedback type impulse neural network model training method for image data classification
CN108985252B (en) Improved image classification method of pulse depth neural network
WO2022257329A1 (en) Brain machine interface decoding method based on spiking neural network
CN111639754A (en) Neural network construction, training and recognition method and system, and storage medium
CN112101535B (en) Signal processing method of impulse neuron and related device
CN110659666B (en) Image classification method of multilayer pulse neural network based on interaction
CN111612136B (en) Neural morphology visual target classification method and system
CN108304912B (en) System and method for realizing pulse neural network supervised learning by using inhibition signal
CN112906828A (en) Image classification method based on time domain coding and impulse neural network
CN108171319A (en) The construction method of the adaptive depth convolution model of network connection
CN111079837B (en) Method for detecting, identifying and classifying two-dimensional gray level images
CN111382840B (en) HTM design method based on cyclic learning unit and oriented to natural language processing
CN112597980A (en) Brain-like gesture sequence recognition method for dynamic vision sensor
CN111310816B (en) Method for recognizing brain-like architecture image based on unsupervised matching tracking coding
CN114266351A (en) Pulse neural network training method and system based on unsupervised learning time coding
CN115346096A (en) Pulse neural network model constructed based on memristor
CN117372843A (en) Picture classification model training method and picture classification method based on first pulse coding
CN113269113A (en) Human behavior recognition method, electronic device, and computer-readable medium
CN112861769A (en) Intelligent monitoring and early warning system and method for aged people
CN113723594A (en) Impulse neural network target identification method
CN111260054A (en) Learning method for improving accuracy of associative memory impulse neural network
Dao Image classification using convolutional neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant