CN110717590A - Efficient multi-pulse algorithm based on single-exponential kernel and neural membrane potential states - Google Patents


Info

Publication number
CN110717590A
Authority
CN
China
Prior art keywords: neuron, pulse, membrane potential, current, learning
Prior art date: 2019-09-08
Legal status: Pending
Application number
CN201910845259.9A
Other languages
Chinese (zh)
Inventor
于强 (Qiang Yu)
宋世明 (Shiming Song)
李盛兰 (Shenglan Li)
王龙标 (Longbiao Wang)
党建武 (Jianwu Dang)
Current Assignee
Tianjin University
Original Assignee
Tianjin University
Priority date
Filing date: 2019-09-08
Publication date: 2020-01-21
Application filed by Tianjin University
Priority to CN201910845259.9A
Publication of CN110717590A


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/08 — Learning methods


Abstract

The invention discloses an efficient multi-pulse learning algorithm based on a single exponential kernel and the neuron's membrane potential state. The algorithm comprises the following steps: initializing the neuron model parameters; inputting a pulse spatio-temporal pattern; calculating the membrane potential V(t); learning and adjustment; and model verification. The invention greatly improves the information-integration efficiency of the neuron. During neuron learning it dispenses with complex computations and derivative equations, and it retains excellent cognitive processing performance while achieving a very high running speed and ultra-low energy consumption.

Description

Efficient multi-pulse algorithm based on single-exponential kernel and neural membrane potential states
Technical Field
The invention belongs to the field of brain-like computation and multi-pulse learning algorithms. It relates to techniques for improving the computational efficiency and learning performance of spiking neuron models, and in particular to an efficient multi-pulse algorithm based on a single exponential kernel and the neuron's membrane potential state.
Background
The central nervous system of the human brain uses pulses (spikes) for information transmission and processing; pulses are therefore considered one of the key reasons why the human brain achieves excellent cognitive ability at low energy consumption. Inspired by the working mechanisms of the human brain, researchers have proposed spiking neural networks, also known as the third generation of artificial neural networks. A spiking neuron model incorporates the time dimension, processes an input pulse sequence, converts it into an appropriate pulse output, and therefore has better biological plausibility.
In order for a spiking neuron to process and learn inputs efficiently, a number of different network models and learning methods have been proposed. Recently, a multi-pulse neural network learning algorithm (MST) with powerful information-processing and learning capabilities was proposed. Inspired by multi-pulse networks, researchers have attempted to develop simpler and more efficient algorithms. Most existing algorithms achieve good learning results, but they rely on complex computations and derivative equations; their efficiency is far below that of the biological nervous system and does not meet the requirements of practical applications.
How to maintain excellent performance while keeping energy consumption ultra-low remains one of the main bottlenecks of current neural networks. It is also a main constraint on deploying artificial intelligence systems on mobile and wearable devices.
Disclosure of Invention
The invention provides an efficient multi-pulse algorithm based on a single exponential kernel and the neuron's membrane potential state. Compared with prior algorithms, the method greatly improves the information-integration efficiency of the neuron, dispenses with complex computations and derivative equations during neuron learning, and achieves a very high running speed and ultra-low energy consumption while maintaining excellent cognitive processing performance.
The technical scheme of the invention is an efficient multi-pulse algorithm based on a single exponential kernel and the neuron's membrane potential state, comprising the following steps:
1) initializing the neuron model parameters;
2) inputting a pulse spatio-temporal pattern;
3) calculating V(t);
4) learning and adjustment;
5) model verification.
The novel single-exponential-kernel neuron model is:

V(t) = \sum_{i=1}^{N} w_i \sum_{t_i^j < t} K(t - t_i^j) - \theta \sum_{t_s^j < t} \exp\!\left(-\frac{t - t_s^j}{\tau}\right)    (1)

where N and w_i denote the number of presynaptic neurons and the corresponding synaptic weights, t_i^j is the time at which the i-th presynaptic neuron fires its j-th pulse, t_s^j denotes the firing time of the j-th pulse of the current neuron, and \theta is the threshold of the neuron: the neuron emits a pulse when the membrane potential exceeds this threshold.

The formula states that the neuron integrates the synaptic currents of its afferent neurons. The model also includes reset dynamics: whenever the neuron emits a pulse, it resets its membrane potential. Each afferent synaptic current has a lasting effect on the membrane potential of the current neuron, the magnitude of which is determined by the weight w_i and the kernel function K(t - t_i^j), defined as

K(t - t_i^j) = \exp\!\left(-\frac{t - t_i^j}{\tau}\right)

where \tau is the time constant of the membrane potential.

After the neuron integrates its input to obtain the trajectory V(t), the learning algorithm of the invention can be used for adjustment and learning.
Based on the current membrane potential state of the neuron, the invention directly modifies the voltage close to the neuron's threshold so as to change the number of output pulses. The weight-adjustment learning rule of EMLC for the neuron is:

\Delta w_i =
\begin{cases}
-\lambda\,\partial V(t_{TLD})/\partial w_i, & n_o > n_d\\
+\lambda\,\partial V(t_{TLP})/\partial w_i, & n_o < n_d\\
0, & n_o = n_d
\end{cases}

where n_o is the actual number of output pulses of the current neuron and n_d is the desired number of pulses; \partial V(t^*)/\partial w_i is obtained by directly differentiating formula (1) with respect to the weight w_i, giving \sum_{t_i^j < t^*} K(t^* - t_i^j), and \lambda is the learning step size.
t_{TLD} denotes the time point of the minimum reset membrane potential after the neuron fires a pulse and its membrane potential is reset: if the current number of output pulses n_o is greater than the desired number n_d, long-term depression (LTD) is applied to the neuron according to the membrane potential at that moment. t_{TLP} denotes the time point of the maximum membrane potential below the threshold: if the current number of output pulses n_o is less than the desired number n_d, long-term potentiation (LTP) is applied to the neuron according to the membrane potential at that moment.
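Continuing the sketch above, the update rule can be written as a small helper; the name emlc_update and the default values of tau and lam are assumptions, t_star must be supplied by the caller as t_TLD or t_TLP, and dV(t*)/dw_i is taken directly from formula (1) as the summed kernel of the pulses that afferent i delivered before t*.

    import numpy as np

    def emlc_update(weights, spike_times, n_out, n_desired, t_star, tau=20.0, lam=1e-3):
        """Apply one EMLC-style weight adjustment at the chosen time point t*.

        t_star should be t_TLD (lowest reset membrane potential) when
        n_out > n_desired, or t_TLP (highest sub-threshold potential) when
        n_out < n_desired; selecting t_star is left to the caller.
        """
        if n_out == n_desired:
            return np.array(weights, dtype=float)        # target reached
        sign = -1.0 if n_out > n_desired else 1.0        # LTD vs. LTP
        new_w = np.array(weights, dtype=float)
        for i, times in enumerate(spike_times):
            times = np.asarray(times)
            earlier = times[times < t_star]
            dv_dw = np.exp(-(t_star - earlier) / tau).sum()   # dV(t*)/dw_i from (1)
            new_w[i] += sign * lam * dv_dw
        return new_w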
Advantageous effects
Compared with prior algorithms, the method greatly improves the information-integration efficiency of the neuron, dispenses with complex computations and derivative equations during neuron learning, and achieves a very high running speed and ultra-low energy consumption while maintaining excellent cognitive processing performance. The invention is closer to the way the biological nervous system processes information and contributes to the large-scale application of artificial neural networks on mobile and wearable devices.
Drawings
FIG. 1 is a schematic diagram of the learning algorithm of the present invention.
FIG. 2 compares the running time of the present invention with that of other learning algorithms; the horizontal axis is the number of pulses n_out output by the neuron, and the vertical axis is the CPU learning time required for that output pulse count. The plotted data are averages over 100 trials.
FIG. 3 compares the accuracy of the present invention with that of other learning algorithms. Panel A shows the accuracy of the algorithms under different degrees of jitter noise; panel B shows the accuracy under different degrees of pulse-loss noise. The plotted data are averages over 100 trials.
Detailed Description
The use of the invention is explained in detail below with reference to the drawings.
(1) Initializing neuron model parameters
The parameters of the neuron model, including the membrane time constant τ and the synaptic weights w_i, must be initialized before the subsequent computations can be performed.
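A possible initialization in Python; every numerical choice below (number of afferents, τ = 20 ms, θ = 1, the learning step λ and the Gaussian weight initialization) is an illustrative assumption of this sketch, not a value prescribed by the invention.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    n_afferents = 500                                   # N, number of presynaptic neurons
    tau = 20.0                                          # membrane time constant tau (ms)
    theta = 1.0                                         # firing threshold
    lam = 1e-3                                          # learning step size lambda
    weights = rng.normal(0.0, 0.01, size=n_afferents)   # initial synaptic weights w_i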
(2) Inputting the pulse spatio-temporal pattern
Using the invention requires a pulse spatio-temporal pattern as input. In practical applications, the source input generally has to be encoded with a pulse-encoding algorithm before the subsequent operations can be performed.
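For a self-contained example, the sketch below (continuing the initialization above) stands in for a real pulse-encoding front end by drawing a random Poisson spike spatio-temporal pattern; the helper name random_pattern, the 500 ms duration and the 4 Hz rate are assumptions of this sketch.

    import numpy as np

    def random_pattern(n_afferents, duration=500.0, rate_hz=4.0, rng=None):
        """One Poisson spike train (times in ms) per afferent, as a stand-in encoder."""
        rng = rng or np.random.default_rng()
        pattern = []
        for _ in range(n_afferents):
            n_spikes = rng.poisson(rate_hz * duration / 1000.0)
            pattern.append(np.sort(rng.uniform(0.0, duration, size=n_spikes)))
        return pattern

    pattern = random_pattern(n_afferents, rng=rng)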
(3) Calculating V(t)
The input current is integrated according to formula (1) to obtain the membrane potential V(t).
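With the sketches above, this step is a single call to the grid-based integrator (run_neuron, an assumed helper, plays the role of formula (1) here):

    # Integrate the input pattern according to formula (1) and count output pulses.
    t_grid, v_trace, out_spikes = run_neuron(pattern, weights, theta=theta, tau=tau)
    n_out = len(out_spikes)          # actual number of output pulses n_o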
(4) Learning and adjustment
Since the neuron's response V(t) is determined by the weights w_i, EMLC must adjust the neuron's weights through learning if the neuron is expected to produce different output responses to different inputs.
Steps (3) and (4) are repeated until the desired number of pulses is obtained.
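A hedged sketch of the resulting loop over steps (3) and (4). Picking t_TLD and t_TLP from a discrete membrane-potential trace is an approximation introduced here (the patent does not fix a numerical procedure), and find_t_star, the 200-epoch cap and n_desired = 5 are illustrative assumptions of this sketch.

    import numpy as np

    def find_t_star(t_grid, v_trace, out_spikes, theta, mode):
        """Grid-based approximation of t_TLD (mode 'LTD') or t_TLP (mode 'LTP')."""
        if mode == "LTD":
            # t_TLD: among the output spikes, the time with the lowest reset potential.
            idx = [int(np.searchsorted(t_grid, t_s)) for t_s in out_spikes]
            return t_grid[min(idx, key=lambda k: v_trace[k])]
        # t_TLP: time of the highest membrane potential that stays below threshold.
        sub = np.where(v_trace < theta)[0]
        return t_grid[sub[np.argmax(v_trace[sub])]]

    n_desired = 5
    for epoch in range(200):                      # repeat steps (3) and (4)
        t_grid, v_trace, out_spikes = run_neuron(pattern, weights, theta=theta, tau=tau)
        n_out = len(out_spikes)
        if n_out == n_desired:                    # desired pulse count reached
            break
        mode = "LTD" if n_out > n_desired else "LTP"
        t_star = find_t_star(t_grid, v_trace, out_spikes, theta, mode)
        weights = emlc_update(weights, pattern, n_out, n_desired, t_star,
                              tau=tau, lam=lam)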
(5) Model validation
To verify the efficiency and correctness of the invention, its running time is compared with that of other learning algorithms, as shown in FIG. 2; the experimental results show that the invention processes its input very efficiently.
In addition, the recognition accuracy of the invention is compared with that of other learning algorithms, as shown in FIG. 3; the experimental results show that the invention has strong robustness and accuracy.
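As a rough stand-in for the experiments behind FIG. 2 and FIG. 3 (which compare against other algorithms on full benchmarks), the snippet below times one integration pass and spot-checks robustness under 2 ms Gaussian spike-time jitter; both choices, and the use of time.perf_counter, are assumptions of this sketch, and its printout is not the invention's reported result.

    import time
    import numpy as np

    # Rough timing of a single integration pass (cf. FIG. 2).
    t0 = time.perf_counter()
    run_neuron(pattern, weights, theta=theta, tau=tau)
    print(f"one forward pass: {(time.perf_counter() - t0) * 1e3:.2f} ms")

    # Robustness spot-check (cf. FIG. 3, panel A): jitter every input spike time.
    jittered = [np.sort(np.clip(t + rng.normal(0.0, 2.0, size=t.shape), 0.0, None))
                for t in pattern]
    _, _, out_jit = run_neuron(jittered, weights, theta=theta, tau=tau)
    print("output pulses, clean vs. 2 ms jitter:", n_out, "vs.", len(out_jit))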
The present invention is based on an analysis of the leaky integrate-and-fire (LIF) model; its simplicity and tractability make it the most common spiking neuron model. The novel single-exponential-kernel neuron model used in the present invention is:

V(t) = \sum_{i=1}^{N} w_i \sum_{t_i^j < t} K(t - t_i^j) - \theta \sum_{t_s^j < t} \exp\!\left(-\frac{t - t_s^j}{\tau}\right)    (1)

where N and w_i denote the number of presynaptic neurons and the corresponding synaptic weights, t_i^j is the time at which the i-th presynaptic neuron fires its j-th pulse, and t_s^j denotes the firing time of the j-th pulse of the current neuron. \theta is the threshold of the neuron; the neuron emits a pulse when its membrane potential exceeds this threshold. The formula states that the neuron integrates the synaptic currents of its afferent neurons. The model also includes reset dynamics: after the neuron emits a pulse, it resets its membrane potential. Each afferent synaptic current has a lasting effect on the membrane potential of the current neuron, the magnitude of which is determined by the weight w_i and the kernel function K(t - t_i^j), defined as

K(t - t_i^j) = \exp\!\left(-\frac{t - t_i^j}{\tau}\right)

where \tau is the time constant of the membrane potential.

After the neuron integrates its input to obtain the trajectory V(t), the learning algorithm of the invention can be used for adjustment and learning. The basic idea of the learning algorithm is to directly modify the voltage near the neuron's threshold, based on the current membrane potential state, so as to change the number of output pulses; the algorithm is herein named efficient multi-spike learning based on the current neuron's response (EMLC). The weight-adjustment learning rule of EMLC for the neuron is:

\Delta w_i =
\begin{cases}
-\lambda\,\partial V(t_{TLD})/\partial w_i, & n_o > n_d\\
+\lambda\,\partial V(t_{TLP})/\partial w_i, & n_o < n_d\\
0, & n_o = n_d
\end{cases}

where n_o is the actual number of output pulses of the current neuron, n_d is the desired number of pulses, \partial V(t^*)/\partial w_i is obtained by directly differentiating formula (1) with respect to the weight w_i, \lambda is the learning step size, and t^* denotes the relevant time instant. t_{TLD} denotes the time point of the minimum reset membrane potential after the neuron fires a pulse and its membrane potential is reset; if the current number of output pulses n_o is greater than the desired number n_d, long-term depression (LTD) is applied to the neuron according to the membrane potential at that moment. t_{TLP} denotes the time point of the maximum membrane potential below the threshold; if the current number of output pulses n_o is less than the desired number n_d, long-term potentiation (LTP) is applied to the neuron according to the membrane potential at that moment.

Claims (3)

1. An efficient multi-pulse algorithm based on a single exponential kernel and the neuron's membrane potential state, characterized by comprising the following steps:
1) initializing the neuron model parameters;
2) inputting a pulse spatio-temporal pattern;
3) calculating V(t);
4) learning and adjustment;
5) model verification.
2. The efficient multi-pulse algorithm based on a single exponential kernel and the neuron's membrane potential state of claim 1, characterized in that the novel single-exponential-kernel neuron model is

V(t) = \sum_{i=1}^{N} w_i \sum_{t_i^j < t} K(t - t_i^j) - \theta \sum_{t_s^j < t} \exp\!\left(-\frac{t - t_s^j}{\tau}\right)    (1)

where N and w_i denote the number of presynaptic neurons and the corresponding synaptic weights, t_i^j is the time at which the i-th presynaptic neuron fires its j-th pulse, t_s^j denotes the firing time of the j-th pulse of the current neuron, and θ is the threshold of the neuron: the neuron emits a pulse when the membrane potential exceeds this threshold; the formula states that the neuron integrates the synaptic currents of its afferent neurons, and the model also includes reset dynamics: when the neuron emits a pulse, it resets its membrane potential; each afferent synaptic current has a lasting effect on the membrane potential of the current neuron, the magnitude of which is determined by the weight w_i and the kernel function K(t - t_i^j), defined as

K(t - t_i^j) = \exp\!\left(-\frac{t - t_i^j}{\tau}\right)

where \tau is the time constant of the membrane potential; after the neuron integrates its input to obtain the trajectory V(t), the learning algorithm of the invention is used for adjustment and learning.
3. The efficient multi-pulse algorithm based on a single exponential kernel and the neuron's membrane potential state of claim 1, characterized in that, based on the current membrane potential state of the neuron, the voltage close to the neuron's threshold is directly modified so as to change the number of output pulses, the weight-adjustment learning rule of EMLC for the neuron being

\Delta w_i =
\begin{cases}
-\lambda\,\partial V(t_{TLD})/\partial w_i, & n_o > n_d\\
+\lambda\,\partial V(t_{TLP})/\partial w_i, & n_o < n_d\\
0, & n_o = n_d
\end{cases}

where n_o is the actual number of output pulses of the current neuron and n_d is the desired number of pulses; \partial V(t^*)/\partial w_i is obtained by directly differentiating formula (1) with respect to the weight w_i, and \lambda is the learning step size; t_{TLD} denotes the time point of the minimum reset membrane potential after the neuron fires a pulse and its membrane potential is reset: if the current number of output pulses n_o is greater than the desired number n_d, long-term depression (LTD) is applied to the neuron according to the membrane potential at that moment; t_{TLP} denotes the time point of the maximum membrane potential below the threshold: if the current number of output pulses n_o is less than the desired number n_d, long-term potentiation (LTP) is applied to the neuron according to the membrane potential at that moment.
CN201910845259.9A 2019-09-08 2019-09-08 Efficient multi-pulse algorithm based on single-exponential kernel and neural membrane potential states Pending CN110717590A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910845259.9A CN110717590A (en) 2019-09-08 2019-09-08 Efficient multi-pulse algorithm based on single-exponential kernel and neural membrane potential states


Publications (1)

Publication Number Publication Date
CN110717590A true CN110717590A (en) 2020-01-21

Family

ID=69209761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910845259.9A Pending CN110717590A (en) 2019-09-08 2019-09-08 Efficient multi-pulse algorithm based on single-exponential kernel and neural membrane potential states

Country Status (1)

Country Link
CN (1) CN110717590A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113627599A (en) * 2021-06-22 2021-11-09 中国科学院深圳先进技术研究院 Processing method and processing device for neuron signals and readable storage medium
WO2022267385A1 (en) * 2021-06-22 2022-12-29 中国科学院深圳先进技术研究院 Neuronal signal processing method and processing apparatus, and readable storage medium
CN113627599B (en) * 2021-06-22 2023-11-24 深圳微灵医疗科技有限公司 Neuron signal processing method, processing device, and readable storage medium


Legal Events

Code  Title
PB01  Publication
SE01  Entry into force of request for substantive examination
WD01  Invention patent application deemed withdrawn after publication

Application publication date: 2020-01-21