CN113408613B - Single-layer image classification method based on delay mechanism - Google Patents

Single-layer image classification method based on delay mechanism

Info

Publication number
CN113408613B
CN113408613B (application number CN202110676241.8A)
Authority
CN
China
Prior art keywords
input layer
time
neuron
delay
pulse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110676241.8A
Other languages
Chinese (zh)
Other versions
CN113408613A (en)
Inventor
李建平
苌泽宇
冯文婷
肖飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202110676241.8A priority Critical patent/CN113408613B/en
Publication of CN113408613A publication Critical patent/CN113408613A/en
Application granted granted Critical
Publication of CN113408613B publication Critical patent/CN113408613B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons, using electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The invention discloses a single-layer image classification method based on a delay mechanism, belonging to the technical field of image processing. The method comprises the following steps: S1, constructing an image classification model; S2, training the image classification model with an image set to obtain a trained image classification model; S3, classifying images with the trained image classification model to obtain the image categories. The image classification model comprises a feature extraction unit, a pulse delay coding unit and a single-layer classifier which are connected in sequence. The invention solves the problem that the learning effect of the Tempotron learning algorithm is easily disturbed because it relies solely on adjusting the synaptic weights.

Description

Single-layer image classification method based on delay mechanism
Technical Field
The invention relates to the technical field of image processing, in particular to a single-layer image classification method based on a delay mechanism.
Background
Tempotron is one of the earliest algorithms describing the change of the membrane voltage of a spiking neuron, and it established the basic features of a class of algorithms driven by the membrane voltage. The adjustment of the synaptic weights depends only on the maximum membrane voltage, so only the threshold and the kernel function need to be considered. The role of Tempotron in spiking neural networks is similar to the fundamental role of the perceptron. Because of its simplicity, the Tempotron algorithm can only solve binary classification problems, but numerous researchers have made innovations and improvements on top of it.
The Tempotron algorithm has two main drawbacks. First, the postsynaptic neuron can emit only one pulse and afterwards no longer receives incoming signals, which limits the Tempotron algorithm; for this problem, some researchers have already proposed optimized variants such as the Multi-Tempotron algorithm. Second, the Tempotron adjustment strategy can accomplish training and learning only by adjusting the synaptic weights. This single adjustment strategy leads to low learning efficiency, and the learning effect is very easily disturbed.
Disclosure of Invention
Aiming at the above defects in the prior art, the single-layer image classification method based on a delay mechanism provided by the invention solves the problem that the learning effect of the Tempotron learning algorithm is easily disturbed because it relies solely on adjusting the synaptic weights.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that: a single-layer image classification method based on a delay mechanism comprises the following steps:
s1, constructing an image classification model;
s2, training the image classification model by adopting the image set to obtain a trained image classification model;
and S3, classifying the images by adopting the trained image classification model to obtain the image types.
Further, the image classification model in step S1 includes a feature extraction unit, a pulse delay coding unit, and a single-layer classifier, which are connected in sequence;
the feature extraction unit is used for extracting features of the image to obtain feature image data;
the pulse delay coding unit is used for coding the characteristic image data to obtain an excitation pulse time sequence;
the single-layer classifier is used for processing the excitation pulse time sequence to obtain the category of the image.
Further, the formula used by the pulse delay encoding unit to encode the feature image data is:
t_i = t_max - ln(a·x_i + 1)   (1)
where t_i is the excitation pulse time point corresponding to the i-th pixel, t_max is the size of the encoding time window, a is the coding parameter, and x_i is the pixel value of the i-th pixel in the feature image data.
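As an illustration, a minimal NumPy sketch of this latency encoding is given below. The parameter names follow formula (1); the default values of t_max and a and the normalisation of the pixel values to [0, 1] are assumptions, since the patent does not fix them.

```python
import numpy as np

def latency_encode(features, t_max=10.0, a=np.e - 1.0):
    """Encode feature values as spike times via t_i = t_max - ln(a * x_i + 1).

    Larger feature/pixel values yield earlier excitation pulses; a value of 0
    yields a pulse at the end of the encoding window t_max. The defaults for
    t_max and a are illustrative assumptions, not values from the patent.
    """
    x = np.asarray(features, dtype=float).ravel()
    if x.max() > 0:
        x = x / x.max()                       # assumed normalisation to [0, 1]
    return t_max - np.log(a * x + 1.0)        # formula (1)

# Example: a 2x2 feature patch encoded into four excitation pulse times.
spike_times = latency_encode([[0.0, 0.5], [0.8, 1.0]])
print(spike_times)   # the brightest pixel spikes earliest (t = t_max - 1 here)
```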
Further, the input layer of the single-layer classifier comprises two types of neurons, a positive-mode type L+ and a negative-mode type L-, N neurons in total;
The training method comprises the following steps:
A1, according to the excitation pulse time sequence, calculating the pulse membrane potential voltage of the input-layer neurons after the excitation pulse time sequence is fed into the input layer of the single-layer classifier;
A2, in the positive mode L+, judging whether the pulse membrane potential voltage of the input-layer neurons at time t_max is smaller than the threshold value; if yes, finding all excitation pulse time points before time t_max, adding the delay increment Δd to the delays d_i, and jumping to step A3; if no, jumping to step A8; where t_max is a time point of the excitation pulse time sequence and d_i is the delay of the i-th input-layer neuron;
A3, judging whether the delay d_i of any input-layer neuron is now smaller than 0; if yes, setting the delay d_i of the corresponding neuron to 0 and jumping to step A4; if no, jumping to step A4;
A4, recalculating and judging whether the pulse membrane potential voltage of the input-layer neurons at time t_max reaches the threshold value; if no, jumping to step A5; if yes, jumping to step A8;
A5, increasing the weights of all neurons before time t_max;
A6, recalculating and judging whether the pulse membrane potential voltage of the input-layer neurons at time t_max reaches the threshold value; if no, jumping to step A7; if yes, jumping to step A8;
A7, judging whether time t_max has reached the set time threshold; if yes, jumping to step A8; if no, increasing the value of t_max and jumping to step A2;
A8, in the negative mode L-, judging whether the maximum pulse membrane potential voltage of the input-layer neurons is larger than the threshold value; if yes, finding all excitation pulse time points before time t_max, adding the delay increment Δd to the delays d_i, and jumping to step A9; if no, jumping to step A12; where t_max is the time point of the excitation pulse time sequence corresponding to the maximum pulse membrane potential voltage of the input-layer neurons and d_i is the delay of the i-th input-layer neuron;
A9, judging whether the delay d_i of any input-layer neuron is smaller than 0; if yes, setting the delay d_i of the corresponding neuron to 0 and jumping to step A10; if no, jumping to step A10;
A10, recalculating and judging whether the pulse membrane potential voltage of the input-layer neurons at time t_max is larger than the threshold value; if yes, jumping to step A11; if no, jumping to step A12;
A11, increasing the weights of all neurons before time t_max and jumping to step A8;
A12, finishing the training of the single-layer classifier (a schematic code sketch of steps A1 to A12 is given after formulas (2) to (6) below).
Further, the formula for calculating the pulse membrane potential voltage of the input-layer neurons in step A1 is:
V(t) = Σ_i ω_i·V_0·[exp(-(t - t_i - d_i)/τ_m) - exp(-(t - t_i - d_i)/τ_s)] + V_rest   (2)
where V(t) is the pulse membrane potential voltage of the input-layer neurons, ω_i is the weight of the i-th input-layer neuron, t_i is the excitation pulse time point corresponding to the i-th pixel, d_i is the delay of the i-th input-layer neuron, t is time, V_0 is the initial value of the pulse membrane potential voltage of the input-layer neurons, exp() is the exponential function, τ_m and τ_s are time constants, and V_rest is the resting voltage.
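Formula (2) is a delayed dual-exponential (Tempotron-style) postsynaptic potential. As an illustration, the NumPy sketch below evaluates it on a time grid and locates the time of the maximum membrane potential that the training steps refer to; the numeric constants V_0, V_rest, τ_m and τ_s are assumed values, since the patent does not state them.

```python
import numpy as np

# Illustrative constants (assumptions; the patent does not state numeric values).
V0, V_REST, TAU_M, TAU_S = 1.0, 0.0, 10.0, 2.5

def kernel(s):
    """K(s) = V0 * (exp(-s/tau_m) - exp(-s/tau_s)) for s > 0, and 0 otherwise."""
    s = np.maximum(np.asarray(s, dtype=float), 0.0)      # causal kernel
    return np.where(s > 0, V0 * (np.exp(-s / TAU_M) - np.exp(-s / TAU_S)), 0.0)

def potential(t, spike_times, w, d):
    """Formula (2): V(t) = sum_i w_i * K(t - t_i - d_i) + V_rest."""
    return float(np.sum(w * kernel(t - (spike_times + d))) + V_REST)

def max_potential(spike_times, w, d, t_window=10.0, dt=0.05):
    """Scan the encoding window and return (t_max, V(t_max))."""
    grid = np.arange(0.0, t_window + dt, dt)
    values = np.array([potential(t, spike_times, w, d) for t in grid])
    i = int(np.argmax(values))
    return grid[i], values[i]
```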
Further, the delay increment Δd added in step A2 is calculated, according to formula (3), from the weight ω_i of the i-th input-layer neuron, the excitation pulse time point t_i corresponding to the i-th pixel, the time point t_max of the excitation pulse time sequence in the positive mode L+, the delay d_i of the i-th input-layer neuron, the initial value V_0 of the pulse membrane potential voltage of the input-layer neurons, and the time constants τ_m and τ_s; formula (3) is reproduced only as an image in the original document.
Further, the weight increment Δω added in step A5 is calculated, according to formula (4), from the learning rate λ, the excitation pulse time point t_i corresponding to the i-th pixel, the time point t_max of the excitation pulse time sequence in the positive mode L+, the delay d_i of the i-th input-layer neuron, the initial value V_0 of the pulse membrane potential voltage of the input-layer neurons, and the time constants τ_m and τ_s; formula (4) is reproduced only as an image in the original document.
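Formula (4) appears only as an image in the original, so its exact expression cannot be reproduced here. Purely as an illustration, the sketch below uses the conventional Tempotron weight increment Δω_i = λ·K(t_max - t_i - d_i), which involves exactly the quantities listed above; this specific functional form and the learning-rate value are assumptions, not taken from the patent. It reuses `kernel` from the previous sketch.

```python
def weight_increment(spike_times, d, t_max, lr=0.01):
    """Assumed Tempotron-style weight increment for inputs arriving before t_max.

    Returns delta_w with delta_w_i = lr * K(t_max - t_i - d_i); inputs whose
    delayed spikes arrive after t_max get a zero increment. The learning-rate
    value is an illustrative assumption.
    """
    return lr * kernel(t_max - (spike_times + d))
```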
Further, the delay increment Δd added in step A8 is calculated, according to formula (5), from the weight ω_i of the i-th input-layer neuron, the excitation pulse time point t_i corresponding to the i-th pixel, the time point t_max of the excitation pulse time sequence in the negative mode L-, the delay d_i of the i-th input-layer neuron, the initial value V_0 of the pulse membrane potential voltage of the input-layer neurons, and the time constants τ_m and τ_s; formula (5) is reproduced only as an image in the original document.
Further, the weight increment Δω added in step A11 is calculated, according to formula (6), from the learning rate λ, the excitation pulse time point t_i corresponding to the i-th pixel, the time point t_max of the excitation pulse time sequence in the negative mode L-, the delay d_i of the i-th input-layer neuron, the initial value V_0 of the pulse membrane potential voltage of the input-layer neurons, and the time constants τ_m and τ_s; formula (6) is reproduced only as an image in the original document.
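To make the flow of steps A1 to A12 concrete, the rough sketch below performs one training pass over a single encoded image, reusing `potential`, `max_potential` and `weight_increment` from the sketches above. The fixed delay step stands in for formulas (3) and (5), the threshold, step sizes and window length are assumed values, and the sign of the negative-mode weight change is an assumption; this is a schematic reading of the procedure, not the patent's exact algorithm.

```python
import numpy as np

THETA, T_WINDOW, DT, D_STEP = 1.0, 10.0, 0.5, 0.1   # assumed constants

def train_sample(spike_times, w, d, positive):
    """One pass of steps A1-A12 for an encoded image (schematic sketch only)."""
    t_max, _ = max_potential(spike_times, w, d, T_WINDOW, DT)          # A1
    if positive:                                                       # positive mode L+
        while potential(t_max, spike_times, w, d) < THETA:             # A2 / A4 / A6
            early = (spike_times + d) < t_max
            d[early] += D_STEP              # A2: delay increment (stand-in for formula (3))
            d = np.maximum(d, 0.0)          # A3: clip negative delays to 0
            if potential(t_max, spike_times, w, d) >= THETA:           # A4
                break
            w[early] += weight_increment(spike_times, d, t_max)[early] # A5
            if potential(t_max, spike_times, w, d) >= THETA:           # A6
                break
            if t_max >= T_WINDOW:                                      # A7: time threshold reached
                break
            t_max += DT                                                # A7: advance t_max, retry A2
    else:                                                              # negative mode L-
        while True:
            t_max, v = max_potential(spike_times, w, d, T_WINDOW, DT)  # A8
            if v <= THETA:
                break                                                  # -> A12
            early = (spike_times + d) < t_max
            d[early] += D_STEP              # A8: delay increment (stand-in for formula (5))
            d = np.maximum(d, 0.0)          # A9
            if potential(t_max, spike_times, w, d) <= THETA:           # A10
                break
            w[early] -= weight_increment(spike_times, d, t_max)[early] # A11 (sign assumed)
    return w, d                                                        # A12: training step done
```

Note that, as advantage (4) below stresses, the delays are adjusted before the weights in each pass; the weights are touched only if the delay adjustment alone does not reach the target.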
In conclusion, the beneficial effects of the invention are as follows:
(1) The single-layer classifier is an improvement on the Tempotron learning algorithm: by adding a delay mechanism, the learning algorithm no longer depends solely on adjusting the synaptic weights.
(2) In the single-layer classifier, the weight adjustment is influenced not only by the change of the membrane voltage but also by the change of the delays, so the weight adjustment can be distributed more evenly over different neurons. The learning efficiency is therefore higher, and more neurons participate in the adjustment of the network.
(3) The delay adjustment of the single-layer classifier builds on the existing delay values, and adjusting the delays in turn influences the adjustment of the membrane voltage and of the weights. This is a continuously varying process. The change of the delays makes the weight distribution more reasonable, the learning effect more robust, and the image classification accuracy higher.
(4) In the delay mechanism of the single-layer classifier, the delay of each neuron changes with the trend of the membrane voltage and can be adjusted multiple times, so the classifier does not rely on synaptic regulation alone; the addition of delays makes the algorithm more efficient and more robust. More importantly, during training, after the latest membrane voltage is obtained, the delays are adjusted first; the effect of the delay adjustment is then evaluated, and only if the target has not been reached are the weights updated. Because the delay change is a very small quantity that depends on the membrane voltage and the weights, it fine-tunes the membrane voltage of the whole network. If the weights were updated first and the delays adjusted afterwards, the closest approach to the target would usually be missed, increasing the overall learning time of the algorithm. The synapses of the single-layer classifier are therefore adjusted with a comparatively small amplitude, and the delay-adjustment mechanism improves the classification ability in small steps.
Drawings
Fig. 1 is a flowchart of a single-layer image classification method based on a delay mechanism.
Detailed Description
The following description of embodiments of the invention is provided to facilitate understanding of the invention by those skilled in the art, but it should be understood that the invention is not limited to the scope of these embodiments. To those of ordinary skill in the art, various changes are apparent as long as they remain within the spirit and scope of the invention defined in the appended claims, and everything produced using the inventive concept is protected.
As shown in fig. 1, a single-layer image classification method based on a delay mechanism includes the following steps:
s1, constructing an image classification model;
s2, training the image classification model by adopting an image set to obtain a trained image classification model;
and S3, classifying the images by adopting the trained image classification model to obtain the image classes.
The image classification model in the step S1 comprises a feature extraction unit, a pulse delay coding unit and a single-layer classifier which are connected in sequence;
the feature extraction unit is used for extracting features of the image to obtain feature image data;
the pulse delay coding unit is used for coding the characteristic image data to obtain an excitation pulse time sequence;
the single-layer classifier is used for processing the excitation pulse time sequence to obtain the category of the image.
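The patent does not spell out how the trained single-layer classifier maps the membrane potential to a class label. A common Tempotron-style readout, sketched below purely as an assumption, assigns the positive class when the potential crosses the threshold anywhere in the encoding window; it reuses `latency_encode` and `max_potential` from the earlier sketches.

```python
def classify(features, w, d, theta=1.0, t_window=10.0):
    """Assumed readout: a spike anywhere in the window -> positive class (L+)."""
    spike_times = latency_encode(features, t_max=t_window)   # formula (1)
    _, v_peak = max_potential(spike_times, w, d, t_window)
    return "L+" if v_peak >= theta else "L-"
```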
The formula used by the pulse delay coding unit to encode the feature image data is:
t_i = t_max - ln(a·x_i + 1)   (1)
where t_i is the excitation pulse time point corresponding to the i-th pixel, t_max is the size of the encoding time window, a is the coding parameter, and x_i is the pixel value of the i-th pixel in the feature image data.
The input layer of the single-layer classifier comprises two types of neurons, a positive-mode type L+ and a negative-mode type L-, N neurons in total;
The training method comprises the following steps:
A1, according to the excitation pulse time sequence, calculating the pulse membrane potential voltage of the input-layer neurons after the excitation pulse time sequence is fed into the input layer of the single-layer classifier;
The formula for calculating the pulse membrane potential voltage of the input-layer neurons in step A1 is:
V(t) = Σ_i ω_i·V_0·[exp(-(t - t_i - d_i)/τ_m) - exp(-(t - t_i - d_i)/τ_s)] + V_rest   (2)
where V(t) is the pulse membrane potential voltage of the input-layer neurons, ω_i is the weight of the i-th input-layer neuron, t_i is the excitation pulse time point corresponding to the i-th pixel, d_i is the delay of the i-th input-layer neuron, t is time, V_0 is the initial value of the pulse membrane potential voltage of the input-layer neurons, exp() is the exponential function, τ_m and τ_s are time constants, and V_rest is the resting voltage.
A2, in the positive mode L+, judging whether the pulse membrane potential voltage of the input-layer neurons at time t_max is smaller than the threshold value; if yes, finding all excitation pulse time points before time t_max, adding the delay increment Δd to the delays d_i, and jumping to step A3; if no, jumping to step A8; where t_max is a time point of the excitation pulse time sequence and d_i is the delay of the i-th input-layer neuron;
The delay increment Δd added in step A2 is calculated, according to formula (3), from the weight ω_i of the i-th input-layer neuron, the excitation pulse time point t_i corresponding to the i-th pixel, the time point t_max of the excitation pulse time sequence in the positive mode L+, the delay d_i of the i-th input-layer neuron, the initial value V_0 of the pulse membrane potential voltage of the input-layer neurons, and the time constants τ_m and τ_s; formula (3) is reproduced only as an image in the original document.
A3, judging whether the delay d_i of any input-layer neuron is now smaller than 0; if yes, setting the delay d_i of the corresponding neuron to 0 and jumping to step A4; if no, jumping to step A4;
A4, recalculating and judging whether the pulse membrane potential voltage of the input-layer neurons at time t_max reaches the threshold value; if no, jumping to step A5; if yes, jumping to step A8;
A5, increasing the weights of all neurons before time t_max;
The weight increment Δω added in step A5 is calculated, according to formula (4), from the learning rate λ, the excitation pulse time point t_i corresponding to the i-th pixel, the time point t_max of the excitation pulse time sequence in the positive mode L+, the delay d_i of the i-th input-layer neuron, the initial value V_0 of the pulse membrane potential voltage of the input-layer neurons, and the time constants τ_m and τ_s; formula (4) is reproduced only as an image in the original document.
A6, recalculating and judging whether the pulse membrane potential voltage of the input-layer neurons at time t_max reaches the threshold value; if no, jumping to step A7; if yes, jumping to step A8;
A7, judging whether time t_max has reached the set time threshold; if yes, jumping to step A8; if no, increasing the value of t_max and jumping to step A2;
a8, judging whether the pulse membrane potential voltage of the largest input layer neuron is larger than the threshold value under the positive mode L-, if so, finding out
Figure BDA0003120677660000085
All excitation pulse time points before the moment in time, at diIncreasing delay in delay
Figure BDA0003120677660000086
And jumps to step a9, if no, to step a12, wherein,
Figure BDA0003120677660000087
a certain point in time of the excitation pulse time sequence corresponding to the pulse membrane potential voltage of the largest input layer neuron, diDelay for input layer ith neuron;
increase in step A8
Figure BDA0003120677660000088
The calculation formula of (2) is as follows:
Figure BDA0003120677660000089
wherein, ω isiIs the weight of the ith neuron of the input layer, tiCorresponding for ith pixel pointThe point in time of the excitation pulse is,
Figure BDA00031206776600000810
in negative mode L +, a certain time point of the excitation pulse time sequence, diDelay for the ith neuron of the input layer, V0For the initial value of the pulse film potential voltage of the input layer neurons, exp () is an exponential function, τ m、τsAre all time constants.
A9, judging whether the delay d_i of any input-layer neuron is smaller than 0; if yes, setting the delay d_i of the corresponding neuron to 0 and jumping to step A10; if no, jumping to step A10;
A10, recalculating and judging whether the pulse membrane potential voltage of the input-layer neurons at time t_max is larger than the threshold value; if yes, jumping to step A11; if no, jumping to step A12;
A11, increasing the weights of all neurons before time t_max and jumping to step A8;
The weight increment Δω added in step A11 is calculated, according to formula (6), from the learning rate λ, the excitation pulse time point t_i corresponding to the i-th pixel, the time point t_max of the excitation pulse time sequence in the negative mode L-, the delay d_i of the i-th input-layer neuron, the initial value V_0 of the pulse membrane potential voltage of the input-layer neurons, and the time constants τ_m and τ_s; formula (6) is reproduced only as an image in the original document.
And A12, finishing training of the single-layer classifier.

Claims (2)

1. A single-layer image classification method based on a delay mechanism is characterized by comprising the following steps:
s1, constructing an image classification model;
s2, training the image classification model by adopting the image set to obtain a trained image classification model;
S3, classifying the images by using the trained image classification model to obtain the image categories;
the image classification model in the step S1 includes a feature extraction unit, a pulse delay coding unit, and a single-layer classifier, which are connected in sequence;
the feature extraction unit is used for extracting features of the image to obtain feature image data;
the pulse delay coding unit is used for coding the characteristic image data to obtain an excitation pulse time sequence;
the single-layer classifier is used for processing the excitation pulse time sequence to obtain the category of the image;
the input layer of the single-layer classifier comprises two types of neurons, a positive-mode type L+ and a negative-mode type L-, N neurons in total;
the training method comprises the following steps:
A1, according to the excitation pulse time sequence, calculating the pulse membrane potential voltage of the input-layer neurons after the excitation pulse time sequence is fed into the input layer of the single-layer classifier;
the formula for calculating the pulse membrane potential voltage of the input-layer neurons in step A1 is:
V(t) = Σ_i ω_i·V_0·[exp(-(t - t_i - d_i)/τ_m) - exp(-(t - t_i - d_i)/τ_s)] + V_rest   (2)
wherein V(t) is the pulse membrane potential voltage of the input-layer neurons, ω_i is the weight of the i-th input-layer neuron, t_i is the excitation pulse time point corresponding to the i-th pixel, d_i is the delay of the i-th input-layer neuron, t is time, V_0 is the initial value of the pulse membrane potential voltage of the input-layer neurons, exp() is the exponential function, τ_m and τ_s are time constants, and V_rest is the resting voltage;
A2, in the positive mode L+, judging whether the pulse membrane potential voltage of the input-layer neurons at time t_max is smaller than the threshold value; if yes, finding all excitation pulse time points before time t_max, adding the delay increment Δd to the delays d_i, and jumping to step A3; if no, jumping to step A8; wherein t_max is a time point of the excitation pulse time sequence and d_i is the delay of the i-th input-layer neuron;
the delay increment Δd added in step A2 is calculated, according to formula (3), from the weight ω_i of the i-th input-layer neuron, the excitation pulse time point t_i corresponding to the i-th pixel, the time point t_max of the excitation pulse time sequence in the positive mode L+, the delay d_i of the i-th input-layer neuron, the initial value V_0 of the pulse membrane potential voltage of the input-layer neurons, and the time constants τ_m and τ_s, formula (3) being reproduced only as an image in the original document;
A3, judging whether the delay d_i of any input-layer neuron is now smaller than 0; if yes, setting the delay d_i of the corresponding neuron to 0 and jumping to step A4; if no, jumping to step A4;
A4, recalculating and judging whether the pulse membrane potential voltage of the input-layer neurons at time t_max reaches the threshold value; if no, jumping to step A5; if yes, jumping to step A8;
A5, increasing the weights of all neurons before time t_max;
the weight increment Δω added in step A5 is calculated, according to formula (4), from the learning rate λ, the excitation pulse time point t_i corresponding to the i-th pixel, the time point t_max of the excitation pulse time sequence in the positive mode L+, the delay d_i of the i-th input-layer neuron, the initial value V_0 of the pulse membrane potential voltage of the input-layer neurons, and the time constants τ_m and τ_s, formula (4) being reproduced only as an image in the original document;
A6, recalculating and judging whether the pulse membrane potential voltage of the input-layer neurons at time t_max reaches the threshold value; if no, jumping to step A7; if yes, jumping to step A8;
A7, judging whether time t_max has reached the set time threshold; if yes, jumping to step A8; if no, increasing the value of t_max and jumping to step A2;
A8, in the negative mode L-, judging whether the maximum pulse membrane potential voltage of the input-layer neurons is larger than the threshold value; if yes, finding all excitation pulse time points before time t_max, adding the delay increment Δd to the delays d_i, and jumping to step A9; if no, jumping to step A12; wherein t_max is the time point of the excitation pulse time sequence corresponding to the maximum pulse membrane potential voltage of the input-layer neurons and d_i is the delay of the i-th input-layer neuron;
the delay increment Δd added in step A8 is calculated, according to formula (5), from the weight ω_i of the i-th input-layer neuron, the excitation pulse time point t_i corresponding to the i-th pixel, the time point t_max of the excitation pulse time sequence in the negative mode L-, the delay d_i of the i-th input-layer neuron, the initial value V_0 of the pulse membrane potential voltage of the input-layer neurons, and the time constants τ_m and τ_s, formula (5) being reproduced only as an image in the original document;
A9, judging whether the delay d_i of any input-layer neuron is smaller than 0; if yes, setting the delay d_i of the corresponding neuron to 0 and jumping to step A10; if no, jumping to step A10;
A10, recalculating and judging whether the pulse membrane potential voltage of the input-layer neurons at time t_max is larger than the threshold value; if yes, jumping to step A11; if no, jumping to step A12;
A11, increasing the weights of all neurons before time t_max and jumping to step A8;
the weight increment Δω added in step A11 is calculated, according to formula (6), from the learning rate λ, the excitation pulse time point t_i corresponding to the i-th pixel, the time point t_max of the excitation pulse time sequence in the negative mode L-, the delay d_i of the i-th input-layer neuron, the initial value V_0 of the pulse membrane potential voltage of the input-layer neurons, and the time constants τ_m and τ_s, formula (6) being reproduced only as an image in the original document;
and A12, finishing training of the single-layer classifier.
2. The single-layer image classification method based on a delay mechanism according to claim 1, wherein the pulse delay coding unit encodes the feature image data according to the formula:
t_i = t_max - ln(a·x_i + 1)   (1)
where t_i is the excitation pulse time point corresponding to the i-th pixel, t_max is the size of the encoding time window, a is the coding parameter, and x_i is the pixel value of the i-th pixel in the feature image data.
CN202110676241.8A 2021-06-18 2021-06-18 Single-layer image classification method based on delay mechanism Active CN113408613B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110676241.8A CN113408613B (en) 2021-06-18 2021-06-18 Single-layer image classification method based on delay mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110676241.8A CN113408613B (en) 2021-06-18 2021-06-18 Single-layer image classification method based on delay mechanism

Publications (2)

Publication Number Publication Date
CN113408613A CN113408613A (en) 2021-09-17
CN113408613B (en) 2022-07-19

Family

ID=77685113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110676241.8A Active CN113408613B (en) 2021-06-18 2021-06-18 Single-layer image classification method based on delay mechanism

Country Status (1)

Country Link
CN (1) CN113408613B (en)


Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9418331B2 (en) * 2013-10-28 2016-08-16 Qualcomm Incorporated Methods and apparatus for tagging classes using supervised learning
EP3548840B1 (en) * 2016-11-29 2023-10-11 Blackmore Sensors & Analytics, LLC Method and system for classification of an object in a point cloud data set
CN108875846B (en) * 2018-05-08 2021-12-10 河海大学常州校区 Handwritten digit recognition method based on improved impulse neural network
FR3089037B1 (en) * 2018-11-27 2022-05-27 Commissariat Energie Atomique NEURONAL CIRCUIT ABLE TO IMPLEMENT SYNAPTIC LEARNING
FR3089663B1 (en) * 2018-12-07 2021-09-17 Commissariat Energie Atomique Artificial neuron for neuromorphic chip with resistive synapses
CN109635938B (en) * 2018-12-29 2022-05-17 电子科技大学 Weight quantization method for autonomous learning impulse neural network
GB202007545D0 (en) * 2020-05-20 2020-07-01 Univ Ulster Improvements in and relating to image classification using retinal ganglion cell modelling
CN112633497B (en) * 2020-12-21 2023-08-18 中山大学 Convolutional impulse neural network training method based on re-weighted membrane voltage

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306301A (en) * 2011-08-26 2012-01-04 中南民族大学 Motion identification system by simulating spiking neuron of primary visual cortex
CN103279958A (en) * 2013-05-31 2013-09-04 电子科技大学 Image segmentation method based on Spiking neural network
CN107194426A (en) * 2017-05-23 2017-09-22 电子科技大学 A kind of image-recognizing method based on Spiking neutral nets
CN108416391A (en) * 2018-03-16 2018-08-17 重庆大学 The image classification method of view-based access control model cortex treatment mechanism and pulse supervised learning
CN108985447A (en) * 2018-06-15 2018-12-11 华中科技大学 A kind of hardware pulse nerve network system
CN110210563A (en) * 2019-06-04 2019-09-06 北京大学 The study of pattern pulse data space time information and recognition methods based on Spike cube SNN

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Competitive Learning in a Spiking Neural Network: Towards an Intelligent Pattern Classifier; Sergey A. Lobov et al.; Sensors; 2020-01-16; Vol. 20, No. 2; pp. 1-14 *
Power-efficient neural network with artificial dendrites; Xinyi Li et al.; Nature Nanotechnology; 2020-01-29; Vol. 15; pp. 776-782 *
Research on supervised learning in spiking neural networks based on the inner product of spike trains; Wang Xiangwen; China Master's Theses Full-text Database, Information Science and Technology; 2016-07-15; No. 7; pp. I140-62 *
Research on learning algorithms for spiking neural networks and their applications; Xue Qingtao; China Master's Theses Full-text Database, Information Science and Technology; 2020-01-15; No. 1; pp. I140-261 *

Also Published As

Publication number Publication date
CN113408613A (en) 2021-09-17

Similar Documents

Publication Publication Date Title
CN109408731B (en) Multi-target recommendation method, multi-target recommendation model generation method and device
CN107580712B (en) Reduced computational complexity for fixed point neural networks
CN111858989B (en) Pulse convolution neural network image classification method based on attention mechanism
US9092735B2 (en) Method and apparatus for structural delay plasticity in spiking neural networks
US20220230051A1 (en) Spiking Neural Network
US10140573B2 (en) Neural network adaptation to current computational resources
CN108304912B (en) System and method for realizing pulse neural network supervised learning by using inhibition signal
CN113330450A (en) Method for identifying objects in an image
WO2018212946A1 (en) Sigma-delta position derivative networks
CN109886343B (en) Image classification method and device, equipment and storage medium
US11308387B2 (en) STDP with synaptic fatigue for learning of spike-time-coded patterns in the presence of parallel rate-coding
Duffner et al. An online backpropagation algorithm with validation error-based adaptive learning rate
CN114186672A (en) Efficient high-precision training algorithm for impulse neural network
US20200257973A1 (en) Inference method and device using spiking neural network
Wu et al. Binarized neural networks on the imagenet classification task
CN113205048A (en) Gesture recognition method and system
CN111310816B (en) Method for recognizing brain-like architecture image based on unsupervised matching tracking coding
CN116762085A (en) Variable bit rate compression using neural network model
CN113408613B (en) Single-layer image classification method based on delay mechanism
Ye et al. Recognition algorithm of emitter signals based on PCA+ CNN
CN114266351A (en) Pulse neural network training method and system based on unsupervised learning time coding
Grimaldi et al. A homeostatic gain control mechanism to improve event-driven object recognition
CN108846349A (en) A kind of face identification method based on dynamic Spiking neural network
CN111582461B (en) Neural network training method and device, terminal equipment and readable storage medium
CN111582462B (en) Weight in-situ updating method and device, terminal equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant