CN110837776A - Pulse neural network handwritten Chinese character recognition method based on STDP - Google Patents
- Publication number
- CN110837776A CN110837776A CN201910954627.3A CN201910954627A CN110837776A CN 110837776 A CN110837776 A CN 110837776A CN 201910954627 A CN201910954627 A CN 201910954627A CN 110837776 A CN110837776 A CN 110837776A
- Authority
- CN
- China
- Prior art keywords
- neuron
- pulse
- neurons
- neural network
- stdp
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/32—Digital ink
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2155—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
Abstract
The invention relates to an STDP-based pulse neural network handwritten Chinese character recognition method, which comprises the following steps: S1: downloading an offline data set, namely an offline handwritten Chinese character data set; S2: preprocessing the offline data set by normalizing each picture in the data set; S3: determining the number of neurons used for training; S4: constructing the network structure; S5: pulse-coding each pixel fed into the neural network; S6: determining the neuron model; S7: training the neuron model with the STDP learning rule; S8: putting the data set into the network in sequence for training, with training of the pulse neural network complete after 3 iterations. The recognition method improves the efficiency of handwritten Chinese character recognition. The STDP learning mechanism adopted by the invention was first observed in pyramidal neurons of the hippocampus; the relative timing of pre- and post-synaptic pulse emission induces different synaptic change processes, thereby influencing the membrane potential of the neuron.
Description
Technical Field
The invention relates to the technical field of image recognition, and in particular to a pulse neural network handwritten Chinese character recognition method based on STDP (spike-timing-dependent plasticity).
Background
The problem of handwritten Chinese character recognition (HCCR) has long attracted attention and research, and plays an important role in applications such as bank check recognition, automatic mail sorting, document digitization, and intelligent education. Previous handwriting recognition work covers different scripts, including digits, English characters, Chinese characters, and French characters. The HCCR problem has been studied for over 40 years and can be divided into two categories: online recognition and offline recognition. An online recognizer uses the digitized trace of the pen to recognize characters while they are being written, whereas an offline recognizer processes scanned images of previously handwritten characters. In general, online recognition is easier than offline recognition because a large amount of digitized trajectory information is available to train the model. However, offline recognition has wider applications, such as automatic mail sorting and digitizing old documents.
In recent years, many research efforts and competitions have been devoted to offline Chinese character recognition. With the rapid growth of computing power, the massive accumulation of training data, and the continuous improvement of nonlinear activation functions, deep convolutional neural networks have made remarkable progress in many computer vision tasks and achieve good results in handwritten Chinese character recognition. However, deep convolutional neural networks come with billions of parameters and multiply-accumulate operations, and such huge computational cost and storage requirements still hinder the deployment of CNN models in practical applications. For handwritten Chinese characters, typical models need to train hundreds of millions of parameters; training takes a long time and consumes much energy, so the models cannot be applied in daily life. Although many researchers apply pruning, weight quantization, and network structure optimization to handwritten Chinese character recognition networks, the parameter count cannot be reduced below the million level without losing a great deal of accuracy. Achieving high efficiency, small storage, and suitability for hardware are the problems that handwritten Chinese character recognition currently needs to solve.
Impulse (spiking) neural networks exhibit striking biological plausibility and strong computational power in pattern recognition, image processing, computer vision, and related tasks. In image recognition, a neuron is activated only when its membrane potential reaches a threshold, and there is no need to set large numbers of labels or tune huge numbers of parameters; this low power consumption and high efficiency has made the approach one of the key technologies for contemporary researchers. Meanwhile, the pulse neural network maps well onto field-programmable gate array (FPGA) integrated circuits, providing a low-power, compact, high-speed parallel hardware platform. In a pulse neural network, information is propagated between neurons in the form of pulse sequences. The STDP learning mechanism, which developed out of the Hebbian rule, is considered an important mechanism for learning and information storage in the brain and belongs to unsupervised learning. STDP achieves network balance by modulating weights according to the time difference between pre- and post-synaptic pulses; the pulse neural network can therefore be trained with the STDP learning rule.
Disclosure of Invention
To overcome the low efficiency of handwritten Chinese character recognition in the prior art, the invention provides an STDP-based pulse neural network handwritten Chinese character recognition method.
The method comprises the following steps:
s1: downloading an off-line data set, namely an off-line handwritten Chinese character data set;
s2: preprocessing an offline data set: normalizing each picture in the data set;
s3: determining a number of neurons for training;
s4: constructing a network structure;
s5: pulse coding each pixel in the neural network;
s6: determining a neuron model; adopting a leaky integrate-and-fire (LIF) model as the neuron model;
s7: learning the neuron model by adopting the STDP learning rule;
s8: putting the data sets into a network in sequence for training, and finishing the training of the impulse neural network after 3 times of iteration;
preferably, S3 is specifically: in the offline dataset, class N tags { z }1,z2,…znAnd clustering similarity of each type of label by adopting ISODATA unsupervised learning, wherein after clustering, the quantity of each type of label after clustering is M ═ Mi(ii) a 1,2, …, N }, total cluster number S.
Preferably, the ISODATA similarity clustering algorithm comprises the following steps: S3.1: initializing parameters, including the expected number of cluster centers, the minimum number of samples in each cluster domain, and the standard deviation of the sample distance distribution within a cluster domain;
s3.2: performing nearest-neighbor clustering and calculating the cluster centers and means;
s3.3: judging whether the expected criteria are met; if not, performing a splitting operation and returning to S3.2; if the expected criteria are met and the merging conditions are satisfied, performing the merging operation and outputting the result after merging. After clustering, the number of clusters for each class of labels is M = {mi; i = 1, 2, …, N}, and the total number of clusters is S.
Preferably, S4 is specifically: the first layer is the input layer, with the number of input neurons set to 64 x 64; the second layer is the excitatory layer, with the number of excitatory neurons set to S; the third layer is the inhibitory layer, with the number of inhibitory neurons set to S; the input-layer neurons are connected to the excitatory-layer neurons, the excitatory-layer neurons are connected one-to-one to the inhibitory-layer neurons, and the inhibitory-layer neurons in turn inhibit all excitatory-layer neurons.
Preferably, in S5, each pixel corresponds to one input neuron, and the pulse coding is specifically: over a period of 350 ms, each pixel is encoded into a Poisson-distributed pulse train whose firing rate is proportional to the intensity of the corresponding pixel in the input image.
Preferably, the change of the membrane potential V in the leaky integrate-and-fire (LIF) model of S6 is described by the following first-order differential equation:

τ dV/dt = (E_rest - V) + g_e(E_exc - V) + g_i(E_inh - V)

wherein E_rest is the resting membrane potential, E_exc and E_inh are the equilibrium potentials of excitatory and inhibitory synapses, and g_e and g_i are the conductances of excitatory and inhibitory synapses, respectively; τ is the membrane time constant (longer for excitatory neurons than for inhibitory neurons);
V_thresh_e is the excitatory neuron threshold, V_rest_e the excitatory neuron resting potential, V_thresh_i the inhibitory neuron threshold, and V_rest_i the inhibitory neuron resting potential;
when the membrane potential of a neuron exceeds its membrane threshold V_thresh, the neuron emits a pulse and the membrane potential is reset to the resting potential V_rest; for several milliseconds after the reset the neuron is in a refractory period and cannot fire;
if the presynaptic neuron is an excitatory neuron, its conductance g_e obeys the first-order equation:

τ_ge dg_e/dt = -g_e

wherein the time constant τ_ge is that of the excitatory postsynaptic potential; similarly, if the presynaptic neuron is an inhibitory neuron, the conductance g_i is updated by an equation of the same form, but with the inhibitory synaptic time constant τ_gi:

τ_gi dg_i/dt = -g_i
preferably, S7 is specifically:
recording the weight of each synapse and a pre-synaptic trace x_pre: each time a pre-synaptic pulse reaches the synapse, the trace is increased by 1; otherwise x_pre decays exponentially; when a post-synaptic pulse reaches the synapse, the weight change Δw is calculated from the pre-synaptic trace:

Δw = η(x_pre - x_tar)(w_max - w)^μ

wherein η is the learning rate, w_max is the maximum weight, and μ determines the dependence of the update on the previous weight; x_tar is the value of x_pre recorded at the moment the post-synaptic neuron fires.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
(1) Compared with the traditional convolutional neural network, the structural model of the pulse neural network is simple, occupies little memory, and is fast to train and recognize. For example, the traditional HCCR-GoogLeNet-Ensemble-10 Chinese character recognition model occupies 270.0 MB, while the structural model of the pulse neural network occupies only about 35 MB.
(2) Compared with the traditional convolutional neural network, the pulse neural network has greater biological plausibility: it is built from pulse neuron models with high biological interpretability as basic units, and the pulse neural network, known as the third-generation neural network, is one of the key technologies for future artificial intelligence research. Hodgkin et al. proposed the high-dimensional nonlinear Hodgkin-Huxley neuron model by analyzing and modeling the squid giant axon, and the LIF model used in this application is a simplification of the Hodgkin-Huxley neuron model. The STDP learning mechanism adopted by the invention was first observed in pyramidal neurons of the hippocampus; the relative timing of pre- and post-synaptic pulse emission induces different synaptic change processes, thereby influencing the membrane potential of the neuron. Compared with the traditional artificial neural network, the pulse neuron model and learning method adopted by the invention have more characteristic biological features.
Drawings
Fig. 1 is a flowchart of an STDP-based impulse neural network handwritten Chinese character recognition method in embodiment 1.
FIG. 2 is a schematic diagram of the ISODATA similarity clustering algorithm.
Fig. 3 is a schematic diagram of a spiking neural network.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1:
the embodiment provides an STDP-based pulse neural network handwritten Chinese character recognition method, as shown in fig. 1, the method includes the following steps:
s1: downloading the HWDB1.1 offline data set from the CASIA handwritten Chinese character database of the Chinese Academy of Sciences;
s2: preprocessing the offline handwritten Chinese character data set: the pictures in the data set have different sizes and cannot be uniformly fed into the input layer to be encoded into pulse trains, so each picture is normalized to a uniform size of 64 x 64 pixels.
S3: determining the number of neurons used for training: the offline data set contains N classes of labels {z1, z2, …, zN}, and ISODATA unsupervised learning is used to cluster each class of labels by similarity. The ISODATA similarity clustering algorithm, shown in FIG. 2, comprises the following main steps: S3.1: initialize parameters, including the expected number of cluster centers, the minimum number of samples in each cluster domain, the standard deviation of the sample distance distribution within a cluster domain, etc.; S3.2: perform nearest-neighbor clustering and calculate the cluster centers, means, etc.; S3.3: judge whether the expected criteria are met; if not, perform a splitting operation and return to S3.2; if the criteria are met and the merging conditions are satisfied, perform the merging operation and output the result after merging. After clustering, the number of clusters for each class of labels is M = {mi; i = 1, 2, …, N}, and the total number of clusters is S.
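As a concrete illustration of steps S3.1 to S3.3, the split/merge behaviour can be sketched in a toy implementation. The thresholds, iteration count, and first-samples initialisation below are illustrative simplifications of the full ISODATA algorithm, not values taken from the patent:

```python
import numpy as np

def isodata_sketch(X, k_init=2, max_std=1.5, min_dist=1.0, iters=6):
    """Toy ISODATA-style clustering: nearest-centre assignment (S3.2),
    splitting of overly spread clusters and merging of nearby centres (S3.3).
    Initialising with the first k_init samples is a simplification of S3.1."""
    centers = X[:k_init].astype(float)
    for _ in range(iters):
        # S3.2: assign each sample to its nearest centre, recompute centres
        d = np.linalg.norm(X[:, None] - centers[None, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in np.unique(labels)])
        # S3.3: split any cluster whose per-feature std exceeds max_std
        d = np.linalg.norm(X[:, None] - centers[None, :], axis=2)
        labels = d.argmin(axis=1)
        new_centers = []
        for j, c in enumerate(centers):
            pts = X[labels == j]
            if len(pts) > 1 and pts.std(axis=0).max() > max_std:
                off = pts.std(axis=0)
                new_centers += [c + off, c - off]   # replace centre by two offset copies
            else:
                new_centers.append(c)
        centers = np.array(new_centers)
        # S3.3: merge centres that lie closer together than min_dist
        merged, used = [], np.zeros(len(centers), dtype=bool)
        for i in range(len(centers)):
            if used[i]:
                continue
            group = [centers[i]]
            for j in range(i + 1, len(centers)):
                if not used[j] and np.linalg.norm(centers[i] - centers[j]) < min_dist:
                    used[j] = True
                    group.append(centers[j])
            merged.append(np.mean(group, axis=0))
        centers = np.array(merged)
    return centers

# Two tight pairs of points -> two stable cluster centres
X_demo = np.array([[0.0, 0.0], [0.1, 0.0], [10.0, 10.0], [10.1, 10.0]])
print(isodata_sketch(X_demo, k_init=2))
```

The final cluster count (S in the patent's notation) emerges from the split and merge tests rather than being fixed in advance, which is the property the method relies on for sizing the excitatory layer.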
S4: constructing the pulse neural network topology: as shown in FIG. 3, the first layer is the input layer, with the number of input neurons set to 64 x 64; the second layer is the excitatory layer, with the number of excitatory neurons set to S; the third layer is the inhibitory layer, with the number of inhibitory neurons set to S. The input-layer neurons are connected to the excitatory-layer neurons, the excitatory-layer neurons are connected one-to-one to the inhibitory-layer neurons, and the inhibitory-layer neurons in turn inhibit all excitatory-layer neurons.
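The three-layer topology of S4 can be sketched as weight matrices. Here S = 400 is a stand-in value (in the method, S is produced by the ISODATA clustering), the weight magnitudes are assumed, and excluding each inhibitory neuron's one-to-one partner from the lateral inhibition is a common convention adopted only for illustration:

```python
import numpy as np

N_INPUT = 64 * 64   # one input neuron per pixel of the normalized 64x64 image
S = 400             # assumed placeholder; the patent derives S from clustering

# Input layer -> excitatory layer: fully connected, weights learned by STDP
w_input_exc = np.random.uniform(0.0, 0.3, size=(N_INPUT, S))

# Excitatory -> inhibitory: one-to-one excitation
w_exc_inh = np.eye(S) * 10.0

# Inhibitory -> excitatory: each inhibitory neuron inhibits the excitatory
# layer (its one-to-one partner excluded here by convention)
w_inh_exc = -17.0 * (np.ones((S, S)) - np.eye(S))
```

This winner-take-all style wiring lets the excitatory neuron that fires first suppress its competitors, so different neurons come to specialise on different character clusters.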
S5: pulse encoding: each pixel of a picture in the data set corresponds to one input neuron. Over a period of 350 ms, each pixel is encoded into a Poisson-distributed pulse train whose firing rate is proportional to the intensity of the corresponding pixel in the input image.
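The rate coding of S5 can be sketched as follows; the maximum firing rate of 63.75 Hz and the 1 ms time step are assumed values, since the patent only specifies a 350 ms window and proportionality to pixel intensity:

```python
import numpy as np

def poisson_encode(image, duration_ms=350, dt_ms=1.0, max_rate_hz=63.75):
    """Encode a normalized grayscale image (values in [0, 1]) into Poisson
    spike trains: each pixel drives one input neuron, and the neuron's
    firing rate is proportional to the pixel intensity."""
    rates = image.flatten() * max_rate_hz        # firing rate per input neuron (Hz)
    n_steps = int(duration_ms / dt_ms)
    p_spike = rates * dt_ms / 1000.0             # spike probability per time step
    # spikes[t, i] is True if input neuron i fires at time step t
    spikes = np.random.rand(n_steps, rates.size) < p_spike
    return spikes

img = np.random.rand(64, 64)                     # stand-in for one 64x64 character
spikes = poisson_encode(img)
print(spikes.shape)                              # (350, 4096)
```

Because each step draws spikes independently with intensity-proportional probability, brighter strokes of the character produce denser pulse trains over the 350 ms presentation window.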
S6: determining the neuron model: the neuron model is the leaky integrate-and-fire (LIF) model, in which the change of the membrane potential V is described by the following first-order differential equation:

τ dV/dt = (E_rest - V) + g_e(E_exc - V) + g_i(E_inh - V)

wherein E_rest is the resting membrane potential, E_exc and E_inh are the equilibrium potentials of excitatory and inhibitory synapses, and g_e and g_i are the conductances of excitatory and inhibitory synapses, respectively; τ is the membrane time constant (longer for excitatory neurons than for inhibitory neurons). V_thresh_e is the excitatory neuron threshold, V_rest_e the excitatory neuron resting potential, V_thresh_i the inhibitory neuron threshold, and V_rest_i the inhibitory neuron resting potential. When the membrane potential of a neuron exceeds its membrane threshold V_thresh, the neuron emits a pulse and the membrane potential is reset to the resting potential V_rest; for several milliseconds after the reset the neuron is in a refractory period and cannot fire.

If the presynaptic neuron is an excitatory neuron, its conductance g_e obeys the first-order equation:

τ_ge dg_e/dt = -g_e

wherein the time constant τ_ge is that of the excitatory postsynaptic potential; similarly, if the presynaptic neuron is an inhibitory neuron, the conductance g_i is updated by an equation of the same form, but with the inhibitory synaptic time constant τ_gi:

τ_gi dg_i/dt = -g_i
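A minimal Euler-integration sketch of the membrane dynamics described in S6; all numeric constants are assumptions with typical magnitudes from the conductance-based LIF literature, since the patent gives no numeric values:

```python
import numpy as np

# Illustrative constants (assumed, not from the patent)
E_REST, E_EXC, E_INH = -65.0, 0.0, -100.0   # mV
V_THRESH, V_RESET = -52.0, -65.0            # mV; reset returns V to rest
TAU, TAU_GE, TAU_GI = 100.0, 1.0, 2.0       # ms
DT = 0.5                                    # ms, Euler step

def lif_step(v, g_e, g_i, refractory):
    """One Euler step of tau*dV/dt = (E_rest - V) + g_e*(E_exc - V) + g_i*(E_inh - V),
    with exponentially decaying synaptic conductances and a refractory hold."""
    dv = ((E_REST - v) + g_e * (E_EXC - v) + g_i * (E_INH - v)) / TAU
    v = np.where(refractory > 0, v, v + DT * dv)   # hold V during refractory period
    g_e = g_e * np.exp(-DT / TAU_GE)               # tau_ge * dg_e/dt = -g_e
    g_i = g_i * np.exp(-DT / TAU_GI)               # tau_gi * dg_i/dt = -g_i
    spiked = v >= V_THRESH
    v = np.where(spiked, V_RESET, v)               # reset to resting potential
    refractory = np.where(spiked, 5.0, np.maximum(refractory - DT, 0.0))
    return v, g_e, g_i, spiked, refractory
```

Calling `lif_step` once per time step for the excitatory and inhibitory populations (with their respective thresholds and resting potentials) reproduces the integrate, fire, reset, and refractory behaviour described above.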
s7: determining a learning rule: the learning rule of STDP is adopted.
All synapses from input neurons to excitatory neurons are trained under the STDP learning rule. The pre-synaptic trace x_pre records the recent history of pre-synaptic pulses arriving at the synapse: each time a pre-synaptic pulse reaches the synapse, the trace is increased by 1; otherwise x_pre decays exponentially. When a post-synaptic pulse reaches the synapse, the weight change Δw is calculated from the pre-synaptic trace:

Δw = η(x_pre - x_tar)(w_max - w)^μ

wherein η is the learning rate, w_max is the maximum weight, and μ determines the dependence of the update on the previous weight; x_tar is the value of x_pre recorded at the moment the post-synaptic neuron fires. The higher the target value, the lower the synaptic weight; this offset reduces the influence of pre-synaptic neurons that only rarely drive post-synaptic firing.
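The trace update and weight rule of S7 can be sketched as follows; the hyperparameter values (η, w_max, μ, τ_pre) are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Assumed hyperparameters for illustration
ETA, W_MAX, MU, TAU_PRE, DT = 0.01, 1.0, 0.9, 20.0, 1.0

def update_trace(x_pre, pre_spiked):
    """x_pre decays exponentially each step and jumps by 1 on a pre-synaptic pulse."""
    return x_pre * np.exp(-DT / TAU_PRE) + pre_spiked.astype(float)

def stdp_update(w, x_pre, x_tar):
    """Apply dw = eta * (x_pre - x_tar) * (w_max - w)**mu when the
    post-synaptic neuron fires, keeping weights within [0, w_max]."""
    dw = ETA * (x_pre - x_tar) * (W_MAX - w) ** MU
    return np.clip(w + dw, 0.0, W_MAX)
```

A synapse whose pre-synaptic trace exceeds x_tar at the moment of a post-synaptic spike is strengthened, while one whose trace has decayed below x_tar is weakened; the (w_max - w)^μ factor softly saturates weights near the maximum.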
In this embodiment, each pixel of the picture corresponds one-to-one to a neuron of the input layer: each pixel is encoded into a Poisson-distributed pulse train that drives its input neuron, the input-layer neurons send pulses to the excitatory-layer neurons according to the pulse train, and after an excitatory-layer neuron receives a pulse, the pre- and post-synaptic pulse history is recorded by the trace in the above formula and the synaptic weight w is updated accordingly.
S8: putting the data set into the network in sequence for training; after 3 iterations, training of the pulse neural network is complete.
The terms describing positional relationships in the drawings are for illustrative purposes only and are not to be construed as limiting the patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.
Claims (7)
1. An STDP-based pulse neural network handwritten Chinese character recognition method is characterized by comprising the following steps:
s1: downloading an off-line data set, namely an off-line handwritten Chinese character data set;
s2: preprocessing an offline data set: normalizing each picture in the data set;
s3: determining a number of neurons for training;
s4: constructing a network structure;
s5: pulse coding each pixel in the neural network;
s6: determining a neuron model; adopting a leaky integrate-and-fire model as the neuron model;
s7: learning the neuron model by adopting the STDP learning rule;
s8: putting the data set into the network in sequence for training, and completing the training of the pulse neural network after iteration.
2. The STDP-based pulse neural network handwritten Chinese character recognition method of claim 1, wherein S3 specifically comprises: the offline data set contains N classes of labels {z1, z2, …, zN}; ISODATA unsupervised learning is used to cluster each class of labels by similarity; after clustering, the number of clusters for each class of labels is M = {mi; i = 1, 2, …, N}, and the total number of clusters is S.
3. The STDP-based impulse neural network handwritten Chinese character recognition method of claim 2, wherein the ISODATA similarity clustering algorithm comprises the steps of:
s3.1: initializing parameters including the expected number of cluster centers, the minimum number of samples in each cluster domain, and the standard deviation of sample distance distribution in one cluster domain;
s3.2: performing neighbor clustering, and calculating a clustering center and a mean value;
s3.3: judging whether the expected criteria are met; if not, performing a splitting operation and returning to S3.2; if the expected criteria are met and the merging conditions are satisfied, performing the merging operation and outputting the result after merging; after clustering, the number of clusters for each class of labels is M = {mi; i = 1, 2, …, N}, and the total number of clusters is S.
4. The STDP-based pulse neural network handwritten Chinese character recognition method of claim 1, wherein S4 specifically comprises: the first layer is the input layer, with the number of input neurons set to 64 x 64; the second layer is the excitatory layer, with the number of excitatory neurons set to S; the third layer is the inhibitory layer, with the number of inhibitory neurons set to S; the input-layer neurons are connected to the excitatory-layer neurons, the excitatory-layer neurons are connected one-to-one to the inhibitory-layer neurons, and the inhibitory-layer neurons in turn inhibit all excitatory-layer neurons.
5. The STDP-based pulse neural network handwritten Chinese character recognition method of claim 1, wherein in S5, each pixel corresponds to one input neuron, and the pulse coding specifically is: over a period of 350 ms, each pixel is encoded into a Poisson-distributed pulse train whose firing rate is proportional to the intensity of the corresponding pixel in the input image.
6. The STDP-based pulse neural network handwritten Chinese character recognition method of claim 5, wherein the change of the membrane potential V of the leaky integrate-and-fire model in S6 is described by the following first-order differential equation:

τ dV/dt = (E_rest - V) + g_e(E_exc - V) + g_i(E_inh - V)

wherein E_rest is the resting membrane potential, E_exc and E_inh are the equilibrium potentials of excitatory and inhibitory synapses, g_e and g_i are the conductances of excitatory and inhibitory synapses, respectively, and τ is the time constant;
V_thresh_e is the excitatory neuron threshold, V_rest_e the excitatory neuron resting potential, V_thresh_i the inhibitory neuron threshold, and V_rest_i the inhibitory neuron resting potential;
when the membrane potential of a neuron exceeds its membrane threshold V_thresh, the neuron emits a pulse and the membrane potential is reset to the resting potential V_rest; for several milliseconds after the reset the neuron is in a refractory period and cannot fire;
if the presynaptic neuron is an excitatory neuron, its conductance g_e obeys the first-order equation:

τ_ge dg_e/dt = -g_e

wherein the time constant τ_ge is that of the excitatory postsynaptic potential; similarly, if the presynaptic neuron is an inhibitory neuron, the conductance g_i is updated by an equation of the same form, but with the inhibitory synaptic time constant τ_gi:

τ_gi dg_i/dt = -g_i
7. the STDP-based pulse neural network handwritten Chinese character recognition method of claim 6, wherein S7 specifically comprises:
recording the weight of each synapse and a pre-synaptic trace x_pre: each time a pre-synaptic pulse reaches the synapse, the trace is increased by 1; otherwise x_pre decays exponentially; when a post-synaptic pulse reaches the synapse, the weight change Δw is calculated from the pre-synaptic trace:

Δw = η(x_pre - x_tar)(w_max - w)^μ

wherein η is the learning rate, w_max is the maximum weight, and μ determines the dependence of the update on the previous weight; x_tar is the value of x_pre recorded at the moment the post-synaptic neuron fires.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910954627.3A CN110837776A (en) | 2019-10-09 | 2019-10-09 | Pulse neural network handwritten Chinese character recognition method based on STDP |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910954627.3A CN110837776A (en) | 2019-10-09 | 2019-10-09 | Pulse neural network handwritten Chinese character recognition method based on STDP |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110837776A true CN110837776A (en) | 2020-02-25 |
Family
ID=69575179
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910954627.3A Pending CN110837776A (en) | 2019-10-09 | 2019-10-09 | Pulse neural network handwritten Chinese character recognition method based on STDP |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110837776A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107092959A (en) * | 2017-04-07 | 2017-08-25 | Wuhan University | Hardware-friendly spiking neural network model based on an STDP unsupervised learning algorithm
CN108764242A (en) * | 2018-05-21 | 2018-11-06 | Zhejiang University of Technology | Offline handwritten Chinese character font recognition method based on deep convolutional neural networks
CN108875846A (en) * | 2018-05-08 | 2018-11-23 | Hohai University, Changzhou Campus | Handwritten digit recognition method based on an improved spiking neural network
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111858989A (en) * | 2020-06-09 | 2020-10-30 | 西安工程大学 | Image classification method of pulse convolution neural network based on attention mechanism |
CN111858989B (en) * | 2020-06-09 | 2023-11-10 | 西安工程大学 | Pulse convolution neural network image classification method based on attention mechanism |
CN112541578A (en) * | 2020-12-23 | 2021-03-23 | 中国人民解放军总医院 | Retina neural network model |
CN112749637B (en) * | 2020-12-29 | 2023-09-08 | 电子科技大学 | SNN-based distributed optical fiber sensing signal identification method |
CN113033782A (en) * | 2021-03-31 | 2021-06-25 | 广东工业大学 | Method and system for training handwritten number recognition model |
CN113033782B (en) * | 2021-03-31 | 2023-07-07 | 广东工业大学 | Training method and system for handwriting digital recognition model |
CN112906828A (en) * | 2021-04-08 | 2021-06-04 | 周士博 | Image classification method based on time domain coding and impulse neural network |
CN113408714A (en) * | 2021-05-14 | 2021-09-17 | 杭州电子科技大学 | Full-digital pulse neural network hardware system and method based on STDP rule |
CN113989818A (en) * | 2021-12-27 | 2022-01-28 | 中科南京智能技术研究院 | Character classification method and system based on brain-like computing platform |
CN115048979A (en) * | 2022-04-29 | 2022-09-13 | 贵州大学 | Robot touch pulse data classification method based on regularization |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110837776A (en) | Pulse neural network handwritten Chinese character recognition method based on STDP | |
Panda et al. | Unsupervised regenerative learning of hierarchical features in spiking deep networks for object recognition | |
Zhu et al. | B-CNN: branch convolutional neural network for hierarchical classification | |
CN108875846B (en) | Handwritten digit recognition method based on improved impulse neural network | |
Bawa et al. | Linearized sigmoidal activation: A novel activation function with tractable non-linear characteristics to boost representation capability | |
Dasgupta et al. | Nonlinear dynamic Boltzmann machines for time-series prediction | |
Nessler et al. | STDP enables spiking neurons to detect hidden causes of their inputs | |
CN111858989B (en) | Pulse convolution neural network image classification method based on attention mechanism | |
Goodfellow et al. | Spike-and-slab sparse coding for unsupervised feature discovery | |
Ledinauskas et al. | Training deep spiking neural networks | |
Shrestha et al. | Stable spike-timing dependent plasticity rule for multilayer unsupervised and supervised learning | |
Cao et al. | Application of a modified Inception-v3 model in the dynasty-based classification of ancient murals | |
Fu et al. | An ensemble unsupervised spiking neural network for objective recognition | |
Battleday et al. | From convolutional neural networks to models of higher‐level cognition (and back again) | |
Cios | Deep neural networks—A brief history | |
Fatahi et al. | Towards an spiking deep belief network for face recognition application | |
Panda et al. | Convolutional spike timing dependent plasticity based feature learning in spiking neural networks | |
Lagani et al. | Training convolutional neural networks with competitive hebbian learning approaches | |
CN114138971A (en) | Genetic algorithm-based maximum multi-label classification method | |
Tekir et al. | Deep learning: Exemplar studies in natural language processing and computer vision | |
Dong et al. | Research and application of local perceptron neural network in highway rectifier for time series forecasting | |
He et al. | Leaf classification utilizing a convolutional neural network with a structure of single connected layer | |
Rekabdar et al. | Scale and translation invariant learning of spatio-temporal patterns using longest common subsequences and spiking neural networks | |
Deng et al. | Stdp and competition learning in spiking neural networks and its application to image classification | |
Nebti et al. | Handwritten digits recognition based on swarm optimization methods |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200225 |