CN113033782A - Method and system for training handwritten number recognition model - Google Patents


Info

Publication number
CN113033782A
CN113033782A
Authority
CN
China
Prior art keywords
neuron
synapse
layer
training
neurons
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110352414.0A
Other languages
Chinese (zh)
Other versions
CN113033782B (en)
Inventor
林彦宇
刘怡俊
林文杰
叶武剑
刘文杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202110352414.0A
Publication of CN113033782A
Application granted
Publication of CN113033782B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/32 Digital ink
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and system for training a handwritten digit recognition model. An MNIST training data set and an STDP synapse initial weight matrix are input, each neuron model and each synapse model are established, and, using a distributed multithreading parallel technique, the neuron population is dynamically pre-divided across multiple threads according to computer resources. A local spiking neural network is then established inside each thread, and the neuron groups, synaptic connection relations and synaptic weights are initialized in each independent thread. After initialization, all threads are iteratively trained in parallel for a set number of rounds. This solves the technical problems that the existing spiking neural network adopts a serial training method, cannot reasonably utilize computer resources, has low training efficiency, and is not conducive to popularization.

Description

Method and system for training handwritten number recognition model
Technical Field
The invention relates to the technical field of handwriting recognition, and in particular to a method and system for training a handwritten digit recognition model.
Background
Handwritten digit recognition is the ability of a computer to receive, understand and recognize readable handwritten digits from paper documents, photos or other sources. It has great practical value in real life; for example, it can be applied to the identification of bank remittance orders, greatly reducing labor cost. Deep artificial neural network structures are generally used for such recognition. However, the nature of a deep artificial neural network is far from that of an actual brain model, and its recognition rate is not high. A third-generation artificial neural network has therefore been proposed as an improvement: the Spiking Neural Network (SNN), which transmits information between neurons in the form of pulses, is known as the third-generation artificial neural network; it has strong biological plausibility in its neuron model, synapse model and learning mechanism, and is highly close to a real biological neural network. However, the types of neurons in a spiking neural network are complex and varied, the neurons are numerous, and the synaptic structures connecting them have different delays and modifiable connection weights, so simulating the real-time activity of a large number of neurons is a huge challenge for a computer. For example, an unsupervised training algorithm using the Spike-Timing-Dependent Plasticity (STDP) learning rule can be trained successfully to obtain a network model for handwritten digit recognition and prediction, but it is a typical serial training method, which cannot reasonably utilize computer resources, has low training efficiency, and is not conducive to popularization.
Disclosure of Invention
The invention provides a method and system for training a handwritten digit recognition model, which solve the technical problems that the existing spiking neural network adopts a serial training method, cannot reasonably utilize computer resources, has low training efficiency, and is not conducive to popularization.
In view of the above, a first aspect of the present invention provides a method for training a handwritten digit recognition model, comprising:
inputting an MNIST training data set and an STDP synapse initial weight matrix into a global spiking neural network model, the global spiking neural network model comprising neuron models and synapse models;
dynamically pre-dividing the neuron population of the global spiking neural network model across multiple threads according to computer resources;
establishing a local spiking neural network model inside each thread, and initializing the neuron groups, synaptic connection relations and synaptic weights in each independent thread;
after initialization is completed, iteratively training the local spiking neural network models in all threads for a preset number of rounds, with periodic synchronization and pulse transmission performed continuously during training;
after training is finished, saving the final STDP synapse weights of all threads, and integrating them into a synapse weight matrix which is then stored.
Optionally, inputting the MNIST training data set and the STDP synapse initial weight matrix into the global spiking neural network model comprises:
converting every 28 × 28-pixel training image in the MNIST training data set into a 784 × 1 matrix, and inputting it into the global spiking neural network model with uniformly distributed random numbers as the initial values of the STDP synapse initial weight matrix.
Optionally, the global spiking neural network model comprises a three-layer network structure: an input layer N_1, an excitation layer N_2 and an inhibition layer N_3:
N_1 = {A_i = {a_k; k = 1, 2, ..., K}; i = 1, 2, ..., I}
N_2 = {b_i; i = 1, 2, ..., I}
N_3 = {c_i; i = 1, 2, ..., I}
The synapse set from N_1 to N_2 is:
E_12 = {H_i'i = {e_ki = (a_k, b_i, w_ki); k = 1, 2, ..., K}; i', i = 1, 2, ..., I}
The synapse set from N_2 to N_3 is:
E_23 = {g_ii = (b_i, c_i, w_ii); i = 1, 2, ..., I}
The synapse set from N_3 to N_2 is:
E_32 = {L_i = {e_ii' = (c_i, b_i', w_ii'); i' = 1, 2, ..., I and i' ≠ i}; i = 1, 2, ..., I}
where K is the number of pixels of the input image; I is the number of excitation-layer neurons, and the numbers of input neuron groups, excitation-layer neurons and inhibition-layer neurons are all equal to I; A_i is the input neuron group corresponding to one input handwritten digit grayscale image; a_k is the input neuron corresponding to one pixel of that image; b_i is an excitatory neuron; c_i is an inhibitory neuron; H_i'i is the synapse set from input neuron group A_i' to excitatory neuron b_i; e_ki is the k-th synapse in the i'-th input neuron group and w_ki is its weight; g_ii is the directed edge from excitatory neuron b_i to inhibitory neuron c_i and w_ii is its connection weight; L_i is the synapse set from inhibitory neuron c_i to the excitation layer; e_ii' is the synapse from inhibitory neuron c_i to excitatory neuron b_i' and w_ii' is its weight.
Optionally, dynamically pre-dividing the neuron population of the global spiking neural network model across multiple threads according to computer resources comprises:
creating a plurality of threads according to the number of CPUs of the computer, each thread exclusively occupying one CPU, the number of created threads not exceeding the number of CPUs;
in each thread, the number of STDP synapses from input neurons to excitatory neurons is K × (I/X), the number of static synapses from excitatory neurons to inhibitory neurons is I/X, the number of static synapses from inhibitory neurons to excitatory neurons is ((I/X) − 1) × (I/X), and the number of static synapses from mapping neurons to excitatory neurons is (I − I/X) × (I/X), where X is the number of CPUs.
Optionally, the local spiking neural network model in each thread comprises a four-layer network structure: an input layer M_1, an excitation layer M_2, an inhibition layer M_3 and a mapping layer M_4.
The input layer M_1 input neurons a_k range over k = 1, 2, ..., K; the excitation layer M_2 excitatory neurons b_i range over i = x × I/X, ..., (x + 1) × I/X; the inhibition layer M_3 inhibitory neurons c_i range over i = x × I/X, ..., (x + 1) × I/X; the mapping layer M_4 mapping neurons d_j range over j = 1, ..., x × I/X or (x + 1) × I/X, ..., I:
M_1 = {A_i = {a_k; k = 1, 2, ..., K}; i = 1, 2, ..., I/X}
M_2 = {b_i; i = x × I/X, ..., (x + 1) × I/X}
M_3 = {c_i; i = x × I/X, ..., (x + 1) × I/X}
M_4 = {d_j; j = 1, ..., x × I/X or (x + 1) × I/X, ..., I}
The synapse set from M_1 to M_2 is:
F_12 = {P_i'i = {f_ki = (a_k, b_i, w'_ki); k = 1, 2, ..., K}; i', i = x × I/X, ..., (x + 1) × I/X}
The synapse set from M_2 to M_3 is:
F_23 = {r_ii = (b_i, c_i, w'_ii); i = x × I/X, ..., (x + 1) × I/X}
The synapse set from M_3 to M_2 is:
F_32 = {Q_i = {f_ii' = (c_i, b_i', w'_ii'); i' = x × I/X, ..., (x + 1) × I/X and i' ≠ i}; i = x × I/X, ..., (x + 1) × I/X}
The synapse set from M_4 to M_2 is:
F_42 = {S_j = {f_ji = (d_j, b_i, w'_ji); i = x × I/X, ..., (x + 1) × I/X}; j = 1, ..., x × I/X or (x + 1) × I/X, ..., I}
In the local spiking neural network, all neurons start in the preset default resting state; the STDP synapses F_12 receive uniformly distributed random values as initial training weights and are updated every period, while the static synapses F_23, F_32 and F_42 receive preset default values as training initial weights and remain fixed every period.
A second aspect of the present invention provides a system for training a handwritten digit recognition model, comprising:
an input module, configured to input an MNIST training data set and an STDP synapse initial weight matrix into a global spiking neural network model, the global spiking neural network model comprising neuron models and synapse models;
a partitioning module, configured to dynamically pre-divide the neuron population of the global spiking neural network model across multiple threads according to computer resources;
a thread module, configured to establish a local spiking neural network model inside each thread, and to initialize the neuron groups, synaptic connection relations and synaptic weights in each independent thread;
a training module, configured to iteratively train the local spiking neural network models in all threads for a preset number of rounds after initialization is completed, with periodic synchronization and pulse transmission performed continuously during training;
an integration module, configured to save the final STDP synapse weights of all threads after training is finished, and to integrate them into a synapse weight matrix which is then stored.
Optionally, the input module is specifically configured to:
converting every 28 × 28-pixel training image in the MNIST training data set into a 784 × 1 matrix, and inputting it into the global spiking neural network model with uniformly distributed random numbers as the initial values of the STDP synapse initial weight matrix.
Optionally, the global spiking neural network model comprises a three-layer network structure: an input layer N_1, an excitation layer N_2 and an inhibition layer N_3:
N_1 = {A_i = {a_k; k = 1, 2, ..., K}; i = 1, 2, ..., I}
N_2 = {b_i; i = 1, 2, ..., I}
N_3 = {c_i; i = 1, 2, ..., I}
The synapse set from N_1 to N_2 is:
E_12 = {H_i'i = {e_ki = (a_k, b_i, w_ki); k = 1, 2, ..., K}; i', i = 1, 2, ..., I}
The synapse set from N_2 to N_3 is:
E_23 = {g_ii = (b_i, c_i, w_ii); i = 1, 2, ..., I}
The synapse set from N_3 to N_2 is:
E_32 = {L_i = {e_ii' = (c_i, b_i', w_ii'); i' = 1, 2, ..., I and i' ≠ i}; i = 1, 2, ..., I}
where K is the number of pixels of the input image; I is the number of excitation-layer neurons, and the numbers of input neuron groups, excitation-layer neurons and inhibition-layer neurons are all equal to I; A_i is the input neuron group corresponding to one input handwritten digit grayscale image; a_k is the input neuron corresponding to one pixel of that image; b_i is an excitatory neuron; c_i is an inhibitory neuron; H_i'i is the synapse set from input neuron group A_i' to excitatory neuron b_i; e_ki is the k-th synapse in the i'-th input neuron group and w_ki is its weight; g_ii is the directed edge from excitatory neuron b_i to inhibitory neuron c_i and w_ii is its connection weight; L_i is the synapse set from inhibitory neuron c_i to the excitation layer; e_ii' is the synapse from inhibitory neuron c_i to excitatory neuron b_i' and w_ii' is its weight.
Optionally, the partitioning module is specifically configured to:
create a plurality of threads according to the number of CPUs of the computer, each thread exclusively occupying one CPU, the number of created threads not exceeding the number of CPUs;
in each thread, the number of STDP synapses from input neurons to excitatory neurons is K × (I/X), the number of static synapses from excitatory neurons to inhibitory neurons is I/X, the number of static synapses from inhibitory neurons to excitatory neurons is ((I/X) − 1) × (I/X), and the number of static synapses from mapping neurons to excitatory neurons is (I − I/X) × (I/X), where X is the number of CPUs.
Optionally, the local spiking neural network model in each thread comprises a four-layer network structure: an input layer M_1, an excitation layer M_2, an inhibition layer M_3 and a mapping layer M_4.
The input layer M_1 input neurons a_k range over k = 1, 2, ..., K; the excitation layer M_2 excitatory neurons b_i range over i = x × I/X, ..., (x + 1) × I/X; the inhibition layer M_3 inhibitory neurons c_i range over i = x × I/X, ..., (x + 1) × I/X; the mapping layer M_4 mapping neurons d_j range over j = 1, ..., x × I/X or (x + 1) × I/X, ..., I:
M_1 = {A_i = {a_k; k = 1, 2, ..., K}; i = 1, 2, ..., I/X}
M_2 = {b_i; i = x × I/X, ..., (x + 1) × I/X}
M_3 = {c_i; i = x × I/X, ..., (x + 1) × I/X}
M_4 = {d_j; j = 1, ..., x × I/X or (x + 1) × I/X, ..., I}
The synapse set from M_1 to M_2 is:
F_12 = {P_i'i = {f_ki = (a_k, b_i, w'_ki); k = 1, 2, ..., K}; i', i = x × I/X, ..., (x + 1) × I/X}
The synapse set from M_2 to M_3 is:
F_23 = {r_ii = (b_i, c_i, w'_ii); i = x × I/X, ..., (x + 1) × I/X}
The synapse set from M_3 to M_2 is:
F_32 = {Q_i = {f_ii' = (c_i, b_i', w'_ii'); i' = x × I/X, ..., (x + 1) × I/X and i' ≠ i}; i = x × I/X, ..., (x + 1) × I/X}
The synapse set from M_4 to M_2 is:
F_42 = {S_j = {f_ji = (d_j, b_i, w'_ji); i = x × I/X, ..., (x + 1) × I/X}; j = 1, ..., x × I/X or (x + 1) × I/X, ..., I}
In the local spiking neural network, all neurons start in the preset default resting state; the STDP synapses F_12 receive uniformly distributed random values as initial training weights and are updated every period, while the static synapses F_23, F_32 and F_42 receive preset default values as training initial weights and remain fixed every period.
According to the technical scheme, the embodiment of the invention has the following advantages:
the invention provides a handwritten number recognition model training method, which comprises the steps of inputting MNIST training data sets and STDP synapse initial weight matrixes, establishing each neuron model and each synapse model, using a distributed multithreading parallel technology, dynamically using a plurality of threads to pre-divide neuron groups according to computer resources, then establishing an in-thread local area impulse neural network, initializing the neuron groups, synapse connection relations and synapse weights in each independent thread, and after initialization is completed, performing parallel training on all threads according to set round number iteration.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart illustrating a method for training a handwritten digit recognition model according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a single in-thread training process provided in an embodiment of the present invention;
FIG. 3 is a diagram illustrating a gradual process of training the initial random STDP synaptic weight matrix according to an embodiment of the present invention (from top to bottom and from left to right).
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
For easy understanding, referring to fig. 1 to 3, the present invention provides an embodiment of a method for training a handwritten digit recognition model, including:
Step 101: inputting the MNIST training data set and the STDP synapse initial weight matrix into a global spiking neural network model, the global spiking neural network model comprising neuron models and synapse models.
The MNIST data set is a very classical data set in the field of machine learning. It consists of 60,000 training samples and 10,000 test samples, each sample being a 28 × 28-pixel grayscale handwritten digit image. In the invention, every 28 × 28-pixel training image in the MNIST data set is converted into a 784 × 1 matrix, the initial values of the STDP synapse weights are uniformly distributed random numbers, and a global spiking neural network model comprising each neuron activity model and each synapse model is established. The global spiking neural network model comprises a three-layer network structure: an input layer N_1, an excitation layer N_2 and an inhibition layer N_3:
N_1 = {A_i = {a_k; k = 1, 2, ..., K}; i = 1, 2, ..., I}
N_2 = {b_i; i = 1, 2, ..., I}
N_3 = {c_i; i = 1, 2, ..., I}
The synapse set from N_1 to N_2 is:
E_12 = {H_i'i = {e_ki = (a_k, b_i, w_ki); k = 1, 2, ..., K}; i', i = 1, 2, ..., I}
The synapse set from N_2 to N_3 is:
E_23 = {g_ii = (b_i, c_i, w_ii); i = 1, 2, ..., I}
The synapse set from N_3 to N_2 is:
E_32 = {L_i = {e_ii' = (c_i, b_i', w_ii'); i' = 1, 2, ..., I and i' ≠ i}; i = 1, 2, ..., I}
where K is the number of pixels of the input image; I is the number of excitation-layer neurons, and the numbers of input neuron groups, excitation-layer neurons and inhibition-layer neurons are all equal to I; A_i is the input neuron group corresponding to one input handwritten digit grayscale image; a_k is the input neuron corresponding to one pixel of that image; b_i is an excitatory neuron; c_i is an inhibitory neuron; H_i'i is the synapse set from input neuron group A_i' to excitatory neuron b_i; e_ki is the k-th synapse in the i'-th input neuron group and w_ki is its weight; g_ii is the directed edge from excitatory neuron b_i to inhibitory neuron c_i and w_ii is its connection weight; L_i is the synapse set from inhibitory neuron c_i to the excitation layer; e_ii' is the synapse from inhibitory neuron c_i to excitatory neuron b_i' and w_ii' is its weight.
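As a concrete illustration of this input step, the image conversion and weight initialization might look like the following sketch. The random seed and the [0, 1) weight range are assumptions; the text only specifies 28 × 28 images flattened to 784 × 1 and uniformly distributed initial weights.

```python
import numpy as np

K, I = 784, 400                  # pixels per image; excitatory neurons
rng = np.random.default_rng(0)   # seed is an arbitrary illustrative choice

# Stand-in for one 28 x 28 MNIST grayscale image (values 0-255).
image = rng.integers(0, 256, size=(28, 28))

# Flatten the image into the 784 x 1 column vector fed to the network.
input_vector = image.reshape(K, 1)

# Initialize the STDP synapse weight matrix with uniformly distributed
# random numbers (the range [0, 1) is an assumption).
stdp_weights = rng.uniform(0.0, 1.0, size=(K, I))
```

The matrix shape K × I mirrors the synapse set E_12, one weight per input-neuron/excitatory-neuron pair.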
Step 102: dynamically pre-dividing the neuron population of the global spiking neural network model across multiple threads according to computer resources.
Using a distributed multithreading parallel technique, the neuron population is dynamically pre-divided across multiple threads according to computer resources. The input layer N_1 encodes the grayscale information of the pixels as time-series pulse information. That is, the matrix of a handwritten digit grayscale image in the MNIST data set, after the matrix-dimension conversion of step 101, is used (scaled in proportion to the intensity of each pixel value) as the Poisson rate of the corresponding input neuron a_k in the input layer N_1. Whether each input neuron a_k fires a pulse is determined by its own Poisson rate, and the intensity of the corresponding pixel value is dynamically adjusted according to the number of pulses fired by the excitatory neurons in each time period. The pulse sequence emitted by each input neuron a_k follows a Poisson random distribution. During a given time period, when an input neuron a_k fires a pulse, the excitatory neuron b_i receives the weight w_ki of the corresponding STDP synapse e_ki as an excitatory input. The pulses fired by inhibitory neurons c_i' in the previous cycle act on the current cycle: the excitatory neuron b_i receives the synaptic weight w_i'i of the corresponding static synapse connected to c_i' as an inhibitory input, updates its own membrane potential through the update equation over the time period, and fires a pulse when the membrane potential exceeds its threshold. After an excitatory neuron b_i fires a pulse, the one-to-one connected inhibitory neuron c_i receives the synaptic weight w_ii of the corresponding static synapse g_ii as input, updates through the update equation over the time period, and fires a pulse when its membrane potential exceeds the threshold condition. The pulses fired by inhibitory neuron c_i produce a lateral inhibitory effect on the other, non-corresponding excitatory neurons b_i' in the next period.
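The rate coding described above can be sketched as follows. Approximating the Poisson process with one Bernoulli draw per time step, and the `max_rate` scaling constant, are illustrative assumptions rather than values from the text.

```python
import numpy as np

def poisson_spike_train(pixel, t_steps, max_rate=0.25, rng=None):
    """Encode a pixel intensity (0-255) as a spike train whose firing
    probability per time step is proportional to the intensity, i.e. a
    Bernoulli approximation of a Poisson process. max_rate is an assumed
    scaling constant."""
    rng = rng if rng is not None else np.random.default_rng(0)
    p = (pixel / 255.0) * max_rate
    return rng.random(t_steps) < p   # boolean array: True = pulse fired

dark = poisson_spike_train(10, 1000)
bright = poisson_spike_train(250, 1000)
```

A brighter pixel yields a higher per-step firing probability and therefore, on average, many more pulses over the same time window.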
In addition, according to the STDP learning rule, in each time period the weight of an STDP synapse changes with the pulses fired by its pre- and post-synaptic neurons: if the pre-synaptic neuron fires before the post-synaptic neuron, the synaptic weight increases, indicating a high association between the two; conversely, the synaptic weight decreases, indicating a low association between the two.
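A minimal sketch of the pair-based STDP rule just described; the learning rates and time constant are illustrative, as the text gives no numerical values.

```python
import math

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change (constants are illustrative).
    If the presynaptic spike precedes the postsynaptic spike
    (t_pre < t_post), the weight increases; otherwise it decreases,
    with exponential decay in the spike-time difference."""
    dt = t_post - t_pre
    if dt > 0:   # pre fires before post: potentiation
        return a_plus * math.exp(-dt / tau)
    else:        # post fires before (or with) pre: depression
        return -a_minus * math.exp(dt / tau)
```

For example, `stdp_delta_w(t_pre=5, t_post=10)` is positive (a causal pair strengthens the synapse), while `stdp_delta_w(t_pre=10, t_post=5)` is negative.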
According to this periodic update process, the activity process of every excitatory neuron is identical; the excitatory neurons are independent of each other, with no data interaction and no required update order. All excitatory neurons are connected to the input neuron population, so they all receive the same excitatory input. Each excitatory neuron is connected only to its corresponding inhibitory neuron, so the pulse it fires acts only on that inhibitory neuron, leaving the remaining inhibitory neurons unaffected. The pulses fired by the inhibitory neurons act on the other excitatory neurons in the next cycle.
According to these characteristics, the invention divides each neuron group in the global spiking neural network, specifically as follows:
the number of CPU in the conventional computer is set to X, the number of input neuron groups is set to 784, the number of excitatory neuron groups is set to 400, and the number of inhibitory neuron groups is set to 400. Thus, X threads are created on the computer, denoted T ═ Tx(ii) a X1.. X } each thread txOne CPU is independently occupied so as to improve the execution efficiency of the training process. Each thread tx784 input neurons (denoted as K), 400/X excitatory neurons (denoted as I/X), 400/X inhibitory neurons (denoted as I/X) and 1-400/X mapping neurons (denoted as J/X) will all be assigned. At each thread txThe number of STDP synapses connected to the excitatory neuron by the input neuron is K (I/X), the number of Static synapses connected to the inhibitory neuron by the excitatory neuron is I/X, and the inhibitory neuron is connected to the inhibitory neuronThe number of Static synapses of the excitatory neuron is ((400/X) -1) × (400/X), and the number of Static synapses of the matching neuron connected to the excitatory neuron is (I/X-1) × (1-I/X).
Step 103: establishing a local spiking neural network model inside each thread, and initializing the neuron groups, synaptic connection relations and synaptic weights in each independent thread.
For each thread t_x, the internal local spiking neural network comprises a four-layer network structure: an input layer M_1, an excitation layer M_2, an inhibition layer M_3 and a mapping layer M_4. The input layer M_1 input neurons a_k range over k = 1, 2, ..., K; the excitation layer M_2 excitatory neurons b_i range over i = x × I/X, ..., (x + 1) × I/X; the inhibition layer M_3 inhibitory neurons c_i range over i = x × I/X, ..., (x + 1) × I/X; the mapping layer M_4 mapping neurons d_j range over j = 1, ..., x × I/X or (x + 1) × I/X, ..., I:
M_1 = {A_i = {a_k; k = 1, 2, ..., K}; i = 1, 2, ..., I/X}
M_2 = {b_i; i = x × I/X, ..., (x + 1) × I/X}
M_3 = {c_i; i = x × I/X, ..., (x + 1) × I/X}
M_4 = {d_j; j = 1, ..., x × I/X or (x + 1) × I/X, ..., I}
The synapse set from M_1 to M_2 is:
F_12 = {P_i'i = {f_ki = (a_k, b_i, w'_ki); k = 1, 2, ..., K}; i', i = x × I/X, ..., (x + 1) × I/X}
The synapse set from M_2 to M_3 is:
F_23 = {r_ii = (b_i, c_i, w'_ii); i = x × I/X, ..., (x + 1) × I/X}
The synapse set from M_3 to M_2 is:
F_32 = {Q_i = {f_ii' = (c_i, b_i', w'_ii'); i' = x × I/X, ..., (x + 1) × I/X and i' ≠ i}; i = x × I/X, ..., (x + 1) × I/X}
The synapse set from M_4 to M_2 is:
F_42 = {S_j = {f_ji = (d_j, b_i, w'_ji); i = x × I/X, ..., (x + 1) × I/X}; j = 1, ..., x × I/X or (x + 1) × I/X, ..., I}
In the local spiking neural network, all neurons start in the preset default resting state; the STDP synapses F_12 receive uniformly distributed random values as initial training weights and are updated every period, while the static synapses F_23, F_32 and F_42 receive preset default values as training initial weights and remain fixed every period.
Step 104: after initialization is completed, iteratively training the local spiking neural network models in all threads for a preset number of rounds, with periodic synchronization and pulse transmission performed continuously during training.
After all threads T complete initialization, each must wait for the other threads to finish initializing; this is the first synchronization. All threads T then perform their respective neuron and synapse update processes in parallel in each time period, similar to the periodic neuron update process of the global spiking neural network in step 102. The difference is that after the update process finishes, each thread transmits the inhibitory-neuron pulses generated in that time period to the other threads; this pulse transmission between threads is accomplished by means of inter-thread communication. A thread that completes its pulse transmission first must wait for the remaining threads to finish theirs, i.e. the synchronization of each time period. After the time-period synchronization is completed, all threads enter the next period and begin updating neurons again. This process iterates until the number of training rounds reaches the set number, completing the parallelized training.
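The per-period lockstep described above can be sketched with Python's threading module; `threading.Barrier` stands in for the synchronization mechanism, and the actual neuron/synapse updates and pulse transmission are elided as comments. All names and the small X/PERIODS values are illustrative.

```python
import threading

X = 4                  # number of worker threads (one per CPU)
PERIODS = 5            # time periods per round (small, for illustration)
barrier = threading.Barrier(X)   # releases only when all X threads arrive
lock = threading.Lock()
log = []               # records (period, thread) completion order

def worker(x):
    barrier.wait()     # first synchronization: wait for all initializations
    for period in range(PERIODS):
        # ... update local neurons and STDP synapses for this period ...
        # ... transmit this period's inhibitory pulses to other threads ...
        with lock:
            log.append((period, x))
        barrier.wait() # wait until every thread has finished this period

threads = [threading.Thread(target=worker, args=(x,)) for x in range(X)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because of the barrier at the end of each period, every thread finishes period p before any thread starts period p + 1, exactly the lockstep the training loop requires.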
The key points of the parallel training process are the manner of time-period synchronization and of pulse transmission. For period synchronization, a global synchronization flag variable flag is set; it is cleared before each period starts, each thread performs an atomic addition on flag after it finishes the period, and all threads wait until the flag value equals the total number of threads, which constitutes the per-thread synchronization. Pulse transmission requires a global double buffer: one write buffer and one read buffer. After each period starts, the mapping layer reads the previous period's pulse data from the read buffer. Before each period ends, each thread writes the pulses fired by its inhibition layer into the write buffer. In the next period, the former write buffer serves as the read buffer from which the mapping layer reads pulse data, and the former read buffer becomes the write buffer for storing that period's pulse data.
Step 105: after training is finished, the final weights of all STDP synapses of all threads are stored, and a synapse weight matrix is generated by integration and stored.
The evolution of the initially random STDP synaptic weight matrix during training is shown in fig. 3. After training is finished, the final weights of all STDP synapses of every thread are stored and integrated to generate a weight matrix, which is saved for subsequent handwritten digit recognition and prediction.
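An illustrative sketch of the integration step: each thread holds the final STDP weights for its slice of I/X excitatory neurons (shape K × I/X), and the slices are concatenated into the full K × I synapse weight matrix and stored for later prediction. The sizes, random stand-in weights and file path below are assumptions:

```python
import os
import tempfile

import numpy as np

# K = 784 input pixels, I = 400 excitatory neurons, X = 4 threads; each
# per-thread array stands in for the trained STDP weight slice of one thread.
K, I, X = 784, 400, 4
rng = np.random.default_rng(0)
per_thread = [rng.uniform(0.0, 1.0, size=(K, I // X)) for _ in range(X)]

weight_matrix = np.concatenate(per_thread, axis=1)   # integrate the X thread slices
path = os.path.join(tempfile.gettempdir(), "stdp_weights.npy")
np.save(path, weight_matrix)                          # store for recognition/prediction
```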
The invention provides a handwritten digit recognition model training method. The MNIST training data set and an STDP synapse initial weight matrix are input, and the neuron models and synapse models are established. Using a distributed multithreading parallel technique, neuron populations are pre-partitioned dynamically across multiple threads according to computer resources; a local spiking neural network is then established inside each thread, and the neuron populations, synaptic connection relations and synaptic weights are initialized in each independent thread. After initialization is complete, all threads are trained in parallel, iterating for the set number of rounds.
The invention also provides an embodiment of a handwritten digit recognition model training system, which comprises:
the input module is used for inputting the MNIST training data set and the STDP synaptic initial weight matrix into the global impulse neural network model, and the global impulse neural network model comprises neuron models and synaptic models;
the partitioning module is used for dynamically partitioning the neuron population of the global pulse neural network model in advance by using multiple threads according to computer resources;
the thread module is used for establishing an in-thread local impulse neural network model and initializing neuron groups, synaptic connection relations and synaptic weights in each independent thread;
the training module is used for carrying out iterative training on the local pulse neural network models in all threads according to a preset round number after initialization is finished, wherein periodic synchronization and pulse transmission are continuously carried out during training;
and the integration module is used for storing the final weights of all STDP synapses of all threads after the training is finished, and integrating and generating a synapse weight matrix and storing the synapse weight matrix.
The input module is specifically configured to:
and converting the training images of all 28 × 28 pixels in the MNIST training data set by a 784 × 1 matrix, and inputting the training images into the global pulse neural network model by taking uniformly distributed random numbers as initial values of the STDP synapse initial weight matrix.
The global spiking neural network model comprises a three-layer network structure: an input layer N1, an excitation layer N2 and an inhibition layer N3, where:

N1 = {A_i = {a_k; k = 1, 2, ..., K}; i = 1, 2, ..., I}
N2 = {b_i; i = 1, 2, ..., I}
N3 = {c_i; i = 1, 2, ..., I}

The synapse set from N1 to N2 is:

E12 = {H_i'i = {e_ki = (a_k, b_i, w_ki); k = 1, 2, ..., K}; i', i = 1, 2, ..., I}

The synapse set from N2 to N3 is:

E23 = {g_ii = (b_i, c_i, w_ii); i = 1, 2, ..., I}

The synapse set from N3 to N2 is:

E32 = {L_i = {e_ii' = (c_i, b_i', w_ii'); i' = 1, 2, ..., I and i' ≠ i}; i = 1, 2, ..., I}

where K is the number of pixels of the input image, I is the number of excitation-layer neurons, and the numbers of input neuron groups, excitation-layer neurons and inhibition-layer neurons are all equal to I; A_i is the input neuron group corresponding to an input handwritten digit gray-scale image; a_k is the input neuron corresponding to one pixel of that image; b_i is an excitatory neuron; c_i is an inhibitory neuron; H_i'i is the synapse set from input neuron group A_i' to excitatory neuron b_i, e_ki is the k-th synapse in the i'-th input neuron group, and w_ki is the weight of e_ki; g_ii is the directed edge from excitatory neuron b_i to inhibitory neuron c_i, and w_ii is its connection weight; L_i is the synapse set from inhibition-layer neuron c_i to the excitation layer, e_ii' is the synapse from inhibitory neuron c_i to excitatory neuron b_i', and w_ii' is the weight of e_ii'.
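The three-layer topology can be illustrated with small edge lists; K and I are shrunk for readability (the real model uses K = 784 pixels), and the representation as index pairs is an assumption for the sketch:

```python
# E12: every input neuron in group A_i connects into excitatory neuron b_i;
# E23: each excitatory neuron drives its own inhibitory neuron (one-to-one);
# E32: each inhibitory neuron inhibits every OTHER excitatory neuron.
K, I = 4, 3

E12 = [(k, i) for i in range(I) for k in range(K)]            # input -> excitatory
E23 = [(i, i) for i in range(I)]                              # excitatory -> inhibitory
E32 = [(i, j) for i in range(I) for j in range(I) if j != i]  # inhibitory -> other excitatory
```

The edge counts match the definitions: K × I for E12, I for E23, and I × (I − 1) for E32.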
The dividing module is specifically configured to:
creating a plurality of threads according to the number of CPUs of the computer, wherein each thread independently occupies one CPU, and the number of created threads does not exceed the number of CPUs of the computer;
In each thread, the number of STDP synapses connecting input neurons to excitatory neurons is K × (I/X); the number of static synapses connecting excitatory neurons to inhibitory neurons is I/X; the number of static synapses connecting inhibitory neurons to excitatory neurons is ((400/X) − 1) × (400/X); and the number of static synapses connecting mapping neurons to excitatory neurons is (I/X − 1) × (1 − I/X), where X is the number of CPUs.
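The per-thread synapse counts can be checked numerically. The sketch below uses the example sizes implied by the disclosure (K = 784, I = 400, X = 4 CPUs). Note that the mapping-to-excitatory count is an assumed reading of the printed expression, which appears garbled: each of the I − I/X mapping neurons connecting to the I/X local excitatory neurons:

```python
# Per-thread synapse counts for K = 784 input pixels, I = 400 excitatory
# neurons and X = 4 threads; the inhibitory->excitatory formula substitutes
# I = 400 exactly as the text does.
K, I, X = 784, 400, 4

stdp_input_to_exc = K * (I // X)                 # K x (I/X) STDP synapses
static_exc_to_inh = I // X                       # one static synapse per local excitatory neuron
static_inh_to_exc = (400 // X - 1) * (400 // X)  # ((400/X) - 1) x (400/X)
static_map_to_exc = (I - I // X) * (I // X)      # assumed reading of the mapping-layer count
```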
The local spiking neural network model in each thread comprises a four-layer network structure: an input layer M1, an excitation layer M2, an inhibition layer M3 and a mapping layer M4.

In input layer M1, the input neurons a_k range over k = 1, 2, ..., K; in excitation layer M2, the excitatory neurons b_i range over i = x × I/X, ..., (x + 1) × I/X; in inhibition layer M3, the inhibitory neurons c_i range over i = x × I/X, ..., (x + 1) × I/X; in mapping layer M4, the mapping neurons d_j range over j = 1, ..., x × I/X or (x + 1) × I/X, ..., I, where x is the index of the thread:

M1 = {A_i = {a_k; k = 1, 2, ..., K}; i = 1, 2, ..., I/X}
M2 = {b_i; i = x × I/X, ..., (x + 1) × I/X}
M3 = {c_i; i = x × I/X, ..., (x + 1) × I/X}
M4 = {d_j; j = 1, ..., x × I/X or (x + 1) × I/X, ..., I}

The synapse set from M1 to M2 is:

F12 = {P_i'i = {f_ki = (a_k, b_i, w'_ki); k = 1, 2, ..., K}; i', i = x × I/X, ..., (x + 1) × I/X}

The synapse set from M2 to M3 is:

F23 = {r_ii = (b_i, c_i, w'_ii); i = x × I/X, ..., (x + 1) × I/X}

The synapse set from M3 to M2 is:

F32 = {Q_i = {f_ii' = (c_i, b_i', w'_ii'); i' = x × I/X, ..., (x + 1) × I/X and i' ≠ i}; i = x × I/X, ..., (x + 1) × I/X}

The synapse set from M4 to M2 is:

F42 = {S_j = {f_ji = (d_j, b_i, w'_ji); i = x × I/X, ..., (x + 1) × I/X}; j = 1, ..., x × I/X or (x + 1) × I/X, ..., I}

In the local spiking neural network, all neurons are preset to the default resting state. The STDP synapses F12 receive uniformly distributed random values as initial training weights and remain in an updating state in every period; the static synapses F23, F32 and F42 receive preset default values as initial training weights and remain fixed in every period.
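A hedged sketch of the per-thread initialization described above follows. The resting potential and the static default weight values are assumptions (the text only says "preset default"), as are the array shapes:

```python
import numpy as np

# Local neurons start in a default resting state; plastic STDP synapses F12
# get uniform random initial weights (updated every period); static synapses
# such as F23 and F32 get fixed preset defaults (never updated).
K, I, X = 784, 400, 4
n_local = I // X                                    # excitatory neurons in this thread

rest_potential = -65.0                              # assumed default resting state
neuron_state = np.full(2 * n_local, rest_potential) # local excitatory + inhibitory neurons

rng = np.random.default_rng(7)
F12 = rng.uniform(0.0, 1.0, size=(K, n_local))      # plastic: updated every period
F23 = np.full(n_local, 10.4)                        # assumed static default weight
F32 = np.full((n_local - 1) * n_local, -17.0)       # assumed static inhibitory weight
```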
The handwritten digit recognition model training system provided by the invention inputs the MNIST training data set and an STDP synapse initial weight matrix, establishes the neuron models and synapse models, uses a distributed multithreading parallel technique to pre-partition neuron populations dynamically across multiple threads according to computer resources, then establishes a local spiking neural network inside each thread and initializes the neuron populations, synaptic connection relations and synaptic weights in each independent thread; after initialization is complete, all threads are trained iteratively in parallel for the set number of rounds.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the system and the specific working process described above may refer to the corresponding process in the foregoing method embodiments, and are not described herein again.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for training a handwritten digit recognition model, comprising:
inputting the MNIST training data set and the STDP synaptic initial weight matrix into a global impulse neural network model, wherein the global impulse neural network model comprises neuron models and synapse models;
according to computer resources, dynamically using multiple threads to pre-divide neuron groups of the global pulse neural network model;
establishing an in-thread local impulse neural network model, and initializing neuron groups, synaptic connection relations and synaptic weights in each independent thread;
after initialization is completed, performing iterative training on the local pulse neural network models in all threads according to a preset number of rounds, wherein periodic synchronization and pulse transmission are continuously performed during training;
and after the training is finished, storing all STDP synapse final weights of all threads, and integrating and generating a synapse weight matrix and storing the synapse weight matrix.
2. The method of claim 1, wherein inputting the MNIST training data set and the STDP synaptic initial weight matrix into the global spiking neural network model comprises:
and converting the training images of all 28 × 28 pixels in the MNIST training data set by a 784 × 1 matrix, and inputting the training images into the global pulse neural network model by taking uniformly distributed random numbers as initial values of the STDP synapse initial weight matrix.
3. The method for training a handwritten digit recognition model according to claim 1, wherein the global spiking neural network model comprises a three-layer network structure: an input layer N1, an excitation layer N2 and an inhibition layer N3, where:

N1 = {A_i = {a_k; k = 1, 2, ..., K}; i = 1, 2, ..., I}
N2 = {b_i; i = 1, 2, ..., I}
N3 = {c_i; i = 1, 2, ..., I}

The synapse set from N1 to N2 is:

E12 = {H_i'i = {e_ki = (a_k, b_i, w_ki); k = 1, 2, ..., K}; i', i = 1, 2, ..., I}

The synapse set from N2 to N3 is:

E23 = {g_ii = (b_i, c_i, w_ii); i = 1, 2, ..., I}

The synapse set from N3 to N2 is:

E32 = {L_i = {e_ii' = (c_i, b_i', w_ii'); i' = 1, 2, ..., I and i' ≠ i}; i = 1, 2, ..., I}

where K is the number of pixels of the input image, I is the number of excitation-layer neurons, and the numbers of input neuron groups, excitation-layer neurons and inhibition-layer neurons are all equal to I; A_i is the input neuron group corresponding to an input handwritten digit gray-scale image; a_k is the input neuron corresponding to one pixel of that image; b_i is an excitatory neuron; c_i is an inhibitory neuron; H_i'i is the synapse set from input neuron group A_i' to excitatory neuron b_i, e_ki is the k-th synapse in the i'-th input neuron group, and w_ki is the weight of e_ki; g_ii is the directed edge from excitatory neuron b_i to inhibitory neuron c_i, and w_ii is its connection weight; L_i is the synapse set from inhibition-layer neuron c_i to the excitation layer, e_ii' is the synapse from inhibitory neuron c_i to excitatory neuron b_i', and w_ii' is the weight of e_ii'.
4. The method for training a handwritten digit recognition model according to claim 3, wherein dynamically pre-partitioning the neuron populations of the global spiking neural network model using multiple threads according to computer resources comprises:
creating a plurality of threads according to the number of CPUs of the computer, wherein each thread independently occupies one CPU, and the number of created threads does not exceed the number of CPUs of the computer;
in each thread, the number of STDP synapses connecting input neurons to excitatory neurons is K × (I/X); the number of static synapses connecting excitatory neurons to inhibitory neurons is I/X; the number of static synapses connecting inhibitory neurons to excitatory neurons is ((400/X) − 1) × (400/X); and the number of static synapses connecting mapping neurons to excitatory neurons is (I/X − 1) × (1 − I/X), where X is the number of CPUs.
5. The method for training a handwritten digit recognition model according to claim 4, wherein the intra-thread local spiking neural network model comprises a four-layer network structure: an input layer M1, an excitation layer M2, an inhibition layer M3 and a mapping layer M4.

In input layer M1, the input neurons a_k range over k = 1, 2, ..., K; in excitation layer M2, the excitatory neurons b_i range over i = x × I/X, ..., (x + 1) × I/X; in inhibition layer M3, the inhibitory neurons c_i range over i = x × I/X, ..., (x + 1) × I/X; in mapping layer M4, the mapping neurons d_j range over j = 1, ..., x × I/X or (x + 1) × I/X, ..., I, where x is the index of the thread:

M1 = {A_i = {a_k; k = 1, 2, ..., K}; i = 1, 2, ..., I/X}
M2 = {b_i; i = x × I/X, ..., (x + 1) × I/X}
M3 = {c_i; i = x × I/X, ..., (x + 1) × I/X}
M4 = {d_j; j = 1, ..., x × I/X or (x + 1) × I/X, ..., I}

The synapse set from M1 to M2 is:

F12 = {P_i'i = {f_ki = (a_k, b_i, w'_ki); k = 1, 2, ..., K}; i', i = x × I/X, ..., (x + 1) × I/X}

The synapse set from M2 to M3 is:

F23 = {r_ii = (b_i, c_i, w'_ii); i = x × I/X, ..., (x + 1) × I/X}

The synapse set from M3 to M2 is:

F32 = {Q_i = {f_ii' = (c_i, b_i', w'_ii'); i' = x × I/X, ..., (x + 1) × I/X and i' ≠ i}; i = x × I/X, ..., (x + 1) × I/X}

The synapse set from M4 to M2 is:

F42 = {S_j = {f_ji = (d_j, b_i, w'_ji); i = x × I/X, ..., (x + 1) × I/X}; j = 1, ..., x × I/X or (x + 1) × I/X, ..., I}

In the local spiking neural network, all neurons are preset to the default resting state; the STDP synapses F12 receive uniformly distributed random values as initial training weights and remain in an updating state in every period; the static synapses F23, F32 and F42 receive preset default values as initial training weights and remain fixed in every period.
6. A system for training a handwritten digit recognition model, comprising:
the input module is used for inputting the MNIST training data set and the STDP synaptic initial weight matrix into the global impulse neural network model, and the global impulse neural network model comprises neuron models and synaptic models;
the partitioning module is used for dynamically partitioning the neuron population of the global pulse neural network model in advance by using multiple threads according to computer resources;
the thread module is used for establishing an in-thread local impulse neural network model and initializing neuron groups, synaptic connection relations and synaptic weights in each independent thread;
the training module is used for carrying out iterative training on the local pulse neural network models in all threads according to a preset round number after initialization is finished, wherein periodic synchronization and pulse transmission are continuously carried out during training;
and the integration module is used for storing the final weights of all STDP synapses of all threads after the training is finished, and integrating and generating a synapse weight matrix and storing the synapse weight matrix.
7. The system for training a handwritten digit recognition model according to claim 6, wherein the input module is specifically configured to:
and converting the training images of all 28 × 28 pixels in the MNIST training data set by a 784 × 1 matrix, and inputting the training images into the global pulse neural network model by taking uniformly distributed random numbers as initial values of the STDP synapse initial weight matrix.
8. The system for training a handwritten digit recognition model according to claim 6, wherein the global spiking neural network model comprises a three-layer network structure: an input layer N1, an excitation layer N2 and an inhibition layer N3, where:

N1 = {A_i = {a_k; k = 1, 2, ..., K}; i = 1, 2, ..., I}
N2 = {b_i; i = 1, 2, ..., I}
N3 = {c_i; i = 1, 2, ..., I}

The synapse set from N1 to N2 is:

E12 = {H_i'i = {e_ki = (a_k, b_i, w_ki); k = 1, 2, ..., K}; i', i = 1, 2, ..., I}

The synapse set from N2 to N3 is:

E23 = {g_ii = (b_i, c_i, w_ii); i = 1, 2, ..., I}

The synapse set from N3 to N2 is:

E32 = {L_i = {e_ii' = (c_i, b_i', w_ii'); i' = 1, 2, ..., I and i' ≠ i}; i = 1, 2, ..., I}

where K is the number of pixels of the input image, I is the number of excitation-layer neurons, and the numbers of input neuron groups, excitation-layer neurons and inhibition-layer neurons are all equal to I; A_i is the input neuron group corresponding to an input handwritten digit gray-scale image; a_k is the input neuron corresponding to one pixel of that image; b_i is an excitatory neuron; c_i is an inhibitory neuron; H_i'i is the synapse set from input neuron group A_i' to excitatory neuron b_i, e_ki is the k-th synapse in the i'-th input neuron group, and w_ki is the weight of e_ki; g_ii is the directed edge from excitatory neuron b_i to inhibitory neuron c_i, and w_ii is its connection weight; L_i is the synapse set from inhibition-layer neuron c_i to the excitation layer, e_ii' is the synapse from inhibitory neuron c_i to excitatory neuron b_i', and w_ii' is the weight of e_ii'.
9. The system for training a handwritten digit recognition model according to claim 8, wherein the partitioning module is specifically configured to:
creating a plurality of threads according to the number of CPUs of the computer, wherein each thread independently occupies one CPU, and the number of created threads does not exceed the number of CPUs of the computer;
in each thread, the number of STDP synapses connecting input neurons to excitatory neurons is K × (I/X); the number of static synapses connecting excitatory neurons to inhibitory neurons is I/X; the number of static synapses connecting inhibitory neurons to excitatory neurons is ((400/X) − 1) × (400/X); and the number of static synapses connecting mapping neurons to excitatory neurons is (I/X − 1) × (1 − I/X), where X is the number of CPUs.
10. The system for training a handwritten digit recognition model according to claim 9, wherein the local spiking neural network model in each thread comprises a four-layer network structure: an input layer M1, an excitation layer M2, an inhibition layer M3 and a mapping layer M4.

In input layer M1, the input neurons a_k range over k = 1, 2, ..., K; in excitation layer M2, the excitatory neurons b_i range over i = x × I/X, ..., (x + 1) × I/X; in inhibition layer M3, the inhibitory neurons c_i range over i = x × I/X, ..., (x + 1) × I/X; in mapping layer M4, the mapping neurons d_j range over j = 1, ..., x × I/X or (x + 1) × I/X, ..., I, where x is the index of the thread:

M1 = {A_i = {a_k; k = 1, 2, ..., K}; i = 1, 2, ..., I/X}
M2 = {b_i; i = x × I/X, ..., (x + 1) × I/X}
M3 = {c_i; i = x × I/X, ..., (x + 1) × I/X}
M4 = {d_j; j = 1, ..., x × I/X or (x + 1) × I/X, ..., I}

The synapse set from M1 to M2 is:

F12 = {P_i'i = {f_ki = (a_k, b_i, w'_ki); k = 1, 2, ..., K}; i', i = x × I/X, ..., (x + 1) × I/X}

The synapse set from M2 to M3 is:

F23 = {r_ii = (b_i, c_i, w'_ii); i = x × I/X, ..., (x + 1) × I/X}

The synapse set from M3 to M2 is:

F32 = {Q_i = {f_ii' = (c_i, b_i', w'_ii'); i' = x × I/X, ..., (x + 1) × I/X and i' ≠ i}; i = x × I/X, ..., (x + 1) × I/X}

The synapse set from M4 to M2 is:

F42 = {S_j = {f_ji = (d_j, b_i, w'_ji); i = x × I/X, ..., (x + 1) × I/X}; j = 1, ..., x × I/X or (x + 1) × I/X, ..., I}

In the local spiking neural network, all neurons are preset to the default resting state; the STDP synapses F12 receive uniformly distributed random values as initial training weights and remain in an updating state in every period; the static synapses F23, F32 and F42 receive preset default values as initial training weights and remain fixed in every period.
CN202110352414.0A 2021-03-31 2021-03-31 Training method and system for handwriting digital recognition model Active CN113033782B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110352414.0A CN113033782B (en) 2021-03-31 2021-03-31 Training method and system for handwriting digital recognition model

Publications (2)

Publication Number Publication Date
CN113033782A true CN113033782A (en) 2021-06-25
CN113033782B CN113033782B (en) 2023-07-07

Family

ID=76453531

Country Status (1)

Country Link
CN (1) CN113033782B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114548384A (en) * 2022-04-28 2022-05-27 之江实验室 Method and device for constructing impulse neural network model with abstract resource constraint

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875846A (en) * 2018-05-08 2018-11-23 河海大学常州校区 A kind of Handwritten Digit Recognition method based on improved impulsive neural networks
CN110837776A (en) * 2019-10-09 2020-02-25 广东工业大学 Pulse neural network handwritten Chinese character recognition method based on STDP
CN112163672A (en) * 2020-09-08 2021-01-01 杭州电子科技大学 WTA learning mechanism-based cross array impulse neural network hardware system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant