
CN114220089B - Method for pattern recognition based on a segmented progressive spiking neural network

Info

Publication number: CN114220089B
Application number: CN202111436510.XA
Authority: CN (China)
Prior art keywords: layer, impulse, pulse, neurons, neural network
Legal status: Active (granted)
Other versions: CN114220089A (published 2022-03-22)
Other languages: Chinese (zh)
Inventors: Yang Xu (杨旭), Lei Yunlin (雷云霖), Wang Miao (王淼), Cai Jian (蔡建)
Assignee: Beijing Institute of Technology (BIT)
Filed: 2021-11-29 by Beijing Institute of Technology, with priority to CN202111436510.XA
Granted: 2024-06-14 (CN114220089B)

Classifications

    • G06F18/241 — Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N3/045 — Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology; combinations of networks
    • G06N3/049 — Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08 — Computing arrangements based on biological models; neural networks; learning methods


Abstract

A method for pattern recognition based on a segmented progressive spiking neural network obtains samples of a pattern recognition task and establishes a coding layer that processes the samples and encodes them into a form the spiking neural network can process. An input layer is established to convert the output of the coding layer into spike trains, and a memory layer is established whose spiking neurons are used to build memory. An "infant" learning stage then follows: an output layer is established, and synapses between the memory layer and the output layer are created by a heuristic method, so that the memory held in the memory layer can be accurately extracted and used for decisions. In the subsequent "precise" learning stage, all samples of the pattern recognition task are input, only the synaptic weights are adjusted, and teacher signals guide the adjustment; once learning is complete, a spiking neural network for the current pattern recognition task is obtained. Finally, the synaptic weights and synaptic structure of the trained network are fixed, and data to be recognized are input to obtain the pattern recognition result.

Description

Method for pattern recognition based on a segmented progressive spiking neural network
Technical Field
The invention belongs to the technical field of artificial intelligence and neural networks, and particularly relates to a method for pattern recognition based on a segmented progressive spiking neural network.
Background
Spiking neural networks are considered to have great potential for artificial intelligence because their neuron models resemble real neurons more closely; they are therefore called the third generation of artificial neural networks, following deep neural networks. In practical applications such as pattern recognition, however, existing learning methods for spiking neural networks fall into two broad directions. The first trains the spiking network with methods borrowed from deep neural networks; although the resulting performance is no weaker than that of deep networks, this direction cannot fully exploit the biomimetic character of spiking neurons, so the potential of spiking networks remains untapped. The second draws inspiration from brain science and trains spiking networks with brain-like algorithms; this direction has more potential for intelligence and is a hot spot of current research, but efficient algorithms in it are still relatively scarce. The brain contains many synaptic plasticity mechanisms for learning, among them long-term depression (LTD) and long-term potentiation (LTP). LTP can be described as follows: if a synapse is stimulated at high frequency, the membrane potential of its postsynaptic neuron population stays high for a long period, and synaptic efficacy and synapse number increase. LTD, conversely, can be described as follows: if a synapse receives low-frequency stimulation, the membrane potential of its postsynaptic neuron population stays low for a long period, and synaptic efficacy and synapse number decrease. Human learning is likewise staged: in early childhood the brain learns under the drive of background knowledge and synapses grow in large numbers, but cognition is not yet strong and memory is not yet precise; the brain then finely prunes and adjusts this abundance of synapses, yielding strong cognitive ability and accurate memory.
Disclosure of Invention
To overcome the defects of the prior art, the invention aims to provide a method for pattern recognition based on a segmented progressive spiking neural network. The method can effectively generate memory, can be widely applied to various pattern recognition tasks, and has good robustness; the introduction of the brain's LTD and LTP mechanisms lets the network perform precise structure learning, so the network scale is reduced without losing much performance, learning is more effective, and operation is more efficient.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
A method for pattern recognition based on a segmented progressive spiking neural network comprises the following steps:
Step 1: acquire samples of the pattern recognition task, establish the coding layer of the spiking neural network, process the samples in the coding layer, and encode them into a form the spiking neural network can process;
Step 2: establish the input layer of the spiking neural network, which converts the output of the coding layer into spike trains;
Step 3: establish the memory layer of the spiking neural network, whose spiking neurons are used to build memory;
step 4: carry out the learning stage of the infant
Inputting a part of samples of the pattern recognition task (i.e. samples before the processing in the step 1), introducing a Long-term inhibition (LTD) and a Long-term enhancement (Long-term potentiation, LTP) mechanism of the brain, and using a synaptic weight adjustment algorithm and a progressive synaptic structure adjustment algorithm to enable the impulse neural network to generate a structure suitable for the current task; this structure, i.e. the network's preliminary memory of the current task, can be considered. This is similar to the human learning process, in that, in general, the number of human brain synapses is reached during childhood, since just before this the brain begins to learn about the world, so that synapses are very plastic, creating very rough memories that are not necessarily useful later, but initially build up an understanding of the world.
Step 5: establish the output layer of the spiking neural network and build synapses between the memory layer and the output layer by a heuristic method, so that the memory of the memory layer can be accurately extracted and decisions can be made;
Step 6: carry out the "precise" learning stage.
Input all samples of the pattern recognition task (i.e., samples before the processing of step 1), adjust only the synaptic weights, and guide the adjustment with teacher signals. This again resembles human learning: after the growth of the "infant" period, the brain prunes synapses to produce accurate memory and thereby complete tasks effectively. Once learning is complete, a spiking neural network usable for the current pattern recognition task is obtained;
Step 7: fix the synaptic weights and synaptic structure of the trained spiking neural network, then input the data to be recognized to obtain the pattern recognition result.
In one embodiment, the samples are pictures, audio, or similar data, so the pattern recognition task is correspondingly image recognition, audio recognition, and so on.
In one embodiment, in step 1, several static convolution kernels are selected, a convolution and pooling operation is applied to each sample, and the processed samples are then normalized to the interval [0, T]; after processing, each sample consists of L values.
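By way of illustration, the following is a minimal sketch (in Python, using numpy) of the coding-layer processing just described: static convolution kernels, pooling, and normalization to [0, T]. The kernel set, pooling size, and function names are illustrative assumptions, not the patent's exact configuration.

```python
import numpy as np

def encode_sample(image, kernels, pool=2, t_max=255.0):
    """Convolve with static kernels, max-pool, normalize to [0, t_max],
    and flatten into a length-L vector (L depends on kernels and pooling)."""
    maps = []
    for k in kernels:
        kh, kw = k.shape
        h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
        conv = np.empty((h, w))
        for i in range(h):           # plain valid convolution (sketch-level)
            for j in range(w):
                conv[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
        ph, pw = h // pool, w // pool
        # non-overlapping max pooling
        pooled = conv[:ph * pool, :pw * pool].reshape(ph, pool, pw, pool).max(axis=(1, 3))
        maps.append(pooled.ravel())
    out = np.concatenate(maps)
    lo, hi = out.min(), out.max()
    return (out - lo) / (hi - lo + 1e-12) * t_max   # normalized to [0, t_max]
```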
In one embodiment, step 2 constructs the input layer from as many spiking neurons as the sample length (i.e., L) and as many spike generators as the sample length (i.e., L). The spiking neurons convert the output of the coding layer into spike trains, and the spike generators induce the spiking neurons to emit spikes. The spiking neurons are connected to the spike generators one to one, each spike generator corresponds one to one with an output of the coding layer, and the time at which a spiking neuron is induced to emit a spike relates to the coding-layer output as T_spike-i = Output_i, where T_spike-i is the induction time of spike generator i and Output_i is the coding-layer output at the corresponding position.
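A minimal sketch of this latency rule, assuming the encoded vector from the previous sketch; the dictionary representation of spike times is an assumed convenience, not part of the invention.

```python
# Input layer: spike generator i induces spiking neuron i to fire at time
# T_spike_i = Output_i, i.e. the encoded value itself is the firing time.
def to_spike_times(encoded):
    return {i: float(v) for i, v in enumerate(encoded)}
```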
In one embodiment, step 3 constructs the memory layer from m layers of spiking neurons, each layer containing n spiking neurons, where n and m are arbitrary positive integers. In the network initialization stage, every spiking neuron except those of the last layer randomly selects x spiking neurons of the next layer and establishes synapses to them, with 0 < x ≤ n. The n·m neurons, together with the synaptic structures and synaptic weights between them, form the basis of memory; precise memory is formed by learning in the subsequent steps.
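A sketch of this initialization under the stated assumptions (m layers, n neurons per layer, x random next-layer targets per neuron, random initial weights); the flat dictionary of synapses is an illustrative data structure.

```python
import numpy as np

def init_memory_layer(m: int, n: int, x: int, seed: int = 0):
    """m layers of n spiking neurons; each neuron (except in the last layer)
    gets synapses to x randomly chosen neurons of the next layer."""
    rng = np.random.default_rng(seed)
    synapses = {}  # (layer, pre, post) -> weight
    for layer in range(m - 1):
        for pre in range(n):
            for post in rng.choice(n, size=x, replace=False):
                synapses[(layer, pre, int(post))] = float(rng.uniform(0.0, 1.0))
    return synapses
```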
In one embodiment, step 4 includes:
Step 4.1: set a discriminator for each memory-layer spiking neuron. If a neuron fires fewer than θ_LTD times within the past window t_W, it enters the LTD state for the next window t_W; if it fires more than θ_LTP times within the past window t_W, it enters the LTP state for the next window t_W. A neuron entering the LTP state has its synaptic weights set to ω_LTP, which is large enough that it can activate the neurons it connects to through synapses, called its postsynaptic neurons. A neuron entering the LTD state has its synaptic weights set to ω_LTD, which is small enough that its postsynaptic neurons are difficult to activate; in addition, a neuron leaving the LTD state must exert a synapse-suppressing influence, under which its synapses may be pruned or have their weights weakened. The spiking neurons in the memory layer switch continuously among the LTD, LTP, non-LTD, and non-LTP states. The LTP mechanism can be viewed as strengthening the connections among the neurons associated with a given pattern in the current task, while the LTD mechanism suppresses connections among neurons generated by noise;
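The per-neuron discriminator of step 4.1 can be sketched as the small state machine below; the NeuronState record and the spike-count bookkeeping are assumed implementation details, with θ_LTD, θ_LTP, ω_LTD, ω_LTP, and the window t_W taken from the text.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NeuronState:
    state: Optional[str] = None            # "LTD", "LTP", or None
    out_weight_override: Optional[float] = None
    ltd_count: int = 0                     # consecutive entries into LTD

def update_state(neuron: NeuronState, fire_count: int,
                 theta_ltd: int, theta_ltp: int,
                 omega_ltd: float, omega_ltp: float) -> None:
    """Set the neuron's state for the next t_W window from its firing
    count over the past t_W window."""
    if fire_count < theta_ltd:             # too quiet: long-term depression
        neuron.state = "LTD"
        neuron.out_weight_override = omega_ltd
        neuron.ltd_count += 1
    elif fire_count > theta_ltp:           # very active: long-term potentiation
        neuron.state = "LTP"
        neuron.out_weight_override = omega_ltp
        neuron.ltd_count = 0
    else:                                  # neither LTD nor LTP
        neuron.state = None
        neuron.out_weight_override = None
        neuron.ltd_count = 0
```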
Step 4.2: input a portion of the samples of the pattern recognition task. The progressive synaptic structure adjustment algorithm is: if two spiking neurons have no synaptic connection but repeatedly fire within Δt of each other, a new synapse is established between them;
The synaptic weight adjustment algorithm adjusts weights by the spike-timing-dependent plasticity (STDP) principle; the structure adjustment method follows the same principle as the Hebbian rule in the brain, and STDP is considered one of the key rules of brain synaptic plasticity. As the spiking neurons in the network switch continuously among the LTD, LTP, non-LTD, and non-LTP states, the network learns through the progressive synaptic structure adjustment algorithm and the synaptic weight adjustment algorithm, so that a compact structure forms among the neurons associated with each pattern in the current task, and the overall structure becomes well suited to that task.
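A sketch of the progressive synaptic structure adjustment of step 4.2: a new synapse is grown between two unconnected neurons in adjacent layers that fire within Δt of each other. A single co-firing test stands in here for the text's "repeatedly fire within Δt" condition, and the initial weight w_init is an assumption.

```python
def grow_synapses(spike_times, synapses, layer, delta_t, w_init=0.5):
    """spike_times: dict layer -> list of (neuron_id, firing_time).
    Adds a synapse between unconnected pre/post neurons in adjacent layers
    whose spikes fall within delta_t of each other."""
    for pre, t_pre in spike_times[layer]:
        for post, t_post in spike_times[layer + 1]:
            if (layer, pre, post) not in synapses and abs(t_pre - t_post) <= delta_t:
                synapses[(layer, pre, post)] = w_init   # new synapse grown
```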
In one embodiment, the method of exerting the suppressing influence after exiting the LTD state in step 4.1 is as follows:
when a spiking neuron is about to exit the LTD state, each of its synapses has its weight decayed by a factor α (α < 1) with probability ρ1, remains unchanged with probability ρ2, and is pruned with probability (1 − ρ1 − ρ2). Here ρ1 and ρ2 are functions of LTD_count governed by the constants ρ, κ, and ψ, which control the probability magnitudes, and LTD_count is the number of consecutive times the neuron has entered the LTD state.
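The suppression applied on leaving the LTD state can be sketched as follows. Since the expressions for ρ1 and ρ2 (functions of ρ, κ, ψ, and LTD_count) appear in the source only as a figure, they are taken here as precomputed inputs rather than reconstructed.

```python
import numpy as np

def on_exit_ltd(neuron_synapses, rho1, rho2, alpha, rng=None):
    """For each synapse of the exiting neuron: decay its weight by alpha
    with prob. rho1, leave it unchanged with prob. rho2, prune it with
    prob. 1 - rho1 - rho2."""
    rng = rng or np.random.default_rng()
    for key in list(neuron_synapses):
        r = rng.random()
        if r < rho1:
            neuron_synapses[key] *= alpha   # weight decay, alpha < 1
        elif r < rho1 + rho2:
            pass                            # nothing happens
        else:
            del neuron_synapses[key]        # synapse is cut
```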
In one embodiment, step 5 establishes the synapses between the memory layer and the output layer by a heuristic method, comprising:
Step 5.1: using the samples used in step 4, place samples with the same label in the same group and input the groups one by one into the coding layer;
Step 5.2: for the inputs of one group, whenever a spiking neuron in the last layer of the memory layer fires more than θ_OUT times, a synapse is established between that neuron and the output-layer spiking neuron corresponding to the current group. This resembles the brain's decision process: in general, a decision of the human brain involves only the brain regions activated by the current external stimulus and is unrelated to other, unactivated regions, so the output judgment is made from the currently most relevant activated memory.
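A sketch of this heuristic output wiring, assuming per-group firing counts for the last memory layer have already been collected; the initial weight w_init is an assumption.

```python
def wire_output_layer(fire_counts_per_group, theta_out, w_init=0.5):
    """fire_counts_per_group: dict label -> {memory_neuron: firing count}.
    Connects every last-memory-layer neuron that fired more than theta_out
    times for a group to that group's output neuron."""
    out_synapses = {}  # (memory_neuron, output_neuron) -> weight
    for label, fire_counts in fire_counts_per_group.items():
        for neuron, count in fire_counts.items():
            if count > theta_out:
                out_synapses[(neuron, label)] = w_init
    return out_synapses
```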
In one embodiment, step 6 includes:
Step 6.1: set an exciter for each spiking neuron of the output layer, which activates that neuron upon receiving a teacher signal; then shuffle all samples and input them into the spiking neural network;
Step 6.2: for a given sample, let the output-layer spiking neuron corresponding to its label be neuron j. The teacher signal of neuron j is delivered at time t_e, and the teacher signals of the remaining output-layer neurons are delivered at time t_in, where t_in is earlier than the earliest firing time in the last layer of the memory layer and t_e is the moment at which the firing frequency of that last layer is highest. Under this guidance, the synaptic weight adjustment algorithm of step 4 increases the synaptic weights between the memory layer and output neuron j, and decreases the synaptic weights from memory-layer neurons to output neurons other than j that fire before t_e;
Step 6.3: repeat steps 6.1 and 6.2 several times; learning is then complete and a spiking neural network usable for the current pattern recognition task is obtained, in which the spiking neurons, together with the synaptic structure and synaptic weights between them, can be regarded as memory.
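The teacher-signal timing of step 6.2 can be sketched as below: the target output neuron receives its signal at t_e (the peak-firing moment of the last memory layer) and all others at t_in (before the earliest memory-layer spike), so that the STDP rule of step 4 strengthens the former's afferent synapses and weakens the others'. The histogram-based estimate of t_e and the bin width are assumptions.

```python
import numpy as np

def teacher_schedule(label, output_neurons, last_layer_spike_times, bin_width=1.0):
    """Return per-output-neuron teacher-signal times for one sample."""
    spikes = np.asarray(last_layer_spike_times, dtype=float)
    t_in = spikes.min() - 1.0                       # before the earliest spike
    # t_e: center of the busiest time bin (peak firing frequency)
    bins = np.arange(spikes.min(), spikes.max() + 2 * bin_width, bin_width)
    counts, edges = np.histogram(spikes, bins=bins)
    k = int(np.argmax(counts))
    t_e = 0.5 * (edges[k] + edges[k + 1])
    return {j: (t_e if j == label else t_in) for j in output_neurons}
```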
Compared with the prior art, the invention has the beneficial effects that:
1) The invention learns the spiking neural network effectively, can be widely applied to various pattern recognition tasks, and has good robustness; it generates an adaptive network structure from the data, fully exploits the brain-like character of spiking neural networks, and mines their potential for artificial intelligence as far as possible.
2) The invention introduces the LTP and LTD mechanisms and other brain learning mechanisms, so memory can be generated effectively; the LTD and LTP mechanisms let the network perform precise structure learning, reducing the network scale without losing much performance, making learning more effective and operation more efficient.
3) The invention lets spiking neural networks solve common pattern recognition problems such as picture classification, target detection, and speech recognition. For these problems, the data are converted into spike trains and the spiking network is then trained with this method, so that it generates a structure and weights adapted to the current task, improving performance on it.
Drawings
Fig. 1 is a structural diagram of the spiking neural network of the present invention.
Fig. 2 shows the relationship between the probabilities ρ1 and ρ2 mentioned in the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings and examples.
As shown in Fig. 1, the invention is a segmented progressive spiking neural network learning method for pattern recognition. The network is built from four layers: a coding layer, an input layer, a memory layer, and an output layer. Coding layer: processes the input samples and encodes them into a form the spiking network can process, i.e., spike coding. Input layer: converts the output of the coding layer into spike trains. Memory layer: learns with the synaptic weight adjustment and progressive synaptic structure adjustment algorithms augmented by the LTP and LTD mechanisms, and after learning holds a structure suited to the current task. Output layer: its synapses with the memory layer are established by a heuristic method, so the memory layer's output can be accurately used for decisions. Like human gradual learning, the method is divided into two stages. The first is the "fuzzy" learning stage: a portion of the samples is input, new synapses are allowed to grow, and the network generates a structure suited to the current task. The second is the "precise" learning stage: all samples are input, only synaptic weights are adjusted, and teacher signals guide the adjustment, so the network is precisely optimized for the current task and achieves good efficiency. After the memory layer has acquired its specific structure and weights, the last layer's synapses to the output layer are created with the heuristic method; the weights of the whole network are then tuned under the guidance of teacher signals for optimal performance, and finally the synaptic structure and synaptic weights are fixed, yielding a spiking neural network model applicable to practical pattern recognition problems. By generating memory effectively, the invention can be widely applied to pattern recognition tasks with good robustness, and the LTD and LTP mechanisms enable more targeted and precise structure learning, reducing network scale and complexity without losing much performance, making learning more effective and operation more efficient.
Referring to Figs. 1 and 2, and taking the pattern recognition problem of handwritten-picture recognition as an example, the invention comprises the following steps:
Step 1: establish the coding layer, which processes each handwritten picture and encodes it into a form the spiking neural network can process;
Step 1.1: select several two-dimensional static convolution kernels, apply convolution and pooling to each picture, normalize the processed samples to the interval [0, 255], and reshape the processed data to one dimension with length L;
Step 2: establish the input layer, which converts the output of the coding layer into spike trains;
Step 2.1: construct the input layer from L spiking neurons and L spike generators connected one to one, with each spike generator corresponding one to one with an output of the coding layer; for any spike generator i, the time at which it induces its spiking neuron to emit a spike satisfies T_spike-i = Output_i, where T_spike-i is the induction time of spike generator i and Output_i is the coding-layer output at the corresponding position.
Step 3: establish the memory layer, whose spiking neurons are used to build memory;
Step 3.1: construct the memory layer from a 3-layer spiking neural network with L spiking neurons per layer; in the initial stage, every spiking neuron except those of the last layer establishes synapses to several spiking neurons of the following layer, with random synaptic weights;
Step 4: the "infant" learning stage. Input a portion of the original pictures, introduce the long-term depression (LTD) and long-term potentiation (LTP) mechanisms, and use the synaptic weight adjustment algorithm and the progressive synaptic structure adjustment algorithm so that the network generates a structure suited to the handwritten-picture recognition task; this structure can be regarded as the network's preliminary memory of the task. The process resembles human learning: the number of human brain synapses generally peaks in early childhood, because the brain has just begun to learn about the world; synaptic plasticity is then very strong and forms very rough memories that are not necessarily useful later, but they build an initial understanding of the world;
Step 4.1: each memory-layer spiking neuron is given a discriminator: the neuron enters the LTD state for the next window t_W if it fired fewer than θ_LTD times in the past t_W, and enters the LTP state for the next t_W if it fired more than θ_LTP times in the past t_W. A neuron entering the LTP state has its synaptic weights set to ω_LTP, large enough to activate the neurons it connects to through synapses (its postsynaptic neurons); a neuron entering the LTD state has its synaptic weights set to ω_LTD, small enough that its postsynaptic neurons are difficult to activate, and such a neuron must exert a synapse-suppressing influence when it exits the LTD state, under which its synapses may be pruned or have their weights weakened. The spiking neurons in the network switch continuously among the LTD, LTP, non-LTD, and non-LTP states; the LTP mechanism can be viewed as strengthening the connections among the neurons associated with a given pattern in the current task, while the LTD mechanism suppresses connections among neurons caused by noise;
Step 4.1.1: if a neuron is about to exit the LTD state, each of its synapses has its weight decayed by a factor α (α < 1) with probability ρ1, remains unchanged with probability ρ2, and is pruned with probability (1 − ρ1 − ρ2), where ρ1 and ρ2 are functions of LTD_count governed by the constants ρ, κ, and ψ that control the probability magnitudes, and LTD_count is the number of consecutive times this neuron has entered the LTD state. In this example ρ = 0.1, κ = 0.05, and ψ = 0.25, and the resulting relationship between ρ2 and ρ1 is shown in Fig. 2;
Step 4.2: select and input a small portion of the pictures. The progressive synaptic structure adjustment algorithm is to establish a new synapse between two neurons if they have no synaptic connection but repeatedly fire within Δt of each other. The synaptic weight adjustment method uses the STDP principle, which is consistent with the Hebbian rule in the brain and is considered one of the key rules of brain synaptic plasticity. As the spiking neurons switch continuously among the LTD, LTP, non-LTD, and non-LTP states, the network learns through the progressive synaptic structure adjustment algorithm and the synaptic weight adjustment algorithm, so a compact structure forms among the neurons associated with each pattern in the current task, and the overall structure becomes well suited to it.
Step 4.2.1: the specific formula of the synaptic weight adjustment method using the STDP rule is given as Formula 1,
where w_max is the upper limit of the weight, λ_STDP is the learning rate, μ+ and μ− are the weight-dependence coefficients for weight increase and decay respectively, α_STDP is the asymmetry coefficient, K+ and K− are the time convergence coefficients for weight decay and increase respectively, e is the natural constant, τ− and τ+ are the time scaling coefficients for weight increase and decay respectively, and w′ and w are the weights after and before the update;
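Since Formula 1 itself is not reproduced here, the sketch below implements a standard soft-bound STDP update assembled from the listed symbols (w_max, λ_STDP, μ+, μ−, α_STDP, K+, K−, τ+, τ−); the exact placement of the coefficients is an assumption and may differ from the patent's formula.

```python
import math

def stdp_update(w, dt, w_max, lam, mu_plus, mu_minus,
                alpha, k_plus, k_minus, tau_plus, tau_minus):
    """dt = t_post - t_pre. Potentiate when the presynaptic spike precedes
    the postsynaptic spike (dt > 0), depress otherwise; soft bounds keep
    the weight in [0, w_max]."""
    if dt > 0:   # pre before post: potentiation
        dw = lam * (1.0 - w / w_max) ** mu_plus * k_plus * math.exp(-dt / tau_plus)
    else:        # post before pre: depression, scaled by the asymmetry factor
        dw = -lam * alpha * (w / w_max) ** mu_minus * k_minus * math.exp(dt / tau_minus)
    return min(max(w + dw, 0.0), w_max)   # w' (updated weight)
```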
Step 5: establish the output layer from 10 neurons representing the digit categories 0 to 9, and establish synapses between the memory layer and the output layer by a heuristic method, so that the memory of the memory layer can be accurately extracted and decisions can be made;
Step 5.1: group the pictures used in step 4 by category, e.g., all pictures of category 0 in one group, all pictures of category 1 in another, and input the groups one by one into the coding layer;
Step 5.2: for the inputs of one group, whenever a neuron in the last layer of the memory layer fires more than θ_OUT times, a synapse is established between that neuron and the output-layer neuron corresponding to the current group. This resembles the brain's decision process: in general, a brain decision involves only the brain regions activated by the current external stimulus and is unrelated to other, unactivated regions, so the output judgment is made from the currently most relevant activated memory;
Step 6: the "precise" learning stage. Input all pictures, adjust only the synaptic weights, and guide the adjustment with teacher signals. As in human learning, after the growth of the "infant" period the brain prunes synapses to produce accurate memory and thereby complete the task effectively;
Step 6.1: each neuron of the output layer is given an exciter that activates the neuron upon receiving a teacher signal; all samples are then shuffled and input into the network;
Step 6.2: for a given sample, say a handwritten picture of the digit 0, the corresponding output-layer neuron is neuron No. 0, and the teacher signal of neuron No. 0 is delivered at time t_e. The teacher signals of the remaining output-layer neurons are delivered at t_in, where t_in is earlier than the earliest firing time in the last layer of the memory layer and t_e is the moment at which the firing frequency of that last layer is highest. Under this guidance, the synaptic weight adjustment algorithm of step 4 increases the synaptic weights between the memory layer and output neuron No. 0, and decreases the synaptic weights from memory-layer neurons to output neurons other than No. 0 that fire before t_e.
Step 6.3: the synaptic weight adjustment algorithm is the same as that of step 4.2.1;
Step 6.4: after repeating steps 6.1 and 6.2 several times, learning is complete, yielding a spiking neural network usable for the handwritten-picture recognition task; the neurons in the network, together with the synaptic structure and synaptic weights between them, can be regarded as the memory of the handwritten-picture recognition task.
Step 7: fix the synaptic weights and synaptic structure of the trained spiking neural network, input a handwritten picture into the network, and take the earliest-firing output-layer neuron as the network's recognition result. The method recognizes handwritten pictures effectively: its recognition accuracy, at 98%, is not lower than that of existing deep learning methods, while the parameter count and energy consumption are far smaller than in existing work; compared with a neural network of the same scale, the number of synapses is reduced by about 80%, greatly reducing the parameter count and the energy cost of operation.

Claims (6)

1. The method for pattern recognition based on the segmented progressive spiking neural network is characterized by comprising the following steps:
Step 1: acquire samples of the pattern recognition task, establish the coding layer of the spiking neural network, process the samples in the coding layer, and encode them into a form the spiking neural network can process, wherein the samples are pictures or audio and the pattern recognition task is correspondingly image recognition or audio recognition;
Step 2: establish the input layer of the spiking neural network, which converts the output of the coding layer into spike trains;
Step 3: establish the memory layer of the spiking neural network, whose spiking neurons are used to build memory;
step 4: carry out the learning stage of the infant
Inputting a part of samples of the pattern recognition task, introducing a brain long-term inhibition and long-term enhancement mechanism, and using a synaptic weight adjustment algorithm and a progressive synaptic structure adjustment algorithm to enable the impulse neural network to generate a structure suitable for the current task; comprising the following steps:
Step 4.1: setting a discriminator for each memory layer impulse neuron, if the impulse neuron fires less than theta LTD times in the past t W time, the impulse neuron enters a brain long-term inhibition state in the next t W time, if the impulse neuron fires more than theta LTP times in the past t W time, the impulse neuron enters a long-term enhancement state in the next t W time; neurons entering a long-term enhancement state with a synaptic weight set to ω LTPLTP large enough to enable their activation through a synaptic connection, referred to as a post-synaptic neuron; a pulse neuron entering a long-term brain suppression state, the synaptic weight of which is set to omega LTDLTD small enough to make it difficult to activate its postsynaptic neurons, and a pulse neuron entering a long-term brain suppression state needs to exert an effect of suppressing synapses when exiting the long-term brain suppression state, which may be cut and weakened by the weight; the method for exerting the influence of inhibiting synapses after exiting from the long-term inhibition state of the brain comprises the following steps:
if the impulse neuron is about to exit the brain long-term inhibition state, the probability occurrence weight of each synapse of the impulse neuron is attenuated by alpha times, the probability of alpha <1, rho 2 does not occur, and the probability occurrence of (1-rho 2) is cut off; wherein the method comprises the steps of Ρ, κ and ψ are constants controlling the probability size, LTD_count is the number of times the impulse neuron is currently continuously put into the long-term inhibition state of the brain
Step 4.2: inputting a sample of a portion of the pattern recognition task, the progressive synapse structure adjustment algorithm being: if there is no synaptic connection between two impulse neurons and it always fires within Δt, a new synapse is established between the two; the synaptic weight adjusting algorithm is to adjust the synaptic weight by using a pulse time sequence dependent plasticity principle;
Step 5: establish the output layer of the spiking neural network and build synapses between the memory layer and the output layer by a heuristic method, so that the memory of the memory layer can be accurately extracted and decisions can be made;
Step 6: carry out the "precise" learning stage:
input all samples of the pattern recognition task, adjust only the synaptic weights, guide the adjustment with teacher signals, and obtain, once learning is complete, a spiking neural network usable for the current pattern recognition task;
Step 7: fix the synaptic weights and synaptic structure of the trained spiking neural network, then input the data to be recognized to obtain the pattern recognition result.
2. The method for pattern recognition based on the segmented progressive spiking neural network according to claim 1, wherein in step 1 several static convolution kernels are selected, convolution and pooling are applied to each sample, and the processed samples are normalized to the interval [0, T]; after processing, each sample consists of L values.
3. The method for pattern recognition based on the segmented progressive spiking neural network according to claim 2, wherein in step 2 the input layer is constructed from as many spiking neurons as the sample length and as many spike generators as the sample length; the spiking neurons convert the output of the coding layer into spike trains and the spike generators induce the spiking neurons to emit spikes; the spiking neurons are connected to the spike generators one to one, each spike generator corresponds one to one with an output of the coding layer, and the time at which a spiking neuron is induced to emit a spike satisfies T_spike-i = Output_i, where T_spike-i is the induction time of spike generator i and Output_i is the coding-layer output at the corresponding position.
4. The method for pattern recognition based on the segmented progressive spiking neural network according to claim 3, wherein in step 3 the memory layer is constructed from m layers of spiking neurons with n spiking neurons per layer, n and m being arbitrary positive integers; in the network initialization stage, every spiking neuron except those of the last layer randomly selects x spiking neurons of the next layer and establishes synapses to them, with 0 < x ≤ n; the n·m neurons, together with the synaptic structures and synaptic weights between them, form the basis of memory, and precise memory is formed by learning in the subsequent steps.
5. The method for pattern recognition based on the segmented progressive spiking neural network according to claim 4, wherein step 5 establishes the synapses between the memory layer and the output layer by a heuristic method, comprising:
Step 5.1: using the samples used in step 4, place samples with the same label in the same group and input the groups one by one into the coding layer;
Step 5.2: for the inputs of one group, whenever a spiking neuron in the last layer of the memory layer fires more than θ_OUT times, establish a synapse between that neuron and the output-layer spiking neuron corresponding to the current group.
6. The method for pattern recognition based on the segmented progressive spiking neural network according to claim 5, wherein step 6 comprises:
Step 6.1: set an exciter for each spiking neuron of the output layer, which activates the neuron upon receiving a teacher signal; then shuffle all samples and input them into the spiking neural network;
Step 6.2: for a given sample, let the output-layer spiking neuron corresponding to its label be neuron j; the teacher signal of neuron j is delivered at time t_e and the teacher signals of the remaining output-layer neurons at time t_in, where t_in is earlier than the earliest firing time in the last layer of the memory layer and t_e is the moment at which the firing frequency of that last layer is highest; under this guidance, the synaptic weight adjustment algorithm of step 4 increases the synaptic weights between the memory layer and output neuron j and decreases the synaptic weights from memory-layer neurons to output neurons other than j that fire before t_e;
Step 6.3: after repeating steps 6.1 and 6.2 several times, learning is complete, and the spiking neurons of the network, together with the synaptic structure and synaptic weights between them, are regarded as constituting the memory.
CN202111436510.XA 2021-11-29 2021-11-29 Method for pattern recognition based on sectional progressive pulse neural network Active CN114220089B (en)

Priority Applications (1)

Application Number: CN202111436510.XA — Priority/Filing Date: 2021-11-29 — Title: Method for pattern recognition based on sectional progressive pulse neural network (granted as CN114220089B)


Publications (2)

Publication Number — Publication Date
CN114220089A (en) — 2022-03-22
CN114220089B (en) — 2024-06-14

Family

ID=80698860

Family Applications (1)

Application Number: CN202111436510.XA (Active, granted as CN114220089B) — Title: Method for pattern recognition based on sectional progressive pulse neural network

Country Status (1)

CN (1): CN114220089B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108985447A * 2018-06-15 2018-12-11 Huazhong University of Science and Technology — A hardware spiking neural network system
CN110210563A * 2019-06-04 2019-09-06 Peking University — Spatiotemporal information learning and recognition method for pattern spike data based on Spike-cube SNN

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5666518A (en) * 1995-06-26 1997-09-09 The United States Of America As Represented By The Secretary Of The Air Force Pattern recognition by simulated neural-like networks
WO2003067514A2 (en) * 2002-02-05 2003-08-14 Siemens Aktiengesellschaft Method for classifying the traffic dynamism of a network communication using a network that contains pulsed neurons
FR2977351B1 (en) * 2011-06-30 2013-07-19 Commissariat Energie Atomique NON-SUPERVISING LEARNING METHOD IN ARTIFICIAL NEURON NETWORK BASED ON MEMORY NANO-DEVICES AND ARTIFICIAL NEURON NETWORK USING THE METHOD
FR2977350B1 (en) * 2011-06-30 2013-07-19 Commissariat Energie Atomique NETWORK OF ARTIFICIAL NEURONS BASED ON COMPLEMENTARY MEMRISTIVE DEVICES
FR2996029B1 (en) * 2012-09-26 2018-02-16 Commissariat A L'energie Atomique Et Aux Energies Alternatives NEUROMORPHIC SYSTEM UTILIZING THE INTRINSIC CHARACTERISTICS OF MEMORY CELLS
US11934946B2 (en) * 2019-08-01 2024-03-19 International Business Machines Corporation Learning and recall in spiking neural networks
CN111639754A * 2020-06-05 2020-09-08 Sichuan University — Neural network construction, training and recognition method and system, and storage medium
CN112232494A * 2020-11-10 2021-01-15 Beijing Institute of Technology — Method for constructing a spiking neural network for feature extraction based on frequency induction
CN112232440B * 2020-11-10 2022-11-11 Beijing Institute of Technology — Method for realizing information memory and discrimination in a spiking neural network using specific neuron groups
CN112288078B * 2020-11-10 2023-05-26 Beijing Institute of Technology — Self-learning, small-sample learning and transfer learning method and system based on spiking neural networks


Also Published As

Publication number Publication date
CN114220089A (en) 2022-03-22


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant