CN115688887A - High-reusability low-power-consumption pulse neural network model - Google Patents

High-reusability low-power-consumption pulse neural network model

Info

Publication number
CN115688887A
Authority
CN
China
Prior art keywords
pulse
layer
convolution
data
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211363463.5A
Other languages
Chinese (zh)
Inventor
张国和
王冉
刘佳
万贤杰
俞宙
丁莎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 24 Research Institute
Xian Jiaotong University
Original Assignee
CETC 24 Research Institute
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 24 Research Institute, Xian Jiaotong University
Priority to CN202211363463.5A
Publication of CN115688887A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a pulse neural network model with high reusability and low power consumption. The network comprises an encoding layer, a pulse convolution layer, a pulse pooling layer, a fully-connected layer and a pulse output layer: picture data are converted into pulse signals through a synapse model; the pulse convolution layer extracts multi-dimensional information from the signals; the pulse pooling layer reduces the dimensionality of the data and removes redundant information; and the fully-connected layer performs feature-space mapping. The invention introduces the concept of time and transmits signals through the synaptic connections between neurons, which achieves low energy consumption; by adopting network-layer multiplexing, convolution multiplexing and similar methods, it saves computing resources and improves computing efficiency.

Description

High-reusability low-power-consumption pulse neural network model
Technical Field
The invention relates to the field of integrated circuits, in particular to a pulse neural network model with high reusability and low power consumption.
Background
The human brain is a multi-layered structure containing billions of neuronal cells interconnected by synapses, forming a neural network. Like neurons in the brain, neurons in a spiking neural network slowly accumulate the stimulation they receive and release a pulse signal once a certain level is reached, thereby transmitting information. In terms of simulating the human brain, spiking neural networks go a step further than conventional artificial neural networks: they incorporate the notion of time into the model and introduce synapses to model the connections between neurons. Compared with the first two generations of neural networks, spiking neural networks offer higher biological plausibility, lower power consumption and higher parallelism, and have good application prospects in image and speech recognition.
In the related art, spiking neural networks are generally built on deep convolutional neural networks, which consume large amounts of computational resources and have poor anti-interference capability. A spiking neural network that can multiplex the computational units of a network layer and that offers high parallelism within the same layer is therefore needed.
Disclosure of Invention
The present invention is directed to solving the above problems in the prior art. Its object is to provide a highly reusable, low-power-consumption pulse neural network model that greatly reduces the consumption of computing resources by multiplexing the computing units of the network layers, and further reduces computation time by operating in parallel within the same layer.
In order to achieve the above object, the invention adopts the following technical scheme:
A high-reusability low-power-consumption pulse neural network model is disclosed. The network comprises an encoding layer, a pulse convolution layer, a pulse pooling layer, a fully-connected layer and a pulse output layer, and the network implementation method comprises the following steps:
step one, converting picture data into pulse signals through a synapse model;
step two, extracting multi-dimensional information from the signals with the pulse convolution layer;
step three, reducing the dimensionality of the data and removing redundant information with the pulse pooling layer;
step four, performing feature-space mapping with the fully-connected layer.
In step one, the pulse coding model converts the picture data into potential information over T time steps through the potential decay of the synapse, so that the subsequent network layers can process the data conveniently; because this is the input layer, there is no stimulation from a preceding layer and the potential decreases gradually at each time step.
The synapse coding model simulates the potential-update rule of biological synapses: the neuron membrane potential decays over time, and the membrane reacts immediately when a pulse is input. The specific update rule is
τ_m · dV(t)/dt = -(V(t) - V_rest) + R·I(t)
where V(t) is the membrane potential, V_rest the resting potential, τ_m = RC the membrane time constant and I(t) the input pulse current.
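As an illustration only, the following Python sketch emulates one plausible reading of this potential-decay encoding in software: the pixel intensity is loaded as the initial membrane potential, and a pulse is emitted at every step where the decayed potential is still above a threshold. The parameter values (T, tau_m, v_th) and this firing rule are assumptions, not taken from the patent.

```python
import numpy as np

def decay_encode(image, T=8, tau_m=2.0, v_rest=0.0, v_th=0.5):
    """Potential-decay encoding sketch: the pixel intensity is taken as the
    initial membrane potential, which decays toward the resting potential for
    T time steps (no further stimulation reaches the input layer); a pulse (1)
    is emitted at every step where the decayed potential still exceeds the
    threshold. T, tau_m and v_th are illustrative values."""
    v = image.astype(np.float32)            # initial potential = pixel value
    decay = np.exp(-1.0 / tau_m)            # per-step decay factor, tau_m = RC
    spikes = np.zeros((T,) + image.shape, dtype=np.uint8)
    for t in range(T):
        v = v_rest + (v - v_rest) * decay   # exponential decay toward v_rest
        spikes[t] = (v > v_th).astype(np.uint8)
    return spikes

img = np.random.rand(28, 28)                # normalized grayscale picture
print(decay_encode(img).shape)              # (8, 28, 28) pulse frames
```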
In step two, several pulse convolution layers are provided in the whole network. Their convolution kernels have the same, customizable size, which makes multiplexing convenient during data processing. The synapse model uses a LIF model: the convolved result is transmitted to the synapse model, and the result at each time step is compared with a threshold; if it is greater than the threshold, pulse 1 is passed to the next layer, and if it is less than the threshold, the membrane potential decays normally.
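A minimal software sketch of this LIF update, assuming an exponential leak factor derived from tau_m and a reset-to-rest on firing; the parameter values and the leak-then-integrate ordering are illustrative assumptions.

```python
import numpy as np

def lif_step(v, conv_in, tau_m=2.0, v_rest=0.0, v_th=1.0):
    """One LIF time step (sketch): the membrane leaks toward the resting
    potential, the convolved input is integrated, neurons above the threshold
    emit pulse 1 and are reset to rest, the others simply keep decaying."""
    decay = np.exp(-1.0 / tau_m)                 # leak factor from tau_m
    v = v_rest + (v - v_rest) * decay + conv_in  # decay, then integrate the input
    spikes = (v > v_th).astype(np.uint8)         # compare with the threshold
    v = np.where(spikes == 1, v_rest, v)         # fired neurons reset to rest
    return v, spikes

# Example with a random frame standing in for the convolution output.
v = np.zeros((26, 26), dtype=np.float32)
v, out = lif_step(v, np.random.rand(26, 26).astype(np.float32))
print(out.sum(), "pulses emitted this step")
```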
In a specific implementation, the current stage of the network is determined through a status signal line, the key data for the convolution-kernel size of the corresponding convolution layer are configured, and pulse convolution processing is then carried out.
The pulse convolution operation takes the multiply-add operation as its basic unit: each time, a block of data the same size as the convolution kernel is read from memory, as many data elements as the kernel holds are processed in parallel, and the result is transmitted to the synapse model once the convolution is finished.
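The sketch below emulates this multiply-add unit in Python; the parallel processing of the kernel-sized data block is only imitated with a vectorized window product, and the loop order over output positions is an assumption.

```python
import numpy as np

def pulse_conv2d(spike_frame, kernel):
    """Pulse convolution sketch: for every output position a data block of the
    same size as the kernel is read, and one multiply-add per kernel element
    (done in parallel in hardware, vectorized here) produces the sum handed on
    to the synapse (LIF) model."""
    kh, kw = kernel.shape
    h, w = spike_frame.shape
    out = np.zeros((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = spike_frame[i:i + kh, j:j + kw]  # kernel-sized read
            out[i, j] = np.sum(window * kernel)       # kh*kw multiply-adds
    return out

spikes = np.random.randint(0, 2, size=(8, 8)).astype(np.float32)
kernel = np.random.rand(3, 3).astype(np.float32)
print(pulse_conv2d(spikes, kernel).shape)  # (6, 6)
```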
The input of the pulse pooling layer is the pulse output of the pulse convolution layer, whose value is 1 or 0, so an OR-gate circuit is adopted when the pooling layer is built, minimizing resource consumption; the pooling kernel size can be customized to suit different application scenarios.
In step four, the fully-connected layer multiplexes the pulse convolution processing unit: a block of data the size of the convolution kernel is read each time and fed through the reused kernel processing unit, as many values as there are classes are computed in the last layer, and classification of the image data is thereby completed.
Different scene tasks have different levels of complexity, so pulse neural networks of different depths need to be built.
Compared with the prior art, the invention has the following beneficial effects: compared with a common convolutional neural network, it introduces the concept of time and transmits signals through the synaptic connections between neurons, achieving low energy consumption; compared with a common pulse neural network, it adopts network-layer multiplexing, convolution multiplexing and similar methods, further saving computing resources and improving computational efficiency.
Drawings
FIG. 1 is a schematic diagram of the overall structure of a spiking neural network according to the present invention;
FIG. 2 is a schematic diagram of the pulse maximum pooling operation of the present invention;
FIG. 3 is a schematic diagram of the fully-connected layer multiplexing the convolution layer according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Referring to fig. 1, the invention is applied to low-power-consumption processing of picture classification problems and comprises the following steps: step one, converting picture data into pulse signals through a synapse model; step two, extracting multi-dimensional information from the signals with the pulse convolution layer; step three, reducing the dimensionality of the data and removing redundant information with the pulse pooling layer; step four, performing feature-space mapping with the fully-connected layer and finally outputting the classification result.
Referring to fig. 2, since the pulse signal takes only the values 0 and 1, the max-pooling operation of the present invention can be performed with an OR gate in a digital circuit: in the example of 2-by-2 max-pooling, the pooling result is obtained by OR-ing the 4 pulse signals through a 4-input OR gate.
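A software sketch of this OR-gate pooling, assuming even feature-map dimensions; in hardware each output pixel is one 4-input OR gate, which is emulated here with a bitwise OR.

```python
import numpy as np

def pulse_max_pool2x2(spike_frame):
    """2x2 pulse max-pooling sketch: because the inputs are 0/1 pulses, the
    maximum of each 2x2 window equals the logical OR of its four values; in
    hardware that is one 4-input OR gate per output pixel."""
    h, w = spike_frame.shape
    assert h % 2 == 0 and w % 2 == 0, "assumes even feature-map dimensions"
    a = spike_frame[0::2, 0::2]
    b = spike_frame[0::2, 1::2]
    c = spike_frame[1::2, 0::2]
    d = spike_frame[1::2, 1::2]
    return a | b | c | d  # bitwise OR of the four pulses in every window

frame = np.random.randint(0, 2, size=(4, 4), dtype=np.uint8)
print(pulse_max_pool2x2(frame))
```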
Referring to fig. 3, which shows how the fully-connected layer multiplexes the convolution layer in the present invention: diagram (a) represents the distribution of the fully-connected-layer weights in memory, and diagram (b) represents the processing of the convolution module, illustrated here with a 3 × 3 convolution, where PE denotes the most basic processing element of the convolution module. The operation is to read the weight information from memory, transmit it sequentially to the PE units, and multiplex the convolution-layer operation to perform the fully-connected-layer processing.
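The following sketch emulates this weight-streaming idea in Python, assuming 9 PEs (a 3 × 3 kernel) and a hypothetical weight layout of one row per output neuron; it illustrates the multiplexing scheme rather than the patent's hardware.

```python
import numpy as np

def fc_via_conv_pes(spikes_flat, weights, pe_count=9):
    """Fully-connected layer sketch multiplexing the convolution PEs: the
    weight row of each output neuron is streamed from memory in chunks of
    pe_count (here 9, the 3x3 kernel size), each chunk is multiply-added
    against the matching input chunk by the PE array, and the partial sums
    are accumulated into that neuron's output."""
    n_out, n_in = weights.shape
    out = np.zeros(n_out, dtype=np.float32)
    for o in range(n_out):
        acc = 0.0
        for start in range(0, n_in, pe_count):
            w_chunk = weights[o, start:start + pe_count]   # weight block from memory
            x_chunk = spikes_flat[start:start + pe_count]  # matching input pulses
            acc += float(np.dot(w_chunk, x_chunk))         # pe_count MACs at once
        out[o] = acc
    return out

x = np.random.randint(0, 2, size=36).astype(np.float32)
w = np.random.rand(10, 36).astype(np.float32)   # 10 classes, hypothetical layout
print(fc_via_conv_pes(x, w).shape)              # (10,)
```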
The pulse neural network model structurally comprises an encoding layer, a pulse convolution layer, a pulse pooling layer, a fully-connected layer and a pulse output layer. The encoding layer uses potential-decay encoding: over T time steps, and with no new pulse input, the neuron membrane potential decays over time until it reaches the resting potential.
C · dV(t)/dt = -(V(t) - V_rest)/R
Introducing the time constant τ_m = RC, then:
τ_m · dV(t)/dt = -(V(t) - V_rest), whose solution is V(t) = V_rest + (V(0) - V_rest) · e^(-t/τ_m)
after coding, the coded signal enters a pulse convolution layer, in the layer, because whether the state of the neuron emits two states or not in each T time step, the state of the neuron can stimulate the neuron of the next layer, and meanwhile, the emitted neuron membrane voltage can reset to the rest potential.
a_j(t) = (ε ∗ s_j)(t),  η_i(t) = (ν ∗ s_i)(t)
Wherein e (-) and v (-) represent response of neuron to impulse and reset function respectively, for neuron of m layer, if impulse is issued, membrane potential is reset immediately and impulse signal is transferred to m +1 layer; if no pulses are delivered, the membrane potential decays as the time step increases.
After the pulse convolution is finished, its output is fed into a pulse pooling layer; since the inputs are 0 or 1 signals, a 2-by-2 max-pooling operation can be performed with a 4-input OR gate.
To further extract multi-dimensional information from the signal, the pulse convolution layer is multiplexed with the same convolution kernel size, which further saves computing resources and improves the computation rate.
After the second pulse convolution operation, the output is connected to a second pulse pooling layer, which has the same design as the first pulse pooling layer so that the network layer is multiplexed, further reducing the dimensionality of the data and removing redundant information.
After the pulse pooling operation, the fully-connected layers follow: weight information is read from memory and written into the pulse convolution kernel, and the kernel is multiplexed to complete the operations of the multiple fully-connected layers; the number of neurons in the last fully-connected layer is set to the number of classes.
After the fully-connected operation is completed, one value per class is obtained; these values are fed into a bank of comparators, and the largest value, which gives the classification result, is selected.
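A minimal sketch of this output stage: the comparator bank is emulated by a sequential scan that keeps the larger value of each comparison, i.e. an argmax over the class values.

```python
import numpy as np

def classify(fc_out):
    """Output-stage sketch: one value per class arrives from the fully-connected
    layer; a bank of comparators is emulated by keeping the larger value of each
    comparison, leaving the index of the maximum as the classification result."""
    best_idx, best_val = 0, fc_out[0]
    for i, v in enumerate(fc_out[1:], start=1):
        if v > best_val:               # one comparator decision
            best_idx, best_val = i, v
    return best_idx

print(classify(np.array([0.1, 2.3, 0.7])))  # -> 1
```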
In a specific implementation, the weight data and network parameters are stored in an external memory and are transmitted to the controller and the individual modules through DMA. Controller signals govern the data flow, module selection and internal data processing; picture information is fed into the pulse neural network by external equipment, and classification information is finally output, thereby achieving the classification.

Claims (9)

1. A high-reusability low-power-consumption pulse neural network model, wherein the network comprises an encoding layer, a pulse convolution layer, a pulse pooling layer, a fully-connected layer and a pulse output layer, characterized in that the network implementation method comprises the following steps:
step one, converting picture data into pulse signals through a synapse model;
step two, extracting multi-dimensional information from the signals with the pulse convolution layer;
step three, reducing the dimensionality of the data and removing redundant information with the pulse pooling layer;
step four, performing feature-space mapping with the fully-connected layer.
2. The high-reusability low-power-consumption pulse neural network model according to claim 1, wherein: in step one, the pulse coding model converts the picture data into potential information over T time steps through the potential decay of the synapse, so that the subsequent network layers can process the data conveniently; because this is the input layer, there is no stimulation from a preceding layer and the potential decreases gradually at each time step.
3. The high-reusability low-power-consumption pulse neural network model according to claim 2, wherein: the synapse coding model simulates the potential-update rule of biological synapses, the neuron membrane potential decays over time, and the neuron reacts immediately when a pulse is input; the specific update rule is
τ_m · dV(t)/dt = -(V(t) - V_rest) + R·I(t)
4. The high-reusability low-power-consumption pulse neural network model according to claim 1, wherein: in step two, several pulse convolution layers are provided in the whole network, their convolution kernels have the same, customizable size so that multiplexing is convenient during data processing, and the synapse model uses a LIF model; the convolved result is transmitted to the synapse model, and the result at each time step is compared with a threshold: if it is greater than the threshold, pulse 1 is passed to the next layer; if it is less than the threshold, the membrane potential decays normally.
5. The high-reusability low-power-consumption pulse neural network model according to claim 4, wherein: in the specific implementation, the current stage of the network is determined through a status signal line, the key data for the convolution-kernel size of the corresponding convolution layer are configured, and pulse convolution processing is then carried out.
6. The model of claim 4, wherein the pulse convolution operation takes the multiply-add operation as its basic unit, reads a block of data the same size as the convolution kernel from memory each time, processes as many data elements as the kernel holds in parallel, and transmits the result to the synapse model after the convolution is completed.
7. The high-reusability low-power-consumption pulse neural network model according to claim 1, wherein: the input of the pulse pooling layer is the pulse output of the pulse convolution layer, whose value is 1 or 0, and an OR-gate circuit is adopted when the pooling layer is built so as to minimize resource consumption; furthermore, the pooling kernel size can be customized to suit different application scenarios.
8. The high-reusability low-power-consumption pulse neural network model according to claim 1, wherein: in step four, the fully-connected layer multiplexes the pulse convolution processing unit, a block of data the size of the convolution kernel is read each time through the reused kernel processing unit, as many values as there are classes are computed in the last layer, and classification of the image data is thereby completed.
9. The high-reusability low-power-consumption pulse neural network model according to claim 1, wherein: different scene tasks have different levels of complexity, and pulse neural networks of different depths need to be built.
CN202211363463.5A 2022-11-02 2022-11-02 High-reusability low-power-consumption pulse neural network model Pending CN115688887A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211363463.5A CN115688887A (en) 2022-11-02 2022-11-02 High-reusability low-power-consumption pulse neural network model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211363463.5A CN115688887A (en) 2022-11-02 2022-11-02 High-reusability low-power-consumption pulse neural network model

Publications (1)

Publication Number Publication Date
CN115688887A (en) 2023-02-03

Family

ID=85048276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211363463.5A Pending CN115688887A (en) 2022-11-02 2022-11-02 High-reusability low-power-consumption pulse neural network model

Country Status (1)

Country Link
CN (1) CN115688887A (en)

Similar Documents

Publication Publication Date Title
CN111417963B (en) Improved spiking neural network
CN107092959B (en) Pulse neural network model construction method based on STDP unsupervised learning algorithm
CN109871940B (en) Multi-layer training algorithm of impulse neural network
CN110659666B (en) Image classification method of multilayer pulse neural network based on interaction
CN111858989A (en) Image classification method of pulse convolution neural network based on attention mechanism
CN110222760B (en) Quick image processing method based on winograd algorithm
CN111639754A (en) Neural network construction, training and recognition method and system, and storage medium
CN111340194B (en) Pulse convolution neural network neural morphology hardware and image identification method thereof
CN108304912B (en) System and method for realizing pulse neural network supervised learning by using inhibition signal
CN110490659B (en) GAN-based user load curve generation method
CN114595874A (en) Ultra-short-term power load prediction method based on dynamic neural network
CN107609634A (en) A kind of convolutional neural networks training method based on the very fast study of enhancing
CN112149815A (en) Population clustering and population routing method for large-scale brain-like computing network
CN111291861A (en) Input pulse coding method applied to pulse neural network
CN113962371B (en) Image identification method and system based on brain-like computing platform
CN112232440A (en) Method for realizing information memory and distinction of impulse neural network by using specific neuron groups
CN115346096A (en) Pulse neural network model constructed based on memristor
US10198688B2 (en) Transform for a neurosynaptic core circuit
CN115688887A (en) High-reusability low-power-consumption pulse neural network model
CN115546556A (en) Training method of pulse neural network for image classification
CN115223243A (en) Gesture recognition system and method
CN111667064B (en) Hybrid neural network based on photoelectric computing unit and operation method thereof
CN113269702A (en) Low-exposure vein image enhancement method based on cross-scale feature fusion
CN113989911A (en) Real environment facial expression recognition method based on three-dimensional face feature reconstruction and image deep learning
CN114548290A (en) Synaptic convolutional impulse neural network for event stream classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination