CN115936070A - Low-delay low-power-consumption pulse neural network conversion method - Google Patents

Low-delay low-power-consumption pulse neural network conversion method

Info

Publication number
CN115936070A
CN115936070A (application CN202211632517.3A)
Authority
CN
China
Prior art keywords
layer
value
neural network
pulse
neuron
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211632517.3A
Other languages
Chinese (zh)
Inventor
吴益飞
秦晓玲
郭健
李胜
陈庆伟
庄艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202211632517.3A priority Critical patent/CN115936070A/en
Publication of CN115936070A publication Critical patent/CN115936070A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Analysis (AREA)

Abstract

The application provides a low-delay, low-power-consumption spiking neural network conversion method, comprising the following steps: build a convolutional neural network suitable for conversion, in which the activation function is replaced by a finite soft-step activation function, and train the network weights by back propagation; normalize the weights using training-set images; construct a soft-reset IF neuron model; construct an event-driven max pooling layer; build, from the neuron model and the max pooling layer, a spiking neural network whose structure is consistent with the original ANN, reusing the weight parameters obtained from the original ANN training; and repeatedly encode the input, feed the amplitudes computed by the first convolutional layer into the replacement neurons, output a pulse sequence of the specified number of time steps, and input the pulse sequence into the network to obtain the classification result. The method improves the conversion accuracy of the network and reduces the accuracy loss of model conversion.

Description

Low-delay low-power-consumption pulse neural network conversion method
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a low-delay low-power-consumption pulse neural network conversion method.
Background
A spiking neural network (SNN) is a new generation of biologically inspired artificial neural network model, differing from conventional artificial neural networks mainly in how information is processed. Whereas conventional artificial neural networks compute with real values, such as signal amplitudes, spiking neural networks use the timing of signals, exchanging information by receiving and sending pulses through an "integrate-and-fire" mechanism, which makes them easy to deploy on dedicated hardware. Computation by the neurons in an SNN is event-driven, i.e., a neuron is active only when it receives or emits a pulse, which allows a substantial reduction in energy consumption.
Despite these advantages, spiking neural networks cannot be trained by direct gradient descent as ANNs are, owing to their discrete computation. Three main training approaches currently exist: methods based on STDP and its refinements, surrogate-gradient training, and conversion-based methods. Conversion-based methods can produce a usable spiking neural network relatively quickly, but the conversion incurs an accuracy loss, and several studies have proposed remedies. One shows that when the simulation duration T is sufficiently large, an ANN can be converted into an SNN by setting the threshold voltage equal to the maximum ReLU activation value of the corresponding ANN layer and copying that layer's weights and biases, and that the conversion error is minimized when the initial membrane potential of each layer's neurons is half the threshold voltage. Other studies have proposed the Burst neuron, which allows multiple pulses to be emitted within a single time step to raise the effective firing rate, together with lateral inhibition pooling (LIPooling) to address max pooling during the conversion.
However, for computational convenience these methods replace max pooling with average pooling and feed the floating-point result of the average-pooling operation to the following neurons as their membrane-potential increment, so that those neurons update their membrane potentials and decide whether to emit pulses; this overlooks the energy cost and accuracy loss that the process incurs in a hardware implementation.
Disclosure of Invention
The application provides a low-delay, low-power-consumption spiking neural network conversion method, which can be used to solve the technical problem of the high energy consumption of spiking neural networks during training.
The application provides a low-delay low-power-consumption pulse neural network conversion method, which comprises the following steps:
step 1, building a convolutional neural network suitable for conversion, in which the activation function is replaced by a finite soft-step activation function, and training the weights of the network with the back propagation algorithm;
step 2, normalizing the weights using the training-set images;
step 3, constructing a soft reset IF neuron model;
step 4, constructing a maximum pooling layer based on event driving;
step 5, using the neuron model and the max pooling layer to build a spiking neural network whose structure is consistent with the original ANN, and reusing the weight parameters obtained from the original ANN training;
and step 6, repeatedly encoding the input, feeding the amplitudes computed by the first convolutional layer into the replacement neurons, outputting a pulse sequence of the specified number of time steps, and inputting the pulse sequence into the network to obtain the classification result.
Optionally, the convolutional neural network comprises:
a plurality of convolutional layers, max pooling layers and fully-connected layers; all layer biases are 0; the feature-extraction part of the network is ordered as convolution, ReLU, pooling; the gradient of the bias is disabled during training; the activation function is replaced by a finite soft-step activation function, whose characteristic formulas are as follows:
(The four characteristic formulas of the finite soft-step activation function appear only as images, Figure BDA0004006350980000021 through Figure BDA0004006350980000024, in the original document.)
where $a_{in}$ and $a_{out}$ are the input and output of the activation function, respectively; $a_{limit}$ is the upper bound of the activation function's output; $K$ is the quantization coefficient; $w$ controls the climbing slope between adjacent steps and is adjusted through $n$: the larger $w$ is, the steeper the slope, the faster the rise and the higher the quantization precision, so $n$ is taken as large as possible without impairing training; $c$ is related to the quantization precision and varies with $a_{limit}$ during training; the $b_i$ are the inflection points of the curve, whose number is related to $K$.
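For illustration, a minimal PyTorch sketch of one plausible form of such a function is given below. Since the patent's formulas survive only as images, the concrete expression (a sum of $K$ shifted sigmoids with slope $w$ and inflection points $b_i$, saturating at $a_{limit}$) is an assumption consistent with the variable descriptions above, not the patented formula itself.

```python
import torch

def finite_soft_step(a_in, a_limit=1.0, K=32, w=100.0):
    # Assumed form: K soft steps of height a_limit/K, realized as a sum of
    # sigmoids whose inflection points b_i are spaced uniformly in (0, a_limit).
    # A larger w gives steeper steps (closer to hard quantization), while the
    # function stays differentiable, so the network remains trainable by back
    # propagation; the output is bounded above by a_limit.
    step = a_limit / K
    b = (torch.arange(K, dtype=a_in.dtype, device=a_in.device) + 0.5) * step
    return step * torch.sigmoid(w * (a_in.unsqueeze(-1) - b)).sum(dim=-1)
```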
Optionally, normalizing the weights by the training-set images comprises:
step 2-1, uniformly sampling a batch of data from every class of the training set and inputting the batch into the ANN;
step 2-2, recording the maximum activation value of each layer, layer by layer;
step 2-3, scaling the weights, using as the scaling coefficient the ratio of the maximum activation values of the two layers adjacent to the weight parameter, specifically:

$$\hat{w}^{l} = w^{l} \cdot \frac{\alpha^{l-1}}{\alpha^{l}}$$

where $w^{l}$ denotes the weight parameter of layer $l$ of the ANN, and $\alpha^{l}$ and $\alpha^{l-1}$ denote the maximum activation values of layer $l$ and layer $l-1$, respectively.
Optionally, the soft reset IF neuron model is as follows:
$$\theta_i^l(t) = \begin{cases} 1, & V_i^l(t) \ge V_{thresh} \\ 0, & \text{otherwise} \end{cases}$$

$$V_i^l(t) \leftarrow V_i^l(t) - V_{thresh}\,\theta_i^l(t)$$

where $V_{thresh}$ denotes the threshold voltage of the neuron, $t$ the current time instant, $T$ the total number of time steps of the network, $V_i^l(t)$ the membrane voltage of the $i$-th neuron in layer $l$ of the spiking neural network at time $t$, and $\theta_i^l(t)$ whether that neuron emits a pulse at time $t$ (1 if a pulse is emitted, 0 otherwise).
Optionally, constructing the event-driven-based maximum pooling layer includes:
step 4-1, initializing the event counters within the pooling window and the max-pooling threshold to 0;
step 4-2, inputting the event sequence and deciding from its values whether the event counters need updating: a counter is kept unchanged if its input is 0 and incremented if its input is 1;
and step 4-3, taking the maximum value among the counters whose current event is 1 as the pre-activation value and comparing it with the current threshold: if it exceeds the threshold, a pulse is emitted and the threshold is updated to the pre-activation value; otherwise no pulse is emitted and the threshold is kept unchanged.
Optionally, the spiking neural network consists of convolutional layers, pooling layers and fully-connected layers; the bias of each network layer is 0; the neurons of the convolutional and fully-connected layers use soft-reset IF neurons, and the pooling layers use the event-driven max pooling layer.
Optionally, repeatedly encoding the input, feeding the amplitudes computed by the first convolutional layer into the replacement neurons, and outputting a pulse sequence of the specified number of time steps comprises:
the input over T consecutive time steps is the same normalized image; the first convolutional layer performs the feature encoding extracted by its convolution kernels; the computation result of the convolutional layer passes through the soft-reset IF neurons, and the output pulse sequence is regarded as the feature-encoding sequence of the image.
The finite soft-step activation function designed in this application introduces the quantization error of the conversion into the network training stage in advance, which improves the conversion accuracy of the network; the event-driven max pooling layer uses fixed-point operations and binary outputs, which reduces device power consumption and speeds up computation; and the application uses the trained parameters to encode the main features extracted from the receptive field, which, compared with conventional Poisson coding, reduces the randomness of the encoding and further reduces the accuracy loss of model conversion.
Drawings
Fig. 1 is a schematic implementation flow diagram provided in an embodiment of the present application;
FIG. 2 is a flowchart illustrating a process of computing an event-driven pooling layer according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of encoding an input image into a pulse sequence according to an embodiment of the present application;
FIG. 4 is a graph of finite soft step activation functions for different slopes as provided by an embodiment of the present application;
fig. 5 is a graph of Cifar10 dataset conversion accuracy loss versus pulse sequence length provided by an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
A possible system architecture to which the embodiments of the present application are applicable will be first described with reference to the accompanying drawings.
The embodiment of the application provides a low-delay low-power-consumption pulse neural network conversion method, which comprises the following steps:
step 1, building a convolutional neural network suitable for conversion, in which the activation function is replaced by a finite soft-step activation function, and training the weights of the network with the back propagation algorithm;
step 2, normalizing the weights using the training-set images;
step 3, constructing a soft reset IF neuron model;
step 4, constructing a maximum pooling layer based on event driving;
step 5, using the neuron model and the max pooling layer to build a spiking neural network whose structure is consistent with the original ANN, and reusing the weight parameters obtained from the original ANN training;
and step 6, repeatedly encoding the input, feeding the amplitudes computed by the first convolutional layer into the replacement neurons, outputting a pulse sequence of the specified number of time steps, and inputting the pulse sequence into the network to obtain the classification result.
Further, in one embodiment, step 1 trains a qualified ANN.
Illustratively, for the Cifar10 data set, a VGG16 network structure without BN layers is adopted, the pooling layers use max pooling, training uses an ADAM optimizer with a learning rate of 0.001, the finite soft-step activation function of the invention serves as the activation function with quantization coefficient K = 32 and gradient coefficient n = 100, and training yields an ANN with an accuracy of 95.3%.
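A condensed sketch of this training setup follows; the network definition is abbreviated, the SoftStep module reuses the assumed activation form from the earlier sketch, and the mapping of the gradient coefficient n to a concrete slope w is illustrative.

```python
import torch
import torch.nn as nn

class SoftStep(nn.Module):
    # Assumed finite soft-step activation (see the earlier sketch); K = 32,
    # with the slope w standing in for the patent's gradient coefficient n.
    def __init__(self, a_limit=1.0, K=32, w=100.0):
        super().__init__()
        self.a_limit, self.K, self.w = a_limit, K, w
        self.register_buffer("b", (torch.arange(K) + 0.5) * (a_limit / K))

    def forward(self, x):
        step = self.a_limit / self.K
        return step * torch.sigmoid(self.w * (x.unsqueeze(-1) - self.b)).sum(-1)

def vgg_block(c_in, c_out, n_conv):
    # Convolution followed by the activation (no BN layers), biases disabled,
    # then max pooling, matching the configuration described above.
    layers = []
    for _ in range(n_conv):
        layers += [nn.Conv2d(c_in, c_out, 3, padding=1, bias=False), SoftStep()]
        c_in = c_out
    return layers + [nn.MaxPool2d(2)]

model = nn.Sequential(
    *vgg_block(3, 64, 2), *vgg_block(64, 128, 2), *vgg_block(128, 256, 3),
    *vgg_block(256, 512, 3), *vgg_block(512, 512, 3),
    nn.Flatten(), nn.Linear(512, 10, bias=False))   # Cifar10: 10 classes
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # ADAM, lr = 0.001
```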
Further, in one embodiment, step 2 normalizes the weights using the training-set images. The specific process comprises:
step 2-1: uniformly sample a batch of data from every class of the training set and input the batch into the ANN;
step 2-2: record the maximum activation value of the neurons of each layer of the ANN, layer by layer;
step 2-3: scale the weights, using as the scaling coefficient the ratio of the maximum activation values of the two layers adjacent to the weight parameter; the characteristic formula is:

$$\hat{w}^{l} = w^{l} \cdot \frac{\alpha^{l-1}}{\alpha^{l}}$$

where $w^{l}$ denotes the weight parameter of layer $l$ of the ANN, and $\alpha^{l}$ and $\alpha^{l-1}$ denote the maximum activation values of layer $l$ and layer $l-1$, respectively.
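A minimal sketch of this layer-wise normalization follows, assuming a simple alternating sequence of weighted layers and activations; names are illustrative, and pooling layers are omitted since max pooling preserves the maximum activation and needs no scaling of its own.

```python
import torch

@torch.no_grad()
def normalize_weights(modules, activations, batch):
    # modules[i] is the i-th weighted layer (conv or linear) and activations[i]
    # its activation function; `batch` is a calibration batch drawn uniformly
    # from every class of the training set.
    x = batch
    alpha_prev = 1.0                # alpha_0: inputs assumed normalized to [0, 1]
    for module, act in zip(modules, activations):
        x = act(module(x))          # activations of the *unscaled* network, since
                                    # x is computed before this layer is rescaled
        alpha = x.max().item()      # step 2-2: record the layer's maximum activation
        module.weight.mul_(alpha_prev / alpha)   # step 2-3: w_l *= alpha_{l-1}/alpha_l
        alpha_prev = alpha
```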
Further, in one embodiment, step 3 constructs a soft-reset IF neuron model.
Before use, the membrane potentials of the neurons are first initialized; when pulses are received, the membrane potential is updated according to the characteristic formula

$$V_i^l(t) = V_i^l(t-1) + \sum_{j=1}^{n^{l-1}} w_{ij}^{l}\,\theta_j^{l-1}(t-1)$$

where $n^{l-1}$ denotes the number of neurons in layer $l-1$, $w_{ij}^{l}$ denotes the synaptic weight between the $j$-th neuron in layer $l-1$ and the $i$-th neuron in layer $l$, and $\theta_j^{l-1}(t-1)$ indicates whether the $j$-th neuron in layer $l-1$ emitted a pulse at time $t-1$.
The membrane potential is then compared with the threshold voltage: if it exceeds the threshold, a pulse is emitted and the membrane potential is updated by soft reset; otherwise no pulse is emitted and the current membrane potential is kept. The characteristic formulas are:

$$\theta_i^l(t) = \begin{cases} 1, & V_i^l(t) \ge V_{thresh} \\ 0, & \text{otherwise} \end{cases}$$

$$V_i^l(t) \leftarrow V_i^l(t) - V_{thresh}\,\theta_i^l(t)$$
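A minimal sketch of one layer of such neurons, following the update, firing and soft-reset equations above; the class name and interface are illustrative.

```python
import torch

class SoftResetIFLayer:
    # One layer of soft-reset IF neurons following the update, firing and
    # soft-reset equations above; the class name and interface are illustrative.
    def __init__(self, weight, v_thresh=1.0):
        self.w = weight                          # shape (n_l, n_{l-1})
        self.v_thresh = v_thresh
        self.v = torch.zeros(weight.shape[0])    # initialized membrane potentials

    def step(self, spikes_in):
        # spikes_in: binary vector of pulses from layer l-1 at time t-1.
        self.v += self.w @ spikes_in             # integrate weighted input pulses
        spikes_out = (self.v >= self.v_thresh).float()
        self.v -= self.v_thresh * spikes_out     # soft reset: subtract the threshold
        return spikes_out
```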
further, in one embodiment, step 4 builds the event-driven based max pooling layer. The specific process based on the event-driven maximum pooling layer comprises the following steps:
step 4-1: the event counter within the pooling area is initialized to a maximum pooling threshold of 0.
Step 4-2: and (4) inputting an event sequence, judging whether the event counter needs to be updated according to the value of the event sequence, keeping the event counter unchanged if the input is 0, and updating the value of the event counter if the input is 1.
Step 4-3: and taking the maximum value of the counter with the current event of 1 as a preactivation value, comparing the preactivation value with a current threshold value, if the threshold value is exceeded, issuing a pulse and updating the threshold value to the preactivation value, otherwise, not issuing the pulse, and keeping the threshold value unchanged.
Taking the above Cifar10 data set and VGG16 network as an example, with a 2×2 pooling window the event counters and the threshold are all initialized to 0. If the event sequence input at time t=1 is 0011, the updated event counters are 0, 0, 1 and 1, respectively; the maximum counter value among the current events is 1, which is greater than the threshold, so the pooling layer outputs 1 and updates the threshold to 1. If the event sequence input at time t=2 is 1000, the event counters become 1, 0, 1 and 1; the maximum counter value among the current events is 1, which equals the threshold, so the pooling layer outputs 0 and keeps the threshold unchanged. Following this procedure, if the subsequently input event sequences are 0001, 1000, 1001, 0100, 0000, 0100, 1000 and 1000 in turn, the pulse sequence finally output by the pooling layer is 1010100011, which can be verified to be consistent with the maximum pulse count among the inputs.
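A minimal sketch of this pooling unit for a single pooling window follows, with the worked example above replayed as a check; names are illustrative, and the printed sequence reflects the update rules as stated.

```python
class EventDrivenMaxPool:
    # Event-driven max pooling for a single pooling window, following
    # steps 4-1 to 4-3 above; names are illustrative.
    def __init__(self, window=4):
        self.counters = [0] * window     # step 4-1: per-input event counters
        self.threshold = 0               # step 4-1: max-pooling threshold

    def step(self, events):
        # events: binary list of input pulses for this time step.
        self.counters = [c + e for c, e in zip(self.counters, events)]  # step 4-2
        active = [c for c, e in zip(self.counters, events) if e == 1]
        pre_activation = max(active) if active else 0   # step 4-3
        if pre_activation > self.threshold:
            self.threshold = pre_activation
            return 1                     # emit a pulse and raise the threshold
        return 0                         # no pulse; threshold unchanged

# Replaying the 2x2 worked example above:
pool = EventDrivenMaxPool()
inputs = ["0011", "1000", "0001", "1000", "1001",
          "0100", "0000", "0100", "1000", "1000"]
print("".join(str(pool.step([int(b) for b in s])) for s in inputs))  # 1010100011
```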
Further, in one example, in step 5 a spiking neural network whose structure is consistent with the original ANN is built using the neuron model and the max pooling layer, and the weight parameters obtained by training the original ANN are reused. It is characterized in that the network consists only of convolutional, pooling and fully-connected layers; the bias of each network layer is 0; the neurons of the convolutional and fully-connected layers are implemented with the soft-reset IF neurons described above; and the pooling layers are implemented with the event-driven max pooling layer described above.
Further, in one embodiment, repeatedly encoding the input in step 6, feeding the amplitudes computed by the first convolutional layer into the replacement neurons, and outputting a pulse sequence of the specified number of time steps comprises: the input over T consecutive time steps is the same normalized image; the first convolutional layer performs the feature encoding extracted by its convolution kernels; the computation result of the convolutional layer passes through the soft-reset IF neurons, and the output pulse sequence is regarded as the feature encoding of the image.
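A minimal sketch of this encoding scheme; `conv1` stands in for the trained first convolutional layer of the converted network, and the threshold value is illustrative.

```python
import torch

@torch.no_grad()
def encode_input(image, conv1, T=30, v_thresh=1.0):
    # The same normalized image is presented for T consecutive time steps; the
    # trained first convolution computes the analog amplitudes once, and
    # soft-reset IF neurons convert them into the spike train that serves as
    # the feature encoding of the image.
    amplitude = conv1(image)            # same image, so same amplitudes each step
    v = torch.zeros_like(amplitude)     # membrane potentials of the IF neurons
    spikes = []
    for _ in range(T):
        v += amplitude                  # integrate the constant input
        s = (v >= v_thresh).float()     # fire where the threshold is reached
        v -= v_thresh * s               # soft reset
        spikes.append(s)
    return torch.stack(spikes)          # (T, ...) pulse sequence fed to the SNN
```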
Taking the above Cifar10 data set and VGG16 network as an example, experiments verify that only 30 time steps are needed to reduce the accuracy loss to 0.5%, i.e., the classification task can be completed within a few time steps, verifying the effectiveness of the invention.
The finite soft-step activation function designed in this application introduces the quantization error of the conversion into the network training stage in advance, which improves the conversion accuracy of the network; the event-driven max pooling layer uses fixed-point operations and binary outputs, which reduces device power consumption and speeds up computation; and the application uses the trained parameters to encode the main features extracted from the receptive field, which, compared with conventional Poisson coding, reduces the randomness of the encoding and further reduces the accuracy loss of model conversion.
The above-described embodiments of the present application do not limit the scope of the present application.

Claims (7)

1. A low-delay low-power-consumption pulse neural network conversion method is characterized by comprising the following steps:
step 1, building a convolutional neural network suitable for conversion, in which the activation function is replaced by a finite soft-step activation function, and training the weights of the network with the back propagation algorithm;
step 2, normalizing the weights using the training-set images;
step 3, constructing a soft reset IF neuron model;
step 4, constructing a maximum pooling layer based on event driving;
step 5, using the neuron model and the max pooling layer to build a spiking neural network whose structure is consistent with the original ANN, and reusing the weight parameters obtained from the original ANN training;
and step 6, repeatedly encoding the input, feeding the amplitudes computed by the first convolutional layer into the replacement neurons, outputting a pulse sequence of the specified number of time steps, and inputting the pulse sequence into the network to obtain the classification result.
2. The method of claim 1, wherein the convolutional neural network comprises:
a plurality of convolutional layers, max pooling layers and fully-connected layers; all layer biases are 0; the feature-extraction part of the network is ordered as convolution, ReLU, pooling; the gradient of the bias is disabled during training; the activation function is replaced by a finite soft-step activation function, whose characteristic formulas are as follows:
(The four characteristic formulas of the finite soft-step activation function appear only as images, Figure FDA0004006350970000011 through Figure FDA0004006350970000014, in the original document.)
where $a_{in}$ and $a_{out}$ are the input and output of the activation function, respectively; $a_{limit}$ is the upper bound of the activation function's output; $K$ is the quantization coefficient; $w$ controls the climbing slope between adjacent steps and is adjusted through $n$: the larger $w$ is, the steeper the slope, the faster the rise and the higher the quantization precision, so $n$ is taken as large as possible without impairing training; $c$ is related to the quantization precision and varies with $a_{limit}$ during training; the $b_i$ are the inflection points of the curve, whose number is related to $K$.
3. The method of claim 2, wherein normalizing the weights by the training-set images comprises:
step 2-1, uniformly sampling a batch of data from every class of the training set and inputting the batch into the ANN;
step 2-2, recording the maximum activation value of each layer, layer by layer;
step 2-3, scaling the weights, using as the scaling coefficient the ratio of the maximum activation values of the two layers adjacent to the weight parameter, specifically:
$$\hat{w}^{l} = w^{l} \cdot \frac{\alpha^{l-1}}{\alpha^{l}}$$
where $w^{l}$ denotes the weight parameter of layer $l$ of the ANN, and $\alpha^{l}$ and $\alpha^{l-1}$ denote the maximum activation values of layer $l$ and layer $l-1$, respectively.
4. The method of claim 3, wherein the soft reset IF neuron model is as follows:
$$\theta_i^l(t) = \begin{cases} 1, & V_i^l(t) \ge V_{thresh} \\ 0, & \text{otherwise} \end{cases}$$
$$V_i^l(t) \leftarrow V_i^l(t) - V_{thresh}\,\theta_i^l(t)$$
where $V_{thresh}$ denotes the threshold voltage of the neuron, $t$ the current time instant, $T$ the total number of time steps of the network, $V_i^l(t)$ the membrane voltage of the $i$-th neuron in layer $l$ of the spiking neural network at time $t$, and $\theta_i^l(t)$ whether that neuron emits a pulse at time $t$ (1 if a pulse is emitted, 0 otherwise).
5. The method of claim 4, wherein constructing the event-driven-based maximum pooling layer comprises:
step 4-1, initializing the event counters within the pooling window and the max-pooling threshold to 0;
step 4-2, inputting the event sequence and deciding from its values whether the event counters need updating: a counter is kept unchanged if its input is 0 and incremented if its input is 1;
and step 4-3, taking the maximum value among the counters whose current event is 1 as the pre-activation value and comparing it with the current threshold: if it exceeds the threshold, a pulse is emitted and the threshold is updated to the pre-activation value; otherwise no pulse is emitted and the threshold is kept unchanged.
6. The method according to claim 5, wherein the spiking neural network consists of convolutional layers, pooling layers and fully-connected layers; the bias of each network layer is 0;
wherein the neurons of the convolutional and fully-connected layers use soft-reset IF neurons, and the pooling layers use the event-driven max pooling layer.
7. The method of claim 6, wherein repeatedly encoding the input, feeding the amplitudes computed by the first convolutional layer into the replacement neurons, and outputting a pulse sequence of the specified number of time steps comprises:
the input over T consecutive time steps is the same normalized image; the first convolutional layer performs the feature encoding extracted by its convolution kernels; the computation result of the convolutional layer passes through the soft-reset IF neurons, and the output pulse sequence is regarded as the feature-encoding sequence of the image.
CN202211632517.3A 2022-12-19 2022-12-19 Low-delay low-power-consumption pulse neural network conversion method Pending CN115936070A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211632517.3A CN115936070A (en) 2022-12-19 2022-12-19 Low-delay low-power-consumption pulse neural network conversion method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211632517.3A CN115936070A (en) 2022-12-19 2022-12-19 Low-delay low-power-consumption pulse neural network conversion method

Publications (1)

Publication Number Publication Date
CN115936070A (en) 2023-04-07

Family

ID=86655678

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211632517.3A Pending CN115936070A (en) 2022-12-19 2022-12-19 Low-delay low-power-consumption pulse neural network conversion method

Country Status (1)

Country Link
CN (1) CN115936070A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117574968A (en) * 2023-11-30 2024-02-20 中国海洋大学 Pulse convolution neural network based on quantum derivatization, image processing method and system
CN117634564A (en) * 2024-01-26 2024-03-01 之江实验室 Pulse delay measurement method and system based on programmable nerve mimicry core
CN117634564B (en) * 2024-01-26 2024-05-24 之江实验室 Pulse delay measurement method and system based on programmable nerve mimicry core

Similar Documents

Publication Publication Date Title
CN107092959B (en) Pulse neural network model construction method based on STDP unsupervised learning algorithm
Deng et al. Optimal conversion of conventional artificial neural networks to spiking neural networks
CN107679618B (en) Static strategy fixed-point training method and device
CN110659730A (en) Method for realizing end-to-end functional pulse model based on pulse neural network
CN111898689A (en) Image classification method based on neural network architecture search
CN114186672A (en) Efficient high-precision training algorithm for impulse neural network
CN109635938B (en) Weight quantization method for autonomous learning impulse neural network
CN112906828A (en) Image classification method based on time domain coding and impulse neural network
CN115936070A (en) Low-delay low-power-consumption pulse neural network conversion method
CN111382840B (en) HTM design method based on cyclic learning unit and oriented to natural language processing
CN114266351A (en) Pulse neural network training method and system based on unsupervised learning time coding
CN111310816B (en) Method for recognizing brain-like architecture image based on unsupervised matching tracking coding
CN115809700A (en) Spiking neural network learning method based on synapse-threshold synergy
CN112766603A (en) Traffic flow prediction method, system, computer device and storage medium
CN113902092A (en) Indirect supervised training method for impulse neural network
CN113298231A (en) Graph representation space-time back propagation algorithm for impulse neural network
Wu et al. Echo state network prediction based on backtracking search optimization algorithm
CN114399041A (en) Impulse neural network training method, device and chip
CN115880324A (en) Battlefield target image threshold segmentation method based on pulse convolution neural network
CN113469357A (en) Mapping method from artificial neural network to impulse neural network
Sun et al. Deep spiking neural network with ternary spikes
Yan et al. CQ+ Training: Minimizing Accuracy Loss in Conversion from Convolutional Neural Networks to Spiking Neural Networks
Gao New evolutionary neural networks
Bondarev Training a digital model of a deep spiking neural network using backpropagation
CN117198555B (en) Disease transmission prediction method based on viral load and neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination