CN116629327A - Spiking neural network conversion training method, device and chip based on quantized ANN

Spiking neural network conversion training method, device and chip based on quantized ANN

Info

Publication number: CN116629327A
Authority: CN (China)
Prior art keywords: ANN, SNN, layer, quantized, pulse
Legal status: Pending (assumed; Google has not performed a legal analysis)
Application number: CN202310599401.2A
Other languages: Chinese (zh)
Inventors: 郑乾 (Qian Zheng), 潘纲 (Gang Pan), 胡扬帆 (Yangfan Hu)
Current Assignee: Zhejiang University (ZJU)
Original Assignee: Zhejiang University (ZJU)
Priority date: 2023-05-25
Filing date: 2023-05-25
Publication date: 2023-08-22
Application filed by Zhejiang University (ZJU)
Priority to CN202310599401.2A
Publication of CN116629327A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/0495: Quantised networks; Sparse networks; Compressed networks
    • G06N 3/049: Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a spiking neural network conversion training method, device and chip based on a quantized ANN, wherein the training method comprises the following steps: (1) training a quantized ANN using a quantization-aware training method; (2) constructing an equivalent mapping between the quantized ANN and the spiking neural network SNN, and minimizing the quantization error in the ANN-to-SNN conversion through the optimization of thresholds and weights performed during quantized ANN training; (3) constructing a signed IF neuron model that detects erroneously fired pulses and compensates for them with negative pulses, so as to reduce the per-layer sequence error in the ANN-to-SNN conversion; (4) using a layer-by-layer fine-tuning method to reduce the accumulated sequence error that propagates layer by layer during the ANN-to-SNN conversion. With the method and device, the inference delay of the SNN is greatly reduced while the SNN accuracy remains comparable to the ANN accuracy, improving the real-time performance and energy efficiency of potential SNN applications.

Description

Spiking neural network conversion training method, device and chip based on quantized ANN
Technical Field
The invention belongs to the field of brain-inspired computing, and particularly relates to a spiking neural network conversion training method, device and chip based on a quantized ANN.
Background
The spiking neural network (SNN) is key to achieving low-power, efficient machine intelligence. As a third-generation artificial neural network, the SNN has at least the same computational power as the second-generation artificial neural network (ANN).
Unlike the real-valued activations used in ANNs, neurons in SNNs use pulses (binary events taking values 0 or 1) for event-driven information processing and transmission. SNNs are also considered to have low-power potential, since the sparse, pulse-based information processing and transmission mechanisms of biological neural networks are believed to be the reason for their efficiency. In the event-driven computation model of an SNN, neurons are no longer driven by a global clock; they run asynchronously and compute only when events arrive. Since the computation between neurons is purely event-driven and mutually independent, the SNN is not limited by clock polling. Combined with emerging neuromorphic devices, this will greatly enhance the low-energy-consumption potential of SNNs.
Prior-art techniques for training SNNs fall mainly into the following two categories:
Direct training techniques: the non-differentiability of the pulse function is overcome using surrogate gradients, and backpropagation through time (BPTT, Back-Propagation Through Time), similar to that used in ANNs, is applied to optimize the SNN directly. For example, Chinese patent publication No. CN115358261A discloses a tactile object recognition method based on pulse-timing error backpropagation, in which tactile data for model training are input into a spiking neural network model, the model is trained with an error backpropagation algorithm over pulse timings, and the model is iteratively updated according to the computed gradients until optimal network weight parameters are obtained, finally yielding the trained spiking neural network model.
However, because of the sparsity of pulse trains, directly training SNNs with BPTT makes poor use of the computing resources and memory of mainstream computing devices (e.g., GPUs), which hampers network training. Moreover, gradient vanishing or explosion in deep networks caused by surrogate gradients makes the direct training method less effective on highly complex tasks.
ANN-to-SNN conversion techniques: using exactly the same training process as an ANN, the existing efficient computation frameworks for ANNs on mainstream hardware can be exploited. Furthermore, by approximating the activation values of the ANN with the pulse firing rates of the SNN, ANN-to-SNN conversion algorithms achieve good performance on challenging tasks. However, prior-art schemes can suffer degraded performance during conversion due to quantization errors and accumulated errors, especially at short inference delays.
Disclosure of Invention
The invention provides a spiking neural network conversion training method based on a quantized ANN, which greatly reduces the inference delay of the SNN while ensuring that the SNN accuracy is comparable to the ANN accuracy, improving the real-time performance and energy efficiency of potential SNN applications.
A spiking neural network conversion training method based on a quantized ANN comprises the following steps:
(1) training a quantized ANN using a quantization-aware training method;
(2) constructing an equivalent mapping between the quantized ANN and the spiking neural network SNN, and minimizing the quantization error in the ANN-to-SNN conversion through the optimization of thresholds and weights performed during quantized ANN training;
(3) constructing a signed IF neuron model that detects erroneously fired pulses and compensates for them with negative pulses, so as to reduce the per-layer sequence error in the ANN-to-SNN conversion;
(4) using a layer-by-layer fine-tuning method to reduce the accumulated sequence error that propagates layer by layer during the ANN-to-SNN conversion.
In step (2), the construction of the equivalent mapping between the quantized ANN and the spiking neural network SNN is specifically:
the mapping from the quantized ANN to the SNN is achieved by constructing a correspondence between the quantization result of the ANN in the spatial domain for an input floating-point number (the quantized value) and the quantization result of the SNN in the temporal domain for the input membrane potential (the pulse firing rate), via the pulse firing function.
During quantization-aware training of the ANN, activations and weights are quantized simultaneously, and the low-precision weights in the quantized ANN are mapped directly to the low-precision synaptic weights of the converted SNN.
For a quantized ANN whose activation values have b-bit quantization precision, the quantized activation values are mapped to the pulse firing rates of an SNN with T = 2^b - 1 time steps.
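For example, with b = 2 bits the quantized activations take the four levels {0, s/3, 2s/3, s}, which correspond one-to-one to firing 0, 1, 2 or 3 pulses over T = 2^2 - 1 = 3 time steps, i.e. to the firing rates {0, 1/3, 2/3, 1} scaled by the clipping threshold s.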
In step (3), the signed IF neuron model determines whether pulses have been fired erroneously from the number of pulses already fired and the current membrane potential;
when the membrane potential is below the negative threshold and the number of fired pulses is greater than zero, the signed IF neuron model considers that a pulse has been fired erroneously and fires a negative pulse to correct the error, while the positive threshold is added to the membrane potential as the reset.
In step (4), when the layer-by-layer fine-tuning method is used, a proxy ANN that shares weights with the SNN is constructed, and the SNN weights are optimized indirectly through the proxy ANN.
The specific process of the layer-by-layer fine-tuning method is as follows:
through the proxy ANN, the Euclidean distance between the pulse firing rate map output by the SNN and the activation value map of the original quantized ANN is reduced layer by layer, so that the pulse firing rates output by the SNN approximate the activation values of the original quantized ANN as closely as possible;
starting from the second layer of the network, on the training data set, the pulse firing rate map of the current SNN layer is taken as the output of the proxy ANN and the pulse firing rate map of the previous SNN layer as its input, and the ANN weights are optimized to reduce the Euclidean distance between the SNN pulse firing rates and the quantized ANN activations; the optimized weights are then written back to the synaptic weights of the corresponding SNN layer, and the output of the current SNN layer is updated; this process is repeated until the last classification layer of the network is reached.
The invention also provides a quantized-ANN-based spiking neural network conversion training device, comprising a memory and one or more processors, wherein executable code is stored in the memory, and the one or more processors implement the above training method when executing the executable code.
The invention also provides a brain-like chip on which the spiking neural network SNN is deployed, wherein the spiking neural network SNN is trained using the above training method.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention employs an equivalent mapping between temporal quantization in the SNN and spatial quantization in the ANN. Based on this mapping, it is shown that the temporal quantization error can be minimized through supervised learning of the quantized ANN, which finds the optimal clipping range for the weight and activation distributions of each layer of the network, thereby improving the efficiency of the ANN-to-SNN conversion.
2. The invention adopts a signed IF neuron model and utilizes an additional proxy ANN to reduce the effect of a layer-by-layer fine tuning mechanism of the difference between the pulse issuing rate and the activation value between the SNN and the original ANN in the conversion process from the ANN to the SNN, thereby improving the conversion efficiency from the ANN to the SNN.
3. The invention adopts the conversion efficiency from quantized ANN to SNN under the condition of minimizing quantization error and sequence error, greatly reduces the inference delay of the SNN network on the premise that the SNN performance precision is close to the full-precision ANN, and improves the instantaneity and energy efficiency of potential SNN application.
Drawings
FIG. 1 is a block diagram of the quantized-ANN-based spiking neural network conversion training method of the present invention;
FIG. 2 illustrates the equivalence between ANN spatial quantization and SNN temporal quantization in the present invention;
FIG. 3 is a schematic diagram of the cause of sequence errors and their effect on the converted SNN in the present invention;
FIG. 4 is a schematic diagram of the layer-by-layer fine-tuning module for the converted SNN in the present invention.
Detailed Description
The invention will be described in further detail below with reference to the drawings and examples; it should be noted that the examples described below are intended to facilitate understanding of the invention and are not intended to limit it in any way.
As shown in FIG. 1, the quantized-ANN-based spiking neural network conversion training method of the present invention greatly reduces the inference delay of the SNN while keeping the SNN accuracy comparable to the ANN accuracy, improving the real-time performance and energy efficiency of potential SNN applications. The method specifically comprises the following steps:
s01, minimizing quantization errors through equivalent mapping construction between the quantization ANNs and the SNNs.
In the ANN domain, building an artificial neural network with integer activations is naturally equivalent to compressing the activation values with a uniform quantization function that outputs uniformly distributed values. Such a function spatially discretizes the full-precision activation $a_i^l$ of neuron $i$ in layer $l$ of an ANN with the ReLU activation function as:

$$\hat{a}_i^l = s^l \cdot \mathrm{clip}\!\left(\frac{1}{2^b - 1}\,\mathrm{round}\!\left(\frac{(2^b - 1)\,a_i^l}{s^l}\right),\ 0,\ 1\right)$$

where $\hat{a}_i^l$ denotes the spatially quantized value; $b$ denotes the number of bits (the precision), giving $2^b - 1$ quantization steps; $\mathrm{round}(\cdot)$ denotes the rounding operator; $s^l$ denotes a clipping threshold, which determines the clipping range of the input; and $\mathrm{clip}(x, \min, \max)$ is the clipping operation that restricts $x$ to the range $[\min, \max]$.
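A minimal NumPy sketch of this uniform quantization function follows (illustrative only; the function name quantized_relu and the sample values are ours, not from the patent):

```python
import numpy as np

def quantized_relu(a, s, b):
    """b-bit uniform quantization of ReLU activations, as in the formula above.

    a: full-precision activations; s: clipping threshold s^l; b: bit width.
    """
    steps = 2 ** b - 1                                # number of quantization steps
    q = np.clip(np.round(a * steps / s), 0, steps)    # round to integer states, clip
    return s * q / steps                              # rescale to [0, s]

# Example: b = 2 bits, s = 1.0 -> output levels {0, 1/3, 2/3, 1}
print(quantized_relu(np.array([-0.2, 0.3, 0.5, 1.4]), s=1.0, b=2))
# [0.         0.33333333 0.66666667 1.        ]
```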
In the present embodiment, the SNN model defined in the literature "Rueckauer, Bodo, Iulia-Alexandra Lungu, Yuhuang Hu, Michael Pfeiffer, and Shih-Chii Liu. Conversion of continuous-valued deep networks to efficient event-driven networks for image classification. Frontiers in Neuroscience 11 (2017): 682" is used to facilitate the conversion.
The model uses direct coding (a constant-current input to the first layer) and non-leaky IF neurons, and resets neurons by subtraction. At time step $t$, the total membrane charge of neuron $i$ in layer $l$ is:

$$z_i^l(t) = \sum_{j \in M^{l-1}} w_{ij}^l\,\Theta_j^{l-1}(t) + b_i^l$$

where $M^{l-1}$ is the set of neurons in layer $l-1$; $w_{ij}^l$ is the synaptic connection weight between neurons $i$ and $j$; $b_i^l$ is the bias term, a constant injected current; and $\Theta_j^{l-1}(t)$ denotes the input pulse from neuron $j$ at time $t$.
The membrane potential equation of the IF neuron is defined as follows:

$$V_i^l(t) = V_i^l(t-1) + z_i^l(t) - \theta^l\,\Theta_i^l(t)$$

where $\theta^l$ denotes the firing threshold and $t$ denotes the $t$-th time step; $\Theta_i^l(t) = \Theta\!\left(V_i^l(t-1) + z_i^l(t) - \theta^l\right)$, with the step function $\Theta$ defined as:

$$\Theta(x) = \begin{cases} 1, & x \ge 0 \\ 0, & \text{otherwise.} \end{cases}$$
given a pulse train of length T, the total input film charge over the time windowThe definition is as follows:
wherein, mu l Is the initial film charge.
The pulse firing function of the IF neuron thus naturally quantizes $\hat{z}_i^l(T)$ in time into a value represented by the firing rate $r_i^l(T)$:

$$r_i^l(T) = \frac{N_i^l(T)}{T}, \qquad N_i^l(T) = \mathrm{clip}\!\left(\mathrm{floor}\!\left(\frac{\hat{z}_i^l(T)}{\theta^l}\right),\ 0,\ T\right)$$

where $N_i^l(T)$ denotes the temporally quantized value (the number of output pulses) and $\mathrm{floor}(\cdot)$ denotes the rounding-down operation. Since the input to the first layer is a direct current, $\hat{z}_i^l(T) = T\,z_i^l + \mu^l$. Setting $\mu^l = \theta^l/2$, $T = 2^b - 1$ and $\theta^l = s^l$, and scaling the weights of the next layer to $s^l W^{l+1}$ so that the output pulses are equivalent to the continuous value $s^l$, then, because

$$\mathrm{floor}(x + 0.5) = \mathrm{round}(x),$$

one obtains

$$s^l\,r_i^l(T) = s^l \cdot \mathrm{clip}\!\left(\frac{1}{2^b - 1}\,\mathrm{round}\!\left(\frac{(2^b - 1)\,z_i^l}{s^l}\right),\ 0,\ 1\right) = \hat{a}_i^l.$$
From this activation-value equivalence between the ANN and the SNN, an equivalent mapping between the spatial quantization of the ANN and the temporal quantization of the SNN can be derived. Under rate coding, the number of output pulses of spiking neuron $i$ in layer $l$ is $N_i^l(T) \in \{0, 1, \ldots, T\}$, where $T$ is the length of the pulse train, so the pulse count takes $T + 1$ discrete states (values). A $b$-bit unsigned integer has $2^b$ discrete states (values) $\{0, 1, \ldots, 2^b - 1\}$. By this mapping, the integer activations $\{0, 1, \ldots, 2^b - 1\}$ of the quantized ANN can be mapped to the pulse counts $\{0, 1, \ldots, T\}$, i.e. $T$ is set to $2^b - 1$, as shown in FIG. 2.
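The temporal side of this equivalence can be checked with a short simulation of a non-leaky IF neuron with reset by subtraction and initial charge $\mu^l = \theta^l/2$, as described above (a sketch under the stated settings $\theta^l = s^l$ and $T = 2^b - 1$; the function and variable names are ours):

```python
def if_firing_rate(z, theta=1.0, T=3):
    """Firing rate of one IF neuron driven by a constant input charge z per step,
    with reset by subtraction and initial membrane charge theta / 2."""
    v = theta / 2.0            # initial charge mu^l = theta^l / 2
    n_spikes = 0
    for _ in range(T):
        v += z                 # integrate the direct-current input
        if v >= theta:         # fire a pulse and reset by subtraction
            v -= theta
            n_spikes += 1
    return n_spikes / T

# With theta = s = 1.0, b = 2 and T = 2**2 - 1 = 3, the firing rates reproduce
# the 2-bit quantized ReLU levels {0, 1/3, 2/3, 1}:
for a in (0.1, 0.4, 0.6, 0.9):
    print(a, if_firing_rate(a))   # 0.0, 1/3, 2/3, 1.0
```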
S02, the signed IF neuron model minimizes the sequence error of each layer.
FIG. 3 shows how sequence errors arise during the ANN-to-SNN conversion. For each SNN neuron, a table lists the membrane charge z, the output pulse Θ and the membrane potential V at each time step t.
In FIG. 3(a), an ANN neuron receives two inputs, 2 and -2. Its output activation is 0.
In FIG. 3(b), an IF neuron receives three pulsed charges (-1, -1, 2) at t = 1, 2, 3. The output pulse firing rate corresponds to the ANN activation value.
In FIG. 3(c), an IF neuron receives three pulsed charges (2, -1, -1) at t = 1, 2, 3. Because the membrane potential exceeds the firing threshold, it immediately fires a pulse at t = 1 and outputs no events at t = 2, 3, so the pulse firing rate does not equal the ANN activation value.
In FIG. 3(d), an SIF neuron receives three pulsed charges (2, -1, -1) at t = 1, 2, 3. Although it fires a pulse at t = 1, the SIF model outputs no event at t = 2 because the input current cancels the residual membrane potential, and it fires a negative pulse at t = 3, so the pulse firing rate corresponds to the ANN activation value.
To address the per-layer sequence error, the present invention cancels erroneously fired pulses by introducing a signed IF (SIF) neuron model. In the signed IF neuron model, a neuron fires a negative pulse only if its membrane potential reaches the negative threshold and at least one positive pulse has already been fired. To recover the erroneously subtracted membrane potential, the proposed model changes the reset mechanism for negative pulses: a negative pulse resets the membrane potential by adding the positive threshold θ. The rewritten pulse function is thus:

$$\Theta_i^l(t) = \begin{cases} 1, & V_i^l(t-1) + z_i^l(t) \ge \theta^l \\ -1, & V_i^l(t-1) + z_i^l(t) \le -\theta^l \ \text{and}\ N_i^l(t) > 0 \\ 0, & \text{otherwise} \end{cases}$$

The membrane potential equation of the corresponding signed IF neuron is as follows:

$$V_i^l(t) = V_i^l(t-1) + z_i^l(t) - \theta^l\,\Theta_i^l(t)$$

so that a positive pulse subtracts $\theta^l$ from the membrane potential and a negative pulse adds $\theta^l$ back.
s03, fine tuning the SNN layer by layer after conversion minimizes accumulated sequence errors.
The present invention minimizes the accumulated error by minimizing the Euclidean distance between the activations of the quantized ANN (which are unaffected by sequence or accumulated errors) and the SNN firing rates of each layer; the framework of the method is shown in FIG. 4.
First, the present invention obtains an SNN with L layers by converting the quantized ANN. A proxy ANN that shares parameters with the SNN is then established.
Fine-tuning starts from layer 2 (layer 1 has no sequence error). Layer l of the proxy ANN receives as input the pulse firing rate map output by layer l-1 of the SNN; meanwhile, its output is set to the pulse firing rate map of layer l of the SNN.
The Euclidean loss between the output of layer l in the proxy ANN and the reference (the activation values of layer l in the quantized ANN) is computed, this loss is minimized by optimizing the parameters (weights and biases) of layer l in the proxy ANN, and the updated parameters of the proxy ANN are finally mapped back to the corresponding SNN layer.
This process is repeated layer by layer until the last classification layer is reached. Since the membrane potential of the last layer is used directly for classification in the present invention, the last layer is skipped.
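A heavily simplified PyTorch sketch of this layer-by-layer fine-tuning loop is given below. It assumes hypothetical helpers snn.firing_rates(x, l) and quant_ann.activations(x, l) that return the layer-l pulse firing rate map of the SNN and the layer-l activation map of the quantized ANN; these names, and the per-layer module lists, are our own scaffolding, not an API defined by the patent:

```python
import torch
import torch.nn as nn

def finetune_layerwise(proxy_ann, snn, quant_ann, loader, epochs=1, lr=1e-4):
    """For each layer l >= 2, optimize the proxy-ANN layer so that, when fed the
    SNN's layer (l-1) firing rates, its output matches the quantized ANN's
    layer-l activations in Euclidean distance; then copy the weights back."""
    mse = nn.MSELoss()
    last = len(proxy_ann.layers) - 1            # classification layer: skipped
    for l in range(1, last):                    # 0-based indices, i.e. layer 2 onward
        layer = proxy_ann.layers[l]
        opt = torch.optim.SGD(layer.parameters(), lr=lr)
        for _ in range(epochs):
            for x, _ in loader:
                inp = snn.firing_rates(x, l - 1).detach()    # SNN output, layer l-1
                ref = quant_ann.activations(x, l).detach()   # quantized ANN reference
                opt.zero_grad()
                loss = mse(layer(inp), ref)     # Euclidean loss to be minimized
                loss.backward()
                opt.step()
        # write the fine-tuned weights back into the corresponding SNN layer
        snn.layers[l].load_state_dict(layer.state_dict())
```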
The foregoing embodiments describe the technical solution and advantages of the present invention in detail. It should be understood that the embodiments are merely illustrative and are not intended to limit the invention; any modifications, additions or equivalent substitutions made within the scope of the principles of the present invention shall fall within the protection scope of the invention.

Claims (9)

1. A spiking neural network conversion training method based on a quantized ANN, comprising:
(1) training a quantized ANN using a quantization-aware training method;
(2) constructing an equivalent mapping between the quantized ANN and the spiking neural network SNN, and minimizing the quantization error in the ANN-to-SNN conversion through the optimization of thresholds and weights performed during quantized ANN training;
(3) constructing a signed IF neuron model that detects erroneously fired pulses and compensates for them with negative pulses, so as to reduce the per-layer sequence error in the ANN-to-SNN conversion;
(4) using a layer-by-layer fine-tuning method to reduce the accumulated sequence error that propagates layer by layer during the ANN-to-SNN conversion.
2. The quantized-ANN-based spiking neural network conversion training method according to claim 1, wherein in step (2), the construction of the equivalent mapping between the quantized ANN and the spiking neural network SNN is specifically:
the mapping from the quantized ANN to the SNN is achieved by constructing, via the pulse firing function, a correspondence between the quantized values of the ANN for input floating-point numbers in the spatial domain and the pulse firing rates of the SNN for the input membrane potential in the temporal domain.
3. The quantized-ANN-based spiking neural network conversion training method according to claim 2, wherein activations and weights are quantized simultaneously during quantization-aware training of the ANN, and the low-precision weights in the quantized ANN are mapped directly to the low-precision synaptic weights of the converted SNN.
4. The quantized-ANN-based spiking neural network conversion training method according to claim 3, wherein for a quantized ANN whose activation values are quantized with b-bit precision, the quantized activation values are mapped to the pulse firing rates of an SNN with T = 2^b - 1 time steps.
5. The quantized-ANN-based spiking neural network conversion training method according to claim 1, wherein in step (3), the signed IF neuron model determines whether pulses have been fired erroneously from the number of pulses already fired and the current membrane potential;
when the membrane potential is below the negative threshold and the number of fired pulses is greater than zero, the signed IF neuron model considers that a pulse has been fired erroneously and fires a negative pulse to correct the error, while the positive threshold is added to the membrane potential as the reset.
6. The quantized-ANN-based spiking neural network conversion training method according to claim 1, wherein in step (4), the layer-by-layer fine-tuning method constructs a proxy ANN that shares weights with the SNN, and the SNN weights are optimized indirectly through the proxy ANN.
7. The quantized-ANN-based spiking neural network conversion training method according to claim 6, wherein the specific process of the layer-by-layer fine-tuning method is as follows:
through the proxy ANN, the Euclidean distance between the pulse firing rate map output by the SNN and the activation value map of the original quantized ANN is reduced layer by layer, so that the pulse firing rates output by the SNN approximate the activation values of the original quantized ANN as closely as possible;
starting from the second layer of the network, on the training data set, the pulse firing rate map of the current SNN layer is taken as the output of the proxy ANN and the pulse firing rate map of the previous SNN layer as its input, and the ANN weights are optimized to reduce the Euclidean distance between the SNN pulse firing rates and the quantized ANN activations; the optimized weights are then written back to the synaptic weights of the corresponding SNN layer, and the output of the current SNN layer is updated; this process is repeated until the last classification layer of the network is reached.
8. A quantized-ANN-based spiking neural network conversion training device, comprising a memory and one or more processors, the memory storing executable code, the one or more processors being configured to implement the training method of any one of claims 1-7 when executing the executable code.
9. A brain-like chip on which a spiking neural network SNN is deployed, wherein the spiking neural network SNN is trained using the training method of any one of claims 1-7.
CN202310599401.2A 2023-05-25 2023-05-25 Spiking neural network conversion training method, device and chip based on quantized ANN Pending CN116629327A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310599401.2A CN116629327A (en) 2023-05-25 2023-05-25 Spiking neural network conversion training method, device and chip based on quantized ANN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310599401.2A CN116629327A (en) 2023-05-25 2023-05-25 Spiking neural network conversion training method, device and chip based on quantized ANN

Publications (1)

Publication Number Publication Date
CN116629327A (en) 2023-08-22

Family

ID=87596854

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310599401.2A Pending CN116629327A (en) Spiking neural network conversion training method, device and chip based on quantized ANN

Country Status (1)

Country Link
CN (1) CN116629327A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117037287A (en) * 2023-10-08 2023-11-10 武汉理工大学 (Wuhan University of Technology) Behavior recognition method, system and device based on a 3D spiking neural network
CN117037287B (en) * 2023-10-08 2023-12-29 武汉理工大学 (Wuhan University of Technology) Behavior recognition method, system and device based on a 3D spiking neural network

Similar Documents

Publication Publication Date Title
US10929744B2 (en) Fixed-point training method for deep neural networks based on dynamic fixed-point conversion scheme
US11308392B2 (en) Fixed-point training method for deep neural networks based on static fixed-point conversion scheme
EP3619651B1 (en) System and method for batch-normalized recurrent highway networks
WO2022148272A1 (en) Spiking neural network training method, data processing method, electronic device, and medium
CN116629327A (en) Spiking neural network conversion training method, device and chip based on quantized ANN
US11521057B2 (en) Learning system and learning method
US11521131B2 (en) Systems and methods for deep-learning based super-resolution using multiple degradations on-demand learning
US11640534B2 (en) Threshold triggered back propagation of an artificial neural network
Ting et al. Exploiting randomness in stochastic computing
US11630953B2 (en) Systems and methods for end-to-end deep reinforcement learning based coreference resolution
US11049000B2 (en) Distributed state via cascades of tensor decompositions and neuron activation binding on neuromorphic hardware
CN115936070A Low-latency, low-power spiking neural network conversion method
WO2020177863A1 (en) Training of algorithms
CN115965062A (en) FPGA (field programmable Gate array) acceleration method for BERT (binary offset Transmission) middle-layer normalized nonlinear function
US11526735B2 (en) Neuromorphic neuron apparatus for artificial neural networks
CN114118378A (en) Hardware-friendly STDP learning method and system based on threshold self-adaptive neurons
CN114092763A Method for constructing a spiking neural network model
CN115880324A Battlefield target image threshold segmentation method based on a spiking convolutional neural network
Jalaian et al. Uncertainty quantification in internet of battlefield things
US20220121910A1 (en) Neural apparatus for a neural network system
US20220036185A1 (en) Techniques for adapting neural networks to devices
US11823027B1 (en) System, network and method for selective activation of a computing network
Rotermund et al. Competitive performance and superior noise robustness of a non-negative deep convolutional spiking network
Mametsaliyev BUILDING A MATHEMATICAL MODEL OF MULTILAYER PERCEPTRON IN NEURAL NETWORK
CN117830799A (en) Training brain-like gesture recognition model, gesture category recognition method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination