CN114186672A - Efficient high-precision training algorithm for impulse neural network - Google Patents
- Publication number
- CN114186672A CN114186672A CN202111547249.0A CN202111547249A CN114186672A CN 114186672 A CN114186672 A CN 114186672A CN 202111547249 A CN202111547249 A CN 202111547249A CN 114186672 A CN114186672 A CN 114186672A
- Authority
- CN
- China
- Prior art keywords
- neural network
- pulse
- layer
- algorithm
- impulse
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
Abstract
An efficient, high-precision training algorithm for an impulse neural network, comprising the steps of: acquiring MNIST handwritten digit images and preprocessing them to obtain a pulse sequence; constructing a loss function from the output of the impulse neural network and the real sample labels, inputting the pulse sequence into the network, and training it by back-propagation to obtain a trained impulse neural network; and inputting the data to be classified into the trained network to obtain a classification result. The invention achieves high coding efficiency, overcomes the gradient-propagation problem, shortens simulation time, reduces simulation cost, maintains high classification accuracy, has good extensibility, and can be applied to different learning and classification tasks.
Description
Technical Field
The invention relates to the technical field of impulse neural networks, and in particular to an efficient, high-precision training algorithm for an impulse neural network.
Background
An artificial neural network is a mathematical model, based on the principles of biological neural networks, that simulates how the brain's nervous system processes information. From the early perceptron models to BP neural networks and, in recent years, deep neural networks, neural networks have become a powerful processing tool, driving progress in image recognition, autonomous driving and other fields and achieving breakthroughs in many of them. Although artificial neural networks perform excellently, they lack biological plausibility; researchers have therefore proposed impulse (spiking) neural network models in the hope of better simulating how the brain processes information. Designing an efficient, high-precision training method for impulse neural networks is thus a popular research topic.
The impulse neural network is one direction of neural network research, which has developed from early unsupervised learning algorithms to supervised learning algorithms in pursuit of faster simulation and higher classification accuracy. Existing learning algorithms for impulse neural networks fall mainly into two categories. The first converts a pre-trained artificial neural network into a corresponding impulse neural network; this approach needs a long simulation time to reach good classification accuracy and is difficult to deploy on low-power hardware. The second trains the impulse neural network directly; however, because the pulse function is discontinuous, the back-propagation algorithm is difficult to apply, so the accuracy is lower than that of conventional artificial neural networks, and coding schemes such as rate coding make the training efficiency unsatisfactory. It is therefore desirable to develop a training algorithm for impulse neural networks that reduces simulation time while ensuring high-precision classification. A poorly trained impulse neural network may classify digit or image data inaccurately.
Disclosure of Invention
The invention aims to provide a training algorithm for fully connected or convolutional impulse neural networks. The algorithm has very good learning efficiency: it reduces the computing resources consumed in simulating the impulse network, shortens the simulation time, and achieves very good classification accuracy. Even after short training, the impulse neural network can classify a data set according to the labelled classes with accuracy superior to that of conventional impulse neural networks, realizing classification of image data, and it can also be applied to different digit-classification tasks.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
an efficient high-precision training algorithm for an impulse neural network, comprising the steps of:
S1: acquiring an MNIST handwritten digit image, and preprocessing it to obtain a pulse sequence;
S2: constructing a loss function according to the output of the impulse neural network and the real sample labels, inputting the pulse sequence into the impulse neural network by a back-propagation method, and training the network to obtain a trained impulse neural network;
S3: inputting the data to be classified into the trained impulse neural network to obtain a classification result.
Further, in step S1, the MNIST handwritten digit image has a size of 28 × 28.
Further, in step S1, the pixel values of the MNIST handwritten digit image are divided by 255 to obtain normalized pixel values in the range [0,1]. To preprocess the input image, a random number in [0,1] is generated and compared with the normalized pixel value; if the pixel value is greater than the random number, a pulse is generated, thereby producing the pulse sequence.
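The per-pixel comparison above (a spike fires whenever the normalized intensity exceeds a fresh uniform random number) can be sketched in NumPy as follows; the function name and time-step count are illustrative assumptions, not part of the patent:

```python
import numpy as np

def encode_image(pixels, timesteps, rng=None):
    """Bernoulli rate encoding: at each time step a pixel fires a spike
    with probability equal to its normalized intensity."""
    rng = np.random.default_rng(rng)
    norm = pixels.astype(np.float64) / 255.0          # normalize to [0, 1]
    rand = rng.random((timesteps,) + pixels.shape)    # one draw per step
    return (norm > rand).astype(np.uint8)             # spike if pixel > random

# a fully bright pixel (255) fires every step, a dark one (0) never fires
img = np.array([[0, 255]], dtype=np.uint8)
spikes = encode_image(img, timesteps=100, rng=0)
```

Because the random draws lie in the half-open interval [0, 1), a normalized value of 1.0 always wins the comparison and a value of 0.0 never does.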
Further, the spiking neural network includes convolutional and pooling layers.
Further, the gradient approximation function h(u) is calculated by:

h(u) = (1/a) · 1{|u − V_th| < a/2}

that is, h(u) equals 1/a inside a window of width a centred on the threshold and 0 elsewhere, wherein a is the window width on the x-axis, u is the membrane potential, and V_th is the threshold of the neuron membrane potential.
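A common concrete choice for such a gradient approximation function is a rectangular window of width a around the threshold, as used in spatio-temporal backpropagation; the exact form in the patent is given only in an image not reproduced here, so the following sketch is an assumption:

```python
import numpy as np

def surrogate_grad(u, v_th=0.5, a=1.0):
    """Rectangular surrogate gradient: 1/a inside a window of width `a`
    centred on the threshold v_th, zero elsewhere."""
    return (np.abs(u - v_th) < a / 2) / a
```

Widening `a` makes the gradient smaller but nonzero over a larger range of membrane potentials, trading gradient magnitude against coverage.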
Further, inputting the pulse sequence into the impulse neural network by the back-propagation method and training it to obtain the trained impulse neural network comprises the following steps: reading the pulse sequence from the input layer; performing forward calculation layer by layer, in combination with the gradient approximation function, to generate pulse signals and transmit them to the next layer; after the pulse signals reach the output layer, calculating the error between the output of the last-layer neurons and the corresponding real sample labels; propagating the error estimate backwards through the parameters of the impulse neural network; and updating the synaptic weights of the convolutional and fully connected layers according to the error estimate until the error reaches a set value, obtaining the trained impulse convolutional neural network.
Further, inputting the data to be classified into the trained impulse neural network to obtain a classification result comprises the following steps:
inputting the data to be classified into the trained impulse convolutional neural network; encoding the data according to their pixel values; extracting image feature information through the convolutional and pooling layers; outputting a predicted value for each class through the fully connected layer; comparing the predicted values; and taking the class with the largest predicted value as the class of the data to be classified.
Further, the error is calculated by a loss function, and the loss function L is given by:

L = ‖ Y − (1/T) Σ_{t=1}^{T} o_{t,n} ‖²

wherein o_{t,n} denotes the output vector of the last layer n at time step t, Y is the label vector, n is the number of layers of the impulse neural network, and T is the time window.
Further, the parameters of the impulse neural network include: the number of layers, the type of network, the number of feature maps in each layer, the size of the convolution kernel, and the threshold of the neuron membrane potential.
Compared with the prior art, the invention has the following beneficial effects:
the invention adopts a back propagation method and combines a gradient approximation function, forward calculation can be carried out layer by layer, error estimation can be carried out on parameters of the pulse neural network in a reverse mode, and synaptic weights of the convolutional layer and the full-link layer are updated according to the error estimation, so that the trained pulse convolutional neural network is obtained. The invention has higher coding efficiency, overcomes the problem of gradient transmission, shortens the simulation time, reduces the simulation consumption, maintains higher classification accuracy, has good expansibility and can be applied to different learning classification tasks.
Furthermore, the invention encodes the input image so that the pulse rate of the input pulse sequence is proportional to the normalized pixel value in [0,1]; compared with a frequency (rate) coding scheme, this simplifies the coding process and improves coding efficiency.
Drawings
FIG. 1 is a flowchart illustrating the steps of a spiking neural network training algorithm according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a spiking neural network according to an embodiment.
FIG. 3 is a diagram showing the results of the loss variation in the simulation process.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments.
Referring to fig. 1, the invention relates to an efficient high-precision training algorithm for a spiking neural network, which comprises the following steps:
S1: collecting MNIST handwritten digit images, using them as input data, and preprocessing the input data to obtain a pulse sequence;
Specifically, the input MNIST handwritten digit image is read; the initial image size is 28 × 28. Each pixel value is divided by 255 to obtain a normalized pixel value in [0,1]. To preprocess the input image, a random number in [0,1] is generated and compared with the normalized pixel value; if the pixel value is greater than the random number, a pulse is generated at the corresponding position of the image, thereby producing the input pulse sequence.
S2: constructing an impulse neural network with convolutional and pooling layers, constructing pulse neurons with biological rationality, and determining the neuron parameters;
the specific process for constructing the pulse neuron with biological rationality is as follows:
the impulse neuron adopts a LIF neuron, which is defined by the following formula:
wherein u is the membrane potential,. tau.is the membrane time constant,. urestIs the membrane resting potential, is a constant, I is the sum of the currents supplied by the input synapses, and t is the time step.
To facilitate gradient calculation, the membrane potential is solved iteratively. Setting the resting potential to 0, the membrane potential u_{t+1} at time t+1 is:

u_{t+1} = λ·u_t + Σ_j W_j·o(j)

wherein the parameter λ represents the membrane decay factor determined by the membrane time constant τ; Σ_j W_j·o(j) denotes the sum I of the currents supplied by the input synapses, where the subscript j is the presynaptic index and o(j) is the corresponding presynaptic pulse, which is binary (0 or 1); u_t is the membrane potential at time t and u_{t+1} the membrane potential at time t+1. A pulse is generated only when the membrane potential exceeds the threshold V_th of the neuron membrane potential.
The parameters of the pulse neuron include the membrane time constant τ and the threshold V_th of the neuron membrane potential.
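The discrete-time LIF update above (decay, integrate, fire, reset) can be sketched as follows; the parameter values and the hard reset to the zero resting potential after a spike are illustrative assumptions:

```python
import numpy as np

def lif_step(u, spikes_in, w, lam=0.8, v_th=0.5):
    """One discrete LIF update: decay, integrate weighted input spikes,
    fire where the membrane potential crosses threshold, then reset."""
    u = lam * u + spikes_in @ w        # u_{t+1} = lambda*u_t + sum_j W_j*o(j)
    out = (u >= v_th).astype(float)    # binary output spikes
    u = u * (1.0 - out)                # reset fired neurons to rest (0)
    return u, out
```

For example, two simultaneous input spikes through weights of 0.3 each push the potential to 0.6, above a threshold of 0.5, so the neuron fires and resets; a single such spike leaves it sub-threshold at 0.3.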
S3: setting, according to the size of the input data and the functional requirements, the number of layers of the impulse neural network, the type of the network, the number of feature maps in each layer, the size of the convolution kernel, and the threshold V_th of the neuron membrane potential;
The specific set-up of the network differs for different classification tasks. For simple tasks with small data scale and few classes, fewer layers can be used; for complex tasks with large data scale and many classes, a multilayer network improves the ability to extract features.
S4: constructing a loss function from the output of the impulse neural network and the real sample labels, and training the impulse neural network built in step S2, based on the back-propagation method and the settings of step S3 (number of layers, network type, number of feature maps per layer, convolution kernel size and neuron threshold voltage), to obtain the trained impulse neural network;
wherein the loss function L is the mean square error, over a given time window T, between the time-averaged outputs and the label:

L = ‖ Y − (1/T) Σ_{t=1}^{T} o_{t,n} ‖²

wherein o_{t,n} denotes the output vector of the last layer n at time step t, Y is the label vector, n is the number of network layers, and T is the time window.
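The mean-square loss over the time window can be written directly; a minimal NumPy sketch with illustrative names:

```python
import numpy as np

def snn_loss(outputs, label):
    """L = || Y - (1/T) * sum_t o_{t,n} ||^2 over the time window T.
    `outputs` has shape (T, num_classes); `label` is the target vector Y."""
    avg = outputs.mean(axis=0)             # (1/T) * sum over time steps
    return float(np.sum((label - avg) ** 2))
```

If the output neuron of the correct class fires at every time step and all others stay silent, the loss is exactly zero.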
At time step t, the loss function L of the n-layer network is calculated, and its derivatives with respect to the neuron outputs and membrane potentials are obtained layer by layer through the chain rule, wherein l is the layer index running from 1 to n+1, o_i^{t,l} is the output of neuron i in layer l at time step t, u_i^{t,l} is the membrane potential of neuron i, i is the neuron index, and j indexes the neurons of the adjacent layer.
The pulse neuron function o(j) is non-differentiable, which hinders gradient transfer. The invention adopts gradient approximation to solve this problem; the specific approximation is:

h(u) = (1/a) · 1{|u − V_th| < a/2}

where a is the window width on the x-axis, u is the membrane potential, and h(u) is the gradient approximation function.
Combining this input coding with the gradient calculation, and configuring the structure and number of layers of the pulse network for the classification task at hand, yields a training algorithm with low time consumption and high precision.
Referring to fig. 2, based on the back-propagation algorithm combined with the gradient approximation function h(u): the pulse sequence obtained by preprocessing (i.e. the input data) is read from the input layer; forward calculation is performed layer by layer to generate pulse signals, which are transmitted to the next layer; the error between the output of the last-layer neurons and the corresponding real sample labels is calculated; the error estimate is propagated backwards through the parameters of the pulse neural network; and the synaptic weights of the convolutional and fully connected layers are updated according to the error estimate until the error reaches a set value (determined according to the actual situation) and the accuracy is high, obtaining the trained pulse convolutional neural network.
Wherein the error is calculated according to a loss function L.
In the invention, the approximation function h(u) replaces the derivative of the pulse neuron function, solving the non-differentiability of the pulse function and enabling gradient transfer.
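The forward-backward procedure described above can be condensed, for a single fully connected spiking layer, into the following sketch. It is an illustration, not the patent's implementation: the backward pass ignores the dependence of the membrane potential on weights at earlier time steps, and all parameter values (λ, V_th, a, learning rate) are assumed.

```python
import numpy as np

def train_step(w, spikes, label, lam=0.8, v_th=0.5, a=1.0, lr=0.5):
    """One simplified weight update for a fully connected spiking layer.

    Forward: T time steps of LIF dynamics; the loss is the squared error
    between the label and the time-averaged output spikes.
    Backward: the non-differentiable spike function is replaced by the
    rectangular surrogate h(u)."""
    T = spikes.shape[0]
    u = np.zeros(w.shape[1])
    outs, us = [], []
    for t in range(T):
        u = lam * u + spikes[t] @ w        # decay + integrate input
        out = (u >= v_th).astype(float)    # fire where threshold crossed
        us.append(u.copy())
        outs.append(out)
        u = u * (1.0 - out)                # hard reset after a spike
    avg = np.mean(outs, axis=0)
    loss = float(np.sum((label - avg) ** 2))
    d_out = -2.0 * (label - avg) / T       # dL/d(out at each time step)
    grad = np.zeros_like(w)
    for t in range(T):
        h = (np.abs(us[t] - v_th) < a / 2) / a   # surrogate d(out)/d(u)
        grad += np.outer(spikes[t], d_out * h)   # d(u)/d(w) = input spikes
    return w - lr * grad, loss
```

With a constant input spike train and an under-sized weight, the surrogate gradient is nonzero near threshold, so the update raises the weight toward the point where the output rate matches the label.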
S5: inputting the data to be classified into the trained impulse neural network to obtain a classification result, i.e. the output result.
Specifically, the data to be classified are input into the trained pulse convolutional neural network and encoded according to their pixel values; image feature information is extracted through the convolutional and pooling layers; a predicted value for each class is output through the fully connected layer; the predicted values are compared, and the class with the largest predicted value is taken as the class of the data to be classified (i.e. the input image).
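The decision rule above reduces to an argmax over the per-class values accumulated at the output layer (a small sketch with illustrative names):

```python
import numpy as np

def predict_class(output_spikes):
    """output_spikes: (T, num_classes) array of output-layer activity.
    The class whose neuron accumulates the largest value wins."""
    return int(np.argmax(output_spikes.sum(axis=0)))
```

Ties are broken toward the lowest class index, NumPy's default for `argmax`.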
Conventional rate coding converts the real-valued image into a pulse train at the input, whose firing rate is proportional to the pixel intensity; at the output, the firing rate of each neuron in the last layer within a given time window determines the network output. However, rate coding requires a long simulation time to achieve good performance. The invention reduces the computing resources consumed in simulating the pulse network, shortens the simulation time, achieves very good classification accuracy, can classify a data set according to the labelled classes after short training, ensures classification accuracy better than that of conventional impulse neural networks, realizes the classification of image data, and can also be applied to different digit classifications.
In a specific embodiment of the invention, a pulse neural network with convolutional and pooling layers is constructed, a biologically plausible neuron model is provided, and MNIST handwritten digit images (initial size 28 × 28) are used as input data to test the network's classification capability. The specific steps are as follows:
As shown in fig. 2, for an MNIST handwritten digit image, two fully connected layers and two convolution-pooling stages are adopted; in the convolutional layers, convolution kernels of size 5 × 5 serve as the synaptic weights connecting subsets of pre- and post-synaptic pulse neurons. First, the MNIST handwritten digit image is read and normalized to pixel values in [0,1]; each normalized pixel value is compared with a random number in [0,1], and if the pixel value is larger, a pulse is generated at the corresponding position, yielding the input pulse sequence. The first convolutional layer reads the input pulse sequence and produces feature maps of size 26 × 26, reduced to 13 × 13 by a 2 × 2 pooling layer; the feature maps after the second convolutional layer are 13 × 13 and, after 2 × 2 pooling, 11 × 11. A fully connected layer then connects to the output layer, which contains 10 neurons corresponding to the 10 picture categories. In each simulation run, the class of the input data is judged from the output predicted values.
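Feature-map sizes for convolution and pooling stages can be checked with the standard output-size formula; this generic helper is not taken from the patent and assumes valid or zero-padded convolution:

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Spatial size after a convolution or pooling layer:
    floor((size + 2*padding - kernel) / stride) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

# e.g. 28x28 input -> 5x5 valid conv -> 24x24 -> 2x2 stride-2 pool -> 12x12,
# while a 3x3 valid conv on 28x28 gives 26x26
```

Such a check is useful when wiring the fully connected layer, whose input dimension must equal the flattened size of the last feature maps.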
Referring to fig. 3, the error drops significantly within a short number of time steps, indicating that the training algorithm can complete the training and learning task of adjusting the network synaptic weights in a short time.
The training algorithm can train both fully connected and convolutional impulse neural networks; the training time is short, and the trained impulse neural network achieves high classification precision.
The efficient, high-precision impulse neural network training algorithm provided by the invention computes gradient changes directly in the back-propagation process through the gradient approximation function, while the input pulse sequence is encoded according to the pixel values. This shortens training time, reduces simulation resource consumption and yields high classification accuracy: 98.67% classification accuracy is obtained on the IRIS classification task and 99.46% on the MNIST handwritten digit classification task.
Claims (9)
1. An efficient high-precision training algorithm for a spiking neural network, comprising the steps of:
S1: acquiring an MNIST handwritten digit image, and preprocessing it to obtain a pulse sequence;
S2: inputting the pulse sequence into an impulse neural network by a back-propagation method, and training the impulse neural network using a gradient approximation function to obtain a trained impulse neural network;
S3: inputting the data to be classified into the trained impulse neural network to obtain a classification result.
2. The algorithm of claim 1, wherein in step S1 the MNIST handwritten digit image has a size of 28 × 28.
3. The algorithm of claim 1, wherein in step S1 the pixel values of the MNIST handwritten digit image are divided by 255 to obtain normalized pixel values in the range [0,1]; the input image is preprocessed by generating a random number in [0,1] and comparing it with the normalized pixel value; if the pixel value is larger than the random number, a pulse is generated, thereby obtaining the pulse sequence.
4. The algorithm of claim 1, wherein the spiking neural network comprises convolutional layers and pooling layers.
6. The efficient high-precision impulse neural network training algorithm of claim 1, wherein inputting the pulse sequence into the impulse neural network by the back-propagation method and training it comprises the following steps: reading the pulse sequence from the input layer; performing forward calculation layer by layer, in combination with the gradient approximation function, to generate pulse signals and transmit them to the next layer; after the pulse signals reach the output layer, calculating the error between the output of the last-layer neurons and the corresponding real sample labels; propagating the error estimate backwards through the parameters of the impulse neural network; and updating the synaptic weights of the convolutional and fully connected layers according to the error estimate until the error reaches a set value, obtaining the trained impulse convolutional neural network.
7. The algorithm of claim 1, wherein inputting the data to be classified into the trained impulse neural network to obtain a classification result comprises the following steps:
inputting the data to be classified into the trained impulse convolutional neural network; encoding the data according to their pixel values; extracting image feature information through the convolutional and pooling layers; outputting a predicted value for each class through the fully connected layer; comparing the predicted values; and taking the class with the largest predicted value as the class of the data to be classified.
8. The algorithm of claim 1, wherein the error is calculated by a loss function, and the loss function L is given by:

L = ‖ Y − (1/T) Σ_{t=1}^{T} o_{t,n} ‖²

wherein o_{t,n} denotes the output vector of the last layer n at time step t, Y is the label vector, n is the number of layers of the impulse neural network, and T is the time window.
9. The algorithm of claim 1, wherein the parameters of the spiking neural network comprise number of layers, type of network, number of feature maps of each layer of the network, size of convolution kernel, and threshold of neuron membrane potential.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111547249.0A CN114186672A (en) | 2021-12-16 | 2021-12-16 | Efficient high-precision training algorithm for impulse neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114186672A true CN114186672A (en) | 2022-03-15 |
Family
ID=80544234
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111547249.0A Pending CN114186672A (en) | 2021-12-16 | 2021-12-16 | Efficient high-precision training algorithm for impulse neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114186672A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114332545A (en) * | 2022-03-17 | 2022-04-12 | 之江实验室 | Image data classification method and device based on low-bit pulse neural network |
CN114332545B (en) * | 2022-03-17 | 2022-08-05 | 之江实验室 | Image data classification method and device based on low-bit pulse neural network |
CN115429293A (en) * | 2022-11-04 | 2022-12-06 | 之江实验室 | Sleep type classification method and device based on impulse neural network |
CN116797851A (en) * | 2023-07-28 | 2023-09-22 | 中国科学院自动化研究所 | Brain-like continuous learning method of image classification model, image classification method and device |
CN116797851B (en) * | 2023-07-28 | 2024-02-13 | 中国科学院自动化研究所 | Brain-like continuous learning method of image classification model, image classification method and device |
CN117493066A (en) * | 2023-12-28 | 2024-02-02 | 苏州元脑智能科技有限公司 | Fault prediction method, device, equipment and medium of server |
CN117493066B (en) * | 2023-12-28 | 2024-03-15 | 苏州元脑智能科技有限公司 | Fault prediction method, device, equipment and medium of server |
CN117556877A (en) * | 2024-01-11 | 2024-02-13 | 西南交通大学 | Pulse neural network training method based on data pulse characteristic evaluation |
CN117556877B (en) * | 2024-01-11 | 2024-04-02 | 西南交通大学 | Pulse neural network training method based on data pulse characteristic evaluation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||