CN117556877B - Pulse neural network training method based on data pulse characteristic evaluation - Google Patents

Pulse neural network training method based on data pulse characteristic evaluation

Info

Publication number
CN117556877B
Authority
CN
China
Prior art keywords
training
sample
pulse
iteration
ssm
Prior art date
Legal status
Active
Application number
CN202410040609.5A
Other languages
Chinese (zh)
Other versions
CN117556877A (en)
Inventor
储节磊 (Chu Jielei)
唐玲玲 (Tang Lingling)
李天瑞 (Li Tianrui)
Current Assignee
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date
Filing date
Publication date
Application filed by Southwest Jiaotong University
Priority to CN202410040609.5A (filed 2024-01-11)
Publication of CN117556877A (2024-02-13)
Application granted
Publication of CN117556877B (2024-04-02)
Legal status: Active

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/049 - Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06N 3/084 - Backpropagation, e.g. using gradient descent
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a pulse neural network training method based on data pulse characteristic evaluation, which comprises the following steps: initializing the network; inputting data; propagating forward; calculating gradients by a surrogate gradient method and performing back propagation with the calculated gradient information; calculating three spike-based sample difficulty evaluation metrics from the classification features extracted by the network; checking the iteration count; scaling down the training set size; calculating the extraction probability of each sample; selecting samples according to the calculated extraction probabilities; inputting the resulting new training set into the network for forward propagation; and updating the SSM, HSSM and HCSSM values according to the classification features, repeating until the preset number of iterations is reached. The method is easy to extend: it is suited to classification tasks, and replacing its classification features with the spike features extracted by other tasks adapts it to the corresponding tasks. The training process is interpretable, the process is transparent, and the training effect is improved.

Description

Pulse neural network training method based on data pulse characteristic evaluation
Technical Field
The invention relates to the field of deep learning algorithms, in particular to a pulse neural network training method based on data pulse feature evaluation.
Background
The spiking neural network (Spiking Neural Network, SNN) lies at the intersection of neuroscience and machine learning and is known as the third generation of neural networks, following the perceptron and the artificial neural network. It uses biologically inspired spiking neuron models, which are abstractions of the biological nervous system. Characterized by sparse computation and event-driven operation, the SNN is more biologically plausible than the artificial neural network (Artificial Neural Network, ANN) and is a foundation for building brain-inspired intelligence models.
At the same time, because of its sparse computation (only a small fraction of neurons are active at any given time) and event-driven operation (a spike is emitted only when the input signal reaches a certain threshold), the spiking neural network offers a new route to reducing computational energy consumption.
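As a minimal sketch of this event-driven behaviour, a single leaky integrate-and-fire neuron integrates its input and emits a spike only on a threshold crossing. This is an illustration only: the patent does not fix a neuron model, and all constants below are assumptions.

```python
import numpy as np

def simulate_lif(current, tau=2.0, v_th=1.0):
    """Single leaky integrate-and-fire neuron; emits a spike only at threshold."""
    v, spikes = 0.0, []
    for i_t in current:
        v += (i_t - v) / tau        # membrane potential leaks toward the input
        spiked = v >= v_th          # event-driven: output only on a crossing
        spikes.append(spiked)
        if spiked:
            v = 0.0                 # hard reset after the spike
    return np.asarray(spikes)

rng = np.random.default_rng(0)
spikes = simulate_lif(rng.uniform(0.0, 2.0, size=100))
print(f"fraction of steps with an event: {spikes.mean():.2f}")  # sparse output
```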
However, existing spiking neural network training methods usually ignore the energy cost incurred at the training stage: a large number of training iterations is preset and large amounts of sample data are used for model training, which leads to long training times, slow convergence, low efficiency, and poor interpretability of the training process. The invention therefore provides a pulse neural network training method based on data pulse characteristic evaluation.
Disclosure of Invention
In order to overcome the defects and shortcomings in the prior art, the invention provides a pulse neural network training method based on data pulse characteristic evaluation.
The technical scheme adopted by the invention is that the method comprises the following steps:
step 1, initializing the network: constructing a spiking neural network with an input layer-hidden layer-output layer structure, and randomly initializing the weights and other parameters of the spiking neural network;
step 2, data input: randomly shuffling the data in the complete data set, and inputting it into the spiking neural network;
step 3, forward propagation: the input data propagates forward through the network; in the process, the membrane potentials of the neurons rise gradually, and when a potential reaches the threshold a spike is generated and transmitted to the connected neurons; finally the classification features O_1^q, ..., O_g^q of the g classes of the current iteration are extracted and the classification weights are obtained;
step 4, calculating gradients by the surrogate gradient method and performing back propagation with the calculated gradient information; calculating the objective function value from the network's output classification features and the expected labels, and optimizing the weights and parameters of the network according to the back-propagated gradient information so as to minimize the objective function;
step 5, calculating three spike-based sample difficulty evaluation metrics according to the classification features extracted by the network;
step 6, repeating steps 2-5 until the iteration count reaches Q_0, where Q_0 is the preset total number of training iterations.
Further, the method further comprises:
step 7, scaling down the size of the training set in proportion;
step 8, calculating the extraction probability of each sample;
step 9, according to the calculated sample extraction probabilities, selecting N_j samples from the complete training set D to form the training set D_j of the current iteration;
step 10, inputting the obtained new training set into the network, performing forward propagation, and finally extracting the classification features O_1^q, ..., O_g^q of the g classes of the current iteration to obtain the classification weights;
further, the method further comprises:
step 11, updating the instantaneous spike-based sample difficulty metric (Sample Spike Metric, SSM), the history-fused sample difficulty metric (History Sample Spike Metric, HSSM), and the history-fused sample difficulty change metric (History Sample Spike Metric Change, HCSSM) according to the classification features;
step 12, calculating gradients by the surrogate gradient method and performing back propagation with the calculated gradient information; calculating the objective function value from the network's output classification features and the expected labels, and optimizing the weights and parameters of the network according to the back-propagated gradient information so as to minimize the objective function;
step 13, repeating steps 8-12 until the next iteration node Q is reached;
step 14, repeating steps 7-13 until the preset number of iterations is reached.
Further, in step 5, calculating the three spike-based sample difficulty evaluation metrics comprises:
step 5.1, calculating the sample-level instantaneous Sample Spike Metric (SSM) from the classification features, where SSM_i^q denotes the SSM value of sample i at the q-th training iteration; α and β are control parameters; g is the total number of classes; O_y^q is the classification feature of the correct class of the sample at the q-th training iteration; and O_c^q is the classification feature value of class c at the q-th training iteration, with c taking values from 1 to g;
step 5.2, calculating the history-fused Sample Spike Metric (HSSM), where HSSM_i^q denotes the HSSM value of sample i at the q-th training iteration; γ is the weight parameter of the historical information; SSM_i^q is the instantaneous SSM value of sample i at the q-th training iteration; and SSM_i^{q-1} is the instantaneous SSM value of sample i at the (q-1)-th training iteration;
step 5.3, calculating the history-fused change value of the Sample Spike Metric (HCSSM), where HCSSM_i^q denotes the HCSSM value of sample i at the q-th training iteration; δ is the weight parameter of the historical information; ΔSSM_i^q and ΔSSM_i^{q-1} are intermediate variables, ΔSSM_i^q denoting the change of sample i's SSM at the q-th training iteration relative to the (q-1)-th training iteration, and ΔSSM_i^{q-1} the corresponding change at the (q-1)-th iteration; the second formula gives the calculation of this intermediate variable, whose right-hand side is built from the instantaneous metric SSM_i^q of sample i at the q-th training iteration.
Further, in step 7, the training set size is scaled down, where N_j is the updated training set size at the j-th split node; N_{j-1} is the training set size at the (j-1)-th split node; N is the total data set size; λ is the subset reduction parameter; λ_0 is the subset reduction lower-bound parameter; and j is the index of the split node that training has currently reached.
Further, in step 8, the probability of each sample being extracted is calculated as P_i^q = I_i^q / Σ_k I_k^q, where P_i^q is the probability that sample i is extracted into the training set at the q-th training iteration; I_i^q is the extraction index of sample i at the q-th training iteration; and Σ_k I_k^q sums the extraction indices of all samples at the q-th training iteration. If the direct extraction method is used, I_i^q is computed from s_i, one of the SSM, HSSM and HCSSM values of sample i calculated in step 5, and from the mean of the selected metric over all samples; if the bell-shaped extraction method is used, I_i^q is computed from s_i together with σ and the mean, the standard deviation and mean of the selected metric over all samples, e denoting the natural constant.
The beneficial effects are that:
the invention provides a pulse neural network training method based on data pulse feature evaluation, and provides a pulse neural network training method which has the advantages of short training time, high training efficiency, good training effect, high interpretability and good expansibility, is easy to expand, is suitable for classifying tasks, and can adapt to corresponding tasks by replacing the classification features in the pulse neural network training method with the pulse features extracted by other tasks. Compared with the existing training method, the method has the advantages that the interpretation of the training process is realized, the training subset constructed by each iteration is obtained by formula calculation, the process is transparent, and the training effect is improved.
Drawings
FIG. 1 is a flow chart of method steps of the present invention;
FIG. 2 is a comparison of test loss curves for the method of the present invention on the NMNIST dataset;
FIG. 3 is a comparison of test accuracy curves for the method of the present invention on the NMNIST dataset;
FIG. 4 is a comparison of test loss curves for the method of the present invention on the DVS-Gesture dataset;
FIG. 5 is a comparison of test accuracy curves for the method of the present invention on the DVS-Gesture dataset.
Detailed Description
It should be noted that, in the absence of conflict, the embodiments in the present application and the features of those embodiments may be combined with each other. The present application will be further described in detail below with reference to the drawings and specific embodiments.
As shown in fig. 1, the pulse neural network training method based on data pulse characteristic evaluation includes the steps of:
step 1, initializing a network: and constructing a pulse neural network of an input layer-hidden layer-output layer structure, and randomly initializing the weight and other parameters of the pulse neural network to obtain an initial network model.
Step 2, data input: and randomly scrambling the data in the complete data set, and inputting the data pulse neural network.
Step 3, forward propagation: the incoming data is propagated forward through the network. In the process, the potential of the neuron is gradually increased, when the potential reaches a threshold value, a pulse is generated and transmitted to the connected neuron, and finally the classification characteristic of the class g of the current iteration is extractedAnd obtaining the classification weight.
Step 4, calculate gradients by the surrogate gradient (Surrogate Gradient) method and perform back propagation with the calculated gradient information; compute the objective function value from the network's output classification features and the expected labels, and optimize the weights and parameters of the network according to the back-propagated gradient information so as to minimize the objective function, obtaining the trained spiking neural network model.
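The surrogate gradient method keeps the hard threshold in the forward pass but substitutes a smooth derivative during back-propagation. The PyTorch-style sketch below uses the derivative of a fast sigmoid as the surrogate; the patent names the technique but not a particular surrogate function, so that choice is an assumption.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Hard threshold forward; smooth surrogate derivative backward."""

    @staticmethod
    def forward(ctx, v_minus_th, alpha=10.0):
        ctx.save_for_backward(v_minus_th)
        ctx.alpha = alpha
        return (v_minus_th >= 0).float()     # exact spike in the forward pass

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # derivative of the fast sigmoid x / (1 + alpha*|x|): a smooth stand-in
        # for the Dirac delta of the true threshold function
        sg = 1.0 / (1.0 + ctx.alpha * x.abs()) ** 2
        return grad_out * sg, None           # no gradient for alpha

# usage: spikes = SpikeFn.apply(v - v_th)   # differentiable for backprop
```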
Step 5, from the classification features extracted by the network, calculate the three spike-based sample difficulty evaluation metrics SSM, HSSM and HCSSM as follows.
Step 5.1, calculating an instantaneous pulse sample difficulty evaluation scale (Sample Spike Measure, SSM) of a sample level, wherein a calculation formula is as follows:
wherein,representation sample->In->SSM value for a training iteration +.>、/>For control parameters, g is the total number of categories,for the sample at->Classifying features of correct class to which the training iteration belongs, < >>Indicating that the sample is at->The classification characteristic value belonging to the c-th class during the training iteration, wherein the value of c is from 1 to g;
Step 5.2, calculate the history-fused Sample Spike Metric (HSSM), where HSSM_i^q denotes the HSSM value of sample i at the q-th training iteration; γ is the weight parameter of the historical information; SSM_i^q is the instantaneous SSM value of sample i at the q-th training iteration; and SSM_i^{q-1} is the instantaneous SSM value of sample i at the (q-1)-th training iteration.
Step 5.3, calculate the history-fused change value of the Sample Spike Metric (HCSSM), where HCSSM_i^q denotes the HCSSM value of sample i at the q-th training iteration; δ is the weight parameter of the historical information; ΔSSM_i^q and ΔSSM_i^{q-1} are intermediate variables, ΔSSM_i^q denoting the change of sample i's SSM at the q-th training iteration relative to the (q-1)-th training iteration, and ΔSSM_i^{q-1} the corresponding change at the (q-1)-th iteration; the second formula gives the calculation of this intermediate variable, whose right-hand side is built from the instantaneous metric SSM_i^q of sample i at the q-th training iteration.
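The closed-form expressions for the three metrics are rendered as images in the source publication, so the sketch below instantiates one plausible reading of the definitions above: a margin-style SSM, an exponentially smoothed HSSM, and an HCSSM that smooths successive SSM changes. All three formulas are assumptions, flagged as such in the comments.

```python
import numpy as np

def ssm(O, y, alpha=1.0, beta=1.0):
    """Instantaneous SSM_i^q of one sample (assumed form).

    O: (g,) classification features; y: index of the correct class. The
    published expression is an image, so this margin-style score (correct-class
    feature against the sum over all g classes, shaped by the control
    parameters alpha and beta) is an assumption.
    """
    return alpha * O[y] / (beta + O.sum())

def hssm(ssm_q, ssm_prev, gamma=0.9):
    """HSSM_i^q: fuse the current and previous SSM with history weight gamma
    (assumed weighted-average form)."""
    return gamma * ssm_prev + (1.0 - gamma) * ssm_q

def hcssm(hcssm_prev, ssm_q, ssm_prev, delta=0.9):
    """HCSSM_i^q: smooth the change of SSM across iterations (assumed form).

    Intermediate variable: the change value dSSM_i^q = SSM_i^q - SSM_i^{q-1}.
    """
    d_ssm = ssm_q - ssm_prev
    return delta * hcssm_prev + (1.0 - delta) * d_ssm
```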
Step 6, repeat steps 2-5 until the iteration count reaches Q_0, the preset total number of training iterations, obtaining the spiking neural network model after the first stage of training.
Step 7, scale down the training set size, where N_j is the updated training set size at the j-th split node; N_{j-1} is the training set size at the (j-1)-th split node; N is the total data set size; λ is the subset reduction parameter; λ_0 is the subset reduction lower-bound parameter; and j is the index of the split node that training has currently reached.
For the DVS-Gesture dataset, N = 1176; for N-MNIST, N = 60000; for CIFAR10-DVS, N = 9000; for N-Caltech101, N = 7886. With the subset reduction parameters λ and λ_0 set for each dataset, the shrunken training set size is obtained.
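A sketch of the shrinking schedule under an assumed rule (geometric reduction by λ with a floor of λ_0·N); the published formula and the per-dataset λ, λ_0 values are images in the source, so the defaults below are placeholders:

```python
def next_subset_size(n_prev, n_total, lam=0.9, lam_min=0.5):
    """Training-set size N_j at the j-th split node (assumed rule).

    A natural reading of the definitions is geometric shrinking by the subset
    reduction parameter lam, floored at lam_min * N.
    """
    return max(int(lam * n_prev), int(lam_min * n_total))

# e.g. DVS-Gesture (N = 1176): sizes at successive split nodes
sizes, n = [], 1176
for _ in range(5):
    n = next_subset_size(n, 1176)
    sizes.append(n)
print(sizes)   # monotonically shrinking, never below the lower bound
```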
Step 8, calculate the extraction probability of each sample as P_i^q = I_i^q / Σ_k I_k^q, where P_i^q is the probability that sample i is extracted into the training set at the q-th training iteration; I_i^q is the extraction index of sample i at the q-th training iteration; and Σ_k I_k^q sums the extraction indices of all samples at the q-th training iteration. If the direct extraction method is used, I_i^q is computed from s_i, one of the SSM, HSSM and HCSSM values of sample i calculated in step 5, and from the mean of the selected metric over all samples; if the bell-shaped extraction method is used, I_i^q is computed from s_i together with σ and the mean, the standard deviation and mean of the selected metric over all samples, e denoting the natural constant.
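A sketch of step 8 under the assumptions just stated: the 'direct' index is taken as the metric value scaled by its mean, the 'bell-shaped' index as a Gaussian around the mean (the text confirms it involves e, the mean and the standard deviation), and the indices are normalized into probabilities P_i^q:

```python
import numpy as np

def extraction_probabilities(s, method="direct"):
    """P_i^q for every sample from the chosen difficulty metric values s.

    s: (N,) SSM, HSSM or HCSSM values. The extraction-index formulas are
    images in the source, so both index forms below are assumptions.
    """
    s = np.asarray(s, dtype=float)
    if method == "direct":
        idx = s / s.mean()                 # assumed direct index
    else:  # bell-shaped
        idx = np.exp(-((s - s.mean()) ** 2) / (2.0 * s.std() ** 2))
    return idx / idx.sum()                 # normalize indices into probabilities
```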
Step 9, according to the calculated sample extraction probabilities P_i^q, select N_j samples from the complete training set D to obtain the training set D_j of the current iteration.
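Given those probabilities, drawing the subset D_j is a weighted sample without replacement, for example with numpy's generator (n_j and extraction_probabilities follow the sketches above):

```python
import numpy as np

def draw_subset(probs, n_j, seed=0):
    """Draw the indices of the N_j samples forming the current subset D_j."""
    rng = np.random.default_rng(seed)
    return rng.choice(len(probs), size=n_j, replace=False, p=probs)

# e.g. subset_idx = draw_subset(extraction_probabilities(s, "bell"), n_j)
```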
Step 10, input the obtained new training set into the network and perform forward propagation; finally, the classification features {O_1^q, ..., O_g^q} of the g classes of the current iteration are extracted and the classification weights are obtained.
Step 11, update the SSM, HSSM and HCSSM values based on the classification features.
Step 12, calculate gradients by the surrogate gradient method and perform back propagation with the calculated gradient information; compute the objective function value from the network's output classification features and the expected labels, and optimize the weights and parameters of the network according to the back-propagated gradient information so as to minimize the objective function, obtaining the trained spiking neural network.
Step 13, repeat steps 8-12 until the next iteration node Q is reached.
Step 14, repeat steps 7-13 until the preset number of iterations is reached.
The invention takes image classification as the task for verifying the effect of the proposed training method. The spiking neural network model uses a VGG13 structure, and the objective function uses the mean-square error (MSE). The total iteration count E is set to 300, with the segmentation sequence {5, 20, 30, 45, 60, 90, 120, 170, 180, 240, 300}. The datasets are DVS-Gesture, N-MNIST, CIFAR10-DVS and N-Caltech101.
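Collected as a runnable configuration (the values are those stated above; the dictionary layout itself is ours):

```python
# Experimental configuration as described in the text
config = {
    "model": "VGG13 (spiking)",
    "objective": "MSE",
    "total_iterations": 300,
    "split_nodes": [5, 20, 30, 45, 60, 90, 120, 170, 180, 240, 300],
    "datasets": ["DVS-Gesture", "N-MNIST", "CIFAR10-DVS", "N-Caltech101"],
}
```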
Compared with the existing training method, the training time is reduced by 66% on the DVS-Gesture dataset, by 74% on the N-MNIST dataset, by 50% on the CIFAR10-DVS dataset, and by 50% on the N-Caltech101 dataset; as shown in figures 3 and 5, the accuracy is improved by 2.17% on the DVS-Gesture dataset, by 0.15% on the N-MNIST dataset, by 3.3% on the CIFAR10-DVS dataset, and by 3.76% on the N-Caltech101 dataset.
As shown in fig. 2 and 4, the present invention makes the spiking neural network model converge faster than the existing method.
The method is extensible: it is suited to classification tasks, and replacing its classification features with the spike features extracted by other tasks adapts it to the corresponding tasks.
Compared with existing training methods, the method makes the training process interpretable: the training subset constructed at each iteration is obtained by formula calculation, and the process is transparent.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various equivalent changes, modifications, substitutions and alterations can be made to these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (3)

1. A pulse neural network training method based on data pulse characteristic evaluation, characterized in that the method is applied to image classification and comprises the following steps:
step 1, initializing the network: constructing a spiking neural network with an input layer-hidden layer-output layer structure, and randomly initializing the weights and other parameters of the spiking neural network;
step 2, data input: randomly shuffling the data in the complete data set, and inputting it into the spiking neural network;
step 3, forward propagation: the input data propagates forward through the network; in the process, the membrane potentials of the neurons rise gradually, and when a potential reaches the threshold a spike is generated and transmitted to the connected neurons; finally the classification features O_1^q, ..., O_g^q of the g classes of the current iteration are extracted and the classification weights are obtained;
step 4, calculating gradients by the surrogate gradient method and performing back propagation with the calculated gradient information; calculating the objective function value from the network's output classification features and the expected labels, and optimizing the weights and parameters of the network according to the back-propagated gradient information so as to minimize the objective function;
step 5, calculating three spike-based sample difficulty evaluation metrics according to the classification features extracted by the network;
step 6, repeating steps 2-5 until the iteration count reaches Q_0, where Q_0 is the preset total number of training iterations;
step 7, scaling down the size of the training set in proportion;
step 8, calculating the extraction probability of each sample;
step 9, according to the calculated sample extraction probabilities, selecting N_j samples from the complete training set D to form the training set D_j of the current iteration;
step 10, inputting the obtained new training set into the network, performing forward propagation, and finally extracting the classification features O_1^q, ..., O_g^q of the g classes of the current iteration to obtain the classification weights;
step 11, updating the SSM, HSSM and HCSSM values according to the classification features;
step 12, calculating gradients by the surrogate gradient method and performing back propagation with the calculated gradient information; calculating the objective function value from the network's output classification features and the expected labels, and optimizing the weights and parameters of the network according to the back-propagated gradient information so as to minimize the objective function;
step 13, repeating steps 8-12 until the next iteration node Q is reached;
step 14, repeating steps 7-13 until the preset number of iterations is reached;
and in step 5, calculating the three spike-based sample difficulty evaluation metrics comprises the following steps:
step 5.1, calculating the sample-level instantaneous Sample Spike Metric (SSM) from the classification features, where SSM_i^q denotes the SSM value of sample i at the q-th training iteration; α and β are control parameters; g is the total number of classes; O_y^q is the classification feature of the correct class of the sample at the q-th training iteration; and O_c^q is the classification feature value of class c at the q-th training iteration, with c taking values from 1 to g;
step 5.2, calculating the history-fused Sample Spike Metric (HSSM), where HSSM_i^q denotes the HSSM value of sample i at the q-th training iteration; γ is the weight parameter of the historical information; SSM_i^q is the instantaneous SSM value of sample i at the q-th training iteration; and SSM_i^{q-1} is the instantaneous SSM value of sample i at the (q-1)-th training iteration;
step 5.3, calculating the history-fused change value of the Sample Spike Metric (HCSSM), where HCSSM_i^q denotes the HCSSM value of sample i at the q-th training iteration; δ is the weight parameter of the historical information; ΔSSM_i^q and ΔSSM_i^{q-1} are intermediate variables, ΔSSM_i^q denoting the change of sample i's SSM at the q-th training iteration relative to the (q-1)-th training iteration, and ΔSSM_i^{q-1} the corresponding change at the (q-1)-th iteration; the second formula gives the calculation of this intermediate variable, whose right-hand side is built from the instantaneous metric SSM_i^q of sample i at the q-th training iteration.
2. The pulse neural network training method based on data pulse characteristic evaluation according to claim 1, wherein in step 7 the training set size is scaled down, where N_j is the updated training set size at the j-th split node; N_{j-1} is the training set size at the (j-1)-th split node; N is the total data set size; λ is the subset reduction parameter; λ_0 is the subset reduction lower-bound parameter; and j is the index of the split node that training has currently reached.
3. The pulse neural network training method based on data pulse characteristic evaluation according to claim 1, wherein in step 8 the probability of each sample being extracted is calculated as P_i^q = I_i^q / Σ_k I_k^q, where P_i^q is the probability that sample i is extracted into the training set at the q-th training iteration; I_i^q is the extraction index of sample i at the q-th training iteration; and Σ_k I_k^q sums the extraction indices of all samples at the q-th training iteration; if the direct extraction method is used, I_i^q is computed from s_i, one of the SSM, HSSM and HCSSM values of sample i calculated in step 5, and from the mean of the selected metric over all samples; if the bell-shaped extraction method is used, I_i^q is computed from s_i together with σ and the mean, the standard deviation and mean of the selected metric over all samples, e denoting the natural constant.
CN202410040609.5A 2024-01-11 2024-01-11 Pulse neural network training method based on data pulse characteristic evaluation Active CN117556877B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410040609.5A CN117556877B (en) 2024-01-11 2024-01-11 Pulse neural network training method based on data pulse characteristic evaluation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410040609.5A CN117556877B (en) 2024-01-11 2024-01-11 Pulse neural network training method based on data pulse characteristic evaluation

Publications (2)

Publication Number Publication Date
CN117556877A CN117556877A (en) 2024-02-13
CN117556877B true CN117556877B (en) 2024-04-02

Family

ID=89813263

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410040609.5A Active CN117556877B (en) 2024-01-11 2024-01-11 Pulse neural network training method based on data pulse characteristic evaluation

Country Status (1)

Country Link
CN (1) CN117556877B (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10579925B2 (en) * 2013-08-26 2020-03-03 Aut Ventures Limited Method and system for predicting outcomes based on spatio/spectro-temporal data
US10204301B2 (en) * 2015-03-18 2019-02-12 International Business Machines Corporation Implementing a neural network algorithm on a neurosynaptic substrate based on criteria related to the neurosynaptic substrate
US20210350236A1 (en) * 2018-09-28 2021-11-11 National Technology & Engineering Solutions Of Sandia, Llc Neural network robustness via binary activation
EP4264499A1 (en) * 2020-12-21 2023-10-25 Citrix Systems, Inc. Multimodal modelling for systems using distance metric learning
CN113255905B (en) * 2021-07-16 2021-11-02 成都时识科技有限公司 Signal processing method of neurons in impulse neural network and network training method

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021007812A1 (en) * 2019-07-17 2021-01-21 深圳大学 Deep neural network hyperparameter optimization method, electronic device and storage medium
CN112633497A (en) * 2020-12-21 2021-04-09 中山大学 Convolutional pulse neural network training method based on reweighted membrane voltage
WO2022253229A1 (en) * 2021-06-04 2022-12-08 北京灵汐科技有限公司 Synaptic weight training method, target recognition method, electronic device, and medium
CN113505686A (en) * 2021-07-07 2021-10-15 中国人民解放军空军预警学院 Unmanned aerial vehicle target threat assessment method and device
CN114186672A (en) * 2021-12-16 2022-03-15 西安交通大学 Efficient high-precision training algorithm for impulse neural network
WO2023178737A1 (en) * 2022-03-24 2023-09-28 中国科学院深圳先进技术研究院 Spiking neural network-based data enhancement method and apparatus
CN115602156A (en) * 2022-09-06 2023-01-13 西安电子科技大学(Cn) Voice recognition method based on multi-synapse connection optical pulse neural network
CN115700850A (en) * 2022-11-03 2023-02-07 天津大学四川创新研究院 Action identification method and system based on unsupervised neural network LBRI

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A Method of Converting ANN to SNN for Image Classification; Ruohong Zhou; 2023 IEEE 3rd International Conference on Electronic Technology, Communication and Information (ICETCI); 2023-07-17; 819-822 *
MD-RBM neural network model and its application to clustering of material microstructures; Chu Jielei et al.; Computer Applications and Software; 2019-06-30; 155-162 *
Brain-inspired computing based on spiking neural networks; Wang Xiuqing et al.; Journal of Beijing University of Technology; 2019-12-31; 1277-1286 *

Also Published As

Publication number Publication date
CN117556877A (en) 2024-02-13

Similar Documents

Publication Publication Date Title
US10832123B2 (en) Compression of deep neural networks with proper use of mask
CN109214566B (en) Wind power short-term prediction method based on long and short-term memory network
CN111898689B (en) Image classification method based on neural network architecture search
CN112949828B (en) Graph convolution neural network traffic prediction method and system based on graph learning
CN103166830B (en) A kind of Spam Filtering System of intelligent selection training sample and method
CN106021990A (en) Method for achieving classification and self-recognition of biological genes by means of specific characters
CN110866631A (en) Method for predicting atmospheric pollution condition based on integrated gate recursion unit neural network GRU
CN116721537A (en) Urban short-time traffic flow prediction method based on GCN-IPSO-LSTM combination model
CN112949189A (en) Modeling method for multi-factor induced landslide prediction based on deep learning
CN116628510A (en) Self-training iterative artificial intelligent model training method
CN110289987B (en) Multi-agent system network anti-attack capability assessment method based on characterization learning
CN117556877B (en) Pulse neural network training method based on data pulse characteristic evaluation
CN114723003A (en) Event sequence prediction method based on time sequence convolution and relational modeling
Shang et al. Research on intelligent pest prediction based on improved artificial neural network
CN112711896B (en) Complex reservoir group optimal scheduling method considering multi-source forecast error uncertainty
CN113240113A (en) Method for enhancing network prediction robustness
CN112579777A (en) Semi-supervised classification method for unlabelled texts
CN112651499A (en) Structural model pruning method based on ant colony optimization algorithm and interlayer information
CN116643759A (en) Code pre-training model training method based on program dependency graph prediction
Lv et al. Rumor detection based on time graph attention network
CN116431988A (en) Resident trip activity time sequence generation method based on activity mode-Markov chain
CN115794805A (en) Medium-low voltage distribution network measurement data supplementing method
Wei et al. A new sparse restricted Boltzmann machine
CN115169544A (en) Short-term photovoltaic power generation power prediction method and system
CN115330036A GRU-Seq2Seq-based multistep long flood forecasting method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant