CN108717570A - Spiking neural network parameter quantization method - Google Patents

Spiking neural network parameter quantization method

Info

Publication number
CN108717570A
CN108717570A (application CN201810501442.2A)
Authority
CN
China
Prior art keywords
neural network
parameters
parameter
pulse
training
Prior art date
Legal status
Pending
Application number
CN201810501442.2A
Other languages
Chinese (zh)
Inventor
胡绍刚
乔冠超
张成明
罗鑫
刘夏凯
宁宁
刘洋
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201810501442.2A
Publication of CN108717570A
Legal status: Pending

Classifications

    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/061 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons, using biological neurons, e.g. biological neurons connected to an integrated circuit
    • G06N 3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons, using electronic means


Abstract

The present invention relates to the field of neural network technology, and in particular to a spiking neural network (SNN) parameter quantization method. The method obtains a trained original SNN either by offline mapping or by online training, and then quantizes the trained network's parameters, such as weights, thresholds, leakage constants, reset voltages, refractory periods, and synaptic delays. All layers of the network may share one set of quantized values, or each layer may use its own set. After parameter quantization, the SNN needs only a small number of distinct parameter values to deliver high-accuracy SNN functionality. While maintaining high accuracy, the method substantially reduces the storage space needed for SNN parameters, increases computation speed, and lowers power consumption.

Description

Spiking neural network parameter quantization method
Technical Field
The invention relates to the technical field of neural networks, and in particular to a spiking neural network parameter quantization method.
Background
Spiking neural networks (SNNs) are known as the third generation of neural networks. They process information in a manner closer to the human brain and are the development direction of future neural network technology. An SNN receives information as spike (pulse) trains, and several encoding schemes can interpret a spike train as an actual number; the common ones are pulse (temporal) coding and frequency (rate) coding. Neurons also communicate by spikes: when the membrane potential of a neuron exceeds its threshold, the neuron emits a spike that is transmitted to other neurons and raises or lowers their membrane potentials. SNN hardware platforms, called neuromorphic or brain-like chips, depart completely from the traditional von Neumann architecture; such chips consume little power and few resources, and in brain-like tasks such as classification and recognition their performance greatly exceeds that of conventional chips. There are two main ways to train an SNN. One is to train a corresponding artificial neural network (ANN) under specific constraints and then map the trained parameters into the SNN, which typically requires transferring a large number of parameters. The other is to train the SNN directly online, which likewise produces a large number of parameters. Storing these parameters in a conventional memory (such as SRAM or DRAM) requires a huge storage space, while storing them in an emerging device such as a memristor makes it difficult to realize many distinct values accurately and stably; in addition, the sheer number of parameters slows computation and increases power consumption. At present there is no method for compressing the large number of parameters in an SNN.
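To make the parameters named above concrete, the following is a minimal sketch in Python of one leaky integrate-and-fire (LIF) update step. It only illustrates the roles of the weight, threshold, leakage constant, reset voltage, and refractory period (synaptic delay is omitted for brevity); the exact update rule and all names are assumptions for illustration, not the patent's implementation.

    import numpy as np

    def lif_step(v, refr, in_spikes, w, threshold=1.0, leak=0.9,
                 v_reset=0.0, t_refr=2):
        """One time step of a population of LIF neurons (assumed model).

        v          membrane potentials, shape (n_out,)
        refr       remaining refractory steps per neuron, shape (n_out,)
        in_spikes  binary input spike vector, shape (n_in,)
        w          synaptic weights, shape (n_out, n_in)
        """
        active = refr == 0                                 # outside refractory period
        v = np.where(active, leak * v + w @ in_spikes, v)  # leak, then integrate input
        fired = active & (v >= threshold)                  # threshold crossing emits a spike
        v = np.where(fired, v_reset, v)                    # reset voltage after a spike
        refr = np.where(fired, t_refr, np.maximum(refr - 1, 0))
        return v, refr, fired.astype(np.int8)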
Disclosure of Invention
To solve the problems in the prior art, the invention provides a method for reducing the storage space required by SNN parameters.
The technical scheme of the invention is as follows:
A spiking neural network parameter quantization method, comprising the following steps:
Obtain the trained original SNN. The SNN takes spike trains as input; its main parameters include weights, thresholds, leakage constants, reset voltages, refractory periods, and synaptic delays. The trained SNN performs high-accuracy classification, recognition, and similar functions. There are two main ways to obtain it. The first is offline mapping: train an ANN (an MLP, CNN, RNN, LSTM, or similar) with standard methods such as stochastic gradient descent; once the ANN meets the index requirements (for example, classification or recognition accuracy), map its parameters into an SNN with the same topology and feed the SNN the ANN's input after spike-train encoding (for example, a Poisson-distributed spike train), thereby obtaining the trained SNN. The second is online training: build an SNN (for example, a self-organizing SNN or another structure), train it online with learning rules such as spike-timing-dependent plasticity (STDP) using spike trains (for example, Poisson-distributed or time-coded spike trains), adjusting its weights, thresholds, leakage constants, reset voltages, refractory periods, synaptic delays, and other parameters during training; once the SNN meets the index requirements, fix its parameters, thereby obtaining the trained SNN;
Select one or more parameters to quantize. Quantizable parameters include weights, thresholds, leakage constants, reset voltages, refractory periods, and synaptic delays;
Separately collect the parameter distribution statistics of one layer, several layers, or all layers;
Attempt an interval division of the parameters. Choose an interval-division method and a number of intervals; the division may use an equal-width method, a non-equal-width method, a confidence-interval method, or similar, and both the method and the number of intervals can be tried and adjusted, guided by tuning experience, according to the specific network structure, task type, and index requirements;
Quantize the parameters by interval. Traverse the parameters in every interval and quantize all parameters distributed in the same interval to a single value (the quantized value), identical in magnitude and sign; the quantized values can be tried and adjusted, guided by tuning experience, according to the specific network structure, task type, and index requirements;
Replace the corresponding parameters in the original SNN with the quantized values to obtain the parameter-quantized SNN;
Test the parameter-quantized SNN with the input of the original SNN. If the test result meets the index requirement, the method ends; otherwise, return to reselect the interval-division method and the number of intervals, and repeat the interval division and the subsequent steps.
The method has the advantages that an ANN can be converted into an SNN and the SNN's parameters can be quantized; the quantization procedure is simple and flexible to operate, supports many forms of quantization, barely affects network performance, saves storage resources, and accelerates computation. In particular, when the SNN is implemented in hardware, the method reduces on-chip resource consumption (for example, RAM) and computational complexity, improving hardware computing speed and performance.
Drawings
FIG. 1 is a diagram of an SNN parameter quantization method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an MLP as one example of the ANN in FIG. 1;
FIG. 3 is a schematic diagram of a CNN as one example of the ANN in FIG. 1;
FIG. 4 is a schematic diagram of an RNN as one example of the ANN in FIG. 1;
FIG. 5 is a schematic diagram of an LSTM as one example of the ANN in FIG. 1;
FIG. 6 is a schematic diagram of a self-organizing network as one example of the SNN in FIG. 1;
FIG. 7 shows a weight distribution and its interval division for one example of the quantized parameters in FIG. 1;
FIG. 8 shows a threshold distribution and its interval division for one example of the quantized parameters in FIG. 1;
FIG. 9 shows a leakage-constant distribution and its interval division for one example of the quantized parameters in FIG. 1;
FIG. 10 shows a reset-voltage distribution and its interval division for one example of the quantized parameters in FIG. 1;
FIG. 11 shows a refractory-period distribution and its interval division for one example of the quantized parameters in FIG. 1;
FIG. 12 shows a synaptic-delay distribution and its interval division for one example of the quantized parameters in FIG. 1;
FIG. 13 is a schematic diagram of an example implementation of parameter quantization for the SNN in FIG. 1.
Detailed Description
The present invention is described in detail below with reference to the accompanying drawings so that those skilled in the art can better understand it. Note that in the following description, detailed descriptions of known functions and designs are omitted where they would obscure the subject matter of the invention.
As shown in FIG. 1, an SNN parameter quantization method includes the following steps:
S1: Obtain the trained original SNN.
The SNN takes spike trains as input; its main parameters include weights, thresholds, leakage constants, reset voltages, refractory periods, and synaptic delays. The trained SNN performs high-accuracy classification, recognition, and similar functions. There are two main ways to obtain it. The first is offline mapping: train an ANN such as an MLP (see FIG. 2), CNN (see FIG. 3), RNN (see FIG. 4), or LSTM (see FIG. 5) with standard methods such as stochastic gradient descent; once the ANN meets the index requirements (for example, classification or recognition accuracy), map its parameters into an SNN with the same topology and feed the SNN the ANN's input after spike-train encoding (for example, a Poisson-distributed spike train), thereby obtaining the trained SNN. The second is online training: build an SNN such as a self-organizing SNN (see FIG. 6) or another structure, train it online with learning rules such as spike-timing-dependent plasticity (STDP) using spike trains (for example, Poisson-distributed or time-coded spike trains), adjusting its weights, thresholds, leakage constants, reset voltages, refractory periods, synaptic delays, and other parameters during training; once the SNN meets the index requirements, fix its parameters, thereby obtaining the trained SNN.
S2: Select one or more parameters to quantize.
One parameter or several parameters may be quantized at a time. Quantizable parameters include weights, thresholds, leakage constants, reset voltages, refractory periods, and synaptic delays.
S3: Separately collect the parameter distribution statistics of one layer, several layers, or all layers.
For each parameter to be quantized, gather its values over one layer, several layers, or all layers of the SNN and plot the parameter distribution.
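As a hedged sketch of this statistics step, assuming the selected parameter of each chosen layer is available as a NumPy array (the function name and binning are illustrative, not part of the patent):

    import numpy as np

    def parameter_distribution(layer_params, bins=50):
        """Pool one selected parameter (e.g. the weights) over the chosen
        layers and return its empirical distribution as histogram counts
        and bin edges, which can then be plotted as in FIGS. 7-12."""
        pooled = np.concatenate([p.ravel() for p in layer_params])
        return np.histogram(pooled, bins=bins)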
S4: Attempt an interval division of the parameters.
Choose an interval-division method and a number of intervals. The division may use an equal-width method, a non-equal-width method, a confidence-interval method, or similar, and both the method and the number of intervals can be tried and adjusted, guided by tuning experience, according to the specific network structure, task type, and index requirements. On the basis of the distribution plots from S3, the figures show the results of equal-width interval division for the weights (see FIG. 7), thresholds (see FIG. 8), leakage constants (see FIG. 9), reset voltages (see FIG. 10), refractory periods (see FIG. 11), and synaptic delays (see FIG. 12).
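A minimal sketch of the equal-width variant of this step, assuming the pooled parameter values are a NumPy array; the other division methods would only change how the boundaries are chosen:

    import numpy as np

    def equal_width_edges(params, n_intervals):
        """Equal-width division of [min, max] of the parameter values
        into n_intervals intervals; returns the n_intervals + 1 boundaries."""
        return np.linspace(params.min(), params.max(), n_intervals + 1)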
S5: Quantize the parameters by interval.
Traverse the parameters in every interval and quantize all parameters distributed in the same interval to a single value (the quantized value); the quantized values can be tried and adjusted, guided by tuning experience, according to the specific network structure, task type, and index requirements.
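A sketch of the per-interval quantization under the same assumptions; the quantized value of each interval is passed in explicitly so that it can be tried and adjusted as described:

    import numpy as np

    def quantize_by_interval(params, edges, values):
        """Map every parameter falling in interval i (edges[i] .. edges[i+1])
        to the single quantized value values[i]; len(values) == len(edges) - 1."""
        idx = np.clip(np.digitize(params, edges[1:-1]), 0, len(values) - 1)
        return np.asarray(values)[idx]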
S6: Obtain the parameter-quantized SNN.
Replace the corresponding parameters in the original SNN with the quantized values to obtain the parameter-quantized SNN.
S7: Test the parameter-quantized SNN.
Test the parameter-quantized SNN with the input of the original SNN. If the test result meets the index requirement, the method ends; otherwise, return to reselect the interval-division method and the number of intervals and repeat the interval division and the subsequent steps.
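Putting S4 through S7 together, the following is a hedged sketch of the try-and-retest loop, reusing the helper sketches above; here the simple strategy of increasing the number of equal-width intervals stands in for the reselection step, and `evaluate` and `target` are hypothetical placeholders for the task-specific test of the quantized SNN and the index requirement:

    def quantize_until_ok(params, evaluate, target, max_intervals=16):
        """Try increasingly fine equal-width divisions (S4), quantize to the
        interval midpoints (S5-S6), and keep the first scheme whose test
        result meets the index requirement (S7)."""
        for n in range(2, max_intervals + 1):
            edges = equal_width_edges(params, n)
            mids = (edges[:-1] + edges[1:]) / 2   # midpoint as quantized value
            q = quantize_by_interval(params, edges, mids)
            if evaluate(q) >= target:             # S7: test the quantized SNN
                return q, edges
        raise RuntimeError("no tried interval scheme met the index requirement")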
Referring to FIG. 13, the quantization method is further illustrated below with an MLP performing the MNIST handwritten-digit recognition task, with the following steps:
S8: Set the target recognition accuracy.
This is the index requirement of S1: set the target recognition accuracy of the neural network system on the MNIST test set.
S9: Train the MLP with the BP algorithm and map the trained weights directly into the SNN.
This is the offline training route of S1. In this example the MLP must satisfy two conditions during training: (1) all units of the MLP use the ReLU function as the activation function; (2) the neuron biases are fixed at 0 during training.
S10: Set the threshold and maximum frequency of the SNN.
These are characteristic parameters of the SNN. The SNN input must be in spike form, so the input image has to be encoded: in this example every pixel of the image is encoded with Poisson-distributed rate coding, and the spike frequency is positively correlated with the pixel intensity. The maximum spike frequency and the threshold of the LIF neurons are tried and adjusted, guided by tuning experience, according to the magnitudes of the mapped parameters and the recognition rate fed back from later steps.
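A minimal sketch of this encoding step, assuming pixel intensities normalized to [0, 1]; the Bernoulli-per-time-step approximation of a Poisson process and the parameter names are assumptions for illustration:

    import numpy as np

    def poisson_encode(image, t_steps=100, max_rate=0.5, rng=None):
        """Rate coding: at each time step a pixel spikes with probability
        proportional to its intensity, capped by max_rate (the 'maximum
        frequency' tuned in S10). Returns spikes of shape (t_steps, *image.shape)."""
        rng = np.random.default_rng() if rng is None else rng
        p = np.clip(image, 0.0, 1.0) * max_rate
        return (rng.random((t_steps,) + np.shape(image)) < p).astype(np.int8)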
S11: Test the SNN recognition rate.
If the target recognition accuracy set in S8 is met, proceed to the next step; otherwise, return to S9 to retrain the MLP and repeat the subsequent steps.
S12: Obtain the weight distributions of all layers and divide the weight range into equal intervals.
This performs the statistics of all-layer parameter distributions described in S3 and the interval division described in S4. In this example only the weights are counted and divided: an equal-width division is tried, with 4 intervals.
S13: Quantize the weights by interval (trying the interval midpoints as quantized values).
This performs the interval quantization described in S5; in this example the midpoint of each interval is tried as the quantized value.
S14: Traverse all weights of the SNN to find the maximum wmax and the minimum wmin, and take w0 as the average of wmax and wmin.
S15: Then take w-1 as the average of wmin and w0, and w1 as the average of wmax and w0; the weight range is now divided into four equal intervals.
S16: Traverse all weights again: if a weight lies between wmin and w-1, set it to the midpoint x1 of that interval; if it lies between w-1 and w0, set it to the midpoint x2; x3 and x4 are obtained for the remaining two intervals in the same way. A sketch of these three steps in code follows.
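A direct, hedged transcription of S14 through S16 in Python under the reading above (four equal intervals, each weight replaced by its interval midpoint); the function name is illustrative:

    import numpy as np

    def quantize_weights_four_intervals(w):
        """S14-S16: split [wmin, wmax] into four equal intervals by repeated
        averaging, then map each weight to the midpoint of its interval."""
        wmax, wmin = w.max(), w.min()
        w0 = (wmax + wmin) / 2                 # S14
        w_m1 = (wmin + w0) / 2                 # S15: w-1
        w_p1 = (wmax + w0) / 2                 # S15: w1
        edges = np.array([wmin, w_m1, w0, w_p1, wmax])
        mids = (edges[:-1] + edges[1:]) / 2    # x1 .. x4
        idx = np.clip(np.digitize(w, edges[1:-1]), 0, 3)
        return mids[idx]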
S17: Test the SNN recognition rate.
This is the test of the parameter-quantized SNN described in S7. If the performance index is met, the method ends; otherwise, return to S12 to reselect the interval-division scheme and repeat the subsequent steps.

Claims (4)

1. A spiking neural network parameter quantization method, characterized by comprising the following steps:
S1, obtaining a trained original spiking neural network, the parameters of which include weights, thresholds, leakage constants, reset voltages, refractory periods, and synaptic delays;
S2, selecting one or more parameters to be quantized;
S3, collecting the distribution statistics of the selected parameters in the neural network;
S4, dividing the selected parameters into intervals;
S5, quantizing the parameters by interval, that is, quantizing all parameters distributed in the same interval to the same value;
S6, obtaining the parameter-quantized spiking neural network, that is, replacing the corresponding original parameters of the original spiking neural network with the quantized values obtained in step S5;
S7, testing the parameter-quantized spiking neural network obtained in step S6: the network is tested with the input of the original spiking neural network; if the test result meets the preset index requirement, the method ends; otherwise, return to step S3.
2. The spiking neural network parameter quantization method according to claim 1, wherein the trained original spiking neural network in step S1 is obtained in one of the following ways:
training a corresponding artificial neural network, the artificial neural network being one of a multilayer perceptron, a convolutional neural network, a recurrent neural network, and a long short-term memory network, mapping the trained parameters into a spiking neural network with the same topology, and encoding the input data as spike trains, thereby obtaining the original spiking neural network;
or,
building a spiking neural network, taking spike trains as input, training the spiking neural network online with a spike-timing-dependent plasticity learning mechanism, and fixing the parameters of the spiking neural network after training, thereby obtaining the trained original spiking neural network.
3. The spiking neural network parameter quantization method according to claim 2, wherein step S3 specifically comprises:
collecting the distribution statistics of the selected parameters in one layer of the network;
or,
collecting the distribution statistics of the selected parameters in several layers of the network;
or alternatively,
collecting the distribution statistics of the selected parameters in every layer of the network.
4. The spiking neural network parameter quantization method according to claim 3, wherein step S4 specifically comprises:
dividing the parameters into intervals using one of an equal-width method, a non-equal-width method, and a confidence-interval method, and determining the number of intervals.
CN201810501442.2A 2018-05-23 2018-05-23 Spiking neural network parameter quantization method Pending CN108717570A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810501442.2A 2018-05-23 2018-05-23 Spiking neural network parameter quantization method (CN108717570A)

Publications (1)

Publication Number Publication Date
CN108717570A 2018-10-30

Family

ID=63900490

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810501442.2A 2018-05-23 2018-05-23 Spiking neural network parameter quantization method (CN108717570A, pending)

Country Status (1)

Country Link
CN (1) CN108717570A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106022614A (en) * 2016-05-22 2016-10-12 广州供电局有限公司 Data mining method of neural network based on nearest neighbor clustering
CN107704917A (en) * 2017-08-24 2018-02-16 北京理工大学 A kind of method of effectively training depth convolutional neural networks


Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11437032B2 (en) 2017-09-29 2022-09-06 Shanghai Cambricon Information Technology Co., Ltd Image processing apparatus and method
US11709672B2 (en) 2018-02-13 2023-07-25 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
US11720357B2 (en) 2018-02-13 2023-08-08 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
US12073215B2 (en) 2018-02-13 2024-08-27 Shanghai Cambricon Information Technology Co., Ltd Computing device with a conversion unit to convert data values between various sizes of fixed-point and floating-point data
US11609760B2 (en) 2018-02-13 2023-03-21 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
US11704125B2 (en) 2018-02-13 2023-07-18 Cambricon (Xi'an) Semiconductor Co., Ltd. Computing device and method
US11740898B2 (en) 2018-02-13 2023-08-29 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
US11397579B2 (en) 2018-02-13 2022-07-26 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
US11507370B2 (en) 2018-02-13 2022-11-22 Cambricon (Xi'an) Semiconductor Co., Ltd. Method and device for dynamically adjusting decimal point positions in neural network computations
US11630666B2 (en) 2018-02-13 2023-04-18 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
US11620130B2 (en) 2018-02-13 2023-04-04 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
US11663002B2 (en) 2018-02-13 2023-05-30 Shanghai Cambricon Information Technology Co., Ltd Computing device and method
US11513586B2 (en) 2018-02-14 2022-11-29 Shanghai Cambricon Information Technology Co., Ltd Control device, method and equipment for processor
US11442785B2 (en) 2018-05-18 2022-09-13 Shanghai Cambricon Information Technology Co., Ltd Computation method and product thereof
US11442786B2 (en) 2018-05-18 2022-09-13 Shanghai Cambricon Information Technology Co., Ltd Computation method and product thereof
US11789847B2 (en) 2018-06-27 2023-10-17 Shanghai Cambricon Information Technology Co., Ltd On-chip code breakpoint debugging method, on-chip processor, and chip breakpoint debugging system
US11966583B2 (en) 2018-08-28 2024-04-23 Cambricon Technologies Corporation Limited Data pre-processing method and device, and related computer device and storage medium
US11703939B2 (en) 2018-09-28 2023-07-18 Shanghai Cambricon Information Technology Co., Ltd Signal processing device and related products
US11544059B2 (en) 2018-12-28 2023-01-03 Cambricon (Xi'an) Semiconductor Co., Ltd. Signal processing device, signal processing method and related products
CN109635938B (en) * 2018-12-29 2022-05-17 电子科技大学 Weight quantization method for autonomous learning impulse neural network
CN109635938A (en) * 2018-12-29 2019-04-16 电子科技大学 Weight quantization method for an autonomous-learning spiking neural network
WO2020155741A1 (en) * 2019-01-29 2020-08-06 清华大学 Fusion structure and method of convolutional neural network and pulse neural network
US11762690B2 (en) 2019-04-18 2023-09-19 Cambricon Technologies Corporation Limited Data processing method and related products
US11934940B2 (en) 2019-04-18 2024-03-19 Cambricon Technologies Corporation Limited AI processor simulation
US11847554B2 (en) 2019-04-18 2023-12-19 Cambricon Technologies Corporation Limited Data processing method and related products
CN110059822A (en) * 2019-04-24 2019-07-26 苏州浪潮智能科技有限公司 Channel-grouping-based low-bit neural network parameter compression and quantization method
CN112085190A (en) * 2019-06-12 2020-12-15 上海寒武纪信息科技有限公司 Neural network quantitative parameter determination method and related product
US12093148B2 (en) 2019-06-12 2024-09-17 Shanghai Cambricon Information Technology Co., Ltd Neural network quantization parameter determination method and related products
US11676029B2 (en) 2019-06-12 2023-06-13 Shanghai Cambricon Information Technology Co., Ltd Neural network quantization parameter determination method and related products
US11676028B2 (en) 2019-06-12 2023-06-13 Shanghai Cambricon Information Technology Co., Ltd Neural network quantization parameter determination method and related products
US11675676B2 (en) 2019-06-12 2023-06-13 Shanghai Cambricon Information Technology Co., Ltd Neural network quantization parameter determination method and related products
CN112085190B (en) * 2019-06-12 2024-04-02 上海寒武纪信息科技有限公司 Method for determining quantization parameter of neural network and related product
WO2020253692A1 (en) * 2019-06-17 2020-12-24 浙江大学 Quantification method for deep learning network parameters
CN110364232B (en) * 2019-07-08 2021-06-11 河海大学 High-performance concrete strength prediction method based on memristor-gradient descent method neural network
CN110364232A (en) * 2019-07-08 2019-10-22 河海大学 High-performance concrete strength prediction method based on a memristor-gradient-descent neural network
WO2021036908A1 (en) * 2019-08-23 2021-03-04 安徽寒武纪信息科技有限公司 Data processing method and apparatus, computer equipment and storage medium
WO2021036890A1 (en) * 2019-08-23 2021-03-04 安徽寒武纪信息科技有限公司 Data processing method and apparatus, computer device, and storage medium
US12001955B2 (en) 2019-08-23 2024-06-04 Anhui Cambricon Information Technology Co., Ltd. Data processing method, device, computer equipment and storage medium
US12112257B2 (en) 2019-08-27 2024-10-08 Anhui Cambricon Information Technology Co., Ltd. Data processing method, device, computer equipment and storage medium
CN110796231A (en) * 2019-09-09 2020-02-14 珠海格力电器股份有限公司 Data processing method, data processing device, computer equipment and storage medium
CN113111997A (en) * 2020-01-13 2021-07-13 中科寒武纪科技股份有限公司 Method, apparatus and computer-readable storage medium for neural network data quantization
CN113111997B (en) * 2020-01-13 2024-03-22 中科寒武纪科技股份有限公司 Method, apparatus and related products for neural network data quantization
CN113111758B (en) * 2021-04-06 2024-01-12 中山大学 SAR image ship target recognition method based on impulse neural network
CN113111758A (en) * 2021-04-06 2021-07-13 中山大学 SAR image ship target identification method based on pulse neural network
CN113974607A (en) * 2021-11-17 2022-01-28 杭州电子科技大学 Sleep snore detecting system based on impulse neural network
CN113974607B (en) * 2021-11-17 2024-04-26 杭州电子科技大学 Sleep snore detecting system based on pulse neural network
CN117391175A (en) * 2023-11-30 2024-01-12 中科南京智能技术研究院 Pulse neural network quantification method and system for brain-like computing platform

Similar Documents

Publication Publication Date Title
CN108717570A (en) Spiking neural network parameter quantization method
CN109754066B (en) Method and apparatus for generating a fixed-point neural network
CN110555523B (en) Short-range tracking method and system based on impulse neural network
CN109816026B (en) Fusion device and method of convolutional neural network and impulse neural network
CN109462520B (en) Network traffic resource situation prediction method based on LSTM model
US20180046914A1 (en) Compression method for deep neural networks with load balance
JP2022501677A (en) Data processing methods, devices, computer devices, and storage media
CN108764568B (en) Data prediction model tuning method and device based on LSTM network
CN110288510B (en) Proximity sensor vision perception processing chip and Internet of things sensing device
CN114186672A (en) Efficient high-precision training algorithm for impulse neural network
CN114039870B (en) Deep learning-based real-time bandwidth prediction method for video stream application in cellular network
Gil et al. Quantization-aware pruning criterion for industrial applications
Horio Chaotic neural network reservoir
Zhang et al. Efficient spiking neural networks with logarithmic temporal coding
CN113962371A (en) Image identification method and system based on brain-like computing platform
CN110263917B (en) Neural network compression method and device
CN117973559A (en) Method and apparatus for solving personalized federal learning using an adaptive network
Roy et al. Hardware efficient, neuromorphic dendritically enhanced readout for liquid state machines
KR20210097382A (en) Rank order coding based spiking convolutional neural network calculation method and handler
CN116709422A (en) MEC task unloading method based on knowledge graph and matching theory
CN114387028B (en) Intelligent analysis method for commodity demand of online shopping platform
CN116188896A (en) Image classification method, system and equipment based on dynamic semi-supervised deep learning
CN113435577B (en) Gradient function learning framework replacement method based on training deep pulse neural network
Jiao A dense inception network with attention mechanism for short-term traffic flow prediction
CN113157453B (en) Task complexity-based high-energy-efficiency target detection task dynamic scheduling method

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20181030)