CN109376854A - Multi-base logarithmic quantization method and device for deep neural network - Google Patents

Multi-base logarithmic quantization method and device for deep neural network Download PDF

Info

Publication number
CN109376854A
Authority
CN
China
Prior art keywords
neural network
deep neural network
base
numerical range
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811300010.1A
Other languages
Chinese (zh)
Other versions
CN109376854B (en)
Inventor
邹卓 (Zou Zhuo)
环宇翔 (Huan Yuxiang)
徐佳唯 (Xu Jiawei)
郑立荣 (Zheng Lirong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Silicon Charm Information Technology (Shanghai) Co Ltd
Original Assignee
Silicon Charm Information Technology (Shanghai) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Silicon Charm Information Technology (Shanghai) Co Ltd
Priority to CN201811300010.1A
Publication of CN109376854A
Application granted
Publication of CN109376854B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a multi-base logarithmic quantization method and device for deep neural networks. The multi-base logarithmic quantization method for a deep neural network comprises: quantizing with N bits, so that all weights or inputs in the deep neural network are represented by 2^N fixed-point numbers, of which 2^(N-1) fixed-point numbers use base 2 and 2^(N-1) use base √2; determining, for all weights in the deep neural network, the numerical range SR_1 with base 2 and the numerical range SR_2 with base √2; quantizing over the above base-2 numerical range SR_1 and base-√2 numerical range SR_2; and distributing the exponents x̂ of the quantized values uniformly over each segment of the log domain. This achieves the advantage of improving the accuracy of the neural network model.

Description

Multi-base logarithmic quantization method and device for deep neural network
Technical field
The present invention relates to the field of artificial intelligence, and in particular to a multi-base logarithmic quantization method and device for deep neural networks.
Background technique
Deep neural network (DNN) research has developed rapidly in recent years and achieved preliminary applications. Existing artificial-intelligence applications depend on cloud computing and are unsuitable for fields with strict real-time, safety, bandwidth, or energy constraints, such as unmanned vehicles and Internet-of-Things terminals. How a deep neural network processor can meet the speed and efficiency demands of local intelligent computation on IoT terminals has become a key technology in urgent need of a breakthrough.
Deep learning algorithms, represented by deep neural networks, provide a powerful tool for IoT-oriented big-data analysis and processing, realizing functions such as recognition and classification, feature extraction, cognitive decision-making, and predictive judgment. As implemented, however, these algorithms usually require the support of a CPU or GPU. For example, the classical deep convolutional network (CNN) model AlexNet requires at least 720 million multiplications, and its model parameters reach 240 MB. Implementing such complex data-analysis algorithms on low-power devices therefore remains a huge challenge, and research on low-power deep learning, with deep neural networks as its representative, is a hot spot for academia and industry at home and abroad.
The complexity and energy cost of DNN computation mostly come from the large number of multiplications and from the storage and access of large-scale network parameters. Current low-power optimization work is therefore carried out mainly around these two aspects.
Existing convolutional neural networks mainly use fixed-point or floating-point multipliers to realize multiply-accumulate modules, but the hardware complexity of a multiplier is large, and the power and area overhead on the chip is high. A large-scale neural network needs a large number of multiply-accumulate units, so the cost of convolution becomes very high.
The Log2 quantization algorithm was first proposed by a Stanford University research group in 2017. By quantizing the two multiplicands of the convolution, the weight and the input (i.e., the previous layer's activation), into the form 2^x, the traditional convolution multiplication a × b can be converted into a shift operation sign(b) × Bitshift(a, log2|b|). However, this method has an accuracy ceiling: as the quantization precision continues to increase, the accuracy of the neural network model no longer improves.
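As an illustration of this shift conversion, here is a minimal sketch (ours, not from the patent), where `w_exp` and `w_sign` stand for the quantized exponent log2|w| and the sign of the weight:

```python
def shift_mul(a: int, w_exp: int, w_sign: int) -> int:
    """Multiply activation a by weight w = w_sign * 2**w_exp with a shift.

    Realizes sign(b) * Bitshift(a, log2|b|): left shift for non-negative
    exponents, right shift for negative ones.
    """
    p = a << w_exp if w_exp >= 0 else a >> (-w_exp)
    return p if w_sign >= 0 else -p

# Example: a = 12, w = -0.25 = -(2**-2), so a * w = -3
assert shift_mul(12, -2, -1) == -3
```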
Summary of the invention
In view of the above problems, it is an object of the present invention to propose a multi-base logarithmic quantization method and device for deep neural networks, so as to achieve the advantage of improving the accuracy of the neural network model.
To achieve the above object, in one aspect the technical solution of the present invention provides a multi-base logarithmic quantization method for a deep neural network, comprising:
quantizing with N bits, so that all weights or inputs in the deep neural network are represented by 2^N fixed-point numbers, of which 2^(N-1) fixed-point numbers use base 2 and 2^(N-1) use base √2;
determining, for all weights in the deep neural network, the numerical range SR_1 with base 2 and the numerical range SR_2 with base √2;
quantizing over the above base-2 numerical range SR_1 and base-√2 numerical range SR_2;
distributing the exponents x̂ of the quantized values uniformly over each segment of the log domain.
Preferably, in determining the base-2 numerical range SR_1 and the base-√2 numerical range SR_2 of all weights in the deep neural network,
SR_1 = round(log2(max - min)) - 2^(N-2)
SR_2 = 2 × round(log2(max - min))
where max is the largest weight absolute value and min is the smallest weight absolute value.
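The following NumPy sketch shows how the two range parameters could be computed from a layer's weights according to the formulas above; the helper name `value_ranges` is ours:

```python
import numpy as np

def value_ranges(weights: np.ndarray, n_bits: int) -> tuple[int, int]:
    """Compute the base-2 range SR_1 and the base-sqrt(2) range SR_2.

    max and min are the largest and smallest weight absolute values,
    following the patent's definitions.
    """
    mags = np.abs(weights)
    fsr = int(round(float(np.log2(mags.max() - mags.min()))))
    sr_1 = fsr - 2 ** (n_bits - 2)   # SR_1 = round(log2(max - min)) - 2^(N-2)
    sr_2 = 2 * fsr                   # SR_2 = 2 * round(log2(max - min))
    return sr_1, sr_2

# Example: weight magnitudes spanning roughly 0.003 to 0.9, N = 4 bits
w = np.array([0.003, -0.02, 0.9, -0.5, 0.1])
print(value_ranges(w, 4))  # round(log2(0.897)) = 0  ->  (-4, 0)
```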
Preferably, quantizing over the above base-2 numerical range SR_1 and base-√2 numerical range SR_2 specifically means that the quantization of a value is determined by the formula Log_Quant;
the Log_Quant formula is: Log_Quant(x) = 0 when x = 0; otherwise 2^x̂ on the base-2 segment and (√2)^x̂ on the base-√2 segment (the sign of x is handled separately).
Preferably, distributing the exponents x̂ of the quantized values uniformly over each segment of the log domain is specifically:
x̂ = Round(xl) clipped to the corresponding index set, where xl_1 = log2|x| for the base-2 segment and xl_2 = log_√2|x| = 2 × log2|x| for the base-√2 segment.
Preferably, the fixed-point index sets are specifically:
P_1 = {SR_1 - 2^(N-1) + 1, SR_1 - 2^(N-1) + 2, ..., SR_1 - 1, SR_1},
P_2 = {SR_2 - 2^(N-1) + 1, SR_2 - 2^(N-1) + 2, ..., SR_2 - 1, SR_2}.
Preferably, the threshold exponents are specifically the midpoints between adjacent fixed-point exponents within each segment.
A second aspect of the technical solution of the present invention provides a multi-base logarithmic quantization device for a deep neural network, comprising:
a √2 computing unit and a Log2 computing unit, wherein the input of the √2 computing unit is connected to a first data selector, the output of the √2 computing unit is connected to a third data selector, the input of the Log2 computing unit is connected to the first data selector, and the output of the Log2 computing unit is connected to the third data selector.
Preferably, describedComputing unit, including the first value information decoder, the first shift unit, adder and Second data selector, the first value information decoder output signal are transmitted to the selection of the second data after the first shift unit Adder is arranged in the input terminal of device, second data selector.
Preferably, the Log2 computing unit includes a second weight-information decoder and a second shifter; the output of the second weight-information decoder is transmitted to the second shifter.
According to a specific implementation of an embodiment of the present invention, the technical solution has the following beneficial effect:
the technical solution of the present invention proposes a multi-base logarithmic quantization method, decomposing the existing base-2 quantization into quantization with base 2 and quantization with base √2, thereby improving the accuracy of the neural network model.
The technical solution of the present invention is described in further detail below through the drawings and embodiments.
Detailed description of the invention
Fig. 1 is a flowchart of the multi-base logarithmic quantization method for a deep neural network according to an embodiment of the present invention;
Fig. 2 is a schematic block diagram of the multi-base logarithmic quantization device for a deep neural network according to an embodiment of the present invention;
Fig. 3 is a comparison diagram of several quantization methods according to an embodiment of the present invention.
In conjunction with the attached drawings, the reference numerals in the embodiments of the present invention are as follows:
1 - √2 computing unit; 2 - Log2 computing unit; 3 - first weight-information decoder; 4 - second weight-information decoder; 5 - second data selector; 6 - first data selector; 7 - third data selector.
Specific embodiment
It will be appreciated that the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
The present invention uses logarithmic quantization: the goal is to quantize the two multiplicands of the convolution, the weight and the input (the previous layer's activation), into logarithmic form, so that the traditional convolution multiplication can be converted into shift operations, substantially reducing computational complexity and power consumption. Existing logarithmic quantization methods are all based on Log2, i.e., quantization with base 2, but because this method is limited by its dynamic range, it considerably affects network accuracy. To solve this problem, the present invention proposes a multi-base segmented quantization method and designs a corresponding hardware implementation, improving the accuracy of the traditional base-2 logarithmic quantization method.
The traditional base-2 quantization method is as follows:
Quantize with N bits: all weights or inputs of this layer are represented by 2^N fixed-point numbers;
N-bit quantization is the common base-2 uniform quantization, a binary quantization method running from the lowest bit (LSB), representing 2^0, to the highest bit (MSB), representing 2^N. For example, performing N-bit quantization on a number in the range 0-20 divides 20 into 2^N equal parts; that is, the input dynamic range is evenly divided into 2^N parts.
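For contrast with the logarithmic methods below, a minimal sketch of this uniform case (the helper name `uniform_quant` is ours):

```python
import numpy as np

def uniform_quant(x: np.ndarray, x_max: float, n_bits: int) -> np.ndarray:
    """Uniformly quantize values in [0, x_max] to 2**n_bits equal steps."""
    step = x_max / 2 ** n_bits
    return np.round(x / step) * step

# 3-bit quantization of the range [0, 20]: step = 20 / 8 = 2.5
print(uniform_quant(np.array([1.3, 7.3, 19.0]), 20.0, 3))  # [2.5, 7.5, 20.0]
```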
Determine this layer's weight value range FSR = round(log2(max - min)), where max is the largest weight absolute value and min is the smallest weight absolute value;
The quantization of a value is determined by the formula Log_Quant:
Log_Quant(x) = 0 when x = 0, and 2^x̂ otherwise (the sign of x is handled separately by the shift operation).
The exponents x̂ of the 2^N fixed-point numbers are uniformly distributed on the log domain, with x̂ = Round(x_log) clipped to the fixed-point index set, where x_log = log2|x|;
the fixed-point index set is P_log = {FSR - 2^N + 1, FSR - 2^N + 2, ..., FSR - 1, FSR};
and the threshold exponents lie at the midpoints between adjacent indices of P_log.
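A NumPy sketch of this traditional base-2 LogQuant under the definitions above; treating the uniform distribution of exponents as clipping to P_log is our reading:

```python
import numpy as np

def log2_quant(x: np.ndarray, fsr: int, n_bits: int) -> np.ndarray:
    """Traditional base-2 logarithmic quantization.

    Zero stays zero; any other value becomes sign(x) * 2**x_hat with
    x_hat = round(log2|x|) clipped to P_log = {FSR - 2^N + 1, ..., FSR}.
    """
    out = np.zeros_like(x, dtype=np.float64)
    nz = x != 0
    x_log = np.log2(np.abs(x[nz]))
    x_hat = np.clip(np.round(x_log), fsr - 2 ** n_bits + 1, fsr)
    out[nz] = np.sign(x[nz]) * 2.0 ** x_hat
    return out

# Example: 4-bit quantization with FSR = 0 snaps 0.3 to 2**-2 = 0.25
print(log2_quant(np.array([0.3, -0.7, 0.0]), fsr=0, n_bits=4))  # [0.25, -0.5, 0.0]
```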
The quantization method with base √2 proposed in the present invention
As shown in Fig. 1, a multi-base logarithmic quantization method for a deep neural network comprises:
Step S101: quantize with N bits, representing all weights or inputs in the deep neural network by 2^N fixed-point numbers, of which 2^(N-1) fixed-point numbers use base 2 and 2^(N-1) use base √2;
Step S102: determine the base-2 numerical range SR_1 and the base-√2 numerical range SR_2 of all weights in the deep neural network;
Step S103: quantize over the above base-2 numerical range SR_1 and base-√2 numerical range SR_2;
Step S104: distribute the exponents x̂ of the quantized values uniformly over each segment of the log domain.
The √2 quantization algorithm differs from the Log2 quantization algorithm in the base; FSR, x_log, and Log_Quant change accordingly.
Compared with Log2, √2 provides more fixed points near weights of large absolute value, so the network accuracy ceiling of the √2 quantization algorithm is higher.
Quantize with N bits: all weights or inputs of a given layer of the deep neural network are represented by 2^N fixed-point numbers, of which 2^(N-1) fixed-point numbers use base 2 and 2^(N-1) use base √2;
Determine the weight value ranges SR_1 (base 2) and SR_2 (base √2) of the given layer of the deep neural network:
SR_1 = round(log2(max - min)) - 2^(N-2),
SR_2 = 2 × round(log2(max - min)),
where max is the largest weight absolute value and min is the smallest weight absolute value;
The quantization of a value is determined by the formula Log_Quant:
Log_Quant(x) = 0 when x = 0; otherwise 2^x̂ on the base-2 segment and (√2)^x̂ on the base-√2 segment (the sign of x is handled separately).
The exponents x̂ of the 2^N fixed-point numbers are uniformly distributed on each segment of the log domain, with x̂ = Round(xl) clipped to the corresponding index set, where xl_1 = log2|x| for the base-2 segment and xl_2 = log_√2|x| = 2 × log2|x| for the base-√2 segment.
The index sets of the fixed points:
P_1 = {SR_1 - 2^(N-1) + 1, SR_1 - 2^(N-1) + 2, ..., SR_1 - 1, SR_1},
P_2 = {SR_2 - 2^(N-1) + 1, SR_2 - 2^(N-1) + 2, ..., SR_2 - 1, SR_2},
and the threshold exponents lie at the midpoints between adjacent indices within each segment.
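Putting the pieces together, a NumPy sketch of the multi-base quantizer under the half-and-half segmentation used in this example; snapping each nonzero value to the nearest representable level in the log domain is our reading of the midpoint-threshold rule:

```python
import numpy as np

def multi_base_quant(x: np.ndarray, sr_1: int, sr_2: int, n_bits: int) -> np.ndarray:
    """Multi-base (base-2 / base-sqrt(2)) logarithmic quantization sketch.

    Half of the 2**n_bits fixed points are base-2 levels 2**e with e in P_1,
    the other half base-sqrt(2) levels sqrt(2)**e with e in P_2. Each nonzero
    value is snapped to the level nearest in the log domain, which places
    the decision thresholds at log-domain midpoints.
    """
    half = 2 ** (n_bits - 1)
    p1 = np.arange(sr_1 - half + 1, sr_1 + 1)    # base-2 exponents
    p2 = np.arange(sr_2 - half + 1, sr_2 + 1)    # base-sqrt(2) exponents
    levels = np.concatenate([2.0 ** p1, np.sqrt(2.0) ** p2])
    out = np.zeros_like(x, dtype=np.float64)
    nz = x != 0
    d = np.abs(np.log2(np.abs(x[nz]))[:, None] - np.log2(levels)[None, :])
    out[nz] = np.sign(x[nz]) * levels[np.argmin(d, axis=1)]
    return out

# Example: N = 4, SR_1 = -4, SR_2 = 0 (as computed by value_ranges above);
# small magnitudes land on base-2 levels, large ones on base-sqrt(2) levels
w = np.array([0.003, -0.02, 0.9, -0.5, 0.1])
print(multi_base_quant(w, sr_1=-4, sr_2=0, n_bits=4))
```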
As shown in Fig. 2, a multi-base logarithmic quantization device for a deep neural network comprises:
a √2 computing unit and a Log2 computing unit, wherein the input of the √2 computing unit is connected to a first data selector, the output of the √2 computing unit is connected to a third data selector, the input of the Log2 computing unit is connected to the first data selector, and the output of the Log2 computing unit is connected to the third data selector.
Preferably, the √2 computing unit (√2 approximate shift-accumulate) includes a first weight-information decoder, a first shifter, an adder, and a second data selector; the output signal of the first weight-information decoder is transmitted through the first shifter to the second data selector, and the adder is arranged at the input of the second data selector.
Preferably, the Log2 computing unit (Log2 shift-accumulate) includes a second weight-information decoder and a second shifter; the output of the second weight-information decoder is transmitted to the second shifter.
In the hardware implementation of the device, a weight quantized with base √2 takes the form (√2)^x, so the multiplication cannot be directly converted into a shift operation. To solve this problem, a √2 approximation is proposed.
Multiplication can thus be converted into shift-and-add operations. The approximation quality affects network accuracy to a certain extent: the higher the precision of the √2 approximation, the higher the network accuracy in general, but the more complex the corresponding hardware structure.
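One possible shift-and-add approximation, as a sketch: for an odd exponent, (√2)^x = 2^((x-1)/2) × √2, and the residual √2 factor can be approximated by a sum of powers of two, here 1 + 2^-2 + 2^-3 + 2^-5 = 1.40625 (about 0.6% below √2). The patent does not fix a particular term selection, so this choice is an assumption:

```python
def sqrt2_pow_mul(a: int, x: int) -> int:
    """Approximate a * sqrt(2)**x using only shifts and adds.

    sqrt(2)**x = 2**(x//2) for even x; for odd x one residual sqrt(2)
    factor remains and is approximated as 1 + 2^-2 + 2^-3 + 2^-5.
    """
    if x % 2:                                    # odd exponent
        a = a + (a >> 2) + (a >> 3) + (a >> 5)   # a *= ~1.40625
        x -= 1
    k = x // 2                                   # exact power-of-two part
    return a << k if k >= 0 else a >> (-k)

# Example: 100 * sqrt(2)**3 = 282.84...; the sketch yields 280
assert sqrt2_pow_mul(100, 3) == 280
```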
To further increase precision, the present invention proposes the multi-base quantization method (a segmented quantization method combining base √2 and base 2).
The multi-base logarithmic quantization method adopts the concept of segmented quantization: instead of taking the same base for all weights or inputs of a layer, the values are segmented by absolute value, and the weights or inputs in different segments take different bases. For example, Log2 quantization is used on the segment of small absolute values, and √2 quantization on the segment of large absolute values. In principle there are many possible segmentations; for example, half of the fixed points use base 2 and the other half use base √2. The specific algorithm and hardware implementation are described below using this example.
The hardware implementation of multi-base logarithmic quantization needs to support computation in both bases, i.e., it needs a Log2 computing section and a √2 computing section. When the two segments contain the same number of fixed points, the selection between the computing sections can be realized by checking the MSB: fixed points 0-7 use base 2 and fixed points 8-15 use base √2, so the Log2 computing section is selected when MSB = 0 and the √2 computing section is selected when MSB = 1.
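A behavioral sketch of this MSB dispatch for a 4-bit code; mapping the low bits onto the index sets P_1 and P_2 in ascending order is our assumption:

```python
def decode_level(code: int, sr_1: int, sr_2: int, n_bits: int = 4) -> float:
    """Map an n-bit fixed-point code to its level, steering on the MSB.

    MSB = 0 -> Log2 section: exponent e in P_1, level 2**e.
    MSB = 1 -> sqrt(2) section: exponent e in P_2, level sqrt(2)**e.
    """
    half = 1 << (n_bits - 1)
    msb, low = code >> (n_bits - 1), code & (half - 1)
    if msb == 0:
        e = sr_1 - half + 1 + low    # walk P_1 from its smallest exponent
        return 2.0 ** e
    e = sr_2 - half + 1 + low        # walk P_2 from its smallest exponent
    return 2.0 ** (e / 2)            # sqrt(2)**e

# Codes 0-7 give base-2 levels, codes 8-15 base-sqrt(2) levels
print([decode_level(c, sr_1=-4, sr_2=0) for c in (0, 7, 8, 15)])
```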
The present invention mainly optimizes the precision (accuracy) of deep neural networks quantized with base-2 logarithmic quantization. Experimental results on AlexNet show that, compared with the traditional base-2 logarithmic quantization method, the √2-base quantization method of the present invention raises the Top-5 classification accuracy from 69.3% to 75.5% (at 5-bit quantization precision). With the segmented multi-base quantization method, a classification accuracy of 72.3% is achieved at 4-bit quantization precision (traditional base-2 logarithmic quantization: 69.3%). As shown in Fig. 3, 301 is the traditional uniform quantization method, 302 is the base-2 logarithmic quantization method, 303 is the √2 quantization method, and 304 is the multi-base segmented quantization method of the present technical solution.
The technical solution of the present invention can be extended to three or more segments.
The above description is merely a specific embodiment, but the protection scope of the present invention is not limited thereto. Any changes or substitutions that can be easily thought of by those familiar with the art within the technical scope disclosed by the present invention shall be included within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. A multi-base logarithmic quantization method for a deep neural network, characterized by comprising:
quantizing with N bits, so that all weights or inputs in the deep neural network are represented by 2^N fixed-point numbers, of which 2^(N-1) fixed-point numbers use base 2 and 2^(N-1) use base √2;
determining, for all weights in the deep neural network, the numerical range SR_1 with base 2 and the numerical range SR_2 with base √2;
quantizing over the above base-2 numerical range SR_1 and base-√2 numerical range SR_2;
distributing the exponents x̂ of the quantized values uniformly over each segment of the log domain.
2. The multi-base logarithmic quantization method for a deep neural network according to claim 1, characterized in that, in determining the base-2 numerical range SR_1 and the base-√2 numerical range SR_2 of all weights in the deep neural network,
SR_1 = round(log2(max - min)) - 2^(N-2)
SR_2 = 2 × round(log2(max - min))
where max is the largest weight absolute value and min is the smallest weight absolute value.
3. The multi-base logarithmic quantization method for a deep neural network according to claim 2, characterized in that quantizing over the above base-2 numerical range SR_1 and base-√2 numerical range SR_2 specifically means that the quantization of a value is determined by the formula Log_Quant;
the Log_Quant formula is: Log_Quant(x) = 0 when x = 0; otherwise 2^x̂ on the base-2 segment and (√2)^x̂ on the base-√2 segment (the sign of x is handled separately).
4. The multi-base logarithmic quantization method for a deep neural network according to claim 3, characterized in that distributing the exponents x̂ of the quantized values uniformly over each segment of the log domain is specifically:
x̂ = Round(xl) clipped to the corresponding index set, where xl_1 = log2|x| for the base-2 segment and xl_2 = log_√2|x| = 2 × log2|x| for the base-√2 segment.
5. The multi-base logarithmic quantization method for a deep neural network according to claim 4, characterized in that the fixed-point index sets are specifically:
P_1 = {SR_1 - 2^(N-1) + 1, SR_1 - 2^(N-1) + 2, ..., SR_1 - 1, SR_1},
P_2 = {SR_2 - 2^(N-1) + 1, SR_2 - 2^(N-1) + 2, ..., SR_2 - 1, SR_2}.
6. The multi-base logarithmic quantization method for a deep neural network according to claim 5, characterized in that the threshold exponents are specifically the midpoints between adjacent fixed-point exponents within each segment.
7. A multi-base logarithmic quantization device for a deep neural network, characterized by comprising:
a √2 computing unit and a Log2 computing unit, wherein the input of the √2 computing unit is connected to a first data selector, the output of the √2 computing unit is connected to a third data selector, the input of the Log2 computing unit is connected to the first data selector, and the output of the Log2 computing unit is connected to the third data selector.
8. The multi-base logarithmic quantization device for a deep neural network according to claim 7, characterized in that the √2 computing unit includes a first weight-information decoder, a first shifter, an adder, and a second data selector; the output signal of the first weight-information decoder is transmitted through the first shifter to the second data selector, and the adder is arranged at the input of the second data selector.
9. The multi-base logarithmic quantization device for a deep neural network according to claim 7, characterized in that the Log2 computing unit includes a second weight-information decoder and a second shifter, and the output of the second weight-information decoder is transmitted to the second shifter.
CN201811300010.1A 2018-11-02 2018-11-02 Multi-base logarithm quantization device for deep neural network Active CN109376854B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811300010.1A CN109376854B (en) 2018-11-02 2018-11-02 Multi-base logarithm quantization device for deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811300010.1A CN109376854B (en) 2018-11-02 2018-11-02 Multi-base logarithm quantization device for deep neural network

Publications (2)

Publication Number Publication Date
CN109376854A true CN109376854A (en) 2019-02-22
CN109376854B CN109376854B (en) 2022-08-16

Family

ID=65397384

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811300010.1A Active CN109376854B (en) 2018-11-02 2018-11-02 Multi-base logarithm quantization device for deep neural network

Country Status (1)

Country Link
CN (1) CN109376854B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105844330A (en) * 2016-03-22 2016-08-10 华为技术有限公司 Data processing method of neural network processor and neural network processor
US20180046919A1 (en) * 2016-08-12 2018-02-15 Beijing Deephi Intelligence Technology Co., Ltd. Multi-iteration compression for deep neural networks
CN106485316A (en) * 2016-10-31 2017-03-08 北京百度网讯科技有限公司 Neural network model compression method and device
US20180174022A1 (en) * 2016-12-20 2018-06-21 Google Inc. Generating an output for a neural network output layer
CN106897734A (en) * 2017-01-12 2017-06-27 南京大学 K average clusters fixed point quantization method heterogeneous in layer based on depth convolutional neural networks
CN107644254A (en) * 2017-09-09 2018-01-30 复旦大学 A kind of convolutional neural networks weight parameter quantifies training method and system
CN108229681A (en) * 2017-12-28 2018-06-29 郑州云海信息技术有限公司 A kind of neural network model compression method, system, device and readable storage medium storing program for executing

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
AOJUN ZHOU et al.: "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights", arXiv, Computer Vision and Pattern Recognition *
BENOIT JACOB et al.: "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference", arXiv, Machine Learning *
JUNGWOOK CHOI et al.: "PACT: Parameterized Clipping Activation for Quantized Neural Networks", arXiv, Computer Vision and Pattern Recognition *
LEI Jie et al.: "A Survey of Deep Network Model Compression" (深度网络模型压缩综述), Journal of Software (软件学报) *

Also Published As

Publication number Publication date
CN109376854B (en) 2022-08-16

Similar Documents

Publication Publication Date Title
CN108846517B (en) Integration method for predicating quantile probabilistic short-term power load
CN108009667A (en) A kind of energy demand total amount and structure prediction system
CN110413255A (en) Artificial neural network method of adjustment and device
Lin et al. The exploration of a temporal convolutional network combined with encoder-decoder framework for runoff forecasting
CN107515839A (en) The improved quality of power supply THE FUZZY EVALUATING METHOD for assigning power algorithm process
CN107861916A (en) A kind of method and apparatus for being used to perform nonlinear operation for neutral net
Nazari et al. Tot-net: An endeavor toward optimizing ternary neural networks
CN105488592A (en) Method for predicting generated energy of photovoltaic power station
Yuan et al. Application of fractional order-based grey power model in water consumption prediction
CN116307215A (en) Load prediction method, device, equipment and storage medium of power system
CN107479856A (en) Arctan function data structure and method for building up, function value-acquiring method and device
Zheng et al. Towards many-objective optimization: objective analysis, multi-objective optimization and decision-making
CN113435208A (en) Student model training method and device and electronic equipment
CN113346504A (en) Active power distribution network voltage control method based on data knowledge driving
CN109376854A (en) More truth of a matter logarithmic quantization method and devices for deep neural network
Huan et al. Multi-step prediction of dissolved oxygen in rivers based on random forest missing value imputation and attention mechanism coupled with recurrent neural network
CN114282658B (en) Method, device and medium for analyzing and predicting flow sequence
CN109902870A (en) Electric grid investment prediction technique based on AdaBoost regression tree model
CN115169819A (en) Power supply business hall efficiency data processing method
CN114881256A (en) Equipment detection method and device
CN108364136B (en) Water resource shortage risk analysis method and system based on evidence reasoning
CN110807599A (en) Method, device, server and storage medium for deciding electrochemical energy storage scheme
Ramli et al. Fuzzy time series forecasting model based on centre of gravity similarity measure
CN111091217B (en) Building short-term load prediction method and system
Chen et al. Endpoint Temperature Prediction of Molten Steel in VD Furnace Based on AdaBoost. RT-ELM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant