CN107729984A - Computing device and method for neural network activation functions - Google Patents

Computing device and method for neural network activation functions

Info

Publication number
CN107729984A
CN107729984A (application CN201711020801.4A)
Authority
CN
China
Prior art keywords
activation function
function
unit
computing device
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711020801.4A
Other languages
Chinese (zh)
Inventor
韩银和
许浩博
王颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS
Priority to CN201711020801.4A
Publication of CN107729984A
Pending legal-status Critical Current

Classifications

    • G  PHYSICS
    • G06  COMPUTING; CALCULATING OR COUNTING
    • G06N  COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00  Computing arrangements based on biological models
    • G06N3/02  Neural networks

Abstract

The present invention relates to a computing device for neural network activation functions, comprising a lookup unit for determining the corresponding linear function parameters according to the activation function and the input value of the activation function variable, and a computing unit for performing the calculation using the input value and the corresponding linear function parameters, wherein the linear function parameters can be optimized according to the concavity or convexity of the activation function.

Description

Computing device and method for neural network activation functions
Technical field
The present invention relates to the field of computing, and in particular to a computing device and method for neural network activation functions.
Background technology
With the development of science and technology, neural networks, as a machine learning technique that simulates the human brain, have long been a focus of artificial intelligence research and are widely used in many fields, such as image recognition. One of the keys to solving practical problems with neural networks is to use activation functions to give the network nonlinear modeling capability. In general, activation functions appear in every layer of a neural network, and some of them, such as the sigmoid function, occur with particularly high frequency in certain algorithms. Consequently, the speed and energy consumption of activation-function computation directly constrain the computational efficiency of the entire neural network.
Traditional neural network computing devices usually rely on a general-purpose arithmetic logic unit (ALU) to compute activation functions; resource utilization is low and execution is slow. In particular, with the development of portable electronic devices, the circuit hardware overhead of such devices is excessive and cannot meet the demands of small-size, low-power application scenarios.
Therefore, a fast and low-power computing device and method suitable for neural networks is needed.
Summary of the invention
The present invention provides a computing device and method for neural network activation functions, comprising a lookup unit for determining the corresponding linear function parameters according to the activation function and the input value of the activation function variable, and a computing unit for performing the calculation using the input value and the corresponding linear function parameters, wherein the linear function parameters are optimized according to the concavity or convexity of the activation function.
Preferably, the lookup unit comprises a matching unit for matching the input value of the activation function to the corresponding function interval, and a look-up table unit for determining the corresponding linear function parameters according to the function interval.
Preferably, the matching unit is further used to store the function intervals, and the look-up table unit is further used to store the linear function parameters corresponding to the function intervals.
Preferably, the linear function parameters comprise a slope and an intercept; the slope is calculated from the endpoint values of the function interval, and the intercept is obtained by optimizing, using the maximum error, an initial intercept calculated from the endpoint values of the function interval, where the maximum error is the largest difference, over the function interval, between the value of the activation function and the value of the initial linear function.
Preferably, if the activation function is concave in the function interval, the intercept equals the initial intercept minus half of the maximum error.
Preferably, if the activation function is convex in the function interval, the intercept equals the initial intercept plus half of the maximum error.
Preferably, the look-up table unit comprises a value-negation unit.
Preferably, when the activation function is centrally symmetric about the origin, the matching unit matches input values of the activation function with equal absolute values to the same function interval.
Preferably, the computing unit comprises a multiplication unit and an addition unit.
According to another aspect of the present invention, a computing method for neural network activation functions is also provided, comprising the following steps:
dividing the activation function into several function intervals;
calculating the linear function parameters, namely slope and intercept, corresponding to each function interval using the endpoints of that interval;
optimizing the intercept according to the concavity or convexity of the function interval, and storing the slope and the optimized intercept in a look-up table;
looking up, in the look-up table, the linear function parameters of the function interval corresponding to the input value of the activation function;
performing the calculation using the input value of the activation function and the corresponding linear function parameters.
Compared with the prior art, the present invention achieves the following advantageous effects: the computing device and method for neural network activation functions provided by the invention adopt a piecewise-approximation method, dividing the computation range of the activation function into several intervals and approximating the activation function by a linear function within each interval; by using the concavity or convexity of the activation function to optimize the linear function parameters, the degree of approximation is improved, so that calculation accuracy is guaranteed while the calculation speed of the neural network is increased.
Brief description of the drawings
Fig. 1 is a schematic structural diagram of the activation function computing device according to a preferred embodiment of the present invention.
Fig. 2 is a schematic structural diagram of the computing unit according to a preferred embodiment of the present invention.
Fig. 3 is a flow chart of the method for calculating an activation function using the computing device shown in Fig. 1.
Detailed description of the embodiments
In order to make the objectives, technical solutions and advantages of the present invention clearer, the computing device and method for neural network activation functions provided in embodiments of the invention are further described below with reference to the accompanying drawings.
Activation functions are used in neural networks to add nonlinear factors. In general, a purely linear model cannot express practical problems accurately. For example, when a neural network is used for image processing, a convolutional network may assign a value to each pixel; although this operation is linear, real samples are not necessarily linearly separable, so nonlinear factors must be introduced to solve problems that a linear model cannot.
It is well known that, compared with typical activation functions, linear functions are faster and easier to evaluate. In order to simplify the calculation of activation functions, the inventors propose a computing method that represents the activation function with linear functions. Specifically, the activation function is divided into several intervals, and within each interval the function value is calculated by linear approximation so that the actual activation function is approached as closely as possible. This method avoids complex calculations in the activation function such as power operations, division and trigonometric operations, and also saves circuit area overhead and energy consumption.
To approximate the activation function piecewise by linear functions, the computation range of the activation function must be divided into several intervals, and within each interval the function curve is approximated by the straight line connecting the endpoint values of that interval.
First, the computation range of the activation function is divided into several intervals; then, taking the endpoint values of each interval as reference points, the initial parameter values of the linear function in that interval are calculated. Suppose the expression of the linear function is y = ax + b, where a is the slope and b is the y-intercept. If the interval endpoints are (x0, y0) and (x1, y1), the corresponding linear function parameter values a and b can be calculated.
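For illustration only (not part of the patent itself), the following minimal Python sketch computes the initial slope and intercept from the two interval endpoints; the function name is chosen here purely for this example.

```python
def initial_linear_params(x0, y0, x1, y1):
    """Slope a and initial intercept b0 of the straight line through the
    interval endpoints (x0, y0) and (x1, y1), i.e. y = a*x + b0."""
    a = (y1 - y0) / (x1 - x0)  # slope of the connecting line
    b0 = y0 - a * x0           # y-intercept before error compensation
    return a, b0
```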
Secondly, in order to make the linear function obtained in an interval closer to the actual activation function, the parameter value b can be adjusted by error compensation according to the concavity or convexity of the activation function. Taking a concave function as an example: if the activation function is concave in the current interval, then, as shown by extensive experiments of the inventors, the final value of b can be set to
b = b0 - Emax/2
where b0 denotes the y-intercept value calculated directly from the interval endpoints and b denotes the adjusted y-intercept value. Emax denotes the maximum error, defined as
Emax = max |fa(x) - fi(x)|, taken over all x in the interval,
where fa denotes the expression of the activation function and fi denotes the expression of the linear approximating function calculated directly from the interval endpoints. Emax therefore represents the maximum error, within the interval, between the activation function fa and the linear approximating function fi obtained directly from the interval endpoints.
After the final parameter values are determined, the obtained function intervals and the corresponding linear function parameter values can be made into a look-up table and stored in the look-up table unit. Thus, after the matching unit receives the function variable input value and matches it to the corresponding function interval, the corresponding linear function parameters can be found using the look-up table unit, and the computing unit then performs the operation according to the function variable and the corresponding linear function parameters.
In one embodiment of the present invention, the activation function may also be convex in the interval; similarly, the final parameter value can be defined as
b = b0 + Emax/2
where b0 denotes the y-intercept value calculated directly from the interval endpoints, b denotes the adjusted y-intercept value, and Emax denotes the maximum error as defined above.
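As an illustrative sketch of this intercept adjustment (again not the patent's implementation), the helper below applies the rule stated above, b = b0 - Emax/2 for an interval labeled concave and b = b0 + Emax/2 for one labeled convex; Emax is estimated here by dense sampling, which is only one possible way of obtaining it, and the function name and signature are assumptions for this example.

```python
def optimize_intercept(f, x0, x1, a, b0, convex, samples=1000):
    """Adjust the initial intercept b0 by half of the maximum error Emax,
    following the rule stated in the text above.  The concave/convex
    labels follow the terminology of the description."""
    e_max = 0.0
    for i in range(samples + 1):
        x = x0 + (x1 - x0) * i / samples
        e_max = max(e_max, abs(f(x) - (a * x + b0)))  # |fa(x) - fi(x)|
    return b0 + e_max / 2 if convex else b0 - e_max / 2
```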
The computing device for neural network activation functions provided by the invention is further explained below, taking the sigmoid function as an example.
For example, to calculate the values of the sigmoid function in the interval [-2, 2], recall that the sigmoid function is defined as f(x) = 1/(1 + e^(-x)).
First, the computation range [-2, 2] of the function is divided into several function intervals. For example, the range can be divided into four intervals:
[-2,-1), [-1,0), [0,1), [1,2]
According to the definition of the sigmoid function, the endpoint values of each interval are:
(-2, 0.119203), (-1, 0.268941), (0, 0.5), (1, 0.731059), (2, 0.880797)
Secondly, the initial function parameter values of the corresponding function intervals are calculated from the five endpoint values obtained above, as shown in Table 1.
Table 1. Initial linear function parameters for the different function intervals
Interval  Parameter a  Parameter b
[-2,-1) 0.14973850 0.58132008
[-1,0) 0.23105858 0.50000000
[0,1) 0.23105858 0.50000000
[1,2] 0.14973850 0.58132008
Then, the maximum error Emax in each interval is calculated by the above formula; for example, in the interval [0,1), Emax = 4.2 × 10⁻⁷. The above parameter b0 is optimized using the maximum error, so that the final parameter values are obtained and stored as a look-up table in the look-up table unit, as shown in Table 2.
Table 2. Optimized linear function parameters (look-up table) for the different function intervals
Interval  Parameter a  Parameter b
[-2,-1) 0.14973850 0.58131987
[-1,0) 0.23105858 0.49999979
[0,1) 0.23105858 0.49999979
[1,2] 0.14973850 0.58131987
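To make the construction of such a table concrete, the following Python sketch reuses the illustrative helpers sketched earlier to build entries for the non-negative intervals of the sigmoid example; entries for the negative intervals can be built analogously with the appropriate concavity flag. Because Emax is estimated here by sampling, the resulting numbers need not coincide exactly with Tables 1 and 2, which depend on how the maximum error was obtained in the original work.

```python
import math

def sigmoid(x):
    """Sigmoid activation function f(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

# Build (interval start, interval end, slope a, intercept b) entries
# for the non-negative intervals of the example above.
lookup_table = []
for x0, x1 in [(0.0, 1.0), (1.0, 2.0)]:
    a, b0 = initial_linear_params(x0, sigmoid(x0), x1, sigmoid(x1))
    # The convex flag follows the text's intercept rule for each interval.
    b = optimize_intercept(sigmoid, x0, x1, a, b0, convex=False)
    lookup_table.append((x0, x1, a, b))
```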
Fig. 1 is a schematic structural diagram of the computing device for neural network activation functions according to a preferred embodiment of the present invention. The device comprises a matching unit for matching the activation function input to the corresponding interval, a look-up table unit for storing the look-up table, and a computing unit for performing the function operation.
When calculating an activation function, taking the sigmoid function as an example, suppose the input value of the function is 0.5. The matching unit matches this input value to the corresponding function interval [0,1); at the same time, the matching unit outputs the input value 0.5 to the computing unit and outputs the function interval [0,1) to the look-up table unit.
The look-up table unit stores the look-up table (such as Table 2) composed of the intervals of the activation function and the corresponding parameters; it looks up the parameters of the approximating linear function corresponding to the received function interval and outputs those parameters to the computing unit. For example, according to the received function interval [0,1), the look-up table unit searches the corresponding Table 2 and obtains the linear function parameter values of the sigmoid function in the interval [0,1), namely a = 0.23105858 and b = 0.49999979; at the same time, the look-up table unit outputs these parameter values to the computing unit.
The computing unit completes the calculation of the linear function according to the function variable input value received from the matching unit and the linear function parameters received from the look-up table unit. Fig. 2 is a schematic structural diagram of the computing unit of a preferred embodiment of the present invention. As shown in Fig. 2, the computing unit of the invention is composed of a multiplication unit and an addition unit. The multiplication unit receives the parameter a provided by the look-up table unit and the function variable input value x provided by the matching unit, multiplies the parameter a and the input value x as multiplier and multiplicand, and obtains the product p; the addition unit receives the output p from the multiplication unit and the parameter b provided by the look-up table unit, and adds p and b to obtain the calculation result.
For example, the input value the computing unit receives from the matching unit is 0.5, and the parameters received from the look-up table unit are a = 0.23105858 and b = 0.49999979. According to the approximating linear function expression y = ax + b, the input value 0.5 is first multiplied by the parameter a to obtain the product p = 0.11552929; the product is then added to the parameter b = 0.49999979, giving 0.61552908 as the approximating linear function value corresponding to the activation function input value 0.5 in the interval [0,1).
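The cooperation of the matching unit, look-up table unit and computing unit described above can be summarized by the following software analogue (purely illustrative; the hardware of Fig. 1 and Fig. 2 is of course not Python, and the function name is an assumption for this sketch). With the parameters of Table 2 loaded into the table, evaluate(lookup_table, 0.5) reproduces the value 0.61552908 computed above.

```python
def evaluate(lookup_table, x):
    """Match x to its function interval, fetch (a, b) from the table and
    return a*x + b, mirroring the multiply-then-add data path of the
    computing unit."""
    for x_start, x_end, a, b in lookup_table:
        if x_start <= x < x_end:
            p = a * x      # multiplication unit: product p
            return p + b   # addition unit: p plus parameter b
    # Closed right endpoint of the last interval (e.g. x = 2 in [1, 2]).
    x_start, x_end, a, b = lookup_table[-1]
    if x == x_end:
        return a * x + b
    raise ValueError("input value lies outside the tabulated range")
```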
In one embodiment of the present invention, the parameter look-up tables of multiple functions can be stored in the look-up table unit. In that case, when looking up the relevant parameters using the function interval output by the matching unit, the parameter look-up table of the function to be calculated is determined first, and the corresponding linear function parameters are then determined according to the function interval.
In one embodiment of the present invention, the parameters corresponding to the different function intervals of multiple functions can be stored in a single look-up table, with the function intervals of the same function tagged with a label used for lookup.
In one embodiment of the present invention, the matching unit and the look-up table unit described above can be merged into a single lookup unit: the divided function intervals and the corresponding function parameters are made into a look-up table and stored in the lookup unit. Given the function and the function variable input value, the lookup unit finds the corresponding interval and its function parameter values and then outputs the function variable input value and the corresponding parameter values to the computing unit for calculation. The computing unit completes the calculation of the linear function according to the received function variable input value and linear function parameters.
In one embodiment of the present invention, for the calculation of an activation function that is centrally symmetric about the origin, a look-up table unit that includes a negation unit is used to compress the look-up table, so that more look-up tables can be stored in the look-up table unit. During lookup, the look-up table unit, which includes a value-negation unit, obtains the linear function parameters a and b of the function interval from the look-up table according to the function interval. If the negation select signal is 1, the parameter b is negated before being output; if the negation select signal is 0, the parameter b is output directly.
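A software analogue of the compression just described might look as follows, as a sketch of the idea rather than the circuit: assuming an odd (origin-symmetric) activation function such as tanh and a table built only for non-negative inputs, the negation select signal is modeled by negating the intercept for negative inputs. The function name is an assumption for this example.

```python
def evaluate_odd(lookup_table, x):
    """For an activation function with f(-x) = -f(x) (e.g. tanh), a table
    built only for x >= 0 also serves negative inputs: the slope a is
    reused unchanged and the intercept b is negated, which is the role of
    the value-negation unit described above."""
    key = abs(x)                 # intervals are matched on |x|
    for x_start, x_end, a, b in lookup_table:
        if x_start <= key <= x_end:
            if x < 0:
                b = -b           # negation select signal = 1
            return a * x + b
    raise ValueError("input value lies outside the tabulated range")
```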
Fig. 3 is a flow chart of the method for calculating an activation function using the computing device shown in Fig. 1. As shown in Fig. 3, according to one embodiment of the present invention, a method for calculating an activation function is provided, comprising the following steps:
S10: dividing the computation range of the activation function into several function intervals;
S20: calculating the initial linear function parameters corresponding to the different intervals, optimizing the initial parameters using the maximum error between the activation function and the linear function, and making the optimized final parameter values into a look-up table stored in the look-up table unit;
S30: matching the activation function variable input value to the corresponding function interval using the matching unit, and looking up the linear function parameter values corresponding to that function interval using the look-up table unit;
S40: performing the calculation with the computing unit according to the input value of the activation function and the parameter values of the corresponding linear function.
In one embodiment of the present invention, steps S30 and S40 above can be repeated until the linear function values corresponding to all function intervals of the nonlinear function are obtained.
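As a brief usage note tying the steps together, and again only as an illustrative sketch reusing the hypothetical helpers from the earlier examples, steps S30 and S40 are simply repeated for each incoming input value after the table has been built once in steps S10 and S20:

```python
import math

# Steps S10/S20 (performed once, offline): build the table, here for the
# tanh function on the non-negative intervals [0, 1) and [1, 2].
table = []
for x0, x1 in [(0.0, 1.0), (1.0, 2.0)]:
    a, b0 = initial_linear_params(x0, math.tanh(x0), x1, math.tanh(x1))
    # The convex flag follows the text's intercept rule for each interval.
    b = optimize_intercept(math.tanh, x0, x1, a, b0, convex=False)
    table.append((x0, x1, a, b))

# Steps S30/S40 (repeated online for every input value): match, look up,
# multiply and add; negative inputs use the odd symmetry of tanh.
outputs = [evaluate_odd(table, x) for x in (-1.5, -0.3, 0.7, 1.9)]
```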
Although the above embodiments take the concavity and convexity of the sigmoid function as an example, those of ordinary skill in the art will understand that the method described herein, which uses the concavity or convexity of the activation function to calculate the maximum error and optimize the linear function parameters, can also be applied to other types of activation functions, for example the tanh function or the ReLU function.
Compared with the prior art, the computing device and method for neural network activation functions provided in the embodiments of the present invention adopt the idea of approximate computation: using a piecewise linear method, the computation range of the function is divided into several intervals, and the function value in each interval is calculated by linear approximation with error optimization. This eliminates complex calculations in the activation function such as power operations, division and trigonometric operations, and saves circuit area overhead and energy consumption.
Although the present invention has been described by means of preferred embodiments, the invention is not limited to the embodiments described here, and also includes various changes and modifications made without departing from the scope of the invention.

Claims (10)

1. A computing device for neural network activation functions, comprising
a lookup unit for determining the corresponding linear function parameters according to the activation function and the input value of the activation function variable; and
a computing unit for performing the calculation using the input value and the corresponding linear function parameters;
wherein the linear function parameters are optimized according to the concavity or convexity of the activation function.
2. The computing device according to claim 1, wherein the lookup unit comprises
a matching unit for matching the input value of the activation function to the corresponding function interval; and
a look-up table unit for determining the corresponding linear function parameters according to the function interval.
3. The computing device according to claim 2, wherein the matching unit is further used to store the function intervals, and the look-up table unit is further used to store the linear function parameters corresponding to the function intervals.
4. The computing device according to claim 2, wherein the linear function parameters comprise a slope and an intercept; the slope is calculated using the endpoint values of the function interval; the intercept is obtained by optimizing, using the maximum error, an initial intercept calculated using the endpoint values of the function interval; and the maximum error is the largest difference, within the function interval, between the value of the activation function and the value of the initial linear function.
5. The computing device according to claim 4, wherein, if the activation function is concave in the function interval, the intercept equals the initial intercept minus half of the maximum error.
6. The computing device according to claim 4, wherein, if the activation function is convex in the function interval, the intercept equals the initial intercept plus half of the maximum error.
7. The computing device according to claim 1, wherein the look-up table unit comprises a value-negation unit.
8. The computing device according to claim 7, wherein, when the activation function is centrally symmetric about the origin, the matching unit matches input values of the activation function with equal absolute values to the same function interval.
9. The computing device according to any one of claims 1 to 8, wherein the computing unit comprises a multiplication unit and an addition unit.
10. A computing method for neural network activation functions, comprising the following steps:
dividing the activation function into several function intervals;
calculating the linear function parameters, namely slope and intercept, corresponding to each function interval using the endpoints of that interval;
optimizing the intercept according to the concavity or convexity of the function interval, and storing the slope and the optimized intercept in a look-up table;
looking up, in the look-up table, the linear function parameters of the function interval corresponding to the input value of the activation function;
performing the calculation using the input value of the activation function and the corresponding linear function parameters.
CN201711020801.4A 2017-10-27 2017-10-27 Computing device and method for neural network activation functions Pending CN107729984A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711020801.4A CN107729984A (en) 2017-10-27 2017-10-27 Computing device and method for neural network activation functions

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711020801.4A CN107729984A (en) 2017-10-27 2017-10-27 Computing device and method for neural network activation functions

Publications (1)

Publication Number Publication Date
CN107729984A true CN107729984A (en) 2018-02-23

Family

ID=61202687

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711020801.4A Pending CN107729984A (en) 2017-10-27 2017-10-27 Computing device and method for neural network activation functions

Country Status (1)

Country Link
CN (1) CN107729984A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1490689A (en) * 2003-09-11 2004-04-21 中国科学技术大学 Self-adaptation nonlinear time varying controller and controlling method thereof
CN106529668A (en) * 2015-11-17 2017-03-22 中国科学院计算技术研究所 Operation device and method of accelerating chip which accelerates depth neural network algorithm
CN106127301A (en) * 2016-01-16 2016-11-16 上海大学 A kind of stochastic neural net hardware realization apparatus
CN106130689A (en) * 2016-06-13 2016-11-16 南京邮电大学 A kind of non-linear self-feedback chaotic neural network signal blind checking method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
冯畅: "Research on positive linear functions in deep neural networks", Computer Engineering and Design (《计算机工程与设计》) *
王佑锋: "FPGA implementation of a fixed-point reference independent component analysis algorithm", China Masters' Theses Full-text Database, Information Science and Technology (《中国优秀硕士学位论文全文数据库 信息科技辑》) *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108647045B (en) * 2018-03-20 2021-10-01 科大讯飞股份有限公司 Method and device for realizing activation function, storage medium and electronic equipment
CN108647045A (en) * 2018-03-20 2018-10-12 科大讯飞股份有限公司 The implementation method and device of activation primitive, storage medium, electronic equipment
CN108921288A (en) * 2018-05-04 2018-11-30 中国科学院计算技术研究所 Neural network activates processing unit and the neural network processor based on the device
CN110826706A (en) * 2018-08-10 2020-02-21 北京百度网讯科技有限公司 Data processing method and device for neural network
CN110826706B (en) * 2018-08-10 2023-10-03 北京百度网讯科技有限公司 Data processing method and device for neural network
CN110866595A (en) * 2018-08-28 2020-03-06 北京嘉楠捷思信息技术有限公司 Method, device and circuit for operating activation function in integrated circuit
CN110866595B (en) * 2018-08-28 2024-04-26 嘉楠明芯(北京)科技有限公司 Method, device and circuit for operating activation function in integrated circuit
CN109583572A (en) * 2018-12-05 2019-04-05 东软睿驰汽车技术(沈阳)有限公司 A kind of training method of convolutional neural networks, verification method and device
CN111126581B (en) * 2018-12-18 2021-01-12 中科寒武纪科技股份有限公司 Data processing method and device and related products
CN111126581A (en) * 2018-12-18 2020-05-08 中科寒武纪科技股份有限公司 Data processing method and device and related products
CN110058841A (en) * 2019-04-22 2019-07-26 南京大学 Towards nonlinear function general-purpose calculating appts and method with symmetry
CN110058841B (en) * 2019-04-22 2023-03-28 南京大学 General computing device and method for nonlinear function with symmetry
CN110705756B (en) * 2019-09-07 2023-05-12 创新奇智(重庆)科技有限公司 Electric power energy consumption optimization control method based on input convex neural network
CN110705756A (en) * 2019-09-07 2020-01-17 创新奇智(重庆)科技有限公司 Electric power energy consumption optimization control method based on input convex neural network
CN110688088A (en) * 2019-09-30 2020-01-14 南京大学 General nonlinear activation function computing device and method for neural network
CN110688088B (en) * 2019-09-30 2023-03-28 南京大学 General nonlinear activation function computing device and method for neural network
CN110796246A (en) * 2019-10-29 2020-02-14 南京宁麒智能计算芯片研究院有限公司 Hardware implementation circuit and method of activation function based on linear segmentation
CN111507465B (en) * 2020-06-16 2020-10-23 电子科技大学 Configurable convolutional neural network processor circuit
CN111507465A (en) * 2020-06-16 2020-08-07 电子科技大学 Configurable convolutional neural network processor circuit
CN111667063B (en) * 2020-06-30 2021-09-10 腾讯科技(深圳)有限公司 Data processing method and device based on FPGA
CN111667063A (en) * 2020-06-30 2020-09-15 腾讯科技(深圳)有限公司 Data processing method and device based on FPGA
CN113379031A (en) * 2021-06-01 2021-09-10 北京百度网讯科技有限公司 Neural network processing method and device, electronic equipment and storage medium
US11551063B1 (en) * 2022-05-04 2023-01-10 Airt Technologies Ltd. Implementing monotonic constrained neural network layers using complementary activation functions
WO2023215658A1 (en) * 2022-05-04 2023-11-09 Airt Technologies Ltd. Implementing monotonic constrained neural network layers using complementary activation functions

Similar Documents

Publication Publication Date Title
CN107729984A Computing device and method for neural network activation functions
US10929746B2 (en) Low-power hardware acceleration method and system for convolution neural network computation
CN110473141B (en) Image processing method, device, storage medium and electronic equipment
CN106529670B (en) It is a kind of based on weight compression neural network processor, design method, chip
CN108304921B (en) Convolutional neural network training method and image processing method and device
CN110362292A (en) A kind of approximate multiplying method based on approximate 4-2 compressor and approximate multiplier
CN110852434B (en) CNN quantization method, forward calculation method and hardware device based on low-precision floating point number
CN101625735A (en) FPGA implementation method based on LS-SVM classification and recurrence learning recurrence neural network
CN110361691B (en) Implementation method of coherent source DOA estimation FPGA based on non-uniform array
CN114792359B (en) Rendering network training and virtual object rendering method, device, equipment and medium
CN111985064A (en) Agent-assisted optimization design method and system for permanent magnet motor
CN115081588A (en) Neural network parameter quantification method and device
Wu et al. Efficient dynamic fixed-point quantization of CNN inference accelerators for edge devices
CN112200299B (en) Neural network computing device, data processing method and device
CN111667063B (en) Data processing method and device based on FPGA
CN104616304A (en) Self-adapting support weight stereo matching method based on field programmable gate array (FPGA)
Zhan et al. Field programmable gate array‐based all‐layer accelerator with quantization neural networks for sustainable cyber‐physical systems
Dwivedi et al. Hybrid multiplier-based optimized MAC unit
CN107590105B (en) Computing device and method towards nonlinear function
US20220113943A1 (en) Method for multiply-add operations for neural network
US11934954B2 (en) Pure integer quantization method for lightweight neural network (LNN)
CN115620147A (en) Micro-architecture searching method and device of deep convolutional neural network
CN111930670B (en) Heterogeneous intelligent processing quantization device, quantization method, electronic device and storage medium
Yuan et al. Work in progress: Mobile or FPGA? A comprehensive evaluation on energy efficiency and a unified optimization framework
Liu et al. Accurate and efficient quantized reservoir computing system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20180223