CN116579393A - Neural network computing method and device using AND OR gate circuit - Google Patents

Neural network computing method and device using AND OR gate circuit

Info

Publication number
CN116579393A
CN116579393A (application CN202310279484.7A)
Authority
CN
China
Prior art keywords
neuron input
input data
weights
neurons
binary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310279484.7A
Other languages
Chinese (zh)
Inventor
刘唯怩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202310279484.7A priority Critical patent/CN116579393A/en
Publication of CN116579393A publication Critical patent/CN116579393A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons, using electronic means
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Feedback Control In General (AREA)

Abstract

The application discloses a neural network computing method and device using an AND/OR gate circuit, relating to the technical fields of artificial intelligence and chip integration. The method comprises: converting the neuron input data into binary data containing a corresponding number of 1 bits, based on the proportion of the neuron input data within the total range of neuron input values, with the 1 bits placed at equal intervals and the remaining bits set to 0; converting the weights into binary data containing a corresponding number of 1 bits, based on the proportion of the absolute value of the weights connecting the neurons within the total range of neuron input values, with the 1 bits placed contiguously from the high end to the low end or from the low end to the high end and the remaining bits set to 0; and performing a bitwise AND of the binary data corresponding to the neuron input data and the weights, where the proportion of 1 bits in the result to the total number of data bits is the value of the product of the neuron input data and the weights connecting the neurons. The application can greatly reduce the number of transistors required by the integrated circuit.

Description

Neural network computing method and device using AND OR gate circuit
Technical Field
The application relates to the technical fields of artificial intelligence and chip integration, and in particular to a neural network computing method and device using an AND/OR gate circuit.
Background
A neural network is a way of simulating human thinking: a nonlinear dynamical system characterized by distributed information storage and parallel cooperative processing. Although the structure of a single neuron is extremely simple and its function limited, the behavior realized by a network system formed from a large number of neurons is remarkably rich.
At present, however, neural network computation relies on multiplication, and completing a single-cycle multiplication on a CPU (Central Processing Unit) or a dedicated AI (Artificial Intelligence) chip requires many transistors, so a single chip cannot integrate multipliers at large scale, which creates a bottleneck for artificial intelligence.
Disclosure of Invention
In view of the defects in the prior art, the application aims to provide a neural network computing method and device using an AND/OR gate circuit, which can greatly reduce the number of transistors required by an integrated circuit.
To achieve the above objective, the present application provides a neural network computing method using an AND/OR gate circuit, which specifically includes the following steps:
converting the neuron input data into binary data containing a corresponding number of 1 bits, based on the proportion of the neuron input data within the total range of neuron input values, with the 1 bits placed at equal intervals and the remaining bits set to 0;
converting the weights into binary data containing a corresponding number of 1 bits, based on the proportion of the absolute value of the weights connecting the neurons within the total range of neuron input values, with the 1 bits placed contiguously from the high end to the low end or from the low end to the high end and the remaining bits set to 0;
and performing a bitwise AND of the binary data corresponding to the neuron input data and the weights; the proportion of 1 bits in the result to the total number of data bits is the value of the product of the neuron input data and the weights connecting the neurons.
On the basis of the above technical solution, converting the neuron input data into binary data containing a corresponding number of 1 bits, with the 1 bits placed at equal intervals and the remaining bits set to 0, specifically comprises:
determining the number of 1 bits in the binary value to be converted, based on the bit width of the binary value corresponding to the neuron input data and the proportion of the neuron input data within the total range of neuron input values;
and placing the determined 1 bits at equal intervals across the bit width of that binary value, with the remaining bits set to 0.
On the basis of the above technical solution, converting the weights into binary data containing a corresponding number of 1 bits, with the 1 bits placed contiguously from the high end to the low end or from the low end to the high end and the remaining bits set to 0, specifically comprises:
determining the number of 1 bits in the binary value to be converted, based on the bit width of the binary value corresponding to the weights connecting the neurons and the proportion of the absolute value of the weights within the total range of neuron input values;
and placing the determined 1 bits contiguously from the high end to the low end, or from the low end to the high end, across the bit width of that binary value, with the remaining bits set to 0.
On the basis of the above technical solution,
the numerical range of the neuron input data is arbitrary;
the numerical range of the weights connecting the neurons is -1 to 1.
On the basis of the above technical solution,
performing the bitwise AND of the binary data corresponding to the neuron input data and the weights specifically comprises: starting from the second layer, performing the computation of each layer in a preset computation mode until the computation is complete;
the preset computation mode is as follows:
for each neuron, a bitwise AND is performed between the output of each upper-layer neuron and the corresponding weight;
the results are then compared according to the sign of the weights:
if the total number of 1 bits in the results with positive weights is greater than the total number of 1 bits in the results with negative weights, the output is 1;
if the total number of 1 bits in the results with positive weights is less than or equal to the total number of 1 bits in the results with negative weights, the output is 0.
The present application further provides a neural network computing device using an AND/OR gate circuit, comprising:
a first conversion module for converting the neuron input data into binary data containing a corresponding number of 1 bits, based on the proportion of the neuron input data within the total range of neuron input values, with the 1 bits placed at equal intervals and the remaining bits set to 0;
a second conversion module for converting the weights connecting the neurons into binary data containing a corresponding number of 1 bits, based on the proportion of the absolute value of the weights within the total range of neuron input values, with the 1 bits placed contiguously from the high end to the low end or from the low end to the high end and the remaining bits set to 0;
and a computation module for performing a bitwise AND of the binary data corresponding to the neuron input data and the weights; the proportion of 1 bits in the result to the total number of data bits is the value of the product of the neuron input data and the weights connecting the neurons.
On the basis of the above technical solution, converting the neuron input data into binary data containing a corresponding number of 1 bits, with the 1 bits placed at equal intervals and the remaining bits set to 0, specifically comprises:
determining the number of 1 bits in the binary value to be converted, based on the bit width of the binary value corresponding to the neuron input data and the proportion of the neuron input data within the total range of neuron input values;
and placing the determined 1 bits at equal intervals across the bit width of that binary value, with the remaining bits set to 0.
On the basis of the above technical solution, converting the weights into binary data containing a corresponding number of 1 bits, with the 1 bits placed contiguously from the high end to the low end or from the low end to the high end and the remaining bits set to 0, specifically comprises:
determining the number of 1 bits in the binary value to be converted, based on the bit width of the binary value corresponding to the weights connecting the neurons and the proportion of the absolute value of the weights within the total range of neuron input values;
and placing the determined 1 bits contiguously from the high end to the low end, or from the low end to the high end, across the bit width of that binary value, with the remaining bits set to 0.
On the basis of the above technical solution,
the numerical range of the neuron input data is arbitrary;
the numerical range of the weights connecting the neurons is -1 to 1.
On the basis of the above technical solution,
performing the bitwise AND of the binary data corresponding to the neuron input data and the weights specifically comprises: starting from the second layer, performing the computation of each layer in a preset computation mode until the computation is complete;
the preset computation mode is as follows:
for each neuron, a bitwise AND is performed between the output of each upper-layer neuron and the corresponding weight;
the results are then compared according to the sign of the weights:
if the total number of 1 bits in the results with positive weights is greater than the total number of 1 bits in the results with negative weights, the output is 1;
if the total number of 1 bits in the results with positive weights is less than or equal to the total number of 1 bits in the results with negative weights, the output is 0.
Compared with the prior art, the application has the advantage that, within an acceptable accuracy range, a multiplication circuit can be realized with simple AND/OR logic, effectively reducing the number of transistors and greatly increasing the number of neurons per chip, thereby making large-scale parallel computation possible.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic circuit diagram;
FIG. 2 is a flowchart of a neural network computing method using AND OR gates in accordance with an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application.
It should be noted that bit-by-bit connections are used between neurons, so the binary bit widths of the upper-layer neuron a, the lower-layer neuron c and the connecting weight b are consistent. The data length is taken as 8 bits (in practical applications it may be greater than 8 bits; 8 bits are used here for convenience of explanation). The circuit structure is shown in FIG. 1.
Completing neuron computation with an AND circuit requires certain conditions: the data must be stored in the computation unit according to a certain rule, and the numerical ranges must meet certain requirements. Therefore, in the present application, the numerical range of the neuron input data is arbitrary and the numerical range of the weights connecting the neurons is -1 to 1; the absolute value is used during computation, a subtraction is performed when the sum is less than 0, and an addition is performed when the sum is greater than 0. The total range of neuron input values is 0 to 1.
Now assume two numbers A and B, where A is neuron input data and B is a weight connecting neurons. The steps for storing and computing the values are shown in FIG. 2; that is, the neural network computing method using an AND/OR gate circuit provided by the embodiment of the application specifically includes the following steps:
S1: converting the neuron input data into binary data containing a corresponding number of 1 bits, based on the proportion of the neuron input data within the total range of neuron input values, with the 1 bits placed at equal intervals and the remaining bits set to 0;
In the application, this conversion specifically comprises:
S101: determining the number of 1 bits in the binary value to be converted, based on the bit width of the binary value corresponding to the neuron input data and the proportion of the neuron input data within the total range of neuron input values;
S102: placing the determined 1 bits at equal intervals across the bit width of that binary value, with the remaining bits set to 0.
For example, if the neuron input data is 8-bit unsigned integer data, the bit width of the binary value to be converted is 8. If the proportion of the neuron input data within the total range of neuron input values is 0.25, i.e. one quarter, the binary value contains 8 × 0.25 = 2 ones; these 2 ones are placed at equal intervals within the 8 bits and the remaining bits are set to 0, giving 10001000.
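The equal-interval placement described above can be sketched in a few lines of Python. This is only an illustrative sketch: the helper name `encode_input` is not from the patent, and the rounding behavior when the ratio does not divide the bit width evenly is an assumption.

```python
def encode_input(ratio: float, bits: int = 8) -> str:
    """Encode a value's ratio of the input range as `bits` binary digits:
    round(ratio * bits) ones spread at equal intervals, zeros elsewhere.
    (Hypothetical helper; rounding choice is an assumption.)"""
    ones = round(ratio * bits)
    out = ["0"] * bits
    for k in range(ones):
        out[int(k * bits / ones)] = "1"   # equally spaced 1 positions
    return "".join(out)

print(encode_input(0.25))  # the patent's example: 2 ones in 8 bits -> 10001000
```

With a ratio of 0.5 the same helper yields 10101010, matching the 8-bit example given later in the description.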
It should be noted that, before converting the neuron input data into binary data containing a corresponding number of 1 bits, the method further includes: appropriately increasing the number of intermediate layers of the neural network (compared with traditional multiplication-based computation) to ensure that the connection weight coefficients can complete the recognition process within the range -1 to 1.
S2: converting the weights into binary data with the corresponding number of 1 based on the proportion of the absolute value of the weights connected between the neurons in the total range of the input values of the neurons, wherein the 1 is continuously set from high to low or low to high, and the rest positions are 0;
in the application, based on the proportion of the absolute value of the weight connected between neurons in the total range of the input values of the neurons, the weight is converted into binary data with the corresponding number of 1, 1 is continuously set from high position to low position or from low position to high position, and the rest positions are 0, and the specific steps are as follows:
s201: determining the number of 1 in the binary values to be converted corresponding to the weights based on the number of digits of the binary values to be converted corresponding to the weights connected between the neurons and the proportion of the absolute value of the weights in the total range of the input numerical values of the neurons;
s202: and continuously setting the determined 1 from high order to low order or low order to high order according to the number of bits of the binary system to be converted corresponding to the weights connected between the neurons, and setting the rest positions 0.
For example, the weights connected between the neurons are 8-bit unsigned integer data, that is, the number of bits of the weights corresponding to the binary system to be converted is 8 bits, if the ratio of the absolute value of the weights connected between the neurons in the total range of the input values of the neurons is 0.25, that is, one fourth, the determined weights corresponding to the binary system to be converted include 8×0.25 1 s, that is, 2 1 s, if 2 1 s are continuously set in 8 bits from high order to low order, the rest positions 0 s, that is, 2 1 s are arranged in front, so as to obtain 11000000, if 2 1 s are continuously set in 8 bits from low order to high order, the rest positions 0 s, that is, 2 1 s are arranged in back, so as to obtain 00000011.
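A matching sketch for the weight encoding, under the same caveats (the helper name `encode_weight` and the rounding choice are assumptions, not from the patent): the ones form one contiguous block at either end of the word.

```python
def encode_weight(abs_ratio: float, bits: int = 8, high_to_low: bool = True) -> str:
    """Encode |weight| / range as `bits` binary digits: round(abs_ratio * bits)
    contiguous ones starting from the high end (or the low end), zeros elsewhere.
    (Hypothetical helper; rounding choice is an assumption.)"""
    ones = round(abs_ratio * bits)
    block = "1" * ones + "0" * (bits - ones)
    return block if high_to_low else block[::-1]

print(encode_weight(0.25))                     # high-to-low: 11000000
print(encode_weight(0.25, high_to_low=False))  # low-to-high: 00000011
```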
S3: performing a bitwise AND of the binary data corresponding to the neuron input data and the weights; the proportion of 1 bits in the result to the total number of data bits is the value of the product of the neuron input data and the weights connecting the neurons.
A bitwise AND is performed on the binary data corresponding to value A and value B, and the proportion of 1 bits in the result to the total number of data bits is the value of A × B. This value is inexact, but a neural network is fault-tolerant, so a practically usable result can be obtained by suitably increasing the number of neurons or the bit width of each neuron value. In practical applications, other logical operations can achieve the same result, such as bitwise OR or mixed AND/OR operations. The application replaces the complex multiplication operation with AND or XOR operations, greatly reducing the number of transistors required by the integrated circuit.
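Putting the two encodings together, the approximate product is the popcount of the bitwise AND divided by the word length. The sketch below uses assumed helper names and rounding (not from the patent), and the low-to-high weight layout, so that the 0.5 × 0.5 example from the description reproduces exactly.

```python
def encode_input(ratio: float, bits: int = 8) -> int:
    """Value encoded as round(ratio * bits) equally spaced ones (assumption: rounding)."""
    ones = round(ratio * bits)
    word = 0
    for k in range(ones):
        word |= 1 << (bits - 1 - int(k * bits / ones))
    return word

def encode_weight(abs_ratio: float, bits: int = 8) -> int:
    """|weight| encoded as contiguous ones at the low end of the word."""
    return (1 << round(abs_ratio * bits)) - 1

def approx_product(a: float, w: float, bits: int = 8) -> float:
    """Approximate a * w as popcount(A & W) / bits."""
    return bin(encode_input(a, bits) & encode_weight(w, bits)).count("1") / bits

print(approx_product(0.5, 0.5))  # 10101010 & 00001111 = 00001010 -> 2/8 = 0.25
```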
In the application, the bitwise AND of the binary data corresponding to the neuron input data and the weights specifically comprises: starting from the second layer, performing the computation of each layer in a preset computation mode until the computation is complete;
the preset computation mode is as follows:
for each neuron, a bitwise AND is performed between the output of each upper-layer neuron and the corresponding weight;
the results are then compared according to the sign of the weights:
if the total number of 1 bits in the results with positive weights is greater than the total number of 1 bits in the results with negative weights, the output is 1;
if the total number of 1 bits in the results with positive weights is less than or equal to the total number of 1 bits in the results with negative weights, the output is 0.
That is, starting from the second layer, each neuron is computed following the conventional neural-network computation flow, with the conventional multiplication replaced by a bitwise AND. For each neuron, a bitwise AND is performed between the output of each upper-layer neuron and the corresponding weight; the results are then compared according to the sign of the weights. If the total number of 1 bits in the positively weighted results is greater than the total number of 1 bits in the negatively weighted results, the output is 1; otherwise the output is 0. When one layer is complete, computation proceeds to the next layer, until all computation is complete.
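The comparison rule above can be sketched as follows. Function and variable names are assumptions for illustration, and |weight| is encoded as contiguous low-end ones as in the earlier examples; this is a sketch of the described rule, not code from the patent.

```python
def popcount(x: int) -> int:
    return bin(x).count("1")

def neuron_output(upper_outputs, weights, bits: int = 8) -> int:
    """AND each upper-layer output word with its encoded |weight|, tally the
    1 bits by weight sign, and output 1 only if the positive tally exceeds
    the negative tally (hypothetical sketch of the described rule)."""
    pos = neg = 0
    for a_word, w in zip(upper_outputs, weights):
        w_word = (1 << round(abs(w) * bits)) - 1   # |w| as contiguous low-end ones
        cnt = popcount(a_word & w_word)
        if w >= 0:
            pos += cnt
        else:
            neg += cnt
    return 1 if pos > neg else 0

# Two upper-layer neurons both outputting 0.5 (10101010), weights +0.5 and -0.25:
print(neuron_output([0b10101010, 0b10101010], [0.5, -0.25]))  # 1 (pos=2 > neg=1)
```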
The neural network calculation method of the present application is described below with reference to an example.
Take the case where both the neurons and the weights are 8-bit unsigned integer data (64 bits or more would be used in practice; 8 bits are used here only for convenience). The decimal range of the neuron input value is 0 to 1: if the decimal neuron input value is 0, the 8-bit binary value is all 0s, i.e. 00000000; if it is 1, the binary value is eight 1s, i.e. 11111111; if it is 0.5, half of the total range of neuron input values, half of the input-layer bits are 1 and half are 0, giving 10101010.
Assume decimal A = 0.5 and B = 0.5, so A × B = 0.25;
the binary bitwise AND gives 10101010 & 00001111 = 00001010;
the result contains two 1 bits, 2/8 = 1/4 of the total 8 bits, consistent with the decimal calculation.
Accuracy can be improved by increasing the data length of the neurons and of the connection weights; increasing the length to 128 bits yields an accuracy of about 0.01. Because a neural network consists of many neuron connections, it has a degree of fault tolerance and fuzzy-processing capability, so correct results can still be obtained. A logical AND requires far fewer transistors than a single-cycle floating-point multiplication.
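The claimed improvement with word length can be checked empirically. The script below is an illustrative sketch, not code from the patent (helper name, rounding, and the low-to-high weight layout are assumptions); it measures the worst-case error of the AND-based product over random operand pairs at several bit widths.

```python
import random

def approx_mul(a: float, w: float, bits: int) -> float:
    """Encode a with equally spaced ones and w with contiguous low-end ones,
    then approximate a * w as popcount(A & W) / bits."""
    ones_a = round(a * bits)
    a_word = 0
    for k in range(ones_a):
        a_word |= 1 << (bits - 1 - int(k * bits / ones_a))
    w_word = (1 << round(w * bits)) - 1
    return bin(a_word & w_word).count("1") / bits

random.seed(0)
pairs = [(random.random(), random.random()) for _ in range(200)]
for bits in (8, 32, 128):
    worst = max(abs(approx_mul(a, w, bits) - a * w) for a, w in pairs)
    print(f"{bits:3d} bits: worst |error| = {worst:.4f}")
```

On random operands the worst-case error shrinks roughly in proportion to the word length, consistent with the description's claim that longer words give finer accuracy.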
It should be noted that the neural network computing method of the present application can be applied to integrated circuits dedicated to neural networks, where AND/OR gate logic replaces multipliers for computation. It can of course also be applied to FPGA (Field Programmable Gate Array) chips, where the output of a neuron and the connections between neurons are computed with simple logic operations such as AND, OR and XOR, without any multiplication.
Meanwhile, the roles of the two multiplicands can be interchanged without affecting the result: the computed value of each input-layer neuron can be normalized to all 0s and all 1s, or to 0 and some other count of 1s, or left unnormalized as long as the total count is within the neuron's bit width. The weights can then be converted in the manner described in step S1 and the neuron input data in the manner described in step S2, and the computation yields the same result as above.
According to the neural network computing method using an AND/OR gate circuit provided by the embodiment of the application, the values of upper-layer neurons are converted, according to their data range, into binary values with a corresponding number of 1 bits distributed at equal intervals; the connection weights between neurons are likewise converted into binary values with a corresponding number of 1 bits placed contiguously from the high end to the low end or from the low end to the high end; the two values are then combined by a bitwise AND, and the number of 1 bits in the result represents the magnitude of the data.
The neural network computing device using an AND/OR gate circuit provided by the embodiment of the application comprises a first conversion module, a second conversion module and a computation module.
The first conversion module converts the neuron input data into binary data containing a corresponding number of 1 bits, based on the proportion of the neuron input data within the total range of neuron input values, with the 1 bits placed at equal intervals and the remaining bits set to 0. The second conversion module converts the weights into binary data containing a corresponding number of 1 bits, based on the proportion of the absolute value of the weights connecting the neurons within the total range of neuron input values, with the 1 bits placed contiguously from the high end to the low end or from the low end to the high end and the remaining bits set to 0. The computation module performs a bitwise AND of the binary data corresponding to the neuron input data and the weights; the proportion of 1 bits in the result to the total number of data bits is the value of the product of the neuron input data and the weights connecting the neurons.
In the application, converting the neuron input data into binary data containing a corresponding number of 1 bits, with the 1 bits placed at equal intervals and the remaining bits set to 0, specifically comprises:
determining the number of 1 bits in the binary value to be converted, based on the bit width of the binary value corresponding to the neuron input data and the proportion of the neuron input data within the total range of neuron input values;
and placing the determined 1 bits at equal intervals across the bit width of that binary value, with the remaining bits set to 0.
In the application, converting the weights into binary data containing a corresponding number of 1 bits, with the 1 bits placed contiguously from the high end to the low end or from the low end to the high end and the remaining bits set to 0, specifically comprises:
determining the number of 1 bits in the binary value to be converted, based on the bit width of the binary value corresponding to the weights connecting the neurons and the proportion of the absolute value of the weights within the total range of neuron input values;
and placing the determined 1 bits contiguously from the high end to the low end, or from the low end to the high end, across the bit width of that binary value, with the remaining bits set to 0.
In the application, the numerical range of the neuron input data is arbitrary, and the numerical range of the weights connecting the neurons is -1 to 1.
In the application, the bitwise AND of the binary data corresponding to the neuron input data and the weights specifically comprises: starting from the second layer, performing the computation of each layer in a preset computation mode until the computation is complete;
the preset computation mode is as follows:
for each neuron, a bitwise AND is performed between the output of each upper-layer neuron and the corresponding weight;
the results are then compared according to the sign of the weights:
if the total number of 1 bits in the results with positive weights is greater than the total number of 1 bits in the results with negative weights, the output is 1;
if the total number of 1 bits in the results with positive weights is less than or equal to the total number of 1 bits in the results with negative weights, the output is 0.
The foregoing is only a specific embodiment of the application to enable those skilled in the art to understand or practice the application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

Claims (10)

1. A neural network computing method using an AND-OR gate circuit, characterized by comprising the following steps:
converting the neuron input data into binary data containing a corresponding number of 1 bits, based on the proportion of the input data within the total range of neuron input values, with the 1 bits set at equal intervals and the remaining bits set to 0;
converting each weight connecting the neurons into binary data containing a corresponding number of 1 bits, based on the proportion of the weight's absolute value within the total range of neuron input values, with the 1 bits set contiguously from the most significant bit downward or from the least significant bit upward and the remaining bits set to 0;
and performing a bitwise AND on the binary data corresponding to the neuron input data and the weight, the proportion of 1 bits in the result relative to the total bit width being the value of the product of the neuron input data and the weight connecting the neurons.
2. The neural network computing method using an AND-OR gate circuit according to claim 1, wherein converting the neuron input data into binary data containing a corresponding number of 1 bits, with the 1 bits set at equal intervals and the remaining bits set to 0, specifically comprises:
determining the number of 1 bits in the binary value to be converted for the neuron input data, from the bit width of that value and the proportion of the input data within the total range of neuron input values;
and setting the determined 1 bits at equal intervals across the bit width of the binary value, with the remaining bits set to 0.
3. The neural network computing method using an AND-OR gate circuit according to claim 1, wherein converting each weight into binary data containing a corresponding number of 1 bits, with the 1 bits set contiguously from the most significant bit downward or from the least significant bit upward and the remaining bits set to 0, specifically comprises:
determining the number of 1 bits in the binary value to be converted for the weight, from the bit width of that value and the proportion of the weight's absolute value within the total range of neuron input values;
and setting the determined 1 bits contiguously, from the most significant bit downward or from the least significant bit upward, across the bit width of the binary value, with the remaining bits set to 0.
4. The neural network computing method using an AND-OR gate circuit according to claim 1, wherein:
the neuron input data may take any numerical value;
the weights connecting the neurons range from -1 to 1.
5. The neural network computing method using an AND-OR gate circuit according to claim 1, wherein:
the bitwise AND of the binary data corresponding to the neuron input data and the weights specifically comprises: starting from the second layer, computing each layer in a preset manner until the computation is complete;
the preset calculation mode is:
for each neuron, performing a bitwise AND between the output of each upstream-layer neuron and the corresponding weight;
comparing the results, grouped by the sign of the weights:
if the total number of 1 bits in the results with positive weights exceeds the total number of 1 bits in the results with negative weights, the neuron's output is 1;
if the total number of 1 bits in the results with positive weights is less than or equal to the total number of 1 bits in the results with negative weights, the neuron's output is 0.
6. A neural network computing device using an AND-OR gate circuit, characterized by comprising:
a first conversion module, configured to convert the neuron input data into binary data containing a corresponding number of 1 bits based on the proportion of the input data within the total range of neuron input values, with the 1 bits set at equal intervals and the remaining bits set to 0;
a second conversion module, configured to convert each weight connecting the neurons into binary data containing a corresponding number of 1 bits based on the proportion of the weight's absolute value within the total range of neuron input values, with the 1 bits set contiguously from the most significant bit downward or from the least significant bit upward and the remaining bits set to 0;
and a calculation module, configured to perform a bitwise AND on the binary data corresponding to the neuron input data and the weights, the proportion of 1 bits in the result relative to the total bit width being the value of the product of the neuron input data and the weights connecting the neurons.
7. The neural network computing device using an AND-OR gate circuit according to claim 6, wherein converting the neuron input data into binary data containing a corresponding number of 1 bits, with the 1 bits set at equal intervals and the remaining bits set to 0, specifically comprises:
determining the number of 1 bits in the binary value to be converted for the neuron input data, from the bit width of that value and the proportion of the input data within the total range of neuron input values;
and setting the determined 1 bits at equal intervals across the bit width of the binary value, with the remaining bits set to 0.
8. The neural network computing device using an AND-OR gate circuit according to claim 6, wherein converting each weight into binary data containing a corresponding number of 1 bits, based on the proportion of the weight's absolute value within the total range of neuron input values, with the 1 bits set contiguously from the most significant bit downward or from the least significant bit upward and the remaining bits set to 0, specifically comprises:
determining the number of 1 bits in the binary value to be converted for the weight, from the bit width of that value and the proportion of the weight's absolute value within the total range of neuron input values;
and setting the determined 1 bits contiguously, from the most significant bit downward or from the least significant bit upward, across the bit width of the binary value, with the remaining bits set to 0.
9. The neural network computing device using an AND-OR gate circuit according to claim 6, wherein:
the neuron input data may take any numerical value;
the weights connecting the neurons range from -1 to 1.
10. The neural network computing device using an AND-OR gate circuit according to claim 6, wherein:
the bitwise AND of the binary data corresponding to the neuron input data and the weights specifically comprises: starting from the second layer, computing each layer in a preset manner until the computation is complete;
the preset calculation mode is:
for each neuron, performing a bitwise AND between the output of each upstream-layer neuron and the corresponding weight;
comparing the results, grouped by the sign of the weights:
if the total number of 1 bits in the results with positive weights exceeds the total number of 1 bits in the results with negative weights, the neuron's output is 1;
if the total number of 1 bits in the results with positive weights is less than or equal to the total number of 1 bits in the results with negative weights, the neuron's output is 0.
CN202310279484.7A 2023-03-21 2023-03-21 Neural network computing method and device using AND OR gate circuit Pending CN116579393A (en)

Publications (1)

Publication Number Publication Date
CN116579393A (en) 2023-08-11



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination