CN109978152B - Integrated circuit chip device and related product


Info

Publication number
CN109978152B
Authority
CN
China
Prior art keywords
processing circuit
data
circuit
basic
main processing
Prior art date
Legal status
Active
Application number
CN201711455388.4A
Other languages
Chinese (zh)
Other versions
CN109978152A (en)
Inventor
Inventor not disclosed
Current Assignee
Cambricon Technologies Corp Ltd
Original Assignee
Cambricon Technologies Corp Ltd
Priority date
Filing date
Publication date
Priority to CN201711455388.4A priority Critical patent/CN109978152B/en
Application filed by Cambricon Technologies Corp Ltd filed Critical Cambricon Technologies Corp Ltd
Priority to EP20201907.1A priority patent/EP3783477B1/en
Priority to PCT/CN2018/123929 priority patent/WO2019129070A1/en
Priority to EP18896519.8A priority patent/EP3719712B1/en
Priority to EP20203232.2A priority patent/EP3789871B1/en
Publication of CN109978152A publication Critical patent/CN109978152A/en
Application granted granted Critical
Publication of CN109978152B publication Critical patent/CN109978152B/en
Priority to US16/903,304 priority patent/US11544546B2/en
Priority to US17/134,444 priority patent/US11748601B2/en
Priority to US17/134,445 priority patent/US11748602B2/en
Priority to US17/134,435 priority patent/US11741351B2/en
Priority to US17/134,486 priority patent/US11748604B2/en
Priority to US17/134,487 priority patent/US11748605B2/en
Priority to US17/134,446 priority patent/US11748603B2/en
Priority to US18/073,924 priority patent/US11983621B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/06 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means

Abstract

The present disclosure provides an integrated circuit chip device and related products. The integrated circuit chip device comprises a main processing circuit and a plurality of basic processing circuits. The main processing circuit includes a data type operation circuit, which is used for executing conversion between floating-point data and fixed-point data. The plurality of basic processing circuits are distributed in an array; each basic processing circuit is connected with the adjacent basic processing circuits, and the main processing circuit is connected with the n basic processing circuits in row 1, the n basic processing circuits in row m, and the m basic processing circuits in column 1. The technical scheme provided by the disclosure has the advantages of a small amount of calculation and low power consumption.

Description

Integrated circuit chip device and related product
Technical Field
The present disclosure relates to the field of neural networks, and more particularly to an integrated circuit chip device and related products.
Background
Artificial Neural Networks (ANN) have been a research hotspot in the field of artificial intelligence since the 1980s. An ANN abstracts the neuron network of the human brain from the perspective of information processing, builds a simple model, and forms different networks according to different connection modes. In engineering and academia it is also often referred to directly as a neural network or neural-like network. A neural network is a computational model formed by connecting a large number of nodes (or neurons). Existing neural network operations are implemented on a Central Processing Unit (CPU) or a Graphics Processing Unit (GPU), which involves a large amount of calculation and high power consumption.
Disclosure of Invention
Embodiments of the present disclosure provide an integrated circuit chip device and related products, which can increase the processing speed and efficiency of a computing device.
In a first aspect, an integrated circuit chip device is provided, the integrated circuit chip device comprising: a main processing circuit and a plurality of basic processing circuits; the main processing circuit includes: a data type arithmetic circuit; the data type arithmetic circuit is used for executing conversion between floating point type data and fixed point type data;
the plurality of basic processing circuits are distributed in an array; each basic processing circuit is connected with the adjacent basic processing circuits, and the main processing circuit is connected with the n basic processing circuits in row 1, the n basic processing circuits in row m, and the m basic processing circuits in column 1;
the main processing circuit is used for acquiring an input data block, a weight data block and a multiplication instruction, converting the input data block and the weight data block into a fixed-point type input data block and a fixed-point type weight data block through the data type operation circuit, dividing the fixed-point type input data block into a distribution data block according to the multiplication instruction, and dividing the fixed-point type weight data block into a broadcast data block; splitting the distributed data block to obtain a plurality of basic data blocks, distributing the plurality of basic data blocks to at least one basic processing circuit in basic processing circuits connected with the main processing circuit, and broadcasting the broadcast data block to the basic processing circuit connected with the main processing circuit;
the plurality of basic processing circuits are used for executing operation in the neural network in a parallel mode according to the fixed-point type broadcast data block and the fixed-point type basic data block to obtain an operation result, and transmitting the operation result to the main processing circuit through the basic processing circuit connected with the main processing circuit.
In a second aspect, a neural network computing device is provided, which includes one or more integrated circuit chip devices provided in the first aspect.
In a third aspect, there is provided a combined processing apparatus comprising: the neural network arithmetic device, the universal interconnection interface and the universal processing device are provided by the second aspect;
the neural network operation device is connected with the general processing device through the general interconnection interface.
In a fourth aspect, a chip is provided that integrates the apparatus of the first aspect, the apparatus of the second aspect, or the apparatus of the third aspect.
In a fifth aspect, an electronic device is provided, which comprises the chip of the fourth aspect.
In a sixth aspect, a method for operating a neural network is provided, where the method is applied in an integrated circuit chip device, and the integrated circuit chip device includes: the integrated circuit chip apparatus of the first aspect, configured to perform an operation of a neural network.
It can be seen that, in the embodiments of the present disclosure, a data type conversion circuit is provided to convert the data type of a data block before the operation is performed, which saves transmission resources and calculation resources and therefore has the advantages of low power consumption and a small amount of calculation.
Drawings
FIG. 1a is a schematic diagram of an integrated circuit chip device.
FIG. 1b is a schematic diagram of another integrated circuit chip device.
FIG. 1c is a schematic diagram of a basic processing circuit.
FIG. 1d is a schematic diagram of a main processing circuit.
FIG. 1e is a schematic block diagram of a fixed point data type.
FIG. 2a is a schematic diagram of a method of using a basic processing circuit.
FIG. 2b is a schematic diagram of a main processing circuit transmitting data.
Fig. 2c is a schematic diagram of a matrix multiplied by a vector.
FIG. 2d is a schematic diagram of an integrated circuit chip device.
FIG. 2e is a schematic diagram of another integrated circuit chip device.
Fig. 2f is a schematic diagram of a matrix multiplied by a matrix.
Fig. 3 is a schematic structural diagram of a combined processing device disclosed in the present disclosure.
FIG. 4 is a schematic view of another structure of a combined processing device disclosed in the present disclosure.
Fig. 5a is a schematic structural diagram of a neural network processor board card according to an embodiment of the present disclosure;
fig. 5b is a schematic structural diagram of a neural network chip package structure according to an embodiment of the present disclosure;
fig. 5c is a schematic structural diagram of a neural network chip according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a neural network chip package structure according to an embodiment of the disclosure;
fig. 6a is a schematic diagram of another neural network chip package structure according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those skilled in the art, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. All other embodiments, which can be derived by one of ordinary skill in the art from the embodiments disclosed herein without making any creative effort, shall fall within the scope of protection of the present disclosure.
In the apparatus provided in the first aspect, the plurality of basic processing circuits are specifically configured to perform a multiplication operation on the broadcast data block and the received basic data block according to a fixed-point data type to obtain a product result of the fixed-point data type, and transmit the product result as an operation result to the main processing circuit through a basic processing circuit connected to the main processing circuit;
the main processing circuit is used for converting the product result of the fixed-point data type into the product result of the floating-point type through the data type operation circuit, performing accumulation operation on the product result of the floating-point type to obtain an accumulation result, and sequencing the accumulation result to obtain the instruction result.
In the apparatus provided in the first aspect, the plurality of basic processing circuits are specifically configured to perform an inner product operation on the broadcast data block and the received basic data block in a fixed-point data type to obtain an inner product result in the fixed-point data type, and transmit the inner product result as an operation result to the main processing circuit through the basic processing circuit connected to the main processing circuit;
and the main processing circuit is used for converting the inner product result into a floating-point type inner product result through the data type operation circuit, and sequencing the inner product result to obtain the instruction result.
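As a toy numeric illustration of this flow (fixed-point products in the basic processing circuits, conversion back to floating point and accumulation in the main processing circuit), the following Python sketch assumes a simple scaled-integer fixed-point format; the scale factor and function names are illustrative only, not the device's actual format:

```python
def fixed_point_inner_product_flow(broadcast_block, basic_block, frac_bits=8):
    """Basic processing circuits: multiply element pairs in fixed point.
    Main processing circuit: convert each fixed-point product back to a
    float and accumulate the floating-point products."""
    scale = 1 << frac_bits
    # fixed-point products computed by the basic processing circuits (integers)
    fixed_products = [int(round(a * scale)) * int(round(b * scale))
                      for a, b in zip(basic_block, broadcast_block)]
    # conversion to floating point and accumulation in the main processing circuit
    float_products = [p / (scale * scale) for p in fixed_products]
    return sum(float_products)

print(fixed_point_inner_product_flow([0.5, 0.25], [2.0, 4.0]))  # 2.0
```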
In the apparatus provided in the first aspect, the main processing circuit is specifically configured to broadcast the broadcast data block to a basic processing circuit connected to the main processing circuit at a time.
In the apparatus provided in the first aspect, the main processing circuit is specifically configured to divide the broadcast data block into a plurality of partial broadcast data blocks, and broadcast the plurality of partial broadcast data blocks to a basic processing circuit connected to the main processing circuit by multiple times.
In the apparatus provided in the first aspect, the basic processing circuit is specifically configured to perform one inner product operation on the partial broadcast data block and the basic data block in the fixed-point data type to obtain an inner product result, and send the inner product result to the main processing circuit. The inner product operation may specifically be: if the elements of the partial broadcast data block are the first 2 elements of matrix B, namely b10 and b11, and the basic data block is the first 2 elements of the first row of the input data matrix A, i.e. a10 and a11, then the inner product is a10×b10 + a11×b11.
In the apparatus provided in the first aspect, the basic processing circuit is specifically configured to reuse the partial broadcast data block p times, performing an inner product operation between the partial broadcast data block and each of p basic data blocks to obtain p partial processing results, and send the p partial processing results to the main processing circuit, where p is an integer greater than or equal to 2.
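Purely as an illustration (not the hardware implementation), the reuse of one partial broadcast data block across p basic data blocks, together with the inner product example given above, can be sketched as follows; all values are hypothetical:

```python
def inner_product(a, b):
    """Inner product of two equally sized data blocks (lists of numbers)."""
    return sum(x * y for x, y in zip(a, b))

def reuse_partial_broadcast(partial_broadcast_block, basic_data_blocks):
    """Reuse one partial broadcast data block for each of p basic data blocks
    and return the p partial processing results sent to the main processing circuit."""
    return [inner_product(partial_broadcast_block, block)
            for block in basic_data_blocks]

# b10, b11 against a10, a11 gives a10*b10 + a11*b11, as in the example above.
partial_broadcast = [3, 5]           # b10, b11
basic_blocks = [[1, 2], [4, 6]]      # p = 2 basic data blocks
print(reuse_partial_broadcast(partial_broadcast, basic_blocks))  # [13, 42]
```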
In an apparatus provided in the first aspect, the main processing circuit includes a main register or a main on-chip cache circuit; each basic processing circuit includes a basic register or a basic on-chip cache circuit.
In an apparatus provided in the first aspect, the main processing circuit includes: one or any combination of vector arithmetic unit circuit, arithmetic logic unit circuit, accumulator circuit, matrix transposition circuit, direct memory access circuit or data rearrangement circuit.
In the apparatus provided in the first aspect, the input data block is: a vector or matrix; the weight data block is: a vector or a matrix.
Referring to fig. 1a, fig. 1a shows an integrated circuit chip device provided by the present disclosure, which includes a main processing circuit and a plurality of basic processing circuits arranged in an array (an m*n array), where m and n are integers greater than or equal to 1 and at least one of m and n is greater than or equal to 2. For the plurality of basic processing circuits distributed in the m*n array, each basic processing circuit is connected to the adjacent basic processing circuits, and the main processing circuit is connected to k of the basic processing circuits, where the k basic processing circuits may be: the n basic processing circuits of row 1, the n basic processing circuits of row m, and the m basic processing circuits of column 1. As shown in fig. 1a, the main processing circuit and/or the plurality of basic processing circuits may include a data type conversion circuit, and some of the basic processing circuits may include the data type conversion circuit; for example, in an alternative embodiment, the k basic processing circuits may be configured with the data type conversion circuit, so that the n basic processing circuits of row 1 may each be responsible for performing the data type conversion step on the data of the m basic processing circuits of its column. This arrangement can improve operation efficiency and reduce power consumption: since the n basic processing circuits of row 1 receive the data sent by the main processing circuit first, converting the received data into fixed-point data reduces the amount of calculation of the subsequent basic processing circuits and the amount of data transmitted to them; similarly, configuring the data type conversion circuit for the m basic processing circuits of column 1 also gives a small amount of calculation and low power consumption. In addition, with this structure the main processing circuit may adopt a dynamic data transmission strategy, for example, broadcasting data to the m basic processing circuits of column 1 and distributing data to the n basic processing circuits of row 1. The advantage is that different kinds of data are transmitted into a basic processing circuit through different data input ports, so the basic processing circuit does not need to distinguish what kind of data it has received; it only needs to determine from which receiving port the data came to know which kind of data it is.
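For illustration only, the positions of the k basic processing circuits that connect directly to the main processing circuit (row 1, row m and column 1 of the array) can be enumerated as follows; this is a sketch of the connection pattern, not hardware code:

```python
def main_connected_positions(m, n):
    """Return the (row, col) positions, 1-indexed, of the basic processing
    circuits connected to the main processing circuit: the n circuits of
    row 1, the n circuits of row m, and the m circuits of column 1."""
    positions = set()
    for col in range(1, n + 1):
        positions.add((1, col))    # row 1
        positions.add((m, col))    # row m
    for row in range(1, m + 1):
        positions.add((row, 1))    # column 1
    return sorted(positions)

# For a 4x4 array this gives k = 10 distinct circuits (2*n + m - 2 when m >= 2).
print(main_connected_positions(4, 4))
```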
The main processing circuit is used for executing the successive operations in the neural network operation and transmitting data to and from the basic processing circuits connected with it; the above successive operations include, but are not limited to, accumulation operations, ALU operations, activation operations, and the like.
And the plurality of basic processing circuits are used for executing operation in the neural network in a parallel mode according to the transmitted data and transmitting the operation result to the main processing circuit through the basic processing circuit connected with the main processing circuit. The above parallel way of executing the operations in the neural network includes but is not limited to: inner product operations, matrix or vector multiplication operations, and the like.
The main processing circuit may include a data transmitting circuit; a data distributing circuit and a data broadcasting circuit may be integrated in the data transmitting circuit, although in practical applications the data distributing circuit and the data broadcasting circuit may also be provided separately. Broadcast data is data that needs to be sent to every basic processing circuit. Distribution data is data that needs to be selectively sent to some of the basic processing circuits; for example, in a convolution operation, the convolution input data needs to be sent to all basic processing circuits, so it is broadcast data, while the convolution kernel needs to be selectively sent to some of the basic processing circuits, so the convolution kernel is distribution data. Which basic processing circuit a particular piece of distribution data is sent to may be determined by the main processing circuit according to the load and other allocation schemes. In the broadcast transmission mode, the broadcast data is transmitted to each basic processing circuit in broadcast form (in practical applications the broadcast data may be transmitted to each basic processing circuit by a single broadcast or by multiple broadcasts, and the embodiments of the present disclosure do not limit the number of broadcasts). In the distribution transmission mode, the distribution data is selectively transmitted to some of the basic processing circuits.
The main processing circuit (as shown in fig. 1 d) may include a register and/or an on-chip cache circuit, and the main processing circuit may further include a control circuit, a vector operator circuit, an ALU (arithmetic and logic unit) circuit, an accumulator circuit, a DMA (Direct Memory Access) circuit, and other circuits, such as a conversion circuit (e.g. a matrix transpose circuit), a data rearrangement circuit, an activation circuit, and the like.
Each base processing circuit may include a base register and/or a base on-chip cache circuit; each base processing circuit may further include: an inner product operator circuit, a vector operator circuit, an accumulator circuit, or the like, in any combination. The inner product operator circuit, the vector operator circuit, and the accumulator circuit may be integrated circuits, or the inner product operator circuit, the vector operator circuit, and the accumulator circuit may be circuits provided separately.
Optionally, the accumulator circuits of the n basic processing circuits in row m may perform the accumulation part of the inner product operation, because the basic processing circuits in row m can receive the product results of all the basic processing circuits in their respective columns. Performing the accumulation of the inner product operation through the n basic processing circuits in row m allocates the calculation resources effectively and saves power consumption. This scheme is particularly suitable for the case where m is large.
For the data type conversion, the main processing circuit may allocate the circuits that are to execute it, either in an explicit manner or in an implicit manner. For the explicit manner, the main processing circuit may issue a special instruction or indication; when a basic processing circuit receives the special instruction or indication, it determines that it should execute the data type conversion, and when it does not receive the special instruction or indication, it determines that it should not execute the data type conversion. The conversion may also be decided in an implicit manner; for example, when a basic processing circuit receives data of the floating-point type and determines that an inner product operation needs to be performed, it converts the data into fixed-point type data. For the explicit configuration, the special instruction or indication may carry a decrement sequence whose value is decremented by 1 every time it passes through a basic processing circuit; a basic processing circuit reads the value of the decrement sequence, performs the data type conversion if the value is greater than zero, and does not perform the data type conversion if the value is equal to or less than zero. This arrangement is configured according to the basic processing circuits allocated in the array. For example, for the m basic processing circuits in the ith row, if the main processing circuit needs the first 5 basic processing circuits to perform data type conversion, it issues a special instruction containing a decrement sequence whose initial value may be 5; the value decreases by 1 every time it passes through one basic processing circuit, so the value is 1 at the 5th basic processing circuit and 0 at the 6th basic processing circuit, and the 6th basic processing circuit therefore does not perform the data type conversion.
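A toy simulation of the decrement-sequence mechanism described above (the initial value and the number of circuits are arbitrary examples):

```python
def conversion_flags(initial_value, num_circuits):
    """Each basic processing circuit reads the current value of the decrement
    sequence, performs data type conversion only if the value is greater than
    zero, and passes the value minus one to the next circuit."""
    flags, value = [], initial_value
    for _ in range(num_circuits):
        flags.append(value > 0)   # True: this circuit converts the data type
        value -= 1
    return flags

# With an initial value of 5, the first 5 circuits convert and the rest do not.
print(conversion_flags(5, 8))
# [True, True, True, True, True, False, False, False]
```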
The main processing circuit is used for acquiring an input data block, a weight data block and a multiplication instruction, converting the input data block and the weight data block into a fixed-point type input data block and a fixed-point type weight data block through the data type operation circuit, dividing the fixed-point type input data block into a distribution data block according to the multiplication instruction, and dividing the fixed-point type weight data block into a broadcast data block; splitting the distributed data block to obtain a plurality of basic data blocks, distributing the plurality of basic data blocks to at least one basic processing circuit in basic processing circuits connected with the main processing circuit, and broadcasting the broadcast data block to the basic processing circuit connected with the main processing circuit;
the plurality of basic processing circuits are used for executing operation in the neural network in a parallel mode according to the fixed-point type broadcast data block and the fixed-point type basic data block to obtain an operation result, and transmitting the operation result to the main processing circuit through the basic processing circuit connected with the main processing circuit.
One embodiment of the present disclosure provides an integrated circuit chip apparatus, including a main processing circuit (which may also be referred to as a master unit) and a plurality of basic processing circuits (which may also be referred to as base units); the structure of the embodiment is shown in FIG. 1b, where the dotted-line frame is the internal structure of the neural network operation device, the gray-filled arrows indicate data transmission paths between the main processing circuit and the basic processing circuit array, and the open arrows indicate data transmission paths between adjacent basic processing circuits in the basic processing circuit array. The numbers of rows and columns of the basic processing circuit array may be different, that is, the values of m and n may be different or the same; the disclosure does not limit their specific values.
The circuit structure of the basic processing circuit is shown in fig. 1c. In the figure, the dashed box represents the boundary of the basic processing circuit, and the thick arrows crossing the dashed box represent data input/output channels (an arrow pointing into the dashed box is an input channel and an arrow pointing out of it is an output channel); the rectangles inside the dashed box represent the storage unit circuits (registers and/or on-chip cache), including input data 1, input data 2, the multiplication or inner product result, and the accumulated data; the diamond-shaped blocks represent arithmetic circuits, comprising a multiplication or inner product arithmetic unit and an adder.
In this embodiment, the neural network computing device includes a main processing circuit and 16 basic processing circuits (the 16 basic processing circuits are merely for illustration, and in practical applications, other values may be adopted);
in this embodiment, the basic processing circuit has two data input interfaces and two data output interfaces; in the following description of this example, the horizontal input interface (horizontal arrow pointing to the unit in fig. 1b) is referred to as input 0, and the vertical input interface (vertical arrow pointing to the unit in fig. 1b) is referred to as input 1; each horizontal data output interface (the horizontal arrow pointing from the unit in fig. 1b) is referred to as output 0 and the vertical data output interface (the vertical arrow pointing from the unit in fig. 1b) is referred to as output 1.
The data input interface and the data output interface of each basic processing circuit can be respectively connected with different units, including a main processing circuit and other basic processing circuits;
in this example, the inputs 0 of the four basic processing circuits 0,4,8,12 (see fig. 1b for reference) are connected to the data output interface of the main processing circuit;
in this example, the input 1 of the four basic processing circuits 0,1,2,3 is connected to the data output interface of the main processing circuit;
in this example, the outputs 1 of the four basic processing circuits 12,13,14,15 are connected to the data input interface of the main processing circuit;
in this example, the situation that the output interface of the basic processing circuit is connected with the input interfaces of other basic processing circuits is shown in fig. 1b, which is not listed one by one;
specifically, the output interface S1 of the S cell is connected with the input interface P1 of the P cell, indicating that the P cell will be able to receive data from its P1 interface that the S cell sent to its S1 interface.
In this embodiment, the main processing circuit is connected with an external device through data output interfaces and data input interfaces (that is, it has both input interfaces and output interfaces); a part of the data output interfaces of the main processing circuit are connected with a part of the data input interfaces of the basic processing circuits, and a part of the data input interfaces of the main processing circuit are connected with a part of the data output interfaces of the basic processing circuits.
Method for using integrated circuit chip device
The data involved in the usage methods provided by the present disclosure may be data of any data type, for example, data represented by floating-point numbers of any bit width, or data represented by fixed-point numbers of any bit width.
A schematic diagram of the fixed-point data type is shown in fig. 1e, which illustrates one way of representing fixed-point data. For a computing system, the number of storage bits of one floating-point datum is 32 bits, whereas for fixed-point data, especially data represented as in fig. 1e, the number of storage bits of one fixed-point datum can be fewer than 16 bits. Converting to the fixed-point type therefore greatly reduces the transmission overhead between calculators; in addition, data with fewer bits occupies less storage space, so the storage overhead is smaller, and the amount of computation is also reduced, so the computation overhead is lower. However, the data type conversion itself also requires some overhead, hereinafter referred to as the conversion overhead. For data involving a large amount of computation and a large amount of storage, the conversion overhead is almost negligible relative to the subsequent computation, storage and transmission overheads, so for such data the present disclosure adopts the technical scheme of converting the data into the fixed-point type. Conversely, for data involving a small amount of computation and a small amount of storage, the computation, storage and transmission overheads are already relatively small; since fixed-point data is slightly less precise than floating-point data, and the calculation precision needs to be guaranteed on the premise of a small amount of computation, the fixed-point type data is converted into floating-point data in this case, that is, the calculation precision is improved at a small additional cost.
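As a rough illustration of the storage saving and the small precision loss involved, the sketch below converts a 32-bit float to a hypothetical 16-bit signed fixed-point value with 8 fractional bits and back; the bit widths and function names are assumptions, not the exact format of fig. 1e:

```python
def float_to_fixed(value, frac_bits=8, total_bits=16):
    """Quantize a float to a signed fixed-point integer with frac_bits
    fractional bits, saturating to the representable range."""
    scaled = int(round(value * (1 << frac_bits)))
    max_val = (1 << (total_bits - 1)) - 1
    min_val = -(1 << (total_bits - 1))
    return max(min_val, min(max_val, scaled))

def fixed_to_float(value, frac_bits=8):
    """Convert the fixed-point integer back to a float."""
    return value / (1 << frac_bits)

x = 3.1415926
fx = float_to_fixed(x)           # stored in 16 bits instead of 32
print(fx, fixed_to_float(fx))    # 804 3.140625 (small precision loss)
```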
The operations that need to be performed in the basic processing circuitry can be performed using the following method:
the main processing circuit converts the type of the data and transmits the converted data to the basic processing circuit for operation (for example, the main processing circuit can convert floating point number into fixed point number with lower bit width and then transmits the fixed point number to the basic processing circuit, which has the advantages of reducing bit width of transmitted data, reducing total bit number of transmission, higher efficiency of executing the fixed point operation with wide bit width by the basic processing circuit and lower power consumption)
The basic processing circuit can receive the data and then perform data type conversion and calculation (for example, the basic processing circuit receives floating point numbers transmitted by the main processing circuit and then converts the floating point numbers into fixed point numbers for calculation, so that the calculation efficiency is improved, and the power consumption is reduced).
The result calculated by the basic processing circuit can be firstly converted into a data type and then transmitted to the main processing circuit (for example, the result calculated by the basic processing circuit can be firstly converted into a fixed point number with a low bit width and then transmitted to the main processing circuit, which has the advantages of reducing the data bit width in the transmission process, improving the efficiency and saving the power consumption).
The method of use of the basic processing circuit (see FIG. 2 a);
the main processing circuit receives input data to be calculated from the outside of the device;
optionally, the main processing circuit performs arithmetic processing on data by using various arithmetic circuits, a vector arithmetic circuit, an inner product arithmetic circuit, an accumulator circuit and the like of the unit;
the main processing circuit sends data (as shown in fig. 2 b) to the basic processing circuit array (the set of all basic processing circuits is called basic processing circuit array) through the data output interface;
the data transmission mode here may be a mode of directly transmitting data to a part of the basic processing circuit, that is, a multi-broadcast mode;
here, the data transmission mode may be a distribution mode, in which different data is transmitted to different basic processing circuits;
the basic processing circuit array calculates data;
the basic processing circuit receives the input data and then carries out operation;
optionally, the basic processing circuit transmits the data from the data output interface of the unit after receiving the data; (for transmission to other base processing circuits that do not receive data directly from the main processing circuit.)
Optionally, the basic processing circuit transmits the operation result from the data output interface; (intermediate calculation result or final calculation result)
The main processing circuit receives output data returned from the basic processing circuit array;
optionally, the main processing circuit continues processing (e.g., accumulation or activation operations) the data received from the base processing circuit array;
and after the processing of the main processing circuit is finished, the processing result is transmitted to the outside of the device from the data output interface.
Completing a matrix multiply vector operation using the circuit arrangement;
(Matrix-multiply-vector means that each row of the matrix performs an inner product operation with the vector, and the results are arranged into a vector in the order of the corresponding rows.)
The operation of calculating the multiplication of a matrix S of size M rows and L columns and a vector P of length L is described below, as shown in fig. 2c below.
The method uses all or part of basic processing circuits of the neural network computing device, and K basic processing circuits are assumed to be used;
the main processing circuit transmits data in part or all rows of the matrix S to each of the k basic processing circuits;
in an alternative scheme, the control circuit of the main processing circuit sends the data of a certain row in the matrix S to a certain basic processing circuit, one number or one part of the numbers at a time (for example, when one number is sent at a time, for a certain basic processing circuit, the 1st number of row 3 is sent the 1st time, the 2nd number of row 3 is sent the 2nd time, the 3rd number of row 3 is sent the 3rd time, and so on; or, when a part of the numbers is sent at a time, the first two numbers of row 3 (i.e. the 1st and 2nd numbers) are sent the 1st time, the 3rd and 4th numbers of row 3 are sent the 2nd time, the 5th and 6th numbers of row 3 are sent the 3rd time, and so on);
In an alternative scheme, the control circuit of the main processing circuit sends the data of several rows of the matrix S to a certain basic processing circuit, one number or one part of the numbers of each row at a time (for example, for a certain basic processing circuit, the 1st numbers of rows 3, 4 and 5 are sent the 1st time, the 2nd numbers of rows 3, 4 and 5 are sent the 2nd time, the 3rd numbers of rows 3, 4 and 5 are sent the 3rd time, and so on; or the first two numbers of rows 3, 4 and 5 are sent the 1st time, the 3rd and 4th numbers of rows 3, 4 and 5 are sent the 2nd time, the 5th and 6th numbers of rows 3, 4 and 5 are sent the 3rd time, and so on.)
The control circuit of the main processing circuit successively transmits the data in the vector P to the 0 th basic processing circuit;
after receiving the data of the vector P, the 0 th basic processing circuit transmits the data to the next basic processing circuit connected thereto, that is, the basic processing circuit 1;
specifically, some basic processing circuits cannot directly obtain all the data required for calculation from the main processing circuit, for example, the basic processing circuit 1 in fig. 2d has only one data input interface connected to the main processing circuit, so that the data of the matrix S can only be directly obtained from the main processing circuit, and the data of the vector P needs to be output to the basic processing circuit 1 by the basic processing circuit 0, and similarly, the basic processing circuit 1 also needs to continue to output the data of the vector P to the basic processing circuit 2 after receiving the data.
Each basic processing circuit performs operations on received data, including but not limited to: inner product operations, multiplication operations, addition operations, and the like;
in one alternative, the basic processing circuit calculates the multiplication of one or more groups of two data at a time, and then accumulates the result to a register and/or on-chip cache;
in one alternative, the basic processing circuit computes the inner product of one or more sets of two vectors at a time, and then accumulates the results onto a register and/or on-chip cache;
after the basic processing circuit calculates the result, the result is transmitted out from the data output interface (namely transmitted to other basic processing circuits connected with the basic processing circuit);
in one alternative, the calculation result may be the final result or an intermediate result of the inner product operation;
after receiving the calculation results from other basic processing circuits, the basic processing circuit transmits the data to other basic processing circuits or main processing circuits connected with the basic processing circuit;
the main processing circuit receives the result of the inner product operation of each basic processing circuit, and processes the result to obtain a final result (the processing can be an accumulation operation or an activation operation, etc.).
The embodiment of the matrix vector multiplication method is realized by adopting the computing device as follows:
in one alternative, the plurality of basic processing circuits used in the method are arranged as shown in FIG. 2d or FIG. 2e below;
as shown in fig. 2c, the data conversion operation circuit of the main processing circuit converts the matrix S and the vector P into fixed-point type data; the control circuit of the main processing circuit divides the M rows of data of the matrix S into K groups, and the ith basic processing circuit is responsible for the operation of the ith group (the set of rows in that group of data is recorded as Ai);
here, the method of grouping M rows of data is any grouping method that does not cause repeated allocation;
in one alternative, the following distribution is used: dividing the jth line into jth% K (% remainder operation) basic processing circuits;
in one alternative, for the case where the rows cannot be grouped evenly, it is also possible to first assign a portion of the rows evenly and assign the remaining rows in an arbitrary manner.
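A small sketch of this row-grouping rule (illustrative only; the function and variable names are assumptions):

```python
def group_rows(num_rows, num_circuits):
    """Assign row index j of the matrix S to basic processing circuit j % K,
    so that no row is allocated twice."""
    groups = [[] for _ in range(num_circuits)]
    for j in range(num_rows):
        groups[j % num_circuits].append(j)
    return groups

# 10 rows distributed over K = 4 basic processing circuits
print(group_rows(10, 4))   # [[0, 4, 8], [1, 5, 9], [2, 6], [3, 7]]
```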
The control circuit of the main processing circuit sequentially sends data in part or all rows of the matrix S to the corresponding basic processing circuit each time;
in an alternative, the control circuit of the main processing circuit sends one or more data in one row of data of the ith group of data Ai, for which the ith basic processing circuit is responsible, to the ith basic processing circuit at a time;
in an alternative, the control circuit of the main processing circuit sends one or more data of each of some or all rows of the ith group of data Ai, for which the ith basic processing circuit is responsible, to the ith basic processing circuit at a time;
the control circuit of the main processing circuit sequentially sends the data in the vector P to the 1 st basic processing circuit;
in one alternative, the control circuit of the main processing circuit may send one or more data of the vector P at a time;
after receiving the data of the vector P, the ith basic processing circuit sends the data to the (i + 1) th basic processing circuit connected with the ith basic processing circuit;
each basic processing circuit receives one or more data from a certain row or certain rows in the matrix S and one or more data from the vector P, and then performs operation (including but not limited to multiplication or addition);
in one alternative, the basic processing circuit calculates the multiplication of one or more groups of two data at a time, and then accumulates the result to a register and/or on-chip cache;
in one alternative, the basic processing circuit computes the inner product of one or more sets of two vectors at a time, and then accumulates the results onto a register and/or on-chip cache;
in one alternative, the data received by the basic processing circuit can also be an intermediate result, and is stored on a register and/or an on-chip cache;
the basic processing circuit transmits the local calculation result to the next basic processing circuit or the main processing circuit connected with the basic processing circuit;
in an alternative, corresponding to the structure of fig. 2d, only the output interface of the last basic processing circuit in each row is connected to the main processing circuit. In this case only the last basic processing circuit can directly transmit its local calculation result to the main processing circuit; the calculation results of the other basic processing circuits are passed to the next basic processing circuit, and each circuit passes them on to the next one until all calculation results reach the last basic processing circuit. The last basic processing circuit accumulates its local calculation result with the received results of the other basic processing circuits in the row to obtain an intermediate result, and transmits the intermediate result to the main processing circuit. Of course, it is also possible for the last basic processing circuit to send the results of the other basic circuits of the column, as well as its local processing result, directly to the main processing circuit.
In an alternative, corresponding to the configuration of fig. 2e, each basic processing circuit has an output interface connected to the main processing circuit, in which case each basic processing circuit directly transmits the local calculation result to the main processing circuit;
after receiving the calculation results transmitted from other basic processing circuits, the basic processing circuit transmits the calculation results to the next basic processing circuit or the main processing circuit connected with the basic processing circuit.
The main processing circuit receives the results of the M inner product operations as the result of the matrix-by-vector operation.
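The overall data flow described above (group the rows of S over K basic processing circuits, circulate the vector P through the chain, compute per-row inner products, and collect the M results in row order) can be summarised by the following functional sketch; it only illustrates the arithmetic and the grouping, under assumed list-based data, and is not a model of the hardware:

```python
def matrix_times_vector(S, P, K=4):
    """Each 'basic processing circuit' handles the rows assigned to it
    (row j -> circuit j % K) and computes an inner product with the vector P;
    the main processing circuit collects the M results in row order."""
    M = len(S)
    result = [0] * M
    for circuit in range(K):
        for j in range(circuit, M, K):          # rows assigned to this circuit
            result[j] = sum(s * p for s, p in zip(S[j], P))
    return result

S = [[1, 2, 3],
     [4, 5, 6]]
P = [1, 0, 2]
print(matrix_times_vector(S, P, K=2))   # [7, 16]
```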
Using the circuit arrangement to perform a matrix multiplication matrix operation;
the operation of calculating the multiplication of a matrix S of size M rows and L columns and a matrix P of size L rows and N columns (each row in the matrix S being the same length as each column of the matrix P, as shown in FIG. 2 f)
The method is illustrated using the apparatus as described in the embodiment shown in FIG. 1 b;
a data conversion operation circuit of the main processing circuit converts the matrix S and the matrix P into fixed-point type data;
the control circuitry of the main processing circuitry sends data in some or all of the rows of the matrix S to those basic processing circuitry that are directly connected to the main processing circuitry through the horizontal data input interface (e.g., the uppermost gray-filled vertical data path in fig. 1 b);
in one alternative, the control circuit of the main processing circuit sends the data of a certain row in the matrix S to a certain basic processing circuit, one number or one part of the numbers at a time (for example, for a given basic processing circuit, the 1st number of row 3 is sent the 1st time, the 2nd number of row 3 the 2nd time, the 3rd number of row 3 the 3rd time, and so on; or the first two numbers of row 3 are sent the 1st time, the 3rd and 4th numbers of row 3 the second time, the 5th and 6th numbers of row 3 the third time, and so on);
In an alternative scheme, the control circuit of the main processing circuit sends the data of several rows of the matrix S to a certain basic processing circuit, one number or one part of the numbers of each row at a time (for example, for a given basic processing circuit, the 1st numbers of rows 3, 4 and 5 are sent the 1st time, the 2nd numbers of rows 3, 4 and 5 the 2nd time, the 3rd numbers of rows 3, 4 and 5 the 3rd time, and so on; or the first two numbers of rows 3, 4 and 5 are sent the 1st time, the 3rd and 4th numbers of rows 3, 4 and 5 the second time, the 5th and 6th numbers of rows 3, 4 and 5 the third time, and so on.)
The control circuitry of the main processing circuitry sends the data in some or all of the columns in the matrix P to those base processing circuitry directly connected to the main processing circuitry through vertical data input interfaces (e.g., grey-filled horizontal data paths to the left of the array of base processing circuitry in fig. 1 b);
in one alternative, the control circuit of the main processing circuit sends the data of a certain column in the matrix P to a certain basic processing circuit, one number or one part of the numbers at a time (for example, for a given basic processing circuit, the 1st number of column 3 is sent the 1st time, the 2nd number of column 3 the 2nd time, the 3rd number of column 3 the 3rd time, and so on; or the first two numbers of column 3 are sent the 1st time, the 3rd and 4th numbers of column 3 the second time, the 5th and 6th numbers of column 3 the third time, and so on);
In an alternative, the control circuit of the main processing circuit sends the data of several columns of the matrix P to a certain basic processing circuit, one number or one part of the numbers of each column at a time (for example, for a given basic processing circuit, the 1st numbers of columns 3, 4 and 5 are sent the 1st time, the 2nd numbers of columns 3, 4 and 5 the 2nd time, the 3rd numbers of columns 3, 4 and 5 the 3rd time, and so on; or the first two numbers of columns 3, 4 and 5 are sent the 1st time, the 3rd and 4th numbers of columns 3, 4 and 5 the second time, the 5th and 6th numbers of columns 3, 4 and 5 the third time, and so on.)
After receiving the data of the matrix S, the basic processing circuit transmits the data to the next basic processing circuit connected thereto through the data output interface in the horizontal direction (for example, the horizontal data path filled with white in the middle of the basic processing circuit array in fig. 1 b); after receiving the data of the matrix P, the basic processing circuit transmits the data to the next basic processing circuit connected thereto through the vertical data output interface (for example, the vertical data path filled with white in the middle of the basic processing circuit array in fig. 1 b);
each basic processing circuit operates on the received data;
in one alternative, the basic processing circuit calculates the multiplication of one or more groups of two data at a time, and then accumulates the result to a register and/or on-chip cache;
in one alternative, the basic processing circuit computes the inner product of one or more sets of two vectors at a time, and then accumulates the results onto a register and/or on-chip cache;
after the basic processing circuit calculates the result, the result can be transmitted out from the data output interface;
in one alternative, the calculation result may be the final result or an intermediate result of the inner product operation;
specifically, if the basic processing circuit has an output interface directly connected to the main processing circuit, the result is transmitted from the interface, and if not, the result is output in a direction of the basic processing circuit capable of directly outputting to the main processing circuit (for example, in fig. 1b, the lowermost row of basic processing circuits directly outputs the output result thereof to the main processing circuit, and the other basic processing circuits transmit the operation result downward from the vertical output interface).
After receiving the calculation results from other basic processing circuits, the basic processing circuit transmits the data to other basic processing circuits or main processing circuits connected with the basic processing circuit;
outputting the result to a direction capable of being directly output to the main processing circuit (for example, in fig. 1b, the bottom row of basic processing circuits directly outputs the output result to the main processing circuit, and the other basic processing circuits transmit the operation result from the vertical output interface downward);
the main processing circuit receives the inner product operation result of each basic processing circuit, and the output result can be obtained.
Example of the "matrix by matrix" method:
the method uses an array of basic processing circuits arranged as shown in FIG. 1b, assuming h rows and w columns;
a data conversion operation circuit of the main processing circuit converts the matrix S and the matrix P into fixed-point type data;
the control circuit of the main processing circuit divides the M rows of data of the matrix S into h groups, and the ith row of basic processing circuits is responsible for the operation of the ith group (the set of rows in that group of data is recorded as Hi);
here, the method of grouping the rows of data is any grouping method that does not cause repeated allocation;
in one alternative, the following distribution is used: the control circuit of the main processing circuit divides the jth row into jth% h basic processing circuits;
in one alternative, it is also possible to first assign a portion of the rows equally and assign the remaining rows in an arbitrary manner for the case where grouping cannot be averaged.
The control circuit of the main processing circuit divides the N columns of data of the matrix P into w groups, and the ith column of basic processing circuits is responsible for the operation of the ith group (the set of columns in that group of data is denoted as Wi);
here, the method of grouping the columns of data is any grouping method that does not cause repeated allocation;
in one alternative, the following distribution is used: the control circuit of the main processing circuit assigns the jth column to the (j % w)th group;
in an alternative, it is also possible to allocate some columns evenly first for the case where the grouping cannot be averaged, and allocate the remaining columns in an arbitrary manner.
The control circuit of the main processing circuit transmits data in part or all rows of the matrix S to the first basic processing circuit of each row in the basic processing circuit array;
in an alternative, the control circuit of the main processing circuit sends one or more data in one row of data in the ith group of data Hi in charge of the control circuit to the first basic processing circuit in the ith row of the basic processing circuit array at a time;
in an alternative, the control circuit of the main processing circuit sends one or more data of each row in part or all of the ith group of data Hi for which it is responsible to the first basic processing circuit of the ith row in the basic processing circuit array at a time;
the control circuit of the main processing circuit transmits data in part or all columns of the matrix P to the first basic processing circuit of each column in the basic processing circuit array;
in an alternative, the control circuit of the main processing circuit sends one or more data in one column of data in the ith group of data Wi responsible for the control circuit to the first base processing circuit in the ith column of the base processing circuit array at a time;
in an alternative, the control circuit of the main processing circuit sends one or more data of each column in some or all columns of the ith group of data Wi for which it is responsible to the first basic processing circuit of the ith column in the basic processing circuit array at a time;
after receiving the data of the matrix S, the basic processing circuit transmits the data to the next basic processing circuit connected thereto through the data output interface in the horizontal direction (for example, the horizontal data path filled with white in the middle of the basic processing circuit array in fig. 1 b); after receiving the data of the matrix P, the basic processing circuit transmits the data to the next basic processing circuit connected thereto through the vertical data output interface (for example, the vertical data path filled with white in the middle of the basic processing circuit array in fig. 1 b);
each basic processing circuit operates on the received data;
in one alternative, the basic processing circuit calculates the multiplication of one or more groups of two data at a time, and then accumulates the result to a register and/or on-chip cache;
in one alternative, the basic processing circuit computes the inner product of one or more sets of two vectors at a time, and then accumulates the results onto a register and/or on-chip cache;
after the basic processing circuit calculates the result, the result can be transmitted out from the data output interface;
in one alternative, the calculation result may be the final result or an intermediate result of the inner product operation;
specifically, if the basic processing circuit has an output interface directly connected to the main processing circuit, the result is transmitted from the interface, and if not, the result is output in a direction of the basic processing circuit capable of directly outputting to the main processing circuit (for example, the lowermost row of basic processing circuits directly outputs the output result thereof to the main processing circuit, and the other basic processing circuits transmit the operation result downward from the vertical output interface).
After receiving the calculation results from other basic processing circuits, the basic processing circuit transmits the data to other basic processing circuits or main processing circuits connected with the basic processing circuit;
outputting the result in a direction capable of being directly output to the main processing circuit (for example, the bottom row of basic processing circuits directly outputs the output result to the main processing circuit, and the other basic processing circuits transmit the operation result downwards from the vertical output interface);
the main processing circuit receives the inner product operation result of each basic processing circuit, and the output result can be obtained.
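The row/column blocking over an h x w array of basic processing circuits can be illustrated functionally as follows; this is an assumed software sketch of the assignment scheme (row i of S handled by array row i % h, column j of P by array column j % w), not the device itself:

```python
def matrix_times_matrix(S, P, h=2, w=2):
    """The circuit at array position (i % h, j % w) conceptually produces the
    inner products for all output entries (i, j) assigned to it."""
    M, N = len(S), len(P[0])
    C = [[0] * N for _ in range(M)]
    for r in range(h):                        # array row
        for c in range(w):                    # array column
            for i in range(r, M, h):          # rows of S assigned to array row r
                for j in range(c, N, w):      # columns of P assigned to array column c
                    col_j = [row[j] for row in P]
                    C[i][j] = sum(s * p for s, p in zip(S[i], col_j))
    return C

S = [[1, 2], [3, 4]]
P = [[5, 6], [7, 8]]
print(matrix_times_matrix(S, P))   # [[19, 22], [43, 50]]
```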
The terms "horizontal" and "vertical" used in the above description are only used to describe the example shown in fig. 1b, and in practical use, only the "horizontal" and "vertical" interfaces of each unit need to be distinguished to represent two different interfaces.
Using the circuit arrangement to perform a fully connected layer operation:
if the input data of the fully connected layer is a vector (i.e., the input of the neural network is a single sample), the weight matrix of the fully connected layer is taken as the matrix S and the input vector as the vector P, and the operation is performed according to the device's matrix-multiply-vector method;
if the input data of the fully connected layer is a matrix (i.e., the input of the neural network is multiple samples), the weight matrix of the fully connected layer is taken as the matrix S and the input matrix as the matrix P, or the weight matrix is taken as the matrix P and the input matrix as the matrix S, and the operation is performed according to the device's matrix-multiply-matrix method;
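As a rough illustration of this mapping (not the device's dispatch logic), a fully connected layer reduces to a single matrix product in both cases; the distinction only affects whether the input plays the role of the vector P or of the matrix P. The names below are illustrative assumptions.

import numpy as np

def fully_connected(W, X):
    # W plays the role of the matrix S (weight matrix of the fully connected layer).
    # A 1-D input X is the single-sample case (matrix-multiply-vector path);
    # a 2-D input X, one sample per column, is the multi-sample case
    # (matrix-multiply-matrix path).
    W = np.asarray(W, dtype=float)
    X = np.asarray(X, dtype=float)
    return W @ X

Numerically the two paths coincide; they differ only in how the device distributes the data, as described above.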
a method of performing an activation function operation using the circuit arrangement (an illustrative software sketch follows the alternatives listed below):
inputting a vector by using an activation circuit of a main processing circuit, and calculating an activation vector of the vector;
in an alternative scheme, the activation circuit of the main processing circuit calculates a value output to the corresponding position of the output vector by passing each value in the input vector through an activation function (the input of the activation function is a value, and the output is also a value);
in one alternative, the activation function may be: y = max(m, x), where x is the input value, y is the output value, and m is a constant;
in one alternative, the activation function may be: y = tanh(x), where x is the input value and y is the output value;
in one alternative, the activation function may be: y = sigmoid(x), where x is the input value and y is the output value;
in one alternative, the activation function may be a piecewise linear function;
in one alternative, the activation function may be any function that inputs a number and outputs a number.
In one alternative, the sources of the input vector are (including but not limited to):
a source of data external to the device;
in one alternative, the input data comes from the result of matrix multiplication vector operation performed by the device;
in one alternative, the input data comes from the result of a matrix-multiply-matrix operation performed by the device;
in one alternative, the input data comes from a calculation result of the main processing circuit of the device;
in one alternative, the input data comes from the calculation result obtained after the main processing circuit of the device applies the bias.
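A minimal sketch of the element-wise activation step, assuming the activation functions listed above; the function and parameter names are illustrative and not part of the disclosure.

import numpy as np

def apply_activation(vec, kind="relu", m=0.0):
    # Each value of the input vector passes through the activation function and
    # is written to the corresponding position of the output vector.
    x = np.asarray(vec, dtype=float)
    if kind == "relu":       # y = max(m, x), with m a constant (m = 0 gives the usual ReLU)
        return np.maximum(m, x)
    if kind == "tanh":       # y = tanh(x)
        return np.tanh(x)
    if kind == "sigmoid":    # y = sigmoid(x)
        return 1.0 / (1.0 + np.exp(-x))
    raise ValueError("unsupported activation: " + kind)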
A method of using the device to implement BLAS (Basic Linear Algebra Subprograms):
the GEMM calculation means: the operation of matrix-matrix multiplication in the BLAS library. The general representation of this operation is: C = alpha*op(S)*op(P) + beta*C, where S and P are the two input matrices, C is the output matrix, alpha and beta are scalars, op represents some operation on the matrix S or P, and some additional integer parameters describe the width and height of the matrices S and P;
the steps of using the device to implement the GEMM calculation are as follows (a reference software sketch follows this list):
the main processing circuit can convert the data types of the input matrix S and the matrix P before the OP operation;
the conversion circuit of the main processing circuit carries out respective corresponding op operations on the input matrix S and the matrix P;
in one alternative, the op may be a transpose operation of the matrix; the matrix transposition operation is realized by using the vector operation function or the data rearrangement function of the main processing circuit (the main processing circuit has a data rearrangement circuit mentioned above), but in practical application, the OP may also be directly realized by the conversion circuit, for example, when the matrix transposition operation is performed, the OP operation is directly realized by the matrix transposition circuit;
in one alternative, an OP of a certain matrix may be empty, and OP operations are not performed;
the matrix multiplication between op(S) and op(P) is completed by using the matrix-multiply-matrix calculation method;
multiplying each value in the result of op(S)*op(P) by alpha using the arithmetic logic circuit of the main processing circuit;
in one alternative, the multiply by alpha operation is not performed with alpha being 1;
realizing the beta*C operation by using the arithmetic logic circuit of the main processing circuit;
in one alternative, in the case of beta being 1, the multiply by beta operation is not performed;
adding corresponding positions of the matrices alpha*op(S)*op(P) and beta*C by using the arithmetic logic circuit of the main processing circuit;
in one alternative, in the case where beta is 0, no addition operation is performed;
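The GEMM sequence above can be summarized by the following reference sketch in plain NumPy (an illustration of the arithmetic only, not of the device's circuits); the op arguments and the skip rules for alpha and beta mirror the alternatives listed above, and all names are assumptions.

import numpy as np

def gemm(alpha, S, P, beta, C, op_s=None, op_p=None):
    # Computes C_out = alpha * op(S) * op(P) + beta * C.
    S, P, C = (np.asarray(a, dtype=float) for a in (S, P, C))
    opS = op_s(S) if op_s is not None else S      # op on S (e.g. np.transpose), or no op
    opP = op_p(P) if op_p is not None else P      # op on P, or no op
    prod = opS @ opP                              # matrix-multiply-matrix step
    if alpha != 1:                                # multiply-by-alpha skipped when alpha is 1
        prod = alpha * prod
    if beta == 0:                                 # no addition when beta is 0
        return prod
    return prod + (C if beta == 1 else beta * C)  # multiply-by-beta skipped when beta is 1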
the GEMV calculation means: the operation of matrix-vector multiplication in the BLAS library. The general representation of this operation is: C = alpha*op(S)*P + beta*C, where S is the input matrix, P is the input vector, C is the output vector, alpha and beta are scalars, and op represents some operation on the matrix S;
the steps of using the device to implement the GEMV calculation are as follows (a reference software sketch follows this list):
the main processing circuit can convert the data types of the input matrix S and the matrix P before the OP operation;
the conversion circuit of the main processing circuit performs corresponding op operation on the input matrix S;
in one alternative, the op may be a transpose operation of the matrix; the matrix transposition operation is realized by utilizing a matrix transposition circuit of the main processing circuit;
in one alternative, an op of a certain matrix may be empty, and op operations are not performed;
completing the matrix-vector multiplication between the matrix op(S) and the vector P by using the matrix-multiply-vector calculation method;
multiplying each value in the result of op(S)*P by alpha using the arithmetic logic circuit of the main processing circuit;
in one alternative, the multiply by alpha operation is not performed with alpha being 1;
the arithmetic logic circuit of the main processing circuit is utilized to realize the beta*C operation;
in one alternative, in the case of beta being 1, the multiply by beta operation is not performed;
adding corresponding positions of the vectors alpha*op(S)*P and beta*C by using the arithmetic logic circuit of the main processing circuit;
in one alternative, in the case where beta is 0, no addition operation is performed;
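Analogously, a reference sketch of the GEMV sequence (again an illustration of the arithmetic rather than of the device's arithmetic logic circuit; names are assumptions).

import numpy as np

def gemv(alpha, S, p, beta, c, op_s=None):
    # Computes c_out = alpha * op(S) * p + beta * c, with p and c vectors.
    S = np.asarray(S, dtype=float)
    p = np.asarray(p, dtype=float)
    c = np.asarray(c, dtype=float)
    opS = op_s(S) if op_s is not None else S      # op on S (e.g. np.transpose), or no op
    prod = opS @ p                                # matrix-multiply-vector step
    if alpha != 1:                                # multiply-by-alpha skipped when alpha is 1
        prod = alpha * prod
    if beta == 0:                                 # no addition when beta is 0
        return prod
    return prod + (c if beta == 1 else beta * c)  # multiply-by-beta skipped when beta is 1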
implementing data type conversion
The data type conversion operation circuit of the main processing circuit is used for realizing the conversion of the data type;
in one alternative, the forms of data type conversion include, but are not limited to: conversion of floating-point numbers into fixed-point numbers, conversion of fixed-point numbers into floating-point numbers, and the like (a simple sketch follows);
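For illustration, a simple fixed-point scheme of the kind such a conversion circuit might implement; the bit widths, the scaling by a power of two, and the function names are assumptions, not values specified by the disclosure.

import numpy as np

def float_to_fixed(x, frac_bits=8, total_bits=16):
    # Scale by 2**frac_bits, round, and clamp to the signed fixed-point range.
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    q = np.round(np.asarray(x, dtype=float) * (1 << frac_bits))
    return np.clip(q, lo, hi).astype(np.int32)

def fixed_to_float(q, frac_bits=8):
    # Inverse conversion: divide the stored integer by the same scale factor.
    return np.asarray(q, dtype=np.float64) / (1 << frac_bits)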
The disclosure also discloses a combined processing device, which includes the above neural network arithmetic device, a universal interconnection interface, and other processing devices (i.e., general-purpose processing devices). The neural network arithmetic device interacts with the other processing devices to jointly complete the operation specified by the user. Fig. 3 is a schematic diagram of the combined processing device.
The other processing devices include one or more types of general-purpose/special-purpose processors such as a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a neural network processor, and the like. The number of processors included in the other processing devices is not limited. The other processing devices serve as the interface between the neural network arithmetic device and external data and control, performing data transfer and basic control of the neural network arithmetic device such as starting and stopping; the other processing devices can also cooperate with the neural network arithmetic device to complete computing tasks.
The universal interconnection interface is used for transmitting data and control instructions between the neural network arithmetic device and the other processing devices. The neural network arithmetic device acquires required input data from the other processing devices and writes the input data into a storage device on the neural network arithmetic device chip; it can obtain control instructions from the other processing devices and write them into a control cache on the neural network arithmetic device chip; it can also read the data in the storage module of the neural network arithmetic device and transmit the data to the other processing devices.
As shown in fig. 4, the structure may further include a storage device for storing data required by the present arithmetic unit/arithmetic device or other arithmetic units; it is particularly suitable for data to be calculated that cannot be completely stored in the internal storage of the present neural network arithmetic device or the other processing devices.
The combined processing device can be used as an SoC (system on chip) of equipment such as a mobile phone, a robot, an unmanned aerial vehicle, or video monitoring equipment, effectively reducing the core area of the control part, increasing the processing speed, and reducing the overall power consumption. In this case, the universal interconnection interface of the combined processing device is connected to certain components of the equipment, such as a camera, a display, a mouse, a keyboard, a network card, or a wifi interface.
Embodiments of the present disclosure provide a neural network processor board card that may be used in numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, smart homes, appliances, multiprocessor systems, microprocessor-based systems, robots, programmable consumer electronics, network Personal Computers (PCs), minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
Referring to fig. 5a, fig. 5a is a schematic structural diagram of a neural network processor board card according to an embodiment of the disclosure. As shown in fig. 5a, the neural network processor board 10 includes a neural network chip package structure 11, a first electrical and non-electrical connection device 12, and a first substrate (substrate) 13.
The present disclosure does not limit the specific structure of the neural network chip package structure 11, and optionally, as shown in fig. 5b, the neural network chip package structure 11 includes: a neural network chip 111, a second electrical and non-electrical connection device 112, and a second substrate 113.
The specific form of the neural network chip 111 related to the present disclosure is not limited, and the neural network chip 111 includes, but is not limited to, a neural network chip integrating a neural network processor; the neural network chip may be made of silicon material, germanium material, quantum material, molecular material, or the like. The neural network chip can be packaged according to practical conditions (such as a harsher environment) and different application requirements, so that most of the neural network chip is wrapped, and the pins on the neural network chip are connected to the outer side of the packaging structure through conductors such as gold wires for circuit connection with a further outer layer.
The present disclosure is not limited to the specific structure of the neural network chip 111, and please refer to the apparatus shown in fig. 1a or fig. 1 b.
The type of the first substrate 13 and the second substrate 113 is not limited in this disclosure, and may be a Printed Circuit Board (PCB) or a Printed Wiring Board (PWB), and may be other circuit boards. The material of the PCB is not limited.
The second substrate 113 according to the present disclosure is used for carrying the neural network chip 111; the neural network chip package structure 11, obtained by connecting the neural network chip 111 and the second substrate 113 through the second electrical and non-electrical connection device 112, protects the neural network chip 111 and facilitates the further packaging of the neural network chip package structure 11 with the first substrate 13.
The specific packaging method and the corresponding structure of the second electrical and non-electrical connection device 112 are not limited; an appropriate packaging method can be selected according to actual conditions and different application requirements, and can be simply improved, for example: Flip Chip Ball Grid Array Package (FCBGAP), Low-profile Quad Flat Package (LQFP), Quad Flat Package with Heat sink (HQFP), Quad Flat Non-lead Package (QFN), or Fine-pitch Ball Grid Array Package (FBGA).
The Flip Chip (Flip Chip) is suitable for the conditions of high requirements on the area after packaging or sensitivity to the inductance of a lead and the transmission time of a signal. In addition, a Wire Bonding (Wire Bonding) packaging mode can be used, so that the cost is reduced, and the flexibility of a packaging structure is improved.
The Ball Grid Array (BGA) can provide more pins, the average wire length of the pins is short, and it can transmit signals at high speed; the package can be replaced by a Pin Grid Array (PGA), Zero Insertion Force (ZIF), Single Edge Contact Connection (SECC), Land Grid Array (LGA), and the like.
Optionally, the neural network chip 111 and the second substrate 113 are packaged in a Flip Chip Ball Grid Array (FCBGA) packaging manner, and a schematic diagram of a specific neural network chip packaging structure may refer to fig. 6. As shown in fig. 6, the neural network chip package structure includes: the neural network chip 21, the bonding pad 22, the solder ball 23, the second substrate 24, the connection point 25 on the second substrate 24, and the pin 26.
The bonding pads 22 are connected to the neural network chip 21, and the solder balls 23 are formed between the bonding pads 22 and the connection points 25 on the second substrate 24 by soldering, so that the neural network chip 21 and the second substrate 24 are connected, that is, the package of the neural network chip 21 is realized.
The pins 26 are used for connecting with an external circuit of the package structure (for example, the first substrate 13 on the neural network processor board 10), so as to realize transmission of external data and internal data, and facilitate processing of data by the neural network chip 21 or a neural network processor corresponding to the neural network chip 21. The present disclosure is also not limited to the type and number of pins, and different pin types can be selected according to different packaging technologies and arranged according to certain rules.
Optionally, the neural network chip packaging structure further includes an insulating filler disposed in the gap between the pad 22, the solder ball 23 and the connection point 25, which is used for preventing interference between solder balls.
Wherein, the material of the insulating filler can be silicon nitride, silicon oxide or silicon oxynitride; the interference includes electromagnetic interference, inductive interference, and the like.
Optionally, the neural network chip package structure further includes a heat dissipation device for dissipating heat generated when the neural network chip 21 operates. The heat dissipation device may be a metal plate with good thermal conductivity, a heat sink, or a heat dissipator such as a fan.
For example, as shown in fig. 6a, the neural network chip package structure 11 includes: the neural network chip 21, the bonding pad 22, the solder ball 23, the second substrate 24, the connection point 25 on the second substrate 24, the pin 26, the insulating filler 27, the thermal grease 28 and the metal housing heat sink 29. The thermal grease 28 and the metal housing heat sink 29 are used to dissipate heat generated during operation of the neural network chip 21.
Optionally, the neural network chip package structure 11 further includes a reinforcing structure connected to the bonding pad 22 and embedded in the solder ball 23 to enhance the connection strength between the solder ball 23 and the bonding pad 22.
The reinforcing structure may be a metal wire structure or a columnar structure, which is not limited herein.
The present disclosure is not limited to the specific form of the first electrical and non-electrical connection device 12; reference may be made to the description of the second electrical and non-electrical connection device 112, that is, the neural network chip package structure 11 may be packaged by soldering, or a connection wire or a plug connection may be used to connect the second substrate 113 and the first substrate 13, so as to facilitate subsequent replacement of the first substrate 13 or the neural network chip package structure 11.
Optionally, the first substrate 13 includes an interface for a memory unit for expanding the storage capacity, for example: Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate SDRAM (DDR SDRAM), etc.; expanding the memory improves the processing capability of the neural network processor.
The first substrate 13 may further include a Peripheral component interconnect Express (PCI-E or PCIe) interface, a Small Form-factor pluggable (SFP) interface, an ethernet interface, a Controller Area Network (CAN) interface, and the like on the first substrate, for data transmission between the package structure and the external circuit, which may improve the operation speed and the convenience of operation.
The neural network processor is packaged into a neural network chip 111, the neural network chip 111 is packaged into a neural network chip packaging structure 11, the neural network chip packaging structure 11 is packaged into a neural network processor board card 10, and data interaction is performed with an external circuit (for example, a computer motherboard) through an interface (a slot or a plug core) on the board card, that is, the function of the neural network processor is directly realized by using the neural network processor board card 10, and the neural network chip 111 is protected. And other modules can be added to the neural network processor board card 10, so that the application range and the operation efficiency of the neural network processor are improved.
In one embodiment, the present disclosure discloses an electronic device comprising the above neural network processor board card 10 or the neural network chip package 11.
Electronic devices include data processing devices, robots, computers, printers, scanners, tablets, smart terminals, cell phones, tachographs, navigators, sensors, cameras, servers, cameras, video cameras, projectors, watches, headphones, mobile storage, wearable devices, vehicles, home appliances, and/or medical devices.
The vehicle comprises an airplane, a ship and/or a vehicle; the household appliances comprise a television, an air conditioner, a microwave oven, a refrigerator, an electric cooker, a humidifier, a washing machine, an electric lamp, a gas stove and a range hood; the medical equipment comprises a nuclear magnetic resonance apparatus, a B-ultrasonic apparatus and/or an electrocardiograph.
The above-described embodiments, objects, technical solutions and advantages of the present disclosure are further described in detail, it should be understood that the above-described embodiments are only illustrative of the embodiments of the present disclosure, and are not intended to limit the present disclosure, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (15)

1. An integrated circuit chip apparatus, comprising: a main processing circuit and a plurality of basic processing circuits; the main processing circuit is physically connected with the plurality of basic processing circuits; the main processing circuit includes: a data type arithmetic circuit; the data type arithmetic circuit is used for executing conversion between floating point type data and fixed point type data;
the plurality of basic processing circuits are distributed in an array; each basic processing circuit is connected with other adjacent basic processing circuits, and the main processing circuit is connected with the n basic processing circuits of the 1st row, the n basic processing circuits of the mth row and the m basic processing circuits of the 1st column;
the main processing circuit is used for acquiring an input data block, a weight data block and a multiplication instruction, converting the input data block and the weight data block into a fixed-point type input data block and a fixed-point type weight data block through the data type operation circuit, dividing the fixed-point type input data block into a distribution data block according to the multiplication instruction, and dividing the fixed-point type weight data block into a broadcast data block; splitting the distributed data block to obtain a plurality of basic data blocks, selectively distributing the plurality of basic data blocks to at least one basic processing circuit in basic processing circuits connected with the main processing circuit, and broadcasting the broadcast data block to the basic processing circuit connected with the main processing circuit;
the plurality of basic processing circuits are used for executing operation in the neural network in a parallel mode according to the fixed-point type broadcast data block and the fixed-point type basic data block to obtain an operation result, and transmitting the operation result to the main processing circuit through the basic processing circuit connected with the main processing circuit;
and the main processing circuit is used for processing the operation result to obtain an instruction result of the multiplication instruction.
2. The integrated circuit chip apparatus of claim 1,
the plurality of basic processing circuits are specifically configured to perform multiplication operations on the broadcast data block and the received basic data block according to a fixed-point data type to obtain a product result of the fixed-point data type, and transmit the product result as an operation result to the main processing circuit through the basic processing circuit connected to the main processing circuit;
the main processing circuit is used for converting the product result of the fixed-point data type into the product result of the floating-point type through the data type operation circuit, performing accumulation operation on the product result of the floating-point type to obtain an accumulation result, and sequencing the accumulation result to obtain the instruction result.
3. The integrated circuit chip apparatus of claim 1,
the plurality of basic processing circuits are specifically configured to perform an inner product operation on the broadcast data block and the received basic data block in a fixed-point data type to obtain an inner product result of the fixed-point data type, and transmit the inner product result as an operation result to the main processing circuit through the basic processing circuit connected to the main processing circuit;
and the main processing circuit is used for converting the inner product result into a floating-point type inner product result through the data type operation circuit, and sequencing the inner product result to obtain the instruction result.
4. The integrated circuit chip apparatus according to any one of claims 1 to 3,
the main processing circuit is specifically configured to broadcast the broadcast data block to a basic processing circuit connected to the main processing circuit at a time.
5. The integrated circuit chip apparatus according to any one of claims 1 to 3,
the main processing circuit is specifically configured to divide the broadcast data block into a plurality of partial broadcast data blocks, and broadcast the plurality of partial broadcast data blocks to a basic processing circuit connected to the main processing circuit by multiple times.
6. The integrated circuit chip apparatus of claim 5,
the basic processing circuit is specifically configured to perform an inner product processing on the partial broadcast data block and the basic data block in a fixed-point data type to obtain an inner product processing result, and send the inner product processing result to the main processing circuit.
7. The integrated circuit chip apparatus of claim 5,
the basic processing circuit is specifically configured to reuse the partial broadcast data block p times, performing inner product operations between the partial broadcast data block and p basic data blocks respectively to obtain p partial processing results, and to send the p partial processing results to the main processing circuit, where p is an integer greater than or equal to 2.
8. The integrated circuit chip apparatus of claim 1,
the main processing circuit includes: a main register or a main on-chip cache circuit;
the base processing circuit includes: basic registers or basic on-chip cache circuits.
9. The integrated circuit chip apparatus of claim 8,
the main processing circuit includes: one or any combination of vector arithmetic unit circuit, arithmetic logic unit circuit, accumulator circuit, matrix transposition circuit, direct memory access circuit or data rearrangement circuit.
10. The integrated circuit chip apparatus of claim 1,
the input data block is: a vector or matrix;
the weight data block is: a vector or a matrix.
11. A neural network operation device, comprising one or more integrated circuit chip devices as claimed in any one of claims 1 to 10.
12. A combined processing apparatus, characterized in that the combined processing apparatus comprises: the neural network operation device of claim 11, a universal interconnection interface, and a general-purpose processing device;
the neural network operation device is connected with the general processing device through the general interconnection interface.
13. A chip incorporating the device of any one of claims 1-12.
14. An electronic device, characterized in that the electronic device comprises a chip according to claim 13.
15. A method of operation of a neural network, the method being implemented within an integrated circuit chip device, the integrated circuit chip device comprising: the integrated circuit chip apparatus of any one of claims 1-10, the integrated circuit chip apparatus to perform a matrix-by-matrix operation, a matrix-by-vector operation, or a vector-by-vector operation of a neural network.
CN201711455388.4A 2017-12-27 2017-12-27 Integrated circuit chip device and related product Active CN109978152B (en)

Priority Applications (13)

Application Number Priority Date Filing Date Title
CN201711455388.4A CN109978152B (en) 2017-12-27 2017-12-27 Integrated circuit chip device and related product
EP20201907.1A EP3783477B1 (en) 2017-12-27 2018-12-26 Integrated circuit chip device
PCT/CN2018/123929 WO2019129070A1 (en) 2017-12-27 2018-12-26 Integrated circuit chip device
EP18896519.8A EP3719712B1 (en) 2017-12-27 2018-12-26 Integrated circuit chip device
EP20203232.2A EP3789871B1 (en) 2017-12-27 2018-12-26 Integrated circuit chip device
US16/903,304 US11544546B2 (en) 2017-12-27 2020-06-16 Integrated circuit chip device
US17/134,444 US11748601B2 (en) 2017-12-27 2020-12-27 Integrated circuit chip device
US17/134,446 US11748603B2 (en) 2017-12-27 2020-12-27 Integrated circuit chip device
US17/134,487 US11748605B2 (en) 2017-12-27 2020-12-27 Integrated circuit chip device
US17/134,445 US11748602B2 (en) 2017-12-27 2020-12-27 Integrated circuit chip device
US17/134,435 US11741351B2 (en) 2017-12-27 2020-12-27 Integrated circuit chip device
US17/134,486 US11748604B2 (en) 2017-12-27 2020-12-27 Integrated circuit chip device
US18/073,924 US11983621B2 (en) 2017-12-27 2022-12-02 Integrated circuit chip device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711455388.4A CN109978152B (en) 2017-12-27 2017-12-27 Integrated circuit chip device and related product

Publications (2)

Publication Number Publication Date
CN109978152A CN109978152A (en) 2019-07-05
CN109978152B true CN109978152B (en) 2020-05-22

Family

ID=67074163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711455388.4A Active CN109978152B (en) 2017-12-27 2017-12-27 Integrated circuit chip device and related product

Country Status (1)

Country Link
CN (1) CN109978152B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111158967B (en) 2019-12-31 2021-06-08 北京百度网讯科技有限公司 Artificial intelligence chip testing method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104572011A (en) * 2014-12-22 2015-04-29 上海交通大学 FPGA (Field Programmable Gate Array)-based general matrix fixed-point multiplier and calculation method thereof
CN105426344A (en) * 2015-11-09 2016-03-23 南京大学 Matrix calculation method of distributed large-scale matrix multiplication based on Spark
CN106126481A (en) * 2016-06-29 2016-11-16 华为技术有限公司 A kind of computing engines and electronic equipment
CN106844294A (en) * 2016-12-29 2017-06-13 华为机器有限公司 Convolution algorithm chip and communication equipment
CN107229967A (en) * 2016-08-22 2017-10-03 北京深鉴智能科技有限公司 A kind of hardware accelerator and method that rarefaction GRU neutral nets are realized based on FPGA

Also Published As

Publication number Publication date
CN109978152A (en) 2019-07-05

Similar Documents

Publication Publication Date Title
US11748605B2 (en) Integrated circuit chip device
CN109961138B (en) Neural network training method and related product
CN109961136B (en) Integrated circuit chip device and related product
CN109978131B (en) Integrated circuit chip apparatus, method and related product
CN109961134B (en) Integrated circuit chip device and related product
WO2019114842A1 (en) Integrated circuit chip apparatus
CN109961135B (en) Integrated circuit chip device and related product
TWI767098B (en) Method for neural network forward computation and related product
CN109977446B (en) Integrated circuit chip device and related product
CN109978152B (en) Integrated circuit chip device and related product
CN109960673B (en) Integrated circuit chip device and related product
CN109978157B (en) Integrated circuit chip device and related product
CN110197267B (en) Neural network processor board card and related product
CN110197264B (en) Neural network processor board card and related product
CN109978148B (en) Integrated circuit chip device and related product
CN109978156B (en) Integrated circuit chip device and related product
CN109961133B (en) Integrated circuit chip device and related product
WO2019165946A1 (en) Integrated circuit chip device, board card and related product
CN109978153B (en) Integrated circuit chip device and related product
CN109961137B (en) Integrated circuit chip device and related product
CN109978158B (en) Integrated circuit chip device and related product

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100000 room 644, No. 6, No. 6, South Road, Beijing Academy of Sciences

Applicant after: Zhongke Cambrian Technology Co., Ltd

Address before: 100000 room 644, No. 6, No. 6, South Road, Beijing Academy of Sciences

Applicant before: Beijing Zhongke Cambrian Technology Co., Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant