CN113597621A - Computing resource allocation technique and neural network system - Google Patents

Computing resource allocation technique and neural network system

Info

Publication number
CN113597621A
Authority
CN
China
Prior art keywords: neural network, layer, computing, units, output data
Prior art date
Legal status: Pending
Application number
CN201880100574.2A
Other languages
Chinese (zh)
Inventor
刘哲
曾重
王铁英
段小祥
张慧敏
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of CN113597621A

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/06 — Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 — Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means


Abstract

The present application provides a computing resource allocation technique and a neural network system. The neural network system comprises a processor and a plurality of neural network chips connected to the processor, and each neural network chip comprises a plurality of computing units with an integrated storage-and-computation function. The processor configures, according to the output data volume of each neural network layer in the neural network system, the computing units that execute the operation of that neural network layer, so that the computing power of the computing units executing the operations of adjacent neural network layers is matched. The neural network system provided by the present application can be applied to the field of artificial intelligence and improves the data processing efficiency of the neural network system.

Description

Computing resource allocation technique and neural network system
Technical Field
The present application relates to the field of computer technologies, and in particular, to a computing resource allocation technique and a neural network system.
Background
Deep Learning (DL) is an important branch of Artificial Intelligence (AI); it uses a neural network that simulates the structure of the human brain and can achieve a better recognition effect than traditional shallow learning methods. The Convolutional Neural Network (CNN) is one of the most common deep learning architectures and the most widely studied deep learning method. A typical field of application of convolutional neural networks is image processing. Image processing recognizes and analyzes an input image and finally outputs a set of classified image contents. For example, a convolutional neural network algorithm can be used to extract and classify the body color, license plate number, and vehicle type of a vehicle in one picture.
Convolutional neural networks are typically constructed from a sequence of three kinds of layers: the features of a picture are extracted by a convolutional layer, a pooling layer, and a rectified linear unit (ReLU). The process of extracting picture features is in fact a series of matrix operations (e.g., matrix multiply-add operations). Therefore, how to process pictures in the network in parallel and quickly has become a problem to be studied for convolutional neural networks.
Disclosure of Invention
The computing resource allocation technique and the neural network system provided by the present application can improve the data processing speed in a neural network.
In a first aspect, an embodiment of the present invention provides a method for allocating computing resources applied in a neural network system. The method may be performed by a host connected to the neural network chip. According to the method, after the data volume of first output data of a first neural network layer and the data volume of second output data of a second neural network layer in the neural network system are obtained, N first weights to be configured for the first neural network layer and M second weights to be configured for the second neural network layer are determined according to deployment requirements of the neural network system. Further, according to the calculation specification of the calculation unit in the neural network system, the N first weights are deployed on the P calculation units, and the M second weights are deployed on the Q calculation units. Wherein the input data of the second neural network layer includes the first output data, N and M are both positive integers, and a ratio of N to M corresponds to a ratio of a data amount of the first output data to a data amount of the second output data. P and Q are positive integers, the P computational units are used for executing the operation of the first neural network layer, and the Q computational units are used for executing the operation of the second neural network layer.
According to the computing resource allocation method provided by the embodiment of the invention, when the computing unit executing each layer of neural network operation is configured according to the deployment requirement, the data volume output by the adjacent neural network layers is considered, so that the computing capacities of the computing nodes executing different neural network layer operations are matched, the computing capacity of the computing node executing each layer of neural network operation can be fully utilized, and the data processing efficiency is improved.
With reference to the first aspect, in a possible implementation manner, the deployment requirement includes a computation delay, and the first neural network layer is the starting layer of all neural network layers in the neural network system. Determining the N first weights to be configured for the first neural network layer and the M second weights to be configured for the second neural network layer includes: determining the value of N according to the data volume of the first output data, the computation delay, and the computation frequency of a resistive random access memory crossbar (ReRAM crossbar) in a computing unit; and determining the value of M according to the ratio of the data volume of the first output data to the data volume of the second output data and the value of N.
Specifically, in a possible implementation manner, when the first neural network layer is a starting layer of all neural network layers in the neural network system, the value of N may be obtained according to the following formula:
W_{num}^{1} = \frac{Row_{out}^{1} \times Col_{out}^{1}}{t \times f}

where W_{num}^{1} denotes the number N of weights to be configured for the first-layer neural network, Row_{out}^{1} is the number of rows of the output data of the first-layer neural network, and Col_{out}^{1} is the number of columns of the output data of the first-layer neural network. t is the set calculation delay, and f is the calculation frequency of the crossbars in the computing unit. The value of M may then be calculated according to the relation N/M = (data amount of the first output data)/(data amount of the second output data).
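As an illustration only, the following Python sketch applies this formula; the layer sizes, delay, and crossbar frequency are hypothetical values, not taken from the application, and rounding up to whole weight copies is an added assumption.

```python
import math

def weights_for_start_layer(rows_out, cols_out, delay_s, crossbar_freq_hz):
    # N = (rows x cols) / (t x f); rounded up here so N is a whole number of copies.
    return math.ceil((rows_out * cols_out) / (delay_s * crossbar_freq_hz))

def weights_for_second_layer(n_first, first_out_elems, second_out_elems):
    # N / M = (first output data amount) / (second output data amount).
    return math.ceil(n_first * second_out_elems / first_out_elems)

# Hypothetical figures: 224x224 first-layer output, 1 ms latency budget, 10 MHz crossbars.
N = weights_for_start_layer(224, 224, delay_s=0.001, crossbar_freq_hz=10e6)
M = weights_for_second_layer(N, 224 * 224, 112 * 112)
print(N, M)  # 6 2 -- roughly the 4:1 ratio of the two layers' output data amounts
```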
With reference to the first aspect, in yet another possible implementation manner, the neural network system includes a plurality of neural network chips, each neural network chip includes a plurality of computing units, each computing unit includes at least one resistive random access memory crossbar (ReRAM crossbar), and the deployment requirement includes the number of chips of the neural network system. When the first neural network layer is the starting layer of the neural network system, determining the N first weights to be configured for the first neural network layer and the M second weights to be configured for the second neural network layer includes: determining the value of N according to the number of chips, the number of ReRAM crossbars in each chip, the number of ReRAM crossbars required for deploying one weight of each layer of the neural network, and the ratio of the output data amounts of adjacent neural network layers; and determining the value of M according to the ratio of the data volume of the first output data to the data volume of the second output data and the value of N.
Specifically, in a possible implementation manner, when the deployment requirement is the number of chips required by the neural network system and the first neural network layer is the starting layer of the neural network system, the N first weights to be configured for the first neural network layer and the M second weights to be configured for the second neural network layer may be obtained according to the following two formulas, where the value of N is the value of W_{num}^{1} and the value of M is the value of W_{num}^{2}:

xb_{1} \times W_{num}^{1} + xb_{2} \times W_{num}^{2} + \dots + xb_{n} \times W_{num}^{n} \leq K \times L

W_{num}^{i} = W_{num}^{i-1} \times \frac{Row_{out}^{i} \times Col_{out}^{i}}{Row_{out}^{i-1} \times Col_{out}^{i-1}}

where xb_{1} represents the number of crossbars required to deploy one weight of the first-layer (or starting-layer) neural network, W_{num}^{1} represents the number of weights required by the starting layer, xb_{2} represents the number of crossbars required to deploy one weight of the second-layer neural network, W_{num}^{2} represents the number of weights required by the second-layer neural network, xb_{n} represents the number of crossbars required to deploy one weight of the nth-layer neural network, W_{num}^{n} represents the number of weights required by the nth-layer neural network, K is the number of chips of the neural network system specified by the deployment requirement, and L is the number of crossbars in each chip. W_{num}^{i} represents the number of weights required by the ith layer, W_{num}^{i-1} represents the number of weights required by the (i-1)th layer, Row_{out}^{i} and Col_{out}^{i} represent the number of rows and columns of the output data of the ith layer, and Row_{out}^{i-1} and Col_{out}^{i-1} represent the number of rows and columns of the output data of the (i-1)th layer. The value of i may range from 2 to n, where n is the total number of neural network layers in the neural network system.
With reference to the first aspect, in yet another possible implementation manner, the neural network system includes a plurality of neural network chips, each neural network chip includes a plurality of secondary computing nodes, each secondary computing node includes a plurality of computing units, and the method further includes mapping the P computing units and the Q computing units to a plurality of secondary computing nodes according to the number of computing units included in each secondary computing node in the neural network system, where at least a portion of the P computing units and at least a portion of the Q computing units are mapped into the same secondary computing node. In this manner, the computing units executing the operations of adjacent neural network layers can be located in the same secondary computing node as much as possible, so that the amount of data transmitted between computing nodes can be reduced and the speed of data transmission between different neural network layers can be improved.
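A minimal sketch of this co-location idea, assuming a hypothetical number of computing units per secondary node (tile); the simple greedy packing below is an illustration rather than the application's prescribed mapping algorithm.

```python
def pack_into_tiles(p_units, q_units, units_per_tile):
    """Fill tiles in order with the P units of layer i followed by the Q units of
    layer i+1, so that the boundary tile holds units of both adjacent layers."""
    units = [("layer_i", u) for u in range(p_units)] + \
            [("layer_i+1", u) for u in range(q_units)]
    return [units[k:k + units_per_tile] for k in range(0, len(units), units_per_tile)]

# Hypothetical: P = 6 units for layer i, Q = 2 units for layer i+1, 4 units per tile.
for idx, members in enumerate(pack_into_tiles(6, 2, 4)):
    print(f"tile {idx}: {members}")
# tile 1 ends up holding two layer_i units and both layer_i+1 units, so part of
# layer i's output is handed to layer i+1 without leaving the secondary node.
```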
In yet another possible implementation manner, the method further includes mapping the plurality of secondary computing nodes to which the P computing units and the Q computing units are mapped into the plurality of neural network chips according to the number of secondary computing nodes included in each neural network chip, where at least a part of the secondary computing nodes to which the P computing units belong and at least a part of the secondary computing nodes to which the Q computing units belong are mapped into the same neural network chip. In this manner, the secondary computing nodes executing the operations of adjacent neural network layers can be located in the same neural network chip as much as possible, which further reduces the amount of data transmitted between computing nodes and improves the speed of data transmission between different neural network layers.
With reference to the first aspect and the foregoing possible implementation manners of the first aspect, in yet another possible implementation manner, the correspondence between the ratio of N to M and the ratio of the data amount of the first output data to the data amount of the second output data includes: the ratio of N to M is the same as the ratio of the data amount of the first output data to the data amount of the second output data.
In a second aspect, the present application provides a neural network system, including a host and a plurality of neural network chips, where each neural network chip includes a plurality of computing units, and the host is connected to the plurality of neural network chips and is configured to execute the method for allocating computing resources described in the first aspect or any one of the possible implementation manners of the first aspect.
In a third aspect, the present application provides a resource allocation apparatus, which includes a functional module capable of executing the method for allocating computing resources described in the first aspect and any one of the possible implementation manners of the first aspect.
In a fourth aspect, the present application further provides a computer program product, which includes program code including instructions to be executed by a computer to implement the method for allocating computing resources described in the first aspect and any one of the possible implementation manners of the first aspect.
In a fifth aspect, the present application further provides a computer-readable storage medium for storing program code, where the program code includes instructions to be executed by a computer to implement the method for allocating computing resources described in the foregoing first aspect and any one of the possible implementations of the first aspect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention.
Fig. 1 is a schematic structural diagram of a neural network system according to an embodiment of the present invention;
fig. 1A is a schematic structural diagram of another neural network system according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a compute node in a neural network chip according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a logic structure of a neural network layer in a neural network system according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a set of computing nodes for processing data of different neural network layers in a neural network system according to an embodiment of the present invention;
FIG. 5 is a flowchart of a method for allocating computing resources in a neural network system according to an embodiment of the present invention;
FIG. 6 is a flowchart of another method for allocating computing resources according to an embodiment of the present invention;
fig. 6A is a flowchart of a resource mapping method according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating another method for allocating computing resources according to an embodiment of the present invention;
FIG. 8 is a flow chart of a data processing method according to an embodiment of the present invention;
FIG. 9 is a weight diagram according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a resistive random access memory crossbar (ReRAM crossbar) according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a resource allocation apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present invention will be clearly described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely some, and not all, of the embodiments of the present invention.
Deep Learning (DL) is an important branch of Artificial Intelligence (AI), and is a neural network for simulating human brain structure, and can achieve better recognition effect than the traditional shallow Learning mode. An Artificial Neural Network (ANN), referred to as Neural Network (NN) or Neural Network-like Network for short, is a mathematical model or computational model that imitates the structure and function of a biological Neural Network (central nervous system of animals, especially the brain) in the field of machine learning and cognitive science, and is used for estimating or approximating functions. The artificial Neural Network may include a Convolutional Neural Network (CNN), a Deep Neural Network (DNN), a Multilayer Perceptron (MLP), and the like. Fig. 1 is a schematic structural diagram of an artificial neural network system according to an embodiment of the present invention. Fig. 1 illustrates a convolutional neural network as an example. As shown in fig. 1, convolutional neural network system 100 may include a host 105 and convolutional neural network circuitry 110. The convolutional neural network circuit 110 may also be referred to as a neural network accelerator. The convolutional neural network circuit 110 is connected to the host 105 through a host interface. The host interface may include a standard host interface as well as a network interface (network interface). For example, the host interface may include a Peripheral Component Interconnect Express (PCIE) interface. As shown in fig. 1, convolutional neural network circuitry 110 may be connected to host 105 via PCIE bus 106. Therefore, data can be input into the convolutional neural network circuit 110 through the PCIE bus 106, and the data after the processing by the convolutional neural network circuit 110 is received through the PCIE bus 106. Furthermore, the host 105 may also monitor the operating state of the convolutional neural network circuit 110 through a host interface.
Host 105 may include a processor 1052 and a memory 1054. It should be noted that, in addition to the devices shown in fig. 1, the host 105 may further include other devices such as a communication interface and a magnetic disk as an external storage, which is not limited herein.
The processor 1052 is the computational core and control unit of the host 105. The processor 1052 may include multiple processor cores. The processor 1052 may be a very-large-scale integrated circuit. An operating system and other software programs are installed on the processor 1052, so that the processor 1052 is able to access the memory 1054, caches, disks, and peripheral devices such as the neural network circuit in fig. 1. It is understood that, in the embodiment of the present invention, a core in the processor 1052 may be, for example, a Central Processing Unit (CPU) or an Application-Specific Integrated Circuit (ASIC).
The memory 1054 is the main memory of the host 105. The memory 1054 is coupled to the processor 1052 via a Double Data Rate (DDR) bus. Memory 1054 is typically used to store various operating systems running software, input and output data, and information exchanged with external memory. In order to increase the access speed of the processor 1052, the memory 1054 needs to have an advantage of high access speed. In a conventional computer system architecture, a Dynamic Random Access Memory (DRAM) is usually used as the Memory 1054. The processor 1052 is capable of accessing the memory 1054 at high speed through a memory controller (not shown in fig. 1) to perform read and write operations on any one of the memory locations in the memory 1054.
The Convolutional Neural Network (CNN) circuit 110 is a chip array composed of a plurality of Neural Network (NN) chips (chips). For example, as shown in fig. 1, the CNN circuit 110 includes a plurality of NN chips 115 and a plurality of routers 120. For convenience of description, the NN chip115 in the application is simply referred to as the chip115 in the embodiment of the present invention. The plurality of chips 115 are connected to each other through the router 120. For example, one chip115 may be connected to one or more routers 120. The plurality of routers 120 may comprise one or more network topologies. The chips 115 may communicate data therebetween over the one or more network topologies. For example, the plurality of routers 120 may form a first network 1106 and a second network 1108, where the first network 1106 is a ring network and the second network 1108 is a two-dimensional mesh (2D mesh) network. Thus, data input from the input port 1102 can be sent to the corresponding chip115 through the network formed by the plurality of routers 120, and data processed by any one chip115 can also be sent to other chips 115 through the network formed by the plurality of routers 120 to be processed or sent out from the output port 1104.
Further, fig. 1 also shows a schematic structural diagram of the chip 115. As shown in fig. 1, the chip115 may include a plurality of neural network processing units 125 and a plurality of routers 122. Fig. 1 is a diagram illustrating an example of a tile (tile) as a neural network processing unit. In the architecture of the data processing chip115 shown in fig. 1, a tile125 may be connected to one or more routers 122. The plurality of routers 122 in chip115 may constitute one or more network topologies. The tiles 125 may communicate data therebetween via the various network topologies. For example, the plurality of routers 122 may constitute a first network 1156 and a second network 1158, wherein the first network 1156 is a ring network and the second network 1158 is a two-dimensional mesh (2D mesh) network. Accordingly, data input to the chip115 through the input port 1152 can be transmitted to the corresponding tile125 through the network formed by the plurality of routers 122, and data processed by any tile125 can also be transmitted to other tiles 125 through the network formed by the plurality of routers 122 or transmitted from the output port 1154.
It should be noted that, when the chips 115 are interconnected by routers, one or more network topologies formed by the routers 120 in the convolutional neural network circuit 110 and the network topology formed by the routers 122 in the data processing chip115 may be the same or different, as long as data transmission can be performed between the chips 115 or between the tiles 125 through the network topologies, and the chips 115 or the tiles 125 can receive data or output data through the network topologies. The number and type of networks formed by the plurality of routers 120 and 122 are not limited in the embodiments of the present invention. In addition, in the embodiment of the present invention, the router120 and the router 122 may be the same or different. For clarity of description, the chip-connected router120 and tile-connected router 122 are identified in FIG. 1. For convenience of description, in the embodiment of the present invention, the chip115 or tile125 in the convolutional neural network system may also be referred to as a computing node (computing node).
In another situation, in practical applications, the chips 115 may be interconnected through a High Transport IO (High Transport IO) rather than the router 120. As shown in fig. 1A, fig. 1A is a schematic structural diagram of another neural network system according to an embodiment of the present invention. In the neural network system shown in fig. 1A, the host 105 is connected to a plurality of PCIE cards 109 through PCIE interfaces 107, each PCIE card 109 may include a plurality of neural network chips 115, and the neural network chips are connected through a high-speed interconnect interface. The manner of interconnection between the chips is not limited herein. It is understood that, in practical applications, tiles inside a chip may be connected to each other without a router, and a high-speed interconnection manner between chips shown in fig. 1A may also be used. In another case, the tiles inside the chips may be connected by the router shown in fig. 1, and the chips may be interconnected by the high-speed interconnection shown in fig. 1A. The embodiment of the present invention does not limit the connection manner between chips or inside chips.
Fig. 2 is a schematic structural diagram of a compute node in a neural network chip according to an embodiment of the present invention. As shown in fig. 2, a plurality of routers 120, each of which may be connected to a tile125, are included in the chip 115. In practical applications, one router120 may also be connected to multiple tiles 125. As shown in fig. 2, each tile125 may include an input output interface (TxRx)1252, a switch device (TSW)1254, and a plurality of Processing Elements (PEs) 1256. The TxRx 1252 is used for receiving data of tile125 input from Router120 or outputting the calculation result of tile 125. Put another way, TxRx 1252 is used to implement data transfer between tile125 and router 120. A switch (TSW)1254 connects the TxRx 1252, the TSW 1254 is used to implement data transmission between the TxRx 1252 and a plurality of PEs 1256. Included in each PE 1256 may be one or more computing engines (computing engines) 1258, the one or more computing engines 1258 configured to perform neural network computations on data input into PE 1256. For example, the data input to tile125 may be multiplied and added with a convolution kernel preset in tile 125. The result of the Engine 1258 calculation can be sent to other tiles 125 via TSW 1254 and TxRx 1252. In practice, an Engine 1258 may include modules to implement convolution, pooling, or other neural network operations. Here, the specific circuit or function of the Engine is not limited. For simplicity of description, in the embodiment of the present invention, the computing engine is simply referred to as engine.
Those skilled in the art can appreciate that, since new nonvolatile memories such as resistive random access memory (ReRAM) have the advantage of integrating storage and computation, they have in recent years also been widely used in neural network systems. For example, a resistive random access memory crossbar (ReRAM crossbar) composed of a plurality of memristor cells (ReRAM cells) may be used to perform matrix multiply-add operations in a neural network system. In an embodiment of the present invention, Engine 1258 may include one or more crossbars. The structure of a ReRAM crossbar is shown in fig. 10, and how matrix multiply-add operations are performed by a ReRAM crossbar will be described later. As can be seen from the above description of the neural network, the neural network circuit provided in the embodiment of the present invention includes a plurality of NN chips, each NN chip includes a plurality of tiles, each tile includes a plurality of processing elements (PEs), each PE includes a plurality of engines, and each engine is implemented by one or more ReRAM crossbars. It can be seen that the neural network system provided by the embodiment of the present invention may include multiple levels of computing nodes, for example, four levels of computing nodes: the first-level computing node is the chip 115, the second-level computing node is a tile in the chip, the third-level computing node is a PE in the tile, and the fourth-level computing node is an engine in the PE.
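The four-level hierarchy can be pictured with a few container types. The following Python sketch mirrors the levels described above; the counts per level are arbitrary illustrative choices, not values from the application.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Engine:                    # fourth-level computing node: holds ReRAM crossbars
    crossbars: int = 1

@dataclass
class PE:                        # third-level computing node
    engines: List[Engine] = field(default_factory=list)

@dataclass
class Tile:                      # second-level computing node
    pes: List[PE] = field(default_factory=list)

@dataclass
class Chip:                      # first-level computing node
    tiles: List[Tile] = field(default_factory=list)

# Hypothetical sizing: 4 tiles/chip, 4 PEs/tile, 2 engines/PE, 2 crossbars/engine.
chip = Chip(tiles=[Tile(pes=[PE(engines=[Engine(2), Engine(2)]) for _ in range(4)])
                   for _ in range(4)])
total = sum(e.crossbars for t in chip.tiles for pe in t.pes for e in pe.engines)
print(total)  # 4 * 4 * 2 * 2 = 64 crossbars available in this illustrative chip
```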
On the other hand, those skilled in the art will appreciate that a neural network system may include a plurality of neural network layers. In the embodiment of the present invention, the neural network layer is a logical layer concept, and one neural network layer means that one neural network operation is to be performed. Each layer of neural network calculation is realized by computing nodes. The neural network layers may include convolutional layers, pooling layers, and the like. As shown in fig. 3, the neural network system may include n neural network layers (also referred to as an n-layer neural network), where n is an integer greater than or equal to 2. Fig. 3 illustrates a portion of the neural network layers in the neural network system, which, as shown in fig. 3, may include a first layer 302, a second layer 304, a third layer 306, a fourth layer 308, and a fifth layer 310 through an nth layer 312. The first layer 302 may perform a convolution operation, the second layer 304 may perform a pooling operation on the output data of the first layer 302, the third layer 306 may perform a convolution operation on the output data of the second layer 304, the fourth layer 308 may perform a convolution operation on the output result of the third layer 306, the fifth layer 310 may perform a summing operation on the output data of the second layer 304 and the output data of the fourth layer 308, and so on. It is understood that fig. 3 is only a simple example and illustration of the neural network layers in the neural network system and does not limit the specific operation of each layer of the neural network; for example, the fourth layer 308 may perform a pooling operation, and the fifth layer 310 may perform another neural network operation such as a convolution or pooling operation.
In an existing neural network system, after the ith layer in the neural network is calculated, the calculation result of the ith layer is temporarily stored in a preset cache, and when the (i + 1) th layer is calculated, the calculation unit needs to load the calculation result of the ith layer and the weight of the (i + 1) th layer from the preset cache again for calculation. Wherein, the ith layer is any layer in the neural network system. In the embodiment of the invention, because the Engine of the neural network system adopts ReRAM crossbar and the ReRAM has the advantage of integrating storage and calculation, the weight can be configured on a ReRAM cell before calculation, and the calculation result can be directly sent to the next layer for pipeline calculation. Therefore, each layer of neural network only needs to buffer very little data, for example, each layer of neural network only needs to buffer input data for one window calculation. Further, in order to implement parallel and fast processing of data, embodiments of the present invention provide a method for performing stream processing on data through a neural network. For clarity of description, the flow processing of the neural network system is briefly described below in conjunction with the convolutional neural network system of fig. 1.
As shown in fig. 4, to achieve fast processing of data, the computational nodes in the system may be divided into a plurality of node sets to perform computations of different neural network layers, respectively. Fig. 4 illustrates different sets of computing nodes for implementing different layers of neural network computations according to an embodiment of the present invention by dividing tile125 in the neural network system shown in fig. 1. As shown in fig. 4, multiple tiles 125 in a chip115 may be divided into multiple node sets. For example: a first set of nodes 402, a second set of nodes 404, a third set of nodes 406, a fourth set of nodes 408, and a fifth set of nodes 410. Wherein each node set comprises at least one computing node (e.g., tile 125). The computing nodes of the same node set are used for executing neural network operation on the data entering the same neural network layer, and the data of different neural network layers are processed by the computing nodes of different node sets. The processing result processed by one computing node is transmitted to the computing nodes in other node sets for processing, and the pipelined processing mode enables each layer of neural network to only need to cache little data, enables a plurality of computing nodes to concurrently process the same data stream, and improves the processing efficiency. It should be noted that fig. 4 illustrates a set of computing nodes for processing different neural network layers (e.g., convolutional layers) by using tile as an example. In practical applications, since the tile includes a plurality of PEs, each PE includes a plurality of engines, and the amount of computation required for different application scenarios is different. Therefore, according to the actual application situation, the computing nodes in the neural network system may be divided by taking PE, Engine, or chip as the granularity, so that the computing nodes in different sets are used for processing the operations of different neural network layers. According to this way, the compute node according to the embodiment of the present invention may be an Engine, PE, tile, or chip.
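As a rough illustration of this division (the tile names, layer count, and set sizes below are invented for the example), each node set owns one layer and forwards its results to the set owning the next layer:

```python
# Illustrative division of tiles into node sets, one set per neural network layer.
node_sets = {
    1: ["tile0", "tile1"],           # layer 1 (e.g. a convolutional layer)
    2: ["tile2"],                    # layer 2 (e.g. a pooling layer)
    3: ["tile3", "tile4", "tile5"],  # layer 3 (e.g. another convolutional layer)
}

def next_hop(layer: int) -> list:
    """Tiles to which the output of `layer` is forwarded for the next layer's operation."""
    return node_sets.get(layer + 1, ["output_port"])

print(next_hop(1))  # ['tile2']: layer 1 results stream directly to layer 2's tile
print(next_hop(3))  # ['output_port']: the last layer's results leave the chip
```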
Furthermore, those skilled in the art will appreciate that a computing node (e.g., tile125) may perform a computation on data input to the computing node based on weights (weights) of corresponding neural network layers when performing a neural network operation (e.g., a convolution computation), e.g., a tile125 may perform a convolution operation on input data input to the tile125 based on weights of corresponding convolution layers, e.g., perform a matrix multiply-add computation on the weights and the input data. Weights are typically used to represent how important input data is to output data. In neural networks, the weights are typically represented by a matrix. As shown in fig. 9, the weight matrix of j rows and k columns shown in fig. 9 may be a weight of a neural network layer, and each element in the weight matrix represents a weight value. In the embodiment of the present invention, since the computing nodes of one node set are used to perform an operation of one neural network layer, the computing nodes of the same node set may share weights, and the weights of the computing nodes in different node sets may be different. In the embodiment of the present invention, the weight in each computing node may be configured in advance. Specifically, each element in a weight matrix is configured in a ReRAM cell in a corresponding crossbar array, so that a matrix multiply-add operation of input data and configured weights can be realized through the crossbar array. A brief description of how the matrix multiply-add operation is implemented by crossbars will follow.
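A toy numerical sketch of the multiply-add that one crossbar performs once a weight matrix has been written into its cells (plain NumPy, ignoring the analog conductance mapping and quantization of a real ReRAM array):

```python
import numpy as np

# A j-row, k-column weight matrix as in fig. 9, each element stored in one ReRAM cell.
weight = np.array([[0.2, -0.1, 0.5],
                   [0.7,  0.3, -0.4]])   # j = 2 rows, k = 3 columns

# Input data applied on the crossbar rows, e.g. one flattened convolution window.
x = np.array([1.0, 0.5])

# Every output column accumulates sum_j(x_j * w_jk): the matrix multiply-add
# that a convolution step reduces to.
y = x @ weight
print(y)  # [0.55 0.05 0.3 ]
```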
As can be seen from the above description, in the embodiment of the present invention, in the process of implementing the neural network stream processing, the computation nodes in the neural network may be divided into node sets for processing different neural network layers, and corresponding weights may be configured. Thus, the computing nodes of different node sets can perform corresponding calculations according to the configured weights. And, the compute nodes of each node set can send the computed results to compute nodes for performing the next layer of neural network operations. Those skilled in the art can appreciate that, in the stream processing process for implementing a neural network, if the computing resources for performing the neural network operations of different layers do not match, for example, the computing resources for performing the neural network operations of an upper layer are few, and the computing resources for performing the neural network operations of a next layer are relatively many, the computing resources of the computing nodes of the next layer are wasted. In order to fully utilize the computing power of the computing nodes and match the computing power of the computing nodes executing different neural network layer operations, the embodiment of the invention provides a computing resource allocation method, which is used for allocating the computing nodes executing different neural network layer operations, so that the computing power of the computing nodes executing two adjacent layers of neural network operations in a neural network system is matched, the data processing efficiency in the neural network system is improved, and the computing resources are not wasted.
Fig. 5 is a flowchart of a method for allocating computing resources in a neural network system according to an embodiment of the present invention. The method can be applied to the neural network system shown in fig. 1. The method may be implemented by the host 105, and in particular, may be implemented by the processor 1052 in the host 105, when deploying the neural network or when configuring the neural network system. As shown in fig. 5, the method may include the following steps.
In step 502, network model information of the neural network system is obtained. The network model information includes a first output data volume of a first neural network layer and a second output data volume of a second neural network layer in the neural network system. The network model information may be determined according to actual application requirements. For example, the total number of neural network layers and the algorithm of each layer may be determined according to an application scenario of the neural network system. The network model information may include the total number of neural network layers in the neural network system, the algorithm of each layer, and the data output amount of each layer of the neural network. In the embodiment of the present invention, the algorithm refers to a neural network operation that needs to be performed, for example, the algorithm may refer to a convolution operation, a pooling operation, and the like. As shown in fig. 3, the neural network layer of the neural network system according to the embodiment of the present invention may have n layers, where n is an integer not less than 2. In this step, the first neural network layer and the second neural network layer may be two layers of the n layers that are operationally dependent. In the embodiment of the present invention, the two neural network layers having a dependency relationship means that input data of one neural network layer includes output data of the other neural network layer. Two neural network layers having a dependency relationship may also be referred to as being adjacent layers. For example, as shown in fig. 3, the output data of the first layer 302 is the input data of the second layer 304, and thus, the first layer 302 and the second layer 304 have a dependency relationship. The output data of the second layer 304 is the input data of the third layer 306, and the input data of the fifth layer 310 includes the output data of the second layer 304, so that the second layer 304 and the third layer 306 have a dependency relationship, and the second layer 304 and the fifth layer 310 also have a dependency relationship. For clarity of description, in the embodiment of the present invention, the first layer 302 shown in fig. 3 is taken as a first neural network layer, and the second layer 304 is taken as a second neural network layer.
In step 504, N first weights to be configured by the first neural network layer and M second weights to be configured by the second neural network layer are determined according to the deployment requirement of the neural network system, the first output data volume and the second output data volume. Wherein N and M are positive integers, and the ratio of N to M corresponds to the ratio of the first output data amount to the second output data amount. In practical applications, the deployment requirement may include a computation delay of the neural network system, and may also include the number of chips to be deployed by the neural network system. As will be appreciated by those skilled in the art, the operation of the neural network is mainly to perform a matrix multiply-add operation, and the output data of each layer of the neural network is also a real matrix in one or more dimensions, so that the first output data amount includes the number of rows and columns of the output data of the first neural network layer, and the second output data amount includes the number of rows and columns of the output data of the second neural network layer.
As described above, when a computing node performs a neural network operation, for example a convolution or pooling operation, it needs to perform a multiply-add calculation on the input data and the weight of the corresponding neural network layer. Since the weights are configured on cells in the crossbars, the crossbars in the computing unit perform computations on the input data in parallel, and therefore the number of weights determines the parallel computing power of the plurality of computing units performing the neural network operation. Stated another way, the computing power of a computing node performing a neural network operation is determined by the number of weights configured in the computing units performing that operation. In the embodiment of the present invention, in order to match the computing power of two adjacent layers of the neural network in the neural network system, the numbers of weights to be configured for the first neural network layer and the second neural network layer may be determined according to the specific deployment requirement and the first output data amount and the second output data amount. Since the weights of different neural network layers are not necessarily the same, for clarity of description, in the embodiment of the present invention, the weight required for the operation of the first neural network layer is referred to as the first weight, and the weight required for the operation of the second neural network layer is referred to as the second weight. The computing node executing the operation of the first neural network layer performs, based on the first weight, the corresponding calculation on the data input into the first neural network layer, and the computing node executing the operation of the second neural network layer performs, based on the second weight, the corresponding calculation on the data input into the second neural network layer. The calculation here may be a neural network operation such as a convolution or pooling operation.
How to determine the number of weights to be configured for each layer of neural network in this step will be described in detail below according to different deployment requirements. Wherein the number of weights to be configured for each layer of the neural network comprises the number N of first weights to be configured for the first neural network layer and the number M of second weights to be configured for the second neural network layer. In the embodiment of the present invention, the weight refers to a weight matrix. The number of weights refers to the number of weight matrices required, or the number of copies of the weights. The number of weights can also be understood as how many identical weight matrices need to be configured.
In one case, in order to enable the calculation of the entire neural network system not to exceed the set calculation delay when the deployment requirement of the neural network system is the calculation delay of the neural network system, the number of weights that need to be configured for the first layer of neural network may be determined according to the data output amount of the first layer (i.e., the initial layer in all the neural network layers in the neural network system), the calculation delay, and the calculation frequency of ReRAM crossbar used in the neural network system, and then the number of weights that need to be configured for each layer of neural network may be obtained according to the number of weights that need to be configured for the first layer of neural network and the output data amount of each layer of neural network. Specifically, the number of weights required to be configured by the first-layer (i.e., the starting layer) neural network can be obtained according to the following formula one:
W_{num}^{1} = \frac{Row_{out}^{1} \times Col_{out}^{1}}{t \times f}    (formula one)

where W_{num}^{1} denotes the number of weights to be configured for the first-layer (i.e., the starting-layer) neural network, Row_{out}^{1} is the number of rows of the output data of the first-layer (i.e., the starting-layer) neural network, and Col_{out}^{1} is the number of columns of the output data of the first-layer (i.e., the starting-layer) neural network. t is the set calculation delay, and f is the calculation frequency of the crossbars in the computing unit. Those skilled in the art will appreciate that the value of f can be obtained from the configuration parameters of the crossbar employed. The data amount of the output data of the first-layer neural network can be obtained from the network model information acquired in step 502. It should be noted that, in the embodiment of the present invention, the first-layer neural network is the starting layer of all neural network layers in the neural network system. It can be understood that, when the first neural network layer is the starting layer of all neural network layers in the neural network system, the number N of first weights is the value of W_{num}^{1} calculated according to formula one.
After the number of weights required by the first layer of neural network is obtained, in order to improve the data processing efficiency in the neural network system, avoid the occurrence of bottlenecks or data waiting in a pipeline parallel processing mode and match the processing speeds of adjacent neural network layers, in the embodiment of the present invention, the ratio of the number of weights required by the adjacent two layers may be made to correspond to the ratio of the output data amount of the adjacent two layers. For example, the ratios may be the same. Therefore, in the embodiment of the present invention, the number of weights required for each layer of the neural network may be determined according to the number of weights required for the first layer of the neural network and the ratio of the amount of output data of each layer of the neural network. Specifically, the number of weights required by each layer of neural network can be calculated according to the following formula (two):
W_{num}^{i} = W_{num}^{i-1} \times \frac{Row_{out}^{i} \times Col_{out}^{i}}{Row_{out}^{i-1} \times Col_{out}^{i-1}}    (formula two)

where W_{num}^{i} represents the number of weights required by the ith layer, W_{num}^{i-1} represents the number of weights required by the (i-1)th layer, Row_{out}^{i} represents the number of rows of the output data of the ith layer, Col_{out}^{i} represents the number of columns of the output data of the ith layer, Row_{out}^{i-1} represents the number of rows of the output data of the (i-1)th layer, and Col_{out}^{i-1} represents the number of columns of the output data of the (i-1)th layer. The value of i may range from 2 to n, where n is the total number of neural network layers in the neural network system. Put another way, in the embodiment of the present invention, the ratio of the number of weights required to perform the (i-1)th-layer neural network operation to the number of weights required to perform the ith-layer neural network operation corresponds to the ratio of the output data amount of the (i-1)th layer to the output data amount of the ith layer.
Those skilled in the art will appreciate that the output data of each neural network layer may include a plurality of channels, where the number of channels corresponds to the number of kernels in that neural network layer. One kernel represents one feature extraction method and correspondingly generates one feature map, and the plurality of feature maps constitute the output data of the layer. The weights used by one neural network layer include a plurality of kernels. Therefore, in practical applications, in yet another case, the output data amount of each layer may also take into account the number of channels of each layer of the neural network. Specifically, after the number of weights required by the first neural network layer is obtained according to formula one, the number of weights required by each layer of the neural network may be obtained according to the following formula three:
W_{num}^{i} = W_{num}^{i-1} \times \frac{Row_{out}^{i} \times Col_{out}^{i} \times C_{i}}{Row_{out}^{i-1} \times Col_{out}^{i-1} \times C_{i-1}}    (formula three)

Formula three differs from formula two in that it further considers, on the basis of formula two, the number of channels output by each layer of the neural network. C_{i-1} represents the number of channels of the (i-1)th layer, and C_{i} represents the number of channels of the ith layer. The value of i ranges from 2 to n, where n is the total number of neural network layers in the neural network system and n is an integer not less than 2. The number of channels of each layer of the neural network can be obtained from the network model information.
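A sketch of chaining formula two (and formula three, with channels) across layers; the starting-layer weight count, output shapes, and channel counts below are hypothetical, and rounding up to whole copies is an added assumption.

```python
import math

def propagate_weight_counts(n_start, layer_shapes, use_channels=False):
    """layer_shapes: list of (rows, cols, channels) for layers 1..n.
    Scales the starting-layer count by the ratio of adjacent output data amounts."""
    counts = [n_start]
    for prev, cur in zip(layer_shapes, layer_shapes[1:]):
        prev_amount = prev[0] * prev[1] * (prev[2] if use_channels else 1)
        cur_amount = cur[0] * cur[1] * (cur[2] if use_channels else 1)
        counts.append(math.ceil(counts[-1] * cur_amount / prev_amount))
    return counts

shapes = [(224, 224, 64), (112, 112, 128), (56, 56, 256)]    # invented 3-layer model
print(propagate_weight_counts(8, shapes))                    # formula two:   [8, 2, 1]
print(propagate_weight_counts(8, shapes, use_channels=True)) # formula three: [8, 4, 2]
```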
In the embodiment of the present invention, after the number of weights required by the starting layer is obtained according to formula one above, the number of weights required by each layer of the neural network may be calculated according to formula two (or formula three) and the output data amount of each layer of the neural network included in the network model information. For example, when the first neural network layer is the starting layer of all neural network layers in the neural network system, after the number N of first weights is calculated according to formula one, the number M of second weights required by the second neural network layer may be obtained according to formula two, based on the value of N and the first output data amount and the second output data amount. Stated another way, after the value of N is obtained, the value of M can be calculated from the relation N/M = (first output data amount)/(second output data amount).
In still another case, when the deployment requirement is the number of chips required by the neural network system, the number of weights required by the first-layer neural network may be obtained by combining the following formula four with formula two above, or by combining formula four with formula three above.

xb_{1} \times W_{num}^{1} + xb_{2} \times W_{num}^{2} + \dots + xb_{n} \times W_{num}^{n} \leq K \times L    (formula four)

In formula four above, xb_{1} represents the number of crossbars required to deploy one weight of the first-layer (or starting-layer) neural network, W_{num}^{1} represents the number of weights required by the starting layer, xb_{2} represents the number of crossbars required to deploy one weight of the second-layer neural network, W_{num}^{2} represents the number of weights required by the second-layer neural network, xb_{n} represents the number of crossbars required to deploy one weight of the nth-layer neural network, W_{num}^{n} represents the number of weights required by the nth-layer neural network, K is the number of chips of the neural network system specified by the deployment requirement, and L is the number of crossbars in each chip. Formula four expresses that the sum of the numbers of crossbars occupied by all neural network layers is less than or equal to the total number of crossbars included in the chips of the configured neural network system. For the description of formula two and formula three, reference may be made to the foregoing description, which is not repeated here.
As will be appreciated by those skilled in the art, after the neural network system is modeled, the size of the weight of each neural network layer of the neural network system and the specification of the crossbar employed in the neural network system (i.e., the number of rows and columns of ReRAM cells in a crossbar) have already been determined. Alternatively, the network model information of the neural network system further includes the weight size used by each neural network layer and the crossbar specification information. Therefore, in the embodiment of the present invention, xb_{i} of the ith-layer neural network can be obtained according to the size of the weight of each layer (i.e., the number of rows and columns of the weight matrix) and the specification of the crossbar, where i takes a value from 1 to n. The value of L may be obtained from the parameters of the chips employed by the neural network system. In the embodiment of the present invention, in one case, after the number of weights required by the starting-layer neural network (i.e., W_{num}^{1}) is obtained according to formula four and formula two above, the number of weights to be configured for each layer can be obtained according to formula two and the output data amount of each layer obtained from the network model information. In another case, after the number of weights required by the starting-layer neural network (i.e., W_{num}^{1}) is obtained according to formula four and formula three above, the number of weights to be configured for each layer may also be obtained according to formula three and the output data amount of each layer.
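One way to read formula four is as a crossbar budget: the starting-layer weight count W_{num}^{1} is the largest value for which the crossbars occupied by all layers (each layer's count scaled from W_{num}^{1} by formula two) still fit into the K chips of L crossbars each. The sketch below follows that reading; all per-layer figures are invented.

```python
import math

def start_layer_weights(k_chips, l_crossbars_per_chip, xb_per_weight, out_ratios):
    """xb_per_weight[i]: crossbars needed to deploy one weight of layer i+1.
    out_ratios[i]: (output data amount of layer i+1) / (output data amount of layer 1).
    Formula four: sum_i xb_i * W_i <= K * L, with W_i = W_1 * out_ratios[i] (formula two)."""
    crossbars_per_start_copy = sum(xb * r for xb, r in zip(xb_per_weight, out_ratios))
    return math.floor(k_chips * l_crossbars_per_chip / crossbars_per_start_copy)

# Hypothetical three-layer network deployed on K = 2 chips with L = 64 crossbars each.
xb_per_weight = [1, 2, 4]           # crossbars per single weight for layers 1..3
out_ratios = [1.0, 0.25, 0.0625]    # each layer outputs a quarter of the previous layer
print(start_layer_weights(2, 64, xb_per_weight, out_ratios))  # 73 copies of the first weight
```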
In step 506, according to the calculation specification of the computing units in the neural network system, the N first weights are deployed on P computing units, and the M second weights are deployed on Q computing units. P and Q are positive integers, the P computing units are configured to perform the operation of the first neural network layer, and the Q computing units are configured to perform the operation of the second neural network layer. In the embodiment of the present invention, the calculation specification of a computing unit refers to the number of crossbars included in one computing unit. In practice, one computing unit may comprise one or more crossbars. Specifically, as described above, since the network model information of the neural network system further includes the size of the weight used by each neural network layer and the specification information of the crossbar, the deployment relationship between one weight and the crossbars may be obtained. After the number of weights to be configured for each layer of the neural network is obtained in step 504, the weights of each layer may be deployed on a corresponding number of computing units according to the number of crossbars included in each computing unit. Specifically, the elements in the weight matrix are respectively configured in the ReRAM cells of the crossbars of the computing units. In the embodiment of the present invention, a computing unit may refer to a PE or an engine, one PE may include a plurality of engines, and one engine may include one or more crossbars. Since the weights of different layers may differ in size, one weight may be deployed on one or more engines.
Specifically, in this step, the P computing units on which the N first weights are to be deployed and the Q computing units on which the M second weights are to be deployed may be determined according to the deployment relationship between one weight and the crossbars and the number of crossbars included in each computing unit. For example, the N first weights of the first neural network layer may be deployed to the P computing units, and the M second weights may be deployed to the Q computing units. Specifically, the elements in the N first weights are respectively configured into the ReRAM cells of the corresponding crossbars in the P computing units, and the elements in the M second weights are respectively configured into the ReRAM cells of the corresponding crossbars in the Q computing units. Thus, the P computing units may perform the operation of the first neural network layer on the input data input to the P computing units based on the configured N first weights, and the Q computing units may perform the operation of the second neural network layer on the input data input to the Q computing units based on the configured M second weights.
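A minimal sketch of how the calculation specification translates weight-copy counts into unit counts follows; the simple ceiling-division packing rule is an assumption (in practice the split of one weight copy across crossbars may further constrain the placement), and all names and numbers are illustrative.

```python
import math

def units_for_layer(num_copies, xb_per_weight, crossbars_per_unit):
    # P (or Q): compute units (engines) needed to hold all weight copies of one
    # layer, given the calculation specification, i.e. crossbars per unit.
    return math.ceil(num_copies * xb_per_weight / crossbars_per_unit)

# Assumed numbers: N = 8 first-weight copies, each spanning 2 crossbars, with
# 4 crossbars per engine -> P = 4; M = 4 copies of a 1-crossbar weight -> Q = 1.
P = units_for_layer(8, 2, 4)   # 4
Q = units_for_layer(4, 1, 4)   # 1
```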
It can be seen from the foregoing embodiments that, in the computing resource allocation method provided in the embodiments of the present invention, when configuring the computing unit executing each layer of neural network operation according to the deployment requirement, the data amount output by the adjacent neural network layers is considered, so that the computing capabilities of the computing nodes executing different neural network layer operations are matched, thereby fully utilizing the computing capabilities of the computing nodes and improving the efficiency of data processing.
Further, in the embodiment of the present invention, in order to further reduce the amount of data transmitted between the computing units executing different neural network layers and to save transmission bandwidth between computing units or computing nodes, the computing units may be mapped into their upper-level computing nodes in the following manner. As previously mentioned, the neural network system may include four levels of computing nodes: the first-level computing node chip, the second-level computing node tile, the third-level computing node PE, and the fourth-level computing node engine. Taking the fourth-level computing node engine as an example of the computing unit, fig. 6 details how to map the P computing units on which the N first weights need to be deployed and the Q computing units on which the M second weights need to be deployed to upper-level computing nodes. The method may still be implemented by the host 105 in the neural network system shown in fig. 1 and fig. 1A. As shown in fig. 6, the method may include the following steps.
In step 602, network model information of the neural network system is obtained, where the network model information includes a first output data volume of a first neural network layer and a second output data volume of a second neural network layer in the neural network system. In step 604, the N first weights to be configured for the first neural network layer and the M second weights to be configured for the second neural network layer are determined according to the deployment requirement of the neural network system, the first output data volume, and the second output data volume. In step 606, the P computing units on which the N first weights are to be deployed and the Q computing units on which the M second weights are to be deployed are determined according to the calculation specification of the computing units in the neural network system. In the embodiment of the present invention, for steps 602, 604, and 606, reference may be made to the related descriptions of the aforementioned steps 502, 504, and 506, respectively. Step 606 differs from step 506 in that, in step 606, after the P computing units needed by the N first weights and the Q computing units needed by the M second weights are determined, the N first weights are not directly deployed to the P computing units, nor are the M second weights directly deployed to the Q computing units; instead, the method proceeds to step 608.
In step 608, the P computing units and the Q computing units are mapped to a plurality of third-level computing nodes according to the number of computing units included in each third-level computing node in the neural network system. Specifically, fig. 6A is a flowchart of a resource mapping method according to an embodiment of the present invention. Taking the computing unit as the fourth-level computing node engine as an example, fig. 6A illustrates how the engines are mapped into the third-level computing nodes PE. As shown in fig. 6A, the method may include the following steps.
In step 6082, the P computing units and the Q computing units are divided into m groups, each group including P/m computing units for executing the first neural network layer and Q/m computing units for executing the second neural network layer, where m is an integer not less than 2 and both P/m and Q/m are integers. Specifically, take the P computing units as the computing units for executing the i-1 th layer and the Q computing units as the computing units for executing the i-th layer. As shown in fig. 7, assume that 8 computing units (i.e., P = 8) need to be allocated to the i-1 th layer, 4 computing units (i.e., Q = 4) need to be allocated to the i-th layer, 4 computing units need to be allocated to the i+1 th layer, and the computing units are divided into 2 groups (i.e., m = 2). The two groups shown in fig. 7 are then obtained, where group 1 includes 4 computing units of the i-1 th layer, 2 computing units of the i-th layer, and 2 computing units of the i+1 th layer. Similarly, group 2 includes 4 computing units of the i-1 th layer, 2 computing units of the i-th layer, and 2 computing units of the i+1 th layer.
In step 6084, the computing units of each group are respectively mapped to third-level computing nodes according to the number of computing units included in each third-level computing node. During the mapping, the computing units executing the operations of adjacent neural network layers are mapped into the same third-level node as much as possible. As shown in fig. 7, assume that in the neural network system each first-level computing node chip includes 8 second-level computing nodes tile, each tile includes 2 third-level computing nodes PE, and each PE includes 4 engines. For group 1, the 4 engines of the i-1 th layer may be mapped to one third-level computing node PE (e.g., PE1 in fig. 7), and the 2 engines of the i-th layer and the 2 engines of the i+1 th layer may be mapped together to another third-level computing node PE (e.g., PE2 in fig. 7). Similarly, according to the mapping manner of the computing units in the first group, for the computing units in the second group, the 4 engines of the i-1 th layer may be mapped to PE3, and the 2 engines of the i-th layer and the 2 engines of the i+1 th layer may be mapped together to PE4. In practical applications, after the mapping of the computing units in group 1 is completed, the computing units in the other groups may be mapped in a mirrored manner according to the mapping manner of group 1.
According to the above mapping manner, the computing units executing adjacent neural network layers (for example, the i-th layer and the i+1 th layer in fig. 7) can be mapped into the same third-level computing node as much as possible. Therefore, when the output data of the i-th layer is sent to the computing units of the i+1 th layer, the output data only needs to be transmitted within the same third-level node (PE) without occupying the bandwidth between third-level nodes, which can further improve the data transmission speed and reduce the transmission bandwidth consumption between nodes.
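A sketch of the grouping of step 6082 and the packing of step 6084 is given below; it assumes layer indices 0, 1 and 2 stand for layers i-1, i and i+1, and the in-order greedy packing heuristic is only an illustrative reading of "adjacent layers in the same PE as much as possible", not the exact procedure of the disclosure.

```python
def split_into_groups(units_per_layer, m):
    # units_per_layer: {layer_index: [engine ids]}; each of the m groups receives
    # an equal 1/m share of every layer's engines (step 6082).
    groups = []
    for g in range(m):
        group = {}
        for layer, engines in units_per_layer.items():
            share = len(engines) // m
            group[layer] = engines[g * share:(g + 1) * share]
        groups.append(group)
    return groups

def pack_group_into_pes(group, engines_per_pe):
    # Step 6084: fill PEs with engines of consecutive layers so that engines of
    # adjacent layers land in the same PE whenever capacity allows.
    pes, current = [], []
    for layer in sorted(group):
        for engine in group[layer]:
            current.append((layer, engine))
            if len(current) == engines_per_pe:
                pes.append(current)
                current = []
    if current:
        pes.append(current)
    return pes

# The fig. 7 example: layers i-1 / i / i+1 need 8 / 4 / 4 engines, m = 2, 4 engines per PE.
layout = {0: list(range(8)), 1: list(range(4)), 2: list(range(4))}
groups = split_into_groups(layout, 2)
pe_plan = [pack_group_into_pes(g, 4) for g in groups]
# Each group yields two PEs: one holding 4 layer-(i-1) engines, the other holding
# 2 layer-i engines together with 2 layer-(i+1) engines, as in fig. 7.
```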
Returning to fig. 6, in step 610, the plurality of third-level computing nodes mapped by the P computing units and the Q computing units are mapped into a plurality of second-level computing nodes according to the number of third-level computing nodes included in each second-level computing node in the neural network system. In step 612, the plurality of second-level computing nodes mapped by the P computing units and the Q computing units are mapped into the plurality of neural network chips according to the number of second-level computing nodes included in each neural network chip. As described above, fig. 6A is described by taking the mapping of the engines executing the i-th layer operation to third-level computing nodes as an example; similarly, according to the method shown in fig. 6A, third-level nodes may also be mapped to second-level nodes, and second-level nodes may be mapped to first-level nodes. For example, as shown in fig. 7, for group 1, PE1 performing the i-1 th layer operation and PE2 performing the i-th and i+1 th layer operations may be further mapped into the same second-level computing node Tile1. For group 2, PE3 performing the i-1 th layer operation and PE4 performing the i-th and i+1 th layer operations may be further mapped into the same second-level computing node Tile2. Further, Tile1 and Tile2, which perform the operations of the i-1 th, i-th and i+1 th layers, may be mapped to the same chip1. According to this method, the mapping relationship from the first-level computing node chip to the fourth-level computing node engine in the neural network system can be obtained.
In step 614, the N first weights and the M second weights are respectively deployed into the P computing units and the Q computing units corresponding to the plurality of third-level nodes, the plurality of second-level computing nodes, and the plurality of first-level computing nodes. In the embodiment of the present invention, the mapping relationship from the first-level computing node chip to the fourth-level computing node engine in the neural network system may be obtained according to the methods described in fig. 6A and fig. 7. For example, the mapping relationships between the P computing units and the Q computing units and the plurality of third-level nodes, the plurality of second-level computing nodes, and the plurality of first-level computing nodes, respectively, may be obtained. Then, in this step, the weights corresponding to each neural network layer may be deployed to the computing units under the computing nodes at each level according to the obtained mapping relationship. For example, as shown in fig. 7A, the N first weights of the i-1 th layer may be respectively deployed to the 4 computing units corresponding to chip1, tile1 and PE1 and the 4 computing units corresponding to chip1, tile2 and PE3, and the M second weights of the i-th layer may be respectively deployed to the 2 computing units corresponding to chip1, tile1 and PE2 and the 2 computing units corresponding to chip1, tile2 and PE4. Put another way, the N first weights of the i-1 th layer are deployed into the 4 computing units (engines) under chip1 -> tile1 -> PE1 and the 4 computing units under chip1 -> tile2 -> PE3, respectively, and the M second weights of the i-th layer are deployed into the 2 computing units under chip1 -> tile1 -> PE2 and the 2 computing units under chip1 -> tile2 -> PE4, respectively.
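The same packing idea can be applied level by level to obtain the chip -> tile -> PE -> engine paths along which the weights are finally written into the ReRAM cells. The sketch below uses the example capacities of fig. 7 (4 engines per PE, 2 PEs per tile, 8 tiles per chip) and a simple in-order packing; both are illustrative assumptions rather than the exact procedure of the disclosure.

```python
def pack_upper_level(children, capacity):
    # Generic step reused at every level: engines -> PEs, PEs -> tiles,
    # tiles -> chips. Packing in order keeps nodes that serve adjacent
    # layers together, as required by the mapping described above.
    return [children[i:i + capacity] for i in range(0, len(children), capacity)]

engines = [f"engine{k}" for k in range(16)]
pes = pack_upper_level(engines, 4)    # 4 PEs
tiles = pack_upper_level(pes, 2)      # 2 tiles
chips = pack_upper_level(tiles, 8)    # 1 chip
# The resulting nesting gives, for every engine, a path such as
# chip1 -> tile1 -> PE1 -> engine0, and each weight copy assigned to that
# engine is then programmed element by element into its crossbars' ReRAM cells.
```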
Through the foregoing deployment manner, in the neural network system disclosed in the embodiment of the present invention, the computing capabilities of the computing units supporting the operations of adjacent neural network layers can be matched; the computing units executing the operations of adjacent neural network layers can be located in the same third-level computing node as much as possible, the third-level computing nodes executing adjacent neural network layers can be located in the same second-level computing node as much as possible, and the second-level computing nodes executing adjacent neural network layers can be located in the same first-level computing node (such as a neural network chip) as much as possible. In this way, the amount of data transmitted between computing nodes can be reduced, and the data transmission speed between different neural network layers can be improved.
It should be noted that, taking the neural network system including four levels of computing nodes as an example, the embodiment of the present invention describes the allocation process of computing resources for executing the operations of different neural network layers with the fourth-level computing node engine as the computing unit. Stated another way, the above embodiments allocate the computing resources executing the operations of different neural network layers at engine granularity. In practical applications, the third-level computing node PE may also be used as the computing unit for allocation; in this case, the mappings between the third-level computing node PE and the second-level computing node tile, and between the tile and the first-level computing node chip, may be established according to the above method. Of course, when the amount of data to be computed is large, the second-level computing node tile may also be used as the allocation granularity. That is, in the embodiment of the present invention, the computing unit may be an engine, a PE, a tile, or a chip, which is not limited herein.
How the neural network system provided by the embodiment of the present invention configures the computing resources is described in detail above. The neural network system is further described below from the perspective of processing data. Fig. 8 is a flowchart of a data processing method according to an embodiment of the present invention. The method is applied to the neural network system shown in fig. 1, which has been configured by the methods shown in figs. 5-7 so that the computing resources for executing the operations of different neural network layers have been allocated. As shown in fig. 8, the method may be implemented by the neural network circuit shown in fig. 1, and may include the following steps.
In step 802, P computing units in the neural network system receive first input data, where the P computing units are configured to perform a first neural network layer operation of the neural network system. In the embodiment of the present invention, the first neural network layer may be any layer in the neural network system, and the first input data is the data required for performing the first neural network layer operation. When the first neural network layer is the first layer 302 of the neural network system shown in fig. 3, the first input data may be the data initially input to the neural network system. When the first neural network layer is not the first layer of the neural network system, the first input data may be output data that has been processed by other neural network layers.
In step 804, the P computing units perform calculation on the first input data according to the configured N first weights to obtain first output data. In the embodiment of the present invention, the first weight is a weight matrix; the N first weights are N weight matrices and may also be referred to as N first weight copies. The N first weights may be configured into the P computing units according to the methods shown in figs. 5-7. Specifically, the elements in the first weights are respectively configured into the ReRAM cells of the crossbars included in the P computing units, so that the crossbars in the P computing units can compute the input data in parallel based on the N first weights and the computing power of the crossbars in the P computing units is fully utilized. In the embodiment of the present invention, after receiving the first input data, the P computing units may perform a neural network operation on the received first input data based on the configured N first weights to obtain the first output data. For example, the crossbars in the P computing units may perform a matrix multiply-add operation on the first input data and the configured first weights.
In step 806, Q computing units in the neural network system receive second input data, where the Q computing units are configured to perform a second neural network layer operation of the neural network system, and the second input data includes the first output data. Specifically, in one case, the Q computing units may perform the operation of the second neural network layer only on the first output data of the P computing units. For example, the P computing units are configured to perform the operations of the first layer 302 shown in fig. 3, and the Q computing units are configured to perform the operations of the second layer 304 shown in fig. 3. In this case, the second input data is the first output data. In another case, the Q computing units may be configured to perform the second neural network layer operation collectively on the first output data of the first neural network layer and the output data of other neural network layers. For example, the P computing units may be used to perform the neural network operation of the second layer 304 shown in fig. 3, and the Q computing units may be used to perform the neural network operation of the fifth layer 310 shown in fig. 3. In this case, the Q computing units are configured to perform an operation on the output data of the second layer 304 and the fourth layer 308, and the second input data includes the first output data and the output data of the fourth layer 308.
In step 808, the Q calculation units perform calculation on the second input data according to the configured M second weights to obtain second output data. In the embodiment of the present invention, the second weight is also a weight matrix. The M second weights are M weight matrices, and the M second weights may also be referred to as M second weight copies. Similar to step 804, the second weight may be configured into a ReRAM cell of a crossbar included in the Q computing units according to the method shown in fig. 5. After receiving the second input data, the Q calculation units may perform a neural network operation on the received second input data based on the configured M second weights, resulting in the second output data. For example, the crossbars in the Q computing units may perform a matrix multiply-add operation on the second input data and the configured second weights. It should be noted that, in the embodiment of the present invention, the ratio of N and M corresponds to the ratio of the data amount of the first output data to the data amount of the second output data.
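A functional sketch of steps 802-808 follows; the matrix shapes are made up, and the NumPy simulation only mirrors what the crossbar-based computing units would do in parallel, with N and M chosen so that their ratio matches the ratio of the two layers' output data volumes.

```python
import numpy as np

def run_layer(inputs, weight, num_copies):
    # num_copies identical weight copies process disjoint slices of the input in
    # parallel (simulated sequentially here); partial outputs are concatenated.
    slices = np.array_split(inputs, num_copies)
    return np.concatenate([s @ weight for s in slices])

# Made-up shapes: the first layer emits twice the data volume of the second layer,
# so it is given twice as many weight copies (N = 4 versus M = 2).
x = np.random.rand(8, 16)
w1 = np.random.rand(16, 32)               # first weight (N copies on P units)
w2 = np.random.rand(32, 16)               # second weight (M copies on Q units)
out1 = run_layer(x, w1, num_copies=4)     # first output data, 8 * 32 values
out2 = run_layer(out1, w2, num_copies=2)  # second output data, 8 * 16 values
```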
For clarity of description, a brief description of how a ReRAM crossbar implements the matrix multiply-add operation follows. As shown in fig. 9, the weight matrix of j rows and k columns shown in fig. 9 may be a weight of a neural network layer, and each element in the weight matrix represents a weight value. Fig. 10 is a schematic structural diagram of a ReRAM crossbar in the computing unit provided by the embodiment of the present invention. For convenience of description, the ReRAM crossbar may be referred to as a crossbar for short in the embodiments of the present invention. As shown in fig. 10, a crossbar includes multiple ReRAM cells, such as G1,1, G2,1, and the like. The ReRAM cells form a neural network matrix. In the embodiment of the present invention, in the process of configuring the neural network, the weights shown in fig. 9 may be input into the crossbar from the bit lines of the crossbar shown in fig. 10 (as shown by the input port 1002 in fig. 10), so that each element in the weight is configured into a corresponding ReRAM cell. For example, the weight element W0,0 in fig. 9 is configured into G1,1 in fig. 10, the weight element W1,0 in fig. 9 is configured into G2,1 in fig. 10, and so on; each weight element corresponds to one ReRAM cell. When the neural network computation is performed, input data is input to the crossbar via the word lines of the crossbar (input port 1004 shown in fig. 10). It can be understood that the input data can be represented by voltages, so that the input data and the weight values configured in the ReRAM cells implement a dot-product operation, and the obtained calculation result is output from the output end of each column of the crossbar (such as the output port 1006 shown in fig. 10) in the form of an output voltage.
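A numerical sketch of the dot-product behavior described above is given below; the conductance and voltage values are arbitrary, and the digital computation merely mirrors what the analog crossbar produces in a single step.

```python
import numpy as np

def crossbar_mvm(word_line_voltages, conductances):
    # Each column output is the sum over rows of V_j * G_{j,k} (Ohm's law plus
    # Kirchhoff's current law): one dot product per column, computed in one step.
    return word_line_voltages @ conductances

G = np.array([[0.2, 0.5],      # conductance states programmed from the weight
              [0.1, 0.4],      # elements (written via the bit lines, one
              [0.3, 0.6]])     # element per ReRAM cell)
v = np.array([1.0, 0.5, 0.2])  # input data applied as word-line voltages
print(crossbar_mvm(v, G))      # column results, read at the crossbar outputs
```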
As described above, since the amount of data output by the adjacent neural network layer is taken into consideration when configuring the calculation unit performing each layer of neural network operation in the neural network system, the calculation capabilities of the calculation nodes performing the adjacent neural network layer operation can be matched. Therefore, the data processing method provided by the embodiment of the invention can fully utilize the computing power of the computing node and improve the data processing efficiency of the neural network system.
In another aspect, an embodiment of the present invention provides a resource allocation apparatus. The apparatus can be applied to the neural network systems shown in fig. 1 and fig. 1A and is used to allocate the computing nodes for executing the operations of different neural network layers, so that the computing capabilities of the computing nodes executing two adjacent layers of neural network operations in the neural network system are matched, thereby improving the data processing efficiency of the neural network system without wasting computing resources. It can be understood that the resource allocation apparatus may be located in the host and implemented by the processor in the host, or may exist as a physical device separate from the processor, for example, as a compiler independent of the processor. As shown in fig. 11, the resource allocation apparatus 1100 may include an obtaining module 1102, a calculation module 1104, and a deployment module 1106.
An obtaining module 1102, configured to obtain a data amount of first output data of a first neural network layer and a data amount of second output data of a second neural network layer in the neural network system, where input data of the second neural network layer includes the first output data. A calculating module 1104, configured to determine, according to the deployment requirement of the neural network system, N first weights to be configured by the first neural network layer and M second weights to be configured by the second neural network layer. Wherein N and M are both positive integers, and a ratio of N to M corresponds to a ratio of a data amount of the first output data to a data amount of the second output data.
As described above, the neural network system according to the embodiment of the present invention includes a plurality of neural network chips, each of which includes a plurality of computing units, and each of the computing units includes at least one resistive random access memory crossbar ReRAM crossbar. In one case, the deployment requirement includes a computation delay, and when the first neural network layer is a starting layer of all neural network layers in the neural network system, the computation module is configured to determine the value of N according to the data amount of the first output data, the computation delay, and a computation frequency of a resistive random access memory crossbar in a computation unit, and determine the value of M according to a ratio of the data amount of the first output data to the data amount of the second output data and the value of N.
In yet another case, the deployment requirement includes a number of chips of the neural network system, the first neural network layer is a starting layer of the neural network system, and the calculation module is configured to determine the value of N according to the number of chips, the number of ReRAM crossbars in each chip, the number of ReRAM crossbars required to deploy one weight of each layer of the neural network, and a ratio of output data amounts of adjacent neural network layers, and determine the value of M according to the ratio of the data amount of the first output data and the data amount of the second output data, and the value of N.
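The two cases handled by the calculation module can be sketched as follows; the exact relationships are given by the formulas of the disclosure, so the expressions below are only one plausible reading of the description above, with hypothetical function names.

```python
import math

def n_from_latency(first_output_volume, latency, crossbar_freq):
    # Latency-driven case (an assumed reading, not the exact patent formula):
    # enough starting-layer copies so the first output data volume can be
    # produced within the target delay at the crossbar's calculation frequency.
    return math.ceil(first_output_volume / (latency * crossbar_freq))

def n_from_chip_budget(num_chips, crossbars_per_chip, xb_per_layer, output_volumes):
    # Chip-budget case: per-layer copy counts scale with output-volume ratios,
    # and the total crossbar cost of one "starting-layer copy worth" of the
    # network must fit within the K * L crossbars available.
    v1 = output_volumes[0]
    cost_per_start_copy = sum(xb * v / v1 for xb, v in zip(xb_per_layer, output_volumes))
    return int(num_chips * crossbars_per_chip // cost_per_start_copy)

def m_from_n(n, first_output_volume, second_output_volume):
    # M follows from N and the ratio of the two layers' output data volumes.
    return max(1, round(n * second_output_volume / first_output_volume))
```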
A deployment module 1106, configured to deploy the N first weights to P computing units and deploy the M second weights to Q computing units according to the calculation specification of the computing units in the neural network system, where P and Q are positive integers, the P computing units are configured to perform the operations of the first neural network layer, and the Q computing units are configured to perform the operations of the second neural network layer. The calculation specification of a computing unit refers to the number of crossbars included in one computing unit. In practice, one computing unit may include one or more crossbars. Specifically, after the calculation module 1104 obtains the number of weights to be configured for each layer of the neural network, the deployment module 1106 may deploy the weights of each layer on the corresponding computing units according to the number of crossbars included in each computing unit. Specifically, the elements in the weight matrix are respectively configured into the ReRAM cells of the crossbars of the computing units. In the embodiment of the present invention, a computing unit may refer to a PE or an engine; one PE may include a plurality of engines, and one engine may include one or more crossbars. Since the weights of different layers may differ in size, one weight may be deployed on one or more engines.
As described above, the neural network system shown in fig. 1 includes a plurality of neural network chips, each neural network chip includes a plurality of secondary computing nodes, and each secondary computing node includes a plurality of computing units. In order to further reduce the amount of data transmitted between the computing units executing different neural network layers and to save transmission bandwidth between computing units or computing nodes, the resource allocation apparatus 1100 may further include a mapping module 1108, configured to map the computing units into upper-level computing nodes of the computing units. Specifically, after the calculation module 1104 obtains the N first weights to be configured for the first neural network layer and the M second weights to be configured for the second neural network layer, the mapping module 1108 is configured to establish a mapping relationship between the N first weights and the P computing units, and establish a mapping relationship between the M second weights and the Q computing units. Further, the mapping module 1108 is further configured to map the P computing units and the Q computing units into a plurality of secondary computing nodes according to the number of computing units included in each secondary computing node in the neural network system, where at least a part of the P computing units and at least a part of the Q computing units are mapped into the same secondary computing node.
Further, the mapping module 1108 is further configured to map the plurality of secondary computing nodes mapped by the P computing units and the Q computing units into the plurality of neural network chips according to the number of secondary computing nodes included in each neural network chip. At least a part of the secondary computing nodes to which the P computing units belong and at least a part of the secondary computing nodes to which the Q computing units belong are mapped into the same neural network chip.
In this embodiment of the present invention, how the mapping module 1108 establishes the mapping relationships between the N first weights and the P computing units, establishes the mapping relationships between the M second weights and the Q computing units, and how to map the P computing units and the Q computing units to the upper computing nodes of the computing units, may refer to the foregoing corresponding descriptions of fig. 6, fig. 6A, and fig. 7, and is not described herein again.
Embodiments of the present invention further provide computer program products for implementing the resource allocation method and the data processing method described above, each including a computer-readable storage medium storing program code, where the program code includes instructions for executing the method flows described in any of the foregoing method embodiments. It will be understood by those of ordinary skill in the art that the foregoing storage media include various non-transitory machine-readable media that can store program code, such as a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a random access memory (RAM), a solid state drive (SSD), or another non-volatile memory.
It should be noted that the examples provided in this application are only illustrative. It will be clear to those skilled in the art that, for convenience and brevity of description, each of the foregoing embodiments is described with its own emphasis; for parts of an embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments. The features disclosed in the embodiments of the invention, in the claims and in the drawings may be present independently or in combination. Features described in hardware in the embodiments of the invention may be implemented by software, and vice versa, which is not limited herein.

Claims (17)

  1. A computing resource allocation method applied to a neural network system is characterized by comprising the following steps:
    acquiring a data volume of first output data of a first neural network layer and a data volume of second output data of a second neural network layer in the neural network system, wherein input data of the second neural network layer comprises the first output data;
    determining N first weights to be configured by the first neural network layer and M second weights to be configured by the second neural network layer according to deployment requirements of the neural network system, wherein N and M are positive integers, and the ratio of N to M corresponds to the ratio of the data quantity of the first output data to the data quantity of the second output data;
    according to the calculation specification of the calculation units in the neural network system, deploying the N first weights to P calculation units and deploying the M second weights to Q calculation units, wherein P and Q are positive integers, the P calculation units are used for executing the operation of the first neural network layer, and the Q calculation units are used for executing the operation of the second neural network layer.
  2. The method of claim 1, wherein the deployment requirement comprises a computation delay, wherein the first neural network layer is a starting layer of all neural network layers in the neural network system,
    the determining N first weights to be configured by the first neural network layer and M second weights to be configured by the second neural network layer includes:
    determining the value of N according to the data volume of the first output data, the computation delay, and the calculation frequency of a resistive random access memory crossbar (ReRAM crossbar) in a calculation unit;
    and determining the value of M according to the ratio of the data volume of the first output data to the data volume of the second output data and the value of N.
  3. The method of claim 1, wherein: the neural network system comprises a plurality of neural network chips, each neural network chip comprises a plurality of computing units, each computing unit comprises at least one resistive random access memory crossbar ReRAM crossbar, the deployment requirement comprises the number of chips of the neural network system, the first neural network layer is an initial layer of the neural network system,
    the determining N first weights to be configured by the first neural network layer and M second weights to be configured by the second neural network layer includes:
    determining the value of N according to the number of the chips, the number of ReRAM crossbars in each chip, the number of ReRAM crossbars required for deploying one weight of each layer of the neural network, and the ratio of the output data volumes of adjacent neural network layers;
    and determining the value of M according to the ratio of the data volume of the first output data to the data volume of the second output data and the value of N.
  4. The method of claim 1, wherein the neural network system comprises a plurality of neural network chips, each neural network chip comprising a plurality of secondary compute nodes, each secondary compute node comprising a plurality of compute units, the method further comprising:
    and mapping the P computing units and the Q computing units to a plurality of secondary computing nodes according to the number of the computing units contained in the secondary computing nodes in the neural network system, wherein at least one part of the P computing units and at least one part of the Q computing units are mapped to the same secondary computing node.
  5. The method of claim 4, further comprising:
    and mapping the plurality of secondary computing nodes mapped by the P computing units and the Q computing units into the plurality of neural network chips according to the number of the secondary computing nodes contained in each neural network chip, wherein at least part of the secondary computing nodes of the P computing units and at least part of the secondary computing nodes of the Q computing units are mapped into the same neural network chip.
  6. A neural network system, comprising:
    a plurality of neural network chips, each neural network chip including a plurality of computing units;
    a processor coupled to the plurality of neural network chips and configured to:
    acquiring a data volume of first output data of a first neural network layer and a data volume of second output data of a second neural network layer in the neural network system, wherein input data of the second neural network layer comprises the first output data;
    determining N first weights to be configured by the first neural network layer and M second weights to be configured by the second neural network layer according to deployment requirements of the neural network system, wherein N and M are positive integers, and the ratio of N to M corresponds to the ratio of the data quantity of the first output data to the data quantity of the second output data;
    according to the calculation specification of the calculation units in the neural network system, deploying the N first weights to P calculation units in the plurality of calculation units, and deploying the M second weights to Q calculation units in the plurality of calculation units, wherein P and Q are positive integers, the P calculation units are used for executing the operation of the first neural network layer, and the Q calculation units are used for executing the operation of the second neural network layer.
  7. The neural network system of claim 6, wherein the deployment requirement includes a computation delay, the first neural network layer is a starting layer of all neural network layers in the neural network system,
    in the step of determining N first weights to be configured by the first neural network layer and M second weights to be configured by the second neural network layer, the processor is configured to:
    determining the value of N according to the data volume of the first output data, the computation delay, and the calculation frequency of a resistive random access memory crossbar (ReRAM crossbar) in a calculation unit;
    and determining the value of M according to the ratio of the data volume of the first output data to the data volume of the second output data and the value of N.
  8. The neural network system of claim 6, wherein: each computing unit comprises at least one resistive random access memory crossbar ReRAM crossbar, the deployment requirement comprises the number of chips of the neural network system, the first neural network layer is an initial layer of the neural network system,
    in the step of determining N first weights to be configured by the first neural network layer and M second weights to be configured by the second neural network layer, the processor is configured to:
    determining the value of N according to the number of the chips, the number of ReRAM crossbars in each chip, the number of ReRAM crossbars required for deploying one weight of each layer of the neural network, and the ratio of the output data volumes of adjacent neural network layers;
    and determining the value of M according to the ratio of the data volume of the first output data to the data volume of the second output data and the value of N.
  9. The neural network system of claim 6, wherein: the neural network system includes a plurality of neural network chips, each neural network chip including a plurality of secondary computational nodes, each secondary computational node including a plurality of computational units, the processor further configured to:
    and mapping the P computing units and the Q computing units to a plurality of secondary computing nodes according to the number of the computing units contained in the secondary computing nodes in the neural network system, wherein at least one part of the P computing units and at least one part of the Q computing units are mapped to the same secondary computing node.
  10. The neural network system of claim 9, wherein the processor is further configured to:
    and mapping the plurality of secondary computing nodes mapped by the P computing units and the Q computing units into the plurality of neural network chips according to the number of the secondary computing nodes contained in each neural network chip, wherein at least part of the secondary computing nodes of the P computing units and at least part of the secondary computing nodes of the Q computing units are mapped into the same neural network chip.
  11. A resource allocation apparatus, comprising:
    an obtaining module, configured to obtain a data volume of first output data of a first neural network layer and a data volume of second output data of a second neural network layer in the neural network system, where input data of the second neural network layer includes the first output data;
    a calculating module, configured to determine, according to a deployment requirement of the neural network system, N first weights to be configured by the first neural network layer and M second weights to be configured by the second neural network layer, where N and M are positive integers, and a ratio of N to M corresponds to a ratio of a data amount of the first output data to a data amount of the second output data;
    a deployment module, configured to deploy the N first weights to P computing units and deploy the M second weights to Q computing units according to a computing specification of the computing units in the neural network system, where P and Q are positive integers, the P computing units are configured to execute operations of the first neural network layer, and the Q computing units are configured to execute operations of the second neural network layer.
  12. The apparatus according to claim 11, wherein the deployment requirement comprises a computation delay, the first neural network layer is a starting layer of all neural network layers in the neural network system, and the computation module is configured to:
    determining the value of N according to the data volume of the first output data, the computation delay, and the calculation frequency of a resistive random access memory crossbar (ReRAM crossbar) in a calculation unit;
    and determining the value of M according to the ratio of the data volume of the first output data to the data volume of the second output data and the value of N.
  13. The apparatus according to claim 11, wherein the neural network system comprises a plurality of neural network chips, each neural network chip comprises a plurality of computing units, each computing unit comprises at least one resistive random access memory crossbar, ReRAM crossbar, the deployment requirement comprises a number of chips of the neural network system, the first neural network layer is a starting layer of the neural network system, and the computing module is configured to:
    determining the value of N according to the number of chips, the number of ReRAM crossbars in each chip, the number of ReRAM crossbars required for deploying one weight of each layer of the neural network, and the ratio of the output data volumes of adjacent neural network layers;
    and determining the value of M according to the ratio of the data volume of the first output data to the data volume of the second output data and the value of N.
  14. The apparatus of claim 11, wherein the neural network system comprises a plurality of neural network chips, each neural network chip comprising a plurality of secondary computational nodes, each secondary computational node comprising a plurality of computational units, the apparatus further comprising:
    a mapping module, configured to map the P computing units and the Q computing units to a plurality of secondary computing nodes according to the number of computing units included in the secondary computing nodes in the neural network system, where at least a part of the P computing units and at least a part of the Q computing units are mapped to the same secondary computing node.
  15. The apparatus of claim 14, wherein the mapping module is further configured to:
    and mapping the plurality of secondary computing nodes mapped by the P computing units and the Q computing units into the plurality of neural network chips according to the number of the secondary computing nodes contained in each neural network chip, wherein at least part of the secondary computing nodes of the P computing units and at least part of the secondary computing nodes of the Q computing units are mapped into the same neural network chip.
  16. A computer program product comprising program code comprising instructions to be executed by a computer to perform a method of allocating computational resources as claimed in any one of claims 1 to 5.
  17. A computer readable storage medium comprising computer program instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1-5.
CN201880100574.2A 2018-12-29 2018-12-29 Computing resource allocation technique and neural network system Pending CN113597621A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/125239 WO2020133317A1 (en) 2018-12-29 2018-12-29 Computing resource allocation technology and neural network system

Publications (1)

Publication Number Publication Date
CN113597621A true CN113597621A (en) 2021-11-02

Family

ID=71126750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880100574.2A Pending CN113597621A (en) 2018-12-29 2018-12-29 Computing resource allocation technique and neural network system

Country Status (2)

Country Link
CN (1) CN113597621A (en)
WO (1) WO2020133317A1 (en)


Families Citing this family (7)

Publication number Priority date Publication date Assignee Title
CN112036559A (en) * 2020-08-26 2020-12-04 北京灵汐科技有限公司 Neural network structure division method and device, computer equipment and storage medium
CN112579285B (en) * 2020-12-10 2023-07-25 南京工业大学 Distributed neural network collaborative optimization method for edge network
CN115115020A (en) * 2021-03-22 2022-09-27 华为技术有限公司 Data processing method and device
CN113158243A (en) * 2021-04-16 2021-07-23 苏州大学 Distributed image recognition model reasoning method and system
CN113238715B (en) * 2021-06-03 2022-08-30 上海新氦类脑智能科技有限公司 Intelligent file system, configuration method thereof, intelligent auxiliary computing equipment and medium
CN113517009A (en) * 2021-06-10 2021-10-19 上海新氦类脑智能科技有限公司 Storage and calculation integrated intelligent chip, control method and controller
CN116306811B (en) * 2023-02-28 2023-10-27 苏州亿铸智能科技有限公司 Weight distribution method for deploying neural network for ReRAM

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
KR20180034853A (en) * 2016-09-28 2018-04-05 에스케이하이닉스 주식회사 Apparatus and method test operating of convolutional neural network
US11062203B2 (en) * 2016-12-30 2021-07-13 Intel Corporation Neuromorphic computer with reconfigurable memory mapping for various neural network topologies
CN107622305A (en) * 2017-08-24 2018-01-23 中国科学院计算技术研究所 Processor and processing method for neutral net

Cited By (5)

Publication number Priority date Publication date Assignee Title
US20230153570A1 (en) * 2021-11-15 2023-05-18 T-Head (Shanghai) Semiconductor Co., Ltd. Computing system for implementing artificial neural network models and method for implementing artificial neural network models
WO2023123905A1 (en) * 2021-12-28 2023-07-06 深圳云天励飞技术股份有限公司 Data transmission processing method in chip system and related apparatus
CN115204380A (en) * 2022-09-15 2022-10-18 之江实验室 Data storage and array mapping method and device of storage-computation integrated convolutional neural network
CN116089095A (en) * 2023-02-28 2023-05-09 苏州亿铸智能科技有限公司 Deployment method for ReRAM neural network computing engine network
CN116089095B (en) * 2023-02-28 2023-10-27 苏州亿铸智能科技有限公司 Deployment method for ReRAM neural network computing engine network

Also Published As

Publication number Publication date
WO2020133317A1 (en) 2020-07-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination