US20210357732A1 - Neural network accelerator hardware-specific division of inference into groups of layers - Google Patents

Neural network accelerator hardware-specific division of inference into groups of layers

Info

Publication number
US20210357732A1
US20210357732A1
Authority
US
United States
Prior art keywords
hardware chip
layers
inference
neural network
chip
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US17/186,003
Other versions
US11176449B1
Inventor
Nikolay Nez
Antonio Tomas Nevado Vilchez
Hamid Reza Zohouri
Mikhail Volkov
Oleg Khavin
Sakyasingha Dasgupta
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Edgecortix Inc
Original Assignee
Edgecortix Pte Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Edgecortix Pte Ltd filed Critical Edgecortix Pte Ltd
Assigned to EDGECORTIX PTE. LTD. reassignment EDGECORTIX PTE. LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: VOLKOV, MIKHAIL, KHAVIN, OLEG, NEZ, NIKOLAY, VILCHEZ, ANTONIO TOMAS NEVADO, ZOHOURI, HAMID REZA, DASGUPTA, SAKYASINGHA
Priority to US17/492,681 (published as US20220027716A1)
Application granted
Publication of US11176449B1
Publication of US20210357732A1
Assigned to EDGECORTIX PTE. LTD. reassignment EDGECORTIX PTE. LTD. ASSIGNEE ADDRESS CHANGE Assignors: EDGECORTIX PTE. LTD.
Assigned to EDGECORTIX INC. reassignment EDGECORTIX INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: EDGECORTIX PTE. LTD.
Legal status: Active

Classifications

    • G06N 3/063: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons, using electronic means
    • G06F 12/0207: Addressing or allocation; Relocation with multidimensional access, e.g. row/column, matrix
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/10: Interfaces, programming languages or software development kits, e.g. for simulating neural networks
    • G06F 12/0862: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
    • G06F 2212/1016: Performance improvement
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present invention relates to neural network accelerator hardware-specific division of neural network inference. More specifically, the present invention relates to division of a neural network into groups of layers and/or division of each layer into portions based on estimates of duration and energy consumption.
  • Real-time neural network (NN) inference is becoming ubiquitous for computer vision and speech tasks on edge devices, in applications such as autonomous vehicles, robotics, smartphones, portable healthcare devices, and surveillance.
  • Specialized NN inference hardware, such as the Google TPU, has become a mainstream way of providing power-efficient inference.
  • The Google TPU's efficiency is restricted mainly to the point-wise convolution and dense fully connected layer types of a deep neural network (DNN).
  • MobileNet-like DNN architectures greatly reduce the number of Multiply and Accumulate (MAC) computations to be performed while achieving high accuracy, resulting in lower total latency and energy spent on MAC operations.
  • Accelerating the inference of such DNNs in hardware requires support for Inverted Residual Bottleneck DNN layers or similarly constructed combinations of point-wise and depth-wise convolution layers.
  • Providing an efficient inference system with support for such MobileNet-like architectures will enable a new generation of energy-efficient hardware-software systems for edge computing applications.
  • One aspect provides a computer program including instructions that are executable by a computer to cause the computer to perform operations for hardware-specific division of inference.
  • the operations include obtaining a computational graph and a hardware chip configuration.
  • the computational graph of a neural network has a plurality of layers. Each layer has a plurality of nodes and a plurality of edges. Each node includes a representation of a mathematical operation.
  • the hardware chip configuration includes at least one module for performing the mathematical operations and an on-chip memory.
  • the hardware chip is operable to perform inference of the neural network in portions of each layer by performing the mathematical operations on activation data, sequentially by layer, of corresponding portions of layers while interfacing with an external memory storing the activation data.
  • the operations also include dividing inference of the plurality of layers into a plurality of groups. Each group includes a number of sequential layers based on an estimate of duration and energy consumption by the hardware chip to perform inference of the neural network by performing the mathematical operations, sequentially by layer, of corresponding portions of layers of each group.
  • the operations further include generating instructions for the hardware chip to perform inference of the neural network, sequentially by group, of the plurality of groups.
  • This aspect may also include the method performed by the processor executing the instructions of the computer program, and an apparatus that performs the method.
  • the apparatus may include an obtaining section configured to obtain a computational graph and a hardware chip configuration, a dividing section configured to divide inference of the plurality of layers into a plurality of groups, and a generating section configured to generate instructions for the hardware chip to perform inference of the convolutional neural network, sequentially by group, of the plurality of groups.
  • an apparatus including an activation memory, a data loading module configured to load activation data from an external memory onto the activation memory, and a data storing module configured to store activation data from the activation memory onto the external memory.
  • the apparatus also includes a weight memory, and a weight loading module configured to load weight values from the external memory onto the weight memory.
  • the apparatus further includes an accumulation memory, a plurality of convolution modules configured to perform mathematical operations on the activation data stored in the activation data memory and the weight values stored in the weight memory, and to store values resulting from the mathematical operations onto the accumulation memory, and a plurality of activation modules configured to perform activation operations on the values stored in the accumulation memory, and to store resulting activation data onto the activation data memory.
  • the apparatus also includes an instruction module configured to feed and synchronize instructions from the external memory to the data loading module, the data storing module, the weight loading module, the plurality of convolution modules, and the plurality of activation modules, to perform inference of a convolutional neural network.
  • FIG. 1 shows an operational flow for hardware-specific division of inference, according to an embodiment of the present invention.
  • FIG. 2 shows an exemplary configuration of a hardware chip operable to perform neural network inference, according to an embodiment of the present invention.
  • FIG. 3 shows a diagram of a performance of inference of the neural network in portions of each layer, according to an embodiment of the present invention.
  • FIG. 4 shows an operational flow for dividing inference of layers into groups, according to an embodiment of the present invention.
  • FIG. 5 shows an operational flow for simulating performance of inference on a hardware chip, according to an embodiment of the present invention.
  • FIG. 6 shows an operational flow for hardware specific division of inference, according to another embodiment of the present invention.
  • FIG. 7 shows an operational flow for generating instructions for the hardware chip to perform inference, according to another embodiment of the present invention.
  • FIG. 8 shows an exemplary configuration of a multi-core hardware chip operable to perform neural network inference, according to an embodiment of the present invention.
  • FIG. 9 shows an exemplary configuration of multi-chip hardware operable to perform neural network inference, according to an embodiment of the present invention.
  • FIG. 10A shows an exemplary configuration of a depth-wise convolution module, according to an embodiment of the present invention.
  • FIG. 10B shows an exemplary configuration of a per-channel pipeline for a depth-wise convolution module, according to an embodiment of the present invention.
  • FIG. 11 shows an exemplary configuration of a point-wise convolution module, according to an embodiment of the present invention.
  • FIG. 12 shows an exemplary hardware configuration for hardware-specific division of inference, according to an embodiment of the present invention.
  • the inventors herein have found that a significant part of the total energy consumed during performance of inference is dissipated in external memory access, with more external memory throughput requiring more energy consumption.
  • Embodiments of the present invention may seek to minimize the number of external memory accesses, and generally provide high computation density in terms of teraoperations per second per unit of area (TOP/s/Area) and resource utilization.
  • Exemplary embodiments may generate instructions to perform inference on a hardware system, such as an ASIC or an FPGA, that performs efficient neural network inference by grouping neural network layers and avoiding external memory accesses between processing them, reducing the total number of external memory accesses compared to processing the layers one by one and storing all intermediate data in the external memory. This may provide performance and power efficiency close to that of a fixed-neural-network chip while retaining the flexibility to handle a variety of neural networks, such as convolutional neural networks, including MobileNet variations.
  • By modifying various degrees of parallelism in the system, a hardware chip could be tuned for a particular set or “family” of neural networks and a set of resource constraints, such as area and power, for example by using an automated design-search process.
  • the hardware can be scaled from power-restricted edge devices to data centers by adjusting scaling parameters. By reducing external memory accesses, stochasticity in performance may be reduced as well.
  • FIG. 1 shows an operational flow for neural network accelerator hardware-specific division of inference, according to an embodiment of the present invention.
  • the operational flow may provide a method of dividing inference for performance on a specific hardware chip configuration.
  • an obtaining section obtains a computational graph and a hardware chip configuration.
  • the computational graph is of a neural network having a plurality of layers, each layer having a plurality of nodes and a plurality of edges, and each node including a representation of a mathematical operation.
  • the hardware chip configuration includes at least one module for performing the mathematical operations and an on-chip memory.
  • the hardware chip is operable to perform inference of the neural network in portions of each layer by performing the mathematical operations on activation data, sequentially by layer, of corresponding portions of layers while interfacing with an external memory storing the activation data.
  • a dividing section divides inference of the plurality of layers into a plurality of groups.
  • Each group includes a number of sequential layers based on an estimate of duration and energy consumption by the hardware chip to perform inference of the neural network by performing the mathematical operations, sequentially by layer, of corresponding portions of layers of each group.
  • a generating section generates instructions for the hardware chip to perform inference of the neural network, sequentially by group, of the plurality of groups.
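  • As a minimal illustration of this flow, the following Python sketch wires the obtaining, dividing, and generating operations together. All names here (Layer, HardwareChipConfig, the one-layer-per-group placeholder split, the instruction strings) are hypothetical stand-ins for illustration, not an API defined by this disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Layer:
    name: str

@dataclass
class HardwareChipConfig:
    on_chip_memory_bytes: int
    num_convolution_modules: int

def divide_into_groups(layers: List[Layer], chip: HardwareChipConfig):
    # S 120 placeholder: a real implementation chooses group boundaries from
    # simulated duration/energy estimates (see FIG. 4 and FIG. 5).
    return [[layer] for layer in layers]

def generate_group_instructions(group, chip):
    # S 140 placeholder: a real implementation emits binary chip instructions.
    return f"load inputs; run {[layer.name for layer in group]}; store outputs"

def hardware_specific_division(layers, chip):
    # S 110: the computational graph (here a flat layer list) and the chip
    # configuration are obtained as inputs.
    groups = divide_into_groups(layers, chip)                          # S 120
    return [generate_group_instructions(g, chip) for g in groups]      # S 140

print(hardware_specific_division([Layer("conv1"), Layer("conv2")],
                                 HardwareChipConfig(2**20, 4)))
```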
  • FIG. 2 shows an exemplary configuration of a hardware chip 250 operable to perform neural network inference, according to an embodiment of the present invention.
  • Hardware chip 250 may be referred to as a neural network accelerator.
  • hardware chip 250 is an Application Specific Integrated Circuit (ASIC).
  • the modules of hardware chip 250 may be groups of logic gates arranged to perform specific functions.
  • the memories of hardware chip 250 may be RAM, flash memory, or other embedded writable memory.
  • the hardware chip configuration includes at least one module for performing the mathematical operations and an on-chip memory.
  • hardware chip 250 includes an external memory interface 252 .
  • the at least one module of the hardware chip 250 includes at least one convolution module 262; at least one activation module 266 for performing activation operations; a data loading module 258 for loading the activation data from the external memory onto the on-chip memory; a data storing module 259 for storing activation data from the on-chip memory onto the external memory; a weight loading module 254 for loading weights of the convolutional neural network from the external memory onto the on-chip memory; and an instruction DMA module 256 for loading the instructions of these modules from the external memory.
  • the on-chip memory includes a weight memory 255 , an activation data memory 260 , and an accumulation memory 264 .
  • External memory interface 252 is configured to allow hardware chip 250 , and the various modules therein, to exchange data with a DRAM memory 206 , the external memory.
  • a Central Processing Unit (CPU) 208 may request neural network inference for use in an application.
  • Weight loading module 254 and data loading module 258 are configured to read and load data from an external memory, such as DRAM memory 206 , through external memory interface 252 . Weight loading module 254 may sequentially read weight values from the external memory and load such data onto weight memory 255 .
  • Data loading module 258 may read input values, activation data, etc., from the external memory and load such data onto activation data memory 260 .
  • Data storing module 259 is configured to store activation data onto the external memory through external memory interface 252 .
  • Data storing module 259 may read activation data from activation data memory 260 and store such data onto DRAM memory 206 .
  • Data loading module 258 and data storing module 259 may operate on portions, such as rectangular subregions, blocks, or tiles, of activation data stored in the external memory.
  • Data loading module 258 and data storing module 259 may also be used for a type of operation known as a “spill-fill”, in which intermediate computation results are temporarily “evacuated” to the external memory when the capacity of an on-chip memory is insufficient.
  • Weight memory 255, activation data memory 260, and accumulation memory 264 are blocks of the on-chip memory of hardware chip 250.
  • the hardware chip configuration specifies a number and size of the banks of each block of the on-chip memory. Each block may be organized as a set of one or two port memory banks. Each block may have read and write ports exposed to corresponding computation modules, load modules, and store modules.
  • Hardware chip 250 may further include arbitration & interconnect logic connecting the on-chip memory to I/O ports, such as external memory interface 252 . Loading and storing modules of hardware chip 250 may be configured to acquire locks to a memory bank of the on-chip memory, perform a set of read or write transactions, and then release the memory bank when no longer in use. In this manner, two or more modules may access different memory banks in parallel.
  • hardware chip 250 is configured to perform inference of a convolutional neural network, and so the portions of each layer are tiles, and hardware chip 250 includes convolution modules 262 .
  • the at least one module of the hardware chip 250 includes at least one convolution module.
  • Convolution modules 262 are configured to perform mathematical operations on the input values or activation data stored in activation data memory 260 and the weight values stored in weight memory 255. Convolution modules 262 may output partial sums to accumulation memory 264, and may also perform accumulation with existing partial sums stored in accumulation memory 264. Convolution modules 262 may provide direct support for different parameters of mathematical operations, such as a kernel size of height (KH)×width (KW), vertical and horizontal strides, dilation, padding, etc.
  • convolution modules 262 include at least one dedicated depth-wise convolution module and at least one point-wise convolution module.
  • convolution modules 262 include generic convolution modules, which may support combinations of depth-wise convolution and point-wise convolution layers, such as Inverted Residual Blocks in MobileNet architectures.
  • Activation modules 266 are configured to perform activation operations on values stored in accumulation memory 264 .
  • Activation modules 266 may read input values from accumulation memory 264 and store computation results in activation data memory 260 .
  • Activation modules 266 may perform computations such as elementwise math functions, including addition, multiplication, division, square root, etc. of scalar or vector values following the mathematical operations of convolution modules 262 in order to provide activation functions, such as ReLU, LeakyReLU, Hsigmoid, H-Swish, etc.
  • Activation modules 266 may further perform residual addition of branches, requantization, local pooling such as max-pooling and average pooling with a set of fixed window sizes.
  • Instruction DMA module 256 is configured to load instructions of the various modules of hardware chip 250 .
  • Instruction DMA module 256 may load instructions of the various modules of hardware chip 250 in round-robin fashion from the external memory.
  • the instruction infrastructure of hardware chip 250 may feed and synchronize instructions.
  • the instruction infrastructure of hardware chip 250 may include, in addition to instruction DMA module 256 , at least one instruction queue, such as First-In-First-Out (FIFO) memories, for carrying encoded instructions to each of the various modules, which explicitly controls the behavior of the modules.
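  • As a rough illustration (not the disclosure's instruction format), the sketch below models an instruction-DMA loop that pulls encoded instructions from per-module streams in external memory in round-robin fashion and pushes them into per-module FIFO queues. The module names, stream representation, and FIFO depth are assumptions for illustration only.

```python
from collections import deque

# Hypothetical module names; a real chip would have the modules of FIG. 2.
MODULES = ["weight_load", "data_load", "data_store", "conv0", "conv1", "act0"]

def feed_instructions(instruction_streams, fifo_depth=16):
    """Round-robin feed of encoded instructions into per-module FIFO queues.

    instruction_streams: dict mapping module name -> list of encoded
    instructions still resident in external memory (illustrative model).
    """
    fifos = {m: deque(maxlen=fifo_depth) for m in MODULES}
    progress = True
    while progress:
        progress = False
        for module in MODULES:
            stream = instruction_streams.get(module, [])
            # Move at most one instruction per module per round-robin pass,
            # and only while the module's FIFO has room; a real DMA would
            # resume as the FIFOs drain during execution.
            if stream and len(fifos[module]) < fifo_depth:
                fifos[module].append(stream.pop(0))
                progress = True
    return fifos
```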
  • While in this embodiment the hardware chip is configured to perform inference of a convolutional neural network, other embodiments may perform hardware-specific division of inference of other kinds of neural networks.
  • the hardware chip may include an additional pair of loading and storing modules that may be attached to the accumulation memory.
  • the weight loading module may also be used for loading activation module parameters.
  • FIG. 3 shows a diagram of a performance of inference of the neural network in portions of each layer, according to an embodiment of the present invention.
  • a convolutional neural network has been divided into groups of layers based on some heuristic including an estimate of duration and energy consumption.
  • Each layer is apportioned into tiles of 3 dimensions: height, width, and channels. The sizes of the dimensions are established such that the tiles of a layer may be processed using a subset of tiles from a previous layer.
  • all tiles in the channel dimension are required for processing the activation data thereof.
  • one tile is sufficient to process the activation data of the corresponding tile in a subsequent layer.
  • the neural network includes example sequential layers 301 , 302 , 303 , and 304 among other layers.
  • a data loading module 358 reads input values or activation data from an external memory through external memory interface 352 , and loads such data onto an activation data memory 360 .
  • a data storing module 359 reads activation data from activation data memory 360 , and stores such data onto the external memory through external memory interface 352 .
  • the generating instructions for the hardware chip further includes generating instructions for the hardware chip to retrieve activation data of corresponding portions in the first layer in each group from the external memory, and record activation data resulting from the mathematical operations of corresponding portions in the last layer in each group to an external memory.
  • layers 301 , 302 , 303 , and 304 belong to a single group, which means that activation data is loaded from the external memory only once and stored on the external memory only once during the performance of inference of corresponding portions of layers 301 , 302 , 303 , and 304 .
  • Enough input tiles in the height and width dimensions of layer 301 must be loaded into on-chip memory to process the activation values of tile 301 A. Because of the data dependencies of convolution operations other than 1×1, tiles of subsequent layers will shrink in area. Thus, all but the tiles of the last layer usually overlap by (K−1)/2 per side for a K×K (equal height and width) convolution kernel, which may increase the amount of computation.
  • the computational graph of the neural network is divided into groups of layers to balance the amount of additional computations with the number of memory transactions required to store a whole intermediate layer into external memory.
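  • The overlap described above can be made concrete with a small calculation. The sketch below, under the assumption of stride-1 K×K convolutions, computes how large an input tile must be loaded to yield an output tile of a given size and the resulting per-side overlap between neighboring input tiles; the kernel and tile sizes are illustrative values only.

```python
def required_input_tile(output_tile_hw, kernels):
    """Input tile size and per-side halo for a group of stride-1 convolutions."""
    h, w = output_tile_hw
    halo = 0
    for k in reversed(kernels):        # walk from the last layer back to the first
        halo += (k - 1) // 2           # each KxK layer grows the tile by (K-1)/2 per side
    return h + 2 * halo, w + 2 * halo, halo

# Example: a group of three 3x3 convolutions producing 16x16 output tiles needs
# 22x22 input tiles, and adjacent input tiles overlap by 3 pixels per side --
# the extra computation that is traded against the external-memory traffic of
# storing whole intermediate layers.
print(required_input_tile((16, 16), [3, 3, 3]))   # -> (22, 22, 3)
```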
  • Because activation data of both tiles 301 A and 301 B is required to process the activation data of tile 302 A, the activation data of tiles 301 A and 301 B of layer 301 is loaded onto activation data memory 360.
  • the activation data of tiles 301 A and 301 B are processed to yield activation data of tiles 302 A and 302 B of layer 302 , which are also stored onto activation data memory 360 .
  • This allows processing of the next layer of activation data of tiles based on activation data of the previous layer already loaded onto activation data memory 360 , with the resulting activation data stored in the activation data memory as well.
  • the activation data of tiles 302 A and 302 B may be cleared to free space on activation data memory 360 for the next activation data.
  • the processing and yielding is repeated for each layer moving deeper in the group.
  • the activation data of tiles 302 A and 302 B are processed to yield activation data of tiles 303 A and 303 B of layer 303 , which are loaded onto activation data memory 360 .
  • the activation data of tiles 303 A and 303 B are then processed to yield activation data of tiles 304 A and 304 B of layer 304 , which are loaded onto activation data memory 360 .
  • data storing module 359 stores the activation data of tiles 304 A and 304 B onto the external memory through external memory interface 352 .
  • In this example, the performance of inference was divided into portions, or tiles, as well as groups; other embodiments may not require apportioning each layer, such as when the activation data memory is large enough to load activation data for an entire layer. A sketch of the tiled, group-wise schedule follows.
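  • The following is illustrative pseudocode rather than the chip's instruction stream: the layer objects are hypothetical callables mapping an input tile to the corresponding output tile, and the dictionaries stand in for external DRAM and on-chip activation memory. It shows the essential property of FIG. 3: one external load per tile at the first layer of a group and one external store at the last layer, with all intermediate tiles kept on chip.

```python
def run_group(layers, tile_ids, external_memory, activation_memory):
    for tile_id in tile_ids:
        # Single external-memory load per group (e.g. tiles 301A/301B).
        data = external_memory[("input", tile_id)]
        activation_memory[tile_id] = data
        for layer in layers:
            # Overwrite the on-chip tile with the next layer's result so the
            # previous layer's tile is freed (cf. clearing tiles 302A/302B).
            data = layer(data)
            activation_memory[tile_id] = data
        # Single external-memory store per group (e.g. tiles 304A/304B).
        external_memory[("output", tile_id)] = data

# Tiny usage example with two stand-in "layers".
ext = {("input", "A"): 1.0, ("input", "B"): 2.0}
run_group([lambda x: x + 1, lambda x: x * 2], ["A", "B"], ext, {})
print(ext[("output", "A")], ext[("output", "B")])   # 4.0 6.0
```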
  • FIG. 4 shows an operational flow for dividing inference of layers into groups, such as S 120 of FIG. 1 , according to an embodiment of the present invention.
  • the operations within this operational flow may be performed by a dividing section or a correspondingly named sub-section thereof.
  • the computational graph and the hardware chip configuration are obtained prior to dividing inference of layers into groups.
  • a preparing section such as the dividing section or a sub-section thereof, prepares a plurality of candidate group divisions, each candidate group division identifying a unique division of the plurality of layers.
  • a candidate group division specifies a group to which each layer belongs, provided that each group must have consecutive layers.
  • each of the plurality of candidate group divisions may identify even divisions of the plurality of layers.
  • each of the plurality of candidate group divisions may identify random divisions of the plurality of layers in groups of single layers, two layers, three layers, etc.
  • a candidate group division may also include only some of the layers of the neural network, so that finer divisions can be analyzed.
  • a simulating section simulates a performance of inference of the neural network by the hardware chip to determine the estimate of duration and energy consumption of the hardware chip for one of the candidate group divisions. As iterations proceed, the simulating section simulates performance of inference of the neural network by the hardware chip to determine the estimate of duration and energy consumption of the hardware chip for each of the plurality of candidate group divisions.
  • the dividing section or a sub-section thereof determines whether all of the candidate group divisions have been simulated. If unsimulated candidates remain, then the operational flow proceeds to S 428 , where a new candidate group division is selected for simulation. If all candidate group divisions have been simulated, then operational flow proceeds to S 426 .
  • a comparing section such as the dividing section or a sub-section thereof compares the estimate of duration and energy consumption of each candidate group division of the same layers among the plurality of layers.
  • Because partial candidate group divisions may be included, to make a fair comparison the estimates must cover an inference performance of the same layers.
  • the plurality of candidate group divisions may identify a single layer as a first candidate group division, a preceding group of layers as a second group division, and the single layer together with the preceding group of layers as a third candidate group division.
  • a fair comparison may include comparing (i) an estimate of duration and energy consumption to perform the mathematical operations of the third candidate group division and (ii) an estimate of total duration and total energy consumption to perform the mathematical operations of the first candidate group division and the second candidate group division.
  • This example may be useful for a particular embodiment of dividing inference of the layers into groups, in which a heuristic algorithm uses layer-aware grouping. The algorithm starts with an empty group, and then the first ungrouped layer is added to the group. The simulating section then estimates the duration and energy consumption of inference of the group, inference of the next ungrouped layer, and inference of the group with the next ungrouped layer added.
  • If inference of the group with the next ungrouped layer added outperforms the sum of inference of the group and inference of the next ungrouped layer, then the process is repeated for the next layer. However, if inference of the group with the next ungrouped layer added does not outperform the sum of inference of the group and inference of the next ungrouped layer, then the group will not include the next ungrouped layer, and the process proceeds to consider a group consisting of only the next ungrouped layer. This process is repeated for all of the layers of the network, as sketched below.
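  • A minimal sketch of this layer-aware grouping heuristic is shown below. It assumes a cost(layers) callback returning (duration, energy) estimates, for example backed by the simulation of FIG. 5; the weighting of duration against energy is an illustrative choice, not something fixed by the disclosure.

```python
def greedy_group(layers, cost, weight=0.5):
    """Greedily grow groups of consecutive layers while merging 'outperforms'."""
    def score(ls):
        duration, energy = cost(ls)
        return weight * duration + (1.0 - weight) * energy

    groups, current = [], []
    for layer in layers:
        if not current:
            current = [layer]
            continue
        merged = score(current + [layer])
        separate = score(current) + score([layer])
        if merged < separate:
            current.append(layer)          # grouping the next layer pays off
        else:
            groups.append(current)         # close the group, start a new one
            current = [layer]
    if current:
        groups.append(current)
    return groups
```

  • With weight=1.0 the sketch optimizes duration alone, with weight=0.0 energy alone; intermediate values trade the two off.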
  • FIG. 5 shows an operational flow for simulating performance of inference on a hardware chip, such as S 430 of FIG. 4 , according to an embodiment of the present invention.
  • the operations within this operational flow may be performed by a simulating section or a correspondingly named sub-section thereof.
  • candidate group divisions are prepared prior to simulating performance of inference.
  • a generating section generates instructions for the hardware chip to perform inference according to the candidate group division.
  • the generating section generates instructions for the hardware chip to perform the mathematical operations, sequentially by layer, of corresponding portions in layers of each group.
  • the instructions may be generated in the same manner as for the actual hardware chip, such as S 140 of FIG. 1 . More details of the instruction generation operation are described with respect to FIG. 7 .
  • an executing section such as the simulating section or a sub-section thereof, executes the instructions on a simulation of the hardware chip. This may include tracking, recording, or otherwise identifying the operations in each clock cycle. The operations that are identified are the simple, fine-grained operations that are performed by individual modules, many times in parallel with operations of other modules.
  • a summing section such as the simulating section or a sub-section thereof, sums the clock cycles during the simulation.
  • Although the simulation may run orders of magnitude faster than inference on the actual hardware chip, the amount of time of a clock cycle of the hardware chip can be determined based on the configuration of the hardware chip. For example, if the hardware chip configuration runs at 2 GHz, then it can be estimated that two billion clock cycles will last one second.
  • an assigning section such as the simulating section or a sub-section thereof assigns an energy consumption to each fine-grained operation of the simulation.
  • Although performance of inference may include complex processes, those processes are broken down into these fine-grained operations, each of which can be associated with an energy consumption measured from this simulation or a previous simulation of the same fine-grained operation on the same hardware chip.
  • energy consumptions associated with each fine-grained operation of the hardware chip may be supplied from an input file independent of the simulation environment.
  • the summing section sums the energy consumption of all of the fine-grained operations of the simulation.
  • the estimate of energy consumption of the hardware chip is based on a sum of individual energy consumptions associated with each operation, and the estimate of duration is based on the number of clock cycles.
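  • The estimates can be illustrated with the following sketch, which derives duration from the total cycle count and the configured clock rate, and energy from summing a per-operation energy table. The operation names and energy values below are made-up placeholders; real values would be measured for the specific hardware chip or supplied from an input file, as noted above.

```python
# Hypothetical per-operation energies (joules); illustrative values only.
ENERGY_PER_OP_JOULES = {
    "mac": 1.0e-12,
    "sram_read": 5.0e-12,
    "sram_write": 5.0e-12,
    "dram_read": 6.4e-10,
    "dram_write": 6.4e-10,
}

def estimate(total_cycles, op_counts, clock_hz=2.0e9):
    """Duration from cycle count and clock rate; energy from per-op counts."""
    duration_s = total_cycles / clock_hz     # e.g. 2e9 cycles at 2 GHz ~= 1 s
    energy_j = sum(ENERGY_PER_OP_JOULES[op] * n for op, n in op_counts.items())
    return duration_s, energy_j

print(estimate(2_000_000_000, {"mac": 10**9, "dram_read": 10**6}))
```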
  • FIG. 6 shows an operational flow for hardware specific division of inference, according to another embodiment of the present invention.
  • the operational flow may provide a method of dividing inference for performance on a specific hardware chip configuration.
  • the operations performed at S 610 , S 620 , and S 640 are substantially similar to the operations performed at S 110 , S 120 , and S 140 , described above with respect to FIG. 1 .
  • the hardware chip is operable to perform inference of the neural network in portions of each layer.
  • In some embodiments, the dimensions of the portions, or tiles in the case of a convolutional neural network, are predetermined.
  • In this embodiment, the operational flow for hardware specific division of inference includes an operation of determining the dimensions of the portions.
  • a determining section determines dimensions of the portions of each layer.
  • the determining section determines the dimensions of the portions of each layer by simulating a performance of inference of the neural network by the hardware chip to determine the estimate of duration and energy consumption of the hardware chip for each of a plurality of candidate dimension specifications.
  • each candidate dimension specification may be based on a capacity of the on-chip memory and a degree of parallelism of the hardware chip.
  • one of the dimensions of each portion may be defined by the degree of parallelism of the hardware chip, while the other dimensions can be variable.
  • a comparing section such as a simulating section or a sub-section thereof, compares the estimate of duration and energy consumption of each candidate dimension specification.
  • One of the candidate dimension specifications may then be selected for use in the performance of inference. The selection may be based on duration, energy consumption, or a balance of both, as in the sketch below.
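  • A hedged sketch of such a dimension search follows: candidate height/width values are enumerated, the channel dimension is fixed to the chip's degree of parallelism, candidates whose footprint exceeds the on-chip activation memory are discarded, and a simulate callback supplies the duration/energy estimate. The candidate list, byte-counting model, and lexicographic cost comparison are assumptions for illustration.

```python
def choose_tile_dims(on_chip_bytes, parallel_channels, simulate,
                     candidate_hw=(8, 16, 32, 64), bytes_per_value=1):
    """Return the (h, w, c) candidate with the best simulated (duration, energy)."""
    best, best_cost = None, None
    for h in candidate_hw:
        for w in candidate_hw:
            footprint = h * w * parallel_channels * bytes_per_value
            if footprint > on_chip_bytes:
                continue                      # tile would not fit on chip
            duration, energy = simulate((h, w, parallel_channels))
            cost = (duration, energy)
            if best_cost is None or cost < best_cost:
                best, best_cost = (h, w, parallel_channels), cost
    return best
```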
  • FIG. 7 shows an operational flow for generating instructions for the hardware chip to perform inference, such as S 140 of FIG. 1 , according to an embodiment of the present invention.
  • the operations within this operational flow may be performed by a generating section or a correspondingly named sub-section thereof.
  • the layers of the neural network have been divided into groups.
  • an assigning section such as the generating section or a sub-section thereof, assigns each operation of each module in the hardware chip to a queue.
  • the generating instructions for the hardware chip further includes assigning each operation to a queue among a plurality of queues.
  • In the scheduled computational graph of the generated instructions, each node represents an instruction from an Instruction Set Architecture (ISA) of the hardware chip, and each edge represents a virtual buffer holding data from one portion of a layer.
  • Each virtual buffer is unique and associated with one particular value in the computational graph. However, the same physical buffer may be assigned to multiple edges with non-overlapping lifetimes across the scheduled computational graph.
  • The operations in each queue are the simple, fine-grained operations that are performed by individual modules, many times in parallel with operations of other modules.
  • Each instruction may be realized by multiple fine-grain operations.
  • a queue may have operations that are performed by more than one module. Every module in the system executes its own linear sequence of instructions, which can be broken down into operations. The performance of inference may be thought of as a set of sequential processes running in parallel.
  • the generating instructions for the hardware chip further includes ordering execution of operations in each queue.
  • Each parallel process may read from and/or write to multiple memories.
  • Each instruction in the process may result in operations on many data elements during many clock cycles. Therefore, proper ordering of the operations may be critical to ensuring that operation dependencies are satisfied and each operation is performed at a time when the necessary resources are available.
  • The ordering section may also optimize the order to minimize execution time and the number of potential evacuations of data.
  • an allocating section such as the generating section or a sub-section thereof, allocates locations in the on-chip memory of the hardware chip for data.
  • the generating instructions for the hardware chip further includes allocating locations in the on-chip memory to data for performing inference of the neural network.
  • the generating instructions may also include generating instructions for the at least one module of the hardware chip to perform loading of data from the external memory to the allocated locations.
  • the allocating section may replace virtual buffers with physical memory locations of the on-chip memory of the hardware chip for purposes of generating the instructions before execution of inference by the hardware chip.
  • the generating section or a sub-section thereof determines whether all of the data that requires allocation can be allocated to available memory. In other words, the generating section determines whether there is enough memory to hold all necessary data for each clock cycle. If there is not enough memory for all necessary data for one or more clock cycles, then the operational flow proceeds to S 746 , where one or more evacuations of data may be introduced. If there is enough memory for all necessary data for all clock cycles, then the operational flow proceeds to S 747 .
  • an evacuating section such as the generating section or a sub-section thereof, introduces evacuations of data to the external memory into the operations.
  • Because the dimensions of the portions of each layer are set, as is the division of layers into groups, the performance of inference may encounter times when a particular memory requires more storage space than exists, such as when there are not enough physical memory locations to perform assignment of all edges.
  • In an evacuation, some or all of the data currently stored in the on-chip memory is temporarily offloaded onto the external memory, so that the on-chip memory can be cleared for storage of more immediately required data. The evacuated data is later loaded back onto the on-chip memory when it once again becomes necessary for further processing.
  • The values to evacuate are selected in an attempt to minimize evacuations of data to the external memory, i.e., in an attempt to reduce the number of external memory accesses.
  • Once the evacuations are introduced, they must be scheduled into the order of operations, and so the operational flow returns to S 742 whenever new evacuations of data are introduced.
  • the generating instructions for the hardware chip further includes scheduling evacuation of data to the external memory in order to perform inference of the neural network.
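  • The sketch below illustrates one way the allocation and evacuation steps could interact, under assumed buffer sizes and a simple next-use model: buffers are placed into a fixed-capacity on-chip memory, and when capacity would be exceeded, the resident buffer whose recorded next use is farthest away is evacuated, which tends to keep the number of external memory accesses low. This is an illustrative policy, not the disclosure's algorithm.

```python
def allocate_with_evacuation(live_buffers, capacity):
    """live_buffers: list of (name, size, next_use); returns (resident, evacuations)."""
    resident, used, evacuations = {}, 0, []
    for name, size, next_use in sorted(live_buffers, key=lambda b: b[2]):
        while used + size > capacity and resident:
            # Evacuate the resident buffer whose recorded next use is latest
            # (a Belady-style choice); it is scheduled for a store now and a
            # reload before that later use.
            victim = max(resident, key=lambda n: resident[n][1])
            used -= resident[victim][0]
            evacuations.append(victim)
            del resident[victim]
        if used + size <= capacity:
            resident[name] = (size, next_use)
            used += size
    return resident, evacuations
```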
  • an annotating section such as the generating section or a sub-section thereof, annotates synchronization flags.
  • the generating instructions for the hardware chip further includes synchronization flag annotating to preserve mutual ordering of dependent operations.
  • Each consumer-producer pair of processes may have a pair of semaphores/token-queues for Read After Write (RAW) and Write After Read (WAR) dependency synchronization.
  • RAW Read After Write
  • WAR Write After Read
  • dependencies of each pair of semaphores/token-queues for RAW and WAR may be tracked.
  • Each instruction may have a set of flags to decrement and increment semaphores corresponding to a particular process. Therefore, in some embodiments, an explicit, compiler-guided token-based synchronization mechanism may be employed to avoid data hazards while maintaining task-level parallelism, as modeled in the sketch below.
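  • The following sketch models the token-based synchronization with ordinary semaphores: one semaphore carries RAW tokens (data ready) and the other carries WAR tokens (buffer slots free) between a producer and a consumer process. The two-slot buffer and the process bodies are assumptions for illustration; on the hardware chip the increments and decrements would be flags on the generated instructions.

```python
from threading import Semaphore, Thread

raw_tokens = Semaphore(0)   # incremented by the producer when data is written
war_tokens = Semaphore(2)   # free buffer slots; incremented by the consumer

buffer, results = [], []

def producer(values):
    for v in values:
        war_tokens.acquire()      # wait until a slot may be overwritten (WAR)
        buffer.append(v)
        raw_tokens.release()      # signal that the data is ready (RAW)

def consumer(n):
    for _ in range(n):
        raw_tokens.acquire()      # wait for data (RAW)
        results.append(buffer.pop(0) * 2)   # stand-in "processing"
        war_tokens.release()      # hand the slot back (WAR)

t1 = Thread(target=producer, args=([1, 2, 3, 4],))
t2 = Thread(target=consumer, args=(4,))
t1.start(); t2.start(); t1.join(); t2.join()
print(results)   # [2, 4, 6, 8]
```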
  • a converting section such as the generating section or a sub-section thereof, converts the instructions into a binary representation.
  • the generating instructions for the hardware chip further includes converting instructions into binary representation.
  • the binary representation is a format that is suitable to be run on the hardware chip.
  • FIG. 8 shows an exemplary configuration of a multi-core hardware chip 850 operable to perform neural network inference, according to an embodiment of the present invention.
  • the hardware chip configuration further includes a plurality of cores 851 , and the at least one module for performing the mathematical operations and the on-chip memory are distributed among the plurality of cores.
  • the hardware chip configuration further shows that each core includes at least one transmitter block 867 and at least one receiver block 868 configured for inter-core communication.
  • Multi-core hardware chip 850 includes four cores 851 , each of which is substantially similar to hardware chip 250 , described above with respect to FIG. 2 , including all the same modules and memories, but with two additional blocks, transmitter block 867 , and receiver block 868 .
  • the transmitter blocks 867 and receiver blocks 868 of cores 851 are interconnected through one or more write channels 869 allowing write access to memories of other cores, and allowing the loading modules in the core read access to memories of other cores.
  • Data exchange may be facilitated through a circuit-switched arbitrated inter-core interconnect, through which an initiator side must first acquire a lock inside another core's memory, and then perform a “burst” transfer of the data.
  • Other embodiments may include other structures for performing inter-core communication.
  • Generating instructions for the hardware chip further includes distributing instructions among the cores.
  • By utilizing multi-core hardware chip 850 to perform inference of the neural network, more operations can be performed in parallel, significantly reducing the duration while requiring little additional energy consumption in the form of data transfers among cores.
  • Because multi-core hardware chip 850 includes four cores, it would not be unreasonable to expect the duration of the performance of inference to be reduced by about 75%.
  • Utilizing multi-core hardware chip 850 may allow the performance to be further scaled up to exceed the limits of power density for a single core.
  • FIG. 9 shows an exemplary configuration of multi-chip hardware operable to perform neural network inference, according to an embodiment of the present invention.
  • the hardware chip configuration further includes at least one transmitter block 967 and at least one receiver block 968 configured to communicate with a second instance of the hardware chip 950 of a multi-chip hardware configuration.
  • the multi-chip hardware of this embodiment includes four hardware chips 950 , each of which is substantially similar to each core 851 , described above with respect to FIG. 8 , including all the same modules and memories. Furthermore, the structures and functions of transmitter blocks 967 , receiver blocks 968 , and write channels 969 are substantially similar to that of transmitter blocks 867 , receiver blocks 868 , and write channels 869 of FIG. 8 . In some embodiments, each hardware chip 950 includes four transmitter blocks and four receiver blocks, which may allow creation of multichip configurations of arbitrary size with hardware chips 950 connected in mesh or 2D Torus topologies.
  • high speed serial interfaces such as Serializer/Deserializer (SerDes) interfaces, which are frequently employed in FPGAs and ASICs for creating multi-chip configurations, may be employed for the purpose of implementation of such transmitter and receiver blocks.
  • SerDes Serializer/Deserializer
  • In this embodiment, each hardware chip is identical. In other embodiments, the hardware chips of a multi-chip hardware configuration may have different components, such as modules for performing different operations and memories of different sizes, for example because the chips are used to perform inference of different neural networks.
  • A multi-chip hardware configuration including chips of different configurations may be beneficial for greater scalability and when the chips perform inference of multiple neural networks in parallel.
  • each hardware chip of a multi-chip hardware may be a multi-core hardware chip, such as multi-core hardware chip 850 of FIG. 8 .
  • FIG. 10A shows an exemplary configuration of a depth-wise convolution module 1062 , according to an embodiment of the present invention.
  • Depth-wise convolution module 1062 includes a queue 1062 Q, a main sequencer 1062 MS, a window sequencer 1062 WS, an activation feeder 1062 AF, a weight feeder 1062 WF, a pipeline controller 1062 PC, convolution pipelines 1062 CP, an external accumulation logic 1062 A, and an accumulation memory interface 1062 AI.
  • Queue 1062 Q receives and sends instructions. Queue 1062 Q may receive instructions from an instruction DMA module, such as instruction DMA module 256 of FIG. 2 , and send the instructions to main sequencer 1062 MS. Queue 1062 Q may be a FIFO memory or any other memory suitable for queueing instructions.
  • Main sequencer 1062 MS sequences control parameters for convolution.
  • Main sequencer 1062 MS may receive instructions from queue 1062 Q, and output instructions to window sequencer 1062 WS.
  • Main sequencer 1062 MS splits KH×KW convolution into smaller convolutions of size 1×<window> and prepares instructions for activation data and weight values according to order of input regions within the kernel.
  • <window> refers to an architecture parameter determining line buffer length.
  • Window sequencer 1062 WS sequences control parameters for one 1×<window> convolution.
  • Window sequencer 1062 WS may receive instructions from Main sequencer 1062 MS, and output a data sequence of activation data according to order of input regions within the kernel to activation feeder 1062 AF and a data sequence of weight values according to order of input regions within the kernel to weight feeder 1062 WF.
  • Activation feeder 1062 AF feeds activation data accessed from an activation data memory, such as activation data memory 260 of FIG. 2, through data memory interface 1062 DI to convolution pipelines 1062 CP in accordance with the activation data indicated in the data sequence from window sequencer 1062 WS.
  • Activation feeder 1062 AF may read activation data sufficient for 1×<window> computation from the activation data memory into a line buffer of the convolution pipelines 1062 CP.
  • Weight feeder 1062 WF preloads weight values accessed from a weight memory, such as weight memory 255 of FIG. 2, through weight memory interface 1062 WI to convolution pipelines 1062 CP in accordance with the weight values indicated in the data sequence from window sequencer 1062 WS. Weight feeder 1062 WF may read weight values sufficient for 1×<window> computation from the weight memory into a weight buffer of the convolution pipelines 1062 CP.
  • Pipeline controller 1062 PC controls data transfer operations of convolution pipelines 1062 CP.
  • Pipeline controller 1062 PC may initiate copying of data from the line buffer into an activation buffer of convolution pipelines 1062 CP once the current activation buffer content has been processed.
  • Pipeline controller 1062 PC may control convolution computations performed by each channel pipeline 1062 CH of convolution pipelines 1062 CP, where each channel pipeline 1062 CH operates on one channel of the input to the depth-wise convolution layer.
  • Convolution pipelines 1062 CP performs mathematical operations on activation data fed from activation feeder 1062 AF and weight values preloaded from weight feeder 1062 WF.
  • Convolution pipelines 1062 CP is divided into channel pipelines 1062 CH, each channel pipeline 1062 CH performing mathematical operations for one channel.
  • In this manner, the convolution pipeline logically performs the convolution computations.
  • External accumulation logic 1062 A receives data from convolution pipelines 1062 CP, and stores the data in an accumulation memory, such as accumulation memory 264 of FIG. 2 , through accumulation memory interface 1062 AI.
  • Accumulation logic 1062 A includes an adder 1062 P for each channel pipeline 1062 CH.
  • Accumulation logic 1062 A may be used for point-wise summation of results of 1×<window> convolutions with the contents of the accumulation memory.
  • In this embodiment, there are three channels, as exemplified by the three channel pipelines. However, other embodiments may have a different number of channels; this embodiment shows three channels mainly for simplicity, and many embodiments will include at least 16 channels to accommodate practical applications.
  • FIG. 10B shows an exemplary configuration of a channel pipeline 1062 CH for a depth-wise convolution module, according to an embodiment of the present invention.
  • Channel pipeline 1062 CH includes a line buffer 1062 LB, an activation buffer 1062 AB, a weight buffer 1062 WB, a plurality of multipliers 1062 X, a plurality of adders 1062 P, a delay register 1062 DR, and an internal accumulation register 1062 IA.
  • Line buffer 1062 LB stores activation data received from an activation feeder 1062 AF.
  • Line buffer 1062 LB may include a shift register storing activation data as read by activation feeder 1062 AF at one pixel per cycle.
  • Activation buffer 1062 AB stores activation data received from line buffer 1062 LB.
  • Activation buffer 1062 AB may include a set of registers storing activation data to which the current convolution computation is applied.
  • Weight buffer 1062 WB stores weight values received from weight feeder 1062 WF.
  • Weight buffer 1062 WB may include a shift register storing weight values to which the current convolution computation is applied.
  • Multipliers 1062 X multiply the activation data from activation buffer 1062 AB by the weight values from weight buffer 1062 WB.
  • In this embodiment, there are three multipliers 1062 X, meaning that the degree of parallelism in the width or height dimension of a convolution kernel is three.
  • Adders 1062 P, which collectively form an adder tree, then add together the products of the activation data and the weight values.
  • Delay register 1062 DR, which is also considered part of the adder tree, balances the adder tree.
  • Internal accumulation register 1062 IA assists in the addition by storing partial sums.
  • Internal accumulation register 1062 IA may be used for accumulation of partial sums when the number of windows of the buffers (six in this embodiment) or the width or height of the convolution filter is greater than the degree of parallelism (three).
  • the total sum is output to an accumulation logic 1062 A, which then stores the data in an accumulation memory, such as accumulation memory 264 of FIG. 2 , through accumulation memory interface 1062 AI.
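  • The arithmetic performed by one channel pipeline can be summarized by the rough functional model below: a 1×<window> slice of activations is multiplied by weights three values at a time (the degree of parallelism in this embodiment), reduced by the adder tree, and accumulated in the internal accumulation register before being handed to accumulation logic 1062 A. The window length and values are illustrative assumptions, and the model ignores pipelining and the delay register.

```python
def channel_pipeline_1xW(activation_window, weights, parallelism=3):
    """Functional model of one 1x<window> depth-wise convolution on one channel."""
    assert len(activation_window) == len(weights)
    internal_accumulator = 0
    for start in range(0, len(weights), parallelism):
        acts = activation_window[start:start + parallelism]
        wts = weights[start:start + parallelism]
        products = [a * w for a, w in zip(acts, wts)]    # multipliers 1062 X
        internal_accumulator += sum(products)            # adder tree + 1062 IA
    return internal_accumulator                          # handed to 1062 A

# Example: a 1x6 window processed as two passes of three MACs each.
print(channel_pipeline_1xW([1, 2, 3, 4, 5, 6], [1, 1, 1, 1, 1, 1]))   # 21
```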
  • FIG. 11 shows an exemplary configuration of a point-wise convolution module 1162 , according to an embodiment of the present invention.
  • Point-wise convolution module 1162 includes queues 1162 Q, a main sequencer 1162 S, a weight memory interface 1162 WI, a weight feeder 1162 WF, an activation feeder 1162 AF, a data memory interface 1162 DI, a systolic array 1162 SA, an accumulation logic 1162 A, and an accumulation memory interface 1162 AI.
  • Queue 1162 Q receives and sends instructions. Queue 1162 Q may receive instructions from an instruction DMA module, such as instruction DMA module 256 of FIG. 2, and send the instructions to main sequencer 1162 S. Queue 1162 Q may be a FIFO memory or any other memory suitable for queueing instructions.
  • Main sequencer 1162 S sequences control parameters for convolution.
  • Main sequencer 1162 S may receive instructions from queue 1162 Q, and output a control sequence to weight feeder 1162 WF and activation feeder 1162 AF, each through a queue.
  • main sequencer 1162 S splits KH×KW convolutions into a sequence of 1×1 convolutions, fed as control parameters into weight feeder 1162 WF and activation feeder 1162 AF.
  • Weight feeder 1162 WF preloads weight values accessed from a weight memory, such as weight memory 255 of FIG. 2, through weight memory interface 1162 WI to systolic array 1162 SA in accordance with the weight values indicated in the control parameters from main sequencer 1162 S.
  • Activation feeder 1162 AF feeds activation data accessed from an activation data memory, such as activation data memory 260 of FIG. 2 , through data memory interface 1162 DI to systolic array 1162 SA in accordance with the activation data indicated in the data sequence from main sequencer 1162 S.
  • Systolic array 1162 SA includes a plurality of MAC elements 1162 M. Each MAC element 1162 M is preloaded with a weight value from weight feeder 1162 WF before computation starts, and then receives an activation value from activation feeder 1162 AF. To allow overlapping of computation and weight value preload, multiple weight buffers may be used. MAC elements 1162 M are arranged in an array such that the product of the activation value and the weight output from preceding MAC elements 1162 M is input to subsequent MAC elements 1162 M.
  • each MAC element 1162 M outputs an accumulation value equal to the value output from its left neighbor MAC element 1162 M multiplied by the preloaded weight value 1162 W, the product of which is added to the value output from its top neighbor MAC element 1162 M.
  • the MAC elements 1162 M of the lowest row output their products to accumulation logic 1162 A.
  • Accumulation logic 1162 A receives products from systolic array 1162 SA, and stores the products in an accumulation memory, such as accumulation memory 264 of FIG. 2. In this embodiment, if the accumulation required by main sequencer 1162 S reads an old value in the memory location to be written, accumulation logic 1162 A will overwrite it with the sum of the old and new values. Otherwise, accumulation logic 1162 A writes the new value as is.
  • Point-wise convolution module 1162 may be useful in performing convolution by splitting a single KH×KW convolution into KH×KW separate 1×1 convolutions. For example, a 2×2 convolution may be substituted by four different 1×1 convolutions accumulated into the same region of accumulation memory, such as accumulation memory 264 of FIG. 2. Point-wise convolution module 1162 may compute each 1×1 convolution as a dot product of the matrix of activation values in the MAC elements and the matrix of weight values in the MAC elements, and then sum the results of the 1×1 convolutions.
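  • The dataflow of systolic array 1162 SA can be summarized by the functional (non-cycle-accurate) model below: weights are preloaded into the MAC grid, one activation value per input channel streams in from the left, partial sums flow downward, and the bottom row delivers one value per output channel to the accumulation logic. The matrix shapes and values are illustrative assumptions.

```python
def systolic_1x1(activations, weights):
    """Functional model of a weight-stationary 1x1 convolution on the array."""
    out_channels = len(weights[0])
    partial = [0.0] * out_channels           # one running sum per column
    for ci, act in enumerate(activations):   # row ci of MAC elements
        for co in range(out_channels):       # column co of MAC elements
            # out = (value from left neighbour) * preloaded weight
            #       + (value from top neighbour)
            partial[co] += act * weights[ci][co]
    return partial                           # bottom row -> accumulation logic

# Three input channels, two output channels (preloaded weights).
acts = [1.0, 2.0, 3.0]
w = [[0.0, 1.0],
     [2.0, 3.0],
     [4.0, 5.0]]
print(systolic_1x1(acts, w))                 # [16.0, 22.0]
```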
  • FIG. 12 shows an exemplary hardware configuration for hardware-specific division of inference, according to an embodiment of the present invention.
  • The exemplary hardware configuration includes apparatus 1290, which communicates with network 1298 and interacts with inference environment 1296.
  • Apparatus 1290 may be a host computer such as a server computer or a mainframe computer that executes an on-premise application and hosts client computers that use it, in which case apparatus 1290 may not be directly connected to inference environment 1296, but may instead be connected to it through a terminal device via network 1298.
  • Apparatus 1290 may be a computer system that includes two or more computers.
  • Apparatus 1290 may be a personal computer that executes an application for a user of apparatus 1290 .
  • Apparatus 1290 includes a logic section 1270 , a storage section 1280 , a communication interface 1292 , and an input/output controller 1294 .
  • Logic section 1270 may be a computer program product including one or more computer readable storage mediums collectively storing program instructions that are executable by a processor or programmable circuitry to cause the processor or programmable circuitry to perform the operations of the various sections.
  • Logic section 1270 may alternatively be analog or digital programmable circuitry, or any combination thereof.
  • Logic section 1270 may be composed of physically separated storage or circuitry that interacts through communication.
  • Storage section 1280 may be a non-volatile computer-readable medium capable of storing non-executable data for access by logic section 1270 during performance of the processes herein.
  • Communication interface 1292 reads transmission data, which may be stored on a transmission buffering region provided in a recording medium, such as storage section 1280 , and transmits the read transmission data to network 1298 or writes reception data received from network 1298 to a reception buffering region provided on the recording medium.
  • Input/output controller 1294 connects to various input and output units, such as inference environment 1296 , via a parallel port, a serial port, a keyboard port, a mouse port, a monitor port, and the like to accept commands and present information.
  • Inference environment 1296 may be a hardware chip capable of performing neural network inference, such as hardware chip 250 of FIG. 2 , or may be a computer or similar device with a processor and memory, such as a smartphone, smart car, etc., which also includes a hardware chip in communication with the memory.
  • Logic section 1270 includes obtaining section 1272 , dividing section 1274 , which includes simulating section 1275 , and generating section 1277 .
  • Storage section 1280 includes computational graph 1282 , hardware chip configuration 1284 , candidates 1286 , simulation environment 1287 , and instructions 1289 .
  • Obtaining section 1272 is the portion of logic section 1270 that obtains information for hardware-specific division of inference.
  • Obtaining section 1272 may be configured to obtain a computational graph and a hardware chip configuration.
  • Obtaining section 1272 may store obtained information in storage section 1280 as computational graph 1282 and hardware chip configuration 1284 .
  • Obtaining section 1272 may include sub-sections for performing additional functions, as described in the foregoing flow charts. Such sub-sections may be referred to by a name associated with their function.
  • Dividing section 1274 is the portion of logic section 1270 that divides inference for hardware-specific division of inference.
  • Dividing section 1274 may be configured to divide inference of a plurality of layers of a neural network into a plurality of groups, each group including a number of sequential layers based on an estimate of duration and energy consumption by a hardware chip to perform inference of the neural network. While dividing, dividing section 1274 may access computational graph 1282, hardware chip configuration 1284, and candidates 1286.
  • Dividing section 1274 may include sub-sections for performing additional functions, as described in the foregoing flow charts. Such sub-sections may be referred to by a name associated with their function.
  • Simulating section 1275 is the portion of logic section 1270 that simulates a performance of inference of a neural network by a specific hardware chip.
  • Simulating section 1275 may be configured to simulate a performance of inference of the neural network by the hardware chip to determine the estimate of duration and energy consumption of the hardware chip for each of a plurality of candidate group divisions. While simulating, simulating section 1275 may access computational graph 1282, hardware chip configuration 1284, candidates 1286, simulation environment 1287, and instructions 1289.
  • Simulating section 1275 may include sub-sections for performing additional functions, as described in the foregoing flow charts. Such sub-sections may be referred to by a name associated with their function.
  • Generating section 1277 is the portion of logic section 1270 that generates instructions for hardware-specific division of inference.
  • Generating section 1277 may be configured to generate instructions for the hardware chip to perform inference of the neural network, sequentially by group, of the plurality of groups.
  • The instructions may be used for simulation, such as by simulating section 1275, or may be used directly on the hardware chip.
  • Generating section 1277 may access computational graph 1282, hardware chip configuration 1284, candidates 1286, and instructions 1289.
  • Generating section 1277 may include sub-sections for performing additional functions, as described in the foregoing flow charts. Such sub-sections may be referred to by a name associated with their function.
  • The apparatus may be any other device capable of processing logical functions in order to perform the processes herein.
  • The apparatus may not need to be connected to a network in environments where the input, output, and all information are directly connected.
  • The logic section and the storage section need not be entirely separate devices, but may share one or more computer-readable mediums.
  • The storage section may be a hard drive storing both the computer-executable instructions and the data accessed by the logic section, and the logic section may be a combination of a central processing unit (CPU) and random access memory (RAM), in which the computer-executable instructions may be copied in whole or in part for execution by the CPU during performance of the processes herein.
  • A program that is installed in the computer can cause the computer to function as or perform operations associated with apparatuses of the embodiments of the present invention or one or more sections (including modules, components, elements, etc.) thereof, and/or cause the computer to perform processes of the embodiments of the present invention or steps thereof.
  • A program may be executed by a processor to cause the computer to perform certain operations associated with some or all of the blocks of flowcharts and block diagrams described herein.
  • Various embodiments of the present invention may be described with reference to flowcharts and block diagrams whose blocks may represent (1) steps of processes in which operations are performed or (2) sections of apparatuses responsible for performing operations. Certain steps and sections may be implemented by dedicated circuitry, programmable circuitry supplied with computer-readable instructions stored on computer-readable media, and/or processors supplied with computer-readable instructions stored on computer-readable media.
  • Dedicated circuitry may include digital and/or analog hardware circuits and may include integrated circuits (IC) and/or discrete circuits.
  • Programmable circuitry may include reconfigurable hardware circuits comprising logical AND, OR, XOR, NAND, NOR, and other logical operations, flip-flops, registers, memory elements, etc., such as field-programmable gate arrays (FPGA), programmable logic arrays (PLA), etc.
  • the present invention may be a system, a method, and/or a computer program product.
  • the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
  • the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
  • the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • a non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
  • a computer readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
  • Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
  • the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
  • Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to individualize the electronic circuitry, in order to perform aspects of the present invention.

Abstract

Neural network accelerator hardware-specific division of inference may be performed by operations including obtaining a computational graph and a hardware chip configuration. The operations also include dividing inference of the plurality of layers into a plurality of groups. Each group includes a number of sequential layers based on an estimate of duration and energy consumption by the hardware chip to perform inference of the neural network by performing the mathematical operations on activation data, sequentially by layer, of corresponding portions of layers of each group. The operations further include generating instructions for the hardware chip to perform inference of the neural network, sequentially by group, of the plurality of groups.

Description

    BACKGROUND
  • Technical Field
  • The present invention relates to neural network accelerator hardware-specific division of neural network inference. More specifically, the present invention relates to division of a neural network into groups of layers and/or division of each layer into portions based on estimates of duration and energy consumption.
  • Background
  • Real-time neural network (NN) inference is going to be ubiquitous for computer vision or speech tasks on edge devices for applications such as autonomous vehicles, robotics, smartphones, portable healthcare devices, surveillance, etc. Specialized NN inference hardware, such as Google TPU, has become a mainstream way of providing power efficient inference. Google TPU's efficiency is restricted mainly to point-wise convolution and dense fully connected layer types of a deep neural network (DNN).
  • On the other hand, MobileNet-like DNN architectures greatly reduce the number of Multiply and Accumulate (MAC) computations to be performed while achieving high accuracy, resulting in lower total latency and energy spent on MAC operations. However, accelerating the inference of such DNNs on hardware requires support for Inverted Residual Bottleneck type DNN layers or similarly constructed combinations of point-wise and depth-wise convolution DNN layers. Providing an efficient inference system with support for such MobileNet-like architectures will enable a new generation of energy efficient hardware-software systems for edge computing applications.
  • SUMMARY
  • According to an aspect of the present invention, provided is a computer program including instructions that are executable by a computer to cause the computer to perform operations for hardware-specific division of inference. The operations include obtaining a computational graph and a hardware chip configuration. The computational graph of a neural network has a plurality of layers. Each layer has a plurality of nodes and a plurality of edges. Each node includes a representation of a mathematical operation. The hardware chip configuration includes at least one module for performing the mathematical operations and an on-chip memory. The hardware chip is operable to perform inference of the neural network in portions of each layer by performing the mathematical operations on activation data, sequentially by layer, of corresponding portions of layers while interfacing with an external memory storing the activation data. The operations also include dividing inference of the plurality of layers into a plurality of groups. Each group includes a number of sequential layers based on an estimate of duration and energy consumption by the hardware chip to perform inference of the neural network by performing the mathematical operations, sequentially by layer, of corresponding portions of layers of each group. The operations further include generating instructions for the hardware chip to perform inference of the neural network, sequentially by group, of the plurality of groups.
  • This aspect may also include the method performed by the processor executing the instructions of the computer program, and an apparatus that performs the method. The apparatus may include an obtaining section configured to obtain a computational graph and a hardware chip configuration, a dividing section configured to divide inference of the plurality of layers into a plurality of groups, and a generating section configured to generate instructions for the hardware chip to perform inference of the convolutional neural network, sequentially by group, of the plurality of groups.
  • According to an aspect of the present invention, provided is an apparatus including an activation memory, a data loading module configured to load activation data from an external memory onto the activation memory, and a data storing module configured to store activation data from the activation memory onto the external memory. The apparatus also includes a weight memory, and a weight loading module configured to load weight values from the external memory onto the weight memory. The apparatus further includes an accumulation memory, a plurality of convolution modules configured to perform mathematical operations on the activation data stored in the activation data memory and the weight values stored in the weight memory, and to store values resulting from the mathematical operations onto the accumulation memory, and a plurality of activation modules configured to perform activation operations on the values stored in the accumulation memory, and to store resulting activation data onto the activation data memory. The apparatus also includes an instruction module configured to feed and synchronize instructions from the external memory to the data loading module, the data storing module, the weight loading module, the plurality of convolution modules, and the plurality of activation modules, to perform inference of a convolutional neural network.
  • The summary clause does not necessarily describe all necessary features of the embodiments of the present invention. The present invention may also be a sub-combination of the features described above.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows an operational flow for hardware-specific division of inference, according to an embodiment of the present invention.
  • FIG. 2 shows an exemplary configuration of a hardware chip operable to perform neural network inference, according to an embodiment of the present invention.
  • FIG. 3 shows a diagram of a performance of inference of the neural network in portions of each layer, according to an embodiment of the present invention.
  • FIG. 4 shows an operational flow for dividing inference of layers into groups, according to an embodiment of the present invention.
  • FIG. 5 shows an operational flow for simulating performance of inference on a hardware chip, according to an embodiment of the present invention.
  • FIG. 6 shows an operational flow for hardware specific division of inference, according to another embodiment of the present invention.
  • FIG. 7 shows an operational flow for generating instructions for the hardware chip to perform inference, according to another embodiment of the present invention.
  • FIG. 8 shows an exemplary configuration of a multi-core hardware chip operable to perform neural network inference, according to an embodiment of the present invention.
  • FIG. 9 shows an exemplary configuration of multi-chip hardware operable to perform neural network inference, according to an embodiment of the present invention.
  • FIG. 10A shows an exemplary configuration of a depth-wise convolution module, according to an embodiment of the present invention.
  • FIG. 10B shows an exemplary configuration of a per-channel pipeline for a depth-wise convolution module, according to an embodiment of the present invention.
  • FIG. 11 shows an exemplary configuration of a point-wise convolution module, according to an embodiment of the present invention.
  • FIG. 12 shows an exemplary hardware configuration for hardware-specific division of inference, according to an embodiment of the present invention.
  • DESCRIPTION OF EXEMPLARY EMBODIMENTS
  • Hereinafter, example embodiments of the present invention will be described. The example embodiments shall not limit the invention according to the claims, and the combinations of the features described in the embodiments are not necessarily essential to the invention.
  • The inventors herein have found that a significant part of the total energy consumed during performance of inference is dissipated in external memory access, with more external memory throughput requiring more energy consumption.
  • Embodiments of the present invention may seek to minimize the number of external memory accesses, and generally provide high computation density in terms of teraoperations per second per unit of area (TOP/s/Area) and resource utilization. Exemplary embodiments may generate instructions to perform inference by a hardware system, such as an ASIC or an FPGA, capable of performing efficient neural network inference by grouping neural network layers and avoiding external memory accesses between processing them, reducing the total number of external memory accesses as compared to processing the layers one by one and storing all intermediate data in the external memory. This may provide the flexibility to handle a variety of neural networks, such as convolutional neural networks including MobileNet variations, with performance and power efficiency close to that of a fixed-neural-network chip.
  • Techniques herein may be beneficial in conditions when an entire input layer cannot fit into an on-chip memory. By modifying various degrees of parallelism in the system, a hardware chip could be tuned for a particular set or “family” of neural networks and a set of resource constraints, such as area and power, such as by using an automated design-search process. The hardware can be scaled from power-restricted edge devices to data centers by adjusting scaling parameters. By reducing external memory accesses, stochasticity in performance may be reduced as well.
  • FIG. 1 shows an operational flow for neural network accelerator hardware-specific division of inference, according to an embodiment of the present invention. The operational flow may provide a method of dividing inference for performance on a specific hardware chip configuration.
  • At S110, an obtaining section obtains a computational graph and a hardware chip configuration. The computational graph is of a neural network having a plurality of layers, each layer having a plurality of nodes and a plurality of edges, and each node including a representation of a mathematical operation. The hardware chip configuration includes at least one module for performing the mathematical operations and an on-chip memory. The hardware chip is operable to perform inference of the neural network in portions of each layer by performing the mathematical operations on activation data, sequentially by layer, of corresponding portions of layers while interfacing with an external memory storing the activation data.
  • At S120, a dividing section divides inference of the plurality of layers into a plurality of groups. Each group includes a number of sequential layers based on an estimate of duration and energy consumption by the hardware chip to perform inference of the neural network by performing the mathematical operations, sequentially by layer, of corresponding portions of layers of each group.
  • At S140, a generating section generates instructions for the hardware chip to perform inference of the neural network, sequentially by group, of the plurality of groups.
  • FIG. 2 shows an exemplary configuration of a hardware chip 250 operable to perform neural network inference, according to an embodiment of the present invention. Hardware chip 250 may be referred to as a neural network accelerator. In this embodiment, hardware chip 250 is an Application Specific Integrated Circuit (ASIC). The modules of hardware chip 250 may be groups of logic gates arranged to perform specific functions. The memories of hardware chip 250 may be RAM, flash memory, or other embedded writable memory.
  • The hardware chip configuration includes at least one module for performing the mathematical operations and an on-chip memory. In this embodiment, hardware chip 250 includes an external memory interface 252. The at least one module of the hardware chip 250 includes at least one convolution module 262, at least one module for performing activation operations, an activation module 266, at least one module for loading the activation data from the external memory onto the on-chip memory, a data loading module 258, at least one module for storing activation data on the external memory from the on-chip memory, a data storing module 259, at least one module for loading weights of the convolutional neural network from the external memory to the on-chip memory, a weight loading module 254, and at least one module for loading instructions of these modules from the external memory, an instruction DMA module 256. The on-chip memory includes a weight memory 255, an activation data memory 260, and an accumulation memory 264.
  • External memory interface 252 is configured to allow hardware chip 250, and the various modules therein, to exchange data with a DRAM memory 206, the external memory. A Central Processing Unit (CPU) 208 may request neural network inference for use in an application.
  • Weight loading module 254 and data loading module 258 are configured to read and load data from an external memory, such as DRAM memory 206, through external memory interface 252. Weight loading module 254 may sequentially read weight values from the external memory and load such data onto weight memory 255.
  • Data loading module 258 may read input values, activation data, etc., from the external memory and load such data onto activation data memory 260. Data storing module 259 is configured to store activation data onto the external memory through external memory interface 252. Data storing module 259 may read activation data from activation data memory 260 and store such data onto DRAM memory 206. Data loading module 258 and data storing module 259 may operate on portions, such as rectangular subregions, blocks, or tiles, of activation data stored in the external memory. Data loading module 258 and data storing module 259 may also be used for a type of operation known as a “spill-fill”, in which intermediate computation results are temporarily “evacuated” to the external memory when the capacity of an on-chip memory is insufficient.
  • Weight memory 255, activation memory 260, and accumulation memory 264 are all blocks of the on-chip memory of hardware chip 250. The hardware chip configuration specifies a number and size of the banks of each block of the on-chip memory. Each block may be organized as a set of one or two port memory banks. Each block may have read and write ports exposed to corresponding computation modules, load modules, and store modules. Hardware chip 250 may further include arbitration & interconnect logic connecting the on-chip memory to I/O ports, such as external memory interface 252. Loading and storing modules of hardware chip 250 may be configured to acquire locks to a memory bank of the on-chip memory, perform a set of read or write transactions, and then release the memory bank when no longer in use. In this manner, two or more modules may access different memory banks in parallel.
  • In this exemplary embodiment, hardware chip 250 is configured to perform inference of a convolutional neural network, and so the portions of each layer are tiles, and hardware chip 250 includes convolution modules 262. In other words, the at least one module of the hardware chip 250 includes at least one convolution module.
  • Convolution modules 262 are configured to perform mathematical operations on the input values or activation data stored in activation data memory 260 and the weight values stored in weight memory 255. Convolution modules 262 may output partial sums to accumulation memory 264, and may also perform accumulation with existing partial sums stored in accumulation memory 264. Convolution modules 262 may provide direct support for different parameters of mathematical operations, such as a kernel size of height (KH)×width (KW), vertical and horizontal strides, dilation, padding, etc. In some embodiments of the hardware chip 250, convolution modules 262 include at least one dedicated depth-wise convolution module and at least one point-wise convolution module. In other embodiments of the hardware chip 250, convolution modules 262 include generic convolution modules, which may support combinations of depth-wise convolution and point-wise convolution layers, such as Inverted Residual Blocks in MobileNet architectures.
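  • The kernel size, stride, dilation, and padding parameters listed above determine output sizes in the standard way for convolution arithmetic. The helper below is a small sketch of that standard relation (it is not specific to hardware chip 250) and may be useful when reasoning about tile and accumulation memory sizes; the function name is illustrative.

```python
# Standard convolution output-size arithmetic for one spatial dimension.
def conv_output_size(in_size, kernel, stride=1, dilation=1, padding=0):
    effective_kernel = dilation * (kernel - 1) + 1          # dilated kernel extent
    return (in_size + 2 * padding - effective_kernel) // stride + 1

print(conv_output_size(56, kernel=3, stride=1, padding=1))  # 56 ("same" convolution)
print(conv_output_size(56, kernel=3, stride=2, padding=1))  # 28
```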
  • Activation modules 266 are configured to perform activation operations on values stored in accumulation memory 264. Activation modules 266 may read input values from accumulation memory 264 and store computation results in activation data memory 260. Activation modules 266 may perform computations such as elementwise math functions, including addition, multiplication, division, square root, etc. of scalar or vector values following the mathematical operations of convolution modules 262 in order to provide activation functions, such as ReLU, LeakyReLU, Hsigmoid, H-Swish, etc. Activation modules 266 may further perform residual addition of branches, requantization, local pooling such as max-pooling and average pooling with a set of fixed window sizes.
  • Parameters of the operations performed by hardware chip 250, and the various modules therein, may be stored in a separate memory, such as weight memory 255, a dedicated memory, or embedded into the instructions as immediate values. Instruction DMA module 256 is configured to load instructions of the various modules of hardware chip 250. Instruction DMA module 256 may load instructions of the various modules of hardware chip 250 in round-robin fashion from the external memory. The instruction infrastructure of hardware chip 250 may feed and synchronize instructions. The instruction infrastructure of hardware chip 250 may include, in addition to instruction DMA module 256, at least one instruction queue, such as First-In-First-Out (FIFO) memories, for carrying encoded instructions to each of the various modules, which explicitly controls the behavior of the modules.
  • Although in this embodiment the hardware chip is configured to perform inference of a convolutional neural network, other embodiments may perform hardware-specific division of inference of other kinds of neural networks. In addition to the data loading module and the data storing module attached to the activation data memory, other embodiments of the hardware chip may include an additional pair of loading and storing modules that may be attached to the accumulation memory. In other embodiments, the weight loading module may also be used for loading activation module parameters.
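  • For the dividing and simulating operations described below, the hardware chip configuration of FIG. 2 can be captured as structured data. The following is a minimal sketch of one possible representation, with hypothetical field names and purely illustrative values; the description does not prescribe a configuration format.

```python
# Hypothetical representation of a hardware chip configuration (modules plus
# the number and size of on-chip memory banks, as described for FIG. 2).
from dataclasses import dataclass

@dataclass
class MemoryBlock:
    banks: int            # number of one- or two-port memory banks
    bank_bytes: int       # capacity of each bank

    @property
    def total_bytes(self) -> int:
        return self.banks * self.bank_bytes

@dataclass
class HardwareChipConfig:
    convolution_modules: int
    activation_modules: int
    weight_memory: MemoryBlock
    activation_data_memory: MemoryBlock
    accumulation_memory: MemoryBlock
    clock_hz: float = 1.0e9

# Illustrative values only; they do not describe a real chip.
chip = HardwareChipConfig(
    convolution_modules=4,
    activation_modules=2,
    weight_memory=MemoryBlock(banks=8, bank_bytes=32 * 1024),
    activation_data_memory=MemoryBlock(banks=16, bank_bytes=64 * 1024),
    accumulation_memory=MemoryBlock(banks=8, bank_bytes=64 * 1024),
)
print(chip.activation_data_memory.total_bytes)   # 1048576 bytes in this example
```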
  • FIG. 3 shows a diagram of a performance of inference of the neural network in portions of each layer, according to an embodiment of the present invention. In this embodiment, a convolutional neural network has been divided into groups of layers based on some heuristic including an estimate of duration and energy consumption. Each layer is apportioned into tiles of 3 dimensions: height, width, and channels. The sizes of the dimensions are established such that the tiles of a layer may be processed using a subset of tiles from a previous layer. For point-wise convolution, all tiles in the channel dimension are required for processing the activation data thereof. For depth-wise convolution, one tile is sufficient to process the activation data of the corresponding tile in a subsequent layer.
  • The neural network includes example sequential layers 301, 302, 303, and 304 among other layers. During the performance of inference in this embodiment, a data loading module 358 reads input values or activation data from an external memory through external memory interface 352, and loads such data onto an activation data memory 360. A data storing module 359 reads activation data from activation data memory 360, and stores such data onto the external memory through external memory interface 352. In other words, the generating instructions for the hardware chip further includes generating instructions for the hardware chip to retrieve activation data of corresponding portions in the first layer in each group from the external memory, and record activation data resulting from the mathematical operations of corresponding portions in the last layer in each group to an external memory.
  • In this embodiment, layers 301, 302, 303, and 304 belong to a single group, which means that activation data is loaded from the external memory only once and stored on the external memory only once during the performance of inference of corresponding portions of layers 301, 302, 303, and 304. Enough input tiles in the height and width dimensions of layer 301 must be loaded into on-chip memory to process the activation values of tile 301A. Because of data dependencies of convolution operations other than 1×1, tiles of subsequent layers will shrink in area. Thus, all but the tile of the last layer usually overlap by (K−1)/2 for a K×K (equal height and width) convolution kernel, which may increase the amount of computations. Therefore, the computational graph of the neural network is divided into groups of layers to balance the amount of additional computations with the number of memory transactions required to store a whole intermediate layer into external memory.
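  • The overlap described above can be computed by working backwards from the output tile of the last layer in a group: each preceding K×K layer (stride 1 assumed) enlarges the required tile by K−1 in height and width, that is, a halo of (K−1)/2 on each side. The short sketch below illustrates the arithmetic; the helper name is illustrative.

```python
# Required input tile size for a group of stride-1 KxK convolution layers.
def input_tile_size(output_hw, kernel_sizes):
    """output_hw: (height, width) of the last layer's output tile.
    kernel_sizes: K of each layer in the group, in execution order."""
    h, w = output_hw
    for k in reversed(kernel_sizes):
        h += k - 1        # halo of (k - 1) / 2 on each side
        w += k - 1
    return h, w

# Four 3x3 layers in one group: a 16x16 output tile needs a 24x24 input tile.
print(input_tile_size((16, 16), [3, 3, 3, 3]))   # (24, 24)
```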
  • Since activation data of both tiles 301A and 301B are required to process the activation data of tile 302A, activation data of tiles 301A and 301B of layer 301 are loaded onto activation data memory 360. The activation data of tiles 301A and 301B are processed to yield activation data of tiles 302A and 302B of layer 302, which are also stored onto activation data memory 360. This allows processing of the next layer of activation data of tiles based on activation data of the previous layer already loaded onto activation data memory 360, with the resulting activation data stored in the activation data memory as well.
  • Once the activation data of tiles 302A and 302B are loaded onto activation data memory 360, the activation data of tiles 301A and 301B may be cleared to free space on activation data memory 360 for the next activation data. The processing and yielding is repeated for each layer moving deeper in the group. Next, the activation data of tiles 302A and 302B are processed to yield activation data of tiles 303A and 303B of layer 303, which are loaded onto activation data memory 360. The activation data of tiles 303A and 303B are then processed to yield activation data of tiles 304A and 304B of layer 304, which are loaded onto activation data memory 360. Finally, data storing module 359 stores the activation data of tiles 304A and 304B onto the external memory through external memory interface 352.
  • In this embodiment, the performance of inference was divided into portions, or tiles, as well as groups; other embodiments may not require apportioning each layer, such as when the activation data memory is large enough to load activation data for an entire layer.
  • FIG. 4 shows an operational flow for dividing inference of layers into groups, such as S120 of FIG. 1, according to an embodiment of the present invention. The operations within this operational flow may be performed by a dividing section or a correspondingly named sub-section thereof. As described in FIG. 1, the computational graph and the hardware chip configuration are obtained prior to dividing inference of layers into groups.
  • At S422, a preparing section, such as the dividing section or a sub-section thereof, prepares a plurality of candidate group divisions, each candidate group division identifying a unique division of the plurality of layers. A candidate group division specifies a group to which each layer belongs, provided that each group must have consecutive layers. For example, each of the plurality of candidate group divisions may identify even divisions of the plurality of layers. As another example, each of the plurality of candidate group divisions may identify random divisions of the plurality of layers in groups of single layers, two layers, three layers, etc. A candidate group division may also include only some of the layers of the neural network, so that finer divisions can be analyzed.
  • At S430, a simulating section simulates a performance of inference of the neural network by the hardware chip to determine the estimate of duration and energy consumption of the hardware chip for one of the candidate group divisions. As iterations proceed, the simulating section simulates performance of inference of the neural network by the hardware chip to determine the estimate of duration and energy consumption of the hardware chip for each of the plurality of candidate group divisions.
  • At S424, the dividing section or a sub-section thereof determines whether all of the candidate group divisions have been simulated. If unsimulated candidates remain, then the operational flow proceeds to S428, where a new candidate group division is selected for simulation. If all candidate group divisions have been simulated, then operational flow proceeds to S426.
  • At S426, a comparing section, such as the dividing section or a sub-section thereof, compares the estimate of duration and energy consumption of each candidate group division of the same layers among the plurality of layers. Although partial candidate group divisions may be included, to make a fair comparison, the estimates must cover an inference performance of the same layers. For example, the plurality of candidate group divisions may identify a single layer as a first candidate group division, a preceding group of layers as a second candidate group division, and the single layer together with the preceding group of layers as a third candidate group division. In such an example, a fair comparison may include comparing (i) an estimate of duration and energy consumption to perform the mathematical operations of the third candidate group division and (ii) an estimate of total duration and total energy consumption to perform the mathematical operations of the first candidate group division and the second candidate group division. This example may be useful for a particular embodiment of dividing inference of the layers into groups, in which a heuristic algorithm uses layer-aware grouping. The algorithm starts with an empty group, and then a first ungrouped layer is added to the group. The simulating section then estimates duration and energy consumption of inference of the group, inference of the next ungrouped layer, and inference of the group with the next ungrouped layer added. If inference of the group with the next ungrouped layer added outperforms the sum of inference of the group and inference of the next ungrouped layer, then the process is repeated for the next layer. However, if inference of the group with the next ungrouped layer added does not outperform the sum of inference of the group and inference of the next ungrouped layer, then the group will not include the next ungrouped layer, and the process will proceed to consider a group of only the next ungrouped layer. This process is repeated for all of the layers of the network.
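  • The following is a sketch of the layer-aware grouping heuristic described above. The cost function stands in for the simulation of FIG. 5 and returns a single estimate combining duration and energy consumption; all names and the toy cost model are illustrative assumptions, and only the control flow follows the description.

```python
# Layer-aware grouping: grow the current group while the merged group
# outperforms running the current group and the next layer separately.
def group_layers(layers, estimate_cost):
    groups = []
    group = [layers[0]]                          # first ungrouped layer starts the group
    for layer in layers[1:]:
        merged = group + [layer]
        if estimate_cost(merged) < estimate_cost(group) + estimate_cost([layer]):
            group = merged                       # merged group outperforms: keep growing
        else:
            groups.append(group)                 # close the group
            group = [layer]                      # next group starts with this layer
    groups.append(group)
    return groups

# Toy cost: a fixed external-memory cost per group plus a per-layer compute cost
# that grows with group depth (extra overlap computation).
def toy_cost(group):
    return 10 + sum(1 + 1.5 * i for i in range(len(group)))

print(group_layers(list(range(12)), toy_cost))
# -> [[0, 1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11]]
```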
  • While this embodiment simulates performance of inference of the neural network by the hardware chip, other embodiments may execute inference of the neural network directly on the hardware chip. While such embodiments may not need a simulation environment, measuring duration and energy consumption for all the different candidates may be more time consuming than in the simulation environment.
  • FIG. 5 shows an operational flow for simulating performance of inference on a hardware chip, such as S430 of FIG. 4, according to an embodiment of the present invention. The operations within this operational flow may be performed by a simulating section or a correspondingly named sub-section thereof. As described in FIG. 4, candidate group divisions are prepared prior to simulating performance of inference.
  • At S532, a generating section generates instructions for the hardware chip to perform inference according to the candidate group division. In other words, the generating section generates instructions for the hardware chip to perform the mathematical operations, sequentially by layer, of corresponding portions in layers of each group. Although just for simulation, the instructions may be generated in the same manner as for the actual hardware chip, such as S140 of FIG. 1. More details of the instruction generation operation are described with respect to FIG. 7.
  • At S534, an executing section, such as the simulating section or a sub-section thereof, executes the instructions on a simulation of the hardware chip. This may include tracking, recording, or otherwise identifying the operations in each clock cycle. The operations that are identified are the simple, fine-grained operations that are performed by individual modules, many times in parallel with operations of other modules.
  • At S535, a summing section, such as the simulating section or a sub-section thereof, sums the clock cycles during the simulation. Although the simulation may run orders of magnitude faster than inference on the actual hardware chip, the duration of a clock cycle of the hardware chip can be determined based on the configuration of the hardware chip. For example, if the hardware chip configuration runs at 2 GHz, then it can be estimated that two billion clock cycles will last one second of time.
  • At S537, an assigning section, such as the simulating section or a sub-section thereof, assigns an energy consumption to each fine-grained operation of the simulation. Although performance of inference may include complex processes, those processes are broken down into these fine-grained operations, each of which can be associated with an energy consumption measured from this simulation or a previous simulation of the same fine-grained operation on the same hardware chip. In some embodiments, energy consumptions associated with each fine-grained operation of the hardware chip may be supplied from an input file independent of the simulation environment.
  • At S538, the summing section sums the energy consumption of all of the fine-grained operations of the simulation. In other words, the estimate of energy consumption of the hardware chip is based on a sum of individual energy consumptions associated with each operation, and the estimate of duration is based on the number of clock cycles.
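  • A minimal sketch of the estimate described in S535 through S538 follows: the duration estimate comes from the number of simulated clock cycles divided by the chip clock rate, and the energy estimate is the sum of per-operation energy figures looked up in a table supplied for the hardware chip. The trace format, names, and numbers are illustrative assumptions.

```python
# Duration and energy estimate from a simulated trace of fine-grained operations.
def estimate_duration_and_energy(trace, energy_per_op_nj, clock_hz):
    """trace: iterable of (cycle, module, op_name) tuples recorded by the simulator."""
    last_cycle = 0
    total_energy_nj = 0.0
    for cycle, _module, op_name in trace:
        last_cycle = max(last_cycle, cycle)
        total_energy_nj += energy_per_op_nj[op_name]   # per-operation energy lookup
    duration_s = (last_cycle + 1) / clock_hz           # cycles elapsed / clock rate
    return duration_s, total_energy_nj

trace = [(0, "dma", "load"), (1, "conv", "mac"), (2, "conv", "mac"), (3, "act", "relu")]
energy = {"load": 5.0, "mac": 0.25, "relu": 0.5}
print(estimate_duration_and_energy(trace, energy, 2.0e9))   # (2e-09, 6.0)
```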
  • FIG. 6 shows an operational flow for hardware specific division of inference, according to another embodiment of the present invention. The operational flow may provide a method of dividing inference for performance on a specific hardware chip configuration.
  • The operations performed at S610, S620, and S640 are substantially similar to the operations performed at S110, S120, and S140, described above with respect to FIG. 1. As explained above, the hardware chip is operable to perform inference of the neural network in portions of each layer. In some embodiments, the dimensions of the portions, or tiles in the case of a convolutional neural network, are predetermined. However, in this embodiment, the operational flow for hardware specific division of inference includes an operation of determining the dimensions of the portions.
  • At S612, a determining section, such as the dividing section or a sub-section thereof, determines dimensions of the portions of each layer. In some embodiments, the determining section determines the dimensions of the portions of each layer by simulating a performance of inference of the neural network by the hardware chip to determine the estimate of duration and energy consumption of the hardware chip for each of a plurality of candidate dimension specifications. In such embodiments, each candidate dimension specification may be based on a capacity of the on-chip memory and a degree of parallelism of the hardware chip. In some of these embodiments, one of the dimensions of each portion may be defined by the degree of parallelism of the hardware chip, while the other dimensions can be variable. Once all the candidate dimension specifications have been simulated, a comparing section, such as a simulating section or a sub-section thereof, compares the estimate of duration and energy consumption of each candidate dimension specification. One of the candidate dimension specifications may then be selected for use in the performance of inference. The selection may be based on duration or energy consumption or a balance of both.
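  • The following is a short sketch of the dimension search of S612: candidate tile dimensions are constrained by the on-chip memory capacity and the degree of parallelism (here, the channel dimension is fixed to the parallelism), each candidate is simulated, and the cheapest one is selected. The names, the memory bound, and the weighting between duration and energy are illustrative assumptions.

```python
# Pick the candidate tile dimensions with the lowest simulated cost.
def choose_tile_dims(candidates, simulate, duration_weight=0.5, energy_weight=0.5):
    """candidates: iterable of (height, width, channels); simulate returns (duration, energy)."""
    def cost(dims):
        duration, energy = simulate(dims)
        return duration_weight * duration + energy_weight * energy
    return min(candidates, key=cost)

parallelism = 16                                     # channel dimension fixed by the hardware
candidates = [(h, w, parallelism) for h in (8, 16, 32) for w in (8, 16, 32)
              if h * w * parallelism * 2 <= 64 * 1024]      # 2-byte values, 64 KiB budget
toy_simulate = lambda d: (1.0 / (d[0] * d[1]), 0.001 * d[0] * d[1])   # toy duration/energy
print(choose_tile_dims(candidates, toy_simulate))
```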
  • While this embodiment simulates performance of inference of the neural network by the hardware chip, other embodiments may execute inference of the neural network directly on the hardware chip. While such embodiments may not need a simulation environment, measuring duration and energy consumption may be more difficult than in the simulation environment.
  • FIG. 7 shows an operational flow for generating instructions for the hardware chip to perform inference, such as S140 of FIG. 1, according to an embodiment of the present invention. The operations within this operational flow may be performed by a generating section or a correspondingly named sub-section thereof. As described in FIG. 1, the layers of the neural network have been divided into groups.
  • At S741, an assigning section, such as the generating section or a sub-section thereof, assigns each operation of each module in the hardware chip to a queue. In other words, the generating instructions for the hardware chip further includes assigning each operation to a queue among a plurality of queues. Beginning from the computational graph, each node represents an instruction from an Instruction Set Architecture (ISA) of the hardware chip, and each edge represents a virtual buffer holding data from one portion of a layer. For purposes of assigning operations to queues, the number of virtual buffers is unlimited. Each virtual buffer is unique and associated with one particular value in the computational graph. However, the same physical buffer may be assigned to multiple edges with non-overlapping lifetimes across the scheduled computational graph. In order to perform the instructions in the computational graph on the hardware chip, there must be a load instruction for each input portion of a group, and there must be a store instruction for each output portion of a group. Similar to the operations identified during simulation of the performance of inference, the operations assigned to each queue are the simple, fine-grain operations that are performed by individual modules, many times in parallel with operations of other modules. Each instruction may be realized by multiple fine-grain operations. A queue may have operations that are performed by more than one module. Every module in the system executes its own linear sequence of instructions, which can be broken down into operations. The performance of inference may be thought of as a set of sequential processes running in parallel.
  • At S742, an ordering section, such as the generating section or a sub-section thereof, orders execution of the operations in each queue. In other words, the generating instructions for the hardware chip further includes ordering execution of operations in each queue. Each parallel process may read from and/or write to multiple memories. Each instruction in the process may result in operations on many data elements during many clock cycles. Therefore, proper ordering of the operations may be critical to ensuring that operation dependencies are satisfied and each operation is performed at a time when the necessary resources are available. The ordering section may also optimize the order to minimize execution time, and minimize the number of potential evacuations of data.
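  • A compact sketch of the queue assignment of S741 and the subsequent ordering step follows: every fine-grained operation is assigned to the queue of the module that executes it, and a topological order over the producer/consumer edges (the virtual buffers) fixes the execution order within each queue. The operation records and buffer names here are illustrative assumptions; the real instruction generation also balances resource availability and evacuations.

```python
# Assign operations to per-module queues in a dependency-respecting order.
from collections import defaultdict
from graphlib import TopologicalSorter

# (name, module, input buffers, output buffers); buffers are virtual and unique.
ops = [
    ("load_a", "data_load", [], ["buf_a"]),
    ("load_w", "weight_load", [], ["buf_w"]),
    ("conv_0", "conv", ["buf_a", "buf_w"], ["buf_p"]),
    ("relu_0", "activation", ["buf_p"], ["buf_o"]),
    ("store_o", "data_store", ["buf_o"], []),
]

producers = {buf: name for name, _, _, outs in ops for buf in outs}
deps = {name: {producers[buf] for buf in ins} for name, _, ins, _ in ops}
module_of = {name: module for name, module, _, _ in ops}

queues = defaultdict(list)
for name in TopologicalSorter(deps).static_order():   # producers come before consumers
    queues[module_of[name]].append(name)              # one linear sequence per module

print(dict(queues))
```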
  • At S744, an allocating section, such as the generating section or a sub-section thereof, allocates locations in the on-chip memory of the hardware chip for data. In other words, the generating instructions for the hardware chip further includes allocating locations in the on-chip memory to data for performing inference of the neural network. In this embodiment, the generating instructions may also include generating instructions for the at least one module of the hardware chip to perform loading of data from the external memory to the allocated locations. In doing so, the allocating section may replace virtual buffers with physical memory locations of the on-chip memory of the hardware chip for purposes of generating the instructions before execution of inference by the hardware chip.
  • At S745, the generating section or a sub-section thereof determines whether all of the data that requires allocation can be allocated to available memory. In other words, the generating section determines whether there is enough memory to hold all necessary data for each clock cycle. If there is not enough memory for all necessary data for one or more clock cycles, then the operational flow proceeds to S746, where one or more evacuations of data may be introduced. If there is enough memory for all necessary data for all clock cycles, then the operational flow proceeds to S747.
  • At S746, an evacuating section, such as the generating section or a sub-section thereof, introduces evacuations of data to the external memory into the operations. Although the dimensions of the portions of each layer are set, as is the division of layers into groups, the performance of inference may encounter times when a particular memory requires more storage space than exists, such as when there are not enough physical memory locations to perform assignment of all edges. In that case, some or all of the data currently stored on the on-chip memory is temporarily offloaded onto the external memory, so that the on-chip memory can be cleared for storage of more immediately required data. The cleared data will then later be loaded back onto the on-chip memory when that data once again becomes necessary for further processing. The values to evacuate are selected in an attempt to minimize evacuations of data to the external memory, that is, in an attempt to reduce the number of external memory accesses. Once the evacuations are introduced, they must be scheduled into the order of operations, and so the operational flow returns to S742 whenever new evacuations of data are introduced. In other words, the generating instructions for the hardware chip further includes scheduling evacuation of data to the external memory in order to perform inference of the neural network.
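  • The sketch below illustrates the evacuation decision of S746 in a simplified form: when a value needs an on-chip location and none is free, the resident value whose next use is furthest in the future is stored to external memory and later loaded back before that use. This is a toy allocator under those assumptions; the actual instruction generation also returns to S742 to reschedule the queues after evacuations are introduced.

```python
# Toy spill-fill decision: evacuate the resident buffer with the furthest next use.
def allocate_with_evacuation(accesses, capacity):
    """accesses: buffer names in use order; capacity: number of on-chip slots."""
    on_chip, evacuated, schedule = [], set(), []
    for i, buf in enumerate(accesses):
        if buf in on_chip:
            continue
        if len(on_chip) == capacity:
            def next_use(b):
                later = [j for j in range(i + 1, len(accesses)) if accesses[j] == b]
                return later[0] if later else len(accesses)
            victim = max(on_chip, key=next_use)        # furthest next use is evacuated
            on_chip.remove(victim)
            evacuated.add(victim)
            schedule.append(("store", victim))         # spill to external memory
        if buf in evacuated:
            schedule.append(("load", buf))             # fill back from external memory
            evacuated.discard(buf)
        on_chip.append(buf)
    return schedule

# Three on-chip slots, four live buffers: two spills and one fill are introduced.
print(allocate_with_evacuation(["a", "b", "c", "d", "a", "b", "c"], capacity=3))
# -> [('store', 'c'), ('store', 'a'), ('load', 'c')]
```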
  • At S747, an annotating section, such as the generating section or a sub-section thereof, annotates synchronization flags. In other words, the generating instructions for the hardware chip further includes annotating synchronization flags to preserve mutual ordering of dependent operations. Each consumer-producer pair of processes may have a pair of semaphores/token-queues for Read After Write (RAW) and Write After Read (WAR) dependency synchronization. For any consumer-producer pair of modules to communicate through the same memory, dependencies of each pair of semaphores/token-queues for RAW and WAR may be tracked. Furthermore, each instruction may have a set of flags to decrement and increment semaphores corresponding to a particular process. Therefore, in some embodiments, an explicit, compiler-guided token-based synchronization mechanism may be employed to avoid data hazards, while maintaining task-level parallelism.
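  • The following is a schematic sketch, under one interpretation of the token scheme above, of how synchronization flags might be annotated: for each producer/consumer pair sharing a memory, the producer's instruction waits on (decrements) the WAR token before overwriting a buffer and signals (increments) the RAW token when data is ready, while the consumer does the opposite. The record format and token names are hypothetical.

```python
# Annotate instructions with RAW/WAR token increment/decrement flags.
def annotate_sync_flags(instructions):
    """instructions: list of (module, op, buffer, role) with role 'produce' or 'consume'."""
    annotated = []
    for module, op, buffer, role in instructions:
        if role == "produce":
            flags = {"decrement": f"war:{buffer}", "increment": f"raw:{buffer}"}
        else:  # consume
            flags = {"decrement": f"raw:{buffer}", "increment": f"war:{buffer}"}
        annotated.append((module, op, buffer, flags))
    return annotated

program = [("conv", "conv1x1", "acc0", "produce"), ("activation", "relu", "acc0", "consume")]
for line in annotate_sync_flags(program):
    print(line)
```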
  • At S749, a converting section, such as the generating section or a sub-section thereof, converts the instructions into a binary representation. In other words, the generating instructions for the hardware chip further includes converting instructions into binary representation. The binary representation is a format that is suitable to be run on the hardware chip.
  • FIG. 8 shows an exemplary configuration of a multi-core hardware chip 850 operable to perform neural network inference, according to an embodiment of the present invention. In this embodiment, the hardware chip configuration further includes a plurality of cores 851, and the at least one module for performing the mathematical operations and the on-chip memory are distributed among the plurality of cores. The hardware chip configuration further shows that each core includes at least one transmitter block 867 and at least one receiver block 868 configured for inter-core communication.
  • Multi-core hardware chip 850 includes four cores 851, each of which is substantially similar to hardware chip 250, described above with respect to FIG. 2, including all the same modules and memories, but with two additional blocks, transmitter block 867, and receiver block 868. The transmitter blocks 867 and receiver blocks 868 of cores 851 are interconnected through one or more write channels 869 allowing write access to memories of other cores, and allowing the loading modules in the core read access to memories of other cores. In some embodiments, data exchange may be facilitated through a circuit-switched arbitrated intra-core interconnect, through which an initiator side must first acquire a lock inside of another core's memory, and then perform “burst” transfer of the data. Other embodiments may include other structures for performing inter-core communication.
  • Generating instructions for the hardware chip further includes distributing instructions among the cores. By utilizing multi-core hardware chip 850 to perform inference of the neural network, more operations can be performed in parallel, significantly reducing the duration, while requiring little additional energy consumption in the form of data transfers among cores. For example, since multi-core hardware chip 850 includes four cores, it would not be unreasonable to expect the duration of the performance of inference to be reduced by about 75%. Utilizing multi-core hardware chip 850 may allow the performance to be further scaled up to exceed the limits of power density for a single core. When generating instructions for the hardware chip, although additional instructions may be necessary for inter-core data transfer, the generation of instructions for each individual core remains substantially the same as described above.
  • FIG. 9 shows an exemplary configuration of multi-chip hardware operable to perform neural network inference, according to an embodiment of the present invention. In this embodiment, the hardware chip configuration further includes at least one transmitter block 967 and at least one receiver block 968 configured to communicate with a second instance of the hardware chip 950 of a multi-chip hardware configuration.
  • The multi-chip hardware of this embodiment includes four hardware chips 950, each of which is substantially similar to each core 851, described above with respect to FIG. 8, including all the same modules and memories. Furthermore, the structures and functions of transmitter blocks 967, receiver blocks 968, and write channels 969 are substantially similar to that of transmitter blocks 867, receiver blocks 868, and write channels 869 of FIG. 8. In some embodiments, each hardware chip 950 includes four transmitter blocks and four receiver blocks, which may allow creation of multichip configurations of arbitrary size with hardware chips 950 connected in mesh or 2D Torus topologies. In such embodiments, high speed serial interfaces, such as Serializer/Deserializer (SerDes) interfaces, which are frequently employed in FPGAs and ASICs for creating multi-chip configurations, may be employed for the purpose of implementation of such transmitter and receiver blocks.
  • In this embodiment, each hardware chip is identical. However, in other embodiments, the hardware chips of a multi-chip hardware configuration may have different components, such as modules for performing different operations and memories of different sizes. This may be because the chips are used to perform inference of different neural networks. A multi-chip hardware configuration including chips of different configurations may be beneficial for greater scalability and when the chips perform inference of multiple neural networks in parallel. In further embodiments, each hardware chip of a multi-chip hardware configuration may be a multi-core hardware chip, such as multi-core hardware chip 850 of FIG. 8.
  • FIG. 10A shows an exemplary configuration of a depth-wise convolution module 1062, according to an embodiment of the present invention. Depth-wise convolution module 1062 includes a queue 1062Q, a main sequencer 1062MS, a window sequencer 1062WS, an activation feeder 1062AF, a weight feeder 1062WF, a pipeline controller 1062PC, convolution pipelines 1062CP, an external accumulation logic 1062A, and an accumulation memory interface 1062AI.
  • Queue 1062Q receives and sends instructions. Queue 1062Q may receive instructions from an instruction DMA module, such as instruction DMA module 256 of FIG. 2, and send the instructions to main sequencer 1062MS. Queue 1062Q may be a FIFO memory or any other memory suitable for queueing instructions.
  • Main sequencer 1062MS sequences control parameters for convolution. Main sequencer 1062MS may receive instructions from queue 1062Q, and output instructions to window sequencer 1062WS. Main sequencer 1062MS splits a KH×KW convolution into smaller convolutions of size 1x<window> and prepares instructions for activation data and weight values according to the order of input regions within the kernel. Here, <window> refers to an architecture parameter determining the line buffer length.
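  • As a concrete illustration of this splitting, the following Python sketch (with hypothetical names; the real sequencer is a hardware block) yields the 1x<window> slices of a KH×KW kernel in the order in which activation data and weight values would be prepared.

```python
def split_kernel(kh, kw, window):
    """Yield (kernel_row, col_start, width) slices that cover a KH x KW kernel
    with 1 x window pieces, where 'window' is the architecture parameter that
    fixes the line buffer length."""
    for row in range(kh):
        for col in range(0, kw, window):
            yield row, col, min(window, kw - col)

# A 3x3 kernel with window = 2 is issued as six slices, two per kernel row.
print(list(split_kernel(3, 3, 2)))
# [(0, 0, 2), (0, 2, 1), (1, 0, 2), (1, 2, 1), (2, 0, 2), (2, 2, 1)]
```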
  • Window sequencer 1062WS sequences control parameters for one 1x<window> convolution. Window sequencer 1062WS may receive instructions from main sequencer 1062MS, and output, to activation feeder 1062AF, a data sequence of activation data and, to weight feeder 1062WF, a data sequence of weight values, each ordered according to the order of input regions within the kernel.
  • Activation feeder 1062AF feeds activation data accessed from an activation data memory, such as activation data memory 260 of FIG. 2, through data memory interface 1062DI to convolution pipelines 1062CP in accordance with the activation data indicated in the data sequence from window sequencer 1062WS. Activation feeder 1062AF may read activation data sufficient for a 1x<window> computation from the activation data memory into a line buffer of the convolution pipelines 1062CP.
  • Weight feeder 1062WF preloads weight values accessed from a weight memory, such as weight memory 255 of FIG. 2, through weight memory interface 1062WI to convolution pipelines 1062CP in accordance with the weight values indicated in the data sequence from window sequencer 1062WS. Weight feeder 1062WF may read weight values sufficient for a 1x<window> computation from the weight memory into a weight buffer of the convolution pipelines 1062CP.
  • Pipeline controller 1062PC controls data transfer operations of convolution pipelines 1062CP. Pipeline controller 1062PC may initiate copying of data from the line buffer into an activation buffer of convolution pipelines 1062CP once the current activation buffer content has been processed. Pipeline controller 1062PC may control convolution computations performed by each channel pipeline 1062CH of convolution pipelines 1062CP, where each channel pipeline 1062CH operates on one channel of the input to the depth-wise convolution layer.
  • Convolution pipelines 1062CP perform mathematical operations on activation data fed from activation feeder 1062AF and weight values preloaded from weight feeder 1062WF. Convolution pipelines 1062CP are divided into channel pipelines 1062CH, each channel pipeline 1062CH performing mathematical operations for one channel. Combined with activation feeder 1062AF, weight feeder 1062WF, and pipeline controller 1062PC, the convolution pipelines logically perform the convolution computations.
  • External accumulation logic 1062A receives data from convolution pipelines 1062CP, and stores the data in an accumulation memory, such as accumulation memory 264 of FIG. 2, through accumulation memory interface 1062AI. Accumulation logic 1062A includes an adder 1062P for each channel pipeline 1062CH. Accumulation logic 1062A may be used for point-wise summation of results of 1x<window> convolutions with the contents of the accumulation memory.
  • In this embodiment, there are three channels, as exemplified by the three channel pipelines. However, other embodiments may have a different number of channels. This embodiment shows three channels mainly for simplicity; many embodiments will include at least 16 channels to accommodate practical applications.
  • FIG. 10B shows an exemplary configuration of a channel pipeline 1062CH for a depth-wise convolution module, according to an embodiment of the present invention. Channel pipeline 1062CH includes a line buffer 1062LB, an activation buffer 1062AB, a weight buffer 1062WB, a plurality of multipliers 1062X, a plurality of adders 1062P, a delay register 1062DR, and an internal accumulation register 1062IA.
  • Line buffer 1062LB stores activation data received from an activation feeder 1062AF. Line buffer 1062LB may include a shift register storing activation data as read by activation feeder 1062AF at one pixel per cycle.
  • Activation buffer 1062AB stores activation data received from line buffer 1062LB. Activation buffer 1062AB may include a set of registers storing activation data to which the current convolution computation is applied.
  • Weight buffer 1062WB stores weight values received from weight feeder 1062WF. Weight buffer 1062WB may include a shift register storing weight values to which the current convolution computation is applied.
  • Multipliers 1062X multiply the activation data from activation buffer 1062AB by the weight values from weight buffer 1062WB. In this embodiment there are three multipliers 1062X, meaning that the degree of parallelism in the width or height dimension of a convolution kernel is three. Adders 1062P, which collectively form an adder tree, then add together the products of the activation data and the weight values. During this process, delay register 1062DR, which is also considered part of the adder tree, balances the adder tree. Internal accumulation register 1062IA assists in the addition by storing partial sums. For example, internal accumulation register 1062IA may be used to accumulate partial sums when the number of windows of the buffers (six in this embodiment) or the width or height of the convolution filter exceeds the degree of parallelism (three in this embodiment).
  • Once the products are all added together as a total sum, the total sum is output to an accumulation logic 1062A, which then stores the data in an accumulation memory, such as accumulation memory 264 of FIG. 2, through accumulation memory interface 1062AI.
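  • The datapath of FIG. 10B can be summarized functionally as follows. This Python sketch is a behavioral stand-in under the assumptions of the described embodiment (three multipliers, a six-entry buffer window); it is not cycle-accurate, and the function name is illustrative.

```python
def channel_pipeline_1xw(activations, weights, parallelism=3):
    """Behavioral model of one channel pipeline computing a 1 x window step:
    products are formed 'parallelism' at a time (the three multipliers),
    reduced by the adder tree, and partial sums are held in the internal
    accumulation register until the window is exhausted."""
    assert len(activations) == len(weights)
    internal_acc = 0  # internal accumulation register 1062IA
    for start in range(0, len(weights), parallelism):
        products = [a * w for a, w in zip(activations[start:start + parallelism],
                                          weights[start:start + parallelism])]
        internal_acc += sum(products)  # adder tree plus accumulation of partial sums
    return internal_acc                # total sum handed to external accumulation logic 1062A

# A six-tap window processed three taps per pass; the result is point-wise
# summed into the accumulation memory by the external accumulation logic.
accumulation_memory = [0] * 4
accumulation_memory[0] += channel_pipeline_1xw([1, 2, 3, 4, 5, 6], [1, 0, 1, 0, 1, 0])
print(accumulation_memory[0])  # 1*1 + 3*1 + 5*1 = 9
```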
  • FIG. 11 shows an exemplary configuration of a point-wise convolution module 1162, according to an embodiment of the present invention. Point-wise convolution module 1162 includes queues 1162Q, a main sequencer 1162S, a weight feeder 1162WF, a weight memory interface 1162WI, an activation feeder 1162AF, a data memory interface 1162DI, a systolic array 1162SA, an accumulation logic 1162A, and an accumulation memory interface 1162AI.
  • Queue 1162Q receives and sends instructions. Queue 1162Q may receive instructions from an instruction DMA module, such as instruction DMA module 256 of FIG. 2, and send the instructions to main sequencer 1162S. Queue 1162Q may be a FIFO memory or any other memory suitable for queueing instructions.
  • Main sequencer 1162S sequences control parameters for convolution. Main sequencer 1162S may receive instructions from queue 1162Q, and output a control sequence to weight feeder 1162WF and activation feeder 1162AF, each through a queue. In this embodiment, main sequencer 1162S splits KH×KW convolutions into a sequence of 1×1 convolutions, fed as control parameters into weight feeder 1162WF and activation feeder 1162AF.
  • Weight feeder 1162WF preloads weight values accessed from a weight memory, such as weight memory 255 of FIG. 2, through weight memory interface 1162WI to systolic array 1162SA in accordance with the weight values indicated in the control parameters from main sequencer 1162S.
  • Activation feeder 1162AF feeds activation data accessed from an activation data memory, such as activation data memory 260 of FIG. 2, through data memory interface 1162DI to systolic array 1162SA in accordance with the activation data indicated in the data sequence from main sequencer 1162S.
  • Systolic array 1162SA includes a plurality of MAC elements 1162M. Each MAC element 1162M is preloaded with a weight value from weight feeder 1162WF before computation starts, and then receives an activation value from activation feeder 1162AF. To allow overlapping of computation and weight value preload, multiple weight buffers may be used. MAC elements 1162M are arranged in an array such that the product of the activation value and the weight output from preceding MAC elements 1162M is input to subsequent MAC elements 1162M. In this embodiment, for every cycle, each MAC element 1162M outputs an accumulation value equal to the value output from its left-neighbor MAC element 1162M multiplied by the preloaded weight value 1162W, with the product added to the value output from its top-neighbor MAC element 1162M. The MAC elements 1162M of the lowest row output their accumulation values to accumulation logic 1162A.
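  • Functionally, the array described above produces one dot product per column at its lowest row. The Python sketch below is a non-cycle-accurate model of that behavior under the stated dataflow (activations from the left, partial sums downward, weights preloaded); it does not model the per-cycle pipelining or the multiple weight buffers, and its names are illustrative.

```python
def systolic_step(activations, weights):
    """Functional model of the weight-stationary systolic array: each MAC element
    multiplies the value arriving from its left neighbor by its preloaded weight
    and adds the value arriving from its top neighbor; the lowest row emits one
    accumulation value per column to the accumulation logic."""
    rows, cols = len(weights), len(weights[0])
    assert len(activations) == rows
    bottom = [0] * cols
    for j in range(cols):            # each column corresponds to one output channel
        partial = 0                  # value flowing down the column
        for i in range(rows):
            partial += activations[i] * weights[i][j]
        bottom[j] = partial
    return bottom

# Three input channels streamed against a 3x2 preloaded weight tile.
print(systolic_step([1, 2, 3], [[1, 0], [0, 1], [1, 1]]))  # -> [4, 5]
```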
  • Accumulation logic 1162A receives values from systolic array 1162SA, and stores them in an accumulation memory, such as accumulation memory 264 of FIG. 2. In this embodiment, if accumulation is required by main sequencer 1162S, accumulation logic 1162A reads the old value in the memory location to be written and overwrites it with the sum of the old value and the new value. Otherwise, accumulation logic 1162A writes the new value as is.
  • Point-wise convolution module 1162 may be used to perform a KH×KW convolution by splitting the single KH×KW convolution into multiple 1×1 convolutions. For example, a 2×2 convolution may be substituted by four different 1×1 convolutions whose results are accumulated into the same region of accumulation memory, such as accumulation memory 264 of FIG. 2. Point-wise convolution module 1162 may compute each 1×1 convolution as a product of the matrix of activation values and the matrix of preloaded weight values in the MAC elements, and then sum the results of the 1×1 convolutions.
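  • The splitting and accumulation described in the last two paragraphs can be expressed end to end as follows. The Python sketch below is a reference model only (nested loops in place of a systolic array, with an assumed tensor layout of [H][W][Cin] activations and [KH][KW][Cin][Cout] weights); it shows that accumulating KH·KW shifted 1×1 convolutions into the same output region reproduces the full convolution.

```python
def conv_as_1x1_sum(x, w):
    """Compute a KH x KW convolution (valid padding, single batch) as a sum of
    KH*KW shifted 1x1 convolutions, accumulating each partial result into the
    same output region, as the accumulation logic's read-modify-write does.
    x: [H][W][Cin] activations, w: [KH][KW][Cin][Cout] weights."""
    H, W, Cin = len(x), len(x[0]), len(x[0][0])
    KH, KW, Cout = len(w), len(w[0]), len(w[0][0][0])
    OH, OW = H - KH + 1, W - KW + 1
    out = [[[0.0] * Cout for _ in range(OW)] for _ in range(OH)]   # accumulation memory region
    for kh in range(KH):
        for kw in range(KW):                                       # one 1x1 convolution per kernel tap
            for oh in range(OH):
                for ow in range(OW):
                    for co in range(Cout):
                        out[oh][ow][co] += sum(
                            x[oh + kh][ow + kw][ci] * w[kh][kw][ci][co]
                            for ci in range(Cin))                  # dot product = 1x1 convolution
    return out

# A 2x2 convolution over a 3x3 single-channel input becomes four accumulated 1x1 convolutions.
x = [[[1.0], [2.0], [3.0]], [[4.0], [5.0], [6.0]], [[7.0], [8.0], [9.0]]]
w = [[[[1.0]], [[0.0]]], [[[0.0]], [[1.0]]]]   # 2x2 kernel with two nonzero taps
print(conv_as_1x1_sum(x, w))                    # [[[6.0], [8.0]], [[12.0], [14.0]]]
```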
  • FIG. 12 shows an exemplary hardware configuration for hardware-specific division of inference, according to an embodiment of the present invention. The exemplary hardware configuration includes apparatus 1290, which communicates with network 1298, and interacts with inference environment 1296. Apparatus 1290 may be a host computer such as a server computer or a mainframe computer that executes an on-premise application and hosts client computers that use it, in which case apparatus 1290 may not be directly connected to inference environment 1296, but may instead be connected through a terminal device via network 1298. Apparatus 1290 may be a computer system that includes two or more computers. Apparatus 1290 may be a personal computer that executes an application for a user of apparatus 1290.
  • Apparatus 1290 includes a logic section 1270, a storage section 1280, a communication interface 1292, and an input/output controller 1294. Logic section 1270 may be a computer program product including one or more computer readable storage mediums collectively storing program instructions that are executable by a processor or programmable circuitry to cause the processor or programmable circuitry to perform the operations of the various sections. Logic section 1270 may alternatively be analog or digital programmable circuitry, or any combination thereof. Logic section 1270 may be composed of physically separated storage or circuitry that interacts through communication. Storage section 1280 may be a non-volatile computer-readable medium capable of storing non-executable data for access by logic section 1270 during performance of the processes herein. Communication interface 1292 reads transmission data, which may be stored on a transmission buffering region provided in a recording medium, such as storage section 1280, and transmits the read transmission data to network 1298 or writes reception data received from network 1298 to a reception buffering region provided on the recording medium. Input/output controller 1294 connects to various input and output units, such as inference environment 1296, via a parallel port, a serial port, a keyboard port, a mouse port, a monitor port, and the like to accept commands and present information. Inference environment 1296 may be a hardware chip capable of performing neural network inference, such as hardware chip 250 of FIG. 2, or may be a computer or similar device with a processor and memory, such as a smartphone, smart car, etc., which also includes a hardware chip in communication with the memory.
  • Logic section 1270 includes obtaining section 1272, dividing section 1274, which includes simulating section 1275, and generating section 1277. Storage section 1280 includes computational graph 1282, hardware chip configuration 1284, candidates 1286, simulation environment 1287, and instructions 1289.
  • Obtaining section 1272 is the portion of logic section 1270 that obtains information for hardware-specific division of inference. For example, obtaining section 1272 may be configured to obtain a computational graph and a hardware chip configuration. Obtaining section 1272 may store obtained information in storage section 1280 as computational graph 1282 and hardware chip configuration 1284. Obtaining section 1272 may include sub-sections for performing additional functions, as described in the foregoing flow charts. Such sub-sections may be referred to by a name associated with their function.
  • Dividing section 1274 is the portion of logic section 1270 that divides inference for hardware-specific division of inference. For example, dividing section 1274 may be configured to divide inference of a plurality of layers of a neural network into a plurality of groups, each group including a number of sequential layers based on an estimate of duration and energy consumption by a hardware chip to perform inference of the neural network. While dividing, dividing section 1274 may access computational graph 1282, hardware chip configuration 1284, and candidates 1286. Dividing section 1274 may include sub-sections for performing additional functions, as described in the foregoing flow charts. Such sub-sections may be referred to by a name associated with their function.
  • Simulating section 1275 is the portion of logic section 1270 that simulates a performance of inference of a neural network by a specific hardware chip. For example, simulating section 1275 may be configured to simulate a performance of inference of the neural network by the hardware chip to determine the estimate of duration and energy consumption of the hardware chip for each of a plurality of candidate group divisions. While simulating, simulating section 1275 may access computational graph 1282, hardware chip configuration 1284, candidates 1286, simulation environment 1287, and instructions 1289. Simulating section 1275 may include sub-sections for performing additional functions, as described in the foregoing flow charts. Such sub-sections may be referred to by a name associated with their function.
  • Generating section 1277 is the portion of logic section 1270 that generates instructions for hardware-specific division of inference. For example, generating section 1277 may be configured to generate instructions for the hardware chip to perform inference of the neural network, sequentially by group, of the plurality of groups. The instructions may be used for simulation, such as by simulating section 1275, or may be used directly on the hardware chip. While generating instructions, generating section 1277 may access computational graph 1282, hardware chip configuration 1284, candidates 1286, and instructions 1289. Generating section 1277 may include sub-sections for performing additional functions, as described in the foregoing flow charts. Such sub-sections may be referred to by a name associated with their function.
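  • Taken together, the obtaining, dividing, simulating, and generating sections implement a search over candidate group divisions. The Python sketch below is a simplified illustration of that search with an invented cost model (per-layer cycle and energy figures, a spill cost at each group boundary, and a recompute penalty for deeper groups); the real estimate comes from the cycle-level simulation described earlier, not from this table, and all names and numbers here are assumptions.

```python
from itertools import combinations

# Hypothetical per-layer costs: (compute_cycles, compute_energy, activation_bytes_out).
LAYERS = [(100, 5.0, 64), (120, 6.0, 64), (80, 4.0, 32), (90, 4.5, 32), (70, 3.5, 16)]
DRAM_CYCLES_PER_BYTE = 0.5        # illustrative external-memory access costs
DRAM_ENERGY_PER_BYTE = 0.1
RECOMPUTE_CYCLES_PER_DEPTH = 15   # deeper groups imply larger overlapping tiles

def candidate_divisions(num_layers):
    """Enumerate every division of the layers into groups of sequential layers."""
    for k in range(1, num_layers + 1):
        for cuts in combinations(range(1, num_layers), k - 1):
            bounds = (0,) + cuts + (num_layers,)
            yield [list(range(bounds[i], bounds[i + 1])) for i in range(len(bounds) - 1)]

def simulate(groups):
    """Stand-in for the simulating section: duration as a cycle count, energy as a
    sum of per-operation energies. Only the last layer of each group writes its
    activations back to external memory, which is what grouping is meant to save."""
    cycles = sum(c for c, _, _ in LAYERS)
    energy = sum(e for _, e, _ in LAYERS)
    for group in groups:
        spill = LAYERS[group[-1]][2]                       # bytes crossing the group boundary
        cycles += spill * DRAM_CYCLES_PER_BYTE
        energy += spill * DRAM_ENERGY_PER_BYTE
        halo = RECOMPUTE_CYCLES_PER_DEPTH * len(group) * (len(group) - 1)
        cycles += halo                                     # tile-overlap recompute penalty
        energy += 0.01 * halo
    return cycles, energy

# The dividing section keeps the candidate with the best combined estimate, and the
# generating section would then emit per-group instructions for that division.
best = min(candidate_divisions(len(LAYERS)), key=lambda g: sum(simulate(g)))
print(best)
```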
  • In other embodiments, the apparatus may be any other device capable of processing logical functions in order to perform the processes herein. The apparatus may not need to be connected to a network in environments where the input, output, and all information is directly connected. The logic section and the storage section need not be entirely separate devices, but may share one or more computer-readable mediums. For example, the storage section may be a hard drive storing both the computer-executable instructions and the data accessed by the logic section, and the logic section may be a combination of a central processing unit (CPU) and random access memory (RAM), in which the computer-executable instructions may be copied in whole or in part for execution by the CPU during performance of the processes herein.
  • In embodiments where the apparatus is a computer, a program that is installed in the computer can cause the computer to function as or perform operations associated with apparatuses of the embodiments of the present invention or one or more sections (including modules, components, elements, etc.) thereof, and/or cause the computer to perform processes of the embodiments of the present invention or steps thereof. Such a program may be executed by a processor to cause the computer to perform certain operations associated with some or all of the blocks of flowcharts and block diagrams described herein.
  • Various embodiments of the present invention may be described with reference to flowcharts and block diagrams whose blocks may represent (1) steps of processes in which operations are performed or (2) sections of apparatuses responsible for performing operations. Certain steps and sections may be implemented by dedicated circuitry, programmable circuitry supplied with computer-readable instructions stored on computer-readable media, and/or processors supplied with computer-readable instructions stored on computer-readable media. Dedicated circuitry may include digital and/or analog hardware circuits and may include integrated circuits (IC) and/or discrete circuits. Programmable circuitry may include reconfigurable hardware circuits comprising logical AND, OR, XOR, NAND, NOR, and other logical operations, flip-flops, registers, memory elements, etc., such as field-programmable gate arrays (FPGA), programmable logic arrays (PLA), etc.
  • The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
  • The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
  • Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
  • Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to individualize the electronic circuitry, in order to perform aspects of the present invention.
  • While the embodiments of the present invention have been described, the technical scope of the invention is not limited to the above described embodiments. It is apparent to persons skilled in the art that various alterations and improvements can be added to the above-described embodiments. It is also apparent from the scope of the claims that the embodiments added with such alterations or improvements can be included in the technical scope of the invention.
  • The operations, procedures, steps, and stages of each process performed by an apparatus, system, program, and method shown in the claims, embodiments, or diagrams can be performed in any order as long as the order is not indicated by “prior to,” “before,” or the like and as long as the output from a previous process is not used in a later process. Even if the process flow is described using phrases such as “first” or “next” in the claims, embodiments, or diagrams, it does not necessarily mean that the process must be performed in this order.

Claims (20)

1. A computer-readable medium including instructions that are executable by a computer to cause the computer to perform operations comprising:
obtaining a computational graph and a configuration of a hardware chip,
the computational graph of a neural network having a plurality of layers, each layer having a plurality of nodes and a plurality of edges, each node including a representation of a mathematical operation;
the configuration of the hardware chip including at least one module for performing the mathematical operations and an on-chip memory, the hardware chip operable to perform inference of the neural network in portions of each layer by performing the mathematical operations on activation data, sequentially by layer, of corresponding portions of layers while interfacing with an external memory storing the activation data;
dividing inference of the plurality of layers into a plurality of groups, each group including a number of sequential layers based on an estimate of duration and energy consumption by the hardware chip to perform inference of the neural network by performing the mathematical operations, sequentially by layer, of corresponding portions of layers of each group, the dividing inference further including
simulating a performance of inference of the neural network by the hardware chip to determine the estimate of duration and energy consumption of the hardware chip for each of a plurality of candidate group divisions, each candidate group division identifying a unique division of the plurality of layers, and
comparing the estimate of duration and energy consumption of each candidate group division of the same layers among the plurality of layers; and
generating instructions for the hardware chip to perform inference of the neural network, sequentially by group, of the plurality of groups.
2. (canceled)
3. The computer-readable medium of claim 1, wherein each of the plurality of candidate group divisions identifies even divisions of the plurality of layers.
4. The computer-readable medium of claim 1, wherein the dividing inference further includes:
the plurality of candidate group divisions identify a single layer as a first candidate group division, a preceding group of layers as a second candidate group division, and the single layer together with the preceding group of layers as a third candidate group division,
the comparing includes comparing (i) an estimate of duration and energy consumption to perform the mathematical operations of the third candidate group division and (ii) an estimate of total duration and total energy consumption to perform the mathematical operations of the first candidate group division and the second candidate group division.
5. The computer-readable medium of claim 1, wherein the simulating includes
generating instructions for the hardware chip to perform the mathematical operations, sequentially by layer, of corresponding portions in layers of each group,
executing the instructions on a simulation of the hardware chip while identifying operations of each clock cycle,
wherein the estimate of energy consumption of the hardware chip is based on a sum of individual energy consumptions associated with each operation, and the estimate of duration is based on the number of clock cycles.
6. The computer-readable medium of claim 1, further comprising determining dimensions of the portions of each layer by:
simulating a performance of inference of the neural network by the hardware chip to determine the estimate of duration and energy consumption of the hardware chip for each of a plurality of candidate dimension specifications, each candidate dimension specification based on a capacity of the on-chip memory and a degree of parallelism of the hardware chip, and
comparing the estimate of duration and energy consumption of each candidate dimension specification.
7. The computer-readable medium of claim 1, wherein
the neural network is a convolutional neural network, and the portions of each layer are tiles, and
the at least one module of the hardware chip includes at least one convolution module.
8. The computer-readable medium of claim 7, wherein the at least one convolution module includes at least one dedicated depth-wise convolution module and at least one point-wise convolution module.
9. The computer-readable medium of claim 7, wherein the at least one module further includes at least one module for performing activation operations, at least one module for loading the activation data from the external memory onto the on-chip memory, at least one module for storing activation data to the external memory from the on-chip memory, and at least one module for loading weights of the convolutional neural network from the external memory to the on-chip memory.
10. The computer-readable medium of claim 1, wherein the generating instructions for the hardware chip further includes generating instructions for the hardware chip to:
retrieve activation data of corresponding portions in the first layer in each group from the external memory, and
record activation data resulting from the mathematical operations of corresponding portions in the last layer in each group to an external memory.
11. The computer-readable medium of claim 1, wherein the generating instructions for the hardware chip further includes:
assigning each operation to a queue among a plurality of queues, and
ordering execution of operations in each queue.
12. The computer-readable medium of claim 1, wherein the generating instructions for the hardware chip further includes:
allocating locations in the on-chip memory to data for performing inference of the neural network, and
scheduling evacuation of data to the external memory in order to perform inference of the neural network.
13. The computer-readable medium of claim 12, wherein the generating instructions includes generating instructions for the at least one module of the hardware chip to perform loading of data from the external memory to the allocated locations.
14. The computer-readable medium of claim 1, wherein the generating instructions for the hardware chip further includes synchronization flag annotating to preserve mutual ordering of dependent operations.
15. The computer-readable medium of claim 1, wherein the generating instructions for the hardware chip further includes converting instructions into binary representation.
16. The computer-readable medium of claim 1, wherein
the hardware chip further includes a plurality of cores, the at least one module for performing the mathematical operations and the on-chip memory being distributed among the plurality of cores,
each core includes at least one transmitter block and at least one receiver block configured for inter-core communication, and
the generating instructions for the hardware chip further includes distributing instructions among the cores.
17. The computer-readable medium of claim 1, wherein the configuration of the hardware chip further includes at least one transmitter block and at least one receiver block configured to communicate with a second instance of the hardware chip of a multi-chip configuration.
18. A computer-implemented method comprising:
obtaining a computational graph and a configuration of a hardware chip,
the computational graph of a neural network having a plurality of layers, each layer having a plurality of nodes and a plurality of edges, each node including a representation of a mathematical operation;
the configuration of the hardware chip including at least one module for performing the mathematical operations and an on-chip memory, the hardware chip operable to perform inference of the neural network in portions of each layer by performing the mathematical operations on activation data, sequentially by layer, of corresponding portions of layers while interfacing with an external memory storing the activation data;
dividing inference of the plurality of layers into a plurality of groups, each group including a number of sequential layers based on an estimate of duration and energy consumption by the hardware chip to perform inference of the neural network by performing the mathematical operations, sequentially by layer, of corresponding portions of layers of each group, the dividing inference further including
simulating a performance of inference of the neural network by the hardware chip to determine the estimate of duration and energy consumption of the hardware chip for each of a plurality of candidate group divisions, each candidate group division identifying a unique division of the plurality of layers, and
comparing the estimate of duration and energy consumption of each candidate group division of the same layers among the plurality of layers; and
generating instructions for the hardware chip to perform inference of the neural network, sequentially by group, of the plurality of groups.
19. An apparatus comprising:
an obtaining section configured to obtain a computational graph and a configuration of a hardware chip,
the computational graph of a neural network having a plurality of layers, each layer having a plurality of nodes and a plurality of edges, each node including a representation of a mathematical operation;
the configuration of the hardware chip including at least one module for performing the mathematical operations and an on-chip memory, the hardware chip operable to perform inference of the neural network in portions of each layer by performing the mathematical operations on activation data, sequentially by layer, of corresponding portions of layers while interfacing with an external memory storing the activation data;
a dividing section configured to divide inference of the plurality of layers into a plurality of groups, each group including a number of sequential layers based on an estimate of duration and energy consumption by the hardware chip to perform inference of the neural network by performing the mathematical operations, sequentially by layer, of corresponding portions of layers of each group, the dividing section further configured to
simulate a performance of inference of the neural network by the hardware chip to determine the estimate of duration and energy consumption of the hardware chip for each of a plurality of candidate group divisions, each candidate group division identifying a unique division of the plurality of layers, and
compare the estimate of duration and energy consumption of each candidate group division of the same layers among the plurality of layers; and
a generating section configured to generate instructions for the hardware chip to perform inference of the neural network, sequentially by group, of the plurality of groups.
20. (canceled)
US17/186,003 2020-05-15 2021-02-26 Neural network accelerator hardware-specific division of inference into groups of layers Active US11176449B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/492,681 US20220027716A1 (en) 2020-05-15 2021-10-04 Neural network accelerator

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2020-086356 2020-05-15
JPJP2020-086356 2020-05-15
JP2020086356A JP6834097B1 (en) 2020-05-15 2020-05-15 Hardware-specific partitioning of inference neural network accelerators

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/492,681 Division US20220027716A1 (en) 2020-05-15 2021-10-04 Neural network accelerator

Publications (2)

Publication Number Publication Date
US11176449B1 US11176449B1 (en) 2021-11-16
US20210357732A1 true US20210357732A1 (en) 2021-11-18

Family

ID=74665138

Family Applications (2)

Application Number Title Priority Date Filing Date
US17/186,003 Active US11176449B1 (en) 2020-05-15 2021-02-26 Neural network accelerator hardware-specific division of inference into groups of layers
US17/492,681 Pending US20220027716A1 (en) 2020-05-15 2021-10-04 Neural network accelerator

Family Applications After (1)

Application Number Title Priority Date Filing Date
US17/492,681 Pending US20220027716A1 (en) 2020-05-15 2021-10-04 Neural network accelerator

Country Status (2)

Country Link
US (2) US11176449B1 (en)
JP (1) JP6834097B1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112926733B (en) * 2021-03-10 2022-09-16 之江实验室 Special chip for voice keyword detection
CN115706703A (en) * 2021-08-13 2023-02-17 中移系统集成有限公司 Edge AI acceleration processing method and device, electronic equipment and readable storage medium
CN114819084B (en) * 2022-04-26 2024-03-01 北京百度网讯科技有限公司 Model reasoning method, device, equipment and storage medium

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7818273B2 (en) * 2007-09-18 2010-10-19 International Business Machines Corporation System and method for cortical simulation
US9747546B2 (en) 2015-05-21 2017-08-29 Google Inc. Neural network processor
US10387770B2 (en) * 2015-06-10 2019-08-20 Samsung Electronics Co., Ltd. Spiking neural network with reduced memory access and reduced in-network bandwidth consumption
KR102433254B1 (en) 2015-10-28 2022-08-18 구글 엘엘씨 Processing computational graphs
JP2017102790A (en) 2015-12-03 2017-06-08 富士通株式会社 Information processing apparatus, arithmetic processor, and control method of information processing apparatus
US10733350B1 (en) * 2015-12-30 2020-08-04 Sharat C Prasad On-chip and system-area multi-processor interconnection networks in advanced processes for maximizing performance minimizing cost and energy
US10083347B2 (en) * 2016-07-29 2018-09-25 NTech lab LLC Face identification using artificial neural network
US11157814B2 (en) * 2016-11-15 2021-10-26 Google Llc Efficient convolutional neural networks and techniques to reduce associated computational costs
CN106557332A (en) * 2016-11-30 2017-04-05 上海寒武纪信息科技有限公司 A kind of multiplexing method and device of instruction generating process
US10019668B1 (en) 2017-05-19 2018-07-10 Google Llc Scheduling neural network processing
WO2019040866A2 (en) * 2017-08-25 2019-02-28 The Board Of Trustees Of The University Of Illinois Apparatus and method for agricultural data collection and agricultural operations
US20190340490A1 (en) * 2018-05-04 2019-11-07 Apple Inc. Systems and methods for assigning tasks in a neural network processor
US11093225B2 (en) * 2018-06-28 2021-08-17 Xilinx, Inc. High parallelism computing system and instruction scheduling method thereof
US10846201B1 (en) * 2018-09-21 2020-11-24 Amazon Technologies, Inc. Performance debug for networks
US11526759B2 (en) * 2018-11-05 2022-12-13 International Business Machines Corporation Large model support in deep learning
CN110889497B (en) * 2018-12-29 2021-04-23 中科寒武纪科技股份有限公司 Learning task compiling method of artificial intelligence processor and related product
US11488011B2 (en) * 2019-03-13 2022-11-01 United States Of America As Represented By The Secretary Of The Navy Scalable extensible neural network system and methods
US11526736B2 (en) * 2019-08-15 2022-12-13 Intel Corporation Methods, systems, articles of manufacture and apparatus to map workloads
CN112465129B (en) * 2019-09-09 2024-01-09 上海登临科技有限公司 On-chip heterogeneous artificial intelligent processor
US11562205B2 (en) * 2019-09-19 2023-01-24 Qualcomm Incorporated Parallel processing of a convolutional layer of a neural network with compute-in-memory array
CN111583940A (en) * 2020-04-20 2020-08-25 东南大学 Very low power consumption keyword awakening neural network circuit

Also Published As

Publication number Publication date
US11176449B1 (en) 2021-11-16
JP6834097B1 (en) 2021-02-24
US20220027716A1 (en) 2022-01-27
JP2021179937A (en) 2021-11-18

Similar Documents

Publication Publication Date Title
US11176449B1 (en) Neural network accelerator hardware-specific division of inference into groups of layers
KR102650299B1 (en) Static block scheduling in massively parallel software-defined hardware systems.
Fowers et al. A configurable cloud-scale DNN processor for real-time AI
CN107679621B (en) Artificial neural network processing device
US9529590B2 (en) Processor for large graph algorithm computations and matrix operations
US8769216B2 (en) Optimizing output vector data generation using a formatted matrix data structure
US20120203985A1 (en) Data Structure For Tiling And Packetizing A Sparse Matrix
US20180246765A1 (en) System and method for scheduling jobs in distributed datacenters
US10564929B2 (en) Communication between dataflow processing units and memories
EP2657842B1 (en) Workload optimization in a multi-processor system executing sparse-matrix vector multiplication
US9420036B2 (en) Data-intensive computer architecture
US20190377606A1 (en) Smart accelerator allocation and reclamation for deep learning jobs in a computing cluster
US11275561B2 (en) Mixed precision floating-point multiply-add operation
CN114730275A (en) Method and apparatus for vectorized resource scheduling in a distributed computing system using tensor
US20120204183A1 (en) Associative distribution units for a high flowrate synchronizer/schedule
CN113449861A (en) Speculative training using partial gradient update
US11941528B2 (en) Neural network training in a distributed system
Özden et al. ElastiSim: a batch-system simulator for malleable workloads
CN114008589A (en) Dynamic code loading for multiple executions on a sequential processor
US20210390405A1 (en) Microservice-based training systems in heterogeneous graphic processor unit (gpu) cluster and operating method thereof
US11221979B1 (en) Synchronization of DMA transfers for large number of queues
Lim et al. ODMDEF: on-device multi-DNN execution framework utilizing adaptive layer-allocation on general purpose cores and accelerators
US20220229880A1 (en) Compute time point processor array for solving partial differential equations
US11847507B1 (en) DMA synchronization using alternating semaphores
US11494326B1 (en) Programmable computations in direct memory access engine

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: EDGECORTIX PTE. LTD., SINGAPORE

Free format text: ASSIGNEE ADDRESS CHANGE;ASSIGNOR:EDGECORTIX PTE. LTD.;REEL/FRAME:062967/0881

Effective date: 20221209

AS Assignment

Owner name: EDGECORTIX INC., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:EDGECORTIX PTE. LTD.;REEL/FRAME:065056/0608

Effective date: 20230908