EP3931758A1 - Neural network layer processing with scaled quantization - Google Patents

Neural network layer processing with scaled quantization

Info

Publication number
EP3931758A1
Authority
EP
European Patent Office
Prior art keywords
subset
data
scaled
quantized
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP20711689.8A
Other languages
German (de)
French (fr)
Inventor
Daniel Lo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Technology Licensing LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Technology Licensing LLC filed Critical Microsoft Technology Licensing LLC
Publication of EP3931758A1 publication Critical patent/EP3931758A1/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/044 - Recurrent networks, e.g. Hopfield networks
    • G06N 3/045 - Combinations of networks
    • G06N 3/06 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 - Complex mathematical operations
    • G06F 17/16 - Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G06F 7/00 - Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F 7/38 - Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F 7/48 - Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation, using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F 7/483 - Computations with numbers represented by a non-linear combination of denominational numbers, e.g. rational numbers, logarithmic number system or floating-point numbers

Definitions

  • Neural network technology is used to perform complex tasks such as image classification, reading comprehension, language translation, or speech recognition. Many of these tasks include deep learning that involves performing large numbers of floating point matrix multiply and accumulate operations. These operations are performed during training as well as during serving of results based on the input data and the trained data.
  • Neural networks may use values corresponding to the input data and the training data expressed in different formats, including data expressed in different levels of precision.
  • the present disclosure relates to a method including receiving a subset of data corresponding to a layer of a neural network.
  • the method may further include prior to performing any matrix operations using the subset of the data, using a processor, scaling the subset of the data by a scaling factor to generate a scaled subset of data.
  • the method may further include using the processor, quantizing the scaled subset of the data to generate a scaled and quantized subset of data.
  • the method may further include performing the matrix operations using the scaled and quantized subset of the data to generate a subset of results of the matrix operations.
  • the method may further include descaling the subset of the results of the matrix operations, by multiplying the subset of the results of the matrix operations with an inverse of the scaling factor, to generate a descaled subset of results of the matrix operations.
  • the present disclosure relates to a processor configured to receive a subset of data corresponding to a layer of a neural network.
  • the processor may further be configured to prior to performing any matrix operations using the subset of the data, scale the subset of the data by a scaling factor to generate a scaled subset of data.
  • the processor may further be configured to quantize the scaled subset of the data to generate a scaled and quantized subset of data.
  • the processor may further be configured to perform the matrix operations using the scaled and quantized subset of the data and generate a subset of results of the matrix operations.
  • the processor may further be configured to descale the subset of the results of the matrix operations, by multiplying the subset of the results of the matrix operations with an inverse of the scaling factor, to generate a descaled subset of results of the matrix operations.
  • the present disclosure relates to a method including receiving a subset of data corresponding to a layer of a neural network.
  • the method may further include prior to performing any matrix operations using the subset of the data, using a processor, scaling the subset of the data by a scaling factor to generate a scaled subset of data.
  • the method may further include using the processor, quantizing the scaled subset of the data to generate a scaled and quantized subset of data.
  • the method may further include using the processor, quantizing the subset of the data to generate a quantized subset of data.
  • the method may further include determining a first quantization error associated with the scaled and quantized subset of data and determining a second quantization error associated with the quantized subset of data.
  • the method may further include if the first quantization error is greater than or equal to the second quantization error, then performing the matrix operations using the quantized subset of data to generate a first subset of results of the matrix operations.
  • the method may further include if the first quantization error is lower than the second quantization error, then performing the matrix operations using the scaled and quantized subset of the data to generate a second subset of results of the matrix operations and descaling the second subset of the results of the matrix operations, by multiplying the second subset of the results of the matrix operations with an inverse of the scaling factor, to generate a descaled subset of results of the matrix operations.
  • FIG. 1 is a block diagram of a neural network processor with scaled quantization in accordance with one example
  • FIG. 2 is a diagram showing scaling and quantization
  • FIG. 3 is a block diagram of a system for scaled quantization in accordance with one example
  • FIG. 4 shows a flow diagram of a method for performing scaled quantization in accordance with one example
  • FIG. 5 shows a flow diagram of a method for selectively performing scaled quantization in accordance with one example.
  • Examples disclosed in the present disclosure relate to systems, methods, and components for implementing neural network based processing. Certain examples relate to processing layers of Convolutional Neural Networks (CNNs) using scaled quantization. Certain examples relate to processing layers of CNNs using a neural network processor.
  • a neural network processor may be implemented using any of Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Erasable and/or Complex programmable logic devices (PLDs), Programmable Array Logic (PAL) devices, and Generic Array Logic (GAL) devices.
  • Neural network processors may also be implemented using a CPU, a GPU, a combination of CPUs and GPUs, or a combination of any of the programmable hardware, CPUs, and GPUs.
  • An image file may be used to configure or re-configure FPGAs.
  • the image file or similar file or program may be delivered via a network link or a local link (e.g., PCIe) from a host CPU.
  • Information included in an image file can be used to program hardware blocks of a node (e.g., logic blocks and reconfigurable interconnects of an FPGA) to implement desired functionality. Desired functionality can be implemented to support any service that can be offered via a combination of computing, networking, and storage resources, such as via a data center or other infrastructure for delivering a service.
  • Cloud computing may refer to a model for enabling on-demand network access to a shared pool of configurable computing resources.
  • cloud computing can be employed in the marketplace to offer ubiquitous and convenient on-demand access to the shared pool of configurable computing resources.
  • the shared pool of configurable computing resources can be rapidly provisioned via virtualization and released with low management effort or service provider interaction, and then scaled accordingly.
  • a cloud computing model can be composed of various characteristics such as, for example, on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, and so forth.
  • a cloud computing model may be used to expose various service models, such as, for example, Hardware as a Service (“HaaS”), Software as a Service (“SaaS”), Platform as a Service (“PaaS”), and Infrastructure as a Service (“IaaS”).
  • a cloud computing model can also be deployed using different deployment models such as private cloud, community cloud, public cloud, hybrid cloud, and so forth.
  • Machine learning services, such as those based on Recurrent Neural Networks (RNNs), Long Short Term Memory (LSTM) neural networks, or Gated Recurrent Units (GRUs), may be implemented using the systems and nodes described in this disclosure.
  • the service-related content or other information such as words, sentences, images, videos, or other such content/information may be translated into a vector representation.
  • the neural network model may comprise many layers and each layer may be encoded as matrices or vectors of weights expressed in the form of coefficients or constants that have been obtained via training of a neural network.
  • GPUs or programmable hardware logic blocks in the nodes may process the matrices or vectors to perform various operations, including multiply, add, and other operations against input vectors representing encoded information related to the service.
  • an LSTM network may comprise a sequence of repeating RNN layers or other types of layers.
  • Each layer of the LSTM network may consume an input at a given time step, e.g., a layer’s state from a previous time step, and may produce a new set of outputs or states.
  • a single chunk of content may be encoded into a single vector or multiple vectors.
  • a word or a combination of words (e.g., a phrase, a sentence, or a paragraph) may be encoded as a single vector.
  • Each chunk may be encoded into an individual layer (e.g., a particular time step) of an LSTM network.
  • An LSTM layer may be described using a set of equations, such as the ones reproduced in the detailed description below.
  • the inputs and hidden states may be processed using a combination of vector operations (e.g., dot-product, inner product, or vector addition) and non-linear functions (e.g., sigmoids and hyperbolic tangents).
  • the most compute intensive operations may arise from the dot products, which may be implemented using dense matrix-vector and matrix-matrix multiplication routines.
  • the processing of the vector operations and non-linear functions may be performed in parallel.
  • Values corresponding to the training data, including vector data may be represented in a number format.
  • Floating point representation for the values of the vector data is expensive because each individual point value has an exponent specific to that point value.
  • the alternative may be a fixed point representation.
  • Performance, energy usage, and storage requirements can be improved through the use of reduced precision formats to implement artificial neural networks.
  • Such formats can represent floating point numbers using a small (e.g. 3, 4, or 5-bit) mantissa and an exponent shared by two or more floating point numbers.
  • Neural networks that use reduced precision formats may be referred to as quantized neural networks.
  • fixed point representation may use a set number of integer bits and fractional bits to express numbers.
  • Fixed point can be efficiently processed in hardware with integer arithmetic, which may make it a preferred format when applicable.
  • Fixed point format may be represented as qX.Y, where X is the number of integer bits and Y is the number of fractional bits.
  • Block-floating point (BFP) may apply a shared exponent to a block of fixed point numbers; for example, a vector or matrix. The shared exponent may allow a significantly higher dynamic range for the block, although individual block members have a fixed range with respect to each other.
  • Quantized neural networks can improve the latency and throughput of running neural networks by reducing computation and memory demands.
  • The use of reduced precision formats (e.g., any of a reduced precision floating point format, Block Floating Point (BFP), or integers), however, can create issues when training neural networks.
  • Parameters that may be suitable at the beginning of training may become suboptimal as the neural network converges.
  • many neural network approaches typically use full precision floating point (e.g., 32- or 16-bit floating point numbers).
  • certain software implementations of neural networks may use full precision floating point numbers.
  • certain hardware may use full precision floating point numbers.
  • certain hardware implementations of neural networks may use reduced precision numbers. Because underlying implementations of software and hardware-accelerated neural networks are different, small differences in calculations can arise that can cause errors over time.
  • quantization is applied to the input activations and the weight matrix to reduce the hardware costs of computing the matrix multiplication.
  • the quantized matrix multiplication operation may be y = Q(x)Q(W).
  • with the unquantized matrix multiplication y = (1/a)x(aW), the arbitrary scalar value a may be factored out without affecting the result.
  • the number 1.0011101101 × 2^6 may be scaled to the value 1.111 × 2^6 prior to quantization.
  • the scaled value may be quantized, and operations may be performed in the quantized domain (e.g., training, inference).
  • a value may be partitioned into tiles. A scaling operation can be performed for each tile.
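  • As a purely illustrative sketch of the scaling idea above (the 1.xxx-style quantization grid and the helper names are assumptions, not the patent's implementation), the Python snippet below scales a value so that it lands on the largest point of the assumed grid, quantizes it, and then descales by the inverse of the scaling factor:

```python
import numpy as np

def quantize_mantissa(x, frac_bits=3):
    """Round x onto an assumed 1.xxx-style grid with `frac_bits` fractional bits."""
    step = 2.0 ** -frac_bits
    return np.round(x / step) * step

# 1.0011101101 in binary (~1.2314); the 2^6 exponent in the example is unaffected by the scaling.
value  = int("10011101101", 2) / 2**10
target = int("1111", 2) / 2**3        # 1.111 in binary = 1.875, the largest point of the assumed grid

scale = target / value                # scaling factor chosen so the value fills the grid

plain  = quantize_mantissa(value)                   # quantize without scaling
scaled = quantize_mantissa(value * scale) / scale   # scale, quantize, then descale by 1/scale

print("error without scaling:", abs(value - plain))
print("error with scaling   :", abs(value - scaled))
```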
  • FIG. 1 is a block diagram of a neural network processor 100 in accordance with one example.
  • Each neural network processor 100 may include an Input Message Processor (IMP) 104 for receiving messages from other processors and an Output Message Processor (OMP) 106 for processing outgoing messages to other processors or components.
  • Each neural network processor 100 may further include a matrix vector multiplier (MVM) 110 and two or more multifunction units (MFUs) (e.g., MFU[0] 140 and MFU[1] 160).
  • Each neural network processor 100 may further include a matrix memory manager 170, a vector memory manager 180, a Vector DRAM 182, and a Matrix DRAM 184.
  • the processor may accept off-chip messages containing auxiliary information such as control and scalar data and payload data (e.g., vectors, matrices, or other tensor data structures).
  • the incoming messages may be handled by a lightweight input message processor (IMP) 104, which sends the vectors to vector memory manager 180.
  • IMP 104 may send the matrices to matrix memory manager 170.
  • vector data (e.g., data corresponding to activations) received from vector memory manager 180 may be in a higher precision format (e.g., FP16 or FP32), and vector scaling and quantization 192 may scale the higher precision vector data and then convert the scaled vector data from the higher precision format to a lower precision format (e.g., block floating point format).
  • matrix data (e.g., data corresponding to weights) received via network 102 or otherwise may be in a higher precision format (e.g., FP16 or FP32), and matrix scaling and quantization 194 may scale the higher precision matrix data and then convert the scaled matrix data from the higher precision format to a lower precision format (e.g., block floating point format).
  • In the example shown in FIG. 1, only the inputs to MVM 110 are being scaled and quantized.
  • the inputs to the MFUs are not being scaled and quantized.
  • inputs to both MVM 110 and the MFUs may be scaled and quantized.
  • Quantization may involve mapping continuous or high-precision values onto a discrete, low-precision grid. If the original points are close to their mapped quantization value, then one expects that the resulting computations will be close to the original computations.
  • the original values are quantized to better match a quantization grid.
  • the scaled quantization manager is configured to scale the values such that the largest value matches up with the largest quantization grid point. As shown in FIG. 2, as an example, when no scaling is applied, the top two quantization points (226 and 228) along the quantization grid are unused, causing the three input values (212, 214, and 216) to be put into two bins (222 and 224).
  • the weight values represented in the matrix may be scaled such that the highest value matches up with the highest quantization point.
  • the max mantissa value is 2^b - 1.
  • the weight matrix W may be scaled using the equations below such that the largest absolute value becomes 2^b - 1 (an illustrative sketch follows the results below).
  • Table 1 shows results comparing validation accuracy on the ImageNet dataset for ResNet-50. As shown, with scaling and quantization, the top-1 accuracy is improved from 53.13% to 59.21% and the top-5 accuracy is improved from 76.49% to 81.36%.
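  • The scaling equations referenced above are not reproduced in this extract; the following is a hedged sketch, under the assumption of a b-bit integer grid and NumPy data, of one way a per-matrix scaling factor could be chosen so that the largest absolute weight maps to 2^b - 1:

```python
import numpy as np

def scale_factor(W, b=4):
    """Choose a scaling factor so the largest absolute weight maps to 2**b - 1 (assumed max grid point)."""
    return (2**b - 1) / np.max(np.abs(W))

def quantize_to_grid(x, b=4):
    """Round to the nearest integer grid point and clip to the assumed range [-(2**b - 1), 2**b - 1]."""
    g = 2**b - 1
    return np.clip(np.round(x), -g, g)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)).astype(np.float32)

s = scale_factor(W)
W_q = quantize_to_grid(W * s)      # scaled and quantized weights
W_hat = W_q / s                    # descaled reconstruction, for comparison against the original W

print("largest quantized magnitude:", np.max(np.abs(W_q)))   # equals 2**4 - 1 = 15
print("mean absolute error        :", np.mean(np.abs(W - W_hat)))
```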
  • each of the matrices may have an N by N size and each of the vectors may have a size of 1 by N.
  • all instructions corresponding to neural network processor 100 may operate on native-sized data.
  • Logical vectors and matrices corresponding to the applications handled by neural network processor 100 may often be larger than the native size; in these cases, the vectors and matrices may be broken up into native-sized tiles.
  • the block size of the BFP format data may be equal to the native dimension. Therefore, each native 1 by N vector may have a shared exponent, and each row of an N by N matrix may have a shared exponent.
  • Each of the vector data and the matrix data may have a two’s complement mantissa portion, and the mantissa size for the vector data and the matrix data may be different.
  • the weight values may be fixed to be within an integer range, e.g., -(2^b - 1) to 2^b - 1. This may further simplify the hardware used for performing the dot product operations.
  • MVM 110 may include a vector register file (VRF) 112, a matrix register file (MRF), and tile engines (e.g., tile engines 114, 116, and 118).
  • Tile engines may receive input matrix and input vector data from VRF 112.
  • MVM 110 may further include precision format converters.
  • a scaling operation can be performed for each tile.
  • matrix scaling and quantization 194 may scale and quantize weight values for each of tile engines 114, 116, and 118 in parallel.
  • vector scaling and quantization 192 may scale and quantize activation values for each of tile engines 114, 116, and 118 in parallel.
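  • As an illustrative sketch only (the tile size, bit width, and function name are assumptions), per-tile scaling factors of the kind the tile engines could apply in parallel might be computed as follows, with each tile's de-scaler later applying the inverse of its factor:

```python
import numpy as np

def tile_scale_factors(W, tile=2, b=4):
    """Compute one scaling factor per (tile x tile) block so each block fills the assumed b-bit grid."""
    g = 2**b - 1
    rows, cols = W.shape
    factors = np.zeros((rows // tile, cols // tile))
    for i in range(0, rows, tile):
        for j in range(0, cols, tile):
            block = W[i:i + tile, j:j + tile]
            factors[i // tile, j // tile] = g / np.max(np.abs(block))
    return factors

W = np.arange(1, 17, dtype=np.float32).reshape(4, 4)
print(tile_scale_factors(W))   # tiles with larger values get smaller scaling factors
```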
  • two internal BFP formats may be used by MVM 110 for expressing its input and output: BFP short, for vector and matrix storage, and BFP long for accumulation.
  • BFP short may use q1.15 fixed point values with a shared 5-bit exponent
  • BFP long may use q34.40 fixed point values with a shared 5-bit exponent.
  • the matrix-vector multiplication may result in BFP long, which may be converted back to a floating-point format as a final output stage.
  • the example MVM 110 shown in FIG. 1 may include BFP to FP16 Converters 122, 124, and 126 at the output stages.
  • Tile engines 114, 116, and 118 may, in parallel, provide outputs to the respective converters as shown in the example in FIG. 1.
  • the outputs from the respective converters representing the results of the matrix operations performed by MVM 110 may be output to de-scalers 123, 125, and 127, respectively.
  • each of de-scalers 123, 125, and 127 may be configured to multiply the respective result of the matrix operations with an inverse of the scaling factor used as part of scaling and quantization.
  • the descaled results may then be further processed, including for example by MFU[0] 140.
  • the matrix data may be communicated between Matrix DRAM 184 and Matrix Memory manager 170 using M number of channels.
  • Vector memory manager 180 may move vector data over C number of channels.
  • each MFU may include crossbars (e.g., crossbars labeled as xbars).
  • MFU[0] 140 may support vector operations, such as vector-vector multiply and addition, a Sigmoid function, a TanH function, a softmax operation, a Rectified Linear Unit (ReLU) operation, and/or an activation block operation.
  • MFU[0] 140 may include crossbars (e.g., xbar 146, 148, and 150) that may stream a vector from its input bus through a pipelined sequence of operations.
  • a vector may be received via a register file labeled MulVrf 142 or another register file labeled AsVrf[0] 144, and such vectors may be subjected to any of a multiply operation, an addition operation, or some other operation.
  • MFU[0] 140 may include several hardware blocks for performing addition (e.g., 153, 157, and 161).
  • MFU[0] 140 may also include several hardware blocks for performing multiplication (e.g., 152, 156, and 159).
  • MFU[0] 140 may also include several hardware blocks for performing activation (e.g., 151, 154, and 158).
  • MFU[1] 160 may include crossbars (e.g., xbar 162, 163, and 164) that may allow MFU[1] 160 to receive outputs from MFU[0] 140 and perform additional operations on those outputs and any additional inputs received via ADD/SUB VRF 168.
  • MFU[1] 160 may include several hardware blocks for performing addition (e.g., 169, 171, and 172).
  • MFU[1] 160 may also include several hardware blocks for performing activation.
  • the outputs from MFU[1] 160 received via C channels may be coupled via a multiplexing circuit 174 to vector memory manager 180.
  • FIG. 1 shows a certain number of components of neural network processor 100 arranged in a certain manner, there could be more or fewer components arranged differently.
  • Neural network processor 100 may be used to enable issuance of instructions that can trigger millions of operations using a small number of instructions.
  • Table 2 below shows instructions corresponding to a fully parameterized LSTM:
  • neural network processor 100 may execute more or fewer instructions having a different format to accomplish the same objectives.
  • Table 3 shows how to compute a 1 × 1 convolution as part of a CNN evaluation.
  • the number of iterations over a chain of instructions for the computation may be specified.
  • the native dimension of each instruction chain may be scaled by a column scaling factor.
  • the output may be provided.
  • a pointwise Rectified Linear Unit (ReLU) operation may be performed for each element of the vector data.
  • Table 4 below shows how to compute an N × N convolution as part of a CNN evaluation.
  • the instructions below that are similar to the 1 × 1 convolution are not described again.
  • the Set2dWindows instruction may be used to set the total window size and then the SetIterations instruction may be used to slide that window across the input volume.
  • the *_inc instructions (e.g., v_rd_inc and v_add_inc) may be used to increment an address by a specified stride.
  • a stride of 2 may result in skipping of every other vector in the vector register file that is used to store vector data for operations, such as addition.
  • FIG. 3 is a block diagram of a system for scaled quantization in accordance with one example.
  • System 300 may include a processor 310, a memory 320, input/output devices 340, display 350, and network interfaces 360 interconnected via bus system 302.
  • Memory 320 may include input data 322, training data 324, training code 326, scaling and quantization (SQ) code 328, inference code 330, and evaluation code 332.
  • Input data 322 may comprise data corresponding to images or other types of information that can be classified or otherwise processed using a neural network.
  • Memory 320 may further include training data 324 that may include weights obtained by training the neural network using the higher precision numbers (e.g., floating point format numbers).
  • Memory 320 may further include training code 326 comprising instructions configured to train a neural network, such as ResNet-50. Training code 326 may use the weights obtained by training the neural network using the higher precision numbers (e.g., floating point format numbers).
  • Scaling and quantization (SQ) code 328 may include instructions configured to scale and quantize input data 322 or training data 324. As described earlier, in one example, scaling may include multiplying the data that is in a higher precision format (e.g., FP32 or FP16) by a scaling factor. Quantizing may include converting the scaled values of the data from the higher precision format to a lower precision format (e.g., a lower precision floating point format, an integer, or a block floating point format).
  • memory 320 may further include inference code 330 comprising instructions to perform inference using a trained neural network.
  • Memory 320 may further include evaluation code 332 comprising instructions to evaluate the performance of the trained neural network in terms of the accuracy of inference.
  • FIG. 3 shows a certain number of components of system 300 arranged in a certain way, additional or fewer components arranged differently may also be used.
  • memory 320 shows certain blocks of code, the functionality provided by this code may be combined or distributed.
  • the various blocks of code may be stored in non-transitory computer-readable media, such as non-volatile media and/or volatile media.
  • Non-volatile media include, for example, a hard disk, a solid state drive, a magnetic disk or tape, an optical disk or tape, a flash memory, an EPROM, NVRAM, PRAM, or other such media, or networked versions of such media.
  • Volatile media include, for example, dynamic memory, such as, DRAM, SRAM, a cache, or other such media.
  • FIG. 4 is a flow chart 400 of a method in accordance with one example.
  • the method may be implemented by processor 310 of FIG. 3.
  • all or some of the steps of this method may be implemented by neural network processor 100.
  • Step 410 may include receiving a subset of data corresponding to a layer of a neural network.
  • the subset of the data is expressed in a higher precision format (e.g., FP32 or FP16).
  • the higher precision format may be floating point precision format.
  • another step may include memory 320 receiving input data 322 via another storage or via a network and storing as part of input data 322.
  • Step 420 may include, prior to performing any matrix operations using the subset of the data, using the processor, scaling the subset of the data by a scaling factor to generate a scaled subset of the data. This step may be performed using SQ code 328. Alternatively, this step may be performed by vector scaling and quantization 192 and/or by matrix scaling and quantization 194.
  • Step 430 may include, using the processor, quantizing the scaled subset of the data to generate a scaled and quantized subset of data. This step may be performed using SQ code 328. Alternatively, this step may be performed by vector scaling and quantization 192 and/or by matrix scaling and quantization 194. In one example, this step may include converting the scaled subset of the data from a higher precision format to a lower precision format.
  • Step 440 may include performing the matrix operations using the scaled and quantized subset of the data to generate a subset of results of the matrix operations. This step may be performed using training code 326 or inference code 330 when executed by processor 310 of FIG. 3. Alternatively, this step may be performed by MVM 110 of FIG. 1.
  • Step 450 may include descaling the subset of the results of the matrix operations, by multiplying the subset of the results of the matrix operations with an inverse of the scaling factor, to generate a descaled subset of results of the matrix operations.
  • This step may be performed using SQ code 328 when executed by processor 310 of FIG. 3.
  • this step may be performed by de-scalers 123, 125, and 127 of FIG. 1.
  • each of de-scalers 123, 125, and 127 may be configured to multiply the respective result of the matrix operations with an inverse of the scaling factor used as part of the scaling operation.
  • FIG. 4 describes several steps performed in a certain order, additional or fewer steps may be performed in a different order.
  • each of the steps of flowchart 400 may be performed by neural network processor 100, or by processor 310, or by a combination of the two.
  • FIG. 5 is a flow chart 500 of a method in accordance with one example. The method may be implemented by processor 310 of FIG. 3. Alternatively, all or some of the steps of this method may be implemented by neural network processor 100.
  • Step 510 may include receiving a subset of data corresponding to a layer of a neural network.
  • the subset of the data is expressed in a higher precision format (e.g., FP32 or FP16).
  • the higher precision format may be floating point precision format.
  • another step may include memory 320 receiving input data 322 via another storage or via a network and storing as part of input data 322.
  • Step 520 may include, prior to performing any matrix operations using the subset of the data, using the processor, scaling the subset of the data by a scaling factor to generate a scaled subset of the data.
  • This step may be performed using SQ code 328 when executed by processor 310 of FIG. 3.
  • this step may be performed by vector scaling and quantization 192 and/or by matrix scaling and quantization 194.
  • Step 530 may include, using the processor, quantizing the scaled subset of the data to generate a scaled and quantized subset of data.
  • This step may be performed using SQ code 328 when executed by processor 310 of FIG. 3.
  • this step may be performed by vector scaling and quantization 192 and/or by matrix scaling and quantization 194.
  • this step may include converting the scaled subset of the data from a higher precision format to a lower precision format.
  • Step 540 may include using the processor, quantizing the subset of the data to generate a quantized subset of data.
  • This step may be performed using SQ code 328 when executed by processor 310 of FIG. 3.
  • this step may be performed by vector scaling and quantization 192 and/or by matrix scaling and quantization 194.
  • this step may include converting the subset of the data from a higher precision format to a lower precision format.
  • Step 550 may include determining a first quantization error associated with the scaled and quantized subset of data and determining a second quantization error associated with the quantized subset of data.
  • the quantization error may be the L1 error, which is the sum of the absolute differences between the original values and the quantized values; an illustrative sketch of this comparison follows the discussion of FIG. 5 below. This step may be performed using SQ code 328 when executed by processor 310 of FIG. 3.
  • Step 560 may include, if the first quantization error is greater than or equal to the second quantization error, performing the matrix operations using the quantized subset of data to generate a first subset of results of the matrix operations. This step may be performed using SQ code 328 when executed by processor 310 of FIG. 3. Alternatively, this step may be performed using neural network processor 100.
  • Step 570 may include if the first quantization error is lower than the second quantization error, performing the matrix operations using the scaled and quantized subset of the data to generate a second subset of results of the matrix operations and, using the processor, descaling the second subset of the results of the matrix operations, by multiplying the second subset of the results of the matrix operations with an inverse of the scaling factor, to generate a descaled subset of results of the matrix operations.
  • This step may be performed using SQ code 328 when executed by processor 310 of FIG. 3.
  • this step may be performed using neural network processor 100.
  • FIG. 5 describes several steps performed in a certain order, additional or fewer steps may be performed in a different order.
  • each of the steps of flowchart 500 may be performed by neural network processor 100, or by processor 310, or by a combination of the two.
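  • A minimal sketch of the selection logic of flow chart 500 (steps 540-570) is shown below; the b-bit integer quantizer, the choice of scaling factor, and the function names are assumptions used only to make the L1 comparison concrete:

```python
import numpy as np

def quantize(x, b=4):
    """Assumed b-bit integer quantizer, for illustration only."""
    g = 2**b - 1
    return np.clip(np.round(x), -g, g)

def select_quantization(x, scale, b=4):
    """Quantize with and without scaling, compare L1 errors, and keep the better variant."""
    q_plain = quantize(x, b)                            # quantized subset of data
    q_scaled = quantize(x * scale, b)                   # scaled and quantized subset of data
    err_plain = np.sum(np.abs(x - q_plain))             # second quantization error
    err_scaled = np.sum(np.abs(x - q_scaled / scale))   # first quantization error (scale undone for comparison)
    if err_scaled < err_plain:
        return q_scaled, scale   # use the scaled values; results of the matrix ops are descaled by 1/scale
    return q_plain, 1.0          # use the plain quantized values; no descaling needed

x = np.array([0.4, 1.3, 2.7, 3.9], dtype=np.float32)
values, s = select_quantization(x, scale=(2**4 - 1) / np.max(np.abs(x)))
print(values, s)
```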
  • the present disclosure relates to a method including receiving a subset of data corresponding to a layer of a neural network.
  • the method may further include prior to performing any matrix operations using the subset of the data, using a processor, scaling the subset of the data by a scaling factor to generate a scaled subset of data.
  • the method may further include using the processor, quantizing the scaled subset of the data to generate a scaled and quantized subset of data.
  • the method may further include performing the matrix operations using the scaled and quantized subset of the data to generate a subset of results of the matrix operations.
  • the method may further include descaling the subset of the results of the matrix operations, by multiplying the subset of the results of the matrix operations with an inverse of the scaling factor, to generate a descaled subset of results of the matrix operations.
  • the scaled and the quantized subset of the data may comprise scaled and quantized activation values corresponding to the layer of the neural network.
  • the scaled and the quantized subset of the data may comprise scaled and quantized weight values corresponding to the layer of the neural network.
  • the subset of the data may be expressed in a first precision format, where the first precision format comprises floating point format.
  • the quantized subset of the data may be expressed in a second precision format, and the second precision format may comprise a precision format selected from one of an integer format, a reduced floating point precision format, or a block floating point format.
  • the scaled and quantized subset of the data may comprise scaled and quantized matrix data corresponding to weight values for the layer of the neural network, and the performing the matrix operations may comprise performing matrix-vector multiplication operations using the scaled and quantized matrix data and vector data corresponding to activation values for the layer of the neural network.
  • the scaled and quantized subset of the data may comprise scaled and quantized matrix data corresponding to weight values for the layer of the neural network and scaled and quantized vector data corresponding to activation values for the layer of the neural network, and the performing the matrix operations may comprise performing matrix-vector multiplication operations using the scaled and quantized matrix data and the scaled and quantized vector data.
  • the present disclosure relates to a processor configured to receive a subset of data corresponding to a layer of a neural network.
  • the processor may further be configured to prior to performing any matrix operations using the subset of the data, scale the subset of the data by a scaling factor to generate a scaled subset of data.
  • the processor may further be configured to quantize the scaled subset of the data to generate a scaled and quantized subset of data.
  • the processor may further be configured to perform the matrix operations using the scaled and quantized subset of the data and generate a subset of results of the matrix operations.
  • the processor may further be configured to descale the subset of the results of the matrix operations, by multiplying the subset of the results of the matrix operations with an inverse of the scaling factor, to generate a descaled subset of results of the matrix operations.
  • the scaled and the quantized subset of the data may comprise scaled and quantized activation values corresponding to the layer of the neural network.
  • the scaled and the quantized subset of the data may comprise scaled and quantized weight values corresponding to the layer of the neural network.
  • the subset of the data may be expressed in a first precision format, where the first precision format comprises floating point format.
  • the quantized subset of the data may be expressed in a second precision format, and the second precision format may comprise a precision format selected from one of an integer format, a reduced floating point precision format, or a block floating point format.
  • the scaled and quantized subset of the data may comprise scaled and quantized matrix data corresponding to weight values for the layer of the neural network, and the processor may further be configured to perform matrix-vector multiplication operations using the scaled and quantized matrix data and vector data corresponding to activation values for the layer of the neural network.
  • the scaled and quantized subset of the data may comprise scaled and quantized matrix data corresponding to weight values for the layer of the neural network and scaled and quantized vector data corresponding to activation values for the layer of the neural network, and the processor may further be configured to perform matrix-vector multiplication operations using the scaled and quantized matrix data and the scaled and quantized vector data.
  • the present disclosure relates to a method including receiving a subset of data corresponding to a layer of a neural network.
  • the method may further include prior to performing any matrix operations using the subset of the data, using a processor, scaling the subset of the data by a scaling factor to generate a scaled subset of data.
  • the method may further include using the processor, quantizing the scaled subset of the data to generate a scaled and quantized subset of data.
  • the method may further include using the processor, quantizing the subset of the data to generate a quantized subset of data.
  • the method may further include determining a first quantization error associated with the scaled and quantized subset of data and determining a second quantization error associated with the quantized subset of data.
  • the method may further include if the first quantization error is greater than or equal to the second quantization error, then performing the matrix operations using the quantized subset of data to generate a first subset of results of the matrix operations.
  • the method may further include if the first quantization error is lower than the second quantization error, then performing the matrix operations using the scaled and quantized subset of the data to generate a second subset of results of the matrix operations and descaling the second subset of the results of the matrix operations, by multiplying the second subset of the results of the matrix operations with an inverse of the scaling factor, to generate a descaled subset of results of the matrix operations.
  • the scaled and the quantized subset of the data may comprise scaled and quantized activation values corresponding to the layer of the neural network.
  • the scaled and the quantized subset of the data may comprise scaled and quantized weight values corresponding to the layer of the neural network.
  • the subset of the data may be expressed in a first precision format, where the first precision format comprises floating point format.
  • the quantized subset of the data may be expressed in a second precision format, and the second precision format may comprise a precision format selected from one of an integer format, a reduced floating point precision format, or a block floating point format.
  • the method may further comprise selecting the scaling factor for the subset of the data.
  • any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved.
  • any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or inter-medial components.
  • any two components so associated can also be viewed as being “operably connected,” or “coupled,” to each other to achieve the desired functionality.
  • non-transitory media refers to any media storing data and/or instructions that cause a machine to operate in a specific manner.
  • exemplary non-transitory media include non-volatile media and/or volatile media.
  • Non-volatile media include, for example, a hard disk, a solid state drive, a magnetic disk or tape, an optical disk or tape, a flash memory, an EPROM, NVRAM, PRAM, or other such media, or networked versions of such media.
  • Volatile media include, for example, dynamic memory, such as, DRAM, SRAM, a cache, or other such media.
  • Non-transitory media is distinct from, but can be used in conjunction with transmission media.
  • Transmission media is used for transferring data and/or instruction to or from a machine.
  • Exemplary transmission media include coaxial cables, fiber-optic cables, copper wires, and wireless media, such as radio waves.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Pure & Applied Mathematics (AREA)
  • Software Systems (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Nonlinear Science (AREA)
  • Neurology (AREA)
  • Complex Calculations (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Processors and methods for neural network processing are provided. A method includes receiving a subset of data corresponding to a layer of a neural network. The method further includes prior to performing any matrix operations using the subset of the data, scaling the subset of the data by a scaling factor to generate a scaled subset of data. The method further includes quantizing the scaled subset of the data to generate a scaled and quantized subset of data. The method further includes performing the matrix operations using the scaled and quantized subset of the data to generate a subset of results of the matrix operations. The method further includes descaling the subset of the results of the matrix operations, by multiplying the subset of the results of the matrix operations with an inverse of the scaling factor, to generate a descaled subset of results of the matrix operations.

Description

NEURAL NETWORK LAYER PROCESSING WITH SCALED QUANTIZATION
BACKGROUND
[0001] Neural network technology is used to perform complex tasks such as image classification, reading comprehension, language translation, or speech recognition. Many of these tasks include deep learning that involves performing large numbers of floating point matrix multiply and accumulate operations. These operations are performed during training as well as during serving of results based on the input data and the trained data.
[0002] Neural networks may use values corresponding to the input data and the training data expressed in different formats, including data expressed in different levels of precision.
SUMMARY
[0003] In one example, the present disclosure relates to a method including receiving a subset of data corresponding to a layer of a neural network. The method may further include prior to performing any matrix operations using the subset of the data, using a processor, scaling the subset of the data by a scaling factor to generate a scaled subset of data. The method may further include using the processor, quantizing the scaled subset of the data to generate a scaled and quantized subset of data. The method may further include performing the matrix operations using the scaled and quantized subset of the data to generate a subset of results of the matrix operations. The method may further include descaling the subset of the results of the matrix operations, by multiplying the subset of the results of the matrix operations with an inverse of the scaling factor, to generate a descaled subset of results of the matrix operations.
[0004] In another example, the present disclosure relates to a processor configured to receive a subset of data corresponding to a layer of a neural network. The processor may further be configured to prior to performing any matrix operations using the subset of the data, scale the subset of the data by a scaling factor to generate a scaled subset of data.
The processor may further be configured to quantize the scaled subset of the data to generate a scaled and quantized subset of data. The processor may further be configured to perform the matrix operations using the scaled and quantized subset of the data and generate a subset of results of the matrix operations. The processor may further be configured to descale the subset of the results of the matrix operations, by multiplying the subset of the results of the matrix operations with an inverse of the scaling factor, to generate a descaled subset of results of the matrix operations. [0005] In yet another example, the present disclosure relates to a method including receiving a subset of data corresponding to a layer of a neural network. The method may further include prior to performing any matrix operations using the subset of the data, using a processor, scaling the subset of the data by a scaling factor to generate a scaled subset of data. The method may further include using the processor, quantizing the scaled subset of the data to generate a scaled and quantized subset of data. The method may further include using the processor, quantizing the subset of the data to generate a quantized subset of data. The method may further include determining a first quantization error associated with the scaled and quantized subset of data and determining a second quantization error associated with the quantized subset of data. The method may further include if the first quantization error is greater than or equal to the second quantization error, then performing the matrix operations using the quantized subset of data to generate a first subset of results of the matrix operations. The method may further include if the first quantization error is lower than the second quantization error, then performing the matrix operations using the scaled and quantized subset of the data to generate a second subset of results of the matrix operations and descaling the second subset of the results of the matrix operations, by multiplying the second subset of the results of the matrix operations with an inverse of the scaling factor, to generate a descaled subset of results of the matrix operations.
[0006] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] The present disclosure is illustrated by way of example and is not limited by the accompanying figures, in which like references indicate similar elements. Elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale.
[0008] FIG. 1 is a block diagram of a neural network processor with scaled quantization in accordance with one example;
[0009] FIG. 2 is a diagram showing scaling and quantization;
[00010] FIG. 3 is a block diagram of a system for scaled quantization in accordance with one example;
[00011] FIG. 4 shows a flow diagram of a method for performing scaled quantization in accordance with one example; and
[00012] FIG. 5 shows a flow diagram of a method for selectively performing scaled quantization in accordance with one example.
DETAILED DESCRIPTION
[00013] Examples disclosed in the present disclosure relate to systems, methods, and components for implementing neural network based processing. Certain examples relate to processing layers of Convolutional Neural Networks (CNNs) using scaled quantization. Certain examples relate to processing layers of CNNs using a neural network processor. A neural network processor may be implemented using any of Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Erasable and/or Complex programmable logic devices (PLDs), Programmable Array Logic (PAL) devices, and Generic Array Logic (GAL) devices. Neural network processors may also be implemented using a CPU, a GPU, a combination of CPUs and GPUs, or a combination of any of the programmable hardware, CPUs, and GPUs. An image file may be used to configure or re-configure FPGAs. The image file or similar file or program may be delivered via a network link or a local link (e.g., PCIe) from a host CPU. Information included in an image file can be used to program hardware blocks of a node (e.g., logic blocks and reconfigurable interconnects of an FPGA) to implement desired functionality. Desired functionality can be implemented to support any service that can be offered via a combination of computing, networking, and storage resources, such as via a data center or other infrastructure for delivering a service.
[00014] The described aspects can also be implemented in cloud computing environments. Cloud computing may refer to a model for enabling on-demand network access to a shared pool of configurable computing resources. For example, cloud computing can be employed in the marketplace to offer ubiquitous and convenient on-demand access to the shared pool of configurable computing resources. The shared pool of configurable computing resources can be rapidly provisioned via virtualization and released with low management effort or service provider interaction, and then scaled accordingly. A cloud computing model can be composed of various characteristics such as, for example, on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, and so forth. A cloud computing model may be used to expose various service models, such as, for example, Hardware as a Service ("HaaS"), Software as a Service ("SaaS"), Platform as a Service ("PaaS"), and Infrastructure as a Service ("IaaS"). A cloud computing model can also be deployed using different deployment models such as private cloud, community cloud, public cloud, hybrid cloud, and so forth.
[00015] Machine learning services, such as those based on Recurrent Neural
Networks (RNNs), Long Short Term Memory (LSTM) neural networks, or Gated
Recurrent Units (GRUs) may be implemented using the systems and nodes described in this disclosure. In one example, the service-related content or other information such as words, sentences, images, videos, or other such content/information may be translated into a vector representation.
[00016] In one example, the neural network model may comprise many layers and each layer may be encoded as matrices or vectors of weights expressed in the form of coefficients or constants that have been obtained via training of a neural network. GPUs or programmable hardware logic blocks in the nodes may process the matrices or vectors to perform various operations, including multiply, add, and other operations against input vectors representing encoded information related to the service.
[00017] Taking the LSTM example, an LSTM network may comprise a sequence of repeating RNN layers or other types of layers. Each layer of the LSTM network may consume an input at a given time step, e.g., a layer’s state from a previous time step, and may produce a new set of outputs or states. In case of using the LSTM, a single chunk of content may be encoded into a single vector or multiple vectors. As an example, a word or a combination of words (e.g., a phrase, a sentence, or a paragraph) may be encoded as a single vector. Each chunk may be encoded into an individual layer (e.g., a particular time step) of an LSTM network. An LSTM layer may be described using a set of equations, such as the ones below:
o_t = σ(W_xo x_t + W_ho h_(t-1) + W_co c_t + b_o)
h_t = o_t tanh(c_t)
[00018] In this example, inside each LSTM layer the inputs and hidden states may be processed using a combination of vector operations (e.g., dot-product, inner product, or vector addition) and non-linear functions (e.g., sigmoids and hyperbolic tangents). In certain cases, the most compute intensive operations may arise from the dot products, which may be implemented using dense matrix-vector and matrix-matrix multiplication routines. In one example, the processing of the vector operations and non-linear functions may be performed in parallel.
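For concreteness, a minimal NumPy sketch of the two equations reproduced above (the output gate and the hidden state) follows; the dimensions, the treatment of W_co as a dense matrix, and all variable names are assumptions for illustration, and the dense matrix-vector products are exactly the operations a matrix-vector multiplier would accelerate.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_output(x_t, h_prev, c_t, W_xo, W_ho, W_co, b_o):
    """Evaluate o_t and h_t from the equations above using dense matrix-vector products."""
    o_t = sigmoid(W_xo @ x_t + W_ho @ h_prev + W_co @ c_t + b_o)
    h_t = o_t * np.tanh(c_t)
    return o_t, h_t

# Toy dimensions; real models use much larger, tiled matrices.
d = 4
rng = np.random.default_rng(0)
x_t, h_prev, c_t = (rng.normal(size=d) for _ in range(3))
W_xo, W_ho, W_co = (rng.normal(size=(d, d)) for _ in range(3))
b_o = np.zeros(d)
print(lstm_output(x_t, h_prev, c_t, W_xo, W_ho, W_co, b_o))
```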
[00019] Values corresponding to the training data, including vector data, may be represented in a number format. Floating point representation for the values of the vector data is expensive because each individual point value has an exponent specific to that point value. The alternative may be a fixed point representation. Performance, energy usage, and storage requirements can be improved through the use of reduced precision formats to implement artificial neural networks. Such formats can represent floating point numbers using a small (e.g. 3, 4, or 5-bit) mantissa and an exponent shared by two or more floating point numbers. Neural networks that use reduced precision formats may be referred to as quantized neural networks.
[00020] In one example, fixed point representation may use a set number of integer bits and fractional bits to express numbers. Fixed point can be efficiently processed in hardware with integer arithmetic, which may make it a preferred format when applicable. Fixed point format may be represented as qX.Y, where X is the number of integer bits and Y is the number of fractional bits. Block floating point (BFP) may apply a shared exponent to a block of fixed point numbers; for example, a vector or matrix. The shared exponent may allow a significantly higher dynamic range for the block, although individual block members have a fixed range with respect to each other.
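As a rough software illustration of a shared-exponent block format (the exponent choice, rounding, and clipping below are simplifying assumptions, not the exact hardware encoding):

import numpy as np

def bfp_quantize(block, b=4):
    """Quantize a block of floats to integer mantissas with b magnitude bits
    and a single exponent shared by the whole block."""
    max_abs = float(np.max(np.abs(block)))
    if max_abs == 0.0:
        return np.zeros_like(block, dtype=np.int64), 0
    # Pick the shared exponent so the largest magnitude fits in b mantissa bits.
    shared_exp = int(np.floor(np.log2(max_abs))) + 1 - b
    mantissas = np.clip(np.round(block / 2.0 ** shared_exp),
                        -(2 ** b - 1), 2 ** b - 1).astype(np.int64)
    return mantissas, shared_exp

def bfp_dequantize(mantissas, shared_exp):
    return mantissas.astype(np.float64) * 2.0 ** shared_exp

v = np.array([0.72, -0.11, 0.05, 0.33])
m, e = bfp_quantize(v, b=4)
print(m, e, bfp_dequantize(m, e))   # integer mantissas, shared exponent, reconstruction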
[00021] Quantized neural networks can improve the latency and throughput of running neural networks by reducing computation and memory demands. The use of reduced precision formats (e.g., any of a reduced precision floating point format, the Block Floating Point (BFP) format, or integers), however, can create issues when training neural networks. Parameters that may be suitable at the beginning of training may become suboptimal as the neural network converges. In addition, many neural network approaches typically use full precision floating point (e.g., 32- or 16-bit floating point numbers). As an example, certain software implementations of neural networks may use full precision floating point numbers. On the other hand, certain hardware implementations of neural networks may use reduced precision numbers. Because the underlying implementations of software and hardware-accelerated neural networks are different, small differences in calculations can arise that can cause errors over time.
[00022] The core of many neural network algorithms is a matrix multiplication operation: y = xW, where x represents the input activations and W is a weight matrix. In one example, quantization is applied to the input activations and the weight matrix to reduce the hardware costs of computing the matrix multiplication. Thus, the matrix multiplication operation may be y = Q(x)Q(W). With the unquantized matrix multiplication y = xW, an arbitrary scalar value a may be factored out without affecting the result, e.g., y = a·x·(W/a). However, if quantization is applied to this form, then the result of the quantized matrix multiplication y = a·Q(x)·Q(W/a) is affected in two ways. First, the weight values that are quantized are different (W/a rather than W), and thus they may lead to a different set of quantized points. Second, the high-precision scalar multiply after the matrix multiplication may also change the results. Certain examples in this disclosure relate to scaling weight values and other parameter values associated with a neural network prior to quantization. For example, the number 1.0011101101 × 2^6 may be scaled to the value 1.111 × 2^6 prior to quantization. The scaled value may be quantized, and operations may be performed in the quantized domain (e.g., training, inference). In some examples, a value may be partitioned into tiles, and a scaling operation can be performed for each tile.
[00023] FIG. 1 is a block diagram of a neural network processor 100 in accordance with one example. Each neural network processor 100 may include an Input Message Processor (IMP) 104 for receiving messages from other processors and an Output Message Processor (OMP) 106 for processing outgoing messages to other processors or
components. Such messages may be received and transmitted via network 102. Each neural network processor 100 may further include a matrix vector multiplier (MVM) 110 and two or more multifunction units (MFUs) (e.g., MFU[0] 140 and MFU[1] 160). Each neural network processor 100 may further include a matrix memory manager 170, a vector memory manager 180, a Vector DRAM 182, and a Matrix DRAM 184. In this example, the processor may accept off-chip messages containing auxiliary information such as control and scalar data and payload data (e.g., vectors, matrices, or other tensor data structures). In this example, the incoming messages may be handled by a lightweight input message processor (IMP) 104, which sends the vectors to vector memory manager 180. IMP 104 may send the matrices to matrix memory manager 170.
[00024] Each of vector data (e.g., data corresponding to activations) and matrix data (e.g., data corresponding to weights) may be scaled and quantized using vector scaling and quantization 192 and matrix scaling and quantization 194, respectively. Thus, vector data received from vector memory manager 180 may be in a higher precision format (e.g.,
FP16 or FP32) and vector scaling and quantization 192 may scale the higher precision vector data and then convert the scaled vector data from the higher precision format to a lower precision format (e.g., block floating point format). Similarly, matrix data received via network 102 or otherwise may be in a higher precision format (e.g., FP16 or FP32) and matrix scaling and quantization 194 may scale the higher precision matrix data and then convert the scaled matrix data from the higher precision format to a lower precision format (e.g., block floating point format). Because the matrix multiplication operations are more expensive in terms of resources and time, in one example, it may be advantageous to scale and quantize only the inputs to MVM 110. Thus, in the example shown in FIG. 1, only the inputs to MVM 110 are being scaled and quantized. The inputs to the MFUs are not being scaled and quantized. Alternatively, in another example, inputs to both MVM 110 and the MFUs may be scaled and quantized.
[00025] Quantization may involve mapping continuous or high-precision values onto a discrete, low-precision grid. If the original points are close to their mapped quantization values, then one expects that the resulting computations will be close to the original computations. In the present disclosure, the original values are scaled to better match a quantization grid. Thus, as shown in FIG. 2, in one example, the scaled quantization manager is configured to scale the values such that the largest value matches up with the largest quantization grid point. As shown in FIG. 2, as an example, when no scaling is applied, the top two quantization points (226 and 228) along the quantization grid are unused, causing the three input values (212, 214, and 216) to be put into two bins (222 and 224). Additional bins (226 and 228) along the quantization grid are left unused. When scaling is applied, the full quantization grid range is advantageously used; that in turn allows one to distinguish between the lowest two values after quantization. Thus, as shown in FIG. 2, with scaling, original value 212 is scaled to scaled value 232; original value 214 is scaled to scaled value 234; and original value 216 is scaled to scaled value 236. The scaled values are then quantized, resulting in quantized values 242, 244, and 248 along the quantization grid. Only one quantization bin (246) is left unused this time.
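A toy numeric illustration of this effect (the grid, the three input values, and the rounding rule below are hypothetical and do not reproduce the exact values of FIG. 2):

import numpy as np

grid_points = np.arange(8)              # a small uniform quantization grid: 0..7
values = np.array([0.9, 1.4, 4.6])      # hypothetical inputs, well below the grid top

def quantize_to_grid(v):
    return np.clip(np.round(v), grid_points.min(), grid_points.max())

# Without scaling, the top of the grid goes unused and the two lowest values collide.
q_plain = quantize_to_grid(values)              # -> [1. 1. 5.]

# Scale so the largest value lands on the largest grid point, then quantize.
scale = grid_points.max() / values.max()
q_scaled = quantize_to_grid(values * scale)     # -> [1. 2. 7.]

print(q_plain, q_scaled)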
[00026] In one example, the weight values represented in the matrix may be scaled such that the highest value matches up with the highest quantization point. For a quantized mantissa of b bits, the maximum mantissa value is 2^b - 1. In this example, the weight matrix W may be scaled using the equations below such that the largest absolute value becomes 2^b - 1:

m = max(abs(W))

a = m / (2^b - 1)
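A minimal sketch of this scaling step (the rounding to integers and the small example matrix are assumptions made for illustration):

import numpy as np

def scale_and_quantize_weights(W, b=4):
    """Scale W so that its largest magnitude maps to 2**b - 1, then round.
    Returns the integer weights and the factor a needed to undo the scaling."""
    m = np.max(np.abs(W))
    a = m / (2 ** b - 1)                 # a = m / (2^b - 1), as above
    W_scaled = W / a                     # largest absolute value becomes 2^b - 1
    return np.round(W_scaled).astype(np.int64), a

W = np.array([[0.31, -0.07],
              [0.12,  0.25]])
W_q, a = scale_and_quantize_weights(W, b=4)
print(W_q)          # integer weights within [-(2^b - 1), 2^b - 1]
print(W_q * a)      # multiplying back by a recovers an approximation of W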
[00027] For one example, Table 1 below shows results comparing validation accuracy on the ImageNet dataset for ResNet-50. As shown, with scaling and quantization, the top-1 accuracy is improved from 53.13% to 59.21% and the top-5 accuracy is improved from 76.49% to 81.36%.
Table 1

                            Top-1 accuracy    Top-5 accuracy
Quantized (no scaling)      53.13%            76.49%
Scaled and quantized        59.21%            81.36%
[00028] With continued reference to FIG. 1, each of the matrices may have an N by N size and each of the vectors may have a size of 1 by N. In this example, all instructions corresponding to neural network processor 100 may operate on native-sized data. Logical vectors and matrices corresponding to the applications handled by neural network processor 100 may often be larger than the native size; in these cases, the vectors and matrices may be broken up into native-sized tiles. In one example, the block size of the BFP format data may be equal to the native dimension. Therefore, each native 1 by N vector may have a shared exponent, and each row of an N by N matrix may have a shared exponent. Each of the vector data and the matrix data may have a two's complement mantissa portion, and the mantissa size for the vector data and the matrix data may be different. In one example, there may not be a need for the shared exponent. Instead, the weight values may be fixed to be within an integer range, e.g., -(2^b - 1) to 2^b - 1. This may further simplify the hardware used for performing the dot product operations.
[00029] Still referring to FIG. 1, MVM 110 may include a vector register file (VRF)
112, a matrix register file (MRF) 120, and tile engines (e.g., tile engines 114, 116, and 118). Tile engines may receive input matrix and input vector data from VRF 112. MVM 110 may further include precision format converters. In this example, a scaling operation can be performed for each tile. Thus, matrix scaling and quantization 194 may scale and quantize weight values for each of tile engines 114, 116, and 118 in parallel. Similarly, vector scaling and quantization 192 may scale and quantize activation values for each of tile engines 114, 116, and 118 in parallel. In one example, two internal BFP formats may be used by MVM 110 for expressing its input and output: BFP short for vector and matrix storage, and BFP long for accumulation. In one example of MVM 110, BFP short may use q1.15 fixed point values with a shared 5-bit exponent, and BFP long may use q34.40 fixed point values with a shared 5-bit exponent. In this example, the matrix-vector multiplication may result in BFP long, which may be converted back to a floating-point format as a final output stage. Thus, the example MVM 110 shown in FIG. 1 may include BFP to FP16 Converters 122, 124, and 126 at the output stages. Tile engines 114, 116, and 118 may, in parallel, provide outputs to the respective converters as shown in the example in FIG. 1. The outputs from the respective converters representing the results of the matrix operations performed by MVM 110 may be output to de-scalers 123, 125, and 127, respectively. In this example, each of de-scalers 123, 125, and 127 may be configured to multiply the respective result of the matrix operations with an inverse of the scaling factor used as part of scaling and quantization. The descaled results may then be further processed, including for example by MFU[0] 140.
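A sketch of the per-tile idea in software (the tile size, the integer quantizer, and the sequential loop standing in for the parallel tile engines are all simplifying assumptions):

import numpy as np

def quantize_tile(t, b=4):
    """Scale and quantize one native-sized tile with its own scaling factor."""
    m = np.max(np.abs(t))
    a = m / (2 ** b - 1) if m > 0 else 1.0
    return np.round(t / a).astype(np.int64), a

def tiled_mvm(W, x, tile=2, b=4):
    """Matrix-vector multiply where each weight tile and activation slice is
    scaled, quantized, multiplied in the integer domain, and then de-scaled."""
    n = W.shape[0]
    y = np.zeros(n)
    for r in range(0, n, tile):
        for c in range(0, n, tile):
            W_q, a_w = quantize_tile(W[r:r + tile, c:c + tile], b)
            x_q, a_x = quantize_tile(x[c:c + tile], b)
            # Integer dot products, then multiplication by the tile scales.
            y[r:r + tile] += (W_q @ x_q) * a_w * a_x
    return y

W = np.random.randn(4, 4)
x = np.random.randn(4)
print(tiled_mvm(W, x))   # approximates W @ x
print(W @ x)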
[00030] The matrix data may be communicated between Matrix DRAM 184 and Matrix Memory manager 170 using M number of channels. Vector memory manager 180 may move vector data over C number of channels.
[00031] With continued reference to FIG. 1, each MFU (e.g., MFU[0] 140 and MFU[1] 160) may include crossbars (e.g., crossbars labeled as xbars). MFU[0] 140 may support vector operations, such as vector-vector multiply and addition, a Sigmoid function, a TanH function, a softmax operation, a Rectified Linear Unit (ReLU) operation, and/or an activation block operation. Thus, as shown in FIG. 1, MFU[0] 140 may include crossbars (e.g., xbar 146, 148, and 150) that may stream a vector from its input bus through a pipelined sequence of operations. Thus, a vector may be received via a register file labeled MulVrf 142 or another register file labeled AsVrf[0] 144, and such vectors may be subjected to any of a multiply operation, an addition operation, or some other operation. MFU[0] 140 may include several hardware blocks for performing addition (e.g., 153, 157, and 161). MFU[0] 140 may also include several hardware blocks for performing multiplication (e.g., 152, 156, and 159). MFU[0] 140 may also include several hardware blocks for performing activation (e.g., 151, 154, and 158).
[00032] Still referring to FIG. 1, MFU[1] 160 may include crossbars (e.g., xbar 162, 163, and 164) that may allow MFU[1] 160 to receive outputs from MFU[0] 140 and perform additional operations on those outputs and any additional inputs received via ADD/SUB VRF 168. MFU[1] 160 may include several hardware blocks for performing addition (e.g., 169, 171, and 172). MFU[1] 160 may also include several hardware blocks for performing activation. The outputs from MFU[1] 160 received via C channels may be coupled via a multiplexing circuit 174 to vector memory manager 180. Although FIG. 1 shows a certain number of components of neural network processor 100 arranged in a certain manner, there could be more or fewer components arranged differently.
[00033] Neural network processor 100 may be used to enable issuance of instructions that can trigger millions of operations using a small number of instructions. As an example, Table 2 below shows instructions corresponding to a fully parameterized LSTM:
Table 2
[00034] Although Table 2 shows a certain number of instructions having a certain format, neural network processor 100 may execute more or fewer instructions having a different format to accomplish the same objectives.
[00035] Table 3 below shows how to compute a 1 X 1 convolution as part of a CNN evaluation.
Table 3
[00036] As shown in the table above, the number of iterations over a chain of instructions for the computation may be specified. Next, as needed, the native dimension of each instruction chain may be scaled by a column scaling factor. And after reading the vector data from the vector register file it may be multiplied with the weights retrieved from the matrix register file. After performing additional operations as required by the CNN evaluation, the output may be provided. As an example, a pointwise Rectified Linear Unit (ReLU) operation may be performed for each element of the vector data.
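For intuition only, the arithmetic of a 1 X 1 convolution followed by a pointwise ReLU can be sketched as a matrix multiply over the channel dimension (this models the math, not the processor's instruction chains or register files):

import numpy as np

def conv1x1_relu(x, W):
    """x: activations of shape (H, W, C_in); W: weights of shape (C_in, C_out).
    A 1x1 convolution is a matrix multiply applied at every spatial position."""
    h, w, c_in = x.shape
    y = x.reshape(-1, c_in) @ W                    # one matrix product per pixel
    return np.maximum(y, 0.0).reshape(h, w, -1)    # pointwise ReLU on each element

x = np.random.randn(3, 3, 8)      # hypothetical 3x3 feature map with 8 channels
W = np.random.randn(8, 16)        # hypothetical weights producing 16 channels
print(conv1x1_relu(x, W).shape)   # (3, 3, 16)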
[00037] Table 4 below shows how to compute an N X N convolution as part of a CNN evaluation. The instructions below that are similar to the 1 X 1 convolution are not described again. The Set2dWindows instruction may be used to set the total window size and then the SetIterations instruction may be used to slide that window across the input volume. The *_inc instructions (e.g., v_rd_inc and v_add_inc) may be used to increment the instruction's address based on the stride. As an example, a stride of 2 may result in skipping of every other vector in the vector register file that is used to store vector data for operations, such as addition.
Table 4
[00038] FIG. 3 is a block diagram of a system for scaled quantization in accordance with one example. System 300 may include a processor 310, a memory 320, input/output devices 340, display 350, and network interfaces 360 interconnected via bus system 302. Memory 320 may include input data 322, training data 324, training code 326, scaling and quantization (SQ) code 328, inference code 330, and evaluation code 332. Input data 322 may comprise data corresponding to images or other types of information that can be classified or otherwise processed using a neural network. Memory 320 may further include training data 324 that may include weights obtained by training the neural network using the higher precision numbers (e.g., floating point format numbers). Memory 320 may further include training code 326 comprising instructions configured to train a neural network, such as ResNet-50. Training code 326 may use the weights obtained by training the neural network using the higher precision numbers (e.g., floating point format numbers).
[00039] Scaling and quantization (SQ) code 328 may include instructions configured to scale and quantize input data 322 or training data 324. As described earlier, in one example, scaling may include multiplying the data that is in a higher precision format (e.g., FP32 or FP16) by a scaling factor. Quantizing may include converting the scaled values of the data from the higher precision format to a lower precision format (e.g., a lower precision floating point format, an integer, or a block floating point format).
[00040] With continued reference to FIG. 3, memory 320 may further include inference code 330 comprising instructions to perform inference using a trained neural network. Memory 320 may further include evaluation code 332 comprising instructions to evaluate the performance of the trained neural network in terms of the accuracy of inference. Although FIG. 3 shows a certain number of components of system 300 arranged in a certain way, additional or fewer components arranged differently may also be used. In addition, although memory 320 shows certain blocks of code, the functionality provided by this code may be combined or distributed. In addition, the various blocks of code may be stored in non-transitory computer-readable media, such as non-volatile media and/or volatile media. Non-volatile media include, for example, a hard disk, a solid state drive, a magnetic disk or tape, an optical disk or tape, a flash memory, an EPROM, NVRAM, PRAM, or other such media, or networked versions of such media. Volatile media include, for example, dynamic memory, such as, DRAM, SRAM, a cache, or other such media.
[00041] FIG. 4 is a flow chart 400 of a method in accordance with one example. The method may be implemented by processor 310 of FIG. 3. Alternatively, all or some of the steps of this method may be implemented by neural network processor 100. Step 410 may include receiving a subset of data corresponding to a layer of a neural network. In one example, the subset of the data is expressed in a higher precision format (e.g., FP32 or FP16). In one example, the higher precision format may be a floating point precision format. Prior to this step, another step may include memory 320 receiving input data 322 via another storage or via a network and storing it as part of input data 322.
[00042] Step 420 may include, prior to performing any matrix operations using the subset of the data, using the processor, scaling the subset of the data by a scaling factor to generate a scaled subset of the data. This step may be performed using SQ code 328. Alternatively, this step may be performed by vector scaling and quantization 192 and/or by matrix scaling and quantization 194.
[00043] Step 430 may include, using the processor, quantizing the scaled subset of the data to generate a scaled and quantized subset of data. This step may be performed using SQ code 328. Alternatively, this step may be performed by vector scaling and quantization 192 and/or by matrix scaling and quantization 194. In one example, this step may include converting the scaled subset of the data from a higher precision format to a lower precision format.
[00044] Step 440 may include performing the matrix operations using the scaled and quantized subset of the data to generate a subset of results of the matrix operations. This step may be performed using training code 326 or inference code 330 when executed by processor 310 of FIG. 3. Alternatively, this step may be performed by MVM 110 of FIG.
1 as explained earlier.
[00045] Step 450 may include descaling the subset of the results of the matrix operations, by multiplying the subset of the results of the matrix operations with an inverse of the scaling factor, to generate a descaled subset of results of the matrix operations. This step may be performed using SQ code 328 when executed by processor 310 of FIG. 3. Alternatively, this step may be performed by de-scalers 123, 125, and 127 of FIG. 1. In this example, each of de-scalers 123, 125, and 127 may be configured to multiply the respective result of the matrix operations with an inverse of the scaling factor used as part of the scaling operation. Although FIG. 4 describes several steps performed in a certain order, additional or fewer steps may be performed in a different order. In addition, each of the steps of flowchart 400 may be performed by neural network processor 100, or by processor 310, or by a combination of the two.
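Putting steps 410 through 450 together, a minimal software sketch of the method of FIG. 4 might look as follows (uniform integer quantization stands in here for whichever reduced precision format is used, and scaling both the weight and activation subsets is an illustrative choice):

import numpy as np

def quantize(v, b=4):
    """Round to integers representable with b magnitude bits (an assumption)."""
    return np.clip(np.round(v), -(2 ** b - 1), 2 ** b - 1).astype(np.int64)

def scale_quantize(v, b=4):
    # Step 420: scale by a scaling factor chosen so the largest magnitude
    # reaches the top of the quantization range.
    s = (2 ** b - 1) / np.max(np.abs(v))
    # Step 430: quantize the scaled data.
    return quantize(v * s, b), s

def scaled_quantized_matmul(x, W, b=4):
    W_q, s_w = scale_quantize(W, b)      # weight subset
    x_q, s_x = scale_quantize(x, b)      # activation subset (same steps)
    # Step 440: perform the matrix operations on the scaled, quantized data.
    y_q = x_q @ W_q
    # Step 450: descale by multiplying with the inverse of the scaling factors.
    return y_q * (1.0 / s_w) * (1.0 / s_x)

x = np.random.randn(1, 8)
W = np.random.randn(8, 4)
print(scaled_quantized_matmul(x, W))
print(x @ W)                             # higher precision reference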
[00046] In one example, instead of searching for an optimum scaling factor, scaling may be applied selectively. For each block of data that may be subjected to scaling, the quantization error with and without applying a scaling factor is calculated, and the case that has less error may be used. Different types of error metrics (e.g., L1 error, L2 error, max error, relative error, etc.) may be used to compare the quantization error. While this calculation requires quantizing the weights twice, the process can be done entirely offline. [00047] FIG. 5 is a flow chart 500 of a method in accordance with one example. The method may be implemented by processor 310 of FIG. 3. Alternatively, all or some of the steps of this method may be implemented by neural network processor 100. Step 510 may include receiving a subset of data corresponding to a layer of a neural network. In one example, the subset of the data is expressed in a higher precision format (e.g., FP32 or FP16). In one example, the higher precision format may be a floating point precision format. Prior to this step, another step may include memory 320 receiving input data 322 via another storage or via a network and storing it as part of input data 322.
[00048] Step 520 may include, prior to performing any matrix operations using the subset of the data, using the processor, scaling the subset of the data by a scaling factor to generate a scaled subset of the data. This step may be performed using SQ code 328 when executed by processor 310 of FIG. 3. Alternatively, this step may be performed by vector scaling and quantization 192 and/or by matrix scaling and quantization 194.
[00049] Step 530 may include, using the processor, quantizing the scaled subset of the data to generate a scaled and quantized subset of data. This step may be performed using SQ code 328 when executed by processor 310 of FIG. 3. Alternatively, this step may be performed by vector scaling and quantization 192 and/or by matrix scaling and quantization 194. In one example, this step may include converting the scaled subset of the data from a higher precision format to a lower precision format.
[00050] Step 540 may include using the processor, quantizing the subset of the data to generate a quantized subset of data. This step may be performed using SQ code 328 when executed by processor 310 of FIG. 3. Alternatively, this step may be performed by vector scaling and quantization 192 and/or by matrix scaling and quantization 194. In one example, this step may include converting the subset of the data from a higher precision format to a lower precision format.
[00051] Step 550 may include determining a first quantization error associated with the scaled and quantized subset of data and determining a second quantization error associated with the quantized subset of data. In this example, the quantization error may be the L1 error, which is the sum of the absolute differences between the original values and the quantized values. This step may be performed using SQ code 328 when executed by processor 310 of FIG. 3.
[00052] Step 560 may include, if the first quantization error is greater than or equal to the second quantization error, performing the matrix operations using the quantized subset of data to generate a first subset of results of the matrix operations. This step may be performed using SQ code 328 when executed by processor 310 of FIG. 3. Alternatively, this step may be performed using neural network processor 100.
[00053] Step 570 may include if the first quantization error is lower than the second quantization error, performing the matrix operations using the scaled and quantized subset of the data to generate a second subset of results of the matrix operations and, using the processor, descaling the second subset of the results of the matrix operations, by multiplying the second subset of the results of the matrix operations with an inverse of the scaling factor, to generate a descaled subset of results of the matrix operations. This step may be performed using SQ code 328 when executed by processor 310 of FIG. 3.
Alternatively, this step may be performed using neural network processor 100. Although FIG. 5 describes several steps performed in a certain order, additional or fewer steps may be performed in a different order. In addition, each of the steps of flowchart 500 may be performed by neural network processor 100, or by processor 310, or by a combination of the two.
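The selective use of scaling described in paragraph [00046] and in the steps of FIG. 5 can be sketched as follows (the integer quantizer, the L1 metric, and the example block of weights are illustrative assumptions):

import numpy as np

def quantize(v, b=4):
    return np.clip(np.round(v), -(2 ** b - 1), 2 ** b - 1)

def choose_scaling(W, b=4):
    """Quantize a block with and without scaling (offline) and keep whichever
    reconstruction has the smaller L1 error versus the original values."""
    s = (2 ** b - 1) / np.max(np.abs(W))
    plain = quantize(W, b)                   # quantized without scaling
    scaled = quantize(W * s, b) / s          # scaled, quantized, then de-scaled
    err_scaled = np.abs(W - scaled).sum()    # first quantization error (step 550)
    err_plain = np.abs(W - plain).sum()      # second quantization error (step 550)
    # Steps 560/570: use scaling only when it yields the lower error.
    return ('scaled', s) if err_scaled < err_plain else ('unscaled', 1.0)

W = np.random.randn(4, 4) * 0.3              # a hypothetical block of weights
print(choose_scaling(W, b=4))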
[00054] In conclusion, the present disclosure relates to a method including receiving a subset of data corresponding to a layer of a neural network. The method may further include prior to performing any matrix operations using the subset of the data, using a processor, scaling the subset of the data by a scaling factor to generate a scaled subset of data. The method may further include using the processor, quantizing the scaled subset of the data to generate a scaled and quantized subset of data. The method may further include performing the matrix operations using the scaled and quantized subset of the data to generate a subset of results of the matrix operations. The method may further include descaling the subset of the results of the matrix operations, by multiplying the subset of the results of the matrix operations with an inverse of the scaling factor, to generate a descaled subset of results of the matrix operations.
[00055] The scaled and the quantized subset of the data may comprise scaled and quantized activation values corresponding to the layer of the neural network. The scaled and the quantized subset of the data may comprise scaled and quantized weight values corresponding to the layer of the neural network.
[00056] The subset of the data may be expressed in a first precision format, where the first precision format comprises floating point format. The quantized subset of the data may be expressed in a second precision format, and the second precision format may comprise a precision format selected from one of an integer format, a reduced floating point precision format, or a block floating point format. [00057] The scaled and quantized subset of the data may comprise scaled and quantized matrix data corresponding to weight values for the layer of the neural network, and the performing the matrix operations may comprise performing matrix-vector multiplication operations using the scaled and quantized matrix data and vector data corresponding to activation values for the layer of the neural network. The scaled and quantized subset of the data may comprise scaled and quantized matrix data corresponding to weight values for the layer of the neural network and scaled and quantized vector data corresponding to activation values for the layer of the neural network, and the performing the matrix operations may comprise performing matrix-vector multiplication operations using the scaled and quantized matrix data and the scaled and quantized vector data.
[00058] In another example, the present disclosure relates to a processor configured to receive a subset of data corresponding to a layer of a neural network. The processor may further be configured to prior to performing any matrix operations using the subset of the data, scale the subset of the data by a scaling factor to generate a scaled subset of data.
The processor may further be configured to quantize the scaled subset of the data to generate a scaled and quantized subset of data. The processor may further be configured to perform the matrix operations using the scaled and quantized subset of the data and generate a subset of results of the matrix operations. The processor may further be configured to descale the subset of the results of the matrix operations, by multiplying the subset of the results of the matrix operations with an inverse of the scaling factor, to generate a descaled subset of results of the matrix operations.
[00059] The scaled and the quantized subset of the data may comprise scaled and quantized activation values corresponding to the layer of the neural network. The scaled and the quantized subset of the data may comprise scaled and quantized weight values corresponding to the layer of the neural network.
[00060] The subset of the data may be expressed in a first precision format, where the first precision format comprises floating point format. The quantized subset of the data may be expressed in a second precision format, and the second precision format may comprise a precision format selected from one of an integer format, a reduced floating point precision format, or a block floating point format.
[00061] The scaled and quantized subset of the data may comprise scaled and quantized matrix data corresponding to weight values for the layer of the neural network, and the processor may further be configured to perform matrix-vector multiplication operations using the scaled and quantized matrix data and vector data corresponding to activation values for the layer of the neural network. The scaled and quantized subset of the data may comprise quantized matrix data corresponding to weight values for the layer of the neural network and scaled and quantized vector data corresponding to activation values for the layer of the neural network, and the processor may further be configured to perform matrix-vector multiplication operations using the scaled and quantized matrix data and the scaled and quantized vector data.
[00062] In yet another example, the present disclosure relates to a method including receiving a subset of data corresponding to a layer of a neural network. The method may further include prior to performing any matrix operations using the subset of the data, using a processor, scaling the subset of the data by a scaling factor to generate a scaled subset of data. The method may further include using the processor, quantizing the scaled subset of the data to generate a scaled and quantized subset of data. The method may further include using the processor, quantizing the subset of the data to generate a quantized subset of data. The method may further include determining a first quantization error associated with the scaled and quantized subset of data and determining a second quantization error associated with the quantized subset of data. The method may further include if the first quantization error is greater than or equal to the second quantization error, then performing the matrix operations using the quantized subset of data to generate a first subset of results of the matrix operations. The method may further include if the first quantization error is lower than the second quantization error, then performing the matrix operations using the scaled and quantized subset of the data to generate a second subset of results of the matrix operations and descaling the second subset of the results of the matrix operations, by multiplying the second subset of the results of the matrix operations with an inverse of the scaling factor, to generate a descaled subset of results of the matrix operations.
[00063] The scaled and the quantized subset of the data may comprise scaled and quantized activation values corresponding to the layer of the neural network. The scaled and the quantized subset of the data may comprise scaled and quantized weight values corresponding to the layer of the neural network.
[00064] The subset of the data may be expressed in a first precision format, where the first precision format comprises floating point format. The quantized subset of the data may be expressed in a second precision format, and the second precision format may comprise a precision format selected from one of an integer format, a reduced floating point precision format, or a block floating point format. The method may further comprise selecting the scaling factor for the subset of the data.
[00065] It is to be understood that the methods, modules, and components depicted herein are merely exemplary. Alternatively, or in addition, the functionally described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-Programmable Gate Arrays (FPGAs), Application-Specific
Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on- a-Chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc. In an abstract, but still definite sense, any arrangement of components to achieve the same functionality is effectively "associated" such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as "associated with" each other such that the desired functionality is achieved, irrespective of architectures or inter-medial components. Likewise, any two components so associated can also be viewed as being "operably connected," or "coupled," to each other to achieve the desired functionality.
[00066] The functionality associated with some examples described in this disclosure can also include instructions stored in a non-transitory media. The term "non-transitory media" as used herein refers to any media storing data and/or instructions that cause a machine to operate in a specific manner. Exemplary non-transitory media include non-volatile media and/or volatile media. Non-volatile media include, for example, a hard disk, a solid state drive, a magnetic disk or tape, an optical disk or tape, a flash memory, an EPROM, NVRAM, PRAM, or other such media, or networked versions of such media. Volatile media include, for example, dynamic memory, such as DRAM, SRAM, a cache, or other such media. Non-transitory media is distinct from, but can be used in conjunction with, transmission media. Transmission media is used for transferring data and/or instructions to or from a machine. Exemplary transmission media include coaxial cables, fiber-optic cables, copper wires, and wireless media, such as radio waves.
[00067] Furthermore, those skilled in the art will recognize that boundaries between the functionality of the above described operations are merely illustrative. The
functionality of multiple operations may be combined into a single operation, and/or the functionality of a single operation may be distributed in additional operations. Moreover, alternative embodiments may include multiple instances of a particular operation, and the order of operations may be altered in various other embodiments.
[00068] Although the disclosure provides specific examples, various modifications and changes can be made without departing from the scope of the disclosure as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure. Any benefits, advantages, or solutions to problems that are described herein with regard to a specific example are not intended to be construed as a critical, required, or essential feature or element of any or all the claims.
[00069] Furthermore, the terms "a" or "an," as used herein, are defined as one or more than one. Also, the use of introductory phrases such as "at least one" and "one or more" in the claims should not be construed to imply that the introduction of another claim element by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an." The same holds true for the use of definite articles.
[00070] Unless stated otherwise, terms such as "first" and "second" are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements.

Claims

1. A method comprising:
receiving a subset of data corresponding to a layer of a neural network;
prior to performing any matrix operations using the subset of the data, using a processor, scaling the subset of the data by a scaling factor to generate a scaled subset of data;
using the processor, quantizing the scaled subset of the data to generate a
scaled and quantized subset of data;
performing the matrix operations using the scaled and quantized subset of the data to generate a subset of results of the matrix operations; and descaling the subset of the results of the matrix operations, by multiplying the subset of the results of the matrix operations with an inverse of the scaling factor, to generate a descaled subset of results of the matrix operations.
2. The method of claim 1, wherein the scaled and the quantized subset of the data comprises scaled and quantized activation values corresponding to the layer of the neural network.
3. The method of claim 1, wherein the scaled and the quantized subset of the data comprises scaled and quantized weight values corresponding to the layer of the neural network.
4. The method of claim 1, wherein the subset of the data is expressed in a first precision format, and wherein the first precision format comprises floating point format.
5. The method of claim 1, wherein the quantized subset of the data is expressed in a second precision format, and wherein the second precision format comprises a precision format selected from one of an integer format, a reduced floating point precision format, or a block floating point format.
6. The method of claim 1, wherein the scaled and quantized subset of the data comprises scaled and quantized matrix data corresponding to weight values for the layer of the neural network, and wherein the performing the matrix operations comprises performing matrix-vector multiplication operations using the scaled and quantized matrix data and vector data corresponding to activation values for the layer of the neural network.
7. The method of claim 1, wherein the scaled and quantized subset of the data comprises scaled and quantized matrix data corresponding to weight values for the layer of the neural network and scaled and quantized vector data corresponding to activation values for the layer of the neural network, and wherein the performing the matrix operations comprises performing matrix- vector multiplication operations using the scaled and quantized matrix data and the scaled and quantized vector data.
8. A processor configured to:
receive a subset of data corresponding to a layer of a neural network;
prior to performing any matrix operations using the subset of the data, scale the subset of the data by a scaling factor to generate a scaled subset of data; quantize the scaled subset of the data to generate a scaled and quantized subset of data;
perform the matrix operations using the scaled and quantized subset of the data and generate a subset of results of the matrix operations; and descale the subset of the results of the matrix operations, by multiplying the subset of the results of the matrix operations with an inverse of the scaling factor, to generate a descaled subset of results of the matrix operations.
9. The processor of claim 8, wherein the scaled and quantized subset of the data comprises scaled and quantized activation values corresponding to the layer of the neural network.
10. The processor of claim 8, wherein the scaled and quantized subset of the data comprises scaled and quantized weight values corresponding to the layer of the neural network.
11. The processor of claim 8, wherein the subset of the data is expressed in a first precision format, and wherein the first precision format comprises floating point format.
12. The processor of claim 8, wherein the scaled and quantized subset of the data is expressed in a second precision format, and wherein the second precision format comprises a precision format selected from one of an integer format, a reduced floating point precision format, or a block floating point format.
13. The processor of claim 8, wherein the scaled and quantized subset of the data comprises scaled and quantized matrix data corresponding to weight values for the layer of the neural network, wherein the processor is further configured to perform matrix-vector multiplication operations using the scaled and quantized matrix data and vector data corresponding to activation values for the layer of the neural network.
14. The processor of claim 8, wherein the scaled and quantized subset of the data comprises quantized matrix data corresponding to weight values for the layer of the neural network and scaled and quantized vector data corresponding to activation values for the layer of the neural network, and wherein the processor is further configured to perform matrix-vector multiplication operations using the scaled and quantized matrix data and the scaled and quantized vector data.
15. A method comprising:
receiving a subset of data corresponding to a layer of a neural network;
prior to performing any matrix operations using the subset of the data, using a processor, scaling the subset of the data by a scaling factor to generate a scaled subset of data;
using the processor, quantizing the scaled subset of the data to generate a
scaled and quantized subset of data;
using the processor, quantizing the subset of the data to generate a quantized subset of data;
determining a first quantization error associated with the scaled and quantized subset of data and determining a second quantization error associated with the quantized subset of data;
if the first quantization error is greater than or equal to the second quantization error, then performing the matrix operations using the quantized subset of data to generate a first subset of results of the matrix operations; and if the first quantization error is lower than the second quantization error, then performing the matrix operations using the scaled and quantized subset of the data to generate a second subset of results of the matrix operations and descaling the second subset of the results of the matrix operations, by multiplying the second subset of the results of the matrix operations with an inverse of the scaling factor, to generate a descaled subset of results of the matrix operations.
EP20711689.8A 2019-02-25 2020-02-11 Neural network layer processing with scaled quantization Pending EP3931758A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16/284,407 US11544521B2 (en) 2019-02-25 2019-02-25 Neural network layer processing with scaled quantization
PCT/US2020/017799 WO2020176248A1 (en) 2019-02-25 2020-02-11 Neural network layer processing with scaled quantization

Publications (1)

Publication Number Publication Date
EP3931758A1 true EP3931758A1 (en) 2022-01-05

Family

ID=69844894

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20711689.8A Pending EP3931758A1 (en) 2019-02-25 2020-02-11 Neural network layer processing with scaled quantization

Country Status (3)

Country Link
US (1) US11544521B2 (en)
EP (1) EP3931758A1 (en)
WO (1) WO2020176248A1 (en)


Also Published As

Publication number Publication date
US20200272881A1 (en) 2020-08-27
US11544521B2 (en) 2023-01-03
WO2020176248A1 (en) 2020-09-03

