GB2619919A - Hardware implementation of an attention-based neural network
- Publication number: GB2619919A
- Application number: GB2208959.3A (GB202208959A)
- Authority: GB (United Kingdom)
- Prior art keywords: attention; neural network; wise; input; padding
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06N3/0455: Auto-encoder networks; Encoder-decoder networks
- G06N3/063: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
- G06N3/0464: Convolutional networks [CNN, ConvNet]
- G06N3/08: Learning methods
- G06N3/048: Activation functions
(All within G06N3/00, Computing arrangements based on biological models; G06N3/02, Neural networks; G06N3/04, Architecture, e.g. interconnection topology.)
Abstract
Computer-implemented method for selecting numerical formats for use in configuring a hardware implementation of an attention-based neural network. A representation of the neural network is implemented as a test network. A dataset of test input sequences for the neural network is obtained, including sequences of varying length. Each test input sequence is padded with padding values to a fixed length. For each padded input sequence, a padding mask is generated identifying the part of the padded input sequence containing the padding values. An attention mask is generated from each padding mask, using an outer product operation. The padded input sequences and attention masks are processed through the test network. Statistics are collected describing ranges of values, obtained during the processing, for at least two layers of the neural network. Numerical formats are selected for the layers based on the collected statistics. The neural network may comprise a transformer. Each attention mask may comprise zeros in locations corresponding to elements of the respective input sequence and large negative values in locations corresponding to the padding values.
Description
Intellectual Property Office Application No. GB2208959.3 RTM Date: 21 November 2022. The following terms are registered trade marks and should be read as such wherever they occur in this document: OpenCL, TensorFlow, Verilog. Intellectual Property Office is an operating name of the Patent Office. www.gov.uk/ipo
HARDWARE IMPLEMENTATION OF AN ATTENTION-BASED NEURAL NETWORK
BACKGROUND
[1] "Attention" refers to a technique or structural configuration that allows a neural network to focus on a certain part (or certain parts) of its input. Attention can be used to characterise relationships between different parts of different data. Applications of attention include, but are not limited to, natural language processing (NLP) and computer vision. In NLP, for example, attention mechanisms may enable a neural network model to attend to certain words in a sentence. In computer vision, attention may enable the neural network to attend to certain portions of a scene, for example.
[2] Attention mechanisms can be categorised into two broad groups:
- "Self-attention" manages and quantifies the interdependence among a set of input elements themselves.
- "Cross attention" (also known as general attention) manages and quantifies the interdependence between two sets of input elements.
[003] These different types of attention are used differently by different neural network architectures. In NLP, for instance, self-attention can be used by itself to understand the context of a sentence. It is applied in this way in Google's bidirectional encoder representations from transformers (BERT) technology.
[004] In applications such as machine translation, self-attention and cross attention may be applied together, to allow the network to focus on different parts of an input sentence in an input language, and to establish relationships between parts of the input sentence and the target sentence in the target language.
[005] Transformer networks are currently a leading example of attention-based networks. The transformer architecture was introduced in Vaswani et al. ("Attention is all you need", in Advances in Neural Information Processing Systems 30 (NIPS) 2017, https://arxiv.org/abs/1706.03762). The transformer model architecture was proposed as an alternative to the use of recurrence for sequence modelling. The original architecture was based around an encoder stack and a decoder stack, each of which is composed of multiple layers. However, more generally, transformer networks can be built around various configurations of encoder stack and/or decoder stack, such as:
- Single encoder stack, multiple decoder stacks - see Vaswani et al.;
- Single encoder stack, single decoder stack - for example, the object detection transformer (DETR) network;
- Single encoder stack - examples including BERT, vision transformer, and word language model;
- Multiple decoder stacks - for example, the generative pre-trained transformer (GPT) 1/2/3 family of models.
[006] Transformer networks have proven to offer a powerful attention-based architecture, with state-of-the-art accuracy, across multiple modalities and tasks. These include, for 2-D images: image classification, object detection, action recognition, segmentation, super-resolution, enhancement, and colorization; for video: activity recognition and video forecasting (a type of time series forecasting); for 3D representations, such as meshes or point clouds: classification and segmentation; for text: language modelling and generation, next sentence prediction, classification, and question-answering; and for audio: speech recognition and voice synthesis. There are also multi-modal applications, where inputs and outputs come from different modalities. Examples in this area include visual question answering, reasoning, and image captioning.
SUMMARY
[007] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
[008] It would be desirable to be able to implement an attention-based neural network efficiently in hardware.
[009] The invention is defined by the claims.
[0010] A computer-implemented method for selecting numerical formats is disclosed. The numerical formats are suitable for use in configuring a hardware implementation of an attention-based neural network. A dataset of test input sequences for the neural network is obtained. Each test input sequence is padded with padding values. For each padded input sequence, a padding mask is generated identifying the part of the padded input sequence that contains the padding values. An attention mask is generated from each padding mask, using an outer product operation. The padded input sequences and attention masks are processed through the neural network. During the processing, statistics are collected, describing ranges of values obtained at various layers of the neural network. Numerical formats are selected for the various layers based on the collected statistics.
[0011] According to one aspect, there is provided a computer-implemented method for selecting numerical formats for use in configuring a hardware implementation of an attention-based neural network, the method comprising: obtaining a representation of the attention-based neural network; implementing the representation as a test neural network; obtaining a dataset of first test input sequences for the attention-based neural network, wherein the dataset includes first test input sequences of varying length; padding each first test input sequence with padding values to produce a respective first padded input sequence of a first fixed length; generating, for each first padded input sequence, a respective first padding mask identifying the part of the first padded input sequence that contains the padding values; generating a first attention mask from each first padding mask, wherein the generating comprises an outer product operation applied to the first padding mask; processing the first padded input sequences and the first attention masks through the test neural network; collecting statistics describing ranges of values obtained during said processing, wherein the statistics describe ranges of values for at least two different layers of the attention-based neural network; and selecting numerical formats for the at least two different layers based on the collected statistics.
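By way of illustration only (this is not the claimed method itself), the following Python sketch shows one way that range statistics might be gathered and per-layer formats chosen. The test_net callable, the dictionary of per-layer activations it returns, and the block fixed-point selection rule are all assumptions made for the example.

```python
import numpy as np

def collect_range_stats(test_net, padded_inputs, attention_masks):
    """Run the test network over every (padded input, attention mask) pair and
    record the min/max value seen at each instrumented layer."""
    stats = {}                                   # layer name -> [min, max]
    for x, mask in zip(padded_inputs, attention_masks):
        for name, tensor in test_net(x, mask).items():   # per-layer activations
            lo, hi = stats.get(name, [np.inf, -np.inf])
            stats[name] = [min(lo, float(tensor.min())),
                           max(hi, float(tensor.max()))]
    return stats

def select_formats(stats, mantissa_bits=8):
    """Choose a per-layer exponent so that a signed fixed-point mantissa of the
    given width covers the observed range (one possible selection rule)."""
    formats = {}
    for name, (lo, hi) in stats.items():
        max_abs = max(abs(lo), abs(hi), 2.0 ** -126)
        max_mantissa = 2 ** (mantissa_bits - 1) - 1
        exponent = int(np.ceil(np.log2(max_abs / max_mantissa)))
        formats[name] = {"mantissa_bits": mantissa_bits, "exponent": exponent}
    return formats
```

Because the statistics are held per layer, two layers with different observed ranges naturally end up with different exponents, matching the point made in the following paragraph.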
[0012] Based on their different collected statistics, the numerical format selected for one of the two layers may be different from the numerical format selected for the other layer.
[0013] The method comprises, in particular, selecting numerical formats for one or both of: the first padded input sequences; and the first attention masks.
[0014] The method may comprise selecting the first fixed length. The first fixed length may be selected to be equal to the maximum length among the first test input sequences in the dataset.
[0015] The first attention masks are used in the processing to ignore certain elements of the respective first padded input sequences, in an attention-calculation. In particular, the first attention mask is used to ignore the portion of the first padded input sequence containing the padding values. The first attention mask may also be used to ignore other portions of the first padded input sequence. The first attention mask may be a self-attention mask. Generating the first attention mask may comprise an outer product of the first padding mask with itself.
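A minimal sketch of the padding, padding-mask, and self-attention-mask construction described above, assuming the first fixed length is the maximum sequence length in the dataset and using -1e9 as a stand-in for the "large negative value":

```python
import numpy as np

def pad_and_mask(sequences, pad_value=0.0):
    """Pad variable-length sequences to a shared fixed length and build,
    for each one, a padding mask and a self-attention mask."""
    fixed_len = max(len(s) for s in sequences)      # first fixed length
    padded, attn_masks = [], []
    for seq in sequences:
        x = np.full(fixed_len, pad_value, dtype=np.float32)
        x[:len(seq)] = seq
        pad_mask = np.zeros(fixed_len, dtype=np.float32)
        pad_mask[:len(seq)] = 1.0                   # 1 = real element, 0 = padding
        # Outer product of the padding mask with itself, then replace
        # ones with zeros and zeros with a large negative value.
        outer = np.outer(pad_mask, pad_mask)
        attn_mask = np.where(outer > 0, 0.0, -1e9)
        padded.append(x)
        attn_masks.append(attn_mask)
    return padded, attn_masks

seqs = [np.array([0.3, 1.2]), np.array([0.5, -0.7, 2.0])]
padded, masks = pad_and_mask(seqs)   # fixed length = 3 in this example
```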
[0016] The attention-based neural network may comprise a decoder, wherein the first test input sequences are test input sequences for the decoder, wherein processing each of the first padded input sequences and respective first attention masks through the test neural network optionally comprises executing the decoder for a number of iterations equal to the first fixed length, wherein the statistics are collected over all of the iterations.
[0017] If the attention-based neural network comprises only a decoder (that is, it comprises no encoder), the first attention mask is a self-attention mask. The decoder may comprise a stack of a plurality of layers.
[0018] At each iteration, the decoder may produce an output sequence, and, at each iteration other than an initial iteration, the input to the decoder comprises the output sequence from the preceding iteration. In some examples, the iterations of the decoder may be terminated early - for instance, by the decoder outputting or receiving a terminate marker. Alternatively, in some other examples, the decoder may be executed just once for a given input sequence.
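A sketch of this iterative decoder execution, with statistics pooled over every iteration. The decoder_step and update_stats callables (and the convention that decoder_step returns both the output sequence and a collection of per-layer activations) are hypothetical placeholders.

```python
def run_decoder(decoder_step, initial_input, fixed_len, update_stats,
                end_token=None):
    """Execute the decoder once per output position (up to the fixed length),
    feeding each output sequence back in as the next input and collecting
    range statistics at every iteration."""
    current = initial_input
    for _ in range(fixed_len):
        output, layer_activations = decoder_step(current)
        update_stats(layer_activations)        # ranges pooled across iterations
        if end_token is not None and output[-1] == end_token:
            break                              # optional early termination
        current = output                       # output becomes the next input
    return current
```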
[0019] The attention-based neural network may comprise an encoder, and the first test input sequences may be test input sequences for the encoder. The encoder may comprise a stack of a plurality of layers.
[0020] The attention-based neural network may further comprise a decoder, and the method may further comprise: obtaining a dataset of second test input sequences, wherein the second test input sequences are test input sequences for the decoder; padding each second test input sequence with padding values to produce a respective second padded input sequence of a second fixed length; generating, for each second padded input sequence, a respective second padding mask identifying the part of the second padded input sequence that contains the padding values; and generating a second attention mask from each second padding mask, wherein the generating comprises an outer product operation applied to the second padding mask, wherein the method further comprises selecting a numerical format for at least one layer of the decoder.
[0021] The second test input sequences may be of variable length, less than or equal to the second fixed length. The first fixed length may be different from the second fixed length. The method may comprise selecting the second fixed length. The second fixed length may be selected to be equal to the maximum length among the second test input sequences in the dataset.
[0022] The method may further comprise executing the decoder for a number of iterations equal to the second fixed length, wherein, at each iteration the decoder produces an output sequence, and, at each iteration other than an initial iteration, the input to the decoder comprises the output sequence from the preceding iteration, wherein the statistics are collected over all of the iterations.
[0023] The method may further comprise generating a cross-attention mask from each first padding mask and respective second padding mask, comprising an outer product of the first padding mask with the second padding mask, wherein the method comprises: processing the first padded input sequences, the second padded input sequences, the first attention masks, the second attention masks, and the cross-attention masks through the test neural network, and selecting numerical formats for any one or any combination of two or more of: the first padded input sequences, the second padded input sequences, the first attention masks, the second attention masks, and the cross-attention masks.
[0024] The cross-attention mask is used in the processing to ignore certain elements of the output of the encoder with respect to certain elements of the second padded input sequence (that is, certain elements of the input sequence to the decoder).
[0025] The attention-based neural network may comprise a scaled dot-product attention calculation. The scaled dot-product attention calculation may comprise two matrix multiplications and a Softmax function. This may be the case, in particular, if there is a single input batch and a single-head attention layer. In the case of a multi-head attention layer (and/or if there are multiple input batches), the scaled dot-product attention calculation may comprise two batch matrix multiplications and a Softmax function. In some examples, in the case of a multi-head attention layer and multiple batches, the scaled dot-product attention calculation may comprise two batch matrix multiplications and a Softmax function for each head of the multi-head attention layer. The scaled dot-product attention calculation may further comprise one or more linear projections of its inputs. The one or more linear projections may be the first step of the calculation.
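A numpy sketch of single-head scaled dot-product attention for a single batch (two matrix multiplications and a Softmax, with optional linear projections and an optional attention mask); the weight matrices and shapes here are illustrative placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sdpa(q, k, v, attn_mask=None, wq=None, wk=None, wv=None):
    """Scaled dot-product attention: optional linear projections, a matrix
    multiplication of Q with K^T, scaling, optional mask addition, Softmax,
    then a second matrix multiplication with V."""
    if wq is not None: q = q @ wq
    if wk is not None: k = k @ wk
    if wv is not None: v = v @ wv
    scores = (q @ k.T) / np.sqrt(k.shape[-1])   # first matrix multiplication
    if attn_mask is not None:
        scores = scores + attn_mask             # padded positions -> large negative
    return softmax(scores) @ v                  # second matrix multiplication

L, D = 4, 8
q = np.random.randn(L, D); k = np.random.randn(L, D); v = np.random.randn(L, D)
out = sdpa(q, k, v)                             # shape (L, D)
```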
[0026] The padding values in the (or each) padded input sequence may be equal to zero.
[0027] Each padding mask may consist of ones and zeros. In particular, locations corresponding to elements of the respective input sequence may be equal to one, and locations corresponding to padding may be equal to zero.
[0028] Optionally, each first attention mask comprises: a plurality of zeros, in locations corresponding to the elements of the respective first input sequence; and one or more large negative values, in locations corresponding to the padding values of the respective first padded input sequence.
[0029] Each second attention mask may similarly comprise or consist of zeros in locations corresponding to the elements of the respective second input sequence; and large negative values in locations corresponding to the padding values of the respective second padded input sequence. Each cross-attention mask may comprise or consist of zeros in locations corresponding to the elements of the respective first and second input sequences; and large negative values in locations corresponding to the padding values of either respective padded input sequence.
[0030] The large negative values may be equal to the largest negative value representable by a numerical format selected for the first attention mask. In some embodiments, the large negative values need not be equal to said largest negative value; however, they may have values that are more negative than the largest negative value present in a data tensor input to any layer in which the first attention mask is used (for every first test input sequence). They may be more negative by a margin. That is, there may be several (unused) values between the largest negative value present in the data tensor and said large negative values. In some examples, the large negative values may have an absolute value that is greater than the largest absolute value of the data tensor. They may be greater by a margin - that is, there may be several (unused) values between the largest absolute value of the data tensor and the absolute value of the large negative values. The large negative values, x, may be chosen such that e^x = 0 in the numerical format selected for the attention mask (where "e" is Euler's number). All of this applies equally to the self-attention masks and the cross-attention mask.
[0031] The attention-based neural network may comprise a Softmax function, and the processing may comprise adding the first attention mask to an input to the Softmax function. Large negative values in the padding mask may have the effect that the corresponding values of the input to the Softmax function are neglected. The large negative value added to the input dominates the previously existing value of the input. The Softmax function may be part of a scaled dot-product attention calculation, for example (as mentioned already above).
[0032] The attention-based neural network may comprise (or may be) a transformer network. The transformer network may comprise or consist of an encoder, or a decoder, or both an encoder and a decoder.
[0033] The transformer network may comprise any one or any combination of two or more of: an input embedding applied to the first padded input sequence; a first positional encoding added to the input embedding; an output embedding applied to the second padded input sequence; a second positional encoding added to the output embedding.
[0034] The transformer network may comprise one or more classification or prediction layers, coupled to an output of the decoder.
[0035] The attention-based neural network optionally comprises a layer normalisation. The layer normalisation may be applied at the output of the scaled dot-product attention calculation, for example. The layer normalisation may alternatively or additionally be applied at the output of a feedforward layer in either the encoder or the decoder.
[0036] The hardware implementation may be configured to perform a set of available elementary neural network operations, the method optionally comprising: mapping the layer normalisation to an equivalent representation comprising a plurality of elementary neural network operations from the set of available elementary neural network operations; and selecting numerical formats for said plurality of elementary neural network operations, wherein each of the plurality of elementary neural network operations is selected from the list consisting of: a convolution operation; an element-wise subtraction operation; an element-wise multiplication operation; a reciprocal operation; a square root operation; an element-wise division operation; a rectified linear activation function; a local response normalisation; an element-wise addition.
[0037] The element-wise operations may be implemented by an element-wise operations unit of the NNA. The rectified linear activation function may be implemented by an activation unit of the NNA. The local response normalisation, reciprocal operation, and square root operation may be implemented by a local response normalisation unit of the NNA. (It should be understood that the parameters input to the LRN unit may be selected to configure it appropriately for the desired operation to be performed.) The convolution operation may be implemented by a plurality of convolution engines of the NNA.
[0038] In some examples, the plurality of elementary neural network operations implements: a first convolution operation, applied to an input to the layer normalisation, to calculate a mean of the input; an element-wise subtraction operation, to subtract the mean from the input; a first element-wise multiplication operation, to calculate the square of the output of the element-wise subtraction operation; and a second convolution operation, applied to the output of the first element-wise multiplication operation, to calculate the variance about the mean.
[0039] The plurality of elementary neural network operations may further implement a second element-wise multiplication operation, configured to control the standard deviation of the output of the layer normalisation. It may further implement a first element-wise addition operation (optionally applied to the output of the second element-wise multiplication operation), configured to control the mean of the output of the layer normalisation.
[0040] The plurality of elementary neural network operations may further implement a square root operation, applied after the second convolution operation, to calculate a standard deviation.
[0041] The plurality of elementary neural network operations may further implement a second element-wise addition operation, configured to add a small (positive) value to the standard deviation. The addition of this small value helps ensure numerical stability, by avoiding a divide-by-zero condition. An exemplary value for the small positive value is 1e-5 (0.00001).
[0042] In some examples, the plurality of elementary neural network operations implements: a square root operation and a reciprocal operation, applied after the second convolution operation, to calculate the reciprocal of the standard deviation; and a second element-wise multiplication operation, to multiply the output of the element-wise subtraction operation by the reciprocal of the standard deviation.
[0043] In some examples, the plurality of elementary neural network operations implements: a square root operation, applied to the output of the second convolution operation, to calculate the standard deviation; and an element-wise division operation, to divide the output of the element-wise subtraction operation by the standard deviation.
[0044] In some examples, the plurality of elementary neural network operations implements: a rectified linear activation function, applied to the output of the second convolution operation; an element-wise addition operation, applied to the output of the rectified linear activation function; a local response normalisation, applied to the output of the element-wise addition operation, to calculate the reciprocal of the standard deviation; and a second element-wise multiplication operation, to multiply the output of the element-wise subtraction operation by the reciprocal of the standard deviation.
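As an illustration of one of the decompositions above (mean via a convolution, element-wise subtraction, squaring, variance via a second convolution, square root, small-constant addition, reciprocal, and element-wise multiplications), the following numpy sketch emulates the elementary operations with ordinary array arithmetic. On the NNA each step would be mapped to the corresponding hardware unit; the gamma/beta scale-and-shift parameters and the channel layout are assumptions for the example.

```python
import numpy as np

def layer_norm_from_elementary_ops(x, gamma, beta, eps=1e-5):
    """Layer normalisation over the channel axis built from the elementary
    steps described above (a sketch; comments name the intended operation)."""
    C = x.shape[-1]
    # First "convolution": 1x1 convolution with weights 1/C computes the mean.
    mean = x @ np.full((C, 1), 1.0 / C)
    # Element-wise subtraction (element-wise operations unit).
    centred = x - mean
    # First element-wise multiplication: square of the centred values.
    squared = centred * centred
    # Second "convolution": averaging the squares gives the variance.
    var = squared @ np.full((C, 1), 1.0 / C)
    # Square root, small-constant addition, then reciprocal.
    inv_std = 1.0 / (np.sqrt(var) + eps)
    # Second element-wise multiplication, then scale and shift.
    normalised = centred * inv_std
    return normalised * gamma + beta

x = np.random.randn(3, 16).astype(np.float32)
y = layer_norm_from_elementary_ops(x, gamma=np.ones(16), beta=np.zeros(16))
```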
[0045] The element-wise addition operation may comprise adding a small constant value to the output of the rectified linear activation function. This may be done to avoid a divide-by-zero condition.
[0046] The attention-based neural network may comprise a Softmax function, wherein the hardware implementation is configured to perform a set of available elementary neural network operations, the method optionally comprising: mapping the Softmax function to a representation comprising a plurality of elementary neural network operations from the set of available elementary neural network operations; and selecting numerical formats for said plurality of elementary neural network operations, wherein each of the plurality of elementary neural network operations is selected from the list consisting of: a transpose or permute operation; a max pooling operation; an element-wise maximum operation; an element-wise subtraction operation; an element-wise negation operation; an element-wise addition operation; an element-wise division operation; an element-wise multiplication operation; an element-wise bit-shifting operation; an element-wise operation f(z)=2^z, where z is in general a non-integer value; a convolution operation; a function approximation operation; and a local response normalisation.
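A sketch of a Softmax built only from the kinds of elementary operations listed above (a maximum, element-wise subtraction and multiplication, the element-wise operation f(z)=2^z, a summing convolution, and an element-wise division). The log2(e) constant and the axis choice are illustrative; numpy arithmetic stands in for the hardware units.

```python
import numpy as np

LOG2_E = 1.4426950408889634          # log2(e), so that e^x = 2^(x * log2(e))

def softmax_from_elementary_ops(x):
    """Softmax over the last axis using max-subtraction for stability,
    2^z as the exponential primitive, a summing 'convolution' and an
    element-wise division (a sketch of the mapping, not NNA code)."""
    # Max pooling / element-wise maximum over the axis.
    m = x.max(axis=-1, keepdims=True)
    # Element-wise subtraction and multiplication, then f(z) = 2^z.
    p = np.exp2((x - m) * LOG2_E)
    # Convolution with all-ones weights sums along the axis.
    s = p @ np.ones((x.shape[-1], 1))
    # Element-wise division normalises to probabilities.
    return p / s

scores = np.array([[1.0, 2.0, 3.0, -1e9]])   # last position masked out
print(softmax_from_elementary_ops(scores))   # masked position contributes ~0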
[0047] The attention-based neural network may comprise a matrix multiplication operation defined in two or more dimensions between a first tensor X having dimensions [..., Q,...] and a second tensor Y having dimensions [..., R,...], the method optionally comprising: mapping the matrix multiplication operation to a graph of neural network operations including at least one transformation and at least one convolution operation; and selecting numerical formats for the graph of neural network operations.
[0048] In some examples, the attention-based neural network may comprise a matrix multiplication operation defined in two or more dimensions between a first tensor X having dimensions [..., Q,...] and a second tensor Y having dimensions [..., R,...], the method optionally comprising: mapping the matrix multiplication operation to a graph of neural network operations including at least one element-wise operation; and selecting numerical formats for the graph of neural network operations.
[0049] The matrix multiplication operation may be part of an attention calculation. In particular, it may be part of a scaled dot-product attention calculation. The matrix multiplication operation may be a batch matrix multiplication operation.
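A minimal numpy sketch of the general idea behind the convolution-based mapping (the actual computational graph is shown in Figure 7): a 2-D matrix multiplication X @ Y is expressed as a 1x1 convolution in which each row of X is a spatial position with C input channels and each column of Y is the filter of one output channel. The data layout chosen here is an assumption for illustration.

```python
import numpy as np

def matmul_as_conv(x, y):
    """Compute x @ y (x: [Q, C], y: [C, R]) as a 1x1 convolution."""
    Q, C = x.shape
    C2, R = y.shape
    assert C == C2
    out = np.zeros((Q, R), dtype=x.dtype)
    for r in range(R):                 # one output channel per column of y
        # 1x1 convolution: dot product of the channel vector at every
        # spatial position with the r-th filter.
        out[:, r] = (x * y[:, r]).sum(axis=1)
    return out

x = np.random.randn(3, 5)
y = np.random.randn(5, 4)
assert np.allclose(matmul_as_conv(x, y), x @ y)
```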
[0050] Also provided is a data processing system configured to perform a method as summarised above. The data processing system may be embodied in hardware on an integrated circuit.
[0051] Also provided is a method of manufacturing, using an integrated circuit manufacturing system, a data processing system as summarised above.
[0052] Also provided is a method of manufacturing, using an integrated circuit manufacturing system, a data processing system as summarised above, the method comprising: processing, using a layout processing system, a computer readable description of the data processing system so as to generate a circuit layout description of an integrated circuit embodying the data processing system; and manufacturing, using an integrated circuit generation system, the data processing system according to the circuit layout description.
[0053] Also provided is computer readable code configured to cause a method as summarised above to be performed when the code is run. Also provided is a computer readable storage medium (optionally non-transitory) having encoded thereon the computer readable code. [0054] Further provided is an integrated circuit definition dataset that, when processed in an integrated circuit manufacturing system, configures the integrated circuit manufacturing system to manufacture a data processing system as summarised above.
[0055] Also provided is a computer readable storage medium having stored thereon a computer readable description of a data processing system as summarised above that, when processed in an integrated circuit manufacturing system, causes the integrated circuit manufacturing system to manufacture an integrated circuit embodying the data processing system.
[0056] Also provided is a computer readable storage medium having stored thereon a computer readable description of a data processing system as summarised above which, when processed in an integrated circuit manufacturing system, causes the integrated circuit manufacturing system to: process, using a layout processing system, the computer readable description of the data processing system so as to generate a circuit layout description of an integrated circuit embodying the data processing system; and manufacture, using an integrated circuit generation system, the data processing system according to the circuit layout description.
[0057] Also provided is an integrated circuit manufacturing system configured to manufacture a data processing system as summarised above.
[0058] Further provided is an integrated circuit manufacturing system comprising: a computer readable storage medium having stored thereon a computer readable description of a data processing system as summarised above; a layout processing system configured to process the computer readable description so as to generate a circuit layout description of an integrated circuit embodying the data processing system; and an integrated circuit generation system configured to manufacture the data processing system according to the circuit layout description. The layout processing system may be configured to determine positional information for logical components of a circuit derived from the integrated circuit description so as to generate a circuit layout description of an integrated circuit embodying the data processing system.
[0059] The above features may be combined as appropriate, as would be apparent to a skilled person, and may be combined with any of the aspects of the examples described herein.
BRIEF DESCRIPTION OF THE DRAWINGS
[0060] Examples will now be described in detail with reference to the accompanying drawings in which:
[0061] Figure 1A illustrates an implementation of one layer of an encoder stack for a transformer network, according to an example;
[0062] Figure 1B illustrates an implementation of one layer of a decoder stack for a transformer network, according to an example;
[0063] Figure 1C illustrates an implementation of a scaled dot-product attention calculation, according to an example;
[0064] Figure 2 is a block diagram of a hardware accelerator comprising fixed-function hardware, according to an example of the present disclosure;
[0065] Figure 3 is a block diagram of a convolution engine as used in Figure 2;
[0066] Figure 4 is a block diagram of a data processing system according to an example;
[0067] Figure 5 is a block diagram of the memory manipulation module in Figure 4;
[0068] Figure 6A is a flowchart illustrating a method of implementing inference using an attention-based neural network, according to an example;
[0069] Figure 6B is a flowchart illustrating a method for handling an input sequence that is longer than a previously selected fixed length;
[0070] Figure 7 illustrates a computational graph made up of elementary neural network operations for implementing a matrix multiplication using a convolution operation, according to an example;
[0071] Figure 8 illustrates a computational graph made up of elementary neural network operations for implementing a batch matrix multiplication using convolutions, according to an example;
[0072] Figure 9 shows a practical instance of batch matrix multiplication, based on the computational graph in Figure 8;
[0073] Figure 10 illustrates a computational graph made up of elementary neural network operations for implementing a batch matrix multiplication using a grouped convolution, according to an example;
[0074] Figure 11 shows a practical instance of batch matrix multiplication, based on the computational graph in Figure 10;
[0075] Figure 12 illustrates a computational graph made up of elementary neural network operations for implementing a batch matrix multiplication using element-wise operations, according to an example;
[0076] Figure 13 shows a practical instance of a matrix multiplication, based on the computational graph in Figure 12;
[0077] Figure 14A shows a computational graph of a softmax layer;
[0078] Figure 14B is a computational graph showing one way of calculating the exponential operation in Figure 14A;
[0079] Figure 15A illustrates a maximum operation;
[0080] Figure 15B is a computational graph illustrating one approach for implementing the maximum operation of Figure 15A;
[0081] Figure 15C shows a way of determining the maximum of a tensor by successive element-wise comparisons;
[0082] Figure 15D shows another example of determining a maximum by successive element-wise comparisons;
[0083] Figure 16 is a computational graph illustrating an alternative approach for implementing the maximum operation of Figure 15A;
[0084] Figure 17A illustrates a summation operation;
[0085] Figure 17B is a computational graph illustrating one way in which the summation operation of Figure 17A can be mapped to an elementary neural network operation;
[0086] Figure 18A illustrates a division operation;
[0087] Figure 18B is a computational graph illustrating one way in which the division operation of Figure 18A can be mapped to elementary neural network operations;
[0088] Figure 18C is a computational graph illustrating an alternative way in which the division operation of Figure 18A can be mapped to elementary neural network operations;
[0089] Figures 19A-19C illustrate three different computational graphs for implementing a layer normalisation;
[0090] Figure 20A is a flowchart illustrating a method for selecting numerical formats, according to an example;
[0091] Figure 20B continues the flowchart of Figure 20A;
[0092] Figure 21 is a flowchart illustrating a method of training an attention-based neural network according to an example;
[0093] Figure 22 shows a computer system in which a data processing system is implemented; and
[0094] Figure 23 shows an integrated circuit manufacturing system for generating an integrated circuit embodying a data processing system.
[0095] The accompanying drawings illustrate various examples. The skilled person will appreciate that the illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the drawings represent one example of the boundaries. It may be that in some examples, one element may be designed as multiple elements or that multiple elements may be designed as one element. Common reference numerals are used throughout the figures, where appropriate, to indicate similar features.
DETAILED DESCRIPTION
[0096] The following description is presented by way of example to enable a person skilled in the art to make and use the invention. The present invention is not limited to the embodiments described herein and various modifications to the disclosed embodiments will be apparent to those skilled in the art.
[0097] Embodiments will now be described by way of example only.
In the following description, transformer networks will be used as an example of an attention-based neural network. This is a convenient and useful example, since such networks have been found to produce good performance in many tasks. Nevertheless, it should be understood that the scope of the present disclosure is not limited to transformer networks. The present disclosure is equally applicable to other attention-based networks. For understanding the transformer network examples in the following description, it is assumed that the reader will already be familiar with the paper of Vaswani et al., cited above.
[0098] Figure 1A illustrates an implementation of one layer 100 of an encoder stack in a transformer network. There may be many such layers in the encoder stack, as indicated by the "Nx" annotation in the drawing. The number N of layers is at least one, but will be greater than one in many examples. The general structure of the encoder will be familiar from Vaswani et al. The inputs to the encoder stack comprise sequences of "tokens" or "elements". Before passing them to the first layer of the encoder, the inputs 101 are "embedded" by means of a linear projection 102. A positional encoding 103 is added to the result of the embedding, at adder 104. The positional encoding 103 injects information about the position of the tokens in the sequence. Like Vaswani et al., the present example uses periodic functions of different frequencies for the positional encoding. It should be understood that the embedding and positional encoding are necessary only at the input to the first layer of the encoder stack. Subsequent layers in the stack (not shown) will take their inputs directly from the outputs of the preceding layers.
[0099] The layer 100 of the encoder stack comprises: a multi-head attention block 110; an add and normalise block 120; a feedforward block 130; and a further add and normalise block 140. The multi-head attention block 110 executes self-attention on the inputs. In the present example, this is done by a plurality of scaled dot-product attention (SDPA) blocks 112-1 to 112-n, where n is the number of heads. However, it should be understood that other forms of attention calculation could be used in place of the SDPA. The outputs of the multiple heads are concatenated and subjected to a linear projection 114. [00100] The output of the multi-head attention block 110 is provided as input to the add and normalise block 120. This block 120 comprises an adder 122, configured to add the output of the multi-head attention block 110 to the input to that block. Layer normalisation 124 is applied to the result of the addition 122.
[00101] The output of the add and normalise block 120 is provided as input to the feedforward block 130. This block comprises one or more feedforward neural network layers. In the present example, there are two feedforward layers 132 and 134.
[00102] The output of the feedforward block 130 is provided as input to the further add and normalise block 140. This has a similar structure to the add and normalise block 120. An adder 142 is configured to add the output of the feedforward block 132 to the input to that block. A layer normalisation 144 is applied to the output of the addition 142. The output of the further add and normalise block 140 forms the output of this layer of the encoder stack. If the current layer is not the final layer of the encoder stack, the output of the add and normalise block 140 is input to the next layer of the encoder stack. If the current layer is the final layer of the encoder stack, then the output of the add and normalise block 140 forms the output of the encoder. This output may be provided to one or more prediction heads, or to a decoder stack, depending on the particular application of the transformer network. That is, the encoder stack may be used alone or in conjunction with a decoder stack (such as the decoder stack described below with reference to Figure 1B).
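Purely as an illustrative sketch of the data flow just described (multi-head self-attention, add and normalise, a two-layer feedforward block, and a further add and normalise), the following numpy example uses randomly initialised placeholder weights and a simplified layer normalisation; it is not the hardware mapping, just the reference arithmetic.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    return (x - x.mean(axis=-1, keepdims=True)) / (x.std(axis=-1, keepdims=True) + eps)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sdpa(q, k, v, attn_mask=None):
    scores = q @ k.T / np.sqrt(k.shape[-1])
    if attn_mask is not None:
        scores = scores + attn_mask
    return softmax(scores) @ v

def encoder_layer(x, head_weights, w_out, w_ff1, w_ff2, attn_mask=None):
    """One encoder layer: multi-head self-attention (heads concatenated and
    linearly projected), add and normalise, two feedforward layers, then a
    further add and normalise."""
    heads = [sdpa(x @ wq, x @ wk, x @ wv, attn_mask)
             for (wq, wk, wv) in head_weights]
    attn = np.concatenate(heads, axis=-1) @ w_out        # linear projection 114
    x = layer_norm(x + attn)                             # add and normalise 120
    ff = np.maximum(x @ w_ff1, 0.0) @ w_ff2              # feedforward block 130
    return layer_norm(x + ff)                            # add and normalise 140

L, D, H = 4, 8, 2
rng = np.random.default_rng(0)
head_weights = [tuple(rng.standard_normal((D, D // H)) for _ in range(3))
                for _ in range(H)]
out = encoder_layer(rng.standard_normal((L, D)), head_weights,
                    rng.standard_normal((D, D)),
                    rng.standard_normal((D, 4 * D)),
                    rng.standard_normal((4 * D, D)))
```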
[00103] Figure 1B illustrates an implementation of one layer 150 of a decoder stack of a transformer network. Depending on the particular application of the transformer network, the decoder stack may be used alone, or in conjunction with an encoder stack (such as the encoder stack described above with reference to Figure 1A). As with the encoder stack, the decoder stack in general includes a number of layers, N, where N is greater than or equal to 1. The layer 150 of the decoder stack illustrated in the drawing is shown as the first layer of the decoder stack. Before passing them to the layer 150, the inputs 151 are embedded, at block 152, and a positional encoding 153 is added at block 154. In this respect, blocks 152 and 154 are analogous to blocks 102 and 104. However, the particular embeddings and positional encodings are generally not identical between an encoder and decoder. The inputs are also typically different. For example, in a transformer network that is trained for language translation, the tokens may be words. In the training phase, the encoder stack takes as input a sequence of words (e.g. a sentence) in the source language. The decoder stack takes as input a sequence of words (e.g. a sentence) in the target language. The embedding (which may be learned) and the positional encoding may be different for each language.
[00104] As with the encoder layer 100, the general structure of the decoder layer 150 will be familiar from Vaswani et al. The layer 150 of the decoder stack comprises: a first multi-head attention block 111; a first add and normalise block 121; a second multi-head attention block 161; a second add and normalise block 171; a feedforward block 131; and a further add and normalise block 141. The multi-head attention block 111 has the same structure as the multi-head attention block 110 of the encoder layer. It executes self-attention on the inputs using a plurality of SDPA blocks (not shown).
[00105] The output of the multi-head attention block 111 is provided as input to the first add and normalise block 121. This is similar in structure and function to the add and normalise block 120 of the encoder layer. When the decoder stack is used together with the encoder stack, the output of the add and normalise block 121 is provided as input to the second multi-head attention block 161. This block 161 also receives as an input the output of the encoder stack. The second multi-head attention block 161 executes cross attention between the decoder inputs and the encoder inputs. The output of the second multi-head attention block 161 is provided as input to a second add and normalise block 171. Similarly to the add and normalise blocks 120 and 121, this block adds together the inputs and outputs of the second multi-head attention block 161 and performs layer normalisation on the result of the addition. The output of the second add and normalise block 171 is provided as input to the feedforward block 131. (When the decoder stack is used alone -that is, without the encoder stack -the feedforward block 131 receives its input directly from the first add and normalise block 121.) [00106] The feedforward block 131 is similar in structure and function to the feedforward block 130 of the encoder layer. It comprises one or more feedforward neural network layers -for example, two feedforward layers. The output of the feedforward block 131 is provided as input to a further add and normalise block 141, which is similar in structure and function to the add and normalise block 140 at the end of the encoder layer. The output of the further add and normalise block 141 forms the output of this layer of the decoder stack. If the current layer is not the final layer of the decoder stack, the output of the add and normalise block 141 is input to the next layer of the decoder stack. If the current layer is the final layer of the decoder stack, then the output of the add and normalise block 141 forms the output of the decoder. This output may be provided to one or more prediction heads.
[00107] In some applications of transformer networks, the decoder stack is executed iteratively, for a number of iterations that may be fixed or variable. For instance, using the machine translation example mentioned above, the decoder stack may be executed for several iterations, until the decoder outputs an "end" token, which indicates that the end of the sentence has been reached in the target language, and terminates the iterations. At each iteration, one more word of the corresponding sentence in the target language would be predicted.
[00108] A problem can arise when attempting to implement attention-based networks, such as the transformer network example of Figures 1A and 1B, in the fixed-function hardware of a neural network accelerator (NNA). The inputs to the encoder and decoder stacks are, in general, sequences of variable length. For instance, in the machine translation example discussed above, the sequence length depends on the number of words in the sentence. In an exemplary NNA architecture, it may be inefficient or impossible to process inputs of variable length. The fixed-function hardware may require inputs of fixed length (or may be more efficient at processing such inputs).
[00109] In examples according to the present disclosure, it is proposed to pad the variable length input sequences to a fixed length. The inputs to the encoder stack are padded to a first fixed length and the inputs to the decoder stack are padded to a second fixed length. These fixed lengths may be different, reflecting the fact that input sequences of different length may be expected at the encoder as compared with the decoder. The fixed length may be chosen based on the range of variable lengths encountered over a training dataset of input sequences. In particular, each fixed length may be chosen to be equal to the maximum length of the respective input sequences of the training dataset.
[00110] The padding of the input sequences is not explicit in Figures 1A and 1B. However, it should be understood that the inputs 101 and 151 in these diagrams are already padded. This means that a first portion of each input tensor is identical to the corresponding variable length input tensor. The remaining portion of each input tensor consists of padding values. In the present example, these are set equal to 0.
[00111] When padding the variable length input sequences to a fixed length, care should be taken to ensure that the padding does not unduly influence the calculations in the attention-based network. In particular, in examples according to the present disclosure, it is proposed to modify the attention calculation to accommodate the padding. Because of the form of the calculations, it is not sufficient simply to set the padding values equal to 0.
[00112] Figure 1C illustrates an implementation of a scaled dot-product attention calculation 112, as used and shown in Figure 1A, and as used (but not shown) in Figure 1B. Block 112 shows a single SDPA calculation. It should be understood that this is performed for each head in a multi-head attention calculation. The inputs to the SDPA calculation are labelled Q, K, and V. The tensor Q is known as the "query"; the tensor K is known as the "keys"; and the tensor V is known as the "values". Optionally, each of these may first be subjected to a respective linear projection 191-1, 191-2, and 191-3, as described by Vaswani et al. Then, the (projected) tensor Q and the (projected) tensor K are input to a matrix multiplication 192. (In general, if there are multiple batches of input data, this may be a batch matrix multiplication.) [00113] The output of the (batch) matrix multiplication 192 provides one input to an addition operation 194. The other input to the addition 194 is an attention mask defined for the input sequence. The dimensions of the attention mask (like those of the tensors Q, K, and V) are determined by the fixed lengths. According to the present example, the attention mask contains values of zero in positions corresponding to the original variable-length input sequence. In positions corresponding to the padding (in the padded input sequence), the attention mask contains large negative values. The intention is that the output of the addition 194 should be dominated by these large negative values in the positions corresponding to the padding.
[00114] By way of example: for tensors Q and K with dimensions (N, L1, D), the attention mask for the self-attention block 110 has dimensions (N, L1, L1), where L1 is the fixed length chosen for the encoder stack. The attention mask for the self-attention block 111 has dimensions (N, L2, L2), where L2 is the fixed length chosen for the decoder stack. The attention mask for the cross-attention block 161 has dimensions (N, L1, L2).
[00115] In the case of the self-attention block 110, the inputs V, K, and Q are the inputs to the encoder. Likewise, in the case of the self-attention block 111, the inputs V, K, and Q are the inputs to the decoder. In the case of the cross-attention block 161, the tensors V and K come from the output of the encoder, while the tensor Q is an input to the decoder.
[00116] The attention masks are calculated from padding masks, which are associated with the respective padded input sequences. Each padding mask has the same fixed length as the associated padded input sequence. The portion of the padding mask corresponding to the original, variable-length input sequence contains values equal to one. The remainder of the padding mask, corresponding to the padding values in the padded input sequence, contains values equal to zero. A first padding mask is associated with the input to the encoder; a second padding mask is associated with the input to the decoder. To generate the self-attention mask for the self-attention block 110 of the encoder, the outer product of the first padding mask with itself is calculated. The result of the outer product calculation has ones at locations corresponding to the original, variable length encoder input sequence, and zeros at locations corresponding to the padding. To produce the self-attention mask from the result of the outer product calculation, the zeros are replaced with large negative values and the ones are replaced with zeros.
Consequently, the self-attention mask contains zeros at the locations corresponding to the original, variable length encoder input sequence, and contains large negative values at the locations corresponding to the padding. The self-attention mask for the self-attention block 111 of the decoder is generated in the same way, from the second padding mask. The cross-attention mask for the cross-attention block 161 of the decoder is generated from the first padding mask and the second padding mask. Firstly, the outer product of these two masks is calculated. Then, the zeros in the outer product are replaced with large negative values, and the ones in the outer product are replaced with zeros. Consequently, the cross-attention mask contains zeros at locations corresponding to a "one" in both the first padding mask and the second padding mask. It contains large negative values at locations that correspond to a zero in either padding mask - that is, locations that correspond to padding in either mask.
[00117] According to the present example, the padding values are appended to the original (variable length) input sequences. Consequently, the attention masks have a block structure, with a block of zeros whose dimensions are dependent on the size of the original variable length input sequence, and with the remainder of the elements of the tensor being equal to the large negative value.
[00118] The self-attention masks can be modified to prevent the network from attending (via self-attention or cross-attention) to certain elements of the original (variable length) input sequences. This can be done either by modifying the attention masks directly (by changing a zero to a large negative value), or by modifying the padding masks before the outer product calculations (changing a one to a zero). The elements to which the network is to be prevented from attending may be indicated by an external input to the network -for example, an additional variable provided in a function call that instantiates the neural network.
[00119] The output of the addition 194 is input to a Softmax function 196. This calculates the Softmax over the features among which attention is being computed. The Softmax function involves applying an exponential function to the input. The result of the calculation e^x tends to zero when x is a large negative number. This ensures that the padded part of the input tensor does not influence the Softmax calculation. In some examples, the large negative number x may be chosen such that e^x = 0 in the numerical format used for the relevant (self-attention or cross-attention) mask and the input to the Softmax function. In this way, the large negative number approximates -∞ (minus infinity) in the calculation.
[00120] The output of the Softmax function 196 forms one input to a (batch) matrix multiplication 198. The other input to the (batch) matrix multiplication 198 is the tensor of (projected) values, V. Together the blocks 192, 194, 196, and 198 implement the following equation, with no interference from the padded part of the respective input vectors:
Attention(Q, K, V) = Softmax(QK^T / √d_k) V

where d_k is the dimension of the keys.

[00121] According to the present example, all of the calculations necessary to implement the attention-based network can be constructed from elementary neural network operations available on an exemplary hardware accelerator (NNA). This includes calculations that are not usually natively supported on NNAs, such as (batch) matrix multiplication, Softmax, and layer normalisation. This can be achieved without necessarily modifying the hardware of the hardware accelerator.
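For illustration, the SDPA calculation of blocks 192, 194, 196 and 198, with the additive attention mask, might be sketched in NumPy as follows. The shapes and the free-standing softmax helper are assumptions made for the example; in the implementation described here, these steps are instead mapped to the elementary operations discussed in the sections that follow.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax (max subtraction), applied along `axis`.
    shifted = x - np.max(x, axis=axis, keepdims=True)
    exp = np.exp(shifted)
    return exp / np.sum(exp, axis=axis, keepdims=True)

def sdpa(q, k, v, attn_mask):
    """Scaled dot-product attention with an additive attention mask.

    q: (N, Lq, D); k: (N, Lk, D); v: (N, Lk, Dv); attn_mask: (N, Lq, Lk).
    The mask holds zeros where attention is permitted and large negative
    values at padded or ignored positions, so those positions contribute
    (approximately) nothing after the Softmax.
    """
    d_k = q.shape[-1]
    scores = np.matmul(q, np.swapaxes(k, -1, -2)) / np.sqrt(d_k)  # matmul 192 and scaling
    scores = scores + attn_mask                                   # addition 194
    weights = softmax(scores, axis=-1)                            # Softmax 196
    return np.matmul(weights, v)                                  # matmul 198
```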
Exemplary hardware accelerator
[00122] As shown in Figure 2, an exemplary hardware accelerator 200 (also referred to herein as a neural network accelerator or NNA) includes the following fixed-function hardware units:
* A set of convolution engines 240, specialised at convolution operations;
* An element-wise operations unit 285, specialised at performing the same operation to every pair of respective elements of two tensors of corresponding size;
* An activation unit 255, specialised at applying an activation function (which may be selectable, configurable, or fully programmable) to every element of a tensor;
* A local response normalisation (LRN) unit 265 (or normalisation unit, for short), specialised at performing neighbourhood-based normalisation operations; and
* A pooling unit 275, specialised at performing pooling operations, such as max-pooling and min-pooling.
[00123] In greater detail, the hardware accelerator 200 comprises digital logic circuitry that is configured to receive data (including weights and input tensors) and commands for processing them. The hardware accelerator 200 comprises a memory interface 210, an input buffer controller 215, a command decoder 220, a coefficient buffer controller 225, a coefficient buffer 230, n input buffers 235, n convolution engines 240, n accumulators 245, an accumulation buffer 250, an activation unit 255, a local response normalisation (LRN) unit 265, a shared buffer 270, a pooling unit 275, and an element-wise operations unit 285. The hardware accelerator 200 can be used to evaluate elementary neural network operations.
[00124] The memory interface 210 is configured to provide an interface between the hardware accelerator 200 and external memory 25 (not shown in Fig. 2, but shown in e.g. Fig. 4). The external memory 25 may be considered as a separate module to the hardware accelerator 200. The command or configuration information may comprise, for example, information regarding weight and data size and format as well as their location in the external memory.
[00125] The memory interface 210 is configured to receive, from external memory 25, weights and data to be used in calculations within the neural network, as well as command information to control the operation of the hardware accelerator 200. The received weights (also referred to herein as coefficients) are passed to the coefficient buffer controller 225 and the received data is passed to the input buffer controller 215. The received commands are passed to the command decoder 220, which, in turn, is configured to decode the commands and subsequently issue control information to elements of the hardware accelerator, including the coefficient buffer controller 225 and input buffer controller 215 to control the manner in which the weight and input data is stored in the buffers.
[00126] The weights and input data received from external memory via memory interface 210 during a read of the external memory may form the weights and input data for only a portion of a single layer, all of the weights and input data to be used in processing a single layer, or may comprise the weights and input data for processing multiple layers.
[00127] In practice, the number of weights and data received in a single read from external memory 25 will depend upon the size of the coefficient buffer 230 and the input buffer 235. The weights are passed from the coefficient buffer controller 225 to the coefficient buffer 230 and the data received is passed from the input buffer controller 215 to a plurality of input buffers 235a-235n. The number of input buffers will depend upon the specific implementation of the accelerator 200 but may take any value. The input data is shared across all of the input buffers 235a-235n. The input buffers each form an effective bank such that the number of input buffers can be increased or decreased depending on the application.
[00128] The input buffers 235a-235n are connected to each of a plurality of multiplexers, since each convolution engine 240a-240n requires access to all of the effective 'banks' of the input data. The multiplexers are each configured to select an output from one of the input buffers 235 and to pass the values output from the selected input buffer 235 to a respective convolution engine 240a-240n. In addition, weights from the coefficient buffer 230 are provided as a second input into each convolution engine 240a-240n. The convolution engines 240 are configured to perform a convolution calculation on the received input data using the weights received from the coefficient buffer 230. The resultant output of each convolution engine 240a-240n is provided as an input to a respective accumulator of a plurality of accumulators 245a-245n.
[00129] Each accumulator 245a-245n is connected to an accumulation buffer 250. The accumulation buffer 250 is configured to store accumulated results received from each accumulator 245a-245n. The accumulation buffer 250 is connected to the memory interface 210. As such, the accumulation buffer 250 is configured to send and receive data to and from external memory 25 via memory interface 210. Specifically, the accumulation buffer 250 is configured to be able to store and restore its values from the external memory 25 via memory interface 210, as will be described in more detail below.
The accumulation buffer 250 is connected to the input of the accumulators 245a-245n and is configured to feed values back into the accumulators 245a-245n to enable accumulation calculations to take place.
[00130] The accumulation buffer 250 is configured to pass accumulated values to the activation unit 255 and/or the element-wise operations unit 285. The activation unit 255 is configured to perform at least one of a number of different activation functions.
[00131] The resultant value calculated by the activation unit 255 can be passed to be processed by the LRN unit 265 and/or the pooling unit 275 via the shared buffer 270. The LRN unit 265 is configured to perform a local response normalisation. This may be performed within a single plane of input data. Alternatively or in addition, the LRN operation may also be performed across planes.
[00132] A result stored in the shared buffer 270 is passed to the memory interface 210, which can either store the result in external memory 25 or pass the result back into the input buffers for further processing without having to first be passed out to external memory.
[00133] The shared buffer 270 is configured to buffer values from any one or more of the activation unit 255, the LRN unit 265, the pooling unit 275, and the element-wise operations unit 285 until all the values required to perform the next operation are available. In this way, the shared buffer 270 is used for efficiency of storage as it can hold values required in later operations without having to use external memory 25.
[00134] The element-wise operations unit 285 comprises circuitry configured to perform element-wise operations on tensors received from the accumulation buffer 250 and/or activation unit 255. The supported element-wise operations may include element-wise addition, subtraction, multiplication, division, and maximum (or minimum) of the respective elements of the tensors.
[00135] Element-wise operations are operations that are repeated for multiple elements of at least one tensor. The operations are typically repeated for all elements of the tensor. Two categories of element-wise operation may be considered: unary operations, having a single operand, and binary operations, having two operands. The element-wise operations unit 285 handles binary element-wise operations. Element-wise operations may also be performed by other components of the hardware accelerator. For example, the activation unit 255 may perform unary element-wise operations, by applying a function to every element of a tensor.
[00136] Whilst the hardware accelerator of Figure 2 illustrates a particular order in which the units are arranged and thus how the processing of data flows through the hardware implementation, it will be appreciated that the specific calculations required and the order in which data is processed across layers may vary.
[00137] In some examples of evaluating neural network layers, the functions performed by the activation 255, LRN 265, pooling 275, and element-wise 285 units may all be performed. In other examples, only some of these functions may be performed and not necessarily in the order set out in the hardware accelerator 200. To achieve a configurable order of processing these functions, each of the activation 255, LRN 265, pooling 275 and element-wise 285 units may be configured to receive control signalling configuring the unit into a bypass mode in which the function is not performed and the input values are simply passed through the unit without change.
[00138] Advantageously, since the activation, LRN, pooling, and element-wise units 255, 265, 275, 285 are placed linearly, it is possible to perform these operations sequentially without having to retrieve data from external memory 25. In some implementations, the order in which the activation, LRN, pooling, and element-wise units 255, 265, 275, 285 are connected may vary. For example, the activation, LRN, and pooling units 255, 265, 275 may be connected in reverse order such that the pooling unit is connected to the accumulation buffer 250 and the activation unit is connected to the memory interface 210.
[00139] Figure 3 illustrates the structure of each of the convolution engines 240 in Figure 2. The convolution engine 240 comprises a plurality of elements of multiply logic 242, each configured to multiply a weight by an input data element, and a plurality of elements of addition logic 244, configured in a tree structure to sum the outputs of the elements of multiply logic 242.
Exemplary data processing system
[00140] Figure 4 is a block diagram of a data processing system 10 for implementing an attention-based neural network in a hardware accelerator 200 (NNA), according to an example. The data processing system comprises the hardware accelerator 200; a mapping unit 12; a memory 25; and a memory manipulation module (MMM) 40. At least the hardware accelerator 200, the memory 25, and the MMM 40 are connected by a data bus 30. The mapping unit 12 is configured to receive a definition of the neural network, and map it to a graph of elementary neural network operations that can be performed natively by the hardware accelerator 200. The mapping unit 12 is further configured to control the hardware accelerator 200 (and if necessary the MMM 40) to evaluate the attention-based neural network in fixed-function hardware, by means of these elementary operations. As part of its mapping duties, the mapping unit 12 is configured to pad each input sequence for the attention-based neural network to produce a respective padded input sequence of the appropriate, predetermined fixed length. The mapping unit is further configured to generate padding masks of the appropriate fixed length, identifying the part of each input sequence that contains the padding values, and to generate attention masks from the padding masks, as described above.
[00141] The hardware accelerator 200 is configured to process each padded input sequence and the relevant attention mask(s), to perform inference using the attention-based neural network. In particular, the hardware accelerator 200 is configured to evaluate the graph of elementary neural network operations output by the mapping unit 12. The MMM 40 is configured to manipulate multidimensional data in memory in various ways, including permute operations that modify the order of the dimensions of the data. In some examples, the MMM 40 may be configured to transform data by rearranging the channel dimension of the data in one or both of the width or height dimensions, or exchanging the channel dimension with one or both of these spatial dimensions. In alternative examples, the MMM may permute any other combination of the dimensions of the input data, including the batch dimension. A transformation from dimensions [B, C, H, W] to dimensions [B, 1, HC, W] is one example of rearranging the channel dimension in the spatial dimensions. A transformation from dimensions [B, C, H, W] to dimensions [B, C/(KL), HK, WL] is another such example (where K and L are integers). A transformation from dimensions [B, C, H, W] to dimensions [B, H, C, W] is an example of exchanging the channel dimension with one of the spatial dimensions.
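By way of illustration, the kinds of rearrangement described above might be expressed in NumPy as follows; the exact interleaving used when folding the channel dimension into a spatial dimension is an implementation choice and is assumed here.

```python
import numpy as np

B, C, H, W = 2, 4, 3, 5
x = np.arange(B * C * H * W).reshape(B, C, H, W)

# One way of rearranging the channel dimension into the height dimension:
# [B, C, H, W] -> [B, 1, HC, W] (size H*C; one possible interleaving)
folded = x.reshape(B, 1, C * H, W)

# Exchanging the channel dimension with the height dimension:
# [B, C, H, W] -> [B, H, C, W]
swapped = x.transpose(0, 2, 1, 3)
```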
[00142] Figure 5 is a block diagram of the MMM 40 used in Figure 4. As mentioned already, the MMM 40 is coupled to the memory 25, via the bus 30. The MMM 40 comprises a memory reading block 420; an internal buffer 410; and a memory writing block 430. A control channel 440 is used to coordinate the operations performed by the memory reading block 420 and the memory writing block 430. Both the memory reading block 420 and the memory writing block 430 are coupled to the bus 30. An output of the memory reading block 420 is coupled to an input of the internal buffer 410. An input of the memory writing block 430 is coupled to an output of the internal buffer 410.
[00143] The memory reading block 420 reads data from the memory 25. The memory reading block 420 writes the data (that was read from the memory 25) to the internal buffer 410. The memory writing block 430 reads data from the internal buffer 410 and writes the data (that was read from the internal buffer 410) back to the external memory 25. By the combination of operations performed by the memory reading block 420 and the memory writing block 430, the data may be transformed in the ways previously described. The transformation may occur when moving the data from the memory 25 to the internal buffer 410, or it may occur when moving the data from the internal buffer 410 to the memory 25. In some cases, the transformation may occur in part between the memory 25 and the internal buffer 410, and in part between the internal buffer 410 and the memory 25.
[00144] Where the memory reading block 420 and the memory writing block 430 are provided as separate hardware blocks (as in the example of Figure 5), they may be able to operate in parallel. The control channel 440 provides for communication between the memory reading block 420 and the memory writing block 430, to maintain synchronisation between the two blocks. This synchronisation can ensure, for example, that the memory writing block 430 does not attempt to read data from the internal buffer 410 before it has been written there by the memory reading block 420. Similarly, it can ensure that the memory reading block 420 does not overwrite data in the internal buffer 410 before it has been read from there by the memory writing block 430.
Exemplary data processing method
[00145] Figure 6A is a flowchart illustrating a method of implementing inference using an attention-based neural network, according to an example. This method is performed using the data processing system of Figure 4. In the present example, the attention-based neural network is a transformer network comprising an encoder and a decoder (see also the description of Figures 1A-1C above).
[00146] In step 511, the mapping unit 12 receives a first input sequence for the encoder. In step 512, the mapping unit 12 pads the first input sequence to produce a first padded input sequence of a first fixed length. This first fixed length has been selected in advance for the encoder stack (for example, in an earlier training phase). In step 513, the mapping unit 12 generates a first padding mask, corresponding to the first padded input sequence. In the present example, the mapping unit 12 produces the first padded input sequence by padding the first input sequence with zeros in the padded locations. The first padding mask contains zeros corresponding to the padded locations, and ones in locations corresponding to the (variable length) first input sequence. In step 514, the mapping unit 12 constructs a first (self-)attention mask for the encoder based on the outer product of the first padding mask with itself, as described above. The first attention mask contains large negative values corresponding to the padded locations, and zeros in locations corresponding to the (variable length) first input sequence. The large negative values all have the same value. The particular value is chosen off-line, in advance, in a training phase. According to the present example, it is selected in conjunction with a set of numerical formats for the inference processing such that, when applied as input to the exponential function, f(x) = e^x, in the chosen numerical format, it returns a value of zero.
[00147] A similar sequence of steps is performed for the decoder. In step 521, the mapping unit 12 receives a second input sequence for the decoder. This is also of variable length. In general, it will be a different length to the first input sequence for the encoder. (In some examples, when performing inference using the attention-based neural network, the second input sequence may initially consist of a single "start" token.)

[00148] In step 522, the mapping unit 12 pads the second input sequence with padding values (in this example, zeros) to produce a second padded input sequence of a second fixed length. Like the first fixed length, the second fixed length may be selected off-line, in advance, in a training phase. The intention is that the first fixed length and second fixed length are chosen to be sufficiently long that all expected input sequences (for the encoder and decoder, respectively) are less than or equal to the respective fixed length.
[00149] In step 523, the mapping unit 12 generates a second padding mask, corresponding to the second padded input sequence. Similarly to the approach taken for the encoder, the mapping unit 12 produces the second padded input sequence for the decoder by putting zeros in the padded locations. The second padding mask contains zeros corresponding to the padded locations, and ones in locations corresponding to the (variable length) second input sequence. In step 524, the mapping unit 12 constructs a second (self-)attention mask for the decoder based on the outer product of the second padding mask with itself. The second attention mask contains large negative values in locations corresponding to the padded locations, and zeros in locations corresponding to the original (variable length) second input sequence. Similar to the approach taken for the encoder, the large negative values all have the same value, which is chosen such that f(x) = e^x = 0 in the relevant numerical format. The value and the numerical format may be the same as those for the first attention mask; however, in general the padding values and numerical formats may be chosen independently. The self-attention mask governs which elements of the second input sequence the decoder can attend to and which it must ignore. When the decoder stack is executed for a number of iterations, the second attention mask may change from one iteration to the next. This arises in natural language translation applications, for example. In each iteration, the decoder should only be able to self-attend to words that have already been translated (in previous iterations). The attention mask can be used to implement this. The self-attention mask has zeros at locations corresponding to the elements/tokens to which attention is permitted and large negative values at locations corresponding to elements/tokens which are to be ignored by the self-attention calculation.
[00150] In the example illustrated in Figure 6A, the transformer network includes both an encoder and a decoder. Consequently, the decoder uses two attention masks. The first is a self-attention mask (the second attention mask described above), which is used in the multi-head self-attention calculation in block 111. The second is a cross-attention mask, which is used in the multi-head cross-attention calculation in block 161. As explained already above, the cross-attention mask has one dimension that is defined by the first fixed length and another dimension that is defined by the second fixed length. The dimensions of the self-attention mask are determined by the second fixed length. (It should be understood that, in other examples, if the transformer network comprises only a decoder and no encoder, blocks 161 and 171 are removed and there is no need for the cross-attention mask.) [00151] In step 525, the mapping unit generates the cross-attention mask. The cross-attention mask governs which elements of the second input sequence the decoder can attend to and ignore with respect to the encoder output. To generate the cross attention mask, the mapping unit 12 calculates the outer product of the first padding mask (generated in step 513) with the second padding mask (generated in step 523). The mapping unit 12 then takes the result of the outer product, replaces all of the zeros with large negative values, and replaces all of the ones with zeros. The cross-attention mask therefore has zeros at locations corresponding to the elements/tokens to which attention is permitted and large negative values at locations corresponding to elements/tokens which are to be ignored by the cross-attention calculation. In particular, locations in the cross-attention mask that correspond to the padding values in either the first padded input sequence or the second padded input sequence are set to large negative values.
[00152] The various attention masks can also be modified based on external inputs. These external inputs may specify that particular additional elements of the padded input sequences should be ignored (that is, not attended to) in calculations. This allows the attention calculations to ignore elements that were present in the original (variable length) input sequences. The specified additional elements can be ignored by setting additional locations in each self-attention mask, and the cross-attention mask, to large negative values. This can be done by direct modification of the attention mask. Alternatively, it may be done by setting additional elements of the first and/or second padding masks to zero. This will result in additional large negative values after the attention masks are generated from the padding masks. (For simplicity and clarity, these steps are not shown in Figure 6A.)

[00153] It should be understood that the self- and cross-attention masks contain zeros corresponding to elements (for example, word-tokens) that are already "available", and large negative values for a number of remaining elements that is defined by the second fixed length. It is not essential, and in some cases not particularly meaningful, to distinguish between large negative values that relate to padding and large negative values that relate to elements that are to be "ignored" in the attention calculation. In many cases, it is not known until the inference is completed what length the variable length input/output sequence of the decoder will grow to. The "length" of the second input sequence is defined only by the length of the output sequence from the decoder, and cannot be known in advance. At the end of each iteration, the output of the decoder is fed back to the input, for the next iteration. Thus, for example, after the second iteration, the second padded input sequence may consist of two word-tokens - one produced in the first iteration and a second produced in the second iteration. The remainder of the second padded input sequence consists of zeros, and the locations in the second padding mask corresponding to those padding-zeros contain zeros, while the corresponding locations in the second (self-)attention mask and the cross-attention mask contain large negative values. The length of the output sequence (and therefore the input sequence) may continue to grow until either a predetermined number of iterations has been reached - for example, a number of iterations equal to the (number of elements in the) second fixed length of the second padded input sequence - or until the decoder outputs a terminate token, which stops the processing and prevents any further iterations.
[00154] In step 530, the fixed-function hardware of the hardware accelerator 200 processes the first padded input sequence, the second padded input sequence, the first (self-)attention mask, the second (self-)attention mask, and the cross-attention mask to perform the inference, using the attention-based neural network. The first padded input sequence and first (self-)attention mask are processed through the encoder 100; the second padded input sequence, the second (self-)attention mask, and the cross-attention mask are processed through the decoder 150.
[00155] It should be understood that not all examples of transformer networks include both an encoder stack and decoder stack. Some examples comprise only an encoder stack; some examples comprise only a decoder stack. In examples comprising or consisting of a decoder stack, the decoder stack may be iterated a number of times, or executed just once. The particular configuration will depend on the task being performed by the transformer network.
[00156] Since the fixed lengths are chosen based on training data, it is possible that an input sequence may be presented in the inference phase that is longer than any of the training sequences and therefore longer than the relevant fixed length. Figure 6B is a flowchart illustrating a method capable of responding to this eventuality. In step 542, the mapping unit 12 receives a further input sequence. In step 544, the mapping unit 12 determines the length of this further input sequence. In step 545, the mapping unit 12 identifies that the length determined in step 544 is greater than the relevant fixed length (the first fixed length in the case of an input sequence with the encoder; the second fixed length in the case of an input sequence for the decoder). In response, in step 546, the mapping unit loads a representation of a further attention-based neural network. This further network is associated with a further fixed length, which is long enough to accommodate the length of the further input sequence. That is, the further fixed length is greater than or equal to the length of the further input sequence. In some examples, the mapping unit 12 may obtain the representation of the further attention-based neural network from the on-board memory 25. The memory may store an array of different neural networks like this, which are trained for essentially the same task but which differ in the fixed length(s) that they impose on input sequences. When the mapping unit discovers that the currently loaded network is not capable of accommodating an input sequence, it may load a different (but related) network with a longer fixed length parameter. In some examples, instead of obtaining the representation of the further attention-based neural network from the on-board memory 25, the mapping unit 12 may obtain it from an external source. For instance, the mapping unit may be configured to download the representation from a remote server via a communications network such as a wireless LAN or cellular data network.
[00157] After loading the further attention-based neural network, the mapping unit proceeds to pad the further input sequence with padding values to produce a further padded input sequence of the further fixed length. It also generates a further padding mask and one or more further attention masks (depending on the structure of the network). The further padding mask identifies the part of the further padded input sequence containing the padding values. At least one further attention mask is generated from the further padding mask. The fixed-function hardware of the hardware accelerator processes the further padded input sequence, and the one or more further attention masks, to perform inference using the further attention-based neural network. According to one example, these steps are similar to steps 511, 512, 513, 514 and 530 of Figure 6A.
It should be understood that the further attention-based neural network will have the same structure as the neural network that it replaces. This may be any attention-based neural network. In particular, the further attention-based neural network may comprise only an encoder stack; or only a decoder stack; or both an encoder stack and a decoder stack. In the latter case, the further attention-based neural network would be designed to operate on first and second further input sequences, for the encoder and decoder, respectively. These would be padded to respective first and second further fixed lengths, either or both of which may be different from the first and second fixed length used in the previously loaded attention-based neural network. The one or more further attention masks will, in this case, include: a further first (self-)attention mask for the further first input sequence; a further second (self-)attention mask for the further second input sequence; and a further cross-attention mask.
[00158] In the event (at step 545) that the length of the further input sequence is less than or equal to the fixed length associated with the currently loaded network, then there is no need to load a representation of a different network. The method instead proceeds from step 545 to step 548, from where it continues to step 512 and/or 522 as normal.
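The decision flow of Figure 6B might be sketched as follows. The NetworkDefinition container and the selection helper are hypothetical names introduced purely for illustration and do not appear in the description above.

```python
from dataclasses import dataclass

@dataclass
class NetworkDefinition:
    # Hypothetical container pairing a stored network representation with
    # the fixed input length that the network was trained to expect.
    fixed_length: int
    representation: object

def select_network(input_length, current, available):
    """Keep the current network if its fixed length accommodates the input
    (steps 545 and 548); otherwise pick the smallest stored network whose
    fixed length is long enough (step 546)."""
    if input_length <= current.fixed_length:
        return current
    for candidate in sorted(available, key=lambda n: n.fixed_length):
        if candidate.fixed_length >= input_length:
            return candidate
    raise ValueError("no stored network accommodates this sequence length")

current = NetworkDefinition(fixed_length=64, representation="network-64")
stored = [NetworkDefinition(256, "network-256"), NetworkDefinition(128, "network-128")]
print(select_network(100, current, stored).fixed_length)   # 128
```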
(Batch) matrix multiplication
[00159] As already explained above, two (batch) matrix multiplications arise in each SDPA calculation (see blocks 192 and 198). Multiple instances of the SDPA calculation exist in each of the blocks 110, 111 and 161. Multiplications also arise repeatedly elsewhere in the network. For example, the feedforward layers in the feedforward blocks (including block 131) are fully connected layers, which are implemented using (batch) matrix multiplications. It is therefore desirable to implement such batch matrix multiplications as efficiently as possible in the hardware accelerator.
[00160] A single matrix multiplication has the form Z = XY [00161] In this equation, X is a matrix having dimensions [P, Q] and Y is a matrix having dimensions [Q, R]. The output Z is a matrix having dimensions [P, R]. Treating the matrices as 4-D tensors, X has dimensions [1, 1, P, Q], Y has dimensions [1, 1, Q, R], and Z has dimensions [1, 1, P, R].
[00162] Here, "NCHW" notation is adopted for a data tensor, where N is the number of batches, C is the number of channels, H is the height, and W is the width. Likewise, "OIHW" notation is adopted for weights/coefficients, where O is the number of output channels, I is the number of input channels, H is the kernel height and W is the kernel width.
[00163] The task of matrix multiplication can be generalised to batch matrix multiplication, which involves performing multiple matrix multiplications in the same operation. For batch matrix multiplication, the two leading dimensions of the 4-D tensors defined above are allowed to have sizes greater than one. Thus, the more general form is a tensor X with dimensions [M, N, P, Q], and a tensor Y with dimensions [M', N', Q, R]. This represents (max(M, M') max(N, N')) respective multiplications, between matrices of size [P, Q] and [Q, R], performed in the same operation.
[00164] It should be understood that the definition of batch matrix multiplication above applies to any number of dimensions greater than two. In other words, the selection of 4-D tensors is not special or important - it is just convenient for the purposes of explanation. In general, there may be any number of dimensions. The matrix multiplication is performed on two of the dimensions, and the remaining dimensions (which are referred to generically as the "batch" dimensions) all have matching sizes (or have sizes that are equal to 1, in which case broadcasting is implied). The notation [..., Q, ...] and [..., R, ...] will be adopted to indicate any number of additional dimensions that match in the manner described above for M, N, M', and N', with two dimensions that are subject to matrix multiplication. In this notation, the additional dimensions may be positioned in one or more of before, between and after the two dimensions that are subject to matrix multiplication. The dimension of size Q in the two input tensors may be referred to as the "inner" dimension. Here, for the sake of simplicity in the examples and without loss of generality, it will be assumed that the matrix multiplication is performed on the final two dimensions. Thus, the tensors in the examples will have dimensions [..., P, Q] and [..., Q, R].
However, it should be understood that this is not intended to be limiting.
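As a point of comparison, NumPy's matmul follows the same convention: the multiplication is performed over the last two dimensions and the remaining batch dimensions are broadcast. The shapes below are arbitrary illustrative choices.

```python
import numpy as np

X = np.random.rand(2, 3, 4, 5)   # [M=2, N=3, P=4, Q=5]
Y = np.random.rand(1, 3, 5, 6)   # [M'=1 (broadcast), N'=3, Q=5, R=6]

# Multiplies over the final two dimensions and broadcasts the batch dimensions.
Z = np.matmul(X, Y)
print(Z.shape)                   # (2, 3, 4, 6), i.e. [max(M, M'), max(N, N'), P, R]
```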
[00165] Existing neural network accelerator (NNA) hardware is generally specialised in evaluating convolutional layers, and might not natively support matrix multiplication. For example, the exemplary NNA described above, with reference to Figure 2, is adapted to multiply the same set of weights simultaneously by multiple sets of input data elements, in parallel, at multiple processing elements 240. The majority of computation in convolutional neural networks (CNNs) is for operations such as convolution, which require the same coefficients (weights) to be applied across multiple sets of input data. For this reason, some neural network accelerators are specialised towards this kind of operation. In some cases, matrix multiplication might not be natively supported by such NNAs; in other cases, it may be supported but relatively inefficient because of the way the hardware is designed.
[00166] The exemplary hardware accelerator discussed above does not lend itself to direct evaluation of matrix multiplications and batch matrix multiplications. According to aspects of the present disclosure, these operations are instead mapped to other operations, which are directly supported on the exemplary hardware accelerator. Different approaches may be adopted depending on the architecture and capabilities of the hardware accelerator. For example, in some hardware accelerators, the software or hardware might only support convolution using constant weights (coefficients) and might not support dynamic weights. This would restrict the use of convolution-based methods (see below), when wishing to multiply two matrices of dynamic data.
[00167] Faced with a desire to implement a matrix multiplication in a system using a neural network accelerator (NNA), one possibility would be to design a dedicated fixed-function hardware module that is specialised at evaluating matrix multiplication. This hardware module could then be included in the NNA, where it would take responsibility for evaluating any layers involving matrix multiplication, as needed.
[00168] Another alternative would be to evaluate the matrix multiplication in general purpose hardware outside the NNA, such as a general purpose CPU or DSP.
[00169] Providing a dedicated fixed-function hardware module in an NNA may allow for an optimised, fast evaluation. However, it has the drawback that the dedicated fixed-function hardware module occupies additional area in the integrated circuit. This area will be inactive, except when called upon to evaluate a matrix multiplication.
[00170] Meanwhile, evaluating the matrix multiplication in general purpose hardware allows for flexibility, and avoids leaving large areas of the NNA underutilised; however, it is typically less efficient, because the hardware is less specialised.
[00171] Additionally, when using general-purpose hardware that is external to the NNA, there is an overhead in transferring the necessary data from the NNA to the general-purpose hardware (for example, CPU). This typically involves the NNA writing the data to a memory, and the CPU reading the data from the memory, before evaluating the matrix multiplication. This is likely to slow down the evaluation of the matrix multiplication, especially if -as is often the case -the speed of memory access dominates.
Furthermore, CPU time is often at a premium due to the requirements of the operating system and other processes being run. Spending CPU time evaluating matrix multiplications may cause these other processes to slow down and is an inefficient use of resources. The same is also true for GPUs and DSPs.
[00172] Examples according to the present disclosure provide ways to implement matrix multiplications (including batch matrix multiplications) in hardware, based on elementary neural network operations that are available on an exemplary NNA. These elementary neural network operations include element-wise operations, transformations, and convolutions. The mapping unit 12 is configured to map the (batch) matrix multiplication to a graph of elementary neural network operations. The hardware accelerator is configured to evaluate the graph of elementary neural network operations to evaluate the (batch) matrix multiplication operation. The graph may include either (a) at least one convolution operation or (b) at least one element-wise operation. The at least one convolution operation or at least one element-wise operation is evaluated in fixed-function hardware in the hardware accelerator 200.
[00173] Various examples of ways of implementing matrix multiplication operations will now be described in greater detail. Without loss of generality, in these examples, we assume that the batch size B = (max(M,M')max(N,N')) = MN, such that M>=M' and N>=N'. However, it should be understood that this is not limiting on the scope. If the batch size is different from this (for example, if M'>M and/or N'>N), the algorithms would need only replication and/or permutes and/or reshapes for the corresponding tensors. A first set of examples is illustrated in Figures 7-11. These examples rely upon convolution operations to evaluate matrix multiplication operations.
[00174] In the computational graph illustrated in Figure 7, the graph of neural network operations 700 for implementing a matrix multiplication comprises a convolution 730, and transformations 710, 720, 740, which are applied to the input and output of the convolution. The first tensor X will provide the data tensor for the convolution; the second tensor Y will provide the weights for the convolution. A first permute operation 710 is applied to the first tensor X. This permute operation rearranges the dimensions of X from [1, 1, P, Q] to [1, Q, 1, P]. The width dimension with size Q is placed in the channel dimension; and the height dimension with size P is placed in the width dimension.
[00175] In the present implementation, it is preferred to use a width dimension greater than one for data tensor X, because the hardware accelerator 200 is configured to parallelise over the width dimension when performing convolutions, for example by processing adjacent windows across the convolution engines 240. In another implementation, the hardware accelerator might be configured to parallelise over the height dimension. In such a case, it might be advantageous to instead leave the height dimension in place, such that the reconfigured first tensor would have dimensions [1, Q, P, 1].
[00176] A second permute operation 720 is applied to the second tensor Y. This operation rearranges the dimensions of Y from [1, 1, Q, R] to [R, Q, 1, 1]. Thus, the height dimension with size Q is placed in the input channel dimension. The width dimension with size R is placed in the output channel dimension. The first and second permute operations are examples of first transformations, to prepare the input data before the multiplication calculations are performed.
[00177] The matrix multiplication can then be implemented by a 1x1 convolution 730, using the reconfigured first tensor X as data, and using the reconfigured second tensor Y as weights. The convolution 730 has R filters, Q input channels, stride 1 and no padding. The output of this convolution will have dimensions [1, R, 1, P]. A third permute operation 740 is applied to this, to restore the dimensions to the configuration corresponding to the original inputs X and Y. Following the permute 740, the result Z of the matrix multiplication has dimensions [1, 1, P, R]. The third permute operation is an example of a second transformation, applied to the results of the multiplication calculations, to restore data such that the dimensions are in a desired order. Usually, this will match the ordering of the dimensions of the input data. It should be understood that the second transformation (third permute operation 740) may be eliminated in some circumstances. In particular, if the neural network includes two successive matrix multiplications, then the output of the convolution 730 in the first of these matrix multiplications may be passed directly to an input of the convolution 730 in the second matrix multiplication (eliminating also the second permute operation 710 in the second matrix multiplication). In essence, the third permute operation 740 in the first matrix multiplication and the first permute operation 710 in the second matrix multiplication cancel one another out. In this way, the first permute operation 710 may be performed only at the start of a sequence of successive matrix multiplications, and the third permute operation may be performed only at the end of that sequence.
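A sketch of the Figure 7 mapping is given below, using PyTorch purely as a stand-in for the accelerator's permute and convolution primitives; the shapes and the correctness check against a direct matrix multiplication are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def matmul_as_conv(x, y):
    """X [1, 1, P, Q] times Y [1, 1, Q, R] via a 1x1 convolution (Figure 7)."""
    data = x.permute(0, 3, 1, 2)        # [1, 1, P, Q] -> [1, Q, 1, P]  (NCHW data)
    weights = y.permute(3, 2, 0, 1)     # [1, 1, Q, R] -> [R, Q, 1, 1]  (OIHW weights)
    out = F.conv2d(data, weights)       # R filters, Q input channels -> [1, R, 1, P]
    return out.permute(0, 2, 3, 1)      # [1, R, 1, P] -> [1, 1, P, R]

P, Q, R = 4, 3, 5
x = torch.randn(1, 1, P, Q)
y = torch.randn(1, 1, Q, R)
assert torch.allclose(matmul_as_conv(x, y), torch.matmul(x, y), atol=1e-5)
```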
[00178] Figure 8 extends the approach of Figure 7 to construct a graph of neural network operations 800 for implementing a batch matrix multiplication. The approach taken in Figure 8 is to split the batch matrix multiplication into individual matrix multiplications, and to implement each of the individual matrix multiplications using the approach of Figure 7.
The first tensor X, which has dimensions [M, N, P, Q], is split 810 into MN constituent matrices each having dimensions [1, 1, P, Q]. For the second tensor Y, if broadcasting is required (that is, if M'=1 and/or if N'=1), then the relevant dimensions of Y are repeated making it of shape [M, N, Q, R]. The second tensor Y is split 820 into MN constituent matrices each having dimensions [1, 1, Q, R]. The respective pairs of constituent matrices are each multiplied together using the graph of neural network operations 700 in the computational graph shown in Figure 7. Each of these implements the matrix multiplication as a convolution 730, and produces a respective tensor of dimensions [1, 1, P, R]. If broadcasting is involved, then this can be handled by feeding the same constituent matrix to multiple instances of the operations 700. The results of the MN operations are then concatenated 840, to construct the result of the batch matrix multiplication with dimensions [M, N, P, R].
[00179] Figure 9 shows one specific instance relying on the approach of Figures 7-8. A batch matrix multiplication is performed between a first tensor X with dimensions [MN=2, P=4, Q=3] and a second tensor Y with dimensions [MN=2, Q=3, R=5]. Here, without loss of generality, we are considering only the combined size MN of the individual dimensions M and N. The method works in the same way, irrespective of whether M=1 and N=2, or M=2 and N=1. The first tensor X is split 810 into two tensors each of dimensions [1, 1, 4, 3]. Each of these is permuted 710 into a first reconfigured tensor of dimensions [1, 3, 1, 4]. The second tensor Y is split 820 into two tensors of dimensions [1, 1, 3, 5], each of which is permuted 720 into a second reconfigured tensor of dimensions [5, 3, 1, 1]. Each first reconfigured tensor (acting as data) is convolved 730 with the respective second reconfigured tensor (acting as weights), to produce an output with dimensions [1, 5, 1, 4]. These are permuted 740 to have dimensions [1, 1, 4, 5] and concatenated 840 to produce an output tensor Z with dimensions [MN=2, P=4, R=5].
[00180] Figure 10 shows a computational graph comprising a graph of neural network operations that offer another way of implementing a batch matrix multiplication - this time using a grouped convolution. A specific instance applying this method is illustrated in Figure 11. The first input tensor X (having dimensions [M, N, P, Q]) is permuted 910 to a tensor of dimensions [M, N, Q, P]. It is then reshaped 912 to a tensor of dimensions [1, MNQ, 1, P]. The second input tensor Y (having dimensions [M', N', Q, R]) is permuted 920 to a tensor of dimensions [M', N', R, Q] and then reshaped 922 to a tensor of dimensions [M'N'R, Q, 1, 1]. If broadcasting is required (that is, if M'=1 and/or if N'=1), then this can be implemented by repeating the relevant dimensions appropriately in Y thereby making it of shape [MNR, Q, 1, 1]. A grouped convolution 930 is then performed, using the first reconfigured tensor [1, MNQ, 1, P] as data and the second reconfigured tensor [MNR, Q, 1, 1] as weights. There are MN groups each having Q channels. This produces a tensor of dimensions [1, MNR, 1, P], which is firstly reshaped 940 to have dimensions [M, N, R, P] and then permuted 950 to produce the output tensor Z with dimensions [M, N, P, R]. The specific instance in Figure 11 uses input tensors X and Y with the same dimensions as those of Figure 9, for ease of comparison. If the batch matrix multiplication is part of a sequence of consecutive multiplications, then the final permute operation 950 might not be necessary. In a similar manner to that discussed above for Figure 7, the initial permute 910 of the next batch multiplication in the sequence effectively cancels with the final permute 950 of the preceding batch multiplication.
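A corresponding sketch of the grouped-convolution mapping of Figures 10 and 11 is given below, again using PyTorch as a stand-in for the accelerator's primitives and assuming, for simplicity, that no broadcasting of Y is required.

```python
import torch
import torch.nn.functional as F

def batch_matmul_grouped_conv(x, y):
    """Batch matrix multiplication of X [M, N, P, Q] and Y [M, N, Q, R]
    via a single grouped 1x1 convolution (Figure 10)."""
    M, N, P, Q = x.shape
    R = y.shape[-1]
    data = x.permute(0, 1, 3, 2).reshape(1, M * N * Q, 1, P)      # [1, MNQ, 1, P]
    weights = y.permute(0, 1, 3, 2).reshape(M * N * R, Q, 1, 1)   # [MNR, Q, 1, 1]
    out = F.conv2d(data, weights, groups=M * N)                   # [1, MNR, 1, P]
    return out.reshape(M, N, R, P).permute(0, 1, 3, 2)            # [M, N, P, R]

x = torch.randn(2, 3, 4, 5)   # M=2, N=3, P=4, Q=5
y = torch.randn(2, 3, 5, 6)   # Q=5, R=6
assert torch.allclose(batch_matmul_grouped_conv(x, y), torch.matmul(x, y), atol=1e-5)
```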
[00181] There are various alternative ways of implementing matrix multiplication operations. A second set of examples is illustrated in Figures 12-13. These examples rely upon element-wise operations to evaluate matrix multiplication operations.
[00182] Figure 12 illustrates a computational graph for implementing a batch matrix multiplication using element-wise multiplication, according to an example. For the second tensor Y, if broadcasting is required (that is, if M'=1 and/or if N'=1), then the relevant dimensions of Y are repeated making it of shape [M, N, Q, R]. A permute operation 1110 is applied to the second tensor Y, to reconfigure it into a third tensor having dimensions [M, N, R, Q]. The third tensor is then split 1120, along the height dimension, into R constituent tensors, each of dimensions [M, N, 1, Q]. Element-wise multiplication 1130 is performed between each one of these R constituent tensors and the first tensor X. Recall that the first tensor X has dimensions [M, N, P, Q]; therefore, this element-wise multiplication involves broadcasting over at least the height dimension. Alternatively, the constituent tensors may be replicated explicitly P times over the height dimension, prior to the element-wise multiplication. The output of the element-wise multiplication 1130 consists of R tensors, each having dimensions [M, N, P, Q]. These tensors are concatenated 1140 along the width dimension, producing a tensor of dimensions [M, N, P, RQ]. Next the concatenated tensor is permuted 1150 to reconfigure it into a tensor of dimensions [M, RQ, N, P]. Essentially this operation arranges the results of the element-wise multiplication along the channel axis, in R groups of size Q. The next task is to sum over each group of Q channels. In the present example, this is done by means of a grouped convolution 1160, using a weight tensor consisting entirely of ones and having dimensions [R, Q, 1, 1]. The output of this grouped convolution 1160 has dimensions [M, R, N, P]. In a final step, this tensor is permuted 1170 to reconfigure it into the output Z, with dimensions [M, N, P, R].
[00183] Figure 13 shows a practical instance of a matrix multiplication, based on the computational graph in Figure 12. Note that - in contrast to Figure 12 - in Figure 13, the arrows represent operations and the tensors are represented as blocks. In this example, the first tensor X has dimensions [1, 1, 4, 3] and the second tensor Y has dimensions [1, 1, 3, 5]. In step 1110, the second tensor is permuted to produce a third tensor having dimensions [1, 1, 5, 3]. In step 1120, this third tensor is split into 5 constituent tensors each of size [1, 1, 1, 3]. In step 1130, each of the 5 constituent tensors is multiplied element-wise by the first tensor X. This produces 5 respective tensors, each one having dimensions [1, 1, 4, 3]. These are concatenated in step 1140 along the horizontal (width) dimension, to produce a concatenated tensor with dimensions [1, 1, 4, 15]. The width dimension now contains 5 groups of 3 elements in each of the 4 rows. The concatenated tensor is permuted 1150 to reconfigure it with dimensions [1, 15, 1, 4]. (Note that this step is not illustrated in Figure 13.) Grouped convolution 1160 is performed on this tensor, using a weight tensor of dimensions [5, 3, 1, 1] to sum the groups of 3. The result of this summation / grouped convolution has dimensions [1, 5, 1, 4]. This is permuted 1170 to produce the final output Z, having dimensions [1, 1, 4, 5].
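The element-wise mapping of Figures 12 and 13 might be sketched in the same way; as before, the PyTorch calls stand in for the element-wise operations unit, the MMM and the convolution engines, and the test shapes are those of the example above.

```python
import torch
import torch.nn.functional as F

def matmul_elementwise(x, y):
    """X [M, N, P, Q] times Y [M, N, Q, R] using element-wise multiplications,
    concatenation, and a grouped convolution with all-ones weights that
    performs the per-group summation (Figure 12)."""
    M, N, P, Q = x.shape
    R = y.shape[-1]
    rows = torch.split(y.permute(0, 1, 3, 2), 1, dim=2)   # R tensors of [M, N, 1, Q]
    products = [x * row for row in rows]                  # broadcast over height -> [M, N, P, Q]
    cat = torch.cat(products, dim=3)                      # [M, N, P, RQ]
    cat = cat.permute(0, 3, 1, 2)                         # [M, RQ, N, P]
    ones = torch.ones(R, Q, 1, 1)                         # grouped-conv weights of ones
    summed = F.conv2d(cat, ones, groups=R)                # [M, R, N, P]
    return summed.permute(0, 2, 3, 1)                     # [M, N, P, R]

x = torch.randn(1, 1, 4, 3)
y = torch.randn(1, 1, 3, 5)
assert torch.allclose(matmul_elementwise(x, y), torch.matmul(x, y), atol=1e-5)
```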
[00184] Note that the grouped convolution used in the preceding example is just one way to sum the groups of elements as required. It is possible to implement this step in other ways. For example, starting from the concatenated tensor having dimensions [1, 1, 4, 15], one could instead construct a convolution operation with a row of ones [1, 1, 1, 3], using a stride of 3 in the horizontal direction, and apply this to implement the summation.
[00185] Although Figure 13 illustrates the graph of neural network operations being applied to a single matrix multiplication (M=N=1), the method is general and applies also for batch matrix multiplications (MN>1) without modification.
[00186] The MMM can be used to implement the various permute, reshape, split and concatenate operations, as required. Element-wise operations can be implemented using the element-wise operations unit 285. Convolutions (including grouped convolutions) can be carried out using the convolution engines 240. In this way, all of the neural network operations in the graph of neural network operations may be implemented in fixed-function hardware.
Softmax
[00187] As already explained above, a Softmax calculation arises in each instance of an SDPA calculation. Additionally, although not shown explicitly in Figures 1A and 1B, the prediction heads at the outputs of either the encoder stack or the decoder stack may comprise a linear projection and a further Softmax. This Softmax may be used for classification, for example, to identify the most likely class from a vector of class probabilities.
[00188] A softmax layer performs, for any value x_i in a set or vector of values x, the operation:

s(x_i) = e^{x_i} / Σ_j e^{x_j}

In order to drop the subscript notation, this equation can be rewritten in terms of a vector x:

S(x) = e^x / Σ_{z∈x} e^z

[00189] Softmax maps input values in the range (-∞, +∞) to outputs in the range [0, 1].
Furthermore, the sum of the output values is 1 (as required for a discrete probability distribution).
[00190] It is known that the evaluation of a softmax layer may suffer from numerical instability problems if the input values x are large in magnitude. The input x may have such large positive values that overflow occurs in the output of the exponential e^x, or such large negative values that underflow occurs. Even when overflow does not occur, with large values of x, some of the exponential values e^x may be so large in comparison with others that the normalisation is no longer reliable.
[00191] A solution to at least some of these issues is to subtract the maximum value from all values in the tensor (or vector) x:

s(x) = e^{x-M} / Σ_{z∈x} e^{z-M}

where M = max(x). This redefined layer is identical to the definition above but is more stable numerically. The subtraction of the maximum reduces the range of the input from (-∞, +∞) to (-∞, 0], but does not affect the result.
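The effect of the max subtraction can be seen in a short NumPy example (the input values are arbitrary illustrative choices):

```python
import numpy as np

x = np.array([1000.0, 1001.0, 1002.0], dtype=np.float32)

# Naive evaluation overflows: exp(1000) is not representable in float32,
# so the result is inf / inf = NaN (NumPy also emits an overflow warning).
naive = np.exp(x) / np.sum(np.exp(x))

# Max-subtracted evaluation is stable and gives the mathematically identical result.
shifted = x - np.max(x)                               # values now in (-inf, 0]
stable = np.exp(shifted) / np.sum(np.exp(shifted))
print(naive)                                          # [nan nan nan]
print(stable)                                         # approximately [0.090, 0.245, 0.665]
```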
[00192] Faced with a desire to implement a softmax layer using a neural network accelerator (NNA), similar considerations apply as for the matrix multiplication operation discussed above. Implementing the operation using natively supported elementary operations has advantages over each of the potential alternative possibilities.
[00193] Examples according to the present disclosure provide ways to implement a softmax layer in hardware, based on elementary neural network operations that are available on an exemplary NNA. The softmax layer can be viewed as a computational graph, as shown in Figure 14A, and the individual operations in the graph can each be replaced with one or more operations in the NNA hardware. The graph containing these substituted operations can then be evaluated by the NNA.
[00194] Examples according to the present disclosure use elementary neural network operations, executed by fixed-function hardware units, to implement a softmax layer. In the present implementation, the calculations are performed in fixed point arithmetic. Experiments have shown that the fixed point implementation is sufficiently accurate that it does not significantly degrade the overall accuracy of the exemplary neural networks tested.
[00195] A softmax layer may be constructed from the following operations:
* A maximum operation;
* A subtraction;
* An exponential operation implementing the function f(x) = e^x;
* A summation; and
* A division.
[00196] For each of these operations, there may be more than one way that the operation can be restructured for execution on the hardware accelerator. The operations will be explained in turn in more detail below.
[00197] Figure 14A is a computational graph illustrating how the operations available in the exemplary NNA can be used to implement a softmax layer. In this example, an exemplary input tensor x will be considered, the tensor having batch, height, width, and channel dimensions. In this example, it is assumed that the softmax layer is to be evaluated over the channel dimension of the input. It should be understood that the softmax layer could be evaluated over one or more other dimensions, and that it may be applied to tensors with any number of dimensions ordered in any way.
[00198] First, the input tensor x undergoes a maximum operation 2110. This operation is performed over the dimension or dimensions in which the softmax layer is to be evaluated. In the current example of channel-wise softmax, this operation is performed along the channel dimension. Where the softmax layer is evaluated over a different dimension or dimensions, the maximum operation 2110 would instead be performed along that different dimension or dimensions. The maximum operation 2110 returns the largest value over the relevant dimension(s) within x, to produce a tensor of maximum values M. In subtraction operation 2120, also referred to as the first subtraction, the tensor of maximum values M is subtracted from the respective elements of the tensor x. (This can be implemented using broadcasting, as discussed in further detail below.) This subtraction operation 2120 results in a negative-shifted tensor x - M. The negative-shifted tensor is input to an exponential operation 2130. This calculation applies each element of the negative-shifted tensor as a power of Euler's number e. The exponential operation 2130 results in a tensor of exponentiated values, referred to herein as the exponential tensor $e^{x-M}$. The exponential tensor $e^{x-M}$ undergoes a summation 2140. This summation sums all of the elements of the exponential tensor along the dimension or dimensions in which the softmax layer is to be evaluated (in the current example, the channel dimension), resulting in a tensor containing the sum of the exponentiated values $\sum e^{x-M}$. In division operation 2150, the exponentiated values are divided by their sum. This returns the output of the softmax layer: $\frac{e^{x-M}}{\sum e^{x-M}}$.
[00199] Figure 14B is a computational graph illustrating an example of how the operations available in the exemplary NNA can be used to implement the exponential operation 2130. According to this example, the exponential operation 2130 consists of the following operations:
* A negation 2132;
* A sigmoid function 2134;
* A reciprocal operation 2136; and
* A second subtraction 2138.
[00200] Continuing with the example of Figure 14A, the input for the exponential operation 2130 is the negative-shifted tensor x - M. This tensor undergoes a negation 2132 to produce a negated tensor -(x - M). The negation 2132 may be implemented in a variety of ways - for example, by subtracting the negative-shifted tensor from 0, or by subtracting the negative-shifted tensor from itself twice. The negated tensor is input to a sigmoid function 2134, which determines a sigmoid value for each element of the negated tensor. The output of the sigmoid function 2134 is a tensor of sigmoid negated values $\sigma(-(x-M))$. The tensor of sigmoid negated values undergoes a reciprocal operation 2136, which determines the reciprocal of each of the sigmoid negated values. The reciprocal operation 2136 returns a tensor of reciprocal sigmoid values. Finally, the tensor of reciprocal sigmoid values undergoes a second subtraction 2138, to subtract a constant (one) from each element of the tensor. This returns the exponential tensor $\frac{1}{\sigma(-(x-M))} - 1$, which is identical to $e^{x-M}$.
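The identity behind this decomposition can be checked numerically. The sketch below is a floating-point reference model of the four steps of Figure 14B (the function names are illustrative); on the NNA, the negation and subtraction would run on the EWO unit, the sigmoid on the activation unit, and the reciprocal on, for example, the LRN unit.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def exp_via_sigmoid(shifted):
    """Exponential of a negative-shifted tensor, built from the operations of Figure 14B."""
    negated = 0.0 - shifted      # negation 2132 (subtract from zero)
    sig = sigmoid(negated)       # sigmoid function 2134
    recip = 1.0 / sig            # reciprocal operation 2136
    return recip - 1.0           # second subtraction 2138 (subtract the constant one)

shifted = np.array([-3.0, -1.5, 0.0])   # x - M is always <= 0 after the first subtraction
print(np.allclose(exp_via_sigmoid(shifted), np.exp(shifted)))   # True
```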
[00201] Two possible implementations of the maximum operation 2110 will be explained with reference to Figures 15A, 15B, 15C, and 16. As was explained with reference to Figure 14A and 14B, and as shown again in Figures 15A and 16, the maximum operation 2110 receives an input tensor and returns the maximum value of the elements over the channel dimension. The channel dimension is given as an example because softmax is most commonly applied over the channel dimension. However, it should be understood that the scope of the present disclosure is not limited to this example.
[00202] The maximum operation 2110 can be implemented:
* In the pooling unit 275; or
* In the element-wise operations (EWO) unit 285.
These units may be assisted by transpose or permute operations performed, for example, in the memory manipulation module 40.
[00203] Figure 15B illustrates an implementation of the maximum operation 2110 using the EWO unit 285. An iterative sequence of pairwise maximum operations can be used. The input tensor is split 600 in two on the dimension on which we are computing softmax (the channel dimension in the present example), and the two halves are compared using an element-wise maximum operation 610. For each pair of elements compared, the higher (that is, maximum) of the two is output as the result of the element-wise maximum 610. The result of this operation is a tensor that is half the size of the original. This is itself split in two, and the two halves are compared using a further element-wise maximum operation 610. This process continues iteratively, halving the number of values in each iteration, until the overall maximum values over the channel dimension are found. If the tensor does not have a size that is a power of 2, along the dimension over which the maximum operation is to be applied, then padding may be necessary, to increase the size to the nearest power of 2. The tensor could be padded with zeros, in some examples. If the values in the original tensor are all negative, this will cause the maximum operation to instead return a maximum value of zero. Alternatively, for better conditioning of the softmax layer, the padding could be done with a very large negative value, or by copying one or more existing values in the original tensor. This would be less likely to affect the calculation of the maximum. (In the case of one or more copied values or by using the largest representable negative value, it would be guaranteed not to affect the calculation).
[00204] Figure 15C illustrates the application of the iterative element-wise maximum approach to an exemplary input tensor - here a vector "x" 601, for simplicity. The input vector 601 has 4 channels, each containing a numerical value represented by x1, x2, x3 and x4. First, the vector 601 is split 600 into two sub-vectors 602, 603, each having two elements. Using an element-wise maximum operation 610, the first element of the first sub-vector 602 is compared with the first element of the second sub-vector 603. Similarly, the second element of the sub-vector 602 is compared with the second element of the sub-vector 603. This comparison results in a vector 604. In the example of Figure 15C, x1>x3 and x4>x2; therefore, the vector 604 output by the first element-wise maximum operation consists of x1 and x4. The vector 604 is split 600 to produce sub-vectors 605 and 606, which are again compared using the element-wise maximum operation 610. This returns the maximum element "M" of the input vector 601 - which, in this example, happens to be x4. While this example used a vector having 4 elements, the process applies in the same fashion to vectors having more elements or to tensors with more dimensions. It can also be applied over dimensions other than the channel dimension.
[00205] An alternative to padding is to split the tensor into more than two sub-tensors, each sub-tensor having a size in the relevant dimension that is a power of 2. For example, a tensor with 5 channels may be split into two tensors with 2 channels each and a final tensor with 1 channel. The two tensors with 2 channels can be reduced by splitting and taking the element-wise maximum, as described above. The resulting 1-channel tensors can be compared to produce a tensor with 1 channel. Finally, this tensor can be compared with the remaining tensor with 1 channel, to return the maximum of the original tensor on the channel dimension. This process is illustrated by way of example in Figure 15D. The exemplary input tensor, the vector "x" 611, differs from the input in Figure 15C by the addition of a fifth channel, containing a numerical value x5. The first four channels are processed as illustrated in Figure 15C. This is then followed by a final, additional step, in which the maximum over the first four channels, x4, is compared with the fifth channel, x5, in a further element-wise maximum operation 610. The result of this comparison is the overall maximum over the five channels. (In this example, as illustrated, the maximum happens still to be x4).
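A reference model of this split-and-compare reduction is sketched below in NumPy. It pads odd-length intermediates with the most negative representable value rather than splitting off power-of-2 chunks; both variants are covered by the description above, and the function name and test data are illustrative.

```python
import numpy as np

def channel_max_by_splitting(x):
    """Maximum over the channel (last) axis using only splits and pairwise element-wise maxima."""
    while x.shape[-1] > 1:
        if x.shape[-1] % 2 == 1:   # pad with the largest representable negative value (cannot win a maximum)
            pad = np.full(x.shape[:-1] + (1,), np.finfo(x.dtype).min, dtype=x.dtype)
            x = np.concatenate([x, pad], axis=-1)
        half = x.shape[-1] // 2
        x = np.maximum(x[..., :half], x[..., half:])   # split 600 followed by element-wise maximum 610
    return x

x = np.array([[0.2, -1.0, 0.7, 0.3, 0.9]], dtype=np.float32)   # 5 channels, not a power of 2
print(channel_max_by_splitting(x))   # [[0.9]]
```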
[00206] The splitting operation may be performed by the memory manipulation module 40, by reading data from one location and writing a first part of the data to a first location and a second part of the data to a second location. Alternatively, the splitting might not require a separate operation, and may instead be performed as part of the output of the preceding operation. In the example of Figure 15B, the output of the element-wise maximum 610 may be split by writing a first part of the output to a first location in the memory 25 and a second part of the output to a second location in the memory 25.
[00207] Figure 16 illustrates the implementation of the maximum operation 2110 using the pooling unit 275. As mentioned at the outset above, in the present example softmax is applied over the channel dimension. Therefore, the maximum operation 2110 is also applied over the channels. To facilitate implementing the maximum operation 2110 in the pooling unit 275, a transpose or permute operation 2510 can be applied to the input tensor before the maximum pooling operation 2520 is performed by the pooling unit 275. This is done because, in the exemplary hardware accelerator, the pooling unit is specialised at pooling over spatial dimensions. In order to pool the channel elements of the input, the channel dimension is transposed with one of the spatial dimensions. (This can be done using the MMM 40). Then, the result of the maximum pooling operation 2520 can be transformed back to the original dimensions of the input by another transpose or permute operation 2512 that inverts the transpose or permute operation 2510 to restore the original ordering of the dimensions. (Again, this can be done using the MMM 40). If softmax is being performed in the spatial dimensions (for example, the height and/or width dimensions) then these transpose operations might not be needed. Similarly, where the pooling unit 275 is designed to operate in the channel dimension, transpose operations might not be necessary. In some cases, the pooling unit 275 may have a maximum window size that is smaller than the size of the dimension(s) over which the maximum is to be calculated. If this arises, the max pooling can be iterated a number of times, in order to calculate the maximum over the larger set of values.
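The transpose-pool-transpose route of Figure 16 can be modelled as follows in NumPy; the (batch, channel, height, width) layout and the choice of swapping the channel and width axes are assumptions for the sake of the sketch, and the plain `max` stands in for the pooling unit's max pooling.

```python
import numpy as np

def channel_max_via_pooling(x):
    """Channel-wise maximum computed as a pooling over a permuted axis, then permuted back."""
    permuted = np.transpose(x, (0, 3, 2, 1))         # transpose 2510: swap channel and width axes
    pooled = permuted.max(axis=-1, keepdims=True)    # max pooling 2520 over the former channel axis
    return np.transpose(pooled, (0, 3, 2, 1))        # transpose 2512: restore the original ordering

x = np.arange(2 * 4 * 3 * 3, dtype=np.float32).reshape(2, 4, 3, 3)
print(channel_max_via_pooling(x).shape)                                         # (2, 1, 3, 3)
print(np.allclose(channel_max_via_pooling(x), x.max(axis=1, keepdims=True)))    # True
```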
[00208] The subtractions 2120, 2138 and the negation 2132 can be performed by the element-wise operations unit 285. The element-wise operations unit can perform a respective subtraction operation on each element of a tensor. Where a constant is subtracted -as in the subtraction 2138 -the subtraction may be performed either by the element-wise operations unit 285 or by the activation unit 255. Subtraction of a constant (for example, subtraction of 1) can be implemented in the activation unit by loading the function y = x -c into the LUT, where c is the constant to be subtracted. Subtraction of a constant can be implemented as an element-wise addition in the element-wise operations unit, by adding the negative of the constant (for example, adding -1). Similarly, the negation 2132 could be performed either by the element-wise operations unit 285 (by subtracting the input tensor from a constant, 0), or by the activation unit 255 (by loading the function y = -x into the LUT).
[00209] In the present example, the negation 2132 is performed by subtracting each is element of the tensor from zero. It could also be performed by element-wise subtraction in other ways -for example, by subtracting the tensor from itself twice, or multiplying every element of the tensor by two and subtracting the result from the original tensor. Alternatively, the negation may be performed by changing the sign bit of each element of the tensor where a sign and magnitude representation of numbers is used. Where a two's complement representation is used, the negation may be performed by inverting all of the bits representing the number and then adding one.
[00210] The exponential operation 2130 receives an input and raises e to the power of that input. For an input of x, the output of the exponential operation 2130 is $e^x$. [00211] The exponential operation could be evaluated:
1. Directly in a lookup table (LUT), in the activation unit 255;
2. By means of a sigmoid function 2134, reciprocal 2136, negation 2132 and subtraction 2138, as shown in Figure 14B; or
3. By means of a reciprocal sigmoid function, negation 2132 and subtraction 2138.
[00212] The first implementation is relatively straightforward, provided that the hardware and software of the hardware accelerator allow the LUT to be programmed with the exponential function. However, this might not always be possible. Therefore, the second implementation may be used as an alternative.
[00213] The second implementation makes use of the following identity: $e^x = \frac{1}{\sigma(-x)} - 1$, where $\sigma(-x)$ is the negative sigmoid function. Here, in common with most literature in the field of neural networks, we use the term "sigmoid" synonymously with "logistic function". That is, the sigmoid (logistic) function is defined as: $\sigma(x) = \frac{1}{1 + e^{-x}}$. The negative sigmoid function is therefore: $\sigma(-x) = \frac{1}{1 + e^{x}}$. [00214] The second implementation uses elementary neural network operations to implement the steps shown in Figure 14B. The negation 2132 and subtraction 2138 can be evaluated by the EWO unit 285, as explained above. The sigmoid function 2134 can be evaluated by the activation unit 255, as a sigmoid activation. In other words, instead of loading the exponential function into the LUT, the sigmoid function is instead loaded and evaluated. Because the sigmoid is a common activation function, it is likely to be available natively in the activation unit 255. Thus, in this case, the exponential operation is implemented indirectly by means of a sigmoid activation (together with other elementary operations).
[00215] The reciprocal 2136 of the value of $\sigma(-x)$ can be evaluated in several ways, depending on the exact capabilities of the hardware accelerator 200. The options for evaluation of a reciprocal include:
* A reciprocal lookup in the LUT of the activation unit 255;
* Using the LRN unit 265; and
* An element-wise division, using the EWO unit 285.
In the present example, the reciprocal function is performed using the LRN unit. Each of the three options will be described in greater detail below.
[00216] Referring once again to Figure 14B, in principle, the sigmoid function 2134 and the reciprocal 2136 could be combined, if both are implemented by means of lookups in an LUT. Rather than carry out two lookups in two LUTs, the functions could be combined and a single lookup performed. That is, the LUT of the activation unit 255 could be programmed to contain a reciprocal sigmoid function $f(x) = \frac{1}{\sigma(x)}$. Going one step further, the negation 2132 could also be subsumed into the LUT, so that it returns the result of the function $f(x) = \frac{1}{\sigma(-x)}$. In practice, however, it is likely that if the LUT were fully programmable in this way, it would be easier to simply program it with the exponential function.
[00217] The summation 2140, shown in Figure 17A, can be implemented by a 1x1 convolution with a kernel of ones, using the convolution engines 240. The convolution operation 2570 is shown in Figure 17B. Using the example of Figure 14A and 14B, consider the exponential tensor $e^{x-M}$. This tensor has dimensions B×C×H×W, with B batches, C channels, H rows and W columns. To evaluate a softmax layer over the channel dimension, the elements of the exponential tensor must be summed over the channel dimension. In other words, elements sharing the same height, width and batch location but in different channels will be summed together. This summation 2140 is performed by convolving 2570 the input tensor with a kernel having dimensions 1×C×1×1, where the kernel has dimensions O×I×K_H×K_W, where I is the number of input channels, O is the number of output channels, K_H is the kernel height and K_W is the kernel width. Each element of the kernel has a value of one. The kernel is convolved 2570 across the height, width and batch dimensions of the input tensor. The result of this process is a tensor that contains the summation of the elements of the exponential tensor across the channel dimension.
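A reference model of this convolution-as-summation is shown below. Because the kernel is 1x1 with all-ones weights, the convolution reduces to a sum over the channel axis, which an einsum expresses directly; the layout and function name are assumptions of the sketch.

```python
import numpy as np

def channel_sum_via_conv(x):
    """Channel-wise summation 2140 as a 1x1 convolution with an all-ones kernel (O x I x KH x KW = 1 x C x 1 x 1)."""
    kernel = np.ones((1, x.shape[1], 1, 1), dtype=x.dtype)
    # Each output element is sum over c of x[b, c, h, w] * kernel[0, c, 0, 0].
    return np.einsum('bchw,ocij->bohw', x, kernel)

x = np.random.rand(2, 8, 4, 4).astype(np.float32)
print(np.allclose(channel_sum_via_conv(x), x.sum(axis=1, keepdims=True)))   # True
```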
[00218] Where a softmax layer is evaluated over one or more spatial dimensions, a depth-wise convolution may be used, meaning that the kernel is applied to each channel separately. In this case, the kernel would have a size of one in the channel dimension and a size greater than one in the height and/or width dimensions. If the hardware accelerator is limited to a certain maximum kernel size, it may be necessary to iterate the convolution in order to capture all elements of the input tensor, in a similar manner to that described above for the max pooling operation. It should be understood that in other examples, the softmax layer may be evaluated over other combinations of dimensions, such as the channel dimension and one or more spatial dimensions. The convolution kernel will be adapted according to the relevant dimensions.
[00219] Alternatively, the summation 2140 could be implemented by the EWO unit 285 using iterated pairwise addition operations. This approach is very similar to the element-wise maximum operation explained in relation to Figures 15B and 15C. The difference is that instead of implementing an element-wise maximum operation after each split, an element-wise addition operation is used. Each element of the sub-vector produced in this operation is the sum of two respective elements of the vectors on which it operated. The split and addition processes are repeated until all the elements have been summed over the channel dimension. Similarly to the maximum operation of Figures 15B and 15C, it may be necessary to pad the tensor beforehand, so that its size in the relevant dimension is a power of 2 (that is, its size is $2^P$, where P is a positive integer). In this case, because the element-wise operation is addition, the tensor should be padded with zeros. As with the maximum operation, padding may be avoided by instead splitting the input tensor into more than two sub-tensors, each having a size in the relevant dimension that is a power of 2.
[00220] The reciprocal 2136 can be implemented:
* Using the LRN unit 265;
* Using the activation unit 255, by loading a reciprocal function into an LUT of the activation unit; or
* As an element-wise division, using the element-wise operations unit 285, by dividing a constant, 1, by the input values whose reciprocal is to be calculated.
[00221] If the LUT of the activation unit 255 is available, and is programmable, it can be programmed with a reciprocal function $f(x) = 1/x$.
[00222] The LRN unit 265 may be used, in particular, if the LUT is not available or if it is not programmable to implement arbitrary functions. The LRN unit 265 is designed to carry out the following LRN calculation:
$$b_{i,x,y} = \frac{a_{i,x,y}}{\left(k + \alpha \sum_{j=\max(0,\,i-n/2)}^{\min(N-1,\,i+n/2)} (a_{j,x,y})^2\right)^{\beta}}$$
[00223] By setting $\alpha = 1$, $\beta = 1$, $k = n = 0$, this function can be reduced to: $b_{i,x,y} = \frac{a_{i,x,y}}{(a_{i,x,y})^2}$ [00224] Which is identical to the desired reciprocal: $b_{i,x,y} = 1/a_{i,x,y}$ [00225] Either of these two solutions can be used to evaluate reciprocals. Both ways of evaluating reciprocals can also be useful for evaluating division operations, as explained further below. Alternatively, as mentioned above, the reciprocal could itself be implemented by means of an element-wise division, in the element-wise operations unit 285 (assuming that element-wise division is supported).
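The reduction of the LRN calculation to a reciprocal can be checked with a small reference model of the formula above (NumPy, channel-first layout assumed; the real LRN unit operates in fixed point and its parameter names may differ).

```python
import numpy as np

def lrn(a, k, alpha, beta, n):
    """Reference model of the LRN calculation above, applied over the channel axis of a (C, H, W) tensor."""
    C = a.shape[0]
    out = np.empty_like(a)
    for i in range(C):
        lo, hi = max(0, i - n // 2), min(C - 1, i + n // 2)
        sq_sum = (a[lo:hi + 1] ** 2).sum(axis=0)
        out[i] = a[i] / (k + alpha * sq_sum) ** beta
    return out

a = np.random.rand(4, 2, 2).astype(np.float32) + 0.5          # positive inputs, as for sigmoid outputs
print(np.allclose(lrn(a, k=0.0, alpha=1.0, beta=1.0, n=0), 1.0 / a))   # True
```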
[00226] In the evaluation of the exponential operation as shown in Figure 14B, the reciprocal 2136 (for example, implemented by the LRN unit 265) produces a reciprocal tensor $\frac{1}{\sigma(-(x-M))}$, which is passed to the subtraction operation 2138 (for example, implemented by element-wise subtraction in the EWO unit 285).
[00227] In some examples, division 2150 (reproduced in Figure 18A) may be performed directly by means of an element-wise division operation, using the element-wise operations unit 285. However, some hardware accelerators 200 might not support element-wise division. For such eventualities, it is desirable to be able to implement the division in other ways.
[00228] Figures 18B and 18C show two alternative ways to implement the division 2150 shown in Figure 18A -in particular, if it cannot be performed directly by the EWO unit 285. Both of these approaches exploit the recognition that a division can be evaluated as a combination of a reciprocal operation and a multiplication. In both cases, the multiplication 2580 can be performed by the EWO unit 285.
[00229] Figure 18B illustrates the use of an LRN operation 2552 to evaluate the reciprocal. Using this method, in the context of the example of Figure 14A, the reciprocal of the tensor containing the sum of the exponentiated values $\sum e^{x-M}$ can be evaluated. The reciprocal tensor is passed to the EWO unit 285, where it is multiplied 2580 with the exponential tensor $e^{x-M}$, to return the output of the softmax layer.
[00230] Figure 18C illustrates an alternative to the method of Figure 18B, in which a reciprocal lookup 2545 in the LUT of the activation unit 255 is used instead of the LRN 2552 to implement the reciprocal. In some cases, if the activation unit 255 is programmable to implement arbitrary functions, such as the reciprocal function, it may be preferable to use the activation unit 255 to carry out a reciprocal lookup 2545 (as shown in Figure 18C) instead of using the LRN unit 265 to perform an LRN operation 2552 (as shown in Figure 18B). A lookup may be faster and more energy efficient than an LRN operation once the reciprocal function has been loaded into the LUT.
Layer normalisation [00231] As already explained above, with reference to Figures 1A and 1B, layer normalisation arises in each of blocks 120, 121, 140, 141, and 171. Layer normalisation applies the following normalisation over its inputs x: $$y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} \cdot \gamma + \beta$$ [00232] Here, $\mathrm{E}[x]$ is the mean of the input data and $\mathrm{Var}[x]$ is its variance. The layer normalisation first subtracts the mean from the input data, and then scales it by the reciprocal of the standard deviation. The constant $\epsilon$ has a small positive value, and is included for numerical stability, to avoid divide-by-zero cases. In the present example, $\epsilon = 0.00001$. These operations normalise the data to have zero mean and unit standard deviation. The multiplicative parameter $\gamma$ then scales the data to have standard deviation equal to $\gamma$. Finally, the additive parameter $\beta$ shifts the data to have a mean equal to $\beta$. [00233] The layer normalisation operation can be implemented in at least three ways in the fixed-function hardware of the exemplary hardware accelerator 200. These will be described with reference to Figures 19A-19C.
[00234] Figure 19A shows a computational graph for implementing a layer normalisation according to a first example. The mean of the input data is calculated using a convolution operation 1910 (performed by the convolution engines 240). The mean is then subtracted from the original input data, using a subtraction operation 1920. This can be done using the element-wise operations unit 285. The output of the subtraction operation 1920 is squared in a multiplication operation 1930. Again, this can be done using the element-wise operations unit 285. A second convolution operation 1940 sums the squares that were output by the multiplication 1930, thereby calculating the variance of the input data. The standard deviation is calculated as the square root of the variance. The square root operation 1945 may be implemented using the local response normalisation unit 265. The standard deviation is passed to a reciprocal operation 1950. This may be implemented using an LUT in the activation unit 255 (although it could also be implemented in other ways). It is noted that the order of the square root operation 1945 and the reciprocal operation 1950 could be reversed, in other examples. In principle, the results should be identical; in practice, there may be slight differences, because of rounding or other numerical effects. The mean-subtracted data produced by the subtraction operation 1920 is then multiplied by the reciprocal of the standard deviation, in multiplication operation 1960. This can be implemented as an element-wise multiplication in the element-wise operations unit 285. The scaling factor $\gamma$ is applied in a further element-wise multiplication operation 1970. Finally, the additive offset $\beta$ is applied in an element-wise addition operation 1980.
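The chain of operations in Figure 19A can be summarised by the following NumPy reference model. The per-element mean is obtained here by scaling a sum, the epsilon term is folded into the square root, and the names are illustrative; the hardware realises the same steps with convolutions, element-wise operations, the LRN unit and the activation unit, as described above.

```python
import numpy as np

def layer_norm_fig19a(x, gamma, beta, eps=1e-5):
    """Layer normalisation over the last axis, decomposed into the steps of Figure 19A."""
    n = x.shape[-1]
    mean = x.sum(axis=-1, keepdims=True) / n          # convolution 1910 (a ones kernel scaled by 1/n yields the mean)
    centred = x - mean                                # subtraction 1920
    squared = centred * centred                       # multiplication 1930
    var = squared.sum(axis=-1, keepdims=True) / n     # convolution 1940 -> variance
    std = np.sqrt(var + eps)                          # square root 1945
    inv_std = 1.0 / std                               # reciprocal 1950
    normed = centred * inv_std                        # multiplication 1960
    return normed * gamma + beta                      # scaling 1970 and offset 1980

x = np.random.rand(2, 8).astype(np.float32)
print(layer_norm_fig19a(x, gamma=1.0, beta=0.0).std(axis=-1))   # close to 1.0 for each row
```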
[00235] The computational graph of Figure 19B is similar in many ways to that of Figure 19A, and identical operations are indicated by the same reference numerals. The only difference is in the implementation of the division operation. Whereas in Figure 19A this was implemented as a reciprocal 1950 followed by a multiplication 1960, in the computational graph of Figure 19B it is implemented directly as an element-wise division 1952. This can be performed using the element-wise operations unit 285.
[00236] The computational graph of Figure 19C again differs only in the implementation of the division operation. Similarly to Figure 19A, the division is implemented as a reciprocal followed by a separate multiplication 1960. In Figure 19C, the reciprocal is implemented in three steps. A rectified linear activation function (ReLU) is applied by the activation unit 255, in step 1954. Addition operation 1956 adds a small positive constant (for example, $1 \times 10^{-10}$), to avoid the possibility of division by zero in the next step. The next step is a local response normalisation, performed by the LRN unit 265. The output of the LRN is the reciprocal of the standard deviation. For this task, the LRN unit is configured to calculate the inverse square root of its input. This can be achieved by appropriate setting of the parameters passed to the local response normalisation. In the present example, the parameters are depth_radius=0, bias=0., alpha=1., beta=0.75. These parameters have the same meanings given to them in the implementation of local response normalisation in TensorFlow Core v1.15.0: depth_radius is the half-width of the normalisation window; bias is an offset; alpha is a scale factor; and beta is an exponent.
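The parameter choice can be sanity-checked directly: with depth_radius=0, bias=0, alpha=1 and beta=0.75, the normalisation collapses to $x/(x^2)^{0.75} = x^{-1/2}$, the inverse square root. A minimal NumPy check (the small constant here is the illustrative value mentioned above):

```python
import numpy as np

var = np.array([0.25, 1.0, 4.0]) + 1e-10      # variance plus the small constant from addition 1956
inv_std = var / (var ** 2) ** 0.75            # LRN with depth_radius=0, bias=0, alpha=1, beta=0.75
print(np.allclose(inv_std, 1.0 / np.sqrt(var)))   # True
```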
[00237] The method selected for implementing the layer normalisation will typically depend on the precise capabilities of the target hardware accelerator. Not all hardware accelerators might implement the element-wise division operation directly, for example. And different methods of implementing the reciprocal might perform slower or faster on different platforms.
Format selection [00238] Figures 20A and 20B show a flowchart illustrating a method for selecting numerical formats suitable for use in a hardware implementation of an attention-based neural network (such as the one shown in Figures 1A and 1B). It would be desirable to choose numerical formats that are capable of a sufficiently accurate representation of the data, while avoiding excessive or redundant precision. If data is represented in excessively precise formats, this is liable to cause an increase in the computational complexity, power consumption, memory and bandwidth requirements, and/or on-chip area footprint. The chosen numerical formats may comprise or consist of fixed point formats. This can reduce or avoid the need to store data in floating point formats. The method of Figures 20A-20B may be performed, for example, by a data processing system as illustrated in Figure 4, or by a general purpose data processing system, such as a server computer.
[00239] Referring to Figure 20A, in step 2010, a representation is obtained of the attention-based neural network that is to be implemented in hardware. In step 2012, the data processing system implements the representation as a test neural network. In step 2014, the data processing system obtains a dataset of first test input sequences. According to one example, these may be input sequences for an encoder stack such as that of Figure 1A. The dataset includes multiple first test input sequences of different lengths. In step 2016, the data processing system pads each of these test input sequences with padding values, to produce a respective first padded input sequence of a first fixed length. The first fixed length may be selected to be equal to the maximum length of any first test input sequence in the dataset. In step 2018, the data processing system generates a first padding mask for each first padded input sequence. Each first padding mask identifies the part of the respective first padded input sequence that contains padding values. In step 2019, the data processing system generates a first (self-)attention mask from the first padding mask. The generation of these masks corresponds to steps 513 and 514 in Figure 6A.
[00240] Referring to Figure 20B, in step 2054, a dataset of second test input sequences is obtained by the data processing system. Staying with the example of Figures 1A and 1B, the second test input sequences may be test input sequences for the decoder stack of Figure 1B. Again, the dataset includes multiple second test input sequences of different lengths. A second fixed length is determined, which is equal to the maximum sequence-length over all of the second test input sequences. In step 2056, the data processing system pads each of the second test input sequences to the second fixed length, producing respective second padded input sequences. In step 2058, the data processing system generates a respective second padding mask for each second padded input sequence. Next, in step 2059 the data processing system generates a respective second (self-)attention mask for each second padded input sequence. These steps correspond to steps 523 and 524 in Figure 6A. The data processing system also generates a cross-attention mask corresponding to each pair of first and second padded input sequences (step 2060). This step corresponds to step 525 in Figure 6A.
[00241] Referring again to Figure 20A, in step 2020, the data processing system processes the first and second padded input sequences, the first and second (self- )attention masks, and the cross-attention masks through the test neural network. While performing this processing, the data processing system collects statistics on the values obtained during the processing (see step 2022). The statistics describe the ranges of values encountered at each of the layers of the neural network, when processing the test sequences through the test neural network. The statistics capture relevant information about suitable numerical formats that can be used to represent (and store) the data at the various layers, when the neural network is ultimately implemented in hardware. In step 2024, the data processing system selects numerical formats according to the collected statistics.
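A simplified sketch of the statistics collection and format selection is given below. The layer naming, the min/max summary and the symmetric fixed-point policy are illustrative assumptions only; a real implementation might track histograms or percentiles and support other format families.

```python
import numpy as np

def update_range_stats(layer_outputs, stats):
    """Accumulate per-layer (min, max) statistics while padded test sequences are processed."""
    for name, tensor in layer_outputs.items():
        lo, hi = float(tensor.min()), float(tensor.max())
        prev_lo, prev_hi = stats.get(name, (lo, hi))
        stats[name] = (min(prev_lo, lo), max(prev_hi, hi))

def select_exponent(lo, hi, bit_width=8):
    """Choose the LSB exponent of a symmetric fixed-point format that covers the observed range."""
    largest = max(abs(lo), abs(hi), 1e-30)
    integer_bits = int(np.ceil(np.log2(largest))) + 1    # +1 for the sign bit
    return integer_bits - bit_width

stats = {}
update_range_stats({"encoder.block0.softmax_input": np.random.randn(4, 16) * 20}, stats)
lo, hi = stats["encoder.block0.softmax_input"]
print(select_exponent(lo, hi))
```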
[00242] If operations will be implemented in hardware as a plurality of elementary neural network operations (as explained above for batch matrix multiplications and Softmax, for example), then the data processing system may collect statistics and select numerical formats for each of the elementary neural network operations.
[00243] If the neural network includes a decoder stack that will be executed in multiple iterations, then the data processing system may collect statistics over all of the iterations, for all of the (first and second) test sequences. Numerical formats can then be chosen for each layer in the decoder stack such that the same numerical format is used in every iteration for any given layer. This helps to avoid the need to reconfigure the hardware implementation between iterations of the decoder stack, which may be complicated or time-consuming in the inference phase.
[00244] The data processing system may select numerical formats for the first and second (self-)attention masks, and the cross-attention mask. Additionally, it may select a value (or values) for the "large negative values" in these attention masks.
[00245] In some cases, the large negative values may be chosen to be equal to the largest negative value representable by the numerical format selected for the associated attention mask. In some embodiments, this might not be the case; however, the selected large negative values may nevertheless have values that are more negative than the largest negative value, observed over the course of processing the test sequences, in a data tensor input to a layer in which the respective attention mask is used. The selected large negative values may be more negative than this governing value by a margin. That is, there may be several (unused) values between the largest negative value present in the data tensor and said large negative values. In some examples, the large negative values may be selected such that they have an absolute value that is greater than the largest absolute value of the data tensor. They may be greater by a margin - that is, there may be several (unused) values between the largest absolute value of the data tensor and the absolute value of the large negative values. In some cases, the large negative values, x, may be chosen such that $e^x = 0$ in the numerical format selected for the relevant attention mask.
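As a rough illustration of the last criterion, the threshold below which $e^x$ quantises to zero can be estimated from the fractional precision of the chosen format. The rounding assumption (round-to-nearest with a half-LSB threshold) is illustrative and not tied to any particular accelerator.

```python
import math

def large_negative_for_format(fractional_bits):
    """Return an integer x whose exponential falls below half an LSB of the fixed-point format."""
    half_lsb = 2.0 ** -(fractional_bits + 1)
    return -math.ceil(-math.log(half_lsb))     # any value at or below this quantises e^x to zero

x = large_negative_for_format(fractional_bits=12)
print(x, math.exp(x) < 2.0 ** -13)   # -10 True
```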
Training [00246] A method of training an attention-based neural network suitable for hardware implementation will now be described, with reference to Figure 21. This method may be performed by a data processing system such as the one shown in Figure 4 or by a general data processing system, such as a server computer.
[00247] In step 2210, a dataset of first training input sequences is obtained. For simplicity, in this example, it will be assumed that the attention-based neural network comprises only a decoder stack and no encoder stack. Therefore, the first training input sequences are input sequences for the decoder. It should be understood, however, that the neural network may include an encoder stack, in other examples. In such examples, a dataset of second training input sequences may be provided for the encoder.
[00248] In step 2212, each of the first training input sequences is padded with padding values, to produce a respective first padded input sequence of a first fixed length. The first fixed length has already been selected in advance -for example, in the course of selecting numerical formats for the layers of the neural network in a method such as that illustrated by Figures 20A-20B. The first fixed length is greater than or equal to the maximum length among all of the training input sequences. It should be noted that the dataset of training input sequences (obtained in step 2210) might be identical to the dataset of test input sequences (obtained in step 2014), but this is not essential. In general, these may be different datasets.
[00249] In step 2214, a first padding mask is generated for each training input sequence. The padding mask indicates the part of the respective padded input sequence that contains the padding values. As with the other padding masks discussed above, the padding mask may contain values of one in locations corresponding to the elements of the original training input sequence (prior to padding). The padding mask may contain zeros in locations corresponding to the padding values.
[00250] In step 2216, a (self-)attention mask is defined for each of the training input sequences. The attention mask prevents the attention-based neural network from attending to certain parts of the padded input sequence, during training of the neural network. In particular, the attention mask prevents the network from attending to the padded part of each training input sequence. In some examples, the attention mask may also be configured to prevent the network from attending to certain elements of the original training input sequence. The attention mask may contain large negative values at locations corresponding to elements to which the network should be prevented from attending, and zeros at other locations. The attention mask is generated from the respective padding mask in the same manner as described already above. As also discussed above, the attention mask may be modified based on external inputs, to prevent the network from attending to specified elements of the training input sequences.
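A sketch of steps 2214 and 2216 for one training sequence is given below; the outer-product construction and the use of a large negative constant mirror the description above, while the specific constant and function name are illustrative.

```python
import numpy as np

def make_training_masks(seq_len, fixed_len, large_negative=-1.0e9):
    """Padding mask (ones over real elements, zeros over padding) and the derived self-attention mask."""
    padding_mask = np.zeros(fixed_len, dtype=np.float32)
    padding_mask[:seq_len] = 1.0                           # step 2214: mark the un-padded part
    attend = np.outer(padding_mask, padding_mask)          # outer product of the padding mask with itself
    attention_mask = (1.0 - attend) * large_negative       # step 2216: large negatives where attention is blocked
    return padding_mask, attention_mask

pad, attn = make_training_masks(seq_len=3, fixed_len=5)
print(pad)        # ones over the three real elements, zeros over the two padding positions
print(attn[0])    # zeros over the real elements, -1e9 over the padding positions
```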
[00251] In step 2220, the attention-based neural network is trained using the padded input sequences generated in step 2212, and the (self-)attention masks generated in step 2216. The attention masks are used to ignore certain elements of the respective training input sequences. (This prevents the network from attending to the ignored elements.) Since the attention-based neural network comprises only a decoder stack, in the present example, there are no cross-attention masks needed. In other examples, in which the network comprises both encoder and decoder stacks, a cross-attention mask would be generated for each pair of training input sequences (to the encoder and decoder, respectively) from the padding masks associated with the respective training input sequences.
[00252] In some examples, the decoder stack is to be executed in multiple iterations. At each iteration (other than the first iteration), the output of the decoder stack from the previous iteration is used as the input to the decoder stack for the next iteration. In such examples, the self-attention mask associated with each training input sequence may be updated at each iteration to allow the decoder stack to attend to more of the (padded) training input sequence than in the previous iteration.
[00253] In the foregoing examples, the mapping unit 12 and the MMM 40 were described as separate components to the hardware accelerator 200. This is not to be understood as limiting. In some examples, the mapping unit 12 (or at least a part of it) and the MMM 40 may be integrated as part of the hardware accelerator 200. [00254] The methods and data processing systems described herein are useful for processing input data of a wide variety of types, in a wide variety of applications. Of particular interest are image processing applications, where an input to the neural network comprises image or video data. The neural network may be configured to process the image or video data and produce further image or video data - for example, image or video data that has had its resolution enhanced, artefacts removed, or visual content otherwise modified in some way. Alternatively, the neural network may be configured to analyse the visual content of the image or video - for example, to detect faces, segment and/or classify objects, etc. Also of particular interest are audio processing applications, including tasks such as denoising and speaker detection/identification. Also of particular interest (and overlapping with audio processing applications in some cases) are NLP applications. These include but are not limited to speech recognition; text-to-speech; and machine translation. In some examples, the neural network may be configured to process multimodal data - that is, data of two or more different types/media. An example would be the use of both image data and text (caption) data for a classification or verification task.
[00255] Image and video data may be input to an attention-based neural network in a variety of ways. A video can be input as a sequence of frames, whereby each frame constitutes one element of the sequence, and the length of the sequence (over which attention is calculated) is equal to the number of frames. Image data can also be presented as a sequence of elements. For example, each pixel may be regarded as one element. The length of the sequence is then equal to the number of pixels. Such approaches can allow videos with different numbers of frames, or images with different numbers of pixels, to be processed by the attention-based neural network without the is need to resize or resample the data as a pre-processing step.
[00256] Figure 22 shows a computer system in which the data processing systems described herein may be implemented. The computer system comprises a CPU 902, an NNA 904, a memory 906 and other devices 914, such as a display 916, speakers 918 and a camera 919. A processing block 907 (corresponding to mapping unit 12, MMM 40, and hardware accelerator 200) is implemented on the NNA 904. The components of the computer system can communicate with each other via a communications bus 905. A store 908 (corresponding to memory 25) is implemented as part of the memory 906.
[00257] The data processing system of Figure 4 is shown as comprising a number of functional blocks. This is schematic only and is not intended to define a strict division between different logic elements of such entities. Each functional block may be provided in any suitable manner. It is to be understood that intermediate values described herein as being formed by a data processing system need not be physically generated by the data processing system at any point and may merely represent logical values which conveniently describe the processing performed by the data processing system between its input and output.
[00258] The data processing systems described herein may be embodied in hardware on an integrated circuit. The data processing systems described herein may be configured to perform any of the methods described herein. Generally, any of the functions, methods, techniques or components described above can be implemented in software, firmware, hardware (e.g., fixed logic circuitry), or any combination thereof. The terms "module," "functionality," "component", "element", "unit", "block" and "logic" may be used herein to generally represent software, firmware, hardware, or any combination thereof. In the case of a software implementation, the module, functionality, component, element, unit, block or logic represents program code that performs the specified tasks when executed on a processor. The algorithms and methods described herein could be performed by one or more processors executing code that causes the processor(s) to perform the algorithms/methods. Examples of a computer-readable storage medium include a random-access memory (RAM), read-only memory (ROM), an optical disc, flash memory, hard disk memory, and other memory devices that may use magnetic, optical, and other techniques to store instructions or other data and that can be accessed by a machine.
[00259] The terms computer program code and computer readable instructions as used herein refer to any kind of executable code for processors, including code expressed in a machine language, an interpreted language or a scripting language. Executable code includes binary code, machine code, bytecode, code defining an integrated circuit (such as a hardware description language or netlist), and code expressed in a programming language code such as C, Java® or OpenCL. Executable code may be, for example, any kind of software, firmware, script, module or library which, when suitably executed, processed, interpreted, compiled, executed at a virtual machine or other software environment, cause a processor of the computer system at which the executable code is supported to perform the tasks specified by the code.
[00260] A processor, computer, or computer system may be any kind of device, machine or dedicated circuit, or collection or portion thereof, with processing capability such that it can execute instructions. A processor may be any kind of general purpose or dedicated processor, such as a CPU, GPU, NNA, System-on-chip, state machine, media processor, an application-specific integrated circuit (ASIC), a programmable logic array, a field-programmable gate array (FPGA), or the like. A computer or computer system may comprise one or more processors.
[00261] It is also intended to encompass software which defines a configuration of hardware as described herein, such as HDL (hardware description language) software, as is used for designing integrated circuits, or for configuring programmable chips, to carry out desired functions. That is, there may be provided a computer readable storage medium having encoded thereon computer readable program code in the form of an integrated circuit definition dataset that when processed (i.e. run) in an integrated circuit manufacturing system configures the system to manufacture a data processing system configured to perform any of the methods described herein, or to manufacture a data processing system comprising any apparatus described herein. An integrated circuit definition dataset may be, for example, an integrated circuit description.
[00262] Therefore, there may be provided a method of manufacturing, at an integrated circuit manufacturing system, a data processing system as described herein.
Furthermore, there may be provided an integrated circuit definition dataset that, when processed in an integrated circuit manufacturing system, causes the method of manufacturing a data processing system to be performed.
[00263] An integrated circuit definition dataset may be in the form of computer code, for example as a netlist, code for configuring a programmable chip, as a hardware description language defining hardware suitable for manufacture in an integrated circuit at any level, including as register transfer level (RTL) code, as high-level circuit representations such as Verilog or VHDL, and as low-level circuit representations such as OASIS (RTM) and GDSII. Higher level representations which logically define hardware suitable for manufacture in an integrated circuit (such as RTL) may be processed at a computer system configured for generating a manufacturing definition of an integrated circuit in the context of a software environment comprising definitions of circuit elements and rules for combining those elements in order to generate the manufacturing definition of an integrated circuit so defined by the representation. As is typically the case with software executing at a computer system so as to define a machine, one or more intermediate user steps (e.g. providing commands, variables etc.) may be required in order for a computer system configured for generating a manufacturing definition of an integrated circuit to execute code defining an integrated circuit so as to generate the manufacturing definition of that integrated circuit.
[00264] An example of processing an integrated circuit definition dataset at an integrated circuit manufacturing system so as to configure the system to manufacture a data processing system will now be described with respect to Figure 23.
[00265] Figure 23 shows an example of an integrated circuit (IC) manufacturing system 1002 which is configured to manufacture a data processing system as described in any of the examples herein. In particular, the IC manufacturing system 1002 comprises a layout processing system 1004 and an integrated circuit generation system 1006. The IC manufacturing system 1002 is configured to receive an IC definition dataset (e.g. defining a data processing system as described in any of the examples herein), process the IC definition dataset, and generate an IC according to the IC definition dataset (e.g. which embodies a data processing system as described in any of the examples herein). The processing of the IC definition dataset configures the IC manufacturing system 1002 to manufacture an integrated circuit embodying a data processing system as described in any of the examples herein.
[00266] The layout processing system 1004 is configured to receive and process the IC definition dataset to determine a circuit layout. Methods of determining a circuit layout from an IC definition dataset are known in the art, and for example may involve synthesising RTL code to determine a gate level representation of a circuit to be generated, e.g. in terms of logical components (e.g. NAND, NOR, AND, OR, MUX and FLIP-FLOP components). A circuit layout can be determined from the gate level representation of the circuit by determining positional information for the logical components. This may be done automatically or with user involvement in order to optimise the circuit layout. When the layout processing system 1004 has determined the circuit layout it may output a circuit layout definition to the IC generation system 1006. A circuit layout definition may be, for example, a circuit layout description.
[00267] The IC generation system 1006 generates an IC according to the circuit layout definition, as is known in the art. For example, the IC generation system 1006 may implement a semiconductor device fabrication process to generate the IC, which may involve a multiple-step sequence of photolithographic and chemical processing steps during which electronic circuits are gradually created on a wafer made of semiconducting material. The circuit layout definition may be in the form of a mask which can be used in a lithographic process for generating an IC according to the circuit definition. Alternatively, the circuit layout definition provided to the IC generation system 1006 may be in the form of computer-readable code which the IC generation system 1006 can use to form a suitable mask for use in generating an IC.
[00268] The different processes performed by the IC manufacturing system 1002 may be implemented all in one location, e.g. by one party. Alternatively, the IC manufacturing system 1002 may be a distributed system such that some of the processes may be performed at different locations, and may be performed by different parties. For example, some of the stages of: (i) synthesising RTL code representing the IC definition dataset to form a gate level representation of a circuit to be generated, (ii) generating a circuit layout based on the gate level representation, (iii) forming a mask in accordance with the circuit layout, and (iv) fabricating an integrated circuit using the mask, may be performed in different locations and/or by different parties.
[00269] In some embodiments, an integrated circuit manufacturing definition dataset, when processed in an integrated circuit manufacturing system, may cause an integrated circuit manufacturing system to generate a device as described herein. For example, the configuration of an integrated circuit manufacturing system in the manner described above with respect to Figure 23 by an integrated circuit manufacturing definition dataset may cause a device as described herein to be manufactured.
[00270] In some examples, an integrated circuit definition dataset could include software which runs on hardware defined at the dataset or in combination with hardware defined at the dataset. In the example shown in Figure 23, the IC generation system may further be configured by an integrated circuit definition dataset to, on manufacturing an integrated circuit, load firmware onto that integrated circuit in accordance with program code defined at the integrated circuit definition dataset or otherwise provide program code with the integrated circuit for use with the integrated circuit.
[00271] The implementation of concepts set forth in this application in devices, apparatus, modules, and/or systems (as well as in methods implemented herein) may give rise to performance improvements when compared with known implementations. The performance improvements may include one or more of increased computational performance, reduced latency, increased throughput, and/or reduced power consumption. During manufacture of such devices, apparatus, modules, and systems (e.g. in integrated circuits) performance improvements can be traded-off against the physical implementation, thereby improving the method of manufacture. For example, a performance improvement may be traded against layout area, thereby matching the performance of a known implementation but using less silicon. This may be done, for example, by reusing functional blocks in a serialised fashion or sharing functional blocks between elements of the devices, apparatus, modules and/or systems. Conversely, concepts set forth in this application that give rise to improvements in the physical implementation of the devices, apparatus, modules, and systems (such as reduced silicon area) may be traded for improved performance. This may be done, for example, by manufacturing multiple instances of a module within a predefined area budget.
[00272] The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the invention.
Claims (22)
- CLAIMS
- 1. A computer-implemented method for selecting numerical formats for use in configuring a hardware implementation of an attention-based neural network, the method comprising: obtaining (2010) a representation of the attention-based neural network; implementing (2012) the representation as a test neural network; obtaining (2014) a dataset of first test input sequences for the attention-based neural network, wherein the dataset includes first test input sequences of varying length; padding (2016) each first test input sequence with padding values to produce a respective first padded input sequence of a first fixed length; generating (2018), for each first padded input sequence, a respective first padding mask identifying the part of the first padded input sequence that contains the padding values; generating (2019) a first attention mask from each first padding mask, wherein the generating comprises an outer product operation applied to the first padding mask; processing (2020) the first padded input sequences and the first attention masks through the test neural network; collecting (2022) statistics describing ranges of values obtained during said processing, wherein the statistics describe ranges of values for at least two different layers of the attention-based neural network; and selecting (2024) numerical formats for the at least two different layers based on the collected statistics.
- 2. The method of claim 1, wherein the attention-based neural network comprises a decoder, wherein the first test input sequences are test input sequences for the decoder, wherein processing (2020) each of the first padded input sequences and respective first attention masks through the test neural network comprises executing the decoder for a number of iterations equal to the first fixed length, wherein the statistics are collected over all of the iterations.
- 3. The method of claim 2, wherein, at each iteration the decoder produces an output sequence, and, at each iteration other than an initial iteration, the input to the decoder comprises the output sequence from the preceding iteration.
- 4. The method of claim 1, wherein the attention based neural network comprises an encoder, and wherein the first test input sequences are test input sequences for the encoder.
- 5. The method of claim 4, wherein the attention-based neural network further comprises a decoder, the method further comprising: obtaining (2054) a dataset of second test input sequences, wherein the second test input sequences are test input sequences for the decoder; padding (2056) each second test input sequence with padding values to produce a respective second padded input sequence of a second fixed length; generating (2058), for each second padded input sequence, a respective second padding mask identifying the part of the second padded input sequence that contains the padding values; and generating (2059) a second attention mask from each second padding mask, wherein the generating comprises an outer product operation applied to the second padding mask, wherein the method further comprises selecting a numerical format for at least one layer of the decoder.
- 6. The method of claim 5, further comprising executing the decoder for a number of iterations equal to the second fixed length, wherein, at each iteration the decoder produces an output sequence, and, at each iteration other than an initial iteration, the input to the decoder comprises the output sequence from the preceding iteration, wherein the statistics are collected over all of the iterations.
- 7. The method of claim 5 or claim 6, further comprising generating a cross-attention mask from each first padding mask and respective second padding mask, comprising an outer product of the first padding mask with the second padding mask, wherein the method comprises: processing the first padded input sequences, the second padded input sequences, the first attention masks, the second attention masks, and the cross-attention masks through the test neural network, and selecting numerical formats for any one or any combination of two or more of: the first padded input sequences, the second padded input sequences, the first attention masks, the second attention masks, and the cross-attention masks.
- 8. The method of any one of the preceding claims, wherein the attention-based neural network comprises a scaled dot-product attention calculation.
- 9. The method of any one of the preceding claims, wherein each first attention mask comprises: a plurality of zeros, in locations corresponding to the elements of the respective first input sequence; and one or more large negative values, in locations corresponding to the padding values of the respective first padded input sequence.
- 10. The method of any one of the preceding claims, wherein the attention-based neural network comprises a Softmax function, and wherein the processing comprises adding the first attention mask to an input to the Softmax function.
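Claims 8 to 10 together describe a scaled dot-product attention in which the attention mask (zeros at real positions, large negative values at padded positions) is added to the input of the Softmax. A plain NumPy reference of that combination is sketched below; it is not the hardware decomposition of the Softmax, and the shapes and the -1e9 constant are illustrative.

```python
import numpy as np

def masked_scaled_dot_product_attention(Q, K, V, attention_mask):
    """Scaled dot-product attention with an additive attention mask.

    Adding a mask of zeros (real positions) and large negative values (padded
    positions) to the pre-Softmax scores pushes the Softmax weights for padded
    positions to (almost) zero."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k) + attention_mask    # additive masking
    scores -= scores.max(axis=-1, keepdims=True)        # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
L, d = 5, 8                                      # padded length 5, model width 8
Q, K, V = (rng.standard_normal((L, d)) for _ in range(3))
pmask = np.array([1., 1., 1., 0., 0.])           # 3 real tokens, 2 padded
amask = (1.0 - np.outer(pmask, pmask)) * -1e9
out = masked_scaled_dot_product_attention(Q, K, V, amask)
print(out.shape)                                 # (5, 8)
```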
- 11. The method of any one of the preceding claims, wherein the attention-based neural network comprises a transformer network.
- 12. The method of any one of the preceding claims, wherein the attention-based neural network comprises a layer normalisation.
- 13. The method of claim 12, wherein the hardware implementation is configured to perform a set of available elementary neural network operations, the method comprising: mapping the layer normalisation to an equivalent representation comprising a plurality of elementary neural network operations from the set of available elementary neural network operations; and selecting numerical formats for said plurality of elementary neural network operations, wherein each of the plurality of elementary neural network operations is selected from the list consisting of: a convolution operation; an element-wise subtraction operation; an element-wise multiplication operation; a reciprocal operation; a square root operation; an element-wise division operation; a rectified linear activation function; a local response normalisation; an element-wise addition.
- 14. The method of claim 13, wherein the plurality of elementary neural network operations implements: a first convolution operation, applied to an input to the layer normalisation, to calculate a mean of the input; an element-wise subtraction operation, to subtract the mean from the input; a first element-wise multiplication operation, to calculate the square of the output of the element-wise subtraction operation; and a second convolution operation, applied to the output of the first element-wise multiplication operation, to calculate the variance about the mean.
- 15. The method of claim 14, wherein the plurality of elementary neural network operations implements: a square root operation and a reciprocal operation, applied after the second convolution operation, to calculate the reciprocal of the standard deviation; and a second element-wise multiplication operation, to multiply the output of the element-wise subtraction operation by the reciprocal of the standard deviation.
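Claims 14 and 15 decompose layer normalisation into two constant-weight convolutions (mean and variance), an element-wise subtraction, two element-wise multiplications, a square root and a reciprocal. The sketch below mirrors that sequence in NumPy, writing the 1x1 constant-weight convolutions as matrix products with an averaging vector; the small eps stabiliser is an assumption and is not part of the claims.

```python
import numpy as np

def layer_norm_via_elementary_ops(x, eps=1e-5):
    """Layer normalisation built from the elementary operations of claims 14-15.

    The two 'convolutions' are 1x1 convolutions with constant weights 1/C over
    the normalised axis; here they are written as matrix products with an
    averaging vector, which is numerically the same thing."""
    C = x.shape[-1]
    avg_weights = np.full((C, 1), 1.0 / C)    # constant-weight "convolution" kernel

    mean = x @ avg_weights                    # first convolution: mean of the input
    centred = x - mean                        # element-wise subtraction
    squared = centred * centred               # first element-wise multiplication
    var = squared @ avg_weights               # second convolution: variance about the mean
    inv_std = 1.0 / np.sqrt(var + eps)        # square root, then reciprocal (claim 15)
    return centred * inv_std                  # second element-wise multiplication

x = np.random.default_rng(1).standard_normal((4, 16))
y = layer_norm_via_elementary_ops(x)
print(np.allclose(y.mean(axis=-1), 0.0, atol=1e-6),
      np.allclose(y.std(axis=-1), 1.0, atol=1e-3))   # True True
```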
- 16. The method of claim 14, wherein the plurality of elementary neural network operations implements: a square root operation, applied to the output of the second convolution operation, to calculate the standard deviation; and an element-wise division operation, to divide the output of the element-wise subtraction operation by the standard deviation.
- 17. The method of claim 14, wherein the plurality of elementary neural network operations implements: a rectified linear activation function, applied to the output of the second convolution operation; an element-wise addition operation, applied to the output of the rectified linear activation function; a local response normalisation, applied to the output of the element-wise addition operation, to calculate the reciprocal of the standard deviation; and a second element-wise multiplication operation, to multiply the output of the element-wise subtraction operation by the reciprocal of the standard deviation.
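One plausible reading of claim 17 is that the local response normalisation, with a window of one element and suitably chosen parameters, computes the reciprocal square root of the rectified, offset variance directly: with k = 0, alpha = 1 and beta = 0.75, LRN(v) = v / (v^2)^0.75 = v^(-1/2). The parameter values and the eps offset below are assumptions; the claim itself only names the sequence of operations.

```python
import numpy as np

def lrn_single_channel(a, k, alpha, beta):
    """Local response normalisation with a window of one element:
    b = a / (k + alpha * a**2) ** beta."""
    return a / (k + alpha * a * a) ** beta

def reciprocal_std_via_lrn(variance, eps=1e-5):
    """Reciprocal standard deviation from the variance, in the style of claim 17:
    rectified linear activation, element-wise addition of a small constant, then
    an LRN whose parameters (k = 0, alpha = 1, beta = 0.75) reduce it to v**(-1/2)."""
    v = np.maximum(variance, 0.0)      # rectified linear activation
    v = v + eps                        # element-wise addition
    return lrn_single_channel(v, k=0.0, alpha=1.0, beta=0.75)

v = np.array([0.25, 1.0, 4.0])
print(reciprocal_std_via_lrn(v))       # ~[2.0, 1.0, 0.5] = 1/sqrt(v)
```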
- 18. The method of any one of the preceding claims, wherein the attention-based neural network comprises a Softmax function, wherein the hardware implementation is configured to perform a set of available elementary neural network operations, the method comprising: mapping the Softmax function to a representation comprising a plurality of elementary neural network operations from the set of available elementary neural network operations; and selecting numerical formats for said plurality of elementary neural network operations, wherein each of the plurality of elementary neural network operations is selected from the list consisting of: a transpose or permute operation (510, 512); a max pooling operation (520); an element-wise maximum operation (610); an element-wise subtraction operation (530, 532); an element-wise negation operation; an element-wise addition operation (560); an element-wise division operation (590); an element-wise multiplication operation (580); an element-wise bit-shifting operation; an element-wise operation f(z)=2^z, where z is in general a non-integer value; a convolution operation (570); a function approximation operation (540); and a local response normalisation (550, 552).
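As an illustration of the kind of decomposition claim 18 refers to, the sketch below computes a Softmax using only operations of the listed kinds: a maximum over the axis, element-wise subtraction and multiplication, 2^z split into an integer part (a bit shift in hardware) and a fractional part (a function approximation), a constant-weight convolution for the sum, and an element-wise division. It is one possible decomposition, not the claimed hardware mapping.

```python
import numpy as np

def softmax_via_elementary_ops(x):
    """Softmax over the last axis, decomposed into elementary operations.

    exp(x) is rewritten as 2**z with z = x * log2(e); 2**z is split into an
    integer part (realisable as a bit shift) and a fractional part (realisable
    by a function approximation); the sum is a convolution with constant
    weights; the final step is an element-wise division."""
    x_max = x.max(axis=-1, keepdims=True)       # max pooling over the axis
    z = (x - x_max) * np.log2(np.e)             # element-wise subtraction and multiplication
    z_int = np.floor(z)                         # integer part of z -> bit shift
    z_frac = z - z_int                          # fractional part -> function approximation
    pow2 = np.exp2(z_int) * np.exp2(z_frac)     # element-wise 2**z
    ones = np.ones((x.shape[-1], 1))            # constant-weight "convolution" kernel
    denom = pow2 @ ones                         # convolution computing the sum
    return pow2 / denom                         # element-wise division

x = np.random.default_rng(2).standard_normal((3, 6))
y = softmax_via_elementary_ops(x)
print(np.allclose(y, np.exp(x) / np.exp(x).sum(-1, keepdims=True)))   # True
```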
- 19. The method of any one of the preceding claims, wherein the attention-based neural network comprises a matrix multiplication operation defined in two or more dimensions between a first tensor X having dimensions [..., Q,...] and a second tensor Y having dimensions [..., R,...], the method comprising: mapping the matrix multiplication operation to a graph of neural network operations including at least one transformation and at least one convolution operation; and selecting numerical formats for the graph of neural network operations.
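Claim 19 maps a matrix multiplication to a transformation plus a convolution. One concrete reading, for a plain M x K by K x N product, is to lay the first operand out as an "image" of M positions with K channels and to use the second operand as the weights of a 1x1 convolution with N output channels; the layout and the use of einsum in place of a convolution library call are assumptions of the sketch.

```python
import numpy as np

def matmul_as_1x1_convolution(A, B):
    """Matrix multiply A @ B (A: MxK, B: KxN) expressed as a 1x1 convolution.

    A is reshaped/transposed into an NCHW 'image' of M positions with K
    channels, and B supplies the OIHW weights of a 1x1 convolution with K
    input and N output channels; the reshapes are the 'transformation'."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    image = A.T.reshape(1, K, M, 1)             # NCHW input: 1 x K x M x 1
    weights = B.T.reshape(N, K, 1, 1)           # OIHW 1x1 kernel: N x K x 1 x 1
    # A 1x1 convolution reduces, at each of the M positions, over the K input
    # channels -- written here with einsum rather than a conv library call.
    out = np.einsum('bchw,ocij->bohw', image, weights)
    return out.reshape(N, M).T                  # back to M x N

A = np.random.default_rng(3).standard_normal((4, 7))
B = np.random.default_rng(4).standard_normal((7, 5))
print(np.allclose(matmul_as_1x1_convolution(A, B), A @ B))   # True
```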
- 20. The method of any one of claims 1-19, wherein the attention-based neural network comprises a matrix multiplication operation defined in two or more dimensions between a first tensor X having dimensions [..., Q,...] and a second tensor Y having dimensions [..., R,...], the method comprising: mapping the matrix multiplication operation to a graph of neural network operations including at least one element-wise operation; and selecting numerical formats for the graph of neural network operations.
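Claim 20 instead maps the matrix multiplication to element-wise operations. One reading is a broadcast element-wise multiplication followed by a reduction over the shared dimension, as sketched below; the claims leave the exact dimension layout open, so the batched shapes here are assumptions (in hardware the reduction could itself be a constant-weight convolution).

```python
import numpy as np

def matmul_via_elementwise_ops(A, B):
    """Matrix multiply A @ B (A: ...xMxK, B: ...xKxN) built from a broadcast
    element-wise multiplication followed by a sum over the shared K axis.
    Works for batched operands thanks to NumPy broadcasting."""
    prod = A[..., :, :, None] * B[..., None, :, :]   # ... x M x K x N element-wise products
    return prod.sum(axis=-2)                         # reduce over K -> ... x M x N

A = np.random.default_rng(5).standard_normal((2, 4, 7))   # batch of 2, 4x7 matrices
B = np.random.default_rng(6).standard_normal((2, 7, 5))   # batch of 2, 7x5 matrices
print(np.allclose(matmul_via_elementwise_ops(A, B), A @ B))   # True
```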
- 21. A data processing system configured to perform the method of any of claims 1 to 20.
- 22. Computer readable code configured to cause the method of any of claims 1 to 20 to be performed when the code is run.
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2208959.3A GB2619919A (en) | 2022-06-17 | 2022-06-17 | Hardware implementation of an attention-based neural network |
EP23179567.5A EP4293575A1 (en) | 2022-06-17 | 2023-06-15 | Hardware implementation of an attention-based neural network |
CN202310713298.XA CN117252243A (en) | 2022-06-17 | 2023-06-15 | Hardware implementation of attention-based neural networks |
EP23179573.3A EP4293576A1 (en) | 2022-06-17 | 2023-06-15 | Hardware implementation of an attention-based neural network |
CN202310713303.7A CN117252244A (en) | 2022-06-17 | 2023-06-15 | Hardware implementation of attention-based neural networks |
US18/211,202 US20240127044A1 (en) | 2022-06-17 | 2023-06-16 | Hardware implementation of an attention-based neural network |
US18/211,208 US20240127049A1 (en) | 2022-06-17 | 2023-06-16 | Hardware implementation of an attention-based neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2208959.3A GB2619919A (en) | 2022-06-17 | 2022-06-17 | Hardware implementation of an attention-based neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
GB202208959D0 GB202208959D0 (en) | 2022-08-10 |
GB2619919A (en) | 2023-12-27 |
Family
ID=82705453
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB2208959.3A Pending GB2619919A (en) | 2022-06-17 | 2022-06-17 | Hardware implementation of an attention-based neural network |
Country Status (2)
Country | Link |
---|---|
CN (2) | CN117252243A (en), CN117252244A (en) |
GB (1) | GB2619919A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200097823A1 (en) * | 2018-09-24 | 2020-03-26 | Samsung Electronics Co., Ltd. | Non-uniform quantization of pre-trained deep neural network |
US20210279569A1 (en) * | 2020-03-09 | 2021-09-09 | Samsung Electronics Co., Ltd. | Method and apparatus with vector conversion data processing |
GB2596606A (en) * | 2020-07-03 | 2022-01-05 | Imagination Tech Ltd | Number format selection in recurrent neural networks |
GB2596607A (en) * | 2020-07-03 | 2022-01-05 | Imagination Tech Ltd | Methods and systems for running dynamic recurrent neural networks in hardware |
WO2022137696A1 (en) * | 2020-12-25 | 2022-06-30 | 日立Astemo株式会社 | Information processing device and information processing method |
- 2022-06-17: GB application GB2208959.3A, published as GB2619919A (active, Pending)
- 2023-06-15: CN application CN202310713298.XA, published as CN117252243A (active, Pending)
- 2023-06-15: CN application CN202310713303.7A, published as CN117252244A (active, Pending)
Also Published As
Publication number | Publication date |
---|---|
CN117252243A (en) | 2023-12-19 |
GB202208959D0 (en) | 2022-08-10 |
CN117252244A (en) | 2023-12-19 |
Similar Documents
Publication | Title |
---|---|
US10942986B2 (en) | Hardware implementation of convolutional layer of deep neural network |
US20220391172A1 (en) | Implementation of Softmax and Exponential in Hardware |
US20230019151A1 (en) | Implementation of pooling and unpooling or reverse pooling in hardware |
EP4024291A1 (en) | Neural network comprising matrix multiplication |
EP3933706A1 (en) | Number format selection in recurrent neural networks |
US20230021204A1 (en) | Neural network comprising matrix multiplication |
US20240111990A1 (en) | Methods and systems for performing channel equalisation on a convolution layer in a neural network |
GB2619918A (en) | Hardware implementation of an attention-based neural network |
EP4113384A1 (en) | Running bidirectional recurrent neural networks in hardware |
EP4113385A1 (en) | Number format selection for bidirectional rnns |
EP3933705A1 (en) | Methods and systems for running dynamic recurrent neural networks in hardware |
EP4293575A1 (en) | Hardware implementation of an attention-based neural network |
GB2619919A (en) | Hardware implementation of an attention-based neural network |
EP4120142A1 (en) | Implementation of argmax or argmin in hardware |
US20230012553A1 (en) | Implementation of argmax or argmin in hardware |
GB2611520A (en) | Methods and devices for configuring a neural network accelerator with a configurable pipeline |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 732E | Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977) | Free format text: REGISTERED BETWEEN 20240822 AND 20240828 |