WO2021037082A1 - Method, apparatus, and related products for processing data - Google Patents

Method, apparatus, and related products for processing data

Info

Publication number
WO2021037082A1
WO2021037082A1 (PCT/CN2020/111489; CN2020111489W)
Authority
WO
WIPO (PCT)
Prior art keywords
data
quantized
thresholds
value
truncation
Prior art date
Application number
PCT/CN2020/111489
Other languages
English (en)
French (fr)
Inventor
Zhang Yao
Jiang Guang
Zhang Xishan
Zhou Shiyi
Huang Di
Liu Chang
Guo Jiaming
Original Assignee
Shanghai Cambricon Information Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Cambricon Information Technology Co., Ltd.
Priority to EP20858492.0A (published as EP4024287A4)
Priority to JP2020566955A (published as JP7060719B2)
Publication of WO2021037082A1
Priority to US17/565,008 (published as US20220121908A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2148Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/483Computations with numbers represented by a non-linear combination of denominational numbers, e.g. rational numbers, logarithmic number system or floating-point numbers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions

Definitions

  • the embodiments of the present disclosure generally relate to the field of computer technology, and more specifically relate to methods, devices, and related products for processing data.
  • the embodiments of the present disclosure provide a method, device, and related products for processing data.
  • a method for processing data includes: obtaining a set of data to be quantized for use in a machine learning model; determining multiple sets of quantized data by quantizing the set of data to be quantized with multiple pairs of truncation thresholds, where each pair of truncation thresholds includes a symmetric truncated positive value and truncated negative value; and selecting, based on the difference between the mean absolute value of each of the multiple sets of quantized data and the mean absolute value of the set of data to be quantized, a pair of truncation thresholds from the multiple pairs to quantize the set of data to be quantized.
  • an apparatus for processing data includes: a to-be-quantized data acquisition unit for acquiring a set of data to be quantized for the machine learning model; and a quantized data determination unit for determining multiple sets of quantized data by quantizing the set of data to be quantized with multiple pairs of cutoff thresholds.
  • each pair of cutoff thresholds in the multiple pairs includes a symmetric cutoff positive value and a cutoff negative value. The apparatus further includes a cutoff threshold selection unit for selecting, based on the difference between the mean absolute value of each of the multiple sets of quantized data and the mean absolute value of the set of data to be quantized, a pair of cutoff thresholds from the multiple pairs to be used to quantize the set of data to be quantized.
  • a computer-readable storage medium having a computer program stored thereon, and when the program is executed, the method according to each embodiment of the present disclosure is implemented.
  • an artificial intelligence chip including a device for processing data according to various embodiments of the present disclosure.
  • an electronic device including an artificial intelligence chip according to various embodiments of the present disclosure.
  • a board card which includes a storage device, an interface device, and a control device, and an artificial intelligence chip according to various embodiments of the present disclosure.
  • the artificial intelligence chip is connected with the storage device, the control device, and the interface device; the storage device is used to store data; the interface device is used to realize data transmission between the artificial intelligence chip and external equipment; and the control device is used to monitor the status of the artificial intelligence chip.
  • Fig. 1 shows a schematic diagram of a processing system of a method for processing data according to an embodiment of the present disclosure
  • Fig. 2 shows a schematic diagram of an example architecture of a neural network according to an embodiment of the present disclosure
  • FIG. 3 shows a schematic diagram of a process for quantizing data according to an embodiment of the present disclosure
  • FIG. 4A shows a schematic diagram for symmetrically quantizing data according to an embodiment of the present disclosure
  • 4B shows a schematic diagram for symmetrically quantizing data based on a cutoff threshold according to an embodiment of the present disclosure
  • FIG. 5 shows a flowchart of a method for processing data according to an embodiment of the present disclosure
  • Fig. 6 shows a flowchart of a method for searching for a cutoff threshold of symmetric quantization according to an embodiment of the present disclosure
  • Fig. 7A shows a schematic diagram of a cutoff threshold for coarse-grained search symmetric quantization according to an embodiment of the present disclosure
  • FIG. 7B shows a schematic diagram of a cutoff threshold for fine-grained search symmetric quantization according to an embodiment of the present disclosure
  • FIG. 8 shows a flowchart of a method for iteratively searching for an optimal cutoff threshold according to an embodiment of the present disclosure
  • FIG. 9 shows a block diagram of an apparatus for processing data according to an embodiment of the present disclosure.
  • Fig. 10 shows a structural block diagram of a board according to an embodiment of the present disclosure.
  • the term “if” can be interpreted as “when” or “once” or “in response to determination” or “in response to detection” depending on the context.
  • the phrase “if determined” or “if [described condition or event] is detected” can be interpreted, depending on the context, as “once determined”, “in response to determining”, “once [described condition or event] is detected”, or “in response to detecting [described condition or event]”.
  • KL divergence (Kullback–Leibler divergence) is also called relative entropy, information divergence, or information gain.
  • KL divergence measures the difference between two probability distributions P and Q. Assuming the distribution of the 32-bit floating-point numbers before quantization is P and the distribution of the 8-bit integers after quantization is Q, then the smaller the KL divergence between P and Q, the closer the distributions before and after quantization, and the more effective the quantization. However, the inventors of the present application found that the quantization achieved with the cutoff threshold obtained by the traditional KL method is often poor, usually causing a large loss of accuracy.
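  • For reference, the traditional KL method compares the probability distributions before and after quantization. The following is a minimal sketch of computing the KL divergence between two histograms (an illustration of the general formula only; the function name and smoothing constant are our own, not the patent's baseline implementation):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """KL(P || Q) between two discrete distributions given as histograms.

    `eps` is a small smoothing constant to avoid log(0); the histograms
    are normalized to sum to 1 before the divergence is computed.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```

The smaller the result, the closer the quantized distribution Q is to the original distribution P; as noted above, however, a small KL divergence does not always translate into a small accuracy loss.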
  • the embodiments of the present disclosure propose a new solution for determining a cutoff threshold for symmetric quantization, which can achieve a smaller loss of quantization accuracy than traditional techniques (such as the KL method).
  • multiple pairs of truncation thresholds are used to quantize a set of data to be quantized in order to determine multiple sets of quantized data, where each pair of truncation thresholds includes a symmetric truncated positive value and truncated negative value.
  • the difference between the mean absolute value of each set of quantized data and the mean absolute value of the set of data to be quantized is used as an evaluation index to select a suitable pair of cutoff thresholds from the multiple pairs of cutoff thresholds. In this way, a more suitable cutoff threshold can be found.
  • FIGS. 1 to 10 The basic principles and several example implementations of the present disclosure are described below with reference to FIGS. 1 to 10. It should be understood that these exemplary embodiments are only given to enable those skilled in the art to better understand and then implement the embodiments of the present disclosure, and are not intended to limit the scope of the present disclosure in any way.
  • FIG. 1 shows a schematic diagram of a processing system 100 for a method for processing data according to an embodiment of the present disclosure.
  • the processing system 100 includes a plurality of processors 101-1, 101-2, 101-3 (collectively referred to as the processor 101) and a memory 102.
  • the processor 101 is used to execute instruction sequences, and the memory 102 is used to store data; the memory 102 may include random access memory (RAM) and a register file.
  • the multiple processors 101 in the processing system 100 can not only share part of the storage space, for example, share part of the RAM storage space and the register file, but also have their own storage space at the same time.
  • the various methods according to the embodiments of the present disclosure can be applied to any one processor of the processing system 100 (for example, an artificial intelligence chip) including multiple processors (multi-core).
  • the processor may be a general-purpose processor, such as a CPU (Central Processing Unit), or an artificial intelligence processor (IPU) for performing artificial intelligence operations.
  • Artificial intelligence operations can include machine learning operations, brain-like operations, and so on. Among them, machine learning operations include neural network operations, k-means operations, support vector machine operations, and so on.
  • the artificial intelligence processor may include, for example, one or a combination of a GPU (Graphics Processing Unit), an NPU (Neural-network Processing Unit), a DSP (Digital Signal Processor), and an FPGA (Field-Programmable Gate Array) chip.
  • the processor mentioned in the present disclosure may include multiple processing units, and each processing unit can independently run the various tasks assigned to it, such as convolution computing tasks, pooling tasks, or fully-connected tasks.
  • the present disclosure does not limit the processing unit and the tasks run by the processing unit.
  • FIG. 2 shows a schematic diagram of an example architecture of a neural network 200 according to an embodiment of the present disclosure.
  • A neural network is a mathematical model that imitates the structure and function of a biological neural network.
  • the neural network computes through a large number of connected neurons; it is therefore a computational model composed of a large number of interconnected nodes (or "neurons"), where each node represents a specific output function called an activation function.
  • Each connection between two neurons represents a weighted value of the signal passing through the connection, called the weight, which is equivalent to the memory of the neural network.
  • the output of the neural network varies according to the connection method between neurons, as well as the different weights and activation functions.
  • In the neural network, the neuron is the basic unit. It receives a certain number of inputs and a bias, and when a signal (value) arrives, it is multiplied by a weight. A connection links a neuron to another layer or to another neuron in the same layer, and each connection has an associated weight.
  • The bias is an additional input to the neuron; it is always 1 and has its own connection weight. This ensures that the neuron can activate even if all inputs are empty (all 0).
  • In application, if no non-linear function is applied to the neurons, the neural network is just a linear function and is no more powerful than a single neuron. The output of a neural network may lie between 0 and 1; for example, in the case of cat-dog identification, an output close to 0 can be regarded as a cat and an output close to 1 as a dog.
  • To this end, an activation function is introduced into the neural network, such as the sigmoid activation function.
  • FIG. 2 is a schematic diagram of the structure of the neural network 200.
  • the hidden layer 220 shown in FIG. 2 has three layers; of course, the hidden layer 220 can also include more or fewer layers.
  • the neurons in the input layer 210 are called input neurons.
  • the input layer receives the input signals (values) and passes them to the next layer; it does not perform any operation on the input signals (values) and has no associated weights or biases.
  • 4 input signals (values) can be received.
  • the hidden layer 220 contains the neurons (nodes) that apply different transformations to the input data.
  • a hidden layer is a vertically arranged collection of neurons (a representation).
  • In the neural network shown in FIG. 2 there are 3 hidden layers: the first hidden layer has 4 neurons (nodes), the second has 6 neurons, and the third has 3 neurons. Finally, the hidden layer passes these values to the output layer.
  • the neural network 200 shown in FIG. 2 fully connects the three hidden layers: each neuron in each of them is connected to every neuron in the next layer. It should be noted that not every hidden layer of a neural network is fully connected.
  • the neurons of the output layer 230 are called output neurons.
  • the output layer receives the output from the last hidden layer. Through the output layer 230, the desired value and the desired range can be determined.
  • the output layer has 3 neurons, that is, there are 3 output signals (values).
  • the neural network is trained in advance on a large amount of sample data (including inputs and outputs); after training is completed, the neural network is used to obtain accurate outputs for future inputs from the real environment.
  • the loss function indicates how well the neural network performs on a particular task. The most direct way to define it is: during training, pass each sample through the neural network to obtain a number, compute the difference between this number and the actual number desired, and square it; the result is the distance between the predicted value and the true value. Training the neural network means reducing this distance, that is, the value of the loss function.
  • When starting to train the neural network, the weights should be initialized randomly; obviously, such an initialized neural network will not provide a good result. Training starts from this poor network and produces a network with high accuracy, and at the end of training the value of the loss function should become particularly small.
  • the training process of the neural network is divided into two stages.
  • the first stage is the forward processing of the signal, from the input layer 210 to the hidden layer 220, and finally to the output layer 230.
  • the second stage is to back-propagate the gradient, from the output layer 230 to the hidden layer 220, and finally to the input layer 210, according to the gradient to adjust the weight and bias of each layer in the neural network in turn.
  • the input value is input to the input layer 210 of the neural network, and the output called the predicted value is obtained from the output layer 230 of the neural network.
  • the input value is provided to the input layer 210 of the neural network, it does not perform any operation.
  • the second hidden layer obtains the predicted intermediate result value from the first hidden layer and performs calculation and activation operations, and then passes the obtained intermediate predicted result value to the next hidden layer. The same operation is performed in the subsequent layers, and finally the output value is obtained in the output layer 230 of the neural network.
  • an output value called the predicted value is obtained.
  • a loss function is used to compare the predicted value with the actual output value to obtain the corresponding error value.
  • Backpropagation uses the chain rule of differential calculus: the derivatives of the error value with respect to the weights of the last layer of the neural network are calculated first. These derivatives are called gradients, and they are then used to calculate the gradients of the penultimate layer. This process is repeated until the gradient of every weight in the neural network is obtained. Finally, each gradient is subtracted from its corresponding weight, so that the weights are updated once and the error value is reduced.
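  • As a minimal numeric illustration of the chain rule and gradient update described above (a single linear neuron with a squared loss; purely didactic, not the patent's training code — the learning rate and initial values are arbitrary):

```python
# One neuron: prediction = w * x + b, loss = (prediction - target) ** 2
x, target = 2.0, 10.0
w, b, lr = 0.5, 0.0, 0.05

for _ in range(100):
    pred = w * x + b
    err = pred - target
    # Chain rule: dloss/dw = 2 * err * x, dloss/db = 2 * err
    grad_w = 2 * err * x
    grad_b = 2 * err
    # Subtract the gradient from each weight to reduce the error value
    w -= lr * grad_w
    b -= lr * grad_b
```

After these repeated updates the prediction w * x + b converges to the target, i.e., the loss value becomes particularly small.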
  • fine-tuning is to load a trained neural network.
  • the fine-tuning process is the same as the training process: it is divided into two stages, the first being the forward processing of the signal and the second being back-propagating the gradient to update the weights of the trained neural network.
  • the difference between training and fine-tuning is that training starts from a randomly initialized neural network and trains it from scratch, while fine-tuning does not start from scratch.
  • Each time the weights in the neural network are updated using the gradients is called an iteration.
  • In order to obtain a neural network with the expected accuracy, a very large sample data set is required during training, and it is impossible to input the entire data set into the computer at once. To solve this problem, the sample data set is divided into multiple blocks and each block is passed to the computer in turn; after each block is processed forward, the weights of the neural network are updated correspondingly.
  • the data of the neural network is expressed in high-precision data formats, such as floating-point numbers, so in the training or fine-tuning process, the data involved are all high-precision data formats, and then the trained neural network is quantified.
  • Take the weights of the entire neural network as the object of quantization, with the quantized weights all being 8-bit fixed-point numbers. Since there are often millions of connections in a neural network, almost all of the space is occupied by the weights of neuron connections, and these weights are all different floating-point numbers.
  • the weights of each layer tend to be normally distributed within a certain interval, such as (-3.0, 3.0).
  • the maximum and minimum values corresponding to the weights of each layer in the neural network are saved, and each floating-point value is represented by an 8-bit fixed-point number.
  • the space is linearly divided into 256 quantization intervals within the range of the maximum and minimum values, and each quantization interval is represented by an 8-bit fixed-point number.
  • byte 0 represents -3.0, byte 255 represents 3.0, and byte 128 represents 0.
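  • The linear mapping described above (256 quantization intervals over the interval (-3.0, 3.0)) can be sketched as follows (an assumed affine scheme consistent with the byte values listed; the function names are illustrative):

```python
def float_to_byte(v, lo=-3.0, hi=3.0):
    """Linearly map a float in [lo, hi] to an unsigned byte 0..255.

    Out-of-range values are clamped to the interval first."""
    v = min(max(v, lo), hi)
    return int(round((v - lo) / (hi - lo) * 255))

def byte_to_float(b, lo=-3.0, hi=3.0):
    """Inverse mapping: recover the (approximate) float a byte represents."""
    return lo + b / 255 * (hi - lo)
```

With this scheme, byte 0 maps back to -3.0 and byte 255 to 3.0 exactly, while byte 128 represents a value close to 0 (with 256 levels over a symmetric range, no code lands exactly on 0).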
  • a floating-point arithmetic unit needs to consume more resources for processing, so the gap in power consumption between fixed-point and floating-point operations is usually orders of magnitude.
  • the chip area and power consumption of a floating-point arithmetic unit are many times larger than that of a fixed-point arithmetic unit.
  • FIG. 3 shows a schematic diagram of a process 300 for quantizing data according to an embodiment of the present disclosure.
  • the input data 310 is an unquantized floating-point number, such as a 32-bit floating-point number. If the input data 310 is input directly into the neural network model 340 for processing, more computing resources are consumed and the processing speed is slower. Therefore, at block 320, the input data may be quantized to obtain quantized data 330 (for example, 8-bit integers). If the quantized data 330 is input into the neural network model 340 for processing, then, since 8-bit integer calculations are faster, the neural network model 340 completes the processing of the input data faster and generates the corresponding output result 350.
  • FIG. 4A shows a diagram 400 for symmetrically quantizing data according to an embodiment of the present disclosure.
  • It is the simplest symmetric quantization method: it directly selects the maximum absolute value among all the values in the data to be quantized, that is, it uses -|max| and |max| as the quantization range.
  • FIG. 4B shows a diagram 450 for symmetrically quantizing data based on a cutoff threshold according to an embodiment of the present disclosure.
  • a cutoff threshold T is selected in FIG. 4B, and the data outside the range of -T to T is truncated.
  • the three values to be quantized in the circle 460 are outside the cutoff range, so they will be treated as the truncated value -T.
  • By using the truncation threshold to reduce the value range of the data to be quantized, the accuracy of the quantized data can be improved.
  • how to obtain the cutoff threshold with the least loss of quantization accuracy is a technical problem to be solved urgently.
  • FIG. 5 shows a flowchart of a method 500 for processing data according to an embodiment of the present disclosure. It should be understood that the method 500 may be executed by one or more processors 101 described with reference to FIG. 1.
  • a set of data to be quantified for the machine learning model is obtained.
  • the input data 310 to be quantized can be obtained, and the input data can be quantized, thereby speeding up the processing speed of the neural network model 340.
  • some parameters (such as weights) of the neural network model itself can also be quantified.
  • the size of the neural network model can be reduced.
  • the data to be quantized may be a 32-bit floating point number.
  • the data to be quantized may also be floating-point numbers with other digits, or other data types.
  • multiple sets of quantized data are determined by using multiple pairs of truncation thresholds to respectively quantify a set of data to be quantized, where each pair of truncation thresholds in the multiple pairs of truncation thresholds includes a symmetrical truncated positive value and a truncated negative value.
  • the truncation threshold is a symmetric pair of positive and negative values, that is, a truncated positive value and a truncated negative value; the two values have the same magnitude but opposite signs.
  • multiple pairs of cutoff thresholds can be selected, and the data to be quantized can be quantified separately.
  • some truncation thresholds may be selected at fixed intervals, for example, a truncation threshold may be selected every predetermined distance according to the maximum absolute value in the data to be quantized.
  • the corresponding one or more quantization parameters may be calculated according to each pair of truncation thresholds, and then the calculated quantization parameters may be used to quantize the data to be quantized.
  • the data to be quantized can also be directly quantified through various formulas or models according to the cutoff threshold, without separately calculating the value of each quantization parameter.
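  • One concrete way to realize the above is to derive a scale quantization parameter from the truncation threshold and then clip, scale, and round the data. The following is a sketch of a standard symmetric 8-bit scheme under that assumption, not necessarily the patent's exact formula:

```python
import numpy as np

def quantize_symmetric(data, T, bits=8):
    """Quantize `data` with the symmetric cutoff pair (-T, T).

    Values beyond the cutoffs are truncated to -T / T, then scaled and
    rounded to signed integers in [-(2**(bits-1) - 1), 2**(bits-1) - 1]."""
    qmax = 2 ** (bits - 1) - 1          # 127 for 8 bits
    scale = T / qmax                    # quantization parameter from the threshold
    q = np.round(np.clip(data, -T, T) / scale)
    return q.astype(np.int8), scale

def dequantize(q, scale):
    """Map quantized integers back to (approximate) floating-point values."""
    return q.astype(np.float32) * scale
```

For example, with T = 2.0 the out-of-range values ±5.0 are truncated to ±2.0 and quantized to ±127.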
  • Based on the difference between the mean absolute value of each set of quantized data and the mean absolute value of the set of data to be quantized, a pair of truncation thresholds is selected from the plurality of pairs of truncation thresholds to quantize the set of data to be quantized.
  • the inventors of the present application have discovered, through research and a large number of experiments, that the difference between the mean absolute values of the data before and after quantization reflects the accuracy loss of the quantization: the smaller the mean-absolute-value difference, the smaller the accuracy loss of the quantization operation. Therefore, the embodiments of the present disclosure use the difference of the mean absolute values of the data before and after quantization as the index for selecting the optimal cutoff threshold, which achieves a smaller accuracy loss than the traditional KL method.
  • the difference between the mean value of the absolute value of the quantized data and the mean value of the absolute value of the data to be quantized may be the difference between the two absolute value means.
  • the difference between the mean absolute value of the quantized data and the mean absolute value of the data to be quantized may also be: the difference between the two absolute-value means divided by the mean absolute value of the data to be quantized, after which the absolute value is taken.
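  • The evaluation index in the second variant above can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def mean_abs_difference(original, quantized):
    """Relative difference between the mean absolute values of the data
    before and after quantization; the smaller the value, the smaller
    the accuracy loss of the quantization operation."""
    m_orig = np.mean(np.abs(original))
    m_quant = np.mean(np.abs(quantized))
    return abs((m_quant - m_orig) / m_orig)
```
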
  • the selected pair of cutoff thresholds can be used to quantize the set of data to be quantized to obtain quantized data: values in the set greater than the truncated positive value are truncated to the truncated positive value, and values less than the truncated negative value are truncated to the truncated negative value; the obtained quantized data is then input into the neural network model for processing.
  • FIG. 6 shows a flowchart of a method 600 for searching for a cutoff threshold for symmetric quantization according to an embodiment of the present disclosure.
  • the method 600 determines an optimal pair of cutoff thresholds for data quantization based on the data to be quantized.
  • the mean absolute value of the data to be quantized and the maximum absolute value in the data to be quantized are determined, where the mean absolute value is the sum of the absolute values of all the data divided by the number of elements.
  • In addition, the minimum mean difference is initialized, for example to the maximum representable floating-point value, and the search order i of the cyclic search is initialized (for example, to 0).
  • the search order i can also be initialized to half of the total number of searches, that is, the search starts from the middle, which can improve the search efficiency.
  • one or more rounds of the threshold search process can be set, and each round of the threshold search can have the same or different total number of searches.
  • the total number of searches in each round can be set between 10 and 32.
  • the larger the total number of searches, the longer the search time but the more accurate the cutoff threshold found.
  • Beyond a certain number of searches, however, the search performance may no longer be substantially improved.
  • FIG. 7A shows an example illustration 700 of a cutoff threshold for coarse-grained search for symmetric quantization according to an embodiment of the present disclosure.
  • 10 candidate truncation thresholds can be determined in the data to be quantized (identified by the dotted lines in FIG. 7A), and these 10 pairs of truncation thresholds are used in turn (FIG. 7A shows only the truncated positive values; the corresponding truncated negative values are not shown) to perform the quantization; the best pair of cutoff thresholds is then determined according to the difference in mean absolute value of the data before and after quantization.
  • While the search order i is less than the total number of searches, that is, while pairs of cutoff thresholds are still being selected in turn for quantization, it is judged whether all cutoff-threshold calculations have been completed. If the search order i is less than the total number of searches, then in block 606 a pair of truncation thresholds is determined based on the current search order i, namely -(maximum absolute value / total number of searches) * (i + 1) and (maximum absolute value / total number of searches) * (i + 1).
  • In block 612, it is determined whether the calculated difference Distance_i is less than the current minimum difference. If so, in block 614 the calculated difference Distance_i is set as the current minimum difference and the cutoff threshold at which the difference is smallest is recorded, and then the search order i is incremented (i.e., i++) in block 616. If the judgment in block 612 is negative, the search order i is directly incremented in block 616, that is, the difference for the next pair of truncation thresholds is determined. Steps 604 to 616 continue to loop until the search order i reaches the total number of searches, whereupon in block 618 the first round of the truncation-threshold search process is exited.
  • As shown in FIG. 7A, the truncation-threshold search process is: use multiple pairs of truncation thresholds to quantize the data to be quantized, determine, among the multiple sets of quantized data, the set whose mean absolute value differs least from that of the data to be quantized, and then select from the multiple pairs of truncation thresholds the pair corresponding to that set of quantized data.
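The coarse-grained search loop described above (blocks 604–618 of method 600) can be sketched in Python. This is a minimal sketch only: the helper names (`quantize`, `coarse_search`), the simulated round-trip quantization, and the exact distance formula are illustrative assumptions, not the disclosure's equations.

```python
import numpy as np

def quantize(data, threshold, n_bits=8):
    """Simulated symmetric quantization: clamp to [-threshold, threshold],
    map onto n_bits-wide signed integer levels, then map back to floats."""
    levels = 2 ** (n_bits - 1) - 1           # e.g. 127 for 8-bit
    clipped = np.clip(data, -threshold, threshold)
    scale = threshold / levels               # step size within the range
    return np.round(clipped / scale) * scale

def coarse_search(data, total_searches=10, n_bits=8):
    """One round of the truncation-threshold search: try thresholds at
    fixed intervals up to the maximum absolute value and keep the one
    whose quantized mean absolute value is closest to the original."""
    absmax = np.max(np.abs(data))
    data_mean = np.mean(np.abs(data))        # mean absolute value before quantization
    best_diff = np.finfo(np.float64).max     # "minimum mean difference" initialization
    best_threshold = absmax
    for i in range(total_searches):          # search order i
        t = absmax / total_searches * (i + 1)
        q = quantize(data, t, n_bits)
        diff = abs(np.mean(np.abs(q)) - data_mean)
        if diff < best_diff:                 # record threshold with smallest difference
            best_diff, best_threshold = diff, t
    return best_threshold

data = np.random.randn(1000).astype(np.float32)
t = coarse_search(data)
```

A second fine-grained round would repeat `coarse_search` over the interval between the neighbors of the returned threshold.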
  • Next, a second round of fine-grained truncation-threshold search can be performed.
  • The second round of the search can also follow method 600, except that it is performed within a certain range around the optimal truncation threshold 770 of the first round (for example, between the truncation threshold immediately before and the truncation threshold immediately after the selected threshold 770), further refining the first-round result.
  • In the second round, the interval between adjacent pairs of cutoff thresholds may be (maximum absolute value*2)/(total number of searches in the first round*total number of searches in the second round).
  • FIG. 7B shows an illustration 750 of a cutoff threshold for fine-grained search for symmetric quantization according to an embodiment of the present disclosure.
  • As shown in FIG. 7B, the fine-grained optimal cutoff thresholds are 772 and 778.
  • In this way, a more accurate cutoff threshold can be obtained, further reducing the accuracy loss caused by quantization.
  • FIG. 8 shows a flowchart of a method 800 for iteratively searching for an optimal cutoff threshold according to an embodiment of the present disclosure.
  • In block 802, three pairs of truncation thresholds are determined. For example, the maximum absolute value absmax of all data in the data F_x to be quantized can be determined.
  • The three pairs of truncation thresholds can be (-absmax/2, absmax/2), (-absmax*3/4, absmax*3/4), and (-absmax, absmax).
  • In block 804, the three pairs of truncation thresholds are used to quantize the data to be quantized, respectively, yielding three sets of quantized data. The mean absolute value F_mean of F_x and the corresponding mean absolute values of the three sets of quantized data are then calculated, and the smallest difference diff_min between these means is selected. In block 806, it is determined whether the minimum difference diff_min is less than a predetermined threshold set in advance.
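The iterative three-candidate search of method 800 can be sketched as follows. The difference formula and the way the search range shrinks around the winning pair are assumptions (the source's formulas are not reproduced in this text); `iterative_search` and `stop_diff` are illustrative names.

```python
import numpy as np

def quantize(data, threshold, n_bits=8):
    """Simulated symmetric quantization round trip (illustrative)."""
    levels = 2 ** (n_bits - 1) - 1
    scale = threshold / levels
    return np.round(np.clip(data, -threshold, threshold) / scale) * scale

def iterative_search(data, stop_diff=1e-3, max_rounds=10):
    """Method 800 sketch: test thresholds absmax/2, absmax*3/4 and absmax,
    keep the one whose quantized mean absolute value is closest to the
    original mean, and iterate until diff_min falls below stop_diff."""
    f_mean = np.mean(np.abs(data))           # mean absolute value of F_x
    absmax = np.max(np.abs(data))
    best_t = absmax
    for _ in range(max_rounds):
        candidates = [absmax / 2, absmax * 3 / 4, absmax]
        diffs = [abs(np.mean(np.abs(quantize(data, t))) - f_mean)
                 for t in candidates]
        i = int(np.argmin(diffs))            # smallest difference diff_min
        best_t = candidates[i]
        if diffs[i] < stop_diff:             # block 806 stop condition
            break
        absmax = best_t                      # assumed refinement around the winner
    return best_t

data = np.random.randn(1000).astype(np.float32)
best = iterative_search(data)
```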
  • the quantization parameter when using each pair of truncation thresholds to quantize data can be determined by the following equations (1)-(3).
  • where n represents the number of binary digits after quantization, S and f represent quantization parameters, and ceil represents rounding up.
  • In this way, the quantization parameters S1, f1, S2, f2, S3, and f3 can be obtained for the three pairs of truncation thresholds, thereby obtaining the corresponding quantized data.
  • The S and f corresponding to the selected pair of truncation thresholds are then taken directly as the quantization parameters of the data to be quantized.
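Equations (1)–(3) are not reproduced in this text, so the following sketch only illustrates one common way such parameters could be derived from a cutoff threshold and bit width n; the exact formulas of the disclosure may differ, and `quant_params` is a hypothetical helper name.

```python
import math

def quant_params(threshold, n=8):
    """Hypothetical derivation of quantization parameters from a cutoff
    threshold: f is a power-of-two exponent chosen with ceil (rounding up,
    as the text states), and S is the resulting step size that lets an
    n-bit signed representation cover [-threshold, threshold]."""
    levels = 2 ** (n - 1) - 1                     # e.g. 127 for n = 8
    f = math.ceil(math.log2(threshold / levels))  # ceil: round up
    S = 2.0 ** f                                  # step size
    return S, f

S, f = quant_params(3.0, n=8)
```

With threshold 3.0 and n = 8, the step size is the smallest power of two whose 127 steps still cover 3.0.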
  • Although the steps in the flowchart are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, there is no strict order for the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in the flowchart may include multiple sub-steps or stages; these sub-steps or stages are not necessarily executed at the same time but may be executed at different times, and their execution order is not necessarily sequential: they may be executed in turn or alternately with at least part of the other steps, or of the sub-steps or stages of other steps.
  • FIG. 9 shows a block diagram of an apparatus 900 for processing data according to an embodiment of the present disclosure.
  • the device 900 includes a data acquisition unit 910 to be quantized, a data determination unit 920 after quantization, and a truncation threshold selection unit 930.
  • the to-be-quantized data acquisition unit 910 is configured to acquire a group of to-be-quantized data for the machine learning model.
  • The quantized data determining unit 920 is configured to determine multiple sets of quantized data by using multiple pairs of truncation thresholds to quantize a set of data to be quantized, wherein each pair of truncation thresholds in the multiple pairs includes a symmetric truncation positive value and truncation negative value.
  • The truncation threshold selection unit 930 is configured to select, based on the difference between the mean absolute value of each set of quantized data in the multiple sets and the mean absolute value of the set of data to be quantized, a pair of cutoff thresholds from the multiple pairs of cutoff thresholds for quantizing the set of data to be quantized.
  • the to-be-quantized data acquiring unit 910, the quantized data determining unit 920, and the truncation threshold selection unit 930 in the device 900 may also be configured to perform steps and/or actions according to various embodiments of the present disclosure.
  • the foregoing device embodiments are only illustrative, and the device of the present disclosure may also be implemented in other ways.
  • the division of units/modules in the above-mentioned embodiments is only a logical function division, and there may be other division methods in actual implementation.
  • multiple units, modules or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • The functional units/modules in the various embodiments of the present disclosure may be integrated into one unit/module, each unit/module may exist alone physically, or two or more units/modules may be integrated together.
  • the above-mentioned integrated unit/module can be implemented in the form of hardware or software program module.
  • the hardware may be a digital circuit, an analog circuit, and so on.
  • the physical realization of the hardware structure includes but is not limited to transistors, memristors and so on.
  • the artificial intelligence processor may be any appropriate hardware processor, such as CPU, GPU, FPGA, DSP, ASIC, and so on.
  • The storage unit may be any suitable magnetic storage medium or magneto-optical storage medium, such as resistive random access memory RRAM (Resistive Random Access Memory), dynamic random access memory DRAM (Dynamic Random Access Memory), static random access memory SRAM (Static Random-Access Memory), enhanced dynamic random access memory EDRAM (Enhanced Dynamic Random Access Memory), high-bandwidth memory HBM (High-Bandwidth Memory), hybrid memory cube HMC (Hybrid Memory Cube), and so on.
  • If the integrated unit/module is implemented in the form of a software program module and sold or used as an independent product, it can be stored in a computer-readable memory.
  • Based on this understanding, the technical solution of the present disclosure, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • The aforementioned memory includes: a USB flash drive, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), a removable hard disk, a magnetic disk, an optical disk, or other media that can store program codes.
  • a computer-readable storage medium on which a computer program is stored, and when the program is executed, the method according to each embodiment of the present disclosure is implemented
  • an artificial intelligence chip which includes a device for processing the above data.
  • Also disclosed is a board card, which includes a storage device, an interface device, a control device, and the aforementioned artificial intelligence chip, wherein the artificial intelligence chip is connected to the storage device, the control device, and the interface device respectively.
  • The storage device is used to store data; the interface device is used to realize data transmission between the artificial intelligence chip and external equipment; and the control device is used to monitor the status of the artificial intelligence chip.
  • FIG. 10 shows a structural block diagram of a board card 1000 according to an embodiment of the present disclosure.
  • the board card 1000 may include other supporting devices in addition to the chips 1030-1 and 1030-2 (collectively referred to as chips 1030).
  • the supporting components include but are not limited to: a storage device 1010, an interface device 1040, and a control device 1020.
  • the interface device 1040 may be connected with an external device 1060.
  • the storage device 1010 and the artificial intelligence chip 1030 are connected through a bus 1050 for storing data.
  • the storage device 1010 may include multiple sets of storage cells 1010-1 and 1010-2.
  • Each group of the storage units is connected to the artificial intelligence chip through a bus 1050. It can be understood that each group of the storage units may be DDR SDRAM (Double Data Rate SDRAM, double data rate synchronous dynamic random access memory).
  • The storage device may include 4 groups of the storage units, and each group may include a plurality of DDR4 chips.
  • The artificial intelligence chip may include four 72-bit DDR4 controllers; in each 72-bit DDR4 controller, 64 bits are used for data transmission and 8 bits are used for ECC checking. It can be understood that when DDR4-3200 chips are used in each group of the storage units, the theoretical bandwidth of data transmission can reach 25600 MB/s.
  • each group of the storage unit includes a plurality of double-rate synchronous dynamic random access memories arranged in parallel.
  • DDR can transmit data twice in one clock cycle.
  • a controller for controlling the DDR is provided in the chip, which is used to control the data transmission and data storage of each storage unit.
  • the interface device is electrically connected with the artificial intelligence chip.
  • the interface device is used to implement data transmission between the artificial intelligence chip and an external device (such as a server or a computer).
  • the interface device may be a standard PCIE interface.
  • the data to be processed is transferred from the server to the chip through a standard PCIE interface to realize data transfer.
  • The interface device may also be another interface; the present disclosure does not limit the specific form of the other interface, as long as the interface unit can realize the transfer function.
  • The calculation result of the artificial intelligence chip is likewise transmitted back by the interface device to an external device (such as a server).
  • the control device is electrically connected with the artificial intelligence chip.
  • the control device is used to monitor the state of the artificial intelligence chip.
  • the artificial intelligence chip and the control device may be electrically connected through an SPI interface.
  • the control device may include a single-chip microcomputer (Micro Controller Unit, MCU).
  • The artificial intelligence chip may include multiple processing chips, multiple processing cores, or multiple processing circuits, and can drive multiple loads; therefore, the artificial intelligence chip can be in different working states such as multi-load and light-load.
  • The control device can regulate and control the working states of the multiple processing chips, multiple processing cores, and/or multiple processing circuits in the artificial intelligence chip.
  • an electronic device which includes the aforementioned artificial intelligence chip.
  • The electronic equipment includes data processing devices, robots, computers, printers, scanners, tablets, smart terminals, mobile phones, driving recorders, navigators, sensors, webcams, servers, cloud servers, cameras, video cameras, projectors, watches, headsets, mobile storage, wearable devices, vehicles, household appliances, and/or medical equipment.
  • the transportation means include airplanes, ships, and/or vehicles;
  • the household appliances include TVs, air conditioners, microwave ovens, refrigerators, rice cookers, humidifiers, washing machines, electric lights, gas stoves, and range hoods;
  • The medical equipment includes nuclear magnetic resonance instruments, B-mode ultrasound instruments, and/or electrocardiographs.
  • a method for processing data characterized in that it comprises:
  • Multiple sets of quantized data are determined by using multiple pairs of truncation thresholds to respectively quantify the set of data to be quantified, each of the multiple pairs of truncation thresholds including a symmetric truncated positive value and a truncated negative value;
  • A pair of truncation thresholds is selected from the multiple pairs of truncation thresholds based on the difference between the mean absolute value of each set of quantized data in the multiple sets of quantized data and the mean absolute value of the set of data to be quantized, and the selected pair of truncation thresholds is used to quantize the set of data to be quantized.
  • determining multiple sets of quantified data includes:
  • the plurality of pairs of cutoff thresholds are determined.
  • determining multiple sets of quantified data further includes:
  • a first set of quantized data is determined by using a first pair of truncation thresholds to quantize the set of data to be quantized,
  • where the first pair of truncation thresholds includes a first truncation positive value and a first truncation negative value opposite to the first truncation positive value;
  • determining multiple sets of quantified data further includes:
  • a second set of quantized data is determined by using a second pair of truncation thresholds to quantize the set of data to be quantized,
  • where the second pair of truncation thresholds includes a second truncation positive value and a second truncation negative value opposite to the second truncation positive value;
  • a pair of truncation thresholds corresponding to the set of quantized data is selected from the plurality of pairs of truncation thresholds.
  • where the first pair of cutoff thresholds in the three pairs of cutoff thresholds includes half of the maximum absolute value and its opposite,
  • the second pair of cutoff thresholds in the three pairs includes three-quarters of the maximum absolute value and its opposite, and the third pair of cutoff thresholds in the three pairs includes the maximum absolute value and its opposite;
  • the three sets of quantized data are determined by using three pairs of truncation thresholds to respectively quantify the set of data to be quantized.
  • selecting a pair of truncation thresholds from the plurality of pairs of truncation thresholds comprises:
  • three pairs of cutoff thresholds are re-determined.
  • quantizing the set of data to be quantized includes: setting a value in the set of data to be quantized that is greater than the truncation positive value to the truncation positive value, and setting a value in the set of data to be quantized that is less than the truncation negative value to the truncation negative value; and
  • the obtained quantized data is input to the neural network model for processing.
  • a device for processing data characterized in that it comprises:
  • the to-be-quantified data acquisition unit is used to acquire a group of to-be-quantified data used in the machine learning model
  • the quantized data determining unit is configured to determine a plurality of sets of quantized data by using a plurality of pairs of truncation thresholds to quantize the set of data to be quantized, each pair of truncation thresholds in the plurality of pairs including a symmetric truncation positive value and truncation negative value; and
  • the truncation threshold selection unit is configured to select a pair of truncation thresholds from the plurality of pairs of truncation thresholds, based on the difference between the mean absolute value of each set of quantized data in the plurality of sets of quantized data and the mean absolute value of the set of data to be quantized, to quantize the set of data to be quantized.
  • the quantized data determining unit includes:
  • An absolute maximum value determining unit for determining the maximum absolute value of all data in the set of data to be quantized
  • a plurality of pairs of truncation threshold determination units are configured to determine the plurality of pairs of truncation thresholds based on the absolute maximum value.
  • quantized data determining unit further comprises:
  • a first cut-off positive value determining unit configured to determine a first cut-off positive value based on the maximum absolute value, a predetermined total number of searches, and the current search order;
  • a first group of quantized data determining unit configured to determine a first group of quantized data by using a first pair of truncation thresholds to quantize the group of data to be quantized, the first pair of truncation thresholds including the first truncation A positive value and a first truncated negative value opposite to the first truncated positive value;
  • the first difference determining unit is configured to determine the first difference between the mean value of the absolute value of the first set of quantized data and the mean value of the absolute value of the set of data to be quantized.
  • quantized data determining unit further comprises:
  • a second cut-off positive value determining unit configured to determine a second cut-off positive value based on the maximum absolute value, the predetermined total number of searches, and the current search order;
  • a second set of quantized data determining unit configured to determine a second set of quantized data by using a second pair of truncation thresholds to quantize the set of data to be quantized, where the second pair of truncation thresholds includes the second truncation A positive value and a second truncated negative value opposite to the second truncated positive value;
  • the second difference determining unit is configured to determine the second difference between the mean value of the absolute value of the second set of quantized data and the mean value of the absolute value of the set of data to be quantized.
  • a minimum difference determining unit configured to determine a group of quantized data with the smallest difference in the mean value of absolute values from the group of data to be quantized among the plurality of groups of quantized data;
  • the second truncation threshold selection unit is configured to select a pair of truncation thresholds corresponding to the set of quantized data from the plurality of pairs of truncation thresholds.
  • a truncated search range determining unit configured to determine a truncated search range associated with the selected pair of truncation thresholds
  • a new multiple-pair truncation threshold determining unit configured to determine a new multiple-pair truncation threshold within the truncation search range
  • the second quantized data determining unit is configured to determine new sets of quantized data by using the new pairs of cutoff thresholds to quantize the set of data to be quantized respectively;
  • the third truncation threshold selection unit is used to select a new pair of truncation thresholds from the new plurality of pairs of truncation thresholds, based on the difference between the mean absolute value of each set of quantized data in the new plurality of sets of quantized data and the mean absolute value of the set of data to be quantized.
  • the quantized data determining unit includes:
  • An absolute maximum value determining unit configured to determine the maximum absolute value of all data in the set of data to be quantized
  • a three-pair cut-off threshold determining unit is configured to determine three pairs of cut-off thresholds based on the maximum absolute value, where the first pair of cut-off thresholds in the three pairs of cut-off thresholds includes half of the maximum absolute value and the opposite thereof, The second pair of cutoff thresholds in the three pairs of cutoff thresholds includes three-quarters of the maximum value of the absolute value and the opposite, and the third pair of cutoff thresholds in the three pairs of cutoff thresholds includes the maximum of the absolute value Value and its inverse; and
  • Three sets of quantized data determining units are used to determine three sets of quantized data by using three pairs of truncation thresholds to respectively quantize the set of data to be quantized.
  • the iterative unit is used to iteratively perform the following actions until the stop condition is met:
  • three pairs of cutoff thresholds are re-determined.
  • the data quantization unit is configured to use the selected pair of cutoff thresholds to quantize the set of data to be quantized to obtain quantized data, wherein quantizing the set of data to be quantized includes: combining the set of data to be quantized A numerical value greater than the cut-off positive value is set as the cut-off positive value, and a numerical value smaller than the cut-off negative value in the set of data to be quantized is set as the cut-off negative value; and
  • the data input unit is used to input the obtained quantized data into the neural network model for processing.
  • a computer-readable storage medium characterized in that a computer program is stored thereon, which, when executed, implements the method according to any one of clauses A1-A9.
  • An artificial intelligence chip characterized in that the chip includes the device for processing data according to any one of clauses A10-A18.
  • An electronic device characterized in that it comprises the artificial intelligence chip according to clause A20.
  • a board characterized in that, the board comprises: a storage device, an interface device, a control device, and the artificial intelligence chip according to clause A20;
  • the artificial intelligence chip is connected with the storage device, the control device, and the interface device;
  • the storage device is used to store data
  • the interface device is used to implement data transmission between the artificial intelligence chip and external equipment.
  • the control device is used to monitor the state of the artificial intelligence chip.
  • the storage device includes: multiple groups of storage units, each group of storage units is connected to the artificial intelligence chip through a bus, and the storage unit is DDR SDRAM;
  • the chip includes: a DDR controller, which is used to control data transmission and data storage of each storage unit;
  • the interface device is a standard PCIE interface.


Abstract

Embodiments of the present disclosure relate to a method and apparatus for processing data, and related products. Embodiments of the present disclosure relate to a board card, the board card including: a storage device, an interface device, a control device, and an artificial intelligence chip, wherein the artificial intelligence chip is connected to the storage device, the control device, and the interface device respectively; the storage device is used to store data; the interface device is used to realize data transmission between the artificial intelligence chip and external equipment; and the control device is used to monitor the status of the artificial intelligence chip. The board card can be used to perform artificial intelligence operations.

Description

Method and Apparatus for Processing Data, and Related Products — Technical Field
Embodiments of the present disclosure relate generally to the field of computer technology, and more specifically to a method and apparatus for processing data, and related products.
Background
With the continuous development of artificial intelligence technology, its fields of application have become increasingly broad, and it has been successfully applied in image recognition, speech recognition, natural language processing, and other fields. However, as the complexity and accuracy of artificial intelligence algorithms increase, machine learning models keep growing in size, and the amount of data to be processed grows accordingly. Processing such large amounts of data entails large computation and time overheads, and processing efficiency is low.
Summary
In view of this, embodiments of the present disclosure provide a method and apparatus for processing data, and related products.
In a first aspect of the present disclosure, a method for processing data is provided. The method includes: acquiring a set of data to be quantized for a machine learning model; determining multiple sets of quantized data by using multiple pairs of truncation thresholds to respectively quantize the set of data to be quantized, wherein each pair of truncation thresholds in the multiple pairs includes a symmetric truncation positive value and truncation negative value; and selecting, based on the difference between the mean of the absolute values of each set of quantized data in the multiple sets of quantized data and the mean of the absolute values of the set of data to be quantized, a pair of truncation thresholds from the multiple pairs of truncation thresholds for quantizing the set of data to be quantized.
In a second aspect of the present disclosure, an apparatus for processing data is provided. The apparatus includes: a to-be-quantized data acquisition unit for acquiring a set of data to be quantized for a machine learning model; a quantized data determining unit for determining multiple sets of quantized data by using multiple pairs of truncation thresholds to respectively quantize the set of data to be quantized, wherein each pair of truncation thresholds in the multiple pairs includes a symmetric truncation positive value and truncation negative value; and a truncation threshold selection unit for selecting, based on the difference between the mean of the absolute values of each set of quantized data in the multiple sets of quantized data and the mean of the absolute values of the set of data to be quantized, a pair of truncation thresholds from the multiple pairs of truncation thresholds for quantizing the set of data to be quantized.
In a third aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when the program is executed, the method according to the various embodiments of the present disclosure is implemented.
In a fourth aspect of the present disclosure, an artificial intelligence chip is provided, which includes the apparatus for processing data according to the various embodiments of the present disclosure.
In a fifth aspect of the present disclosure, an electronic device is provided, which includes the artificial intelligence chip according to the various embodiments of the present disclosure.
In a sixth aspect of the present disclosure, a board card is provided, which includes: a storage device, an interface device, a control device, and the artificial intelligence chip according to the various embodiments of the present disclosure, wherein the artificial intelligence chip is connected to the storage device, the control device, and the interface device; the storage device is used to store data; the interface device is used to realize data transmission between the artificial intelligence chip and external equipment; and the control device is used to monitor the status of the artificial intelligence chip.
By derivation from the technical features in the claims, beneficial effects corresponding to the technical problems in the background can be achieved. Other features and aspects of the present disclosure will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features, and aspects of the present disclosure together with the specification, and serve to explain the principles of the present disclosure.
FIG. 1 shows a schematic diagram of a processing system for a method for processing data according to an embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of an example architecture of a neural network according to an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of a process for quantizing data according to an embodiment of the present disclosure;
FIG. 4A shows a schematic diagram of symmetrically quantizing data according to an embodiment of the present disclosure;
FIG. 4B shows a schematic diagram of symmetrically quantizing data based on a truncation threshold according to an embodiment of the present disclosure;
FIG. 5 shows a flowchart of a method for processing data according to an embodiment of the present disclosure;
FIG. 6 shows a flowchart of a method for searching for a truncation threshold for symmetric quantization according to an embodiment of the present disclosure;
FIG. 7A shows a schematic diagram of a coarse-grained search for a truncation threshold for symmetric quantization according to an embodiment of the present disclosure;
FIG. 7B shows a schematic diagram of a fine-grained search for a truncation threshold for symmetric quantization according to an embodiment of the present disclosure;
FIG. 8 shows a flowchart of a method for iteratively searching for an optimal truncation threshold according to an embodiment of the present disclosure;
FIG. 9 shows a block diagram of an apparatus for processing data according to an embodiment of the present disclosure; and
FIG. 10 shows a structural block diagram of a board card according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only some rather than all of the embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present disclosure.
It should be understood that the terms "first", "second", "third", "fourth", and the like in the claims, specification, and drawings of the present disclosure are used to distinguish different objects, not to describe a particular order. The terms "include" and "comprise" used in the specification and claims of the present disclosure indicate the presence of the described features, wholes, steps, operations, elements, and/or components, but do not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components, and/or collections thereof.
It should also be understood that the terms used in this specification are only for the purpose of describing particular embodiments and are not intended to limit the present disclosure. As used in the specification and claims of the present disclosure, the singular forms "a", "an", and "the" are intended to include the plural forms unless the context clearly indicates otherwise. It should be further understood that the term "and/or" used in the specification and claims of the present disclosure refers to any and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in this specification and the claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as meaning "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
In general, when quantizing data, if the selected value range is wide, the precision of the quantized data will be low, while if the value range is too small, too much data will be truncated, causing a loss of information for the data distributed at the two tails, where the value range refers to the numerical range between the minimum truncation threshold and the maximum truncation threshold used to quantize the data. Therefore, a suitable pair of truncation thresholds needs to be found to quantize the data so that the loss from quantization is minimal or small. Traditionally, the optimal truncation threshold is determined by the Kullback–Leibler (KL) divergence method, where the KL divergence can measure the correlation between the data before and after quantization. The KL divergence, also known as relative entropy, information divergence, or information gain, is a measure of the difference between two probability distributions P and Q. Assuming that the distribution of the 32-bit floating-point numbers before quantization is P and the distribution of the 8-bit integers after quantization is Q, the smaller the KL divergence between P and Q, the closer the distributions before and after quantization, and the more effective the quantization. However, the inventors of the present application found that the quantization achieved with the truncation threshold obtained by the traditional KL method is not good and usually causes a large loss of precision.
To this end, embodiments of the present disclosure propose a new scheme for determining a truncation threshold for symmetric quantization, which can achieve a smaller quantization precision loss than traditional techniques (such as the KL method). According to embodiments of the present disclosure, after a set of data to be quantized for a machine learning model is acquired, multiple sets of quantized data are determined by using multiple pairs of truncation thresholds to respectively quantize the set of data to be quantized, wherein each pair of truncation thresholds in the multiple pairs includes a symmetric truncation positive value and truncation negative value. Then, the difference between the mean of the absolute values of each set of quantized data and the mean of the absolute values of the set of data to be quantized is used as an evaluation metric to select a suitable pair of truncation thresholds from the multiple pairs. In this way, a more suitable truncation threshold can be found.
The basic principles and several example implementations of the present disclosure are described below with reference to FIGS. 1 to 10. It should be understood that these exemplary embodiments are given only to enable those skilled in the art to better understand and thereby implement the embodiments of the present disclosure, and are not intended to limit the scope of the present disclosure in any way.
FIG. 1 shows a schematic diagram of a processing system 100 for a method for processing data according to an embodiment of the present disclosure. As shown in FIG. 1, the processing system 100 includes multiple processors 101-1, 101-2, 101-3 (collectively referred to as processors 101) and a memory 102. The processors 101 are used to execute instruction sequences, and the memory 102 is used to store data and may include random access memory (RAM) and a register file. The multiple processors 101 in the processing system 100 may share part of the storage space, for example part of the RAM storage space and the register file, and may also have their own storage spaces at the same time.
It should be understood that the various methods according to embodiments of the present disclosure may be applied to any processor of a processing system 100 (for example, an artificial intelligence chip) including multiple processors (multiple cores). The processor may be a general-purpose processor, such as a CPU (Central Processing Unit), or an artificial intelligence processor (IPU) for performing artificial intelligence operations. Artificial intelligence operations may include machine learning operations, brain-like operations, and so on, where machine learning operations include neural network operations, k-means operations, support vector machine operations, and so on. The artificial intelligence processor may include, for example, one or a combination of a GPU (Graphics Processing Unit), an NPU (Neural-Network Processing Unit), a DSP (Digital Signal Processing unit), and a Field-Programmable Gate Array (FPGA) chip. The present disclosure does not limit the specific type of the processor. In addition, the types of the multiple processors in the processing system 100 may be the same or different, which is not limited by the present disclosure.
In one possible implementation, the processors mentioned in the present disclosure may include multiple processing units, and each processing unit may independently run the various tasks assigned to it, such as convolution operation tasks, pooling tasks, or fully-connected tasks. The present disclosure does not limit the processing units or the tasks run by the processing units.
FIG. 2 shows a schematic diagram of an example architecture of a neural network 200 according to an embodiment of the present disclosure. A neural network (NN) is a mathematical model that imitates the structure and function of a biological neural network and performs computation through a large number of connected neurons. A neural network is therefore a computational model composed of a large number of interconnected nodes (or "neurons"). Each node represents a specific output function, called an activation function. Each connection between two neurons represents a weighted value for the signal passing through that connection, called a weight, which is equivalent to the memory of the neural network. The output of the neural network varies with the connection patterns among neurons, the weights, and the activation functions. In a neural network, the neuron is the basic unit. It receives a certain number of inputs and a bias, and a signal (value) is multiplied by a weight when it arrives. A connection links a neuron to another neuron in another layer or the same layer, and each connection has an associated weight. In addition, the bias is an extra input to the neuron; it is always 1 and has its own connection weight. This ensures that the neuron activates even if all inputs are empty (all 0).
In applications, if no nonlinear function is applied to the neurons in a neural network, the neural network is merely a linear function and is no more powerful than a single neuron. Suppose the output of a neural network is to lie between 0 and 1; for example, in the case of distinguishing cats from dogs, an output close to 0 can be regarded as a cat and an output close to 1 as a dog. To accomplish this, an activation function, such as the sigmoid activation function, is introduced into the neural network. Regarding this activation function, it suffices to know that its return value is a number between 0 and 1. Therefore, the activation function is used to introduce nonlinearity into the neural network and narrows the result of the neural network operation to a smaller range. In fact, how the activation function is expressed is not important; what matters is that a nonlinear function is parameterized by some weights, and the nonlinear function can be changed by changing these weights.
FIG. 2 is a schematic diagram of the structure of the neural network 200. The neural network shown in FIG. 2 includes three kinds of layers: an input layer 210, hidden layers 220, and an output layer 230. The hidden layers 220 shown in FIG. 2 comprise 3 layers; of course, the hidden layers 220 may also include more or fewer layers. The neurons of the input layer 210 are called input neurons. As the first layer in the neural network, the input layer accepts input signals (values) and passes them to the next layer. It does not perform any operation on the input signals (values) and has no associated weights or biases. The neural network shown in FIG. 2 can receive 4 input signals (values).
The hidden layers 220 contain neurons (nodes) that apply different transformations to the input data. A hidden layer is a vertically arranged collection of neurons (a representation). The neural network shown in FIG. 2 has 3 hidden layers: the first hidden layer has 4 neurons (nodes), the second has 6 neurons, and the third has 3 neurons. Finally, the hidden layers pass the values to the output layer. In the neural network 200 shown in FIG. 2, the neurons of the 3 hidden layers are fully connected: each neuron in each of the 3 hidden layers is connected to every neuron in the next layer. It should be noted that the hidden layers of not every neural network are fully connected.
The neurons of the output layer 230 are called output neurons. The output layer receives the output from the last hidden layer. Through the output layer 230, the desired value and the desired range can be determined. In the neural network shown in FIG. 2, the output layer has 3 neurons, that is, 3 output signals (values).
In practical applications, the role of a neural network is to be trained in advance on a large amount of sample data (including inputs and outputs); after training, the neural network is used to obtain an accurate output for a real-world input.
Before discussing the training of neural networks, the loss function needs to be defined. The loss function is a function indicating how well a neural network performs on a particular task. The most direct way to obtain it is, during training, to pass each sample datum along the neural network to get a number, take the difference between this number and the actual number wanted, and then square it; what is computed is the distance between the predicted value and the true value, and training the neural network aims to reduce this distance, that is, the value of the loss function.
At the beginning of neural network training, the weights are initialized randomly. Obviously, an initialized neural network does not provide a good result. In the training process, assuming one starts with a very poor neural network, training can yield a network with high accuracy. At the same time, it is hoped that at the end of training the value of the loss function becomes very small.
The training process of a neural network is divided into two phases. The first phase is the forward processing of the signal, from the input layer 210 through the hidden layers 220 and finally to the output layer 230. The second phase is the backward propagation of gradients, from the output layer 230 to the hidden layers 220 and finally to the input layer 210, with the weights and biases of each layer of the neural network adjusted in turn according to the gradients.
In the forward processing, input values are fed into the input layer 210 of the neural network, and the output, called the predicted value, is obtained from the output layer 230. When the input values are provided to the input layer 210, no operation is performed on them. In the hidden layers, the second hidden layer takes the predicted intermediate result values from the first hidden layer, performs computation and activation operations, and then passes the resulting predicted intermediate result values to the next hidden layer. The same operations are performed in the subsequent layers, and finally the output value is obtained in the output layer 230 of the neural network.
After forward processing, an output value called the predicted value is obtained. To compute the error, the loss function is used to compare the predicted value with the actual output value, obtaining the corresponding error value. Back propagation uses the chain rule of differential calculus: first, the derivatives of the error value with respect to the weights of the last layer of the neural network are computed. These derivatives are called gradients, and they are then used to compute the gradients of the second-to-last layer of the neural network. This process is repeated until the gradient of every weight in the neural network is obtained. Finally, the corresponding gradient is subtracted from each weight so as to update the weights once, with the goal of reducing the error value.
In addition, for a neural network, fine-tuning means loading a trained neural network. The fine-tuning process, like the training process, is divided into two phases: the first phase is the forward processing of the signal, and the second is the backward propagation of gradients, which updates the weights of the trained neural network. The difference between training and fine-tuning is that training processes a randomly initialized neural network and trains it from scratch, whereas fine-tuning does not start from scratch.
During the training or fine-tuning of a neural network, each time the neural network goes through one forward processing of the signal and one corresponding backward propagation of the error, the weights in the neural network are updated once using the gradients; this is called one iteration. To obtain a neural network whose accuracy meets expectations, a very large sample data set is needed during training. In this case, it is impossible to feed the entire sample data set into the computer at once. Therefore, to solve this problem, the sample data set needs to be divided into multiple blocks, each block is passed to the computer, and after each block of the data set is processed forward, the weights of the neural network are correspondingly updated once. When a complete sample data set has passed through the neural network with one forward processing and has returned one corresponding weight update, this process is called an epoch. In practice, passing the complete data set through the neural network once is not enough; the complete data set needs to be passed through the same neural network multiple times, that is, multiple epochs are needed, to finally obtain a neural network whose accuracy meets expectations.
在神经网络进行训练或微调过程中,通常希望速度越快越好,准确率越高越好。神经网络的数据通过高精度数据格式表示,比如浮点数,所以在训练或微调过程中,涉及的数据均为高精度数据格式,然后再将训练好的神经网络进行量化。以量化对象是整个神经网络的权值、且量化后的权值均为8位定点数为例,由于一个神经网络中常常有数百万连接,几乎所有空间都被神经元连接的权值所占据。况且这些权值都是不同的浮点数。每层权值都趋向于某个确定区间的正态分布,例如(-3.0,3.0)。将神经网络中每层的权值对应的最大值和最小值保存下来,将每个浮点数值采用8位定点数表示。其中,在最大值、最小值范围内空间线性划分256个量化间隔,每个量化间隔用一个8位定点数表示。例如:在(-3.0,3.0)区间内,字节0表示-3.0,字节255表示3.0。以此类推,字节128表示0。
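上段描述的"在最大值、最小值范围内线性划分256个量化间隔"的权值量化方法，可以用如下示意代码表达（quantize_minmax为本文为演示而取的名称，实现细节为简化假设）：

```python
def quantize_minmax(values, v_min, v_max):
    """将[v_min, v_max]区间线性划分为256个量化级，
    把浮点数映射为8位定点数（0~255）。"""
    span = v_max - v_min
    out = []
    for v in values:
        q = round((v - v_min) * 255 / span)  # 映射到最近的量化级
        out.append(min(255, max(0, q)))      # 限制在0~255范围内
    return out

# 在(-3.0, 3.0)区间内：-3.0映射为字节0，3.0映射为字节255，0大约映射为字节128
print(quantize_minmax([-3.0, 0.0, 3.0], -3.0, 3.0))  # [0, 128, 255]
```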
对于高精度数据格式表示的数据，以浮点数为例，根据计算机体系结构中浮点数与定点数的运算表示法则可知，对于同样位长的定点运算和浮点运算，浮点运算的计算模式更为复杂，需要更多的逻辑器件来构成浮点运算器。因此从体积上来说，浮点运算器的体积比定点运算器的体积要大。并且，浮点运算器需要消耗更多的资源去处理，使得定点运算和浮点运算二者之间的功耗差距通常是数量级的。简言之，浮点运算器占用的芯片面积和功耗相比于定点运算器都要大很多倍。
图3示出了根据本公开的实施例的用于量化数据的过程300的示意图。参考图3，输入数据310为未量化的浮点数，例如32位的浮点数，如果直接将输入数据310输入到神经网络模型340进行处理，则会造成耗费较多的计算资源并且处理速度较慢。因此，在框320处，可以对输入数据进行量化，以获得量化后的数据330（例如8位整数）。如果将量化后的数据330输入到神经网络模型340中进行处理，由于8位整数计算较快，因而神经网络模型340将更快速地完成针对输入数据的处理，并生成对应的输出结果350。
从未量化的输入数据310到量化后的数据330的量化过程中，会在一定程度上造成一些精度损失，而精度损失的程度将直接影响输出结果350的准确性。因此，在针对输入数据310进行量化处理的过程中，需要保证量化过程的精度损失最小或者尽量小。
图4A示出了根据本公开的实施例的用于对称地量化数据的图示400。如图4A所示,是一种最简单的对称量化方法,直接选择待量化数据中的所有值中的绝对值最大值,即为|max|,然后在-|max|至|max|的范围内进行量化,从而生成量化后的数据。然而,这种方法不做任何截断,会导致量化后的数据中的精度较低。
图4B示出了根据本公开的实施例的用于基于截断阈值对称地量化数据的图示450。不同于图4A中的直接量化方法，图4B中选择一个截断阈值T，在-|T|至|T|的范围之外的数据将会被设定成-|T|或|T|。例如，在图4B的示例中，圈460中待量化的3个值由于在截断范围之外，因而会被当成值-|T|来进行量化处理，量化成数据点470。通过使用截断阈值来缩小待量化数据的取值范围，能够提高量化后的数据的精度。然而，如何获得量化精度损失最小的截断阈值，是一个亟待解决的技术问题。
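基于截断阈值T的对称量化可以用如下示意代码说明（以8位有符号整数为例；quantize_symmetric等名称及线性映射细节为本文假设的一种常见实现，并非原文给出的具体公式）：

```python
def quantize_symmetric(values, t, n_bits=8):
    """基于对称截断阈值(-t, t)的量化：
    超出范围的值先被截断为-t或t，再线性映射为n位有符号整数。"""
    q_max = 2 ** (n_bits - 1) - 1            # 8位时为127
    out = []
    for v in values:
        v = max(-t, min(t, v))               # 截断到[-t, t]
        out.append(round(v * q_max / t))     # 线性映射到[-q_max, q_max]
    return out

# 截断阈值t=1.0时，范围外的-2.0和2.5分别被当作-1.0和1.0处理
print(quantize_symmetric([-2.0, -0.5, 0.0, 0.5, 2.5], 1.0))  # [-127, -64, 0, 64, 127]
```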
图5示出了根据本公开的实施例的用于处理数据的方法500的流程图。应当理解,方法500可以由参考图1所描述的一个或多个处理器101来执行。
在框502，获取用于机器学习模型的一组待量化数据。例如，参考以上图3，可以获取待量化的输入数据310，对输入数据进行量化，从而加快神经网络模型340的处理速度。此外，也可以对神经网络模型自身的一些参数（诸如权值等）进行量化，通过对网络参数进行量化，能够减小神经网络模型的大小。在一些实施例中，待量化的数据可以为32位的浮点数。备选地，待量化的数据也可以为其他位数的浮点数，或者其他的数据类型。
在框504,通过使用多对截断阈值分别量化一组待量化数据,来确定多组量化后的数据,其中多对截断阈值中的每对截断阈值包括对称的截断正值和截断负值。在对称量化的方案中,截断阈值为对称的一对正负值,即截断正值和截断负值,这两个值的数值本身相同但是符号相反。
根据本公开的实施例,可以挑选多对截断阈值,分别量化待量化数据。在一些实施例中,可以以固定的间隔挑选一些截断阈值,例如,根据待量化数据中的绝对值最大值,每隔预定距离挑选一个截断阈值。在一些实施例中,也可以仅挑选几个特定位置处的截断阈值,例如仅挑选绝对值最大值的几个预定比例的数值。
在一些实施例中,可以根据每对截断阈值计算出相应的一个或多个量化参数,然后使用计算出的量化参数来量化待量化数据。备选地,也可以直接根据截断阈值来通过各种公式或模型量化待量化数据,而无需单独计算各个量化参数的值。
在框506，基于多组量化后的数据中的每组量化后的数据的绝对值的均值与一组待量化数据的绝对值的均值之间的差异，从多对截断阈值中选择一对截断阈值，以用于量化一组待量化数据。本申请的发明人通过研究和大量实验发现，量化前后的数据的绝对值的均值差距能够反映出量化前后的精度损失，其中绝对值均值差异越小，量化操作的精度损失越小。因此，本公开的实施例使用量化前后的数据的绝对值的均值的差异作为挑选最佳截断阈值的指标，能够实现比传统的KL方法更小的精度损失。
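上述作为挑选截断阈值指标的"量化前后绝对值均值之差"可以写成如下示意函数（这里假设量化后的数据已被反量化回原数值范围以便比较；函数名为本文演示而设）：

```python
def mean_abs(values):
    """一组数据的绝对值的均值：所有数据的绝对值之和除以元素个数。"""
    return sum(abs(v) for v in values) / len(values)

def mean_abs_diff(original, dequantized):
    """量化前后数据绝对值均值之间的相对差异，
    该值越小，说明量化造成的精度损失越小。"""
    data_mean = mean_abs(original)
    return abs(mean_abs(dequantized) - data_mean) / data_mean

# 量化前后数据完全一致时，差异为0
print(mean_abs_diff([1.0, -2.0], [1.0, -2.0]))  # 0.0
```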
在一些实施例中，量化后的数据的绝对值的均值与待量化数据的绝对值的均值之间的差异可以为两个绝对值均值之间的差值。备选地，量化后的数据的绝对值的均值与待量化数据的绝对值的均值之间的差异也可以为：两个绝对值均值之间的差值除以待量化数据的绝对值的均值，然后再取绝对值。
在一些实施例中,在选择最佳的一对截断阈值之后,可以使用所选择的一对截断阈值来量化一组待量化数据以获得量化后的数据,包括:将一组待量化数据中大于截断正值的数值截断为截断正值,并且将一组待量化数据中小于截断负值的数值截断为截断负值;然后将所获得的量化后的数据输入到神经网络模型以用于处理。
图6示出了根据本公开的实施例的用于搜索用于对称量化的截断阈值的方法600的流程图,方法600基于待量化数据确定最佳的一对截断阈值以用于数据的量化。
在框602，确定待量化数据的绝对值的均值以及待量化数据中的绝对值最大值，其中绝对值的均值为待量化数据中的所有数据的绝对值之和除以元素个数。此外，还初始化最小均值差异（例如初始设置为浮点数中的最大值），并且初始化循环搜索的搜索次序i（例如初始化为0）。在一些实施例中，搜索次序i也可以被初始化为搜索总次数的一半，也即从中间开始搜索，这样能够提高搜索效率。根据本公开的实施例，可以设置一轮或者多轮阈值搜索过程，每轮阈值搜索可以具有相同或者不同的搜索总次数。在一些实施例中，每轮的搜索总次数可以设置在10至32之间。一般来说，搜索总次数越多，所花费的搜索时间越长，所搜索到的截断阈值也越精确。然而，当搜索总次数达到某个值后，搜索效果可能不再会有本质提升。
接下来,开始第一轮粗粒度的截断阈值搜索过程。例如,图7A示出了根据本公开的实施例的用于粗粒度搜索用于对称量化的截断阈值的示例图示700。如图7A所示,可以在待量化数据中确定10个候选截断阈值(通过图7A中的虚线标识),依次使用这10对截断阈值(图7A中仅示出截断正值,未示出对应的截断负值)执行量化过程,并根据量化前后的数据的绝对值均值的差异来确定最佳的一对截断阈值。
在框604，判断搜索次序i是否小于搜索总次数，即在依次选择各对截断阈值进行量化时，判断是否已经完成所有对截断阈值的计算。如果搜索次序i小于搜索总次数，则在框606基于当前的搜索次序i，确定一对截断阈值，这对截断阈值分别为-绝对值最大值/搜索总次数*(i+1)、绝对值最大值/搜索总次数*(i+1)。在框608，使用这对截断阈值来量化待量化数据，以得到相应的量化后数据Quant_data_i，然后在框610，计算量化后的数据的绝对值的均值Quant_data_mean_i与待量化数据的绝对值均值Data_mean之间的差异Distance_i=abs(Quant_data_mean_i-Data_mean)/Data_mean。
在框612，判断所计算的差异Distance_i是否小于当前最小差异。如果是，则在框614，将所计算的差异Distance_i设置为当前最小差异，并记录差异最小时的截断阈值，然后在框616处递增搜索次序i（即i++）。如果在框612判断为否，则直接在框616处递增搜索次序i，即继续确定下一对截断阈值时的差异。接下来，继续循环步骤604至616，直到搜索次序i的值达到搜索总次数，则在框618，退出第一轮截断阈值的搜索过程。如图7A所示，经过第一轮的搜索，确定虚线770处的截断阈值所对应的差异最小。由此可见，截断阈值搜索的过程即为：使用多对截断阈值对待量化数据进行量化，确定多组量化后的数据中与待量化数据在绝对值的均值方面差异最小的一组量化后的数据，然后从多对截断阈值中选择与这组量化后的数据相对应的一对截断阈值。
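方法600的第一轮粗粒度搜索可以概括为如下示意代码（其中的量化/反量化方式沿用前文的对称截断量化思想，属于本文为演示而作的简化假设）：

```python
def search_truncation(data, total_steps=10, n_bits=8):
    """粗粒度搜索截断阈值：依次尝试 T_i = absmax / total_steps * (i+1)，
    选出量化前后绝对值均值相对差异最小的截断正值。"""
    q_max = 2 ** (n_bits - 1) - 1
    absmax = max(abs(v) for v in data)
    data_mean = sum(abs(v) for v in data) / len(data)
    best_t, best_diff = None, float("inf")   # 初始化最小均值差异为最大值
    for i in range(total_steps):
        t = absmax / total_steps * (i + 1)
        scale = t / q_max
        # 量化后再反量化，使数据回到原数值范围，便于与原数据比较
        dequant = [round(max(-t, min(t, v)) / scale) * scale for v in data]
        quant_mean = sum(abs(v) for v in dequant) / len(dequant)
        diff = abs(quant_mean - data_mean) / data_mean
        if diff < best_diff:
            best_diff, best_t = diff, t      # 记录差异最小时的截断阈值
    return best_t, best_diff

best_t, best_diff = search_truncation([0.1, -0.3, 0.5, -2.7, 3.0])
```

返回的best_t即截断正值，对应的截断负值为-best_t。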
可选地，可以执行第二轮细粒度的截断阈值搜索过程。第二轮搜索过程也可以参考方法600，只是第二轮搜索是在第一轮最佳截断阈值770周围的一定范围内（例如，所选择的截断阈值770的前一个截断阈值与后一个截断阈值之间）进行，是对第一轮搜索结果的进一步细化。例如，第二轮搜索时，每对截断阈值之间的间隔可以为(绝对值最大值*2)/(第一轮搜索总次数*第二轮搜索总次数)。图7B示出了根据本公开的实施例的用于细粒度搜索用于对称量化的截断阈值的图示750。参考图7B，经过第二轮搜索，确定细粒度的最佳截断阈值为772和778。通过两轮搜索的方式，能够获得更加准确的截断阈值，进一步减小量化所导致的精度损失。
图8示出了根据本公开的实施例的用于迭代地搜索最佳截断阈值的方法800的流程图。在框802，确定三对截断阈值。例如，可以确定待量化数据F_x中的所有数据的绝对值最大值absmax，三对截断阈值可以分别为(-absmax/2, absmax/2)、(-absmax*3/4, absmax*3/4)以及(-absmax, absmax)。在框804，使用这三对截断阈值分别量化待量化数据，得到三组量化后的数据（原文以公式图像形式给出，此处分别记为Quant_data_1、Quant_data_2、Quant_data_3），然后分别计算待量化数据F_x的绝对值的均值Data_mean以及各组量化后的数据的绝对值的均值Quant_data_mean_i，并根据公式diff_i=abs(Quant_data_mean_i-Data_mean)/Data_mean计算三个差异，从中选择最小差异diff_min。在框806，判断最小差异diff_min是否小于提前设置的预定阈值。如果否，则在框808，基于所选择的一对截断阈值（将最小差异diff_min对应的截断正值设置为新的绝对值最大值），重新确定三对截断阈值，并重复上述过程，直到最小差异diff_min小于预定阈值，则在框810，退出截断阈值的迭代过程。在一些实施例中，除了最小差异diff_min小于预定阈值这一迭代停止条件之外，还可以设置其他的迭代停止条件，例如最大迭代次数、达到预定最小间隔，等等。另外，虽然图8的方法800中示出了迭代地选择最佳的一对截断阈值，也可以不执行迭代，而只执行一次，然后直接将最小差异diff_min对应的一对截断阈值作为最终的截断阈值。
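方法800的迭代搜索过程可以概括为如下示意代码（停止条件与量化/反量化细节为本文假设的简化写法）：

```python
def iterative_search(data, threshold=0.01, max_iters=20, n_bits=8):
    """迭代搜索：每轮在 absmax/2、absmax*3/4、absmax 三个候选截断正值中
    选差异最小者，并将其作为新的absmax继续细化，
    直到最小差异小于预定阈值或达到最大迭代次数。"""
    q_max = 2 ** (n_bits - 1) - 1
    data_mean = sum(abs(v) for v in data) / len(data)
    absmax = max(abs(v) for v in data)

    def diff_for(t):
        scale = t / q_max
        dequant = [round(max(-t, min(t, v)) / scale) * scale for v in data]
        quant_mean = sum(abs(v) for v in dequant) / len(dequant)
        return abs(quant_mean - data_mean) / data_mean

    best_t, best_diff = absmax, diff_for(absmax)
    for _ in range(max_iters):
        candidates = [absmax / 2, absmax * 3 / 4, absmax]
        diffs = [diff_for(t) for t in candidates]
        best_diff = min(diffs)
        best_t = candidates[diffs.index(best_diff)]
        if best_diff < threshold:
            break                 # 满足停止条件，退出迭代
        absmax = best_t           # 以所选截断正值为新的绝对值最大值
    return best_t, best_diff

best_t, best_diff = iterative_search([0.5, -1.0, 2.0, -0.2])
```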
在一些实施例中，可以通过式(1)至式(3)确定在使用各对截断阈值量化数据时的量化参数（式(1)至式(3)原文为公式图像）。其中p为待量化数据中的绝对值最大值，n表示量化后的二进制位数，S和f表示量化参数，ceil表示向上取整。
根据本公开的实施例，通过将p分别选为absmax/2、absmax*3/4和absmax，可以求得三组量化参数S1、f1、S2、f2、S3以及f3，并由此得到对应的三组量化后的数据。相应地，在选出一对截断阈值之后，直接取这对截断阈值对应的S和f作为待量化数据的量化参数。
需要说明的是，对于前述的各方法实施例，为了简单描述，故将其都表述为一系列的动作组合，但是本领域技术人员应该知悉，本公开并不受所描述的动作顺序的限制，因为依据本公开，某些步骤可以采用其他顺序或者同时进行。其次，本领域技术人员也应该知悉，说明书中所描述的实施例均属于可选实施例，所涉及的动作和模块并不一定是本公开所必须的。
进一步需要说明的是,虽然流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,流程图中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
图9示出了根据本公开的实施例的用于处理数据的装置900的框图。如图9所示,装置900包括待量化数据获取单元910、量化后数据确定单元920以及截断阈值选择单元930。待量化数据获取单元910用于获取用于机器学习模型的一组待量化数据。量化后数据确定单元920用于通过使用多对截断阈值分别量化一组待量化数据,来确定多组量化后的数据,其中多对截断阈值中的每对截断阈值包括对称的截断正值和截断负值。截断阈值选择单元930用于基于多组量化后的数据中的每组量化后的数据的绝对值的均值与一组待量化数据的绝对值的均值之间的差异,从多对截断阈值中选择一对截断阈值,以用于量化一组待量化数据。
此外,装置900中的待量化数据获取单元910、量化后数据确定单元920以及截断阈值选择单元930还可以被配置为执行根据本公开的各个实施例的步骤和/或动作。
应该理解，上述的装置实施例仅是示意性的，本公开的装置还可通过其它的方式实现。例如，上述实施例中所述单元/模块的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式。例如，多个单元、模块或组件可以结合，或者可以集成到另一个系统，或一些特征可以忽略或不执行。
另外,若无特别说明,在本公开各个实施例中的各功能单元/模块可以集成在一个单元/模块中,也可以是各个单元/模块单独物理存在,也可以两个或两个以上单元/模块集成在一起。上述集成的单元/模块既可以采用硬件的形式实现,也可以采用软件程序模块的形式实现。
所述集成的单元/模块如果以硬件的形式实现时,该硬件可以是数字电路,模拟电路等等。硬件结构的物理实现包括但不局限于晶体管,忆阻器等等。若无特别说明,所述人工智能处理器可以是任何适当的硬件处理器,比如CPU、GPU、FPGA、DSP和ASIC等等。若无特别说明,所述存储单元可以是任何适当的磁存储介质或者磁光存储介质,比如,阻变式存储器RRAM(Resistive Random Access Memory)、动态随机存取存储器DRAM(Dynamic Random Access Memory)、静态随机存取存储器SRAM(Static Random-Access Memory)、增强动态随机存取存储器EDRAM(Enhanced Dynamic Random Access Memory)、高带宽内存HBM(High-Bandwidth Memory)、混合存储立方HMC(Hybrid Memory Cube)等等。
所述集成的单元/模块如果以软件程序模块的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储器中。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储器中,包括若干指令用以使得一台计算机设备(可为个人计算机、服务器或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储器包括:U盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
在一个实施例中，公开了一种计算机可读存储介质，其上存储有计算机程序，程序被执行时实现根据本公开的各个实施例的方法。
在一个实施例中，还公开了一种人工智能芯片，其包括了上述用于处理数据的装置。
在一个实施例中,还公开了一种板卡,其包括存储器件、接口装置和控制器件以及上述人工智能芯片;其中,所述人工智能芯片与所述存储器件、所述控制器件以及所述接口装置分别连接;所述存储器件,用于存储数据;所述接口装置,用于实现所述人工智能芯片与外部设备之间的数据传输;所述控制器件,用于对所述人工智能芯片的状态进行监控。
图10示出根据本公开实施例的板卡1000的结构框图,参考图10,上述板卡1000除了包括上述芯片1030-1和1030-2(统称为芯片1030)以外,还可以包括其他的配套部件,该配套部件包括但不限于:存储器件1010、接口装置1040和控制器件1020。接口装置1040可以与外部设备1060相连接。存储器件1010与人工智能芯片1030通过总线1050连接,用于存储数据。存储器件1010可以包括多组存储单元1010-1和1010-2。每一组所述存储单元与所述人工智能芯片通过总线1050连接。可以理解,每一组所述存储单元可以是DDR SDRAM(英文:Double Data Rate SDRAM,双倍速率同步动态随机存储器)。
DDR不需要提高时钟频率就能加倍提高SDRAM的速度。DDR允许在时钟脉冲的上升沿和下降沿读出数据。DDR的速度是标准SDRAM的两倍。在一个实施例中，所述存储器件可以包括4组所述存储单元。每一组所述存储单元可以包括多个DDR4颗粒（芯片）。在一个实施例中，所述人工智能芯片内部可以包括4个72位DDR4控制器，上述72位DDR4控制器中64bit用于传输数据，8bit用于ECC校验。可以理解，当每一组所述存储单元中采用DDR4-3200颗粒时，数据传输的理论带宽可达到25600MB/s。
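上段给出的25600MB/s理论带宽可按"每秒传输次数 × 每次传输字节数"直接验算（DDR4-3200即每秒3200M次传输，64位数据位宽折合8字节）：

```python
# DDR4-3200理论带宽验算：3200 MT/s × (64 bit / 8) 字节 = 25600 MB/s
transfers_per_second_m = 3200        # 每秒传输次数（单位：百万次，MT/s）
data_width_bytes = 64 // 8           # 64位数据位宽折合8字节
bandwidth_mb_s = transfers_per_second_m * data_width_bytes
print(bandwidth_mb_s)  # 25600
```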
在一个实施例中,每一组所述存储单元包括多个并联设置的双倍速率同步动态随机存储器。DDR在一个时钟周期内可以传输两次数据。在所述芯片中设置控制DDR的控制器,用于对每个所述存储单元的数据传输与数据存储的控制。
所述接口装置与所述人工智能芯片电连接。所述接口装置用于实现所述人工智能芯片与外部设备（例如服务器或计算机）之间的数据传输。例如在一个实施例中，所述接口装置可以为标准PCIE接口。比如，待处理的数据由服务器通过标准PCIE接口传递至所述芯片，实现数据转移。优选的，当采用PCIE 3.0 x16接口传输时，理论带宽可达到16000MB/s。在另一个实施例中，所述接口装置还可以是其他的接口，本公开并不限制上述其他的接口的具体表现形式，所述接口单元能够实现转接功能即可。另外，所述人工智能芯片的计算结果仍由所述接口装置传送回外部设备（例如服务器）。
所述控制器件与所述人工智能芯片电连接。所述控制器件用于对所述人工智能芯片的状态进行监控。具体的，所述人工智能芯片与所述控制器件可以通过SPI接口电连接。所述控制器件可以包括单片机（Micro Controller Unit，MCU）。所述人工智能芯片可以包括多个处理芯片、多个处理核或多个处理电路，可以带动多个负载，因此可以处于多负载和轻负载等不同的工作状态。通过所述控制器件可以实现对所述人工智能芯片中多个处理芯片、多个处理核和/或多个处理电路的工作状态的调控。
在一种可能的实现方式中,公开了一种电子设备,其包括了上述人工智能芯片。电子设备包括数据处理装置、机器人、电脑、打印机、扫描仪、平板电脑、智能终端、手机、行车记录仪、导航仪、传感器、摄像头、服务器、云端服务器、相机、摄像机、投影仪、手表、耳机、移动存储、可穿戴设备、交通工具、家用电器、和/或医疗设备。
所述交通工具包括飞机、轮船和/或车辆;所述家用电器包括电视、空调、微波炉、冰箱、电饭煲、加湿器、洗衣机、电灯、燃气灶、油烟机;所述医疗设备包括核磁共振仪、B超仪和/或心电图仪。
在上述实施例中，对各个实施例的描述都各有侧重，某个实施例中没有详述的部分，可以参见其他实施例的相关描述。上述实施例的各技术特征可以进行任意的组合，为使描述简洁，未对上述实施例中的各个技术特征所有可能的组合都进行描述，然而，只要这些技术特征的组合不存在矛盾，都应当认为是本说明书记载的范围。
依据以下条款可更好地理解前述内容:
A1.一种用于处理数据的方法,其特征在于,包括:
获取用于机器学习模型的一组待量化数据;
通过使用多对截断阈值分别量化所述一组待量化数据,来确定多组量化后的数据,所述多对截断阈值中的每对截断阈值包括对称的截断正值和截断负值;以及
基于所述多组量化后的数据中的每组量化后的数据的绝对值的均值与所述一组待量化数据的绝对值的均值之间的差异,从所述多对截断阈值中选择一对截断阈值,以用于量化所述一组待量化数据。
A2.根据条款A1所述的方法,其特征在于,确定多组量化后的数据包括:
确定所述一组待量化数据中的所有数据的绝对值最大值;以及
基于所述绝对值最大值,确定所述多对截断阈值。
A3.根据条款A2所述的方法,其特征在于,确定多组量化后的数据还包括:
基于所述绝对值最大值、预定的搜索总次数以及当前搜索次序,确定第一截断正值;
通过使用第一对截断阈值量化所述一组待量化数据,来确定第一组量化后的数据,所述第一对截断阈值包括所述第一截断正值以及与所述第一截断正值相反的第一截断负值;以及
确定所述第一组量化后的数据的绝对值的均值与所述一组待量化数据的绝对值的均值之间的第一差异。
A4.根据条款A3所述的方法,其特征在于,确定多组量化后的数据还包括:
递增所述当前搜索次序;
基于所述绝对值最大值、所述预定的搜索总次数以及所述当前搜索次序,确定第二截断正值;
通过使用第二对截断阈值量化所述一组待量化数据，来确定第二组量化后的数据，所述第二对截断阈值包括所述第二截断正值以及与所述第二截断正值相反的第二截断负值；以及
确定所述第二组量化后的数据的绝对值的均值与所述一组待量化数据的绝对值的均值之间的第二差异。
A5.根据条款A1-A4中任一项所述的方法,其特征在于,从所述多对截断阈值中选择一对截断阈值包括:
确定所述多组量化后的数据中与所述一组待量化数据在绝对值的均值方面差异最小的一组量化后的数据;以及
从所述多对截断阈值中选择与所述一组量化后的数据相对应的一对截断阈值。
A6.根据条款A5所述的方法,其特征在于,还包括:
确定与所选择的所述一对截断阈值相关联的截断搜索范围;
确定处于所述截断搜索范围内的新的多对截断阈值;
通过使用所述新的多对截断阈值分别量化所述一组待量化数据,来确定新的多组量化后的数据;以及
基于所述新的多组量化后的数据中的每组量化后的数据的绝对值的均值与所述一组待量化数据的绝对值的均值之间的差异,从所述新的多对截断阈值中选择新的一对截断阈值。
A7.根据条款A1所述的方法,其特征在于,确定多组量化后的数据包括:
确定所述一组待量化数据中的所有数据的绝对值最大值;
基于所述绝对值最大值,确定三对截断阈值,所述三对截断阈值中的第一对截断阈值包括所述绝对值最大值的一半及其相反数,所述三对截断阈值中的第二对截断阈值包括所述绝对值最大值的四分之三及其相反数,并且所述三对截断阈值中的第三对截断阈值包括所述绝对值最大值及其相反数;以及
通过使用三对截断阈值分别量化所述一组待量化数据,来确定三组量化后的数据。
A8.根据条款A7所述的方法，其特征在于，从所述多对截断阈值中选择一对截断阈值包括：
迭代执行以下动作,直到满足停止条件:
从所述三对截断阈值中选择一对截断阈值;
确定与所选择的一对截断阈值相对应的差异是否小于预定阈值;
响应于所述差异小于预定阈值,停止迭代执行动作;以及
响应于所述差异大于预定阈值,基于所选择的一对截断阈值,重新确定三对截断阈值。
A9.根据条款A1-A8中任一项所述的方法,其特征在于,所述一组待量化数据是神经网络模型中的一组浮点数,所述方法还包括:
使用所选择的一对截断阈值来量化所述一组待量化数据以获得量化后的数据,其中量化所述一组待量化数据包括:将所述一组待量化数据中大于截断正值的数值设为所述截断正值,并且将所述一组待量化数据中小于截断负值的数值设为所述截断负值;以及
将所获得的量化后的数据输入到所述神经网络模型以用于处理。
A10.一种用于处理数据的装置,其特征在于,包括:
待量化数据获取单元,用于获取用于机器学习模型的一组待量化数据;
量化后数据确定单元,用于通过使用多对截断阈值分别量化所述一组待量化数据,来确定多组量化后的数据,所述多对截断阈值中的每对截断阈值包括对称的截断正值和截断负值;以及
截断阈值选择单元,用于基于所述多组量化后的数据中的每组量化后的数据的绝对值的均值与所述一组待量化数据的绝对值的均值之间的差异,从所述多对截断阈值中选择一对截断阈值,以用于量化所述一组待量化数据。
A11.根据条款A10所述的装置,其特征在于,所述量化后数据确定单元包括:
绝对值最大值确定单元，用于确定所述一组待量化数据中的所有数据的绝对值最大值；以及
多对截断阈值确定单元,用于基于所述绝对值最大值来确定所述多对截断阈值。
A12.根据条款A11所述的装置,其特征在于,所述量化后数据确定单元还包括:
第一截断正值确定单元,用于基于所述绝对值最大值、预定的搜索总次数以及当前搜索次序,确定第一截断正值;
第一组量化后数据确定单元,用于通过使用第一对截断阈值量化所述一组待量化数据,来确定第一组量化后的数据,所述第一对截断阈值包括所述第一截断正值以及与所述第一截断正值相反的第一截断负值;以及
第一差异确定单元,用于确定所述第一组量化后的数据的绝对值的均值与所述一组待量化数据的绝对值的均值之间的第一差异。
A13.根据条款A12所述的装置,其特征在于,所述量化后数据确定单元还包括:
递增单元,用于递增所述当前搜索次序;
第二截断正值确定单元,用于基于所述绝对值最大值、所述预定的搜索总次数以及所述当前搜索次序,确定第二截断正值;
第二组量化后数据确定单元,用于通过使用第二对截断阈值量化所述一组待量化数据,来确定第二组量化后的数据,所述第二对截断阈值包括所述第二截断正值以及与所述第二截断正值相反的第二截断负值;以及
第二差异确定单元,用于确定所述第二组量化后的数据的绝对值的均值与所述一组待量化数据的绝对值的均值之间的第二差异。
A14.根据条款A10-A13中任一项所述的装置,其特征在于,所述截断阈值选择单元包括:
最小差异确定单元,用于确定所述多组量化后的数据中与所述一组待量化数据在绝对值的均值方面差异最小的一组量化后的数据;以及
第二截断阈值选择单元,用于从所述多对截断阈值中选择与所述一组量化后的数据相对应的一对截断阈值。
A15.根据条款A14所述的装置,其特征在于,还包括:
截断搜索范围确定单元,用于确定与所选择的所述一对截断阈值相关联的截断搜索范围;
新的多对截断阈值确定单元,用于确定处于所述截断搜索范围内的新的多对截断阈值;
第二量化后数据确定单元,用于通过使用所述新的多对截断阈值分别量化所述一组待量化数据,来确定新的多组量化后的数据;以及
第三截断阈值选择单元,用于基于所述新的多组量化后的数据中的每组量化后的数据的绝对值的均值与所述一组待量化数据的绝对值的均值之间的差异,从所述新的多对截断阈值中选择新的一对截断阈值。
A16.根据条款A10所述的装置,其特征在于,所述量化后数据确定单元包括:
绝对值最大值确定单元,用于确定所述一组待量化数据中的所有数据的绝对值最大值;
三对截断阈值确定单元,用于基于所述绝对值最大值,确定三对截断阈值,所述三对截断阈值中的第一对截断阈值包括所述绝对值最大值的一半及其相反数,所述三对截断阈值中的第二对截断阈值包括所述绝对值最大值的四分之三及其相反数,并且所述三对截断阈值中的第三对截断阈值包括所述绝对值最大值及其相反数;以及
三组量化后数据确定单元,用于通过使用三对截断阈值分别量化所述一组待量化数据,来确定三组量化后的数据。
A17.根据条款A16所述的装置,其特征在于,所述截断阈值选择单元包括:
迭代单元,用于迭代执行以下动作,直到满足停止条件:
从所述三对截断阈值中选择一对截断阈值;
确定与所选择的一对截断阈值相对应的差异是否小于预定阈值;
响应于所述差异小于预定阈值,停止迭代执行动作;以及
响应于所述差异大于预定阈值,基于所选择的一对截断阈值,重新确定三对截断阈值。
A18.根据条款A10-A17中任一项所述的装置,其特征在于,所述一组待量化数据是神经网络模型中的一组浮点数,所述装置还包括:
数据量化单元,用于使用所选择的一对截断阈值来量化所述一组待量化数据以获得量化后的数据,其中量化所述一组待量化数据包括:将所述一组待量化数据中大于截断正值的数值设为所述截断正值,并且将所述一组待量化数据中小于截断负值的数值设为所述截断负值;以及
数据输入单元,用于将所获得的量化后的数据输入到所述神经网络模型以用于处理。
A19.一种计算机可读存储介质,其特征在于其上存储有计算机程序,所述程序被执行时实现根据条款A1-A9中任一项所述的方法。
A20.一种人工智能芯片,其特征在于,所述芯片包括根据条款A10-A18中任一项所述的用于处理数据的装置。
A21.一种电子设备,其特征在于,所述电子设备包括根据条款A20所述的人工智能芯片。
A22.一种板卡,其特征在于,所述板卡包括:存储器件、接口装置和控制器件以及根据条款A20所述的人工智能芯片;
其中,所述人工智能芯片与所述存储器件、所述控制器件以及所述接口装置相连接;
所述存储器件,用于存储数据;
所述接口装置,用于实现所述人工智能芯片与外部设备之间的数据传输;以及
所述控制器件,用于对所述人工智能芯片的状态进行监控。
A23.根据条款A22所述的板卡,其特征在于,
所述存储器件包括:多组存储单元,每组存储单元与所述人工智能芯片通过总线连接,所述存储单元为DDR SDRAM;
所述芯片包括:DDR控制器,用于对每个所述存储单元的数据传输与数据存储的控制;
所述接口装置为标准PCIE接口。
以上对本公开实施例进行了详细介绍,本文中应用了具体个例对本公开的原理及实施方式进行了阐述,以上实施例的说明仅用于帮助理解本公开的方法及其核心思想。同时,本领域技术人员依据本公开的思想,基于本公开的具体实施方式及应用范围上做出的改变或变形之处,都属于本公开保护的范围。综上所述,本说明书内容不应理解为对本公开的限制。

Claims (23)

  1. 一种用于处理数据的方法,其特征在于,包括:
    获取用于机器学习模型的一组待量化数据;
    通过使用多对截断阈值分别量化所述一组待量化数据,来确定多组量化后的数据,所述多对截断阈值中的每对截断阈值包括对称的截断正值和截断负值;以及
    基于所述多组量化后的数据中的每组量化后的数据的绝对值的均值与所述一组待量化数据的绝对值的均值之间的差异,从所述多对截断阈值中选择一对截断阈值,以用于量化所述一组待量化数据。
  2. 根据权利要求1所述的方法,其特征在于,确定多组量化后的数据包括:
    确定所述一组待量化数据中的所有数据的绝对值最大值;以及
    基于所述绝对值最大值,确定所述多对截断阈值。
  3. 根据权利要求2所述的方法,其特征在于,确定多组量化后的数据还包括:
    基于所述绝对值最大值、预定的搜索总次数以及当前搜索次序,确定第一截断正值;
    通过使用第一对截断阈值量化所述一组待量化数据,来确定第一组量化后的数据,所述第一对截断阈值包括所述第一截断正值以及与所述第一截断正值相反的第一截断负值;以及
    确定所述第一组量化后的数据的绝对值的均值与所述一组待量化数据的绝对值的均值之间的第一差异。
  4. 根据权利要求3所述的方法,其特征在于,确定多组量化后的数据还包括:
    递增所述当前搜索次序;
    基于所述绝对值最大值、所述预定的搜索总次数以及所述当前搜索次序,确定第二截断正值;
    通过使用第二对截断阈值量化所述一组待量化数据，来确定第二组量化后的数据，所述第二对截断阈值包括所述第二截断正值以及与所述第二截断正值相反的第二截断负值；以及
    确定所述第二组量化后的数据的绝对值的均值与所述一组待量化数据的绝对值的均值之间的第二差异。
  5. 根据权利要求1-4中任一项所述的方法,其特征在于,从所述多对截断阈值中选择一对截断阈值包括:
    确定所述多组量化后的数据中与所述一组待量化数据在绝对值的均值方面差异最小的一组量化后的数据;以及
    从所述多对截断阈值中选择与所述一组量化后的数据相对应的一对截断阈值。
  6. 根据权利要求5所述的方法,其特征在于,还包括:
    确定与所选择的所述一对截断阈值相关联的截断搜索范围;
    确定处于所述截断搜索范围内的新的多对截断阈值;
    通过使用所述新的多对截断阈值分别量化所述一组待量化数据,来确定新的多组量化后的数据;以及
    基于所述新的多组量化后的数据中的每组量化后的数据的绝对值的均值与所述一组待量化数据的绝对值的均值之间的差异,从所述新的多对截断阈值中选择新的一对截断阈值。
  7. 根据权利要求1所述的方法,其特征在于,通过使用多对截断阈值分别量化所述一组待量化数据,来确定多组量化后的数据包括:
    确定所述一组待量化数据中的所有数据的绝对值最大值;
    基于所述绝对值最大值,确定三对截断阈值,所述三对截断阈值中的第一对截断阈值包括所述绝对值最大值的一半及其相反数,所述三对截断阈值中的第二对截断阈值包括所述绝对值最大值的四分之三及其相反数,并且所述三对截断阈值中的第三对截断阈值包括所述绝对值最大值及其相反数;以及
    通过使用三对截断阈值分别量化所述一组待量化数据,来确定三组量化后的数据。
  8. 根据权利要求7所述的方法，其特征在于，从所述多对截断阈值中选择一对截断阈值包括：
    迭代执行以下动作,直到满足停止条件:
    从所述三对截断阈值中选择一对截断阈值;
    确定与所选择的一对截断阈值相对应的差异是否小于预定阈值;
    响应于所述差异小于预定阈值,停止迭代执行动作;以及
    响应于所述差异大于预定阈值,基于所选择的一对截断阈值,重新确定三对截断阈值。
  9. 根据权利要求1-8中任一项所述的方法,其特征在于,所述一组待量化数据是神经网络模型中的一组浮点数,所述方法还包括:
    使用所选择的一对截断阈值来量化所述一组待量化数据以获得量化后的数据,其中量化所述一组待量化数据包括:将所述一组待量化数据中大于截断正值的数值设为所述截断正值,并且将所述一组待量化数据中小于截断负值的数值设为所述截断负值;以及
    将所获得的量化后的数据输入到所述神经网络模型以用于处理。
  10. 一种用于处理数据的装置,其特征在于,包括:
    待量化数据获取单元,用于获取用于机器学习模型的一组待量化数据;
    量化后数据确定单元,用于通过使用多对截断阈值分别量化所述一组待量化数据,来确定多组量化后的数据,所述多对截断阈值中的每对截断阈值包括对称的截断正值和截断负值;以及
    截断阈值选择单元,用于基于所述多组量化后的数据中的每组量化后的数据的绝对值的均值与所述一组待量化数据的绝对值的均值之间的差异,从所述多对截断阈值中选择一对截断阈值,以用于量化所述一组待量化数据。
  11. 根据权利要求10所述的装置,其特征在于,所述量化后数据确定单元包括:
    绝对值最大值确定单元,用于确定所述一组待量化数据中的所有数据的绝对值最大值;以及
    多对截断阈值确定单元,用于基于所述绝对值最大值来确定所述多对截断阈值。
  12. 根据权利要求11所述的装置,其特征在于,所述量化后数据确定单元还包括:
    第一截断正值确定单元,用于基于所述绝对值最大值、预定的搜索总次数以及当前搜索次序,确定第一截断正值;
    第一组量化后数据确定单元,用于通过使用第一对截断阈值量化所述一组待量化数据,来确定第一组量化后的数据,所述第一对截断阈值包括所述第一截断正值以及与所述第一截断正值相反的第一截断负值;以及
    第一差异确定单元,用于确定所述第一组量化后的数据的绝对值的均值与所述一组待量化数据的绝对值的均值之间的第一差异。
  13. 根据权利要求12所述的装置,其特征在于,所述量化后数据确定单元还包括:
    递增单元,用于递增所述当前搜索次序;
    第二截断正值确定单元,用于基于所述绝对值最大值、所述预定的搜索总次数以及所述当前搜索次序,确定第二截断正值;
    第二组量化后数据确定单元,用于通过使用第二对截断阈值量化所述一组待量化数据,来确定第二组量化后的数据,所述第二对截断阈值包括所述第二截断正值以及与所述第二截断正值相反的第二截断负值;以及
    第二差异确定单元,用于确定所述第二组量化后的数据的绝对值的均值与所述一组待量化数据的绝对值的均值之间的第二差异。
  14. 根据权利要求10-13中任一项所述的装置,其特征在于,所述截断阈值选择单元包括:
    最小差异确定单元,用于确定所述多组量化后的数据中与所述一组待量化数据在绝对值的均值方面差异最小的一组量化后的数据;以及
    第二截断阈值选择单元，用于从所述多对截断阈值中选择与所述一组量化后的数据相对应的一对截断阈值。
  15. 根据权利要求14所述的装置,其特征在于,还包括:
    截断搜索范围确定单元,用于确定与所选择的所述一对截断阈值相关联的截断搜索范围;
    新的多对截断阈值确定单元,用于确定处于所述截断搜索范围内的新的多对截断阈值;
    第二量化后数据确定单元,用于通过使用所述新的多对截断阈值分别量化所述一组待量化数据,来确定新的多组量化后的数据;以及
    第三截断阈值选择单元,用于基于所述新的多组量化后的数据中的每组量化后的数据的绝对值的均值与所述一组待量化数据的绝对值的均值之间的差异,从所述新的多对截断阈值中选择新的一对截断阈值。
  16. 根据权利要求10所述的装置,其特征在于,所述量化后数据确定单元包括:
    绝对值最大值确定单元,用于确定所述一组待量化数据中的所有数据的绝对值最大值;
    三对截断阈值确定单元,用于基于所述绝对值最大值,确定三对截断阈值,所述三对截断阈值中的第一对截断阈值包括所述绝对值最大值的一半及其相反数,所述三对截断阈值中的第二对截断阈值包括所述绝对值最大值的四分之三及其相反数,并且所述三对截断阈值中的第三对截断阈值包括所述绝对值最大值及其相反数;以及
    三组量化后数据确定单元,用于通过使用三对截断阈值分别量化所述一组待量化数据,来确定三组量化后的数据。
  17. 根据权利要求16所述的装置,其特征在于,所述截断阈值选择单元包括:
    迭代单元,用于迭代执行以下动作,直到满足停止条件:
    从所述三对截断阈值中选择一对截断阈值;
    确定与所选择的一对截断阈值相对应的差异是否小于预定阈值;
    响应于所述差异小于预定阈值,停止迭代执行动作;以及
    响应于所述差异大于预定阈值,基于所选择的一对截断阈值,重新确定三对截断阈值。
  18. 根据权利要求10-17中任一项所述的装置,其特征在于,所述一组待量化数据是神经网络模型中的一组浮点数,所述装置还包括:
    数据量化单元,用于使用所选择的一对截断阈值来量化所述一组待量化数据以获得量化后的数据,其中量化所述一组待量化数据包括:将所述一组待量化数据中大于截断正值的数值设为所述截断正值,并且将所述一组待量化数据中小于截断负值的数值设为所述截断负值;以及
    数据输入单元,用于将所获得的量化后的数据输入到所述神经网络模型以用于处理。
  19. 一种计算机可读存储介质,其特征在于其上存储有计算机程序,所述程序被执行时实现根据权利要求1-9中任一项所述的方法。
  20. 一种人工智能芯片,其特征在于,所述芯片包括根据权利要求10-18中任一项所述的用于处理数据的装置。
  21. 一种电子设备,其特征在于,所述电子设备包括根据权利要求20所述的人工智能芯片。
  22. 一种板卡,其特征在于,所述板卡包括:存储器件、接口装置和控制器件以及根据权利要求20所述的人工智能芯片;
    其中,所述人工智能芯片与所述存储器件、所述控制器件以及所述接口装置相连接;
    所述存储器件,用于存储数据;
    所述接口装置,用于实现所述人工智能芯片与外部设备之间的数据传输;以及
    所述控制器件,用于对所述人工智能芯片的状态进行监控。
  23. 根据权利要求22所述的板卡,其特征在于,
    所述存储器件包括:多组存储单元,每组存储单元与所述人工智能芯片通过总线连接,所述存储单元为DDR SDRAM;
    所述芯片包括:DDR控制器,用于对每个所述存储单元的数据传输与数据存储的控制;
    所述接口装置为标准PCIE接口。
PCT/CN2020/111489 2019-08-28 2020-08-26 用于处理数据的方法、装置以及相关产品 WO2021037082A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP20858492.0A EP4024287A4 (en) 2019-08-28 2020-08-26 DATA PROCESSING METHOD AND APPARATUS, AND RELATED PRODUCT
JP2020566955A JP7060719B2 (ja) 2019-08-28 2020-08-26 データを処理するための方法、装置、及び関連製品
US17/565,008 US20220121908A1 (en) 2019-08-28 2021-12-29 Method and apparatus for processing data, and related product

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910804625.6A CN112446472A (zh) 2019-08-28 2019-08-28 用于处理数据的方法、装置以及相关产品
CN201910804625.6 2019-08-28

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/565,008 Continuation US20220121908A1 (en) 2019-08-28 2021-12-29 Method and apparatus for processing data, and related product

Publications (1)

Publication Number Publication Date
WO2021037082A1 true WO2021037082A1 (zh) 2021-03-04

Family

ID=74684563

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/111489 WO2021037082A1 (zh) 2019-08-28 2020-08-26 用于处理数据的方法、装置以及相关产品

Country Status (5)

Country Link
US (1) US20220121908A1 (zh)
EP (1) EP4024287A4 (zh)
JP (1) JP7060719B2 (zh)
CN (1) CN112446472A (zh)
WO (1) WO2021037082A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113238987B (zh) * 2021-06-08 2022-11-22 中科寒武纪科技股份有限公司 量化数据的统计量化器、存储装置、处理装置及板卡

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100087845A (ko) * 2009-01-29 2010-08-06 엘지디스플레이 주식회사 데이터 압축과 복원 방법 및 장치와 이를 이용한 액정표시장치
CN107197297A (zh) * 2017-06-14 2017-09-22 中国科学院信息工程研究所 一种检测基于dct系数隐写的视频隐写分析方法
CN107665364A (zh) * 2016-07-28 2018-02-06 三星电子株式会社 神经网络方法和设备
CN109934761A (zh) * 2019-01-31 2019-06-25 中山大学 基于卷积神经网络的jpeg图像隐写分析方法
CN109993296A (zh) * 2019-04-01 2019-07-09 北京中科寒武纪科技有限公司 量化实现方法及相关产品

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107239826A (zh) 2017-06-06 2017-10-10 上海兆芯集成电路有限公司 在卷积神经网络中的计算方法及装置
KR102601604B1 (ko) 2017-08-04 2023-11-13 삼성전자주식회사 뉴럴 네트워크의 파라미터들을 양자화하는 방법 및 장치
KR20190034985A (ko) * 2017-09-25 2019-04-03 삼성전자주식회사 인공 신경망의 양자화 방법 및 장치
JP7060718B2 (ja) * 2019-08-26 2022-04-26 上海寒武紀信息科技有限公司 データを処理するための方法、装置、及び関連製品


Also Published As

Publication number Publication date
JP2022501674A (ja) 2022-01-06
CN112446472A (zh) 2021-03-05
US20220121908A1 (en) 2022-04-21
JP7060719B2 (ja) 2022-04-26
EP4024287A4 (en) 2023-09-13
EP4024287A1 (en) 2022-07-06


Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2020566955

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20858492

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020858492

Country of ref document: EP

Effective date: 20220328