WO2021083100A1 - Data processing method, device, computer equipment and storage medium - Google Patents

Data processing method, device, computer equipment and storage medium

Info

Publication number
WO2021083100A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
quantized
winograd
thresholds
truncation
Prior art date
Application number
PCT/CN2020/123853
Other languages
English (en)
French (fr)
Inventor
张英男
曾洪博
张尧
刘少礼
黄迪
周诗怡
张曦珊
刘畅
郭家明
高钰峰
Original Assignee
中科寒武纪科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中科寒武纪科技股份有限公司
Publication of WO2021083100A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/15Correlation function computation including computation of convolution operations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means

Definitions

  • the present disclosure relates to the field of data processing technology, and in particular to a data processing method, device, computer equipment, and storage medium.
  • neural network algorithms are a class of machine learning algorithms that have recently become very popular and have achieved very good results in various fields, such as image recognition, speech recognition, and natural language processing.
  • as neural network algorithms develop, the complexity of the algorithms is getting higher and higher, and the scale of the models is gradually increasing.
  • using GPUs and CPUs to process these large-scale models requires a lot of computing time and consumes a lot of power.
  • the embodiments of the present disclosure provide a data processing method, device, computer equipment, and storage medium that can save calculation time, reduce energy consumption, and improve calculation accuracy.
  • a data processing method including:
  • determining a pair of truncation thresholds from multiple pairs of truncation thresholds according to the mean value of the absolute value of the quantized data obtained by quantizing a set of data to be quantized using the multiple pairs of truncation thresholds, where the set of data to be quantized is a set of data in a winograd convolution process, and each pair of truncation thresholds includes a symmetric truncation positive value and a truncation negative value; quantizing the set of data to be quantized according to the determined pair of truncation thresholds to obtain quantized first data; continuing the winograd convolution processing according to the quantized first data to obtain a quantized winograd convolution result; and performing inverse quantization processing on the quantized winograd convolution result to obtain the winograd convolution result.
  • a data processing device including:
  • the first determining module is configured to determine a pair of truncation thresholds from the plurality of pairs of truncation thresholds according to the mean value of the absolute value of the quantized data obtained by quantizing a group of data to be quantized using multiple pairs of truncation thresholds, where the group of data to be quantized is a set of data in a winograd convolution process, and each pair of truncation thresholds in the plurality of pairs includes a symmetric truncation positive value and a truncation negative value;
  • the first quantization module is configured to quantize the set of data to be quantized according to the determined pair of cutoff thresholds to obtain quantized first data
  • the convolution module is used to continue the winograd convolution process according to the quantized first data to obtain the quantized winograd convolution result;
  • the inverse quantization module is configured to perform inverse quantization processing on the quantized winograd convolution result to obtain the winograd convolution result.
  • an artificial intelligence chip is provided, and the chip includes the data processing device according to any one of the foregoing.
  • an electronic device including the aforementioned artificial intelligence chip.
  • a board card includes: a storage device, an interface device, a control device, and the aforementioned artificial intelligence chip;
  • the artificial intelligence chip is connected to the storage device, the control device, and the interface device respectively;
  • the storage device is used to store data
  • the interface device is used to implement data transmission between the artificial intelligence chip and external equipment
  • the control device is used to monitor the state of the artificial intelligence chip.
  • an electronic device including:
  • a memory for storing processor executable instructions
  • the processor is configured to call instructions stored in the memory to execute the method described in any one of the foregoing.
  • a computer-readable storage medium having computer program instructions stored thereon, where the computer program instructions, when executed by a processor, implement the method described in any one of the foregoing.
  • in the embodiments of the present disclosure, a pair of truncation thresholds is determined from the multiple pairs of truncation thresholds according to the mean value of the absolute value of the quantized data obtained by quantizing a group of data to be quantized using multiple pairs of truncation thresholds; the set of data to be quantized in the winograd convolution is quantized according to the determined pair of truncation thresholds to obtain quantized first data; the winograd convolution processing is continued according to the quantized first data to obtain a quantized winograd convolution result; and inverse quantization processing is performed on the quantized winograd convolution result to obtain the winograd convolution result. This can improve the accuracy of quantization, save the operation time of the winograd convolution, and reduce energy consumption.
  • Fig. 1 shows a flowchart of a data processing method according to an embodiment of the present disclosure;
  • Fig. 2 shows a block diagram of a data processing device according to an embodiment of the present disclosure;
  • Fig. 3 shows a structural block diagram of a board card according to an embodiment of the present disclosure;
  • Fig. 4 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure;
  • Fig. 5 shows a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
  • the term "if" can be interpreted as "when", "once", "in response to determining", or "in response to detecting", depending on the context.
  • similarly, the phrase "if determined" or "if [the described condition or event] is detected" can be interpreted as "once determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]", depending on the context.
  • Winograd convolution is a convolution acceleration method based on polynomial interpolation. The two inputs of the convolution operation, the neurons and the weights, are partitioned at a certain scale and then linearly transformed (winograd forward transformation); the transformed neurons and weights are then multiplied bitwise; finally, the bitwise multiplication result is linearly transformed again (winograd inverse transformation) to obtain a convolution result equivalent to that of the original convolution operation. The winograd convolution result can be expressed as S = A^T [(G g G^T) ⊙ (B^T d B)] A, where:
  • g represents the weight;
  • G represents the left-multiplication forward transformation matrix corresponding to the weight;
  • G^T represents the right-multiplication forward transformation matrix corresponding to the weight;
  • d represents the input neuron;
  • B represents the right-multiplication forward transformation matrix corresponding to the input neuron;
  • B^T represents the left-multiplication forward transformation matrix corresponding to the input neuron;
  • ⊙ represents the bitwise multiplication operation;
  • A represents the right-multiplication inverse transformation matrix;
  • A^T represents the left-multiplication inverse transformation matrix.
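  • as an illustration of the above formula, the following sketch checks the winograd result against direct convolution for the F(2×2, 3×3) case. The concrete transformation matrices below are the standard ones from the literature and are an assumption for illustration; this publication does not reproduce its matrices here.

```python
import numpy as np

# Standard F(2x2, 3x3) winograd matrices (assumed for illustration;
# the publication itself does not list concrete matrices at this point).
B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=np.float32)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]], dtype=np.float32)
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=np.float32)

d = np.random.randn(4, 4).astype(np.float32)  # input neuron tile
g = np.random.randn(3, 3).astype(np.float32)  # weight

U = G @ g @ G.T            # winograd forward transformation of the weight
V = B_T @ d @ B_T.T        # winograd forward transformation of the input
Y = A_T @ (U * V) @ A_T.T  # bitwise multiplication, then inverse transformation

# Reference: direct 2x2 'valid' convolution (cross-correlation) of d with g.
ref = np.array([[(d[i:i+3, j:j+3] * g).sum() for j in range(2)]
                for i in range(2)])
assert np.allclose(Y, ref, atol=1e-4)
```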
  • based on this, the present disclosure provides a data processing method that can disassemble the multiplication operations in the winograd convolution process into addition operations, thereby saving calculation time and reducing energy consumption, and that quantizes the data in the winograd convolution process to further improve computing performance.
  • KL divergence (Kullback–Leibler divergence), also called relative entropy, information divergence, or information gain, is an asymmetric measure of the difference between two probability distributions P and Q. Assuming that the distribution of the 32-bit floating-point numbers before quantization is P and the distribution of the 8-bit integers after quantization is Q, the smaller the KL divergence between P and Q, the closer the distributions before and after quantization and the more effective the quantization. However, the inventors of the present application found that the quantization effect achieved with the truncation threshold obtained by the traditional KL method is not good and usually causes a large loss of accuracy.
  • therefore, the embodiments of the present disclosure propose a new solution for determining a truncation threshold for symmetric quantization, which can achieve a smaller loss of quantization accuracy than traditional techniques (such as the KL method).
  • in the embodiments of the present disclosure, multiple pairs of truncation thresholds are used to respectively quantize a set of data to be quantized to determine multiple sets of quantized data, where each pair of truncation thresholds includes a symmetric truncation positive value and a truncation negative value.
  • the difference between the mean value of the absolute value of each set of quantized data and the mean value of the absolute value of the set of data to be quantized is used as an evaluation index to select a suitable pair of truncation thresholds from the multiple pairs. In this way, a more suitable truncation threshold can be found.
  • Fig. 1 shows a flowchart of a data processing method according to an embodiment of the present disclosure. As shown in Figure 1, the method may include:
  • Step 101: Determine a pair of truncation thresholds from multiple pairs of truncation thresholds according to the mean value of the absolute value of the quantized data obtained by quantizing a set of data to be quantized using the multiple pairs of truncation thresholds, where the set of data to be quantized is a set of data in a winograd convolution process, and each pair of truncation thresholds in the multiple pairs includes a symmetric truncation positive value and a truncation negative value.
  • in a possible implementation, multiple pairs of truncation thresholds are used to quantize a set of data to be quantized to determine multiple sets of quantized data, where each pair of truncation thresholds includes a symmetric truncation positive value and a truncation negative value.
  • the original data to be quantized may be image data, sound data, or video data.
  • the input data can be expressed in the form of NHWC (batch, height, width, channels), where N represents the number of images, H and W represent the numbers of pixels in the height and width directions respectively, and C represents the number of channels (for example, the three channels R, G, and B). It should be noted that the above representation is only an example of the present disclosure, and the present disclosure is not limited to it.
  • in a possible implementation, determining a pair of truncation thresholds from the multiple pairs of truncation thresholds may include the selection process described below.
  • the data to be quantized may be any data in the winograd convolution process.
  • in a possible implementation, the above winograd convolution processing process may include: disassembling the winograd forward transformation of the input data into a summation operation and performing calculation to obtain the winograd forward transformation result of the input data; performing the bitwise multiplication operation on the winograd forward transformation results to obtain the bitwise multiplication result; and disassembling the winograd inverse transformation of the bitwise multiplication result into a summation operation to obtain the winograd convolution result.
  • the above set of data to be quantized may be the input data (in a possible implementation, the input data is input neurons and/or weights), the winograd forward transformation result of the input data, or the result of the above bitwise multiplication.
  • the data to be quantized can be quantized to speed up the processing of the winograd convolution.
  • the data to be quantized may be a 32-bit floating point number.
  • the data to be quantized may also be floating-point numbers with other digits, or other data types.
  • multiple sets of quantized data are determined by using multiple pairs of truncation thresholds to respectively quantize a set of data to be quantized, where each pair of truncation thresholds in the multiple pairs includes a symmetric truncation positive value and a truncation negative value.
  • the truncation thresholds are a symmetric pair of a truncation positive value and a truncation negative value that have the same magnitude but opposite signs.
  • in the embodiments of the present disclosure, multiple pairs of truncation thresholds can be selected to respectively quantize the data to be quantized.
  • for example, some truncation thresholds can be selected at fixed intervals.
  • in a possible implementation, the foregoing method may further include: determining the maximum absolute value among the absolute values of all data in the set of data to be quantized, and determining the multiple pairs of truncation thresholds based on the maximum absolute value.
  • for example, within the range determined by the maximum absolute value, a truncation threshold is selected every predetermined interval.
  • after the multiple pairs of truncation thresholds are determined, the corresponding one or more quantization parameters can be calculated according to each pair of truncation thresholds, and the calculated quantization parameters are then used to quantize the data to be quantized.
  • alternatively, the data to be quantized can also be quantized directly through various formulas or models according to the truncation thresholds, without separately calculating the value of each quantization parameter.
  • in the embodiments of the present disclosure, according to the difference between the mean value of the absolute value of each set of quantized data and the mean value of the absolute value of the set of data to be quantized, a pair of truncation thresholds is selected from the multiple pairs of truncation thresholds and used to quantize the set of data to be quantized. Since the difference between the mean absolute values of the data before and after quantization reflects the accuracy loss of quantization (the smaller the difference, the smaller the loss), the embodiments of the present disclosure use this difference as the index for selecting the optimal truncation threshold, which can achieve a smaller accuracy loss than the traditional KL method.
  • in a possible implementation, the difference between the mean value of the absolute value of the quantized data and the mean value of the absolute value of the data to be quantized may be the difference between the two absolute-value means, that is, |mean(|Q|) - mean(|F|)|, where Q is the quantized data and F is the data to be quantized.
  • in another possible implementation, the difference may also be the relative difference: the difference between the two absolute-value means divided by the mean value of the absolute value of the data to be quantized, followed by taking the absolute value, that is, |(mean(|Q|) - mean(|F|)) / mean(|F|)|.
  • Selecting a pair of truncation thresholds from the plurality of pairs of truncation thresholds may include:
  • a first set of quantized data is selected from the multiple sets of quantized data, where the difference between the mean value of the absolute value of the first set of quantized data and the mean value of the absolute value of the set of data to be quantized is less than the difference between the mean value of the absolute value of each second set of quantized data and the mean value of the absolute value of the set of data to be quantized, the second sets of quantized data being the sets of quantized data other than the first set.
  • a pair of truncation thresholds corresponding to the first set of quantized data is selected from the multiple pairs of truncation thresholds.
  • that is, the pair of truncation thresholds that was used to quantize the data to be quantized to obtain the first set of quantized data is determined as the pair of truncation thresholds to be used for quantizing the data to be quantized.
  • the data within the truncation range corresponding to a pair of truncation thresholds is quantized according to the quantization parameter; a value to be quantized that lies outside the truncation range and is greater than the truncation positive value is set to the truncation positive value, and a value that lies outside the truncation range and is less than the truncation negative value is set to the truncation negative value.
  • Step 102: Quantize the set of data to be quantized according to the determined pair of truncation thresholds to obtain quantized first data.
  • after a pair of truncation thresholds is determined, the selected pair of truncation thresholds can be used to quantize the set of data to be quantized to obtain the quantized first data, which includes: truncating the values of the set of data to be quantized that are greater than the truncation positive value to the truncation positive value, and truncating the values that are less than the truncation negative value to the truncation negative value, as in the sketch below.
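  • a minimal sketch of this truncation-then-quantization step follows; the simple linear scale used here is an assumption, since the publication's quantization parameters S and f are defined by equations not reproduced in this text.

```python
import numpy as np

def quantize_symmetric(data, clip_pos, n_bits=8):
    """Truncate data to [-clip_pos, clip_pos], then map it linearly onto
    signed n-bit integers. The plain scale below is an assumed stand-in
    for the publication's quantization parameters S and f."""
    qmax = 2 ** (n_bits - 1) - 1                  # e.g. 127 for 8 bits
    clipped = np.clip(data, -clip_pos, clip_pos)  # truncation step
    scale = clip_pos / qmax
    return np.round(clipped / scale).astype(np.int8), scale

def dequantize(quantized, scale):
    """Inverse quantization: map the integers back to floating point."""
    return quantized.astype(np.float32) * scale
```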
  • Step 103 Continue to perform winograd convolution processing according to the quantized first data to obtain a quantized winograd convolution result.
  • Step 104 Perform inverse quantization processing on the quantized winograd convolution result to obtain a winograd convolution result.
  • in a possible implementation, the above data to be quantized is one of the input data, the winograd forward transformation result of the input data, and the bitwise multiplication result.
  • the winograd convolution process can be:
  • quantizing the input data by using a certain pair of truncation thresholds to obtain the quantized input data; disassembling the winograd forward transformation of the quantized input data into a summation operation and performing calculation to obtain the winograd forward transformation result of the quantized input data; performing the bitwise multiplication operation on the winograd forward transformation result of the quantized input data to obtain the bitwise multiplication result; disassembling the winograd inverse transformation of the bitwise multiplication result into a summation operation to obtain the quantized winograd convolution result; and performing inverse quantization processing on the quantized winograd convolution result to obtain the winograd convolution result.
  • in another possible implementation, when the winograd forward transformation result of the input data is quantized, the winograd convolution process may be: disassembling the winograd forward transformation of the input data into a summation operation and performing calculation to obtain the winograd forward transformation result of the input data; quantizing the winograd forward transformation result of the input data by using a certain pair of truncation thresholds to obtain the quantized winograd forward transformation result; performing the bitwise multiplication operation on the quantized winograd forward transformation result to obtain the bitwise multiplication result; disassembling the winograd inverse transformation of the bitwise multiplication result into a summation operation to obtain the quantized winograd convolution result; and performing inverse quantization processing on the quantized winograd convolution result to obtain the winograd convolution result.
  • in another possible implementation, when the bitwise multiplication result is quantized, the winograd convolution process can be: disassembling the winograd forward transformation of the input data into a summation operation and performing calculation to obtain the winograd forward transformation result of the input data; performing the bitwise multiplication operation on the winograd forward transformation result of the input data to obtain the bitwise multiplication result; quantizing the bitwise multiplication result by using a certain pair of truncation thresholds to obtain the quantized bitwise multiplication result; disassembling the winograd inverse transformation of the quantized bitwise multiplication result into a summation operation to obtain the quantized winograd convolution result; and performing inverse quantization processing on the quantized winograd convolution result to obtain the winograd convolution result. The three variants are sketched together below.
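  • the three variants can be sketched as one routine with a switch for where quantization is inserted. This reuses the matrices and the quantize/dequantize helpers from the sketches above; for brevity it dequantizes immediately after each quantization, whereas the text keeps the data quantized through the remaining steps and applies inverse quantization to the final convolution result.

```python
def winograd_conv_quantized(d, g, where, clip_pos):
    """Illustrative only: 'where' selects which data is quantized --
    the input, its forward transformation result, or the bitwise
    multiplication result (the three variants described above)."""
    if where == "input":
        q, s = quantize_symmetric(d, clip_pos)
        d = dequantize(q, s)
    V = B_T @ d @ B_T.T                  # forward transformation (input)
    if where == "transformed":
        q, s = quantize_symmetric(V, clip_pos)
        V = dequantize(q, s)
    U = G @ g @ G.T                      # forward transformation (weight)
    M = U * V                            # bitwise multiplication
    if where == "product":
        q, s = quantize_symmetric(M, clip_pos)
        M = dequantize(q, s)
    return A_T @ M @ A_T.T               # inverse transformation
```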
  • the foregoing disassembling the winograd forward transformation of the input data into a summation operation, and performing calculations to obtain the winograd forward transformation result of the input data may include:
  • the input data is disassembled into a plurality of first sub-tensors, and winograd positive transformation is performed on the plurality of first sub-tensors and summed to obtain a winograd positive transformation result of the input data.
  • the number of the plurality of first sub-tensors is the same as the number of non-zero elements of the input data, each first sub-tensor of the plurality of first sub-tensors has one element that is the same as the element at the corresponding position in the input data, and all other elements are 0.
  • for example, assume the input neuron is a 4×4 matrix including 16 elements; the input data can then be disassembled into 16 first sub-tensors.
  • in each first sub-tensor there is one element that is the same as the element at the corresponding position in the input data, and the other elements are all 0 (see the construction sketched below).
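  • the construction can be sketched as follows (a 4×4 example with assumed values; each first sub-tensor keeps one element of the input and is 0 elsewhere, and the sub-tensors sum back to the input):

```python
import numpy as np

d = np.arange(16, dtype=np.float32).reshape(4, 4)  # example 4x4 input tile

# Disassemble d into 16 first sub-tensors: each keeps one element of d
# at its original position, with every other element equal to 0.
first_subtensors = []
for i in range(4):
    for j in range(4):
        t = np.zeros_like(d)
        t[i, j] = d[i, j]
        first_subtensors.append(t)

assert np.allclose(sum(first_subtensors), d)  # the pieces sum back to d
```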
  • the above disassembly methods are only some examples of the present disclosure, and do not limit the present disclosure in any way.
  • it should be noted that the above disassembly is only an example; when the input data contains elements equal to 0, the number of first sub-tensors obtained by the disassembly can be less than the number of elements of the input data, for example, the number of first sub-tensors is the same as the number of non-zero elements of the input data.
  • performing winograd forward transformation on the multiple first subtensors and summing them to obtain the winograd forward transformation result of the input data may include the following process:
  • for each first sub-tensor, multiply the non-zero element value of the first sub-tensor by the winograd forward transformation result of the first-element sub-tensor corresponding to that first sub-tensor, where the first-element sub-tensor is a tensor in which the value of the element at the first position is 1 and the other elements are 0, and the first position is the same as the position of the non-zero element in the first sub-tensor;
  • the winograd positive transformation results of the multiple first subtensors are added to obtain the winograd positive transformation result of the input data.
  • for example, the first-element sub-tensor corresponding to d_00 can be a 4×4 tensor whose element at the position of d_00 is 1 and whose other elements are all 0.
  • that is, the first-element sub-tensor can be obtained by extracting the value of the non-zero element from the first sub-tensor, and the non-zero element value can be used as a coefficient of the first-element sub-tensor.
  • in a possible implementation, the winograd forward transformation result of the first-element sub-tensor corresponding to each first sub-tensor can be obtained in advance through the following process: for each first-element sub-tensor, left-multiply it by the forward transformation left-multiplication matrix and right-multiply it by the forward transformation right-multiplication matrix to obtain the winograd forward transformation result of the first-element sub-tensor.
  • for input data of a given scale, the form of the corresponding first-element sub-tensors is determined, and the corresponding forward transformation left-multiplication matrix and forward transformation right-multiplication matrix are also determined.
  • therefore, the winograd forward transformation result of each first-element sub-tensor can be calculated in advance, and the specific process is as described above.
  • for example, for each first-element sub-tensor, the corresponding winograd forward transformation result can be written out as a constant matrix (the concrete matrices are omitted in this text).
  • the matrix multiplication operation can be broken down into an addition operation.
  • since the process of calculating the winograd forward transformation result of a first-element sub-tensor involves more multiplication operations, the pre-calculated winograd forward transformation results of first-element sub-tensors of various scales can be stored in the computing device; in this way, in the actual computing process they can be obtained directly without repeated computation, thereby shortening computing time and saving computing resources.
  • after the winograd forward transformation result of a first-element sub-tensor is obtained, the non-zero element value of the first sub-tensor can be multiplied by the winograd forward transformation result of the corresponding first-element sub-tensor to obtain the winograd forward transformation result of the first sub-tensor.
  • the winograd forward transformation results of all the first sub-tensors are calculated through the above process, and the winograd forward transformation results of the multiple first sub-tensors are added to obtain the winograd forward transformation result of the input data.
  • in other words, after the input data is disassembled into multiple first sub-tensors, the winograd forward transformation result of the input data can be obtained by summation according to the pre-obtained winograd forward transformation results of the first-element sub-tensors corresponding to the first sub-tensors and the non-zero element values of the first sub-tensors, as sketched below.
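  • a sketch of this disassembly-based forward transformation follows, reusing d and B_T from the earlier sketches; the transformation of each first-element sub-tensor is precomputed once, and the run-time work reduces to scaling and addition:

```python
# Precompute, once, the winograd forward transformation of every
# first-element sub-tensor E_ij (1 at position (i, j), 0 elsewhere).
meta_fwd = {}
for i in range(4):
    for j in range(4):
        E = np.zeros((4, 4), dtype=np.float32)
        E[i, j] = 1.0
        meta_fwd[(i, j)] = B_T @ E @ B_T.T  # entries are only 0 and +/-1

# Run time: the forward transformation becomes a weighted sum -- each
# precomputed result is scaled by the corresponding element of d and added.
V = sum(d[i, j] * meta_fwd[(i, j)] for i in range(4) for j in range(4))
assert np.allclose(V, B_T @ d @ B_T.T)
```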
  • similarly, the winograd forward transformation result of the weight can be calculated; the calculation can use traditional matrix multiplication, or it can refer to the disassembly-based summation described above to obtain the winograd forward transformation result.
  • after the winograd forward transformation results of the input data and the weight are obtained, the bitwise multiplication operation can be performed on them to obtain the bitwise multiplication result.
  • the bitwise multiplication refers to multiplying the data at corresponding positions of the two tensors, each product being the value at the corresponding position in the bitwise multiplication result.
  • in a possible implementation, disassembling the winograd inverse transformation of the bitwise multiplication result into a summation operation to obtain the winograd convolution result of the input data may include: disassembling the bitwise multiplication result into a plurality of second sub-tensors, performing winograd inverse transformation on the plurality of second sub-tensors, and summing the results.
  • the number of the plurality of second sub-tensors is the same as the number of non-zero elements of the bitwise multiplication result, each second sub-tensor of the plurality of second sub-tensors has one element that is the same as the element at the corresponding position in the bitwise multiplication result, and all other elements are 0.
  • for example, the bitwise multiplication result can be disassembled into 16 second sub-tensors, each of which retains one element of the bitwise multiplication result at its original position and is 0 everywhere else.
  • winograd inverse transformation can be performed on the multiple second sub-tensors and summed to obtain the winograd convolution result of the input data.
  • performing winograd inverse transformation on the multiple second subtensors and summing them to obtain the winograd convolution result of the input data may include the following process:
  • for each second sub-tensor, multiply the non-zero element value of the second sub-tensor by the winograd inverse transformation result of the second-element sub-tensor corresponding to that second sub-tensor, where the second-element sub-tensor is a tensor in which the value of the element at the second position is 1 and the other elements are 0, and the second position is the same as the position of the non-zero element in the second sub-tensor;
  • the winograd inverse transform results of the multiple second subtensors are added to obtain the winograd convolution result of the input data.
  • the method for determining the second-element sub-tensor corresponding to a second sub-tensor is the same as the method for determining the first-element sub-tensor above, and will not be repeated here.
  • in a possible implementation, the winograd inverse transformation result of the second-element sub-tensor is obtained in advance through the following process: for each second-element sub-tensor, left-multiply it by the inverse transformation left-multiplication matrix and right-multiply it by the inverse transformation right-multiplication matrix to obtain the winograd inverse transformation result of the second-element sub-tensor.
  • for a bitwise multiplication result of a given scale, the form of the corresponding second-element sub-tensor is determined, and the corresponding inverse transformation left-multiplication matrix and right-multiplication matrix are also determined; therefore, the winograd inverse transformation result of the second-element sub-tensor can be calculated in advance, and the specific process is as described above.
  • in a possible implementation, for a 4×4 bitwise multiplication result and a 2×2 output, the inverse transformation left-multiplication matrix is a 2×4 matrix and the inverse transformation right-multiplication matrix is a 4×2 matrix; for example, in the standard F(2×2, 3×3) case the left-multiplication matrix is A^T = [[1, 1, 1, 0], [0, 1, -1, -1]] and the right-multiplication matrix is its transpose A (these concrete matrices are given for illustration).
  • in a possible implementation, the dimension of the inverse transformation matrix can be determined according to the dimension of the input neuron, the dimension of the weight, and the convolution stride; the above is only an example, and the present disclosure is not limited in any way.
  • since the elements of the inverse transformation matrix are composed of 0, ±1, and simple fractions, the matrix multiplication of the inverse transformation can be realized by disassembling it into addition and shift operations: multiplying the inverse transformation matrices by the second-element sub-tensor yields the winograd inverse transformation result of the second-element sub-tensor.
  • the element values in the winograd inverse transformation result of the second-element sub-tensor are composed of 0, ±1, and fractions that can be calculated by simple shift operations, which still saves calculation time compared with multiplication operations.
  • through the above process, the winograd inverse transformation result of each second sub-tensor is obtained; the winograd inverse transformation results of the multiple second sub-tensors are then added to obtain the winograd convolution result of the input data. The specific process of this summation can refer to the forward transformation described above, except that the winograd inverse transformation results of the second-element sub-tensors are not composed entirely of 0 and ±1; since the fractions can still be calculated by simple shift operations, the present disclosure can still achieve the effects of saving calculation time and reducing energy consumption after disassembling the ordinary inverse transformation process.
  • in summary, multiple second sub-tensors are obtained by disassembling the bitwise multiplication result, and the winograd convolution result of the input data can be obtained by summation according to the pre-obtained winograd inverse transformation results of the second-element sub-tensors corresponding to the second sub-tensors and the non-zero element values of the second sub-tensors, as sketched below.
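  • the inverse transformation can be sketched the same way, reusing U, V, and A_T from the earlier sketches:

```python
M = U * V  # bitwise multiplication result (4x4), from the earlier sketch

# Precompute the winograd inverse transformation of every second-element
# sub-tensor (1 at one position of M, 0 elsewhere).
meta_inv = {}
for i in range(4):
    for j in range(4):
        E = np.zeros((4, 4), dtype=np.float32)
        E[i, j] = 1.0
        meta_inv[(i, j)] = A_T @ E @ A_T.T

# Run time: the inverse transformation is a weighted sum over the
# non-zero elements of the bitwise multiplication result.
Y = sum(M[i, j] * meta_inv[(i, j)] for i in range(4) for j in range(4))
assert np.allclose(Y, A_T @ M @ A_T.T)
```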
  • An embodiment of the present disclosure also provides a method for searching for a truncation threshold for symmetric quantization; determining the multiple sets of quantized data described above further includes:
  • a first set of quantized data is determined by using a first pair of truncation thresholds to quantize the set of data to be quantized, where the first pair of truncation thresholds includes a first truncation positive value and a first truncation negative value that is the opposite of the first truncation positive value;
  • the mean value of the absolute value of the data to be quantized and the maximum absolute value in the data to be quantized are determined, where the mean value of the absolute value is the sum of the absolute values of all data in the data to be quantized divided by the number of elements.
  • the minimum mean difference is initialized, for example, to the maximum representable floating-point value, and the search order i of the cyclic search is initialized (for example, to 0).
  • in a possible implementation, the search order i can also be initialized to half of the total number of searches, that is, the search starts from the middle, which can improve search efficiency.
  • one or more rounds of the threshold search process can be set, and each round of the threshold search can have the same or different total number of searches.
  • the total number of searches in each round can be set between 10 and 32.
  • generally, the larger the total number of searches, the longer the search time and the more accurate the truncation threshold found; however, beyond a certain total number of searches, the search performance may no longer be substantially improved.
  • for example, 10 pairs of candidate truncation thresholds can be determined for the data to be quantized; the 10 pairs of truncation thresholds are used in turn to perform the quantization process, and the best pair of truncation thresholds is determined according to the difference in the mean value of the absolute value of the data before and after quantization.
  • it is judged whether the current search order i is less than the predetermined total number of searches, that is, as each pair of truncation thresholds is selected in turn for quantization, whether the calculations for all truncation thresholds have been completed. If the current search order i is less than the predetermined total number of searches, a pair of truncation thresholds is determined based on the current search order i, namely (-maximum absolute value / predetermined total number of searches × (i + 1), maximum absolute value / predetermined total number of searches × (i + 1)).
  • in summary, the truncation threshold search process is: using multiple pairs of truncation thresholds to respectively quantize the data to be quantized, determining, among the multiple sets of quantized data, the set of quantized data whose mean absolute value differs least from that of the data to be quantized, and then selecting the pair of truncation thresholds corresponding to that set of quantized data from the multiple pairs of truncation thresholds.
  • in a possible implementation, a second round of fine-grained truncation threshold search can be performed. The second round can also follow the aforementioned method, except that it searches within a certain range around the optimal truncation threshold of the first round (for example, between the previous candidate threshold and the next candidate threshold), further refining the first-round result.
  • in a possible implementation, in the second round the interval between adjacent pairs of candidate truncation thresholds may be (maximum absolute value × 2) / (total number of searches in the first round × total number of searches in the second round).
  • the fine-grained optimal truncation threshold is then determined. Through two rounds of search, a more accurate truncation threshold can be obtained, and the accuracy loss caused by quantization can be reduced. A sketch of the two-round search follows.
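  • a sketch of the two-round search, reusing the quantize/dequantize helpers above; the relative mean-absolute-value difference is used as the selection index, and the exact grid placement in the fine round is an assumption consistent with the interval given above:

```python
import numpy as np

def mean_abs_diff(data, clip_pos):
    """Relative difference between the mean absolute values of the data
    before and after quantization (the selection index from the text)."""
    q, s = quantize_symmetric(data, clip_pos)
    ref = np.abs(data).mean()
    return abs(np.abs(dequantize(q, s)).mean() - ref) / ref

def search_clip(data, n_coarse=10, n_fine=10):
    """Round one scans n_coarse thresholds absmax/n_coarse*(i+1); round two
    refines on a grid of width 2*absmax/n_coarse around the round-one best."""
    absmax = np.abs(data).max()
    coarse = [absmax / n_coarse * (i + 1) for i in range(n_coarse)]
    best = min(coarse, key=lambda c: mean_abs_diff(data, c))
    step = absmax / n_coarse  # spacing of the coarse grid
    fine = [max(best - step, step / n_fine) + 2 * step / n_fine * k
            for k in range(n_fine + 1)]
    return min(fine, key=lambda c: mean_abs_diff(data, c))
```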
  • in addition, the embodiments of the present disclosure provide a method for iteratively searching for the optimal truncation threshold.
  • for example, three pairs of truncation thresholds are determined: the maximum absolute value absmax of all data in the data F_x to be quantized is determined, and the three pairs of truncation thresholds can be (-absmax/2, absmax/2), (-absmax×3/4, absmax×3/4), and (-absmax, absmax).
  • F_x is quantized with each of the three pairs of truncation thresholds, the mean absolute value of each set of quantized data is calculated and compared with the mean absolute value F_mean of F_x (for example, using the relative difference described above), and the smallest difference diff_min is chosen. It is then determined whether the minimum difference diff_min is less than a predetermined threshold set in advance; if it is, the iteration can end, and otherwise the search can be repeated around the pair of truncation thresholds with the smallest difference.
  • it should be noted that the embodiment of the present disclosure may also perform the iterative process only once, and then directly use the pair of truncation thresholds corresponding to the smallest difference diff_min as the final truncation thresholds, as sketched below.
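  • a sketch of the iterative variant follows; the three candidate pairs and the diff_min comparison are as described above, while the rule for shrinking the range around the best candidate is an assumption, since the publication leaves the update step implicit:

```python
def iterative_search(data, tol=0.01, max_iters=10):
    """Evaluate (absmax/2, absmax*3/4, absmax) as truncation positive values,
    keep the one with the smallest diff, and stop once diff_min < tol or the
    iteration budget is spent. Run with max_iters=1 for the one-pass variant."""
    hi = np.abs(data).max()
    best, best_diff = hi, mean_abs_diff(data, hi)
    for _ in range(max_iters):
        for cand in (hi / 2, hi * 3 / 4, hi):
            d = mean_abs_diff(data, cand)
            if d < best_diff:
                best, best_diff = cand, d
        if best_diff < tol:
            break
        hi = best  # assumed update: narrow the search below the current best
    return best
```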
  • in a possible implementation, the quantization parameters used when quantizing data with each pair of truncation thresholds can be determined by equations (1)-(3) (the concrete equations are not reproduced in this text).
  • where n represents the number of binary digits after quantization, and S and f represent quantization parameters.
  • using the three pairs of truncation thresholds, the quantization parameters S1, f1, S2, f2, S3, and f3 can be obtained, and the corresponding quantized data can thereby be obtained.
  • after the optimal pair of truncation thresholds is selected, the S and f corresponding to that pair of truncation thresholds are used directly to quantize the data to be quantized.
  • in the embodiments of the present disclosure, a pair of truncation thresholds is determined from the multiple pairs of truncation thresholds according to the mean value of the absolute value of the quantized data obtained by quantizing a group of data to be quantized using the multiple pairs of truncation thresholds; the set of data to be quantized in the winograd convolution is quantized according to the determined pair of truncation thresholds to obtain quantized first data, the winograd convolution processing is continued according to the quantized first data to obtain the quantized winograd convolution result, and inverse quantization processing is performed on the quantized winograd convolution result to obtain the winograd convolution result, which can improve the accuracy of quantization, save the operation time of the winograd convolution, and reduce energy consumption.
  • it should be understood that although the steps in the flowchart of Fig. 1 are displayed in the sequence indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they can be executed in other orders. Moreover, at least some of the steps in Fig. 1 may include multiple sub-steps or stages, which are not necessarily executed at the same time and may be executed at different times; their execution order is not necessarily sequential, and they may be executed in turn or alternately with at least some of the other steps or with sub-steps or stages of other steps.
  • Fig. 2 shows a block diagram of a data processing device according to an embodiment of the present disclosure. As shown in Figure 2, the device may include:
  • the first determining module 201 may be used to determine a pair of truncation thresholds from the plurality of pairs of truncation thresholds according to the mean value of the absolute value of the quantized data obtained by quantizing a group of data to be quantized using multiple pairs of truncation thresholds, where the group of data to be quantized is a set of data in a winograd convolution process, and each pair of truncation thresholds in the plurality of pairs includes a symmetric truncation positive value and a truncation negative value;
  • the first quantization module 202 may be used to quantize the set of data to be quantized according to the determined pair of cutoff thresholds to obtain quantized first data;
  • the convolution module 203 may be used to continue to perform winograd convolution processing according to the quantized first data to obtain a quantized winograd convolution result;
  • the inverse quantization module 204 may be used to perform inverse quantization processing on the quantized winograd convolution result to obtain the winograd convolution result.
  • in the embodiments of the present disclosure, a pair of truncation thresholds is determined from the plurality of pairs of truncation thresholds according to the mean value of the absolute value of the quantized data obtained by quantizing a set of data to be quantized using multiple pairs of truncation thresholds; the set of data to be quantized in the winograd convolution is quantized according to the determined pair of truncation thresholds to obtain quantized first data, the winograd convolution processing is continued according to the quantized first data to obtain the quantized winograd convolution result, and inverse quantization processing is performed on the quantized winograd convolution result to obtain the winograd convolution result, which can improve the accuracy of quantization, save the operation time of the winograd convolution, and reduce energy consumption.
  • the above-mentioned first determining module 201 may also be used for:
  • in a possible implementation, the above winograd convolution processing process may include: disassembling the winograd forward transformation of the input data into a summation operation and performing calculation to obtain the winograd forward transformation result of the input data; performing the bitwise multiplication operation on the winograd forward transformation results to obtain the bitwise multiplication result; and disassembling the winograd inverse transformation of the bitwise multiplication result into a summation operation to obtain the winograd convolution result.
  • in a possible implementation, the data to be quantized is one of the input data, the winograd forward transformation result of the input data, and the bitwise multiplication result.
  • the device may further include:
  • the second determining module may be used to determine the maximum absolute value among the absolute values of all data in the set of data to be quantized;
  • the third determining module may be used to determine the multiple pairs of truncation thresholds based on the maximum absolute value.
  • the above-mentioned first determining module 201 may also be used for:
  • a first set of quantized data is determined by using a first pair of truncation thresholds to quantize the set of data to be quantized, where the first pair of truncation thresholds includes a first truncation positive value and a first truncation negative value that is the opposite of the first truncation positive value;
  • the first determining module 201 may also be used for:
  • a first set of quantized data is selected from the multiple sets of quantized data, where the difference between the mean value of the absolute value of the first set of quantized data and the mean value of the absolute value of the set of data to be quantized is less than the difference between the mean value of the absolute value of each second set of quantized data and the mean value of the absolute value of the set of data to be quantized, the second sets of quantized data being the sets of quantized data other than the first set;
  • a pair of truncation thresholds corresponding to the first set of quantized data is selected from the plurality of pairs of truncation thresholds.
  • the device may further include:
  • a fourth determining module configured to determine the truncation search range associated with the selected pair of truncation thresholds;
  • a fifth determining module configured to determine new multiple pairs of truncation thresholds within the truncation search range;
  • the second quantization module is configured to determine new multiple sets of quantized data by using the new multiple pairs of truncation thresholds to respectively quantize the set of data to be quantized;
  • the selection module is configured to select a new pair of truncation thresholds from the new multiple pairs of truncation thresholds according to the difference between the mean value of the absolute value of each set of quantized data in the new multiple sets of quantized data and the mean value of the absolute value of the set of data to be quantized.
  • the disassembling the winograd forward transformation of the input data into a summation operation, and performing calculation to obtain the winograd forward transformation result of the input data may include:
  • the input data is disassembled into a plurality of first sub-tensors, and winograd positive transformation is performed on the plurality of first sub-tensors and summed to obtain a winograd positive transformation result of the input data.
  • the number of the plurality of first sub-tensors is the same as the number of non-zero elements of the input data, each first sub-tensor of the plurality of first sub-tensors has one element that is the same as the element at the corresponding position in the input data, and all other elements are 0.
  • in a possible implementation, disassembling the winograd inverse transformation of the bitwise multiplication result into a summation operation to obtain the winograd convolution result of the input data may include: disassembling the bitwise multiplication result into a plurality of second sub-tensors, performing winograd inverse transformation on the plurality of second sub-tensors, and summing the results.
  • the number of the plurality of second sub-tensors is the same as the number of non-zero elements of the bitwise multiplication result, each second sub-tensor of the plurality of second sub-tensors has one element that is the same as the element at the corresponding position in the bitwise multiplication result, and all other elements are 0.
  • the input data may be at least one of input neurons, weights, and gradients.
  • the functions or modules contained in the device provided in the embodiments of the present disclosure can be used to execute the methods described in the above method embodiments.
  • the foregoing device embodiments are only illustrative, and the device of the present disclosure may also be implemented in other ways.
  • the division of units/modules in the above-mentioned embodiments is only a logical function division, and there may be other division methods in actual implementation.
  • multiple units, modules or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the functional units/modules in the various embodiments of the present disclosure may be integrated into one unit/module, each unit/module may exist alone physically, or two or more units/modules may be integrated together.
  • the above integrated unit/module can be realized either in the form of hardware or in the form of a software program module.
  • the hardware may be a digital circuit, an analog circuit, and so on.
  • the physical realization of the hardware structure includes but is not limited to transistors, memristors and so on.
  • the artificial intelligence processor may be any appropriate hardware processor, such as CPU, GPU, FPGA, DSP, ASIC, and so on.
  • the storage unit may be any suitable magnetic storage medium or magneto-optical storage medium, such as resistive random access memory (RRAM), dynamic random access memory (DRAM), static random access memory (SRAM), enhanced dynamic random access memory (EDRAM), high-bandwidth memory (HBM), hybrid memory cube (HMC), and so on.
  • if the integrated unit/module is implemented in the form of a software program module and sold or used as an independent product, it can be stored in a computer-readable memory.
  • the technical solution of the present disclosure, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a memory and includes several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • the aforementioned memory includes: USB flash disk, read-only memory (ROM), random access memory (RAM), mobile hard disk, magnetic disk, optical disk, or other media that can store program codes.
  • an artificial intelligence chip is also disclosed, which includes the above-mentioned data processing device.
  • in a possible implementation, a board card is disclosed, which includes a storage device, an interface device, a control device, and the aforementioned artificial intelligence chip, where the artificial intelligence chip is connected to the storage device, the control device, and the interface device respectively; the storage device is used to store data; the interface device is used to realize data transmission between the artificial intelligence chip and an external device; and the control device is used to monitor the state of the artificial intelligence chip.
  • Fig. 3 shows a structural block diagram of a board card according to an embodiment of the present disclosure.
  • in addition to the chip 389 described above, the board card may include other supporting components, which include, but are not limited to: a storage device 390, an interface device 391, and a control device 392;
  • the storage device 390 is connected to the artificial intelligence chip through a bus for storing data.
  • the storage device may include multiple groups of storage units 393, each group connected to the artificial intelligence chip through a bus. It can be understood that each group of storage units may be DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory).
  • in a possible implementation, the storage device may include 4 groups of storage units, and each group may include a plurality of DDR4 chips.
  • in a possible implementation, the artificial intelligence chip may include four 72-bit DDR4 controllers; in each 72-bit DDR4 controller, 64 bits are used for data transmission and 8 bits are used for ECC checking. It can be understood that when DDR4-3200 chips are used in each group of storage units, the theoretical bandwidth of data transmission can reach 25600 MB/s (3200 MT/s × 64 bits ÷ 8 bits per byte).
  • each group of the storage unit includes a plurality of double-rate synchronous dynamic random access memories arranged in parallel.
  • DDR can transmit data twice in one clock cycle.
  • a controller for controlling the DDR is provided in the chip, which is used to control the data transmission and data storage of each storage unit.
  • the interface device is electrically connected with the artificial intelligence chip.
  • the interface device is used to implement data transmission between the artificial intelligence chip and an external device (such as a server or a computer).
  • the interface device may be a standard PCIE interface.
  • for example, the data to be processed is transferred from the server to the chip through the standard PCIE interface.
  • in other embodiments, the interface device may also be another interface; the present disclosure does not limit the specific form of the other interface, as long as the interface unit can realize the transfer function.
  • the calculation result of the artificial intelligence chip is still transmitted by the interface device back to an external device (such as a server).
  • the control device is electrically connected with the artificial intelligence chip.
  • the control device is used to monitor the state of the artificial intelligence chip.
  • the artificial intelligence chip and the control device may be electrically connected through an SPI interface.
  • the control device may include a microcontroller unit (MCU).
  • the artificial intelligence chip may include multiple processing chips, multiple processing cores, or multiple processing circuits, and can drive multiple loads. Therefore, the artificial intelligence chip can be in different working states such as multi-load and light-load.
  • the control device can realize the regulation and control of the working states of the multiple processing chips, multiple processing cores, and/or multiple processing circuits in the artificial intelligence chip.
  • an electronic device which includes the aforementioned artificial intelligence chip.
  • Electronic equipment includes data processing devices, robots, computers, printers, scanners, tablets, smart terminals, mobile phones, driving recorders, navigators, sensors, camera heads, servers, cloud servers, cameras, video cameras, projectors, watches, headsets, mobile storage, wearable devices, vehicles, household appliances, and/or medical equipment.
  • the transportation means include airplanes, ships, and/or vehicles;
  • the household appliances include TVs, air conditioners, microwave ovens, refrigerators, rice cookers, humidifiers, washing machines, electric lights, gas stoves, and range hoods;
  • the medical equipment includes nuclear magnetic resonance instruments, B-mode ultrasound instruments, and/or electrocardiographs.
  • the embodiments of the present disclosure also provide a computer-readable storage medium on which computer program instructions are stored, and the computer program instructions implement the above-mentioned method when executed by a processor.
  • the computer-readable storage medium may be a non-volatile computer-readable storage medium.
  • An embodiment of the present disclosure also provides an electronic device, including: a processor; a memory for storing executable instructions of the processor; wherein the processor is configured to call the instructions stored in the memory to execute the above method.
  • FIG. 4 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure.
  • the electronic device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and other terminals.
  • the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, and a sensor component 814 , And communication component 816.
  • the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method.
  • the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components.
  • the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
  • the memory 804 is configured to store various types of data to support operations in the electronic device 800. Examples of these data include instructions for any application or method operating on the electronic device 800, contact data, phone book data, messages, pictures, videos, etc.
  • the memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the power supply component 806 provides power for various components of the electronic device 800.
  • the power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
  • the multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
  • the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 810 is configured to output and/or input audio signals.
  • the audio component 810 includes a microphone (MIC), and when the electronic device 800 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive an external audio signal.
  • the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
  • the audio component 810 further includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
  • the above-mentioned peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include, but are not limited to: home button, volume button, start button, and lock button.
  • the sensor component 814 includes one or more sensors for providing the electronic device 800 with various aspects of state evaluation.
  • the sensor component 814 can detect the on/off status of the electronic device 800 and the relative positioning of components, for example the display and the keypad of the electronic device 800. The sensor component 814 can also detect a position change of the electronic device 800 or of one of its components, the presence or absence of contact between the user and the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and the temperature change of the electronic device 800.
  • the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
  • the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
  • the electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
  • the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components, to perform the above method.
  • a non-volatile computer-readable storage medium such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the foregoing method.
  • FIG. 5 shows a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
  • the electronic device 1900 may be provided as a server, as shown in FIG. 5.
  • the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and a memory resource represented by a memory 1932, for storing instructions executable by the processing component 1922, such as application programs.
  • the application program stored in the memory 1932 may include one or more modules each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above-described methods.
  • the electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958.
  • the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
  • a non-volatile computer-readable storage medium is also provided, such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the foregoing method.
  • The foregoing may be better understood in light of the following clauses:
  • Clause A1. A data processing method, including: determining a pair of truncation thresholds from multiple pairs of truncation thresholds according to the mean of the absolute values of quantized data obtained by quantizing a set of data to be quantized using the multiple pairs of truncation thresholds, wherein the set of data to be quantized is a set of data in a winograd convolution process, and each pair of truncation thresholds in the multiple pairs includes a symmetric truncation positive value and truncation negative value; quantizing the set of data to be quantized according to the determined pair of truncation thresholds to obtain quantized first data; continuing to perform the winograd convolution process according to the quantized first data to obtain a quantized winograd convolution result; and performing dequantization on the quantized winograd convolution result to obtain a winograd convolution result.
  • Clause A2. The method according to clause A1, wherein determining a pair of truncation thresholds from the multiple pairs of truncation thresholds according to the mean of the absolute values of quantized data obtained by quantizing a set of data to be quantized using multiple pairs of truncation thresholds includes: determining multiple sets of quantized second data by quantizing the set of data to be quantized using the multiple pairs of truncation thresholds respectively; and selecting a pair of truncation thresholds from the multiple pairs of truncation thresholds based on the difference between the mean of the absolute values of at least one set of quantized data among the multiple sets of quantized data and the mean of the absolute values of the set of data to be quantized.
  • Clause A3. The method according to clause A1 or A2, wherein the winograd convolution process includes: decomposing the winograd forward transform of input data into summation operations and performing computation to obtain the winograd forward-transform result of the input data; performing element-wise multiplication of the winograd forward-transform results of the input data to obtain an element-wise product; and decomposing the winograd inverse transform of the element-wise product into summation operations to obtain the winograd convolution result.
  • Clause A4. The method according to clause A3, wherein the data to be quantized is one of the input data, the winograd forward-transform result of the input data, and the element-wise product.
  • Clause A5. The method according to any one of clauses A1 to A4, further including: determining the maximum absolute value among the absolute values of all data in the set of data to be quantized; and determining the multiple pairs of truncation thresholds based on the maximum absolute value.
  • Clause A6. The method according to any one of clauses A1 to A5, wherein determining multiple sets of quantized second data further includes: determining a first truncation positive value based on the maximum absolute value, a predetermined total number of searches, and a current search order; determining a first set of quantized data by quantizing the set of data to be quantized using a first pair of truncation thresholds, the first pair of truncation thresholds including the first truncation positive value and a first truncation negative value opposite to the first truncation positive value; and determining a first difference between the mean of the absolute values of the first set of quantized data and the mean of the absolute values of the set of data to be quantized.
  • Clause A7. The method according to clause A2, wherein selecting a pair of truncation thresholds from the multiple pairs of truncation thresholds includes: selecting a set of first quantized data from the multiple sets of quantized data, where the difference between the mean of the absolute values of the set of first quantized data and the mean of the absolute values of the set of data to be quantized is smaller than the difference between the mean of the absolute values of multiple sets of second quantized data and the mean of the absolute values of the set of data to be quantized, the multiple sets of second quantized data being the sets of quantized data among the multiple sets other than the set of first quantized data; and selecting, from the multiple pairs of truncation thresholds, the pair of truncation thresholds corresponding to the set of first quantized data.
  • Clause A8. The method according to clause A7, further including: determining a truncation search range associated with the selected pair of truncation thresholds; determining new multiple pairs of truncation thresholds within the truncation search range; determining new multiple sets of quantized data by quantizing the set of data to be quantized using the new multiple pairs of truncation thresholds respectively; and selecting a new pair of truncation thresholds from the new multiple pairs of truncation thresholds based on the difference between the mean of the absolute values of each set of quantized data among the new multiple sets of quantized data and the mean of the absolute values of the set of data to be quantized.
  • Clause A9. The method according to clause A3, wherein decomposing the winograd forward transform of the input data into summation operations and performing computation to obtain the winograd forward-transform result of the input data includes: decomposing the input data into multiple first sub-tensors, and performing the winograd forward transform on the multiple first sub-tensors and summing to obtain the winograd forward-transform result of the input data, where the number of the multiple first sub-tensors is the same as the number of non-zero elements of the input data, and each first sub-tensor of the multiple first sub-tensors has one element identical to the element at the corresponding position of the input data, with all its other elements being 0.
  • Clause A10. The method according to clause A3, wherein decomposing the winograd inverse transform of the element-wise product into summation operations to obtain the winograd convolution result of the input data includes: decomposing the element-wise product into multiple second sub-tensors, and performing the winograd inverse transform on the multiple second sub-tensors and summing to obtain the winograd convolution result of the input data, where the number of the multiple second sub-tensors is the same as the number of non-zero elements of the element-wise product, and each second sub-tensor of the multiple second sub-tensors has one element identical to the element at the corresponding position of the element-wise product, with all its other elements being 0.
  • Clause A11. The method according to clause A3 or A4, wherein the input data is at least one of input neurons, weights, and gradients.
  • Clause A12. A data processing device, including: a first determining module, configured to determine a pair of truncation thresholds from multiple pairs of truncation thresholds according to the mean of the absolute values of quantized data obtained by quantizing a set of data to be quantized using the multiple pairs of truncation thresholds, wherein the set of data to be quantized is a set of data in a winograd convolution process, and each pair of truncation thresholds in the multiple pairs includes a symmetric truncation positive value and truncation negative value; a first quantization module, configured to quantize the set of data to be quantized according to the determined pair of truncation thresholds to obtain quantized first data; a convolution module, configured to continue to perform the winograd convolution process according to the quantized first data to obtain a quantized winograd convolution result; and a dequantization module, configured to perform dequantization on the quantized winograd convolution result to obtain a winograd convolution result.
  • Clause A13. The device according to clause A12, wherein the first determining module is further configured to: determine multiple sets of quantized second data by quantizing the set of data to be quantized using the multiple pairs of truncation thresholds respectively; and select a pair of truncation thresholds from the multiple pairs of truncation thresholds based on the difference between the mean of the absolute values of at least one set of quantized data among the multiple sets of quantized data and the mean of the absolute values of the set of data to be quantized.
  • Clause A14. The device according to clause A12 or A13, wherein the winograd convolution process includes: decomposing the winograd forward transform of input data into summation operations and performing computation to obtain the winograd forward-transform result of the input data; performing element-wise multiplication of the winograd forward-transform results of the input data to obtain an element-wise product; and decomposing the winograd inverse transform of the element-wise product into summation operations to obtain the winograd convolution result.
  • Clause A15. The device according to clause A14, wherein the data to be quantized is one of the input data, the winograd forward-transform result of the input data, and the element-wise product.
  • Clause A16. The device according to any one of clauses A12 to A15, further including: a second determining module, configured to determine the maximum absolute value among the absolute values of all data in the set of data to be quantized; and a third determining module, configured to determine the multiple pairs of truncation thresholds based on the maximum absolute value.
  • Clause A16. The device according to any one of clauses A12 to A15, wherein the first determining module is further configured to: determine a first truncation positive value based on the maximum absolute value, a predetermined total number of searches, and a current search order; determine a first set of quantized data by quantizing the set of data to be quantized using a first pair of truncation thresholds, the first pair of truncation thresholds including the first truncation positive value and a first truncation negative value opposite to the first truncation positive value; and determine a first difference between the mean of the absolute values of the first set of quantized data and the mean of the absolute values of the set of data to be quantized.
  • Clause A17. The device according to clause A13, wherein the first determining module is further configured to: select a set of first quantized data from the multiple sets of quantized data, where the difference between the mean of the absolute values of the set of first quantized data and the mean of the absolute values of the set of data to be quantized is smaller than the difference between the mean of the absolute values of multiple sets of second quantized data and the mean of the absolute values of the set of data to be quantized, the multiple sets of second quantized data being the sets of quantized data among the multiple sets other than the set of first quantized data; and select, from the multiple pairs of truncation thresholds, the pair of truncation thresholds corresponding to the set of first quantized data.
  • Clause A18. The device according to clause A17, further including: a fourth determining module, configured to determine a truncation search range associated with the selected pair of truncation thresholds; a fifth determining module, configured to determine new multiple pairs of truncation thresholds within the truncation search range; a second quantization module, configured to determine new multiple sets of quantized data by quantizing the set of data to be quantized using the new multiple pairs of truncation thresholds respectively; and a selection module, configured to select a new pair of truncation thresholds from the new multiple pairs of truncation thresholds based on the difference between the mean of the absolute values of each set of quantized data among the new multiple sets of quantized data and the mean of the absolute values of the set of data to be quantized.
  • Clause A19. The device according to clause A14, wherein decomposing the winograd forward transform of the input data into summation operations and performing computation to obtain the winograd forward-transform result of the input data includes: decomposing the input data into multiple first sub-tensors, and performing the winograd forward transform on the multiple first sub-tensors and summing to obtain the winograd forward-transform result of the input data, where the number of the multiple first sub-tensors is the same as the number of non-zero elements of the input data, and each first sub-tensor of the multiple first sub-tensors has one element identical to the element at the corresponding position of the input data, with all its other elements being 0.
  • Clause A20. The device according to clause A14, wherein decomposing the winograd inverse transform of the element-wise product into summation operations to obtain the winograd convolution result of the input data includes: decomposing the element-wise product into multiple second sub-tensors, and performing the winograd inverse transform on the multiple second sub-tensors and summing to obtain the winograd convolution result of the input data, where the number of the multiple second sub-tensors is the same as the number of non-zero elements of the element-wise product, and each second sub-tensor of the multiple second sub-tensors has one element identical to the element at the corresponding position of the element-wise product, with all its other elements being 0.
  • Clause A21. The device according to clause A14 or A15, wherein the input data is at least one of input neurons, weights, and gradients.
  • Clause A22. An artificial intelligence chip, the chip including the data processing device according to any one of clauses A12 to A21.
  • Clause A23. An electronic device, the electronic device including the artificial intelligence chip according to clause A22.
  • Clause A24. A board, including: a storage device, an interface device, a control device, and the artificial intelligence chip according to clause A22; where the artificial intelligence chip is connected to the storage device, the control device, and the interface device respectively; the storage device is used to store data; the interface device is used to implement data transmission between the artificial intelligence chip and external equipment; and the control device is used to monitor the state of the artificial intelligence chip.
  • Clause A25. The board according to clause A24, wherein the storage device includes multiple groups of storage units, each group of storage units is connected to the artificial intelligence chip through a bus, and the storage units are DDR SDRAM; the chip includes a DDR controller for controlling data transmission to and data storage in each storage unit; and the interface device is a standard PCIE interface.
  • Clause A26. An electronic device, including: a processor; and a memory for storing processor-executable instructions; where the processor is configured to call the instructions stored in the memory to execute the method according to any one of clauses A1 to A11.
  • Clause A27. A computer-readable storage medium with computer program instructions stored thereon, where the computer program instructions, when executed by a processor, implement the method according to any one of clauses A1 to A11.


Abstract

A data processing method, apparatus, computer device, and storage medium. The data processing method includes: determining a pair of truncation thresholds from multiple pairs of truncation thresholds according to the mean of the absolute values of quantized data obtained by quantizing a set of data to be quantized using the multiple pairs of truncation thresholds, where the set of data to be quantized is a set of data in a winograd convolution process, and each pair of truncation thresholds in the multiple pairs includes a symmetric truncation positive value and truncation negative value (101); quantizing the set of data to be quantized according to the determined pair of truncation thresholds to obtain quantized first data (102); continuing to perform the winograd convolution process according to the quantized first data to obtain a quantized winograd convolution result (103); and performing dequantization on the quantized winograd convolution result to obtain a winograd convolution result (104). The method improves quantization precision and computation performance.

Description

Data processing method, apparatus, computer device, and storage medium

This application claims priority to Chinese patent application No. 201911061465.7, filed with the China National Intellectual Property Administration on November 1, 2019 and entitled "Data processing method, apparatus, computer device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field

The present disclosure relates to the field of data processing technology, and in particular to a data processing method, apparatus, computer device, and storage medium.

Background

In the field of artificial intelligence technology, the neural network algorithm is a very popular machine learning algorithm recently and has achieved very good results in various fields, such as image recognition, speech recognition, and natural language processing. As neural network algorithms develop, their complexity keeps increasing, and model scale grows gradually in pursuit of higher recognition accuracy. Processing these large-scale models with GPUs and CPUs takes a lot of computing time and consumes a lot of power.
Summary

In view of this, embodiments of the present disclosure provide a data processing method, apparatus, computer device, and storage medium that can save computation time, reduce energy consumption, and improve computation accuracy.

According to one aspect of the present disclosure, a data processing method is provided, including:

determining a pair of truncation thresholds from multiple pairs of truncation thresholds according to the mean of the absolute values of quantized data obtained by quantizing a set of data to be quantized using the multiple pairs of truncation thresholds, where the set of data to be quantized is a set of data in a winograd convolution process, and each pair of truncation thresholds in the multiple pairs includes a symmetric truncation positive value and truncation negative value;

quantizing the set of data to be quantized according to the determined pair of truncation thresholds to obtain quantized first data;

continuing to perform the winograd convolution process according to the quantized first data to obtain a quantized winograd convolution result;

performing dequantization on the quantized winograd convolution result to obtain a winograd convolution result.

According to another aspect of the present disclosure, a data processing device is provided, including:

a first determining module, configured to determine a pair of truncation thresholds from multiple pairs of truncation thresholds according to the mean of the absolute values of quantized data obtained by quantizing a set of data to be quantized using the multiple pairs of truncation thresholds, where the set of data to be quantized is a set of data in a winograd convolution process, and each pair of truncation thresholds in the multiple pairs includes a symmetric truncation positive value and truncation negative value;

a first quantization module, configured to quantize the set of data to be quantized according to the determined pair of truncation thresholds to obtain quantized first data;

a convolution module, configured to continue to perform the winograd convolution process according to the quantized first data to obtain a quantized winograd convolution result;

a dequantization module, configured to perform dequantization on the quantized winograd convolution result to obtain a winograd convolution result.

According to another aspect of the present disclosure, an artificial intelligence chip is provided, the chip including the data processing device according to any one of the foregoing.

According to another aspect of the present disclosure, an electronic device is provided, the electronic device including the artificial intelligence chip as described above.

According to another aspect of the present disclosure, a board is provided, the board including: a storage device, an interface device, a control device, and the artificial intelligence chip as described above;

where the artificial intelligence chip is connected to the storage device, the control device, and the interface device respectively;

the storage device is used to store data;

the interface device is used to implement data transmission between the artificial intelligence chip and external equipment;

the control device is used to monitor the state of the artificial intelligence chip.

According to another aspect of the present disclosure, an electronic device is provided, including:

a processor;

a memory for storing processor-executable instructions;

where the processor is configured to call the instructions stored in the memory to execute the method according to any one of the foregoing.

According to another aspect of the present disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored, where the computer program instructions, when executed by a processor, implement the method according to any one of the foregoing.

According to the data processing method, apparatus, computer device, and storage medium of the present disclosure, a pair of truncation thresholds is determined from multiple pairs of truncation thresholds according to the mean of the absolute values of quantized data obtained by quantizing a set of data to be quantized using the multiple pairs of truncation thresholds; a set of data to be quantized in the winograd convolution is quantized according to the determined pair of truncation thresholds to obtain quantized first data; the winograd convolution process is continued according to the quantized first data to obtain a quantized winograd convolution result; and dequantization is performed on the quantized winograd convolution result to obtain the winograd convolution result. This improves the precision of quantization while saving the computation time of the winograd convolution and reducing energy consumption.
Other features and aspects of the present disclosure will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.

Brief Description of the Drawings

The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features, and aspects of the present disclosure together with the specification, and serve to explain the principles of the present disclosure.

FIG. 1 shows a flowchart of a data processing method according to an embodiment of the present disclosure;

FIG. 2 shows a block diagram of a data processing device according to an embodiment of the present disclosure;

FIG. 3 shows a structural block diagram of a board according to an embodiment of the present disclosure;

FIG. 4 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure;

FIG. 5 shows a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
Detailed Description

The technical solutions in the embodiments of the present disclosure will be described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present disclosure, not all of them. Based on the embodiments in the present disclosure, all other embodiments obtained by those skilled in the art without creative work fall within the protection scope of the present disclosure.

It should be understood that the terms "first", "second", "third", and "fourth" in the claims, specification, and drawings of the present disclosure are used to distinguish different objects, not to describe a particular order. The terms "include" and "comprise" used in the specification and claims of the present disclosure indicate the existence of the described features, wholes, steps, operations, elements, and/or components, but do not exclude the existence or addition of one or more other features, wholes, steps, operations, elements, components, and/or collections thereof.

It should also be understood that the terminology used in this specification is only for the purpose of describing particular embodiments and is not intended to limit the present disclosure. As used in the specification and claims of the present disclosure, the singular forms "a", "an", and "the" are intended to include the plural forms unless the context clearly indicates otherwise. It should be further understood that the term "and/or" used in the specification and claims of the present disclosure refers to any combination and all possible combinations of one or more of the associated listed items, and includes these combinations.

As used in this specification and the claims, the term "if" may be interpreted as "when", "once", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, to mean "once determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
Winograd convolution is a convolution acceleration technique based on polynomial interpolation. It splits the two inputs of the convolution operation, the neurons and the weights, at a certain scale, applies a linear transformation (the winograd forward transform) to each, multiplies the transformed neurons and weights element-wise, and finally applies another linear transformation (the winograd inverse transform) to the element-wise product, obtaining a convolution result equivalent to the original convolution operation.

The winograd transform is expressed as follows:

For one-dimensional neurons and weights: S = A^T((Gg) ⊙ (B^T d))

For two-dimensional neurons and weights: S = A^T((GgG^T) ⊙ (B^T dB))A

where g denotes the weights, G the left-multiplied forward-transform matrix for the weights, G^T the right-multiplied forward-transform matrix for the weights, d the input neurons, B the right-multiplied forward-transform matrix for the input neurons, B^T the left-multiplied forward-transform matrix for the input neurons, ⊙ element-wise multiplication, A the right-multiplied inverse-transform matrix, and A^T the left-multiplied inverse-transform matrix. For input neurons of different dimensions there are corresponding B and B^T; likewise, for weights of different dimensions there are corresponding G and G^T.

Replacing the original convolution operation with winograd convolution brings large gains in hardware energy efficiency and computation time, and can also achieve higher neural network performance with no, or only a small, increase in hardware overhead. However, the drawbacks of winograd convolution remain fairly obvious: the large number of multiplications still consumes considerable computation time.
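As a concrete illustration of the expressions above, the following sketch evaluates the one-dimensional form S = A^T((Gg) ⊙ (B^T d)) in Python with the widely used F(2, 3) transform matrices and checks it against direct convolution. These particular matrices are standard choices from the winograd literature, not matrices fixed by this disclosure:

    import numpy as np

    # Standard F(2, 3) winograd matrices (a common choice; illustrative only)
    B_T = np.array([[1, 0, -1, 0],
                    [0, 1,  1, 0],
                    [0, -1, 1, 0],
                    [0, 1,  0, -1]], dtype=float)
    G = np.array([[1.0, 0.0, 0.0],
                  [0.5, 0.5, 0.5],
                  [0.5, -0.5, 0.5],
                  [0.0, 0.0, 1.0]])
    A_T = np.array([[1, 1, 1, 0],
                    [0, 1, -1, -1]], dtype=float)

    d = np.array([1.0, 2.0, 3.0, 4.0])   # 4 input neurons
    g = np.array([0.5, -1.0, 2.0])       # 3 weights

    # Forward transforms, element-wise multiplication, inverse transform
    S = A_T @ ((G @ g) * (B_T @ d))

    # Reference: direct sliding-window convolution (two valid outputs)
    ref = np.array([d[0:3] @ g, d[1:4] @ g])
    assert np.allclose(S, ref)

The four element-wise multiplications here replace the six multiplications of the sliding-window computation, which is where the efficiency gain of winograd convolution comes from.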
The present disclosure provides a data processing method that decomposes the multiplications in the winograd convolution process into additions, thereby saving computation time and reducing energy consumption, and that additionally quantizes the data in the winograd convolution process to further improve computation performance.

Generally speaking, when quantizing data, if the chosen value range is too wide, the precision of the quantized data is low; if the value range is too narrow, too much data is truncated, losing the information of the data distributed at the two tails. Here the value range refers to the numerical range between the minimum and maximum truncation thresholds used for quantization. A suitable truncation threshold that minimizes, or at least reduces, the quantization loss therefore needs to be found. Traditionally, the optimal truncation threshold is determined by the Kullback-Leibler (KL) divergence method, where the KL divergence measures the correlation between the data before and after quantization. KL divergence, also called relative entropy, information divergence, or information gain, is an asymmetric measure of the difference between two probability distributions P and Q. Assuming the distribution of the 32-bit floating-point data before quantization is P and the distribution of the 8-bit integers after quantization is Q, the smaller the KL divergence between P and Q, the closer the distributions before and after quantization, and the more effective the quantization. However, the inventors of the present application found that the quantization achieved with truncation thresholds obtained by the traditional KL method is not good and usually causes a large precision loss.
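For reference, a minimal sketch of the traditional KL-divergence criterion described above, assuming a simple fixed-bin histogram comparison (real implementations differ in binning and smoothing details):

    import numpy as np

    def kl_divergence(p, q, eps=1e-12):
        # Asymmetric measure of how distribution Q diverges from P
        p = p / (p.sum() + eps)
        q = q / (q.sum() + eps)
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps))))

    def kl_score(data, t, bins=128):
        # Score a candidate truncation threshold t by comparing the histogram
        # of the original data with that of the data clipped to [-t, t]
        clipped = np.clip(data, -t, t)
        hist_p, edges = np.histogram(data, bins=bins)
        hist_q, _ = np.histogram(clipped, bins=edges)
        return kl_divergence(hist_p.astype(float), hist_q.astype(float))

The disclosure replaces this criterion with the absolute-value-mean difference described next.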
To this end, embodiments of the present disclosure propose a new scheme for determining truncation thresholds for symmetric quantization that achieves a smaller quantization precision loss than traditional techniques (such as the KL method). According to embodiments of the present disclosure, after a set of data to be quantized in the winograd convolution process is obtained, multiple sets of quantized data are determined by quantizing the set of data to be quantized with multiple pairs of truncation thresholds respectively, where each pair of truncation thresholds includes a symmetric truncation positive value and truncation negative value. Then, the difference between the mean of the absolute values of each set of quantized data and the mean of the absolute values of the set of data to be quantized is used as the evaluation metric to select a suitable pair of truncation thresholds from the multiple pairs. In this way, a more suitable truncation threshold can be found.
FIG. 1 shows a flowchart of a data processing method according to an embodiment of the present disclosure. As shown in FIG. 1, the method may include:

Step 101: determine a pair of truncation thresholds from multiple pairs of truncation thresholds according to the mean of the absolute values of quantized data obtained by quantizing a set of data to be quantized using the multiple pairs of truncation thresholds, where the set of data to be quantized is a set of data in a winograd convolution process, and each pair of truncation thresholds in the multiple pairs includes a symmetric truncation positive value and truncation negative value.

For example, after a set of data to be quantized in the winograd convolution process is obtained, multiple sets of quantized data are determined by quantizing the set of data to be quantized with multiple pairs of truncation thresholds respectively, where each pair includes a symmetric truncation positive value and truncation negative value. Then, the difference between the mean of the absolute values of each set of quantized data and the mean of the absolute values of the set of data to be quantized is used as the evaluation metric to select a suitable pair of truncation thresholds from the multiple pairs.

The original data to be quantized may be image data, sound data, video data, and so on. Taking image data as an example, the input data may be expressed in the form NHWC (batch, height, width, channels), where N is the number of images, H and W are the numbers of pixels in the height and width directions respectively, and C is the number of channels; for example, C may represent the three RGB (Red, Green, Blue) channels. Note that this representation is only an example of the present disclosure, and the present disclosure is not limited to it.
In a possible implementation, determining a pair of truncation thresholds from the multiple pairs of truncation thresholds according to the mean of the absolute values of the quantized data obtained by quantizing a set of data to be quantized with multiple pairs of truncation thresholds may include:

determining multiple sets of quantized second data by quantizing the set of data to be quantized using the multiple pairs of truncation thresholds respectively;

selecting a pair of truncation thresholds from the multiple pairs based on the difference between the mean of the absolute values of at least one set of quantized data among the multiple sets of quantized data and the mean of the absolute values of the set of data to be quantized.

For example, a set of data to be quantized in the winograd convolution process is obtained; it may be any data in the winograd convolution process.
In a possible implementation, the winograd convolution process may include:

decomposing the winograd forward transform of the input data into summation operations and performing computation to obtain the winograd forward-transform result of the input data;

performing the element-wise multiplication of the winograd forward-transform results of the input data to obtain an element-wise product;

decomposing the winograd inverse transform of the element-wise product into summation operations to obtain the winograd convolution result.

The set of data to be quantized may then be the input data (in one possible implementation, the input data is the input neurons and/or the weights), the winograd forward-transform result of the input data, or the element-wise product.
For example, the data to be quantized may be quantized to speed up the winograd convolution. In some embodiments, the data to be quantized may be 32-bit floating-point numbers. Alternatively, the data to be quantized may be floating-point numbers of other bit widths, or of other data types.

Multiple sets of quantized data are determined by quantizing the set of data to be quantized with multiple pairs of truncation thresholds respectively, where each pair of truncation thresholds includes a symmetric truncation positive value and truncation negative value. In a symmetric quantization scheme, the truncation thresholds are a symmetric pair of positive and negative values, i.e., a truncation positive value and a truncation negative value, whose magnitudes are the same and whose signs are opposite.

According to embodiments of the present disclosure, multiple pairs of truncation thresholds may be picked and used to quantize the data to be quantized respectively. In some embodiments, some truncation thresholds may be picked at fixed intervals.
In a possible implementation, the method may further include:

determining the maximum absolute value among the absolute values of all data in the set of data to be quantized;

determining the multiple pairs of truncation thresholds based on the maximum absolute value.

For example, a truncation threshold is picked at every predetermined interval according to the maximum absolute value among the absolute values of all data in the data to be quantized. In some embodiments, only truncation thresholds at a few specific positions may be picked, for example only a few predetermined fractions of the maximum absolute value.
In some embodiments, one or more quantization parameters (e.g., point position, scaling factor, offset) may be computed from each pair of truncation thresholds, and the computed quantization parameters are then used to quantize the data to be quantized. Alternatively, the data to be quantized may be quantized directly from the truncation thresholds through various formulas or models, without separately computing the value of each quantization parameter.

A pair of truncation thresholds is selected from the multiple pairs based on the difference between the mean of the absolute values of at least one set of quantized data among the multiple sets of quantized data and the mean of the absolute values of the set of data to be quantized, and is used for quantizing the set of data to be quantized. The gap between the means of the absolute values of the data before and after quantization reflects the precision loss of quantization: the smaller the difference between the absolute-value means, the smaller the precision loss of the quantization operation. Embodiments of the present disclosure therefore use the difference of the absolute-value means of the data before and after quantization as the indicator for picking the optimal truncation threshold, and can achieve a smaller precision loss than the traditional KL method.

In some embodiments, the difference between the mean of the absolute values of the quantized data and the mean of the absolute values of the data to be quantized may be the difference of the two absolute-value means. Alternatively, it may be the difference of the two absolute-value means divided by the absolute-value mean of the data to be quantized, with the absolute value then taken.
In a possible implementation, selecting a pair of truncation thresholds from the multiple pairs based on the difference between the mean of the absolute values of at least one set of quantized data among the multiple sets of quantized data and the mean of the absolute values of the set of data to be quantized may include:

selecting a set of first quantized data from the multiple sets of quantized data, where the difference between the mean of the absolute values of the set of first quantized data and the mean of the absolute values of the set of data to be quantized is smaller than the difference between the mean of the absolute values of multiple sets of second quantized data and the mean of the absolute values of the set of data to be quantized, the multiple sets of second quantized data being the sets of quantized data among the multiple sets other than the set of first quantized data;

selecting, from the multiple pairs of truncation thresholds, the pair of truncation thresholds corresponding to the set of first quantized data.

The mean of the absolute values of each set of quantized data is determined, together with its difference from the mean of the absolute values of the set of data to be quantized; the set of quantized data with the smallest difference is determined as the first quantized data, and the other sets of quantized data are determined as the second quantized data. That is, the difference between the absolute-value mean of the first quantized data and that of the data to be quantized is smaller than the difference between the absolute-value mean of the second quantized data and that of the data to be quantized. The pair of truncation thresholds used to quantize the data to be quantized into the first quantized data is determined as the pair of truncation thresholds for quantizing the data to be quantized.

For example, after a pair of thresholds -|T| to |T| is determined, data outside the range -|T| to |T| will be set to -|T| or |T|, where |T| is the determined truncation positive value and -|T| the determined truncation negative value. For example, the data within the truncation range corresponding to a pair of truncation thresholds is quantized according to the quantization parameters; a value to be quantized that lies outside the truncation range and is smaller than -|T| is quantized as the value -|T|, and a value outside the truncation range and larger than |T| is quantized as the value |T|. By narrowing the value range of the data to be quantized with truncation thresholds in this way, the precision of the quantized data can be improved.
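A minimal sketch of this selection procedure, combining symmetric quantization onto signed 8-bit integers with the absolute-value-mean criterion; the int8 target and the ten evenly spaced candidate thresholds are illustrative assumptions, as the disclosure leaves these choices open:

    import numpy as np

    def quantize_sym(data, t, n_bits=8):
        # Clip to [-t, t], map linearly onto signed n-bit integers,
        # and return the dequantized view for comparison with the input
        qmax = 2 ** (n_bits - 1) - 1
        scale = t / qmax
        q = np.round(np.clip(data, -t, t) / scale)
        return q * scale

    def pick_thresholds(data, num_candidates=10):
        data_mean = np.abs(data).mean()
        absmax = np.abs(data).max()
        best_t, best_diff = absmax, float("inf")
        for i in range(num_candidates):
            t = absmax * (i + 1) / num_candidates   # candidate truncation positive value
            q_mean = np.abs(quantize_sym(data, t)).mean()
            diff = abs(q_mean - data_mean) / data_mean   # evaluation metric from the text
            if diff < best_diff:
                best_t, best_diff = t, diff
        return best_t, -best_t   # the symmetric pair (|T|, -|T|)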
Step 102: quantize the set of data to be quantized according to the determined pair of truncation thresholds to obtain quantized first data.

For example, after the optimal pair of truncation thresholds is selected, it may be used to quantize the set of data to be quantized to obtain the quantized first data, including: truncating values in the set of data to be quantized that are larger than the truncation positive value to the truncation positive value, and truncating values smaller than the truncation negative value to the truncation negative value.

Step 103: continue to perform the winograd convolution process according to the quantized first data to obtain a quantized winograd convolution result.

Step 104: perform dequantization on the quantized winograd convolution result to obtain the winograd convolution result.
In a possible implementation, the data to be quantized is one of the input data, the winograd forward-transform result of the input data, and the element-wise product.

Exemplarily, if the data to be quantized is the input data, the winograd convolution process may be:

quantizing the input data with the determined pair of truncation thresholds to obtain quantized input data; decomposing the winograd forward transform of the quantized input data into summation operations and computing to obtain the winograd forward-transform result of the quantized input data; performing the element-wise multiplication of the winograd forward-transform results of the quantized input data to obtain the element-wise product; decomposing the winograd inverse transform of the element-wise product into summation operations to obtain the quantized winograd convolution result; and dequantizing the quantized winograd convolution result to obtain the winograd convolution result.

Exemplarily, if the data to be quantized is the winograd forward-transform result of the input data, the winograd convolution process may be:

decomposing the winograd forward transform of the input data into summation operations and computing to obtain the winograd forward-transform result of the input data; quantizing the winograd forward-transform result of the input data with the determined pair of truncation thresholds to obtain the quantized winograd forward-transform result; performing the element-wise multiplication of the quantized winograd forward-transform results to obtain the element-wise product; decomposing the winograd inverse transform of the element-wise product into summation operations to obtain the quantized winograd convolution result; and dequantizing the quantized winograd convolution result to obtain the winograd convolution result.

Exemplarily, if the data to be quantized is the element-wise product, the winograd convolution process may be:

decomposing the winograd forward transform of the input data into summation operations and computing to obtain the winograd forward-transform result of the input data; performing the element-wise multiplication of the winograd forward-transform results to obtain the element-wise product; quantizing the element-wise product with the determined pair of truncation thresholds to obtain the quantized element-wise product; decomposing the winograd inverse transform of the quantized element-wise product into summation operations to obtain the quantized winograd convolution result; and dequantizing the quantized winograd convolution result to obtain the winograd convolution result.
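The three variants share one skeleton, differing only in which intermediate is quantized. The sketch below shows the first variant (quantizing the input data) for a single 4×4 tile, reusing pick_thresholds and the standard F(2×2, 3×3) matrices B_T, G, A_T from the earlier sketches (an assumption, since the disclosure does not fix the matrices); the final dequantization is exact here because all transforms are linear:

    # Assumes B_T, G, A_T and pick_thresholds from the sketches above
    B = B_T.T
    d2 = np.random.randn(4, 4)      # a 4x4 input tile
    g2 = np.random.randn(3, 3)      # a 3x3 weight tile

    t, _ = pick_thresholds(d2.ravel())
    scale = t / 127
    d_int = np.round(np.clip(d2, -t, t) / scale)   # quantized first data (integers)

    V = B_T @ d_int @ B                 # forward transform of the quantized input
    U = G @ g2 @ G.T                    # forward transform of the weights
    S_q = A_T @ (U * V) @ A_T.T         # quantized winograd convolution result (2x2)
    S = S_q * scale                     # dequantization

    # S equals the winograd convolution of the clipped-and-rounded input
    def conv2d_valid(x, w):
        out = np.zeros((2, 2))
        for i in range(2):
            for j in range(2):
                out[i, j] = (x[i:i + 3, j:j + 3] * w).sum()
        return out
    assert np.allclose(S, conv2d_valid(d_int * scale, g2))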
In a possible implementation, decomposing the winograd forward transform of the input data into summation operations and computing to obtain the winograd forward-transform result of the input data may include:

decomposing the input data into multiple first sub-tensors, and performing the winograd forward transform on the multiple first sub-tensors and summing to obtain the winograd forward-transform result of the input data,

where the number of the multiple first sub-tensors is the same as the number of non-zero elements of the input data, and each first sub-tensor of the multiple first sub-tensors has one element identical to the element at the corresponding position of the input data, with all its other elements being 0.

For example, suppose the input neurons are expressed as the 4×4 matrix

    d = | d00 d01 d02 d03 |
        | d10 d11 d12 d13 |
        | d20 d21 d22 d23 |
        | d30 d31 d32 d33 |

The input neuron is a 4×4 matrix containing 16 elements, so the input data can be decomposed into 16 first sub-tensors.

Then, according to the decomposition of the present disclosure, the 16 first sub-tensors are d_00, d_01, ..., d_33, where the first sub-tensor d_ij is the 4×4 matrix whose element at row i, column j equals the element of the input at the same position and whose other elements are all 0 (the original shows the 16 matrices as an image).

That each first sub-tensor has one element identical to the element at the corresponding position of the input data, with all other elements 0, means the following: taking the first sub-tensor d_00 as an example, the element at the first row and first column equals the input neuron's element at the first row and first column, and all its other elements are 0; the other first sub-tensors have the same property.

It should be noted that the above decomposition is only an example of the present disclosure and does not limit it in any way. For example, if the input data contains elements whose value is 0, the number of first sub-tensors obtained by the decomposition may be smaller than the number of elements of the input data; for example, the number of first sub-tensors equals the number of non-zero elements of the input data.
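A minimal sketch of this decomposition for a tile (zero elements are simply skipped, so a dense 4×4 tile yields all 16 one-hot sub-tensors):

    import numpy as np

    def first_subtensors(d):
        # One sub-tensor per non-zero element: the same value at the same
        # position, all other elements 0
        subs = []
        for i in range(d.shape[0]):
            for j in range(d.shape[1]):
                if d[i, j] != 0:
                    s = np.zeros_like(d)
                    s[i, j] = d[i, j]
                    subs.append(s)
        return subs

    d = np.arange(1.0, 17.0).reshape(4, 4)
    subs = first_subtensors(d)
    assert np.allclose(sum(subs), d)   # the sub-tensors sum back to the input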
In a possible implementation, performing the winograd forward transform on the multiple first sub-tensors and summing to obtain the winograd forward-transform result of the input data may include the following process:

obtaining the winograd forward-transform result of the first meta sub-tensor corresponding to each first sub-tensor, where the first meta sub-tensor corresponding to a first sub-tensor is a tensor whose element at the first position is 1, the first position being the same position occupied by the non-zero element in the first sub-tensor;

multiplying the non-zero element value of the first sub-tensor, as a coefficient, by the winograd forward-transform result of the corresponding first meta sub-tensor to obtain the winograd forward-transform result of the first sub-tensor;

summing the winograd forward-transform results of the multiple first sub-tensors to obtain the winograd forward-transform result of the input data.

Still taking the first sub-tensor d_00 as an example, its corresponding first meta sub-tensor may be

    E00 = | 1 0 0 0 |
          | 0 0 0 0 |
          | 0 0 0 0 |
          | 0 0 0 0 |

That is, the first meta sub-tensor is obtained by extracting the non-zero element value from the first sub-tensor; the non-zero element value can serve as the coefficient of the first meta sub-tensor.

The winograd forward-transform result of the first meta sub-tensor corresponding to each first sub-tensor may be obtained in advance through the following process: for each first sub-tensor, multiply the corresponding first meta sub-tensor on the left by the forward-transform left-multiply matrix and on the right by the forward-transform right-multiply matrix to obtain the winograd forward-transform result of the first meta sub-tensor.

For matrices of different sizes, the form of the corresponding first meta sub-tensor is fixed, and the corresponding forward-transform left-multiply and right-multiply matrices are also fixed.

The winograd forward-transform results of the first meta sub-tensors can therefore be computed in advance, following the process above. For example, still taking d_00 as an example, the winograd forward-transform result of its first meta sub-tensor is B^T E00 B [the resulting 4×4 matrix, whose elements are 0 and ±1, is shown as an image in the original]. As another example, for d_01, the winograd forward-transform result of its first meta sub-tensor is B^T E01 B [likewise shown as an image in the original].

Since the element values of the forward-transform left-multiply and right-multiply matrices are all 0 and ±1, and the elements of the first meta sub-tensor are 0 or 1, the elements of the winograd forward-transform result of the first meta sub-tensor are also 0 and ±1. The matrix multiplication can therefore be decomposed into additions.

Computing the winograd forward-transform results of the first meta sub-tensors involves many multiplications. With the approach of the present disclosure, the precomputed winograd forward-transform results of first meta sub-tensors of various sizes can be stored in the computing device, so that during actual computation they can be fetched directly without repeated computation, shortening computation time and saving computing resources.

After the winograd forward-transform result of the first meta sub-tensor corresponding to a first sub-tensor is obtained, the non-zero element value of the first sub-tensor can be multiplied by that result to obtain the winograd forward-transform result of the first sub-tensor. For example, still taking d_00 as an example, its winograd forward-transform result is d00 · (B^T E00 B); as another example, the winograd forward-transform result of d_01 is d01 · (B^T E01 B) [both shown as images in the original].

The winograd forward-transform results of all first sub-tensors are computed through the above process, and the winograd forward-transform result of the input data is obtained by summing them:

    B^T dB = d00 · (B^T E00 B) + d01 · (B^T E01 B) + ... + d33 · (B^T E33 B)    (1), (2)

where E_ij denotes the first meta sub-tensor whose element at position (i, j) is 1 (the original displays the expanded equations (1) and (2) as images).

Since the elements of the winograd forward-transform results of the first meta sub-tensors are also 0 and ±1, the right-hand sides of equations (1) and (2) involve only summation.

According to the above implementation of the present disclosure, the input data is decomposed into multiple first sub-tensors, and the winograd forward-transform result of the input data can be obtained by summation from the precomputed winograd forward-transform results of the first meta sub-tensors and the non-zero element values of the first sub-tensors.
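A sketch of the precomputed-table scheme, again assuming the standard F(2×2, 3×3) matrix B_T from the earlier sketch. Because the transform is bilinear, the weighted sum of the precomputed meta-sub-tensor transforms reproduces B^T dB exactly:

    # Assumes B_T (4x4) from the earlier sketch
    B = B_T.T
    table = {}
    for i in range(4):
        for j in range(4):
            E = np.zeros((4, 4))
            E[i, j] = 1.0                  # first meta sub-tensor for position (i, j)
            table[(i, j)] = B_T @ E @ B    # precomputed offline; entries are 0 or ±1

    def forward_transform_by_sums(d):
        # Only scalar coefficients times 0/±1 tables remain, i.e. additions
        out = np.zeros((4, 4))
        for i in range(4):
            for j in range(4):
                if d[i, j] != 0:
                    out += d[i, j] * table[(i, j)]
        return out

    d = np.random.randn(4, 4)
    assert np.allclose(forward_transform_by_sums(d), B_T @ d @ B)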
After the winograd forward-transform result of the input neurons has been obtained through the decomposition into summation described above, the winograd forward-transform result of the weights can be computed; it may be computed by conventional matrix multiplication, or by the same decomposition into summation described above.

After the winograd forward-transform results of the input data (input neurons, weights) are obtained, the element-wise multiplication of the winograd forward-transform results of the input data can be performed to obtain the element-wise product. Element-wise multiplication means that the data at corresponding positions of the two tensors are multiplied, and each product is taken as the value at the corresponding position of the element-wise product.

Suppose the winograd forward-transform result B^T d_{4×4} B of the input neurons is denoted D_{4×4}, a 4×4 matrix with elements D_00 through D_33, and the winograd forward-transform result GgG^T of the weights is denoted G_{4×4}, a 4×4 matrix with elements G_00 through G_33 (both matrices are shown as images in the original). Then the element-wise product is the 4×4 matrix whose element at each position (i, j) is G_ij · D_ij (shown as an image in the original).

The winograd convolution result of the input data can be expressed as S_{4×4} = A^T(G_{4×4} ⊙ D_{4×4})A. The functional processing unit of the present disclosure can decompose A^T(G_{4×4} ⊙ D_{4×4})A into summation operations and compute the winograd convolution result of the input data, which further saves computation time and reduces energy consumption.
In a possible implementation, decomposing the winograd inverse transform of the element-wise product into summation operations to obtain the winograd convolution result of the input data may include:

decomposing the element-wise product into multiple second sub-tensors, and performing the winograd inverse transform on the multiple second sub-tensors and summing to obtain the winograd convolution result of the input data,

where the number of the multiple second sub-tensors is the same as the number of non-zero elements of the element-wise product, and each second sub-tensor of the multiple second sub-tensors has one element identical to the element at the corresponding position of the element-wise product, with all its other elements being 0.

Suppose the element-wise product is a 4×4 matrix C with elements C_00 through C_33 (shown as an image in the original). The element-wise product is decomposed into multiple second sub-tensors, for example 16, where the second sub-tensor for position (i, j) is the 4×4 matrix whose element at (i, j) equals C_ij and whose other elements are all 0 (the 16 matrices are shown as an image in the original).
After the decomposition, the winograd inverse transform can be performed on the multiple second sub-tensors and the results summed to obtain the winograd convolution result of the input data.

In a possible implementation, performing the winograd inverse transform on the multiple second sub-tensors and summing to obtain the winograd convolution result of the input data may include the following process:

obtaining the winograd inverse-transform result of the second meta sub-tensor corresponding to each second sub-tensor, where the second meta sub-tensor corresponding to a second sub-tensor is a tensor whose element at the second position is 1, the second position being the same position occupied by the non-zero element in the second sub-tensor;

multiplying the non-zero element value of the second sub-tensor, as a coefficient, by the winograd inverse-transform result of the corresponding second meta sub-tensor to obtain the winograd inverse-transform result of the second sub-tensor;

summing the winograd inverse-transform results of the multiple second sub-tensors to obtain the winograd convolution result of the input data.

The second meta sub-tensor corresponding to a second sub-tensor is determined in the same way as the first meta sub-tensor above, which is not repeated here. The winograd inverse-transform result of the second meta sub-tensor is obtained in advance through the following process: for each second sub-tensor, multiply its corresponding second meta sub-tensor on the left by the inverse-transform left-multiply matrix and on the right by the inverse-transform right-multiply matrix to obtain the winograd inverse-transform result of the second meta sub-tensor.

For matrices of different sizes, the form of the corresponding second meta sub-tensor is fixed, and the corresponding inverse-transform left-multiply and right-multiply matrices are also fixed, so the winograd inverse-transform results of the second meta sub-tensors can be computed in advance, following the process above. For the example given above, the inverse-transform left-multiply matrix is a 2×4 matrix, which may for example be:

    A^T = | 1  1  1  0 |
          | 0  1 -1 -1 |

and the inverse-transform right-multiply matrix is a 4×2 matrix, which may for example be its transpose:

    A = | 1  0 |
        | 1  1 |
        | 1 -1 |
        | 0 -1 |

(the original shows both matrices as images; the values above are the standard F(2×2, 3×3) choice, given here as an example only).

The dimensions of the inverse-transform matrices may be determined by the dimensions of the input neurons, the dimensions of the weights, and the convolution stride; the above is only an example and does not limit the present disclosure in any way.

The inverse-transform matrices are composed of elements of the form shown as an image in the original (values such as 0, ±1, and simple fractions like ±1/2), so the matrix multiplication of the inverse transform can be implemented by decomposing it into addition and shift operations. Multiplying the inverse-transform matrices by the second meta sub-tensor yields the winograd inverse-transform result of the second meta sub-tensor, whose element values likewise consist of 0, ±1, and simple fractions (shown as an image in the original); the fractions can be computed with simple shift operations, which still saves computation time compared with multiplications.

For the specific process of multiplying the non-zero element value of the second sub-tensor, as a coefficient, by the winograd inverse-transform result of the corresponding second meta sub-tensor to obtain the winograd inverse-transform result of the second sub-tensor, and of summing the winograd inverse-transform results of the multiple second sub-tensors to obtain the winograd convolution result of the input data, refer to the description above. The only difference is that the winograd inverse-transform result of the second meta sub-tensor is not composed entirely of 0 and ±1; but since its fractions can be computed with simple shift operations, the present disclosure, after decomposing the ordinary inverse-transform process, can still save computation time and reduce energy consumption compared with multiplication.

According to the above implementation of the present disclosure, the element-wise product is decomposed into multiple second sub-tensors, and the winograd convolution result of the input data can be obtained by summation from the precomputed winograd inverse-transform results of the second meta sub-tensors and the non-zero element values of the second sub-tensors.
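The same table-driven decomposition applies to the inverse transform. A sketch assuming the example A matrices above (for this 4×4 → 2×2 case the table entries happen to be 0 and ±1; other tile sizes also yield fractions such as ±1/2, computable by shifts):

    # Assumes A_T (2x4) from the earlier sketch
    A = A_T.T
    inv_table = {}
    for i in range(4):
        for j in range(4):
            E = np.zeros((4, 4))
            E[i, j] = 1.0                      # second meta sub-tensor
            inv_table[(i, j)] = A_T @ E @ A    # precomputed offline

    def inverse_transform_by_sums(M):
        # Weighted sum of precomputed tables replaces the two matrix multiplies
        out = np.zeros((2, 2))
        for (i, j), tab in inv_table.items():
            if M[i, j] != 0:
                out += M[i, j] * tab
        return out

    M = np.random.randn(4, 4)                  # an element-wise product
    assert np.allclose(inverse_transform_by_sums(M), A_T @ M @ A)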
An embodiment of the present disclosure further provides a method for searching for truncation thresholds for symmetric quantization, in which determining multiple sets of quantized second data further includes:

determining a first truncation positive value based on the maximum absolute value, a predetermined total number of searches, and the current search order;

determining a first set of quantized data by quantizing the set of data to be quantized using a first pair of truncation thresholds, the first pair of truncation thresholds including the first truncation positive value and a first truncation negative value opposite to the first truncation positive value; and

determining a first difference between the mean of the absolute values of the first set of quantized data and the mean of the absolute values of the set of data to be quantized.

For example, the mean of the absolute values of the data to be quantized and the maximum absolute value in the data to be quantized are determined, where the mean of the absolute values is the sum of the absolute values of all data in the data to be quantized divided by the number of elements. In addition, the minimum mean difference is initialized (e.g., to the maximum value of the floating-point type), and the search order i of the loop is initialized (e.g., to 0). In some embodiments, the search order i may also be initialized to half the total number of searches, i.e., the search starts from the middle, which improves search efficiency. According to embodiments of the present disclosure, one or more rounds of threshold search may be set, and each round may have the same or different total numbers of searches. In some embodiments, the total number of searches per round may be set between 10 and 32. Generally, the more searches, the longer the search takes and the more accurate the threshold found; however, once the total number of searches reaches a certain value, the search quality may no longer improve substantially.

Next, the first, coarse-grained round of truncation-threshold search begins. Exemplarily, the data to be quantized may be divided into 10 pairs of candidate truncation thresholds; the quantization process is performed with these 10 pairs in turn, and the best pair is determined from the difference of the absolute-value means of the data before and after quantization.

It is judged whether the current search order i is smaller than the predetermined total number of searches, i.e., whether the computation for all pairs of truncation thresholds has been completed as the pairs are selected in turn. If the current search order i is smaller than the predetermined total number of searches, a pair of truncation thresholds is determined based on i as -(maximum absolute value)/(predetermined total number of searches) × (i + 1) and (maximum absolute value)/(predetermined total number of searches) × (i + 1). The data to be quantized is quantized with this pair of thresholds to obtain the corresponding quantized data Quant_data_i, and the difference abs(Quant_data_mean_i - Data_mean)/Data_mean between the absolute-value mean Quant_data_mean_i of the quantized data and the absolute-value mean Data_mean of the data to be quantized is computed.

It is judged whether the computed difference is smaller than the current minimum difference. If so, the computed difference is set as the current minimum difference, the truncation threshold with the minimum difference is recorded, and the current search order i is incremented; otherwise, i is simply incremented. The preceding steps are executed in a loop until the current search order i reaches the predetermined total number of searches, upon which the first round of truncation-threshold search exits. After the first round of search, the truncation threshold with the smallest difference is determined as the optimal truncation threshold. The threshold-search process is thus: quantize the data to be quantized with multiple pairs of truncation thresholds; determine, among the multiple sets of quantized data, the set whose absolute-value mean differs least from that of the data to be quantized; and then select, from the multiple pairs of truncation thresholds, the pair corresponding to that set of quantized data.

Optionally, a second, fine-grained round of truncation-threshold search may be performed. The second round may follow the same method, except that it is carried out within a certain range around the optimal truncation threshold of the first round (e.g., between the truncation thresholds immediately before and after the selected threshold), further refining the result of the first round. For example, in the second round the interval between pairs of truncation thresholds may be (maximum absolute value × 2)/(first-round total number of searches × second-round total number of searches). After the second round, the fine-grained optimal truncation threshold is determined. With two rounds of search, a more accurate truncation threshold can be obtained, reducing the precision loss caused by quantization.
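A compact sketch of the two-round search just described, reusing quantize_sym from the earlier sketch; the round sizes follow the example in the text (10 candidates per round), while the remaining scaffolding is assumed:

    def search_round(data, lo, hi, n_steps, data_mean):
        # Try n_steps thresholds evenly spaced over (lo, hi]; return the best
        best_t, best_diff = hi, float("inf")
        for i in range(n_steps):
            t = lo + (hi - lo) * (i + 1) / n_steps
            q_mean = np.abs(quantize_sym(data, t)).mean()
            diff = abs(q_mean - data_mean) / data_mean
            if diff < best_diff:
                best_t, best_diff = t, diff
        return best_t

    data = np.random.randn(1024).astype(np.float32)
    data_mean = np.abs(data).mean()
    absmax = float(np.abs(data).max())

    t1 = search_round(data, 0.0, absmax, 10, data_mean)       # coarse round
    step = absmax / 10                                        # coarse-round spacing
    t2 = search_round(data, max(t1 - step, 0.0), t1 + step,   # fine round around t1
                      10, data_mean)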
An embodiment of the present disclosure provides a method for iteratively searching for the optimal truncation threshold.

For example, three pairs of truncation thresholds are determined: the maximum absolute value absmax among all data in the data F_x to be quantized may be determined, and the three pairs may be (-absmax/2, absmax/2), (-absmax·3/4, absmax·3/4), and (-absmax, absmax). The data to be quantized is quantized with these three pairs respectively, yielding three sets of quantized data (their symbols are shown as images in the original). The means of the absolute values are then computed respectively for F_x and for each set of quantized data, and the minimum difference diff_min is selected according to the difference formula between the quantized and original absolute-value means (the formula is shown as an image in the original and corresponds to the difference metric described above).

It is judged whether the minimum difference diff_min is smaller than a preset threshold. If not, three pairs of truncation thresholds are determined anew based on the selected pair (the value corresponding to diff_min is set as the new maximum absolute value), and the above process is repeated until diff_min is smaller than the predetermined threshold, upon which the iteration over truncation thresholds exits. In some embodiments, in addition to the stop condition that diff_min is smaller than the predetermined threshold, other iteration stop conditions may be set, such as a maximum number of iterations or reaching a predetermined minimum interval. In addition, embodiments of the present disclosure may also execute the iteration only once and directly take the pair of truncation thresholds corresponding to diff_min as the final truncation thresholds.
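A sketch of this iterative refinement, reusing quantize_sym from the earlier sketch; the stopping tolerance and the iteration cap are assumed values:

    def iterative_search(data, tol=1e-3, max_iters=20):
        data_mean = np.abs(data).mean()
        p = float(np.abs(data).max())        # current maximum absolute value
        best_t = p
        for _ in range(max_iters):
            candidates = []
            for frac in (0.5, 0.75, 1.0):    # the three pairs from the text
                t = p * frac
                q_mean = np.abs(quantize_sym(data, t)).mean()
                candidates.append((abs(q_mean - data_mean) / data_mean, t))
            diff_min, best_t = min(candidates)
            if diff_min < tol:               # iteration stop condition
                break
            p = best_t                       # re-center the three pairs on the winner
        return best_t, -best_t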
In some embodiments, the quantization parameters used when quantizing data with each pair of truncation thresholds may be determined through equations (1)-(3) (the equations are shown as images in the original), where p is the maximum absolute value of the data to be quantized, n is the number of binary bits after quantization, and S and f are quantization parameters. According to embodiments of the present disclosure, by choosing p as absmax/2, absmax·3/4, and absmax respectively, the quantization parameters S1, f1, S2, f2, S3, and f3 can be obtained, yielding the corresponding quantized data. Accordingly, after a pair of truncation thresholds is selected, the S and f corresponding to that pair are taken directly as the quantization parameters for the data to be quantized.
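Equations (1)-(3) survive only as images in this text, so the sketch below substitutes a generic power-of-two decomposition: a shift exponent f and scale S that map the clipped range [-p, p] onto signed n-bit integers. This is an assumed stand-in for the elided equations, not a reconstruction of them:

    import math

    def quant_params(p, n=8):
        # Assumed form: power-of-two scale S = 2^f covering [-p, p] with n bits
        qmax = 2 ** (n - 1) - 1
        f = math.ceil(math.log2(p / qmax)) if p > 0 else 0
        S = 2.0 ** f
        return S, f

    absmax = 4.0                          # example maximum absolute value
    for frac in (0.5, 0.75, 1.0):         # p = absmax/2, absmax*3/4, absmax
        S, f = quant_params(frac * absmax)
        print(frac, S, f)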
According to the data processing method of the present disclosure, a pair of truncation thresholds is determined from multiple pairs of truncation thresholds according to the mean of the absolute values of quantized data obtained by quantizing a set of data to be quantized using the multiple pairs of truncation thresholds; a set of data to be quantized in the winograd convolution is quantized according to the determined pair of truncation thresholds to obtain quantized first data; the winograd convolution process is continued according to the quantized first data to obtain a quantized winograd convolution result; and dequantization is performed on the quantized winograd convolution result to obtain the winograd convolution result. This improves the precision of quantization while saving the computation time of the winograd convolution and reducing energy consumption.

It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations; however, those skilled in the art should know that the present disclosure is not limited by the described order of actions, because according to the present disclosure some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also know that the embodiments described in the specification are all optional embodiments, and the actions and modules involved are not necessarily required by the present disclosure.

It should further be noted that although the steps in the flowchart of FIG. 1 are shown in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in FIG. 1 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and whose execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
FIG. 2 shows a block diagram of a data processing device according to an embodiment of the present disclosure. As shown in FIG. 2, the device may include:

a first determining module 201, configured to determine a pair of truncation thresholds from multiple pairs of truncation thresholds according to the mean of the absolute values of quantized data obtained by quantizing a set of data to be quantized using the multiple pairs of truncation thresholds, where the set of data to be quantized is a set of data in a winograd convolution process, and each pair of truncation thresholds in the multiple pairs includes a symmetric truncation positive value and truncation negative value;

a first quantization module 202, configured to quantize the set of data to be quantized according to the determined pair of truncation thresholds to obtain quantized first data;

a convolution module 203, configured to continue to perform the winograd convolution process according to the quantized first data to obtain a quantized winograd convolution result;

a dequantization module 204, configured to perform dequantization on the quantized winograd convolution result to obtain a winograd convolution result.

According to the data processing device of the present disclosure, a pair of truncation thresholds is determined from multiple pairs of truncation thresholds according to the mean of the absolute values of quantized data obtained by quantizing a set of data to be quantized using the multiple pairs of truncation thresholds; a set of data to be quantized in the winograd convolution is quantized according to the determined pair of truncation thresholds to obtain quantized first data; the winograd convolution process is continued according to the quantized first data to obtain a quantized winograd convolution result; and dequantization is performed on the quantized winograd convolution result to obtain the winograd convolution result. This improves the precision of quantization while saving the computation time of the winograd convolution and reducing energy consumption.
In a possible implementation, the first determining module 201 may be further configured to:

determine multiple sets of quantized second data by quantizing the set of data to be quantized using the multiple pairs of truncation thresholds respectively;

select a pair of truncation thresholds from the multiple pairs based on the difference between the mean of the absolute values of at least one set of quantized data among the multiple sets of quantized data and the mean of the absolute values of the set of data to be quantized.

In a possible implementation, the winograd convolution process may include:

decomposing the winograd forward transform of the input data into summation operations and performing computation to obtain the winograd forward-transform result of the input data;

performing the element-wise multiplication of the winograd forward-transform results of the input data to obtain an element-wise product;

decomposing the winograd inverse transform of the element-wise product into summation operations to obtain the winograd convolution result.

In a possible implementation, the data to be quantized is one of the input data, the winograd forward-transform result of the input data, and the element-wise product.

In a possible implementation, the device may further include:

a second determining module, which may be configured to determine the maximum absolute value among the absolute values of all data in the set of data to be quantized;

a third determining module, which may be configured to determine the multiple pairs of truncation thresholds based on the maximum absolute value.

In a possible implementation, the first determining module 201 may be further configured to:

determine a first truncation positive value based on the maximum absolute value, a predetermined total number of searches, and a current search order;

determine a first set of quantized data by quantizing the set of data to be quantized using a first pair of truncation thresholds, the first pair of truncation thresholds including the first truncation positive value and a first truncation negative value opposite to the first truncation positive value; and

determine a first difference between the mean of the absolute values of the first set of quantized data and the mean of the absolute values of the set of data to be quantized.

In a possible implementation, the first determining module 201 may be further configured to:

select a set of first quantized data from the multiple sets of quantized data, where the difference between the mean of the absolute values of the set of first quantized data and the mean of the absolute values of the set of data to be quantized is smaller than the difference between the mean of the absolute values of multiple sets of second quantized data and the mean of the absolute values of the set of data to be quantized, the multiple sets of second quantized data being the sets of quantized data among the multiple sets other than the set of first quantized data;

select, from the multiple pairs of truncation thresholds, the pair of truncation thresholds corresponding to the set of first quantized data.

In a possible implementation, the device may further include:

a fourth determining module, configured to determine a truncation search range associated with the selected pair of truncation thresholds;

a fifth determining module, configured to determine new multiple pairs of truncation thresholds within the truncation search range;

a second quantization module, configured to determine new multiple sets of quantized data by quantizing the set of data to be quantized using the new multiple pairs of truncation thresholds respectively;

a selection module, configured to select a new pair of truncation thresholds from the new multiple pairs of truncation thresholds based on the difference between the mean of the absolute values of each set of quantized data among the new multiple sets of quantized data and the mean of the absolute values of the set of data to be quantized.

In a possible implementation, decomposing the winograd forward transform of the input data into summation operations and performing computation to obtain the winograd forward-transform result of the input data may include:

decomposing the input data into multiple first sub-tensors, and performing the winograd forward transform on the multiple first sub-tensors and summing to obtain the winograd forward-transform result of the input data,

where the number of the multiple first sub-tensors is the same as the number of non-zero elements of the input data, and each first sub-tensor of the multiple first sub-tensors has one element identical to the element at the corresponding position of the input data, with all its other elements being 0.

In a possible implementation, decomposing the winograd inverse transform of the element-wise product into summation operations to obtain the winograd convolution result of the input data may include:

decomposing the element-wise product into multiple second sub-tensors, and performing the winograd inverse transform on the multiple second sub-tensors and summing to obtain the winograd convolution result of the input data,

where the number of the multiple second sub-tensors is the same as the number of non-zero elements of the element-wise product, and each second sub-tensor of the multiple second sub-tensors has one element identical to the element at the corresponding position of the element-wise product, with all its other elements being 0.

In a possible implementation, the input data may be at least one of input neurons, weights, and gradients.

In some embodiments of the present disclosure, the functions or modules of the device provided in the embodiments of the present disclosure can be used to execute the method described in the method embodiments above; for its specific implementation and technical effects, refer to the description of the method embodiments above, which is not repeated here for brevity.
It should be understood that the foregoing device embodiments are only illustrative, and the device of the present disclosure may also be implemented in other ways. For example, the division of units/modules in the foregoing embodiments is only a division by logical function, and there may be other divisions in actual implementation. For example, multiple units, modules, or components may be combined or integrated into another system, or some features may be omitted or not executed.

In addition, unless otherwise specified, the functional units/modules in the embodiments of the present disclosure may be integrated into one unit/module, each unit/module may exist alone physically, or two or more units/modules may be integrated together. The integrated units/modules may be implemented in the form of hardware or in the form of software program modules.

If the integrated units/modules are implemented in the form of hardware, the hardware may be digital circuits, analog circuits, and so on. Physical implementations of the hardware structure include, but are not limited to, transistors, memristors, and the like. Unless otherwise specified, the artificial intelligence processor may be any appropriate hardware processor, such as a CPU, GPU, FPGA, DSP, or ASIC. Unless otherwise specified, the storage unit may be any appropriate magnetic or magneto-optical storage medium, such as resistive random access memory (RRAM), dynamic random access memory (DRAM), static random access memory (SRAM), enhanced dynamic random access memory (EDRAM), high-bandwidth memory (HBM), or hybrid memory cube (HMC).

If the integrated units/modules are implemented in the form of software program modules and sold or used as independent products, they may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present disclosure in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present disclosure. The foregoing memory includes various media that can store program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), removable hard disk, magnetic disk, or optical disk.
In a possible implementation, an artificial intelligence chip including the above data processing device is also disclosed.

In a possible implementation, a board is also disclosed, which includes a storage device, an interface device, a control device, and the above artificial intelligence chip, where the artificial intelligence chip is connected to the storage device, the control device, and the interface device respectively; the storage device is used to store data; the interface device is used to implement data transmission between the artificial intelligence chip and external equipment; and the control device is used to monitor the state of the artificial intelligence chip.

FIG. 3 shows a structural block diagram of a board according to an embodiment of the present disclosure. Referring to FIG. 3, in addition to the chip 389, the board may include other supporting components, including but not limited to: a storage device 390, an interface device 391, and a control device 392.

The storage device 390 is connected to the artificial intelligence chip through a bus and is used to store data. The storage device may include multiple groups of storage units 393. Each group of storage units is connected to the artificial intelligence chip through a bus. It can be understood that each group of storage units may be DDR SDRAM (Double Data Rate SDRAM, double data rate synchronous dynamic random access memory).

DDR doubles the speed of SDRAM without increasing the clock frequency; DDR allows data to be read on both the rising and falling edges of the clock pulse, so its speed is twice that of standard SDRAM. In one embodiment, the storage device may include four groups of storage units, each of which may include multiple DDR4 granules (chips). In one embodiment, the artificial intelligence chip may internally include four 72-bit DDR4 controllers, in which 64 bits are used for data transmission and 8 bits for ECC checking. It can be understood that when DDR4-3200 granules are used in each group of storage units, the theoretical bandwidth of data transmission can reach 25600 MB/s.

In one embodiment, each group of storage units includes multiple double data rate synchronous dynamic random access memories arranged in parallel. DDR can transmit data twice within one clock cycle. A controller for controlling DDR is provided in the chip, for controlling the data transmission to and data storage in each storage unit.

The interface device is electrically connected to the artificial intelligence chip. The interface device is used to implement data transmission between the artificial intelligence chip and an external device (such as a server or a computer). For example, in one embodiment, the interface device may be a standard PCIE interface; for instance, the data to be processed is transferred from the server to the chip through the standard PCIE interface, implementing data transfer. Preferably, when the PCIE 3.0 X 16 interface is used for transmission, the theoretical bandwidth can reach 16000 MB/s. In another embodiment, the interface device may also be another interface; the present disclosure does not limit the specific form of the other interfaces, as long as the interface unit can implement the transfer function. In addition, the computation results of the artificial intelligence chip are still transmitted back to the external device (such as a server) by the interface device.

The control device is electrically connected to the artificial intelligence chip. The control device is used to monitor the state of the artificial intelligence chip. Specifically, the artificial intelligence chip and the control device may be electrically connected through an SPI interface. The control device may include a micro controller unit (MCU). The artificial intelligence chip may include multiple processing chips, multiple processing cores, or multiple processing circuits, and may drive multiple loads; it may therefore be in different working states such as multi-load and light-load. The control device can regulate the working states of the multiple processing chips, multiple processing cores, and/or multiple processing circuits in the artificial intelligence chip.

In a possible implementation, an electronic device including the above artificial intelligence chip is disclosed. The electronic device includes a data processing device, robot, computer, printer, scanner, tablet, smart terminal, mobile phone, dashcam, navigator, sensor, camera, server, cloud server, still camera, video camera, projector, watch, earphone, mobile storage, wearable device, vehicle, household appliance, and/or medical device. The vehicles include airplanes, ships, and/or cars; the household appliances include televisions, air conditioners, microwave ovens, refrigerators, rice cookers, humidifiers, washing machines, electric lamps, gas stoves, and range hoods; the medical devices include nuclear magnetic resonance instruments, B-ultrasound instruments, and/or electrocardiographs.
An embodiment of the present disclosure also provides a computer-readable storage medium on which computer program instructions are stored, where the computer program instructions, when executed by a processor, implement the above method. The computer-readable storage medium may be a non-volatile computer-readable storage medium.

An embodiment of the present disclosure also provides an electronic device, including: a processor; and a memory for storing processor-executable instructions, where the processor is configured to call the instructions stored in the memory to execute the above method.
FIG. 4 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure. For example, the electronic device 800 may be a mobile phone, computer, digital broadcasting terminal, messaging device, game console, tablet device, medical device, fitness device, personal digital assistant, or other terminal.

Referring to FIG. 4, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.

The processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the above method. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and other components; for example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.

The memory 804 is configured to store various types of data to support operation on the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phone book data, messages, pictures, videos, and so on. The memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.

The power component 806 provides power for the various components of the electronic device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.

The multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel; the touch sensors may sense not only the boundary of a touch or slide action but also the duration and pressure related to the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera, which can receive external multimedia data when the electronic device 800 is in an operation mode such as shooting mode or video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.

The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operation mode such as call mode, recording mode, or voice recognition mode. The received audio signals may be further stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 also includes a speaker for outputting audio signals.

The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, and so on. These buttons may include, but are not limited to, a home button, volume buttons, a start button, and a lock button.

The sensor component 814 includes one or more sensors for providing state evaluations of various aspects for the electronic device 800. For example, the sensor component 814 can detect the on/off state of the electronic device 800 and the relative positioning of components, for example the display and the keypad of the electronic device 800; the sensor component 814 can also detect a position change of the electronic device 800 or of one of its components, the presence or absence of contact between the user and the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and the temperature change of the electronic device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, gyroscope sensor, magnetic sensor, pressure sensor, or temperature sensor.

The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.

In an exemplary embodiment, the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components, to perform the above method.

In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the above method.
FIG. 5 shows a block diagram of an electronic device 1900 according to an embodiment of the present disclosure. For example, the electronic device 1900 may be provided as a server. Referring to FIG. 5, the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and memory resources represented by a memory 1932 for storing instructions executable by the processing component 1922, such as application programs. The application programs stored in the memory 1932 may include one or more modules each corresponding to a set of instructions. In addition, the processing component 1922 is configured to execute instructions to perform the above method.

The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.

In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the above method.

In the above embodiments, the description of each embodiment has its own emphasis; for a part not detailed in one embodiment, refer to the relevant descriptions of other embodiments. The technical features of the above embodiments can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in the combination of these technical features, it should be considered as within the scope of this specification.
The foregoing may be better understood in light of the following clauses:

Clause A1. A data processing method, including: determining a pair of truncation thresholds from multiple pairs of truncation thresholds according to the mean of the absolute values of quantized data obtained by quantizing a set of data to be quantized using the multiple pairs of truncation thresholds, where the set of data to be quantized is a set of data in a winograd convolution process, and each pair of truncation thresholds in the multiple pairs includes a symmetric truncation positive value and truncation negative value; quantizing the set of data to be quantized according to the determined pair of truncation thresholds to obtain quantized first data; continuing to perform the winograd convolution process according to the quantized first data to obtain a quantized winograd convolution result; and performing dequantization on the quantized winograd convolution result to obtain a winograd convolution result.

Clause A2. The method according to clause A1, where determining a pair of truncation thresholds from the multiple pairs of truncation thresholds according to the mean of the absolute values of quantized data obtained by quantizing a set of data to be quantized using multiple pairs of truncation thresholds includes: determining multiple sets of quantized second data by quantizing the set of data to be quantized using the multiple pairs of truncation thresholds respectively; and selecting a pair of truncation thresholds from the multiple pairs of truncation thresholds based on the difference between the mean of the absolute values of at least one set of quantized data among the multiple sets of quantized data and the mean of the absolute values of the set of data to be quantized.

Clause A3. The method according to clause A1 or A2, where the winograd convolution process includes: decomposing the winograd forward transform of input data into summation operations and performing computation to obtain the winograd forward-transform result of the input data; performing element-wise multiplication of the winograd forward-transform results of the input data to obtain an element-wise product; and decomposing the winograd inverse transform of the element-wise product into summation operations to obtain the winograd convolution result.

Clause A4. The method according to clause A3, where the data to be quantized is one of the input data, the winograd forward-transform result of the input data, and the element-wise product.

Clause A5. The method according to any one of clauses A1 to A4, further including: determining the maximum absolute value among the absolute values of all data in the set of data to be quantized; and determining the multiple pairs of truncation thresholds based on the maximum absolute value.

Clause A6. The method according to any one of clauses A1 to A5, where determining multiple sets of quantized second data further includes: determining a first truncation positive value based on the maximum absolute value, a predetermined total number of searches, and a current search order; determining a first set of quantized data by quantizing the set of data to be quantized using a first pair of truncation thresholds, the first pair of truncation thresholds including the first truncation positive value and a first truncation negative value opposite to the first truncation positive value; and determining a first difference between the mean of the absolute values of the first set of quantized data and the mean of the absolute values of the set of data to be quantized.

Clause A7. The method according to clause A2, where selecting a pair of truncation thresholds from the multiple pairs of truncation thresholds based on the difference between the mean of the absolute values of at least one set of quantized data among the multiple sets of quantized data and the mean of the absolute values of the set of data to be quantized includes: selecting a set of first quantized data from the multiple sets of quantized data, where the difference between the mean of the absolute values of the set of first quantized data and the mean of the absolute values of the set of data to be quantized is smaller than the difference between the mean of the absolute values of multiple sets of second quantized data and the mean of the absolute values of the set of data to be quantized, the multiple sets of second quantized data being the sets of quantized data among the multiple sets other than the set of first quantized data; and selecting, from the multiple pairs of truncation thresholds, the pair of truncation thresholds corresponding to the set of first quantized data.

Clause A8. The method according to clause A7, further including: determining a truncation search range associated with the selected pair of truncation thresholds; determining new multiple pairs of truncation thresholds within the truncation search range; determining new multiple sets of quantized data by quantizing the set of data to be quantized using the new multiple pairs of truncation thresholds respectively; and selecting a new pair of truncation thresholds from the new multiple pairs of truncation thresholds based on the difference between the mean of the absolute values of each set of quantized data among the new multiple sets of quantized data and the mean of the absolute values of the set of data to be quantized.

Clause A9. The method according to clause A3, where decomposing the winograd forward transform of the input data into summation operations and performing computation to obtain the winograd forward-transform result of the input data includes: decomposing the input data into multiple first sub-tensors, and performing the winograd forward transform on the multiple first sub-tensors and summing to obtain the winograd forward-transform result of the input data; where the number of the multiple first sub-tensors is the same as the number of non-zero elements of the input data, and each first sub-tensor of the multiple first sub-tensors has one element identical to the element at the corresponding position of the input data, with all its other elements being 0.

Clause A10. The method according to clause A3, where decomposing the winograd inverse transform of the element-wise product into summation operations to obtain the winograd convolution result of the input data includes: decomposing the element-wise product into multiple second sub-tensors, and performing the winograd inverse transform on the multiple second sub-tensors and summing to obtain the winograd convolution result of the input data; where the number of the multiple second sub-tensors is the same as the number of non-zero elements of the element-wise product, and each second sub-tensor of the multiple second sub-tensors has one element identical to the element at the corresponding position of the element-wise product, with all its other elements being 0.

Clause A11. The method according to clause A3 or A4, where the input data is at least one of input neurons, weights, and gradients.

Clause A12. A data processing device, including: a first determining module, configured to determine a pair of truncation thresholds from multiple pairs of truncation thresholds according to the mean of the absolute values of quantized data obtained by quantizing a set of data to be quantized using the multiple pairs of truncation thresholds, where the set of data to be quantized is a set of data in a winograd convolution process, and each pair of truncation thresholds in the multiple pairs includes a symmetric truncation positive value and truncation negative value; a first quantization module, configured to quantize the set of data to be quantized according to the determined pair of truncation thresholds to obtain quantized first data; a convolution module, configured to continue to perform the winograd convolution process according to the quantized first data to obtain a quantized winograd convolution result; and a dequantization module, configured to perform dequantization on the quantized winograd convolution result to obtain a winograd convolution result.

Clause A13. The device according to clause A12, where the first determining module is further configured to: determine multiple sets of quantized second data by quantizing the set of data to be quantized using the multiple pairs of truncation thresholds respectively; and select a pair of truncation thresholds from the multiple pairs of truncation thresholds based on the difference between the mean of the absolute values of at least one set of quantized data among the multiple sets of quantized data and the mean of the absolute values of the set of data to be quantized.

Clause A14. The device according to clause A12 or A13, where the winograd convolution process includes: decomposing the winograd forward transform of input data into summation operations and performing computation to obtain the winograd forward-transform result of the input data; performing element-wise multiplication of the winograd forward-transform results of the input data to obtain an element-wise product; and decomposing the winograd inverse transform of the element-wise product into summation operations to obtain the winograd convolution result.

Clause A15. The device according to clause A14, where the data to be quantized is one of the input data, the winograd forward-transform result of the input data, and the element-wise product.

Clause A16. The device according to any one of clauses A12 to A15, further including: a second determining module, configured to determine the maximum absolute value among the absolute values of all data in the set of data to be quantized; and a third determining module, configured to determine the multiple pairs of truncation thresholds based on the maximum absolute value.

Clause A16. The device according to any one of clauses A12 to A15, where the first determining module is further configured to: determine a first truncation positive value based on the maximum absolute value, a predetermined total number of searches, and a current search order; determine a first set of quantized data by quantizing the set of data to be quantized using a first pair of truncation thresholds, the first pair of truncation thresholds including the first truncation positive value and a first truncation negative value opposite to the first truncation positive value; and determine a first difference between the mean of the absolute values of the first set of quantized data and the mean of the absolute values of the set of data to be quantized.

Clause A17. The device according to clause A13, where the first determining module is further configured to: select a set of first quantized data from the multiple sets of quantized data, where the difference between the mean of the absolute values of the set of first quantized data and the mean of the absolute values of the set of data to be quantized is smaller than the difference between the mean of the absolute values of multiple sets of second quantized data and the mean of the absolute values of the set of data to be quantized, the multiple sets of second quantized data being the sets of quantized data among the multiple sets other than the set of first quantized data; and select, from the multiple pairs of truncation thresholds, the pair of truncation thresholds corresponding to the set of first quantized data.

Clause A18. The device according to clause A17, further including: a fourth determining module, configured to determine a truncation search range associated with the selected pair of truncation thresholds; a fifth determining module, configured to determine new multiple pairs of truncation thresholds within the truncation search range; a second quantization module, configured to determine new multiple sets of quantized data by quantizing the set of data to be quantized using the new multiple pairs of truncation thresholds respectively; and a selection module, configured to select a new pair of truncation thresholds from the new multiple pairs of truncation thresholds based on the difference between the mean of the absolute values of each set of quantized data among the new multiple sets of quantized data and the mean of the absolute values of the set of data to be quantized.

Clause A19. The device according to clause A14, where decomposing the winograd forward transform of the input data into summation operations and performing computation to obtain the winograd forward-transform result of the input data includes: decomposing the input data into multiple first sub-tensors, and performing the winograd forward transform on the multiple first sub-tensors and summing to obtain the winograd forward-transform result of the input data; where the number of the multiple first sub-tensors is the same as the number of non-zero elements of the input data, and each first sub-tensor of the multiple first sub-tensors has one element identical to the element at the corresponding position of the input data, with all its other elements being 0.

Clause A20. The device according to clause A14, where decomposing the winograd inverse transform of the element-wise product into summation operations to obtain the winograd convolution result of the input data includes: decomposing the element-wise product into multiple second sub-tensors, and performing the winograd inverse transform on the multiple second sub-tensors and summing to obtain the winograd convolution result of the input data; where the number of the multiple second sub-tensors is the same as the number of non-zero elements of the element-wise product, and each second sub-tensor of the multiple second sub-tensors has one element identical to the element at the corresponding position of the element-wise product, with all its other elements being 0.

Clause A21. The device according to clause A14 or A15, where the input data is at least one of input neurons, weights, and gradients.

Clause A22. An artificial intelligence chip, the chip including the data processing device according to any one of clauses A12 to A21.

Clause A23. An electronic device, the electronic device including the artificial intelligence chip according to clause A22.

Clause A24. A board, the board including: a storage device, an interface device, a control device, and the artificial intelligence chip according to clause A22; where the artificial intelligence chip is connected to the storage device, the control device, and the interface device respectively; the storage device is used to store data; the interface device is used to implement data transmission between the artificial intelligence chip and external equipment; and the control device is used to monitor the state of the artificial intelligence chip.

Clause A25. The board according to clause A24, where the storage device includes multiple groups of storage units, each group of storage units is connected to the artificial intelligence chip through a bus, and the storage units are DDR SDRAM; the chip includes a DDR controller for controlling data transmission to and data storage in each storage unit; and the interface device is a standard PCIE interface.

Clause A26. An electronic device, including: a processor; and a memory for storing processor-executable instructions; where the processor is configured to call the instructions stored in the memory to execute the method according to any one of clauses A1 to A11.

Clause A27. A computer-readable storage medium with computer program instructions stored thereon, where the computer program instructions, when executed by a processor, implement the method according to any one of clauses A1 to A11.

The embodiments of the present disclosure have been described in detail above, and specific examples are used herein to explain the principles and implementations of the present disclosure. The descriptions of the above embodiments are only intended to help understand the method of the present disclosure and its core idea. Meanwhile, changes or modifications made by those skilled in the art based on the idea of the present disclosure, on the specific implementations and the application scope, all fall within the protection scope of the present disclosure. In summary, the content of this specification should not be construed as limiting the present disclosure.

Claims (18)

  1. A data processing method, characterized by comprising:
    determining a pair of truncation thresholds from multiple pairs of truncation thresholds according to the mean of the absolute values of quantized data obtained by quantizing a set of data to be quantized using the multiple pairs of truncation thresholds, wherein the set of data to be quantized is a set of data in a winograd convolution process, and each pair of truncation thresholds in the multiple pairs of truncation thresholds comprises a symmetric truncation positive value and truncation negative value;
    quantizing the set of data to be quantized according to the determined pair of truncation thresholds to obtain quantized first data;
    continuing to perform the winograd convolution process according to the quantized first data to obtain a quantized winograd convolution result;
    performing dequantization on the quantized winograd convolution result to obtain a winograd convolution result.
  2. The method according to claim 1, characterized in that determining a pair of truncation thresholds from the multiple pairs of truncation thresholds according to the mean of the absolute values of quantized data obtained by quantizing a set of data to be quantized using multiple pairs of truncation thresholds comprises:
    determining multiple sets of quantized second data by quantizing the set of data to be quantized using the multiple pairs of truncation thresholds respectively;
    selecting a pair of truncation thresholds from the multiple pairs of truncation thresholds based on the difference between the mean of the absolute values of at least one set of quantized data among the multiple sets of quantized data and the mean of the absolute values of the set of data to be quantized.
  3. The method according to claim 1 or 2, characterized in that the winograd convolution process comprises:
    decomposing the winograd forward transform of input data into summation operations and performing computation to obtain the winograd forward-transform result of the input data;
    performing element-wise multiplication of the winograd forward-transform results of the input data to obtain an element-wise product;
    decomposing the winograd inverse transform of the element-wise product into summation operations to obtain the winograd convolution result.
  4. The method according to claim 3, characterized in that the data to be quantized is one of the input data, the winograd forward-transform result of the input data, and the element-wise product.
  5. The method according to any one of claims 1 to 4, characterized in that the method further comprises:
    determining the maximum absolute value among the absolute values of all data in the set of data to be quantized;
    determining the multiple pairs of truncation thresholds based on the maximum absolute value.
  6. The method according to any one of claims 1 to 5, characterized in that determining multiple sets of quantized second data further comprises:
    determining a first truncation positive value based on the maximum absolute value, a predetermined total number of searches, and a current search order;
    determining a first set of quantized data by quantizing the set of data to be quantized using a first pair of truncation thresholds, the first pair of truncation thresholds comprising the first truncation positive value and a first truncation negative value opposite to the first truncation positive value; and
    determining a first difference between the mean of the absolute values of the first set of quantized data and the mean of the absolute values of the set of data to be quantized.
  7. The method according to claim 2, characterized in that selecting a pair of truncation thresholds from the multiple pairs of truncation thresholds based on the difference between the mean of the absolute values of at least one set of quantized data among the multiple sets of quantized data and the mean of the absolute values of the set of data to be quantized comprises:
    selecting a set of first quantized data from the multiple sets of quantized data, wherein the difference between the mean of the absolute values of the set of first quantized data and the mean of the absolute values of the set of data to be quantized is smaller than the difference between the mean of the absolute values of multiple sets of second quantized data and the mean of the absolute values of the set of data to be quantized, the multiple sets of second quantized data being the sets of quantized data among the multiple sets other than the set of first quantized data;
    selecting, from the multiple pairs of truncation thresholds, the pair of truncation thresholds corresponding to the set of first quantized data.
  8. The method according to claim 7, characterized in that the method further comprises:
    determining a truncation search range associated with the selected pair of truncation thresholds;
    determining new multiple pairs of truncation thresholds within the truncation search range;
    determining new multiple sets of quantized data by quantizing the set of data to be quantized using the new multiple pairs of truncation thresholds respectively;
    selecting a new pair of truncation thresholds from the new multiple pairs of truncation thresholds based on the difference between the mean of the absolute values of at least one set of quantized data among the new multiple sets of quantized data and the mean of the absolute values of the set of data to be quantized.
  9. The method according to claim 3, characterized in that decomposing the winograd forward transform of the input data into summation operations and performing computation to obtain the winograd forward-transform result of the input data comprises:
    decomposing the input data into multiple first sub-tensors, and performing the winograd forward transform on the multiple first sub-tensors and summing to obtain the winograd forward-transform result of the input data;
    wherein the number of the multiple first sub-tensors is the same as the number of non-zero elements of the input data, and each first sub-tensor of the multiple first sub-tensors has one element identical to the element at the corresponding position of the input data, with all its other elements being 0.
  10. The method according to claim 3, characterized in that decomposing the winograd inverse transform of the element-wise product into summation operations to obtain the winograd convolution result of the input data comprises:
    decomposing the element-wise product into multiple second sub-tensors, and performing the winograd inverse transform on the multiple second sub-tensors and summing to obtain the winograd convolution result of the input data;
    wherein the number of the multiple second sub-tensors is the same as the number of non-zero elements of the element-wise product, and each second sub-tensor of the multiple second sub-tensors has one element identical to the element at the corresponding position of the element-wise product, with all its other elements being 0.
  11. The method according to claim 3 or 4, characterized in that the input data is at least one of input neurons, weights, and gradients.
  12. A data processing device, characterized by comprising:
    a first determining module, configured to determine a pair of truncation thresholds from multiple pairs of truncation thresholds according to the mean of the absolute values of quantized data obtained by quantizing a set of data to be quantized using the multiple pairs of truncation thresholds, wherein the set of data to be quantized is a set of data in a winograd convolution process, and each pair of truncation thresholds in the multiple pairs of truncation thresholds comprises a symmetric truncation positive value and truncation negative value;
    a first quantization module, configured to quantize the set of data to be quantized according to the determined pair of truncation thresholds to obtain quantized first data;
    a convolution module, configured to continue to perform the winograd convolution process according to the quantized first data to obtain a quantized winograd convolution result;
    a dequantization module, configured to perform dequantization on the quantized winograd convolution result to obtain a winograd convolution result.
  13. An artificial intelligence chip, characterized in that the chip comprises the data processing device according to claim 12.
  14. An electronic device, characterized in that the electronic device comprises the artificial intelligence chip according to claim 13.
  15. A board, characterized in that the board comprises: a storage device, an interface device, a control device, and the artificial intelligence chip according to claim 13;
    wherein the artificial intelligence chip is connected to the storage device, the control device, and the interface device respectively;
    the storage device is used to store data;
    the interface device is used to implement data transmission between the artificial intelligence chip and external equipment;
    the control device is used to monitor the state of the artificial intelligence chip.
  16. The board according to claim 15, characterized in that
    the storage device comprises: multiple groups of storage units, each group of storage units is connected to the artificial intelligence chip through a bus, and the storage units are DDR SDRAM;
    the chip comprises: a DDR controller for controlling data transmission to and data storage in each storage unit;
    the interface device is: a standard PCIE interface.
  17. An electronic device, characterized by comprising:
    a processor;
    a memory for storing processor-executable instructions;
    wherein the processor is configured to call the instructions stored in the memory to execute the method according to any one of claims 1 to 11.
  18. A computer-readable storage medium with computer program instructions stored thereon, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 11.
PCT/CN2020/123853 2019-11-01 2020-10-27 Data processing method, apparatus, computer device and storage medium WO2021083100A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911061465.7 2019-11-01
CN201911061465.7A CN112765541B (zh) 2019-11-01 2019-11-01 Data processing method, apparatus, computer device and storage medium

Publications (1)

Publication Number Publication Date
WO2021083100A1 true WO2021083100A1 (zh) 2021-05-06

Family

ID=75692275

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/123853 WO2021083100A1 (zh) 2019-11-01 2020-10-27 Data processing method, apparatus, computer device and storage medium

Country Status (2)

Country Link
CN (1) CN112765541B (zh)
WO (1) WO2021083100A1 (zh)


Also Published As

Publication number Publication date
CN112765541B (zh) 2024-02-23
CN112765541A (zh) 2021-05-07

