WO2021083097A1 - Data processing apparatus and method, and associated computer device and storage medium - Google Patents

Data processing apparatus and method, and associated computer device and storage medium

Info

Publication number
WO2021083097A1
WO2021083097A1 PCT/CN2020/123837 CN2020123837W WO2021083097A1 WO 2021083097 A1 WO2021083097 A1 WO 2021083097A1 CN 2020123837 W CN2020123837 W CN 2020123837W WO 2021083097 A1 WO2021083097 A1 WO 2021083097A1
Authority
WO
WIPO (PCT)
Prior art keywords
input data
convolution
convolution kernel
winograd
sub
Application number
PCT/CN2020/123837
Other languages
English (en)
Chinese (zh)
Inventor
张英男
曾洪博
张尧
刘少礼
黄迪
周诗怡
张曦珊
刘畅
郭家明
高钰峰
Original Assignee
中科寒武纪科技股份有限公司
Application filed by 中科寒武纪科技股份有限公司
Publication of WO2021083097A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 - Complex mathematical operations
    • G06F17/15 - Correlation function computation including computation of convolution operations
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06N3/06 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means

Definitions

  • the present disclosure relates to the field of data processing technology, and in particular to a data processing method, device, computer equipment, and storage medium.
  • neural network algorithms are currently among the most popular machine learning algorithms and have achieved very good results in various fields, such as image recognition, speech recognition, and natural language processing.
  • as the complexity of these algorithms keeps increasing, the scale of the models grows accordingly. Processing such large-scale models with GPUs and CPUs requires a great deal of computing time and consumes a great deal of power.
  • the embodiments of the present disclosure provide a data processing method, device, computer equipment, and storage medium that can improve the reuse rate of data.
  • a data processing method, including:
  • splitting the first convolution kernel according to the step size N to obtain multiple second convolution kernels;
  • splitting the first input data according to the step size N to obtain multiple second input data corresponding to the multiple second convolution kernels;
  • for any of the second input data, performing a winograd convolution operation on the second input data and the corresponding second convolution kernel to obtain a convolution result corresponding to the second input data;
  • determining that the sum of the convolution results corresponding to the plurality of second input data is the convolution result of the first convolution kernel and the first input data.
  • a data processing device including:
  • the first splitting module is used to split the first convolution kernel according to the step size N to obtain multiple second convolution kernels
  • the second splitting module is configured to split the first input data according to the step size N to obtain multiple second input data corresponding to the multiple second convolution kernels;
  • the convolution module is configured to, for any of the second input data, perform a winograd convolution operation on the second input data and the corresponding second convolution kernel to obtain a convolution result corresponding to the second input data;
  • the determining module is configured to determine that the sum of the convolution results corresponding to the plurality of second input data is the convolution result of the first convolution kernel and the first input data.
  • an artificial intelligence chip is provided, and the chip includes the data processing device according to any one of the foregoing.
  • an electronic device including the aforementioned artificial intelligence chip.
  • a board card comprising: a storage device, an interface device, a control device, and the aforementioned artificial intelligence chip;
  • the artificial intelligence chip is connected to the storage device, the control device, and the interface device respectively;
  • the storage device is used to store data
  • the interface device is used to implement data transmission between the artificial intelligence chip and external equipment
  • the control device is used to monitor the state of the artificial intelligence chip.
  • an electronic device including:
  • a memory for storing processor executable instructions
  • the processor is configured to call instructions stored in the memory to execute the method described in any one of the foregoing.
  • a computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method described in any one of the foregoing.
  • the data processing method, device, computer equipment, and storage medium provided by the embodiments of the present disclosure can split a first convolution kernel with a step size greater than 1 and the first input data into multiple second convolution kernels with a step size of 1 and multiple second input data, thereby improving the reuse rate of data.
  • Figure 1 shows a data processing method provided by an embodiment of the present disclosure
  • Fig. 2 shows a schematic diagram of a data processing method of an example of the present disclosure
  • FIG. 3 shows a structural block diagram of a data processing device provided by an embodiment of the present disclosure
  • Figure 4 shows a block diagram of a board according to an embodiment of the present disclosure
  • FIG. 5 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure
  • FIG. 6 shows a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
  • the term “if” can be interpreted as “when” or “once” or “in response to determination” or “in response to detection” depending on the context.
  • the phrase “if determined” or “if [the described condition or event] is detected” can be interpreted, depending on the context, as meaning “once determined”, “in response to determining”, “once [the described condition or event] is detected”, or “in response to detecting [the described condition or event]”.
  • Winograd convolution is a convolution acceleration implementation based on a polynomial interpolation algorithm. The two inputs of the convolution operation, the input data and the convolution kernel, are each divided into tiles of a certain size and subjected to a linear transformation (winograd positive transformation); the transformed input data and convolution kernel are then multiplied bitwise (element by element), and a linear transformation (winograd inverse transformation) is finally applied to the bitwise multiplication result, yielding a convolution result equivalent to that of the original convolution operation. The operation can be written as S = A^T ((G g G^T) ⊙ (B^T d B)) A, where:
  • g represents the convolution kernel;
  • G represents the left-multiplication positive transformation matrix corresponding to the convolution kernel;
  • G^T represents the right-multiplication positive transformation matrix corresponding to the convolution kernel;
  • d represents the input data;
  • B represents the right-multiplication positive transformation matrix corresponding to the input data;
  • B^T represents the left-multiplication positive transformation matrix corresponding to the input data;
  • ⊙ represents the bitwise (element-wise) multiplication operation;
  • A represents the right-multiplication inverse transformation matrix;
  • A^T represents the left-multiplication inverse transformation matrix.
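  • As a concrete illustration of the formula above, the following is a minimal numerical sketch for the common F(2×2, 3×3) case, written in Python with NumPy. The specific matrices B^T, G, and A^T used here are the standard transformation matrices widely used for this tile size; they are an assumption for illustration and are not specified in this publication.

```python
# Minimal sketch of S = A^T((G g G^T) * (B^T d B))A for F(2x2, 3x3),
# assuming the standard transformation matrices (not given in this publication).
import numpy as np

B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=float)   # left-multiplication matrix for the input data
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])                 # left-multiplication matrix for the kernel
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=float)    # left-multiplication inverse transformation matrix

def winograd_f2x2_3x3(d, g):
    """Winograd convolution of a 4x4 input tile d with a 3x3 kernel g."""
    U = G @ g @ G.T         # winograd positive transformation of the kernel: G g G^T
    V = B_T @ d @ B_T.T     # winograd positive transformation of the input:  B^T d B
    M = U * V               # bitwise (element-wise) multiplication
    return A_T @ M @ A_T.T  # winograd inverse transformation: A^T M A, a 2x2 output

def direct_conv2d(d, g):
    """Direct sliding-window ('valid') convolution with step size 1, for checking."""
    out = np.zeros((d.shape[0] - g.shape[0] + 1, d.shape[1] - g.shape[1] + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(d[i:i + g.shape[0], j:j + g.shape[1]] * g)
    return out

d = np.random.randn(4, 4)
g = np.random.randn(3, 3)
assert np.allclose(winograd_f2x2_3x3(d, g), direct_conv2d(d, g))
```

  • Running the sketch confirms that the result computed in the transformed domain matches the direct convolution of a 4×4 input tile with a 3×3 convolution kernel.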
  • the present disclosure provides a data processing method which can split a convolution kernel with a step size greater than 1 in a winograd convolution process into convolution kernels with a step size of 1, so as to improve the reuse rate of data.
  • Fig. 1 shows a data processing method provided by an embodiment of the present disclosure. The method may be applied to a processor. As shown in Fig. 1, the method may include:
  • in step S11, the first convolution kernel is split according to the step size N to obtain multiple second convolution kernels;
  • in step S12, the first input data is split according to the step size N to obtain a plurality of second input data corresponding to the plurality of second convolution kernels.
  • the first convolution kernel can be split into multiple second convolution kernels with a step size of 1 according to the step size N, and the first input data can be split into multiple second input data with a step size of 1 according to the step size N.
  • the input data may be image data, sound data, or video data.
  • the input data can be expressed in the form of NHWC (batch, height, width, channels), where N represents the number of images, H and W respectively represent the numbers of pixels in the height and width directions, and C represents the number of channels; for example, C can represent the three channels of RGB (Red, Green, Blue). It should be noted that the above representation is only an example of the present disclosure, and the present disclosure is not limited to this.
  • the foregoing splitting of the first convolution kernel according to the step size N to obtain multiple second convolution kernels may include:
  • the first convolution kernel is split with an interval of N-1 steps to obtain multiple second convolution kernels.
  • specifically, the first convolution kernel can be split at an interval of N-1: that is, for the current element in each row and each column of the first convolution kernel, an element is taken at an interval of N-1 in that row or column, and this element and the current element belong to the same second convolution kernel; the process is then repeated cyclically with this element as the current element, taking elements at an interval of N-1 to obtain the elements that make up the second convolution kernel.
  • the first convolution kernel is split with an interval of N-1 steps to obtain multiple second convolution kernels, including:
  • for an m×n first convolution kernel, starting from the first row of the first convolution kernel, a row is determined as a target row every N-1 rows. For each target row, starting from the first element, one element is taken every N-1 columns, and the elements thus obtained from all the target rows are determined to form one second convolution kernel. The traversal then continues from the second element of each target row, taking one element every N-1 columns, and the elements thus obtained are determined to form another second convolution kernel, until every element of the first convolution kernel has been traversed.
  • the splitting of the first input data according to the step size N to obtain multiple second input data corresponding to the multiple second convolution kernels includes:
  • the first input data is split with an interval of N-1 steps to obtain multiple second input data corresponding to the multiple second convolution kernels.
  • specifically, the first input data can be split at an interval of N-1: that is, for the current element in the first input data, an element is taken in the same row or column at an interval of N-1, and this element and the current element belong to the same second input data; the process is then repeated cyclically with this element as the current element, taking elements at an interval of N-1 to obtain the elements that make up the second input data.
  • the splitting of the first input data with an interval of N-1 steps to obtain multiple second input data corresponding to the multiple second convolution kernels can include:
  • for m×n first input data, starting from the first row of the first input data, a row is determined as a target row every N-1 rows. For each target row, starting from the first element, one element is taken every N-1 columns, and the elements thus obtained from all the target rows are determined to form one second input data. The traversal then continues from the second element of each target row, taking one element every N-1 columns, and the elements thus obtained are determined to form another second input data, until every element of the first input data has been traversed.
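  • As a concrete sketch of the splitting described above (helper and variable names are illustrative assumptions, not taken from this publication), taking every N-th element starting from a given row/column offset is equivalent to the "interval of N-1" traversal, and the offset identifies which second convolution kernel or second input data an element belongs to:

```python
# Illustrative sketch: split a first convolution kernel and first input data
# with step size N into N*N second convolution kernels / second input data.
import numpy as np

def split_by_step(tensor, n):
    """Map the (row, column) position of the first element of each split,
    counted within the original tensor, to the corresponding split."""
    return {(a, b): tensor[a::n, b::n] for a in range(n) for b in range(n)}

first_kernel = np.arange(9).reshape(3, 3)   # a 3x3 first convolution kernel
first_input = np.arange(25).reshape(5, 5)   # 5x5 first input data
N = 2

second_kernels = split_by_step(first_kernel, N)
second_inputs = split_by_step(first_input, N)

# A second input data corresponds to the second convolution kernel whose first
# element occupies the same position in the original tensor, e.g. offset (0, 1):
print(second_kernels[(0, 1)])   # elements of the first kernel in rows 0, 2 and column 1
print(second_inputs[(0, 1)])    # elements of the first input in rows 0, 2, 4 and columns 1, 3
```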
  • Fig. 2 shows a schematic diagram of a data processing method of an example of the present disclosure.
  • the first input data and the first convolution kernel can be split based on the step size of 2: the first convolution kernel is split into 4 second convolution kernels, and the first input data is split into 4 second input data, specifically:
  • for the first convolution kernel, a target row is determined at an interval of 1 row, so the first row and the third row are determined as target rows.
  • for each target row, starting from the first element, one element is taken at an interval of 1 element (marked "1" in the figure); all the elements finally obtained form the second convolution kernel (1), where the elements taken from each target row form, in order, a row of the second convolution kernel.
  • after the elements of these target rows have been traversed, the target row is re-determined starting from the second row.
  • for this target row, starting from the first element, one element is taken at an interval of 1 element (marked "3" in the figure); all the elements finally obtained form the second convolution kernel (3).
  • for the first input data, a target row is determined at an interval of 1 row, so the first row, the third row, and the fifth row are determined as target rows.
  • for each target row, starting from the first element, one element is taken at an interval of 1 element (marked "1" in the figure); all the elements finally obtained form the second input data (1).
  • after these target rows have been traversed, the second and fourth rows are re-determined as target rows.
  • for each of these target rows, starting from the first element, one element is taken at an interval of 1 element (marked "3" in the figure); all the elements finally obtained form the second input data (3).
  • the correspondence between the second input data and the second convolution kernel is specifically: the position of the first element of the second input data in the first input data is the same as the position of the first element of the second convolution kernel in the first convolution kernel.
  • that is, if the position of the first element of the second input data in the first input data is the x-th row and the y-th column, then the position of the first element of the corresponding second convolution kernel in the first convolution kernel is also the x-th row and the y-th column.
  • the second convolution kernel (1) has a corresponding relationship with the second input data (1): the position of the first element of the second convolution kernel (1) in the first convolution kernel is the first row and the first column, and the position of the first element of the second input data (1) in the first input data is the first row and the first column.
  • the second convolution kernel (2) has a corresponding relationship with the second input data (2): the position of the first element of the second convolution kernel (2) in the first convolution kernel is the first row and the second column, and the position of the first element of the second input data (2) in the first input data is the first row and the second column.
  • the second convolution kernel (3) has a corresponding relationship with the second input data (3): the position of the first element of the second convolution kernel (3) in the first convolution kernel is the second row and the first column, and the position of the first element of the second input data (3) in the first input data is the second row and the first column.
  • the second convolution kernel (4) has a corresponding relationship with the second input data (4): the position of the first element of the second convolution kernel (4) in the first convolution kernel is the second row and the second column, and the position of the first element of the second input data (4) in the first input data is the second row and the second column.
  • in step S13, for any of the second input data, a winograd convolution operation is performed on the second input data and the corresponding second convolution kernel to obtain a convolution result corresponding to the second input data.
  • in step S14, it is determined that the sum of the convolution results corresponding to the plurality of second input data is the convolution result of the first convolution kernel and the first input data.
  • the winograd convolution operation can be performed on any second input data and the corresponding second convolution kernel to obtain the convolution result corresponding to that second input data; a summation operation is then performed on the convolution results of all the second input data, and the sum of the convolution results of all the second input data is determined to be the convolution result of the first convolution kernel and the first input data.
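  • The following sketch checks the equivalence stated in steps S13 and S14: the sum of the step-size-1 convolutions of corresponding splits equals the step-size-N convolution of the first input data with the first convolution kernel. The direct convolution helper and the names are illustrative assumptions; in the method described above, each step-size-1 convolution would be computed with the winograd convolution operation rather than the plain loop used here for checking.

```python
# Sketch: sum of stride-1 convolutions of the splits == stride-N convolution.
import numpy as np

def conv2d(d, g, stride=1):
    """Direct 'valid' convolution (no kernel flip) with the given step size."""
    out_h = (d.shape[0] - g.shape[0]) // stride + 1
    out_w = (d.shape[1] - g.shape[1]) // stride + 1
    out = np.zeros((out_h, out_w))
    for p in range(out_h):
        for q in range(out_w):
            window = d[p * stride:p * stride + g.shape[0],
                       q * stride:q * stride + g.shape[1]]
            out[p, q] = np.sum(window * g)
    return out

N = 2
first_input = np.random.randn(5, 5)     # first input data
first_kernel = np.random.randn(3, 3)    # first convolution kernel (step size 2)

reference = conv2d(first_input, first_kernel, stride=N)   # step-size-2 convolution

total = np.zeros_like(reference)
for a in range(N):                       # row offset of the first element
    for b in range(N):                   # column offset of the first element
        second_input = first_input[a::N, b::N]
        second_kernel = first_kernel[a::N, b::N]
        partial = conv2d(second_input, second_kernel, stride=1)
        total += partial[:reference.shape[0], :reference.shape[1]]

assert np.allclose(total, reference)     # the sum equals the original convolution
```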
  • the data processing method can split the first convolution kernel with a step size greater than 1 and the first input data into multiple second convolution kernels with a step size of 1 and multiple second input data, thereby improving the reuse rate of data.
  • the present disclosure further provides a data processing method that can disassemble the multiplication operations in the winograd convolution process into addition operations, thereby saving calculation time and reducing energy consumption, and can quantize the data in the winograd convolution process to further improve calculation performance.
  • performing a winograd convolution operation on the second input data and the corresponding second convolution kernel to obtain the convolution result corresponding to the second input data can include:
  • the winograd positive transformation of the second input data is disassembled into a summation operation, and calculation is performed to obtain the winograd positive transformation result of the second input data;
  • the winograd positive transformation of the second convolution kernel is disassembled into a summation operation, and calculation is performed to obtain the winograd positive transformation result of the second convolution kernel;
  • a bitwise multiplication operation is performed on the winograd positive transformation result of the second input data and the winograd positive transformation result of the second convolution kernel to obtain the bitwise multiplication result;
  • the winograd inverse transformation of the bitwise multiplication result is disassembled into a summation operation to obtain a convolution result of the second input data and the corresponding second convolution kernel.
  • the above-mentioned disassembling of the winograd positive transformation of the second input data into a summation operation, and performing calculation to obtain the winograd positive transformation result of the second input data, may include:
  • the second input data is disassembled into a plurality of first sub-tensors, and winograd positive transformation is performed on the plurality of first sub-tensors and summed to obtain a winograd positive transformation result of the second input data.
  • the number of the plurality of first sub-tensors is the same as the number of non-zero elements of the second input data, one element in each of the plurality of first sub-tensors is the same as the element at the corresponding position in the second input data, and the other elements are all 0.
  • the second input data is a 4 ⁇ 4 matrix including 16 elements. Therefore, the second input data can be decomposed into 16 first sub-tensors.
  • the 16 first sub-tensors are:
  • the statement that one element in each first sub-tensor is the same as the element at the corresponding position in the second input data while the other elements are all 0 means the following: taking the first sub-tensor d 00 as an example, the element in the first row and first column of d 00 is the same as the element in the first row and first column of the second input data, all its other elements are 0, and the other first sub-tensors have the same property.
  • the above disassembly methods are only some examples of the present disclosure, and do not limit the present disclosure in any way.
  • for example, when the second input data contains elements whose value is 0, the number of first sub-tensors obtained by disassembly may be less than the number of elements of the second input data; that is, the number of the multiple first sub-tensors is the same as the number of non-zero elements of the second input data.
  • performing winograd positive transformation on the multiple first sub-tensors and summing them to obtain the winograd positive transformation result of the second input data may include the following process:
  • for each first sub-tensor, a corresponding first-element sub-tensor is determined, in which the value of the element at the first position is 1, where the first position in the first-element sub-tensor is the same as the position of the non-zero element in the first sub-tensor;
  • the non-zero element value of the first sub-tensor is used as a coefficient and multiplied by the winograd positive transformation result of the corresponding first-element sub-tensor to obtain the winograd positive transformation result of the first sub-tensor;
  • the winograd positive transformation results of the multiple first sub-tensors are added to obtain the winograd positive transformation result of the second input data.
  • the first-element sub-tensor corresponding to d 00 can be
  • that is, the first-element sub-tensor is obtained by extracting the value of the non-zero element in the first sub-tensor, and the value of the non-zero element can be used as a coefficient of the first-element sub-tensor.
  • the winograd positive transformation result of the first-element sub-tensor corresponding to each first sub-tensor can be obtained in advance through the following process: for each first sub-tensor, the corresponding first-element sub-tensor is left-multiplied by the positive transformation left-multiplication matrix and right-multiplied by the positive transformation right-multiplication matrix to obtain the winograd positive transformation result of the first-element sub-tensor.
  • the form of the corresponding first-element sub-tensor is determined, and the corresponding positive transformation left-multiplication matrix and positive transformation right-multiplication matrix are also determined.
  • the winograd positive transformation result of the first sub-tensor can be calculated in advance, and the specific process is as described above.
  • the corresponding winograd positive transformation result of the first sub-tensor is:
  • the winograd positive transformation result of the corresponding first-element sub-tensor is:
  • the matrix multiplication operation can be broken down into an addition operation.
  • the process of calculating the winograd positive transformation result of the first-element sub-tensor involves relatively many multiplication operations.
  • therefore, the pre-calculated winograd positive transformation results of first-element sub-tensors of various scales can be saved; in this way, during the actual calculation process they can be obtained directly without repeated calculation, thereby shortening calculation time and saving calculation resources.
  • the non-zero element value in the first sub-tensor can be multiplied by the winograd positive transformation result of the corresponding first-element sub-tensor to obtain the winograd positive transformation result of the first sub-tensor.
  • the corresponding winograd positive transformation result is:
  • the winograd positive transformation results of all the first sub-tensors are calculated through the above process, and the winograd positive transformation results of multiple first sub-tensors are added to obtain the winograd positive transformation results of the second input data.
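  • The following sketch illustrates the decomposition just described: the winograd positive transformation B^T d B of a 4×4 second input data tile is rebuilt as a weighted sum of pre-computed transformations of single-element (first-element) sub-tensors, so that at run time only scalar scalings and additions remain. The B^T matrix is the standard F(2×2, 3×3) choice and is an assumption used only for illustration.

```python
# Sketch: winograd positive transformation via first-element sub-tensors.
import numpy as np

B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=float)

# Offline: pre-compute B^T E_ij B for every first-element sub-tensor E_ij
# (a tensor whose element at position (i, j) is 1 and whose other elements are 0).
tables = {}
for i in range(4):
    for j in range(4):
        e = np.zeros((4, 4))
        e[i, j] = 1.0
        tables[(i, j)] = B_T @ e @ B_T.T

d = np.random.randn(4, 4)    # second input data tile

# Run time: scale each pre-computed table by the element value and accumulate;
# no matrix multiplication of d itself is needed.
result = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        if d[i, j] != 0:                      # only non-zero elements contribute
            result += d[i, j] * tables[(i, j)]

assert np.allclose(result, B_T @ d @ B_T.T)   # matches the direct transformation
```

  • The same decomposition applies to the winograd positive transformation of the second convolution kernel, with G and G^T in place of B^T and B.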
  • the winograd positive transformation of the second convolution kernel can be disassembled into a summation operation, and calculations are performed to obtain the winograd positive transformation result of the second convolution kernel.
  • the disassembling of the winograd positive transformation of the second convolution kernel into a summation operation, and performing calculation to obtain the winograd positive transformation result of the second convolution kernel, may include:
  • the second convolution kernel is disassembled into a plurality of second sub-tensors, and winograd positive transformation is performed on the plurality of second sub-tensors and summed to obtain a winograd positive transformation result of the second convolution kernel.
  • the number of the plurality of second sub-tensors is the same as the number of elements of the second convolution kernel, one element in each of the plurality of second sub-tensors is the same as the element at the corresponding position in the second convolution kernel, and the other elements are all 0.
  • the second convolution kernel is a 3 ⁇ 3 matrix and includes 9 elements. Therefore, the second convolution kernel can be decomposed into 9 second sub-tensors.
  • the 9 second sub-tensors are:
  • each second subtensor is the same as the element at the corresponding position in the second convolution kernel, and the other elements are all zero.
  • the process of performing winograd positive transformation on the multiple second sub-tensors and summing them to obtain the winograd positive transformation result of the second convolution kernel can refer to the aforementioned process of performing winograd positive transformation on the multiple first sub-tensors and summing them to obtain the winograd positive transformation result of the second input data, and is not repeated here in this disclosure.
  • the bitwise multiplication operation can be performed on the winograd positive transformation result of the second input data and the winograd positive transformation result of the second convolution kernel to obtain the bitwise multiplication result.
  • bitwise multiplication may refer to taking the product of the data at corresponding positions of the two tensors as the value of the corresponding position in the bitwise multiplication result.
  • on this basis, the present disclosure can disassemble A^T (G4×4 ⊙ D4×4) A into a summation operation and perform calculation to obtain the winograd convolution result of the second input data, thereby further saving calculation time and reducing energy consumption.
  • the above-mentioned disassembling of the winograd inverse transformation of the bitwise multiplication result into a summation operation to obtain the convolution result of the second input data and the corresponding second convolution kernel may include:
  • the bitwise multiplication result is disassembled into a plurality of third sub-tensors, and winograd inverse transformation is performed on the plurality of third sub-tensors and summed to obtain the convolution result of the second input data and the corresponding second convolution kernel.
  • the number of the plurality of third sub-tensors is the same as the number of non-zero elements of the bitwise multiplication result, one element in each of the plurality of third sub-tensors is the same as the element at the corresponding position in the bitwise multiplication result, and the other elements are all 0.
  • the bitwise multiplication result is disassembled into multiple third sub-tensors; for example, it can be disassembled into 16, and the 16 third sub-tensors are:
  • winograd inverse transformation may be performed on the multiple third sub-tensors and summed to obtain the winograd convolution result of the second input data.
  • performing winograd inverse transformation on the multiple third subtensors and summing them to obtain the winograd convolution result of the second input data may include the following process:
  • for each third sub-tensor, a corresponding third-element sub-tensor is determined, in which the value of the element at the second position is 1, where the second position in the third-element sub-tensor is the same as the position of the non-zero element in the third sub-tensor;
  • the non-zero element value of the third sub-tensor is used as a coefficient and multiplied by the winograd inverse transformation result of the corresponding third-element sub-tensor to obtain the winograd inverse transformation result of the third sub-tensor;
  • the winograd inverse transformation results of the multiple third sub-tensors are added to obtain the winograd convolution result of the second input data.
  • the method for determining the third-element sub-tensor corresponding to the third sub-tensor is the same as the method for determining the first-element sub-tensor above, and will not be repeated here.
  • the winograd inverse transformation result of the third-element sub-tensor is obtained in advance through the following process: for each third sub-tensor, the corresponding third-element sub-tensor is left-multiplied by the inverse transformation left-multiplication matrix and right-multiplied by the inverse transformation right-multiplication matrix to obtain the winograd inverse transformation result of the third-element sub-tensor.
  • the form of the corresponding third-element sub-tensor is determined, and the corresponding inverse transform left multiplication matrix and inverse transform right multiplication matrix are also determined. Therefore, the winograd inverse transformation result of the third-element sub-tensor can be calculated in advance, and the specific process is as described above.
  • the left multiplication matrix of the inverse transformation is a 2 ⁇ 4 matrix, for example:
  • the inverse transformation right multiplication matrix is a 4 ⁇ 2 matrix, for example:
  • the dimension of the inverse transformation matrix can be determined according to the dimension of the second input data and the dimension of the second convolution kernel and the convolution step length. The above is only an example and does not limit the present disclosure in any way.
  • since the inverse transformation matrix is composed of 0 and ±1, the matrix multiplication operation of the inverse transformation can be realized by disassembling it into addition and shift operations: the inverse transformation matrices are multiplied with the third-element sub-tensor to obtain the winograd inverse transformation result of the third-element sub-tensor.
  • the element values in the winograd inverse transformation result of the third-element sub-tensor are composed of 0, ±1 and fractions, and the fractions can be calculated by a simple shift operation, which can still save calculation time compared with the multiplication operation.
  • the non-zero element values of the third sub-tensors are used as coefficients and multiplied by the winograd inverse transformation results of the corresponding third-element sub-tensors to obtain the winograd inverse transformation results of the third sub-tensors, and the winograd inverse transformation results of the multiple third sub-tensors are added to obtain the winograd convolution result of the second input data.
  • this process is similar to the aforementioned one in which the winograd positive transformation result of each first sub-tensor is obtained and the winograd positive transformation results of the multiple first sub-tensors are added to obtain the winograd positive transformation result of the input data; the difference is that the winograd inverse transformation result of the third-element sub-tensor is not completely composed of 0 and ±1, but the fractions can be calculated by a simple shift operation, so compared with the multiplication operation the present disclosure can still save calculation time after the disassembly.
  • in this way, multiple third sub-tensors are obtained by disassembling the bitwise multiplication result, and a summation operation can be performed based on the winograd inverse transformation results of the third-element sub-tensors obtained in advance and the non-zero element values of the third sub-tensors, to obtain the winograd convolution result of the input data.
  • disassembling the multiplication operation into a summation operation can save calculation time and reduce energy consumption.
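  • Analogously, the winograd inverse transformation of the bitwise multiplication result can be rebuilt from pre-computed inverse transformations of the third-element sub-tensors, as sketched below. The 2×4 left-multiplication matrix A^T is the standard F(2×2, 3×3) choice and is an assumption for illustration; for this particular choice the pre-computed entries happen to be 0 or ±1, and the text above notes that fractional entries, when they occur, can be handled with shift operations instead of multiplications.

```python
# Sketch: winograd inverse transformation via third-element sub-tensors.
import numpy as np

A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=float)   # inverse transformation left-multiplication matrix

# Offline: pre-compute A^T E_ij A for every third-element sub-tensor E_ij.
inverse_tables = {}
for i in range(4):
    for j in range(4):
        e = np.zeros((4, 4))
        e[i, j] = 1.0
        inverse_tables[(i, j)] = A_T @ e @ A_T.T   # entries are 0 or +/-1 here

M = np.random.randn(4, 4)    # bitwise multiplication result

# Run time: accumulate the pre-computed tables weighted by the non-zero elements.
output = np.zeros((2, 2))
for i in range(4):
    for j in range(4):
        if M[i, j] != 0:
            output += M[i, j] * inverse_tables[(i, j)]

assert np.allclose(output, A_T @ M @ A_T.T)   # matches the direct inverse transformation
```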
  • although the steps in the flowcharts of Figs. 1-2 are displayed in sequence as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and these steps can be executed in other orders. Moreover, at least part of the steps in Figs. 1-2 may include multiple sub-steps or multiple stages; these sub-steps or stages are not necessarily executed at the same moment, but may be executed at different moments, and their execution order is not necessarily sequential, but they may be performed in turn or alternately with at least a part of the other steps, or of the sub-steps or stages of other steps.
  • Fig. 3 shows a structural block diagram of a data processing device provided by an embodiment of the present disclosure. As shown in Fig. 3, the device may include:
  • the first splitting module 301 may be used to split the first convolution kernel according to the step size N to obtain multiple second convolution kernels;
  • the second splitting module 302 may be used to split the first input data according to the step size N to obtain multiple second input data corresponding to the multiple first convolution kernels;
  • the convolution module 303 may be used to, for any of the second input data, perform a winograd convolution operation on the second input data and the corresponding second convolution kernel to obtain the convolution result corresponding to the second input data;
  • the determining module 304 may be used to determine that the sum of the convolution results corresponding to the plurality of second input data is the convolution result of the first convolution kernel and the first input data.
  • the data processing device provided by the embodiment of the present disclosure can split the first convolution kernel with a step size greater than 1 and the first input data into multiple second convolution kernels with a step size of 1 and multiple second input data, thereby improving the reuse rate of data.
  • the correspondence between the second input data and the second convolution kernel is specifically: the position of the first element of the second input data in the first input data is the same as the position of the first element of the second convolution kernel in the first convolution kernel.
  • the first splitting module may also be used for:
  • the second splitting module can also be used for:
  • the first input data is split with an interval of N-1 steps to obtain multiple second input data corresponding to the multiple second convolution kernels.
  • the first splitting module may also be used for:
  • the second splitting module can also be used for:
  • the above convolution module can also be used for:
  • the winograd inverse transformation of the bitwise multiplication result is disassembled into a summation operation to obtain a convolution result of the second input data and the corresponding second convolution kernel.
  • the convolution module may also be used for:
  • the second input data is disassembled into a plurality of first sub-tensors, and winograd positive transformation is performed on the plurality of first sub-tensors and summed to obtain a winograd positive transformation result of the second input data.
  • the convolution module may also be used for:
  • the second convolution kernel is disassembled into a plurality of second sub-tensors, and winograd positive transformation is performed on the plurality of second sub-tensors and summed to obtain a winograd positive transformation result of the second convolution kernel.
  • the number of the plurality of first sub-tensors is the same as the number of non-zero elements of the second input data, one element in each of the plurality of first sub-tensors is the same as the element at the corresponding position in the second input data, and the other elements are all 0.
  • the number of the plurality of second sub-tensors is the same as the number of elements of the second convolution kernel, one element in each of the plurality of second sub-tensors is the same as the element at the corresponding position in the second convolution kernel, and the other elements are all 0.
  • the convolution module may also be used for:
  • the bitwise multiplication result is disassembled into a plurality of third sub-tensors, and winograd inverse transformation is performed on the plurality of third sub-tensors and summed to obtain the convolution result of the second input data and the corresponding second convolution kernel.
  • the number of the plurality of third sub-tensors is the same as the number of non-zero elements of the bitwise multiplication result, one element in each of the plurality of third sub-tensors is the same as the element at the corresponding position in the bitwise multiplication result, and the other elements are all 0.
  • the functions or modules contained in the device provided in the embodiments of the present disclosure can be used to execute the methods described in the above method embodiments.
  • the foregoing device embodiments are only illustrative, and the device of the present disclosure may also be implemented in other ways.
  • the division of units/modules in the above-mentioned embodiments is only a logical function division, and there may be other division methods in actual implementation.
  • multiple units, modules or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the functional units/modules in the various embodiments of the present disclosure may be integrated into one unit/module, each unit/module may exist alone physically, or two or more units/modules may be integrated together.
  • the above-mentioned integrated unit/module can be implemented in the form of hardware or software program module.
  • the hardware may be a digital circuit, an analog circuit, and so on.
  • the physical realization of the hardware structure includes but is not limited to transistors, memristors and so on.
  • the artificial intelligence processor may be any appropriate hardware processor, such as CPU, GPU, FPGA, DSP, ASIC, and so on.
  • the storage unit may be any suitable magnetic storage medium or magneto-optical storage medium, such as a resistive random access memory RRAM (Resistive Random Access Memory), a dynamic random access memory DRAM (Dynamic Random Access Memory), a static random access memory SRAM (Static Random-Access Memory), an enhanced dynamic random access memory EDRAM (Enhanced Dynamic Random Access Memory), a high-bandwidth memory HBM (High-Bandwidth Memory), a hybrid memory cube HMC (Hybrid Memory Cube), and so on.
  • the integrated unit/module is implemented in the form of a software program module and sold or used as an independent product, it can be stored in a computer readable memory.
  • the technical solution of the present disclosure essentially or the part that contributes to the prior art or all or part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a memory, It includes several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
  • the aforementioned memory includes: a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media that can store program codes.
  • an artificial intelligence chip is also disclosed, which includes the above-mentioned data processing device.
  • a board card which includes a storage device, an interface device, a control device, and the aforementioned artificial intelligence chip; wherein, the artificial intelligence chip is connected to the storage device and the control device And the interface devices are respectively connected; the storage device is used to store data; the interface device is used to realize data transmission between the artificial intelligence chip and an external device; the control device is used to The state of the artificial intelligence chip is monitored.
  • Fig. 4 shows a structural block diagram of a board card according to an embodiment of the present disclosure.
  • the board card may include other supporting components in addition to the chip 389 described above.
  • the supporting components include, but are not limited to: a storage device 390, an interface device 391, and a control device 392;
  • the storage device 390 is connected to the artificial intelligence chip through a bus for storing data.
  • the storage device may include multiple groups of storage units 393. Each group of the storage unit and the artificial intelligence chip are connected through a bus. It can be understood that each group of the storage units may be DDR SDRAM (English: Double Data Rate SDRAM, double-rate synchronous dynamic random access memory).
  • the storage device may include 4 groups of the storage units, and each group of the storage units may include a plurality of DDR4 chips.
  • the artificial intelligence chip may include four 72-bit DDR4 controllers; in each 72-bit DDR4 controller, 64 bits are used for data transmission and 8 bits are used for ECC verification. It can be understood that when DDR4-3200 chips are used in each group of the storage units, the theoretical bandwidth of data transmission can reach 25600 MB/s (3200 MT/s × 64 bits / 8 bits per byte).
  • each group of the storage unit includes a plurality of double-rate synchronous dynamic random access memories arranged in parallel.
  • DDR can transmit data twice in one clock cycle.
  • a controller for controlling the DDR is provided in the chip, which is used to control the data transmission and data storage of each storage unit.
  • the interface device is electrically connected with the artificial intelligence chip.
  • the interface device is used to implement data transmission between the artificial intelligence chip and an external device (such as a server or a computer).
  • the interface device may be a standard PCIE interface.
  • the data to be processed is transferred from the server to the chip through a standard PCIE interface to realize data transfer.
  • the interface device may also be other interfaces. The present disclosure does not limit the specific manifestations of the other interfaces mentioned above, as long as the interface unit can realize the switching function.
  • the calculation result of the artificial intelligence chip is still transmitted by the interface device back to an external device (such as a server).
  • the control device is electrically connected with the artificial intelligence chip.
  • the control device is used to monitor the state of the artificial intelligence chip.
  • the artificial intelligence chip and the control device may be electrically connected through an SPI interface.
  • the control device may include a single-chip microcomputer (Micro Controller Unit, MCU).
  • the artificial intelligence chip may include multiple processing chips, multiple processing cores, or multiple processing circuits, and can drive multiple loads. Therefore, the artificial intelligence chip can be in different working states such as multi-load and light-load.
  • the control device can regulate and control the working states of the multiple processing chips, multiple processing cores, and/or multiple processing circuits in the artificial intelligence chip.
  • an electronic device which includes the aforementioned artificial intelligence chip.
  • Electronic equipment includes data processing devices, robots, computers, printers, scanners, tablets, smart terminals, mobile phones, driving recorders, navigators, sensors, webcams, servers, cloud servers, cameras, video cameras, projectors, watches, headsets, mobile storage, wearable devices, vehicles, household appliances, and/or medical equipment.
  • the transportation means include airplanes, ships, and/or vehicles;
  • the household appliances include TVs, air conditioners, microwave ovens, refrigerators, rice cookers, humidifiers, washing machines, electric lights, gas stoves, and range hoods;
  • the medical equipment includes a nuclear magnetic resonance instrument, a B-mode ultrasound scanner, and/or an electrocardiograph.
  • the embodiments of the present disclosure also provide a computer-readable storage medium on which computer program instructions are stored, and the computer program instructions implement the above-mentioned method when executed by a processor.
  • the computer-readable storage medium may be a non-volatile computer-readable storage medium.
  • An embodiment of the present disclosure also provides an electronic device, including: a processor; a memory for storing executable instructions of the processor; wherein the processor is configured to call the instructions stored in the memory to execute the above method.
  • the electronic device can be provided as a terminal, server or other form of device.
  • FIG. 5 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure.
  • the electronic device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and other terminals.
  • the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, and a sensor component 814 , And communication component 816.
  • the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method.
  • the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components.
  • the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
  • the memory 804 is configured to store various types of data to support operations in the electronic device 800. Examples of these data include instructions for any application or method operating on the electronic device 800, contact data, phone book data, messages, pictures, videos, etc.
  • the memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable and Programmable read only memory (EPROM), programmable read only memory (PROM), read only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
  • the power supply component 806 provides power for various components of the electronic device 800.
  • the power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
  • the multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
  • the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 810 is configured to output and/or input audio signals.
  • the audio component 810 includes a microphone (MIC), and when the electronic device 800 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive an external audio signal.
  • the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
  • the audio component 810 further includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
  • the above-mentioned peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include, but are not limited to: home button, volume button, start button, and lock button.
  • the sensor component 814 includes one or more sensors for providing the electronic device 800 with various aspects of state evaluation.
  • the sensor component 814 can detect the on/off status of the electronic device 800 and the relative positioning of the components.
  • for example, the components are the display and the keypad of the electronic device 800.
  • the sensor component 814 can also detect a position change of the electronic device 800 or of a component of the electronic device 800, the presence or absence of contact between the user and the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a temperature change of the electronic device 800.
  • the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
  • the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
  • the electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
  • the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • the electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, to implement the above methods.
  • a non-volatile computer-readable storage medium such as the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the foregoing method.
  • FIG. 6 shows a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
  • the electronic device 1900 may be provided as a server.
  • the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and a memory resource represented by the memory 1932, for storing instructions executable by the processing component 1922, such as application programs.
  • the application program stored in the memory 1932 may include one or more modules each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above-described methods.
  • the electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to the network, and an input output (I/O) interface 1958 .
  • the electronic device 1900 can operate an operating system stored in the memory 1932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or the like.
  • a non-volatile computer-readable storage medium is also provided, such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the foregoing method.
  • a data processing method, including: splitting the first convolution kernel according to the step size N to obtain multiple second convolution kernels; splitting the first input data according to the step size N to obtain a plurality of second input data corresponding to the plurality of second convolution kernels; for any of the second input data, performing a winograd convolution operation on the second input data and the corresponding second convolution kernel to obtain the convolution result corresponding to the second input data; and determining that the sum of the convolution results corresponding to the plurality of second input data is the convolution result of the first convolution kernel and the first input data.
  • the corresponding relationship between the second input data and the second convolution kernel is specifically: the position of the first element of the second input data in the first input data is the same as the position of the first element of the second convolution kernel in the first convolution kernel.
  • said splitting the first convolution kernel according to the step size N to obtain multiple second convolution kernels includes:
  • the splitting the first input data according to the step size N to obtain multiple second input data corresponding to the multiple first convolution kernels includes:
  • the first input data is split with an interval of N-1 steps to obtain multiple second input data corresponding to the multiple first convolution kernels.
  • Clause A4 For the rows and columns of the first convolution kernel, the first convolution kernel is split with an interval of N-1 steps to obtain multiple second convolution kernels, including:
  • the first input data is split with an interval of N-1 steps to obtain multiple second input data corresponding to the multiple first convolution kernels, including:
  • Clause A5 For any of the second input data, performing a winograd convolution operation on the second input data and the corresponding second convolution kernel to obtain the convolution result corresponding to the second input data includes:
  • the winograd positive transformation of the second input data is disassembled into a summation operation and calculated to obtain the winograd positive transformation result of the second input data;
  • the winograd positive transformation of the second convolution kernel is disassembled into a summation operation and calculated to obtain the winograd positive transformation result of the second convolution kernel;
  • an alignment (element-wise) multiplication of the two winograd positive transformation results is performed to obtain an alignment multiplication result; and
  • the winograd inverse transform of the alignment multiplication result is disassembled into a summation operation to obtain the convolution result of the second input data and the corresponding second convolution kernel (a worked sketch of these steps follows the clause list below).
  • disassembling the winograd positive transformation of the second input data into a summation operation and performing calculation to obtain the winograd positive transformation result of the second input data includes:
  • the second input data is disassembled into a plurality of first sub-tensors, and winograd positive transformation is performed on the plurality of first sub-tensors and summed to obtain a winograd positive transformation result of the second input data.
  • disassembling the winograd positive transformation of the second convolution kernel into a summation operation and performing calculation to obtain the winograd positive transformation result of the second convolution kernel includes:
  • the second convolution kernel is disassembled into a plurality of second sub-tensors, and winograd positive transformation is performed on the plurality of second sub-tensors and summed to obtain a winograd positive transformation result of the second convolution kernel.
  • the number of the plurality of first sub-tensors is the same as the number of non-zero elements of the second input data; one element in each of the plurality of first sub-tensors is the same as the element at the corresponding position in the second input data, and the other elements are all 0 (a numerical sketch of this sub-tensor disassembly follows the clause list below).
  • the number of the plurality of second sub-tensors is the same as the number of elements of the second convolution kernel; one element in each of the plurality of second sub-tensors is the same as the element at the corresponding position in the second convolution kernel, and the other elements are all zero.
  • the result of the alignment multiplication is disassembled into a plurality of third sub-tensors, and winograd inverse transformation is performed on the plurality of third sub-tensors and summed to obtain the convolution result of the second input data and the corresponding second convolution kernel.
  • the number of the plurality of third sub-tensors is the same as the number of non-zero elements of the result of the alignment multiplication; one element in each of the plurality of third sub-tensors is the same as the element at the corresponding position in the alignment multiplication result, and the other elements are all zero.
  • a data processing device including:
  • the first splitting module is used to split the first convolution kernel according to the step size N to obtain multiple second convolution kernels
  • the second splitting module is configured to split the first input data according to the step size N to obtain multiple second input data corresponding to the multiple first convolution kernels;
  • the convolution module is configured to perform a winograd convolution operation on the second input data and the corresponding second convolution kernel for any of the second input data to obtain a convolution result corresponding to the second input data ;
  • the determining module is configured to determine that the sum of the convolution results corresponding to the plurality of second input data is the convolution result of the first convolution kernel and the first input data.
  • the correspondence between the second input data and the second convolution kernel is specifically: the position of the first element in the second input data within the first input data is the same as the position of the first element in the second convolution kernel within the first convolution kernel.
  • the first splitting module is further used for:
  • the second splitting module is also used for:
  • the first input data is split with an interval of N-1 steps to obtain multiple second input data corresponding to the multiple first convolution kernels.
  • the first splitting module is further used for:
  • the second splitting module is also used for:
  • the convolution module is further used for:
  • the winograd inverse transform of the alignment multiplication result is disassembled into a summation operation to obtain a convolution result of the second input data and the corresponding second convolution kernel.
  • the convolution module is further used for:
  • the second input data is disassembled into a plurality of first sub-tensors, and winograd positive transformation is performed on the plurality of first sub-tensors and summed to obtain a winograd positive transformation result of the second input data.
  • the convolution module is further used for:
  • the second convolution kernel is disassembled into a plurality of second sub-tensors, and winograd positive transformation is performed on the plurality of second sub-tensors and summed to obtain a winograd positive transformation result of the second convolution kernel.
  • Clause A19 The device according to clause A17, wherein the number of the plurality of first sub-tensors is the same as the number of non-zero elements of the second input data; one element in each of the plurality of first sub-tensors is the same as the element at the corresponding position in the second input data, and the other elements are all 0.
  • Clause A20 The device according to clause A18, wherein the number of the plurality of second sub-tensors is the same as the number of elements of the second convolution kernel; one element in each of the plurality of second sub-tensors is the same as the element at the corresponding position in the second convolution kernel, and the other elements are all zero.
  • the convolution module is further used for:
  • the result of the alignment multiplication is disassembled into a plurality of third sub-tensors, and winograd inverse transformation is performed on the plurality of third sub-tensors and summed to obtain the convolution result of the second input data and the corresponding second convolution kernel.
  • the number of the plurality of third sub-tensors is the same as the number of non-zero elements of the alignment multiplication result; one element in each of the plurality of third sub-tensors is the same as the element at the corresponding position in the alignment multiplication result, and the other elements are all zero.
  • Clause A23 an artificial intelligence chip, the chip comprising the data processing device according to any one of clauses A12 to A22.
  • Clause A24 an electronic device including the artificial intelligence chip as described in Clause A23.
  • a board card comprising: a storage device, an interface device, a control device, and the artificial intelligence chip as described in clause A23;
  • the artificial intelligence chip is connected to the storage device, the control device, and the interface device respectively;
  • the storage device is used to store data
  • the interface device is used to implement data transmission between the artificial intelligence chip and external equipment
  • the control device is used to monitor the state of the artificial intelligence chip.
  • the storage device includes: multiple groups of storage units, each group of storage units is connected to the artificial intelligence chip through a bus, and the storage units are DDR SDRAM;
  • the chip includes: a DDR controller, which is used to control the data transmission and data storage of each storage unit;
  • the interface device is: a standard PCIE interface.
  • an electronic device characterized in that it includes:
  • a memory for storing processor executable instructions
  • the processor is configured to call instructions stored in the memory to execute the method described in any one of clauses A1 to A11.
  • Clause A28 a computer-readable storage medium with computer program instructions stored thereon, characterized in that, when the computer program instructions are executed by a processor, the method described in any one of clauses A1 to A11 is implemented.
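
The following is a minimal numerical sketch of the splitting referred to in clause A1 above: a convolution with step size (stride) N is decomposed into N x N stride-1 convolutions between the second convolution kernels and their corresponding second input data, and the partial results are summed. This sketch is not taken from the application itself; the function names, the use of NumPy, and the toy sizes are assumptions for illustration only.

    import numpy as np

    def conv2d(x, k, stride=1):
        # plain sliding-window cross-correlation, as used for CNN "convolution" layers
        out_h = (x.shape[0] - k.shape[0]) // stride + 1
        out_w = (x.shape[1] - k.shape[1]) // stride + 1
        out = np.zeros((out_h, out_w))
        for i in range(out_h):
            for j in range(out_w):
                patch = x[i * stride:i * stride + k.shape[0], j * stride:j * stride + k.shape[1]]
                out[i, j] = np.sum(patch * k)
        return out

    def split_conv(x, k, N):
        # split the first convolution kernel and the first input data by step size N;
        # the sub-kernel and sub-data taken at the same offset (a, b) correspond to each
        # other (their first elements share the same position, cf. clause A2)
        result = None
        for a in range(N):
            for b in range(N):
                sub_k = k[a::N, b::N]   # a second convolution kernel
                sub_x = x[a::N, b::N]   # the corresponding second input data
                partial = conv2d(sub_x, sub_k, stride=1)
                result = partial if result is None else result + partial
        return result

    x = np.arange(64, dtype=float).reshape(8, 8)   # toy first input data
    k = np.arange(16, dtype=float).reshape(4, 4)   # toy first convolution kernel
    assert np.allclose(conv2d(x, k, stride=2), split_conv(x, k, N=2))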
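
Next, a sketch of the sub-tensor disassembly of the winograd positive transformation described in the clauses above: the input tile is written as a sum of first sub-tensors that each keep a single non-zero element, each sub-tensor is transformed, and the transformed sub-tensors are summed. Because each B^T e B factor depends only on the element position, it can be precomputed, so the transform reduces to additions and scalar multiplications. The F(2x2, 3x3) matrix B used here is the standard textbook choice and is an assumption, not a matrix mandated by the application.

    import numpy as np

    # data transform matrix B for winograd F(2x2, 3x3) (standard choice, assumed here)
    B = np.array([[ 1,  0,  0,  0],
                  [ 0,  1, -1,  1],
                  [-1,  1,  1,  0],
                  [ 0,  0,  0, -1]], dtype=float)

    def forward_transform_by_subtensors(d):
        # B^T d B computed as a sum over one-hot first sub-tensors of d
        total = np.zeros((4, 4))
        for i in range(4):
            for j in range(4):
                if d[i, j] != 0:
                    e = np.zeros((4, 4))
                    e[i, j] = 1.0                      # first sub-tensor: keep one element
                    total += d[i, j] * (B.T @ e @ B)   # B^T e B can be precomputed offline
        return total

    d = np.random.rand(4, 4)                           # a 4x4 second-input-data tile
    assert np.allclose(forward_transform_by_subtensors(d), B.T @ d @ B)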
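
Finally, a worked sketch of the winograd convolution steps of clause A5 for one 4x4 input tile and one 3x3 kernel: positive transforms of data and kernel, alignment (element-wise) multiplication, and an inverse transform disassembled into third sub-tensors and summed, checked against a direct sliding-window convolution. The G and A matrices are again the standard F(2x2, 3x3) choices and are assumptions for illustration only.

    import numpy as np

    # standard winograd F(2x2, 3x3) transform matrices (assumed concrete choice)
    B = np.array([[1, 0, 0, 0], [0, 1, -1, 1], [-1, 1, 1, 0], [0, 0, 0, -1]], dtype=float)
    G = np.array([[1, 0, 0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0, 0, 1]], dtype=float)
    A = np.array([[1, 0], [1, 1], [1, -1], [0, -1]], dtype=float)

    def winograd_2x2_3x3(d, g):
        U = G @ g @ G.T          # winograd positive transformation of the kernel
        V = B.T @ d @ B          # winograd positive transformation of the input tile
        M = U * V                # alignment (element-wise) multiplication result
        out = np.zeros((2, 2))
        for i in range(4):       # inverse transform disassembled into third sub-tensors
            for j in range(4):
                if M[i, j] != 0:
                    e = np.zeros((4, 4))
                    e[i, j] = 1.0
                    out += M[i, j] * (A.T @ e @ A)   # A^T e A is a fixed small matrix
        return out

    def direct_3x3(d, g):
        return np.array([[np.sum(d[i:i + 3, j:j + 3] * g) for j in range(2)]
                         for i in range(2)])

    d = np.random.rand(4, 4)
    g = np.random.rand(3, 3)
    assert np.allclose(winograd_2x2_3x3(d, g), direct_3x3(d, g))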

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Neurology (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a data processing method and apparatus, and to a related computer device and storage medium. The method comprises: splitting a first convolution kernel according to a step size N to obtain a plurality of second convolution kernels (S11); splitting first input data according to the step size N to obtain a plurality of second input data corresponding to the plurality of first convolution kernels (S12); for any of the second input data, performing a winograd convolution operation on the second input data and the corresponding second convolution kernel to obtain a convolution result corresponding to the second input data (S13); and determining that the sum of the convolution results corresponding to the plurality of second input data is the convolution result of the first convolution kernel and the first input data (S14). The method improves the reusability of the data.
PCT/CN2020/123837 2019-11-01 2020-10-27 Appareil et procédé de traitement de données, et dispositif informatique et support de stockage associés WO2021083097A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911061027.0A CN112765538B (zh) 2019-11-01 2019-11-01 数据处理方法、装置、计算机设备和存储介质
CN201911061027.0 2019-11-01

Publications (1)

Publication Number Publication Date
WO2021083097A1 true WO2021083097A1 (fr) 2021-05-06

Family

ID=75692119

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/123837 WO2021083097A1 (fr) 2019-11-01 2020-10-27 Appareil et procédé de traitement de données, et dispositif informatique et support de stockage associés

Country Status (2)

Country Link
CN (1) CN112765538B (fr)
WO (1) WO2021083097A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116028384A (zh) * 2021-10-26 2023-04-28 太初(无锡)电子科技有限公司 一种基于多张量核心处理器的卷积计算数据重用方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107993186A (zh) * 2017-12-14 2018-05-04 中国人民解放军国防科技大学 一种基于Winograd算法的3D CNN加速方法及系统
CN110163333A (zh) * 2018-01-10 2019-08-23 成都信息工程大学 卷积神经网络的并行优化方法
CN110288086A (zh) * 2019-06-13 2019-09-27 天津大学 一种基于Winograd的可配置卷积阵列加速器结构
CN110533164A (zh) * 2019-08-05 2019-12-03 西安交通大学 一种面向卷积神经网络加速器的Winograd卷积拆分方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN YANG ET AL. ,: "WRA: A 2.2-to-6.3 TOPS Highly Unified Dynamically Reconfigurable Accelerator Using a Novel Winograd Decomposition Algorithm for Convolutional Neural Networks,", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS–I: REGULAR PAPERS,, vol. 66, no. 9, 30 September 2019 (2019-09-30), XP011743069, DOI: 10.1109/TCSI.2019.2928682 *

Also Published As

Publication number Publication date
CN112765538B (zh) 2024-03-29
CN112765538A (zh) 2021-05-07

Similar Documents

Publication Publication Date Title
WO2021036893A1 (fr) Procédé et appareil de traitement de données, dispositif informatique et support de stockage
CN110692038A (zh) 多功能矢量处理器电路
WO2021114903A1 (fr) Procédé et appareil de traitement de données, dispositif informatique et support d'enregistrement
WO2012097613A2 (fr) Procédé d'affichage de clavier virtuel et terminal mobile
WO2021114904A1 (fr) Procédé et appareil de traitement de données, dispositif informatique et support d'enregistrement
CN112765540A (zh) 数据处理方法、装置及相关产品
CN111443917A (zh) 神经网络运行优化方法、装置及相关产品
WO2021083097A1 (fr) Appareil et procédé de traitement de données, et dispositif informatique et support de stockage associés
US20240160479A1 (en) Hardware accelerators using shared interface registers
CN112784951A (zh) Winograd卷积运算方法及相关产品
CN113297128B (zh) 数据处理方法、装置、计算机设备和存储介质
WO2024022060A1 (fr) Procédé et appareil d'enregistrement d'image, et support de stockage
WO2021083100A1 (fr) Procédé et dispositif de traitement de données, équipement informatique et support de stockage
WO2021082654A1 (fr) Appareil et procédé de traitement de données, et dispositif informatique et support de stockage
US20230010981A1 (en) Methods and apparatuses for high performance and accuracy fixed-point scale implementation
WO2021082653A1 (fr) Procédé et appareil de traitement de données, dispositif informatique, et support de stockage
WO2021082723A1 (fr) Appareil d'execution
CN112766471B (zh) 运算装置及相关产品
CN113762488B (zh) 处理器、数据处理方法、计算机设备和存储介质
CN111783969A (zh) 数据处理方法、装置、计算机设备和存储介质
CN113298223B (zh) 数据处理方法、装置、计算机设备和存储介质
KR102722476B1 (ko) 증가된 정밀도의 뉴럴 프로세싱 요소
CN113835990B (zh) 检测方法、装置、计算机设备和存储介质
US20240272748A1 (en) Display control method, display device and readable storage medium
EP4148561A1 (fr) Procédé et appareil de traitement de données, et produit associé

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20882028

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20882028

Country of ref document: EP

Kind code of ref document: A1