TWI840715B - Computing circuit and data processing method based on convolution neural network and computer readable storage medium


Info

Publication number: TWI840715B
Application number: TW110140625A
Authority: TW (Taiwan)
Prior art keywords: data, output data, output, filter, operator
Other languages: Chinese (zh)
Other versions: TW202230229A
Inventors: 林文翔, 潘偉正, 林金岷
Original Assignee: 創惟科技股份有限公司
Application filed by 創惟科技股份有限公司
Priority applications: CN202210055056.1A (published as CN114781626A); US 17/578,416 (published as US20220230055A1)
Publication of TW202230229A
Application granted; publication of TWI840715B

Abstract

A computing circuit and a data processing method based on a convolutional neural network, and a computer-readable storage medium, are provided. Input data is obtained from the memory. A first computation is performed on first partial data of the input data to obtain first output data. The first output data is buffered in a first buffer area. When the buffered first output data exceeds a first predetermined data amount, a second computation is performed on the first output data to obtain second output data. The second output data is buffered in a second buffer area. Third output data, obtained by performing a third computation on the second output data, is output to the memory. While the second computation is performed on the first output data, the first computation continues on the input data. Accordingly, access to the main memory can be reduced.

Description

Computing circuit, data processing method and computer-readable storage medium based on convolutional neural network

The present invention relates to a machine learning (ML) technology, and in particular to a computing circuit, a data processing method, and a computer-readable storage medium based on a convolutional neural network (CNN).

Machine learning is an important topic in artificial intelligence (AI). It analyzes training samples to derive patterns, and uses those patterns to make predictions about unknown data. The machine learning model constructed through this learning is then used to draw inferences on the data to be evaluated.

There are many kinds of machine learning algorithms. For example, a neural network makes decisions by simulating the operation of human brain cells. Among them, convolutional neural networks deliver better results in image and speech recognition, and have gradually become one of the most widely used and actively developed machine learning architectures.

It is worth noting that, in a convolution layer of a convolutional neural network, the processing element slides a convolution kernel or filter over the input data and performs specific operations. The processing element has to repeatedly read input data and weight values from the memory and write the operation results back to the memory. Moreover, if different convolution layers use kernels of different sizes or different convolution operations, the number of memory accesses increases substantially. For example, the MobileNet model combines convolution operations with depthwise separable convolution operations, and each of these operations requires separate accesses to the memory.

In view of this, embodiments of the present invention provide a computing circuit, a data processing method, and a computer-readable storage medium based on a convolutional neural network, which fuse multiple convolution layers and thereby reduce the number of memory accesses.

The data processing method based on a convolutional neural network according to an embodiment of the present invention includes (but is not limited to) the following steps: reading input data from a memory; performing a first operation on first partial data of the input data to obtain first output data, where the first operation is provided with a first filter and the size of the first output data is related to the size of the first filter and the size of the first partial data; temporarily storing the first output data in a first buffer area; when the first output data stored in the first buffer area exceeds a first predetermined data amount, performing a second operation on the first output data to obtain second output data, where the second operation is provided with a second filter and the size of the second output data is related to the size of the second filter; temporarily storing the second output data in a second buffer; and outputting to the memory third output data obtained by performing a third operation on the second output data. While the second operation is performed on the first output data, the first operation on the input data continues.

The computing circuit based on a convolutional neural network according to an embodiment of the present invention includes (but is not limited to) a memory and a processing element. The memory is used to store input data. The processing element is coupled to the memory and includes a first, a second, and a third operator, a first temporary memory, and a second temporary memory. The first operator performs a first operation on first partial data of the input data to obtain first output data, and temporarily stores the first output data in the first temporary memory of the processing element. The size of the first output data is related to the size of the first filter of the first operation and the size of the first partial data. When the first output data stored in the first temporary memory reaches the size required by the second operation, the second operator performs the second operation on the first output data to obtain second output data, and temporarily stores the second output data in the second temporary memory of the processing element. The second operation is provided with a second filter, and the size of the second output data is related to the size of the second filter. The third operator outputs to the memory third output data obtained by performing a third operation on the second output data. While the second operator performs the second operation, the first operator continues to perform the first operation.

The computer-readable storage medium according to an embodiment of the present invention stores program code, and a processor loads the program code to execute the aforementioned data processing method based on the convolutional neural network.

Based on the above, according to the computing circuit, data processing method, and computer-readable storage medium based on a convolutional neural network of the embodiments of the present invention, the output data is temporarily stored in memory inside the processing element, and the operation of the next operator (i.e., the next operation layer) is triggered according to its activation condition. In this way, the next operation layer can start its operation early, without waiting for the previous operation layer to finish operating on all of its input data. In addition, the embodiments of the present invention reduce the number of times the input data is accessed from the memory.

In order to make the above features and advantages of the present invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.

100: Computing circuit

110: Memory

120: Processing element

131: Feature temporary memory

151: First temporary memory

171: Second temporary memory

132: First first-in-first-out (FIFO) unit

172: Second first-in-first-out (FIFO) unit

133: First weight buffer

153: Second weight buffer

173: Third weight buffer

135: First operator

155: Second operator

175: Third operator

S210~S290, S111~S113, S1501~S1560: Steps

F: Input data

H, Hfi1, Hkd, Hf1o, Ha1o, Ha2o, Ha1i, Ha2i, Ha3o, Hs1o: Height

W, Wfi1, Wkd, Wf1o, Wa1o, Wa2o, Wa1i, Wa2i, Wa3o, Wa4o, Wa5o, Wtfo1, Wtfo2, Ws1o, Wso21, Wso22, Wso3, Wto1: Width

C, Cfi1, Cf1o, Ca1o, Ca2o, Ca1i, Ca2i, Ca3o, Ctfo, Cs1o, Cso2, Cto1: Number of channels

Ffi1, Ffi2, Ffi3, Ffi4, Ffi6: First partial data

Kn, Fd, Fp: Filters

Ffo1, Ffo2, Ffo3, Ffo4, Ftfo: First output data

SA1i, SA2i: Systolic array inputs

SA1o, SA2o, SA3o, SA4o, SA5o, SA6o, SA7o, SA8o: Systolic array outputs

601, 703, 903, 141: Completed output

602, 704, 904, 142: Currently processed output

D1, D2: Directions

701, 901: Completed input

702, 902: Currently processed input

i4: Height position

j4: Width position

n4: Channel position

I(,,,), A(,,,): Values

Fdn4(,,): Weights

Fsi1: Second partial data

Fso1, Fso2, Fso3, Ftso: Second output data

Fti: Third partial data

Fto: Third output data

FIG. 1 is a block diagram of the components of a computing circuit based on a convolutional neural network according to an embodiment of the present invention.

FIG. 2 is a flowchart of a data processing method based on a convolutional neural network according to an embodiment of the present invention.

FIG. 3 is a schematic diagram of input data according to an embodiment of the present invention.

FIG. 4 is a schematic diagram of the first operation according to an embodiment of the present invention.

FIG. 5A and FIG. 5B are schematic diagrams of systolic array inputs and outputs according to an embodiment of the present invention.

FIG. 6A to FIG. 6C are schematic diagrams of systolic array outputs according to an embodiment of the present invention.

FIG. 7A is a schematic diagram illustrating reading of input data according to an embodiment of the present invention.

FIG. 7B is a schematic diagram of the first output data according to an embodiment of the present invention.

FIG. 7C is a schematic diagram illustrating reading of input data according to an embodiment of the present invention.

FIG. 7D is a schematic diagram of the first output data according to an embodiment of the present invention.

FIG. 7E is a schematic diagram illustrating reading of input data according to an embodiment of the present invention.

FIG. 7F is a schematic diagram of the first output data according to an embodiment of the present invention.

FIG. 8 is a schematic diagram illustrating reading of input data according to an embodiment of the present invention.

FIG. 9A is a schematic diagram illustrating the triggering condition of the second operation according to an embodiment of the present invention.

FIG. 9B is a schematic diagram of the temporarily stored first output data according to an embodiment of the present invention.

FIG. 10A is a schematic diagram of the second operation according to an embodiment of the present invention.

FIG. 10B to FIG. 10D are schematic diagrams of the second output data according to an embodiment of the present invention.

FIG. 11 is a flowchart of a data processing method based on a convolutional neural network according to an embodiment of the present invention.

FIG. 12A is a schematic diagram of the temporarily stored first output data according to an embodiment of the present invention.

FIG. 12B is a schematic diagram of the temporarily stored second output data according to an embodiment of the present invention.

FIG. 13A is a schematic diagram of the third operation according to an embodiment of the present invention.

FIG. 13B is a schematic diagram of the third output data according to an embodiment of the present invention.

FIG. 14A to FIG. 14C are schematic diagrams of systolic array outputs according to an embodiment of the present invention.

FIG. 15 is a flowchart of a data processing method of the MobileNet architecture according to an embodiment of the present invention.

FIG. 1 is a block diagram of the components of a computing circuit 100 based on a convolutional neural network according to an embodiment of the present invention. Referring to FIG. 1, the computing circuit 100 includes (but is not limited to) a memory 110 and one or more processing elements (PE) 120.

The memory 110 may be a dynamic random access memory (DRAM), a flash memory, a register, a combinational logic circuit, or a combination of the above elements.

The processing element 120 is coupled to the memory 110. The processing element 120 includes (but is not limited to) a feature temporary memory 131, a first first-in-first-out (FIFO) unit 132, a first weight temporary memory 133, a first operator 135, a first temporary memory 151, a second weight temporary memory 153, a second operator 155, a second temporary memory 171, a second first-in-first-out (FIFO) unit 172, a third weight temporary memory 173, and a third operator 175.

In one embodiment, the feature temporary memory 131, the first FIFO unit 132, the first weight temporary memory 133, and the first operator 135 correspond to one convolution layer/operation layer. In addition, the first operator 135 is provided with the first filter used for the first operation.

In one embodiment, the feature temporary memory 131 stores part or all of the input data from the memory 110, the first FIFO unit 132 inputs and/or outputs the data in the feature temporary memory 131 according to the first-in-first-out rule, the first weight temporary memory 133 stores weights from the memory 110 (forming the first convolution kernel/filter), and the first operator 135 performs the first operation. In one embodiment, the first operation is a convolution operation, which is described in detail in subsequent embodiments. In another embodiment, the first operation may also be a depthwise separable convolution operation or another type of convolution operation.

In one embodiment, the first temporary memory 151, the second weight temporary memory 153, and the second operator 155 correspond to one convolution layer/operation layer. In addition, the second operator 155 is provided with the second filter used for the second operation.

In one embodiment, the first temporary memory 151 stores part or all of the data output by the first operator 135, the second weight temporary memory 153 stores weights from the memory 110 (forming the second convolution kernel/filter), and the second operator 155 performs the second operation. In one embodiment, the second operation is a depthwise convolution operation, which is described in detail in subsequent embodiments. In another embodiment, the second operation may also be a convolution operation or another type of convolution operation.

In one embodiment, the second temporary memory 171, the second FIFO unit 172, the third weight temporary memory 173, and the third operator 175 correspond to one convolution layer/operation layer. In addition, the third operator 175 is provided with the third filter used for the third operation.

In one embodiment, the second temporary memory 171 stores part or all of the data output by the second operator 155, the second FIFO unit 172 inputs and/or outputs the data in the second temporary memory 171 according to the first-in-first-out rule, the third weight temporary memory 173 stores weights from the memory 110 (forming the third convolution kernel/filter), and the third operator 175 performs the third operation. In one embodiment, the third operation is a pointwise convolution operation, which is described in detail in subsequent embodiments. In another embodiment, the third operation may also be a convolution operation or another type of convolution operation.

In one embodiment, the aforementioned feature temporary memory 131, first temporary memory 151, second temporary memory 171, first weight temporary memory 133, second weight temporary memory 153, and third weight temporary memory 173 may be static random access memories (SRAM), flash memories, registers, various types of buffers, or a combination of the above elements.

In one embodiment, some or all of the components in the computing circuit 100 may form a neural network processing unit (NPU), a system on chip (SoC), or an integrated circuit (IC).

In one embodiment, the first operator 135 has a first maximum amount of operations per unit time, the second operator 155 has a second maximum amount of operations in the same unit time, and the third operator 175 has a third maximum amount of operations in the same unit time. The first maximum amount of operations is greater than the second maximum amount of operations, and the first maximum amount of operations is greater than the third maximum amount of operations.

In the following, the method described in the embodiments of the present invention is explained with reference to the devices, components, and modules of the computing circuit 100. The processes of the method may be adjusted according to the implementation, and are not limited thereto.

FIG. 2 is a flowchart of a data processing method based on a convolutional neural network according to an embodiment of the present invention. Referring to FIG. 2, the processing element 120 reads input data from the memory 110 (step S210). Specifically, the input data may be data of some or all pixels in an image (for example, color levels, brightness, or gray levels). Alternatively, the input data may also be a data set relating to speech, text, patterns, or other forms.

There are many ways to read the input data. In one embodiment, the processing element 120 reads all of the input data and uses it as the first partial data. In another embodiment, the processing element 120 reads a portion of the input data each time, according to the amount of data required by the first operation or the capacity of the feature temporary memory 131, and uses it as the first partial data.

FIG. 3 is a schematic diagram of input data F according to an embodiment of the present invention. Referring to FIG. 3, assume that the size of the input data F is (height H, width W, number of channels C), and the size of the first partial data Ffi1 read by the processing element 120 is (height Hfi1, width Wfi1, number of channels C). The height Hfi1 may be less than or equal to the height H, and the width Wfi1 may be less than or equal to the width W.

It should be noted that the input data may be stored in a specific block or location in the memory 110, but the embodiments of the present invention do not limit the storage location of each element of the input data in the memory 110.

The feature temporary memory 131 stores part or all of the input data from the memory 110; that is, the feature temporary memory 131 stores the first partial data. The first operator 135 performs the first operation on the first partial data of the input data to obtain the first output data (step S230). Specifically, the first operation performs a first operation (for example, a convolution operation) between the first partial data and the corresponding weights. The size of the first output data is related to the size of the first filter of the first operation and the size of the first partial data.

For example, FIG. 4 is a schematic diagram of the first operation according to an embodiment of the present invention. Referring to FIG. 4, the first operation takes a convolution operation as an example, performed between the first partial data Ffi1 (of size height Hfi1, width Wfi1, number of channels Cfi1) and the first filter Kn (of size height Hkd, width Wkd). If the height Hfi1 is greater than or equal to the height Hkd and the width Wfi1 is greater than or equal to the width Wkd, the first operator 135 can trigger the first operation. The result of the first operation (i.e., the first output data Ffo1) has a height Hf1o equal to Hfi1-Hkd+1, a width Wf1o equal to Wfi1-Wkd+1, and a number of channels Cf1o equal to the number of channels Cfi1.

For another example, if the size (height, width, number of channels) of the first partial data Ffi1 is (3, 32, 16), and the size (height, width) of the first filter Kn is (3, 3), then the size (height, width, number of channels) of the first output data Ffo1 is (1, 30, 16).
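
For readers who prefer to see the size relation as code, the following is a minimal sketch assuming a plain "valid" convolution in which the number of filters determines the number of output channels; the function name and shape convention are illustrative assumptions, not part of the patent:

def valid_conv_output_shape(h_in, w_in, c_in, h_k, w_k, n_filters):
    # Output height and width follow Hf1o = Hfi1 - Hkd + 1 and Wf1o = Wfi1 - Wkd + 1;
    # the channel count of the output equals the number of filters.
    return (h_in - h_k + 1, w_in - w_k + 1, n_filters)

# The numeric example above: a (3, 32, 16) first partial data with 16 filters of size 3x3
print(valid_conv_output_shape(3, 32, 16, 3, 3, 16))  # -> (1, 30, 16)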

In one embodiment, the first operator 135 adopts a systolic array structure. The first operator 135 divides the first partial data into multiple first systolic array inputs, and performs the first operation on each of these first systolic array inputs to obtain multiple first systolic array outputs. The size of each first systolic array output is limited by the size of the systolic array. For example, the number of elements of a first systolic array output is less than or equal to the capacity of the systolic array. In addition, the first systolic array outputs based on the same first partial data form the first output data.

For example, FIG. 5A and FIG. 5B are schematic diagrams of systolic array inputs and outputs according to an embodiment of the present invention. Referring to FIG. 5A and FIG. 5B, assume that the size of the systolic array is (Msa × Nsa). The height Ha1o of the systolic array output SA1o is 1, its width Wa1o may be Msa, and its number of channels Ca1o may be Nsa. Therefore, the first operator 135 divides the first partial data into a systolic array input SA1i (of size height Ha1i, width Wa1i, number of channels Ca1i) and a systolic array input SA2i (of size height Ha2i, width Wa2i, number of channels Ca2i). These two systolic array inputs SA1i, SA2i are respectively convolved with the weights of each channel of the filter Kn to obtain the systolic array outputs SA1o, SA2o. The height Ha2o of the systolic array output SA2o is 1, its width Wa2o may be less than or equal to Msa, and its number of channels Ca2o may be less than or equal to Nsa.

For another example, the size (height, width, number of channels) of the first partial data is (3, 32, 16), the size of the systolic array is 16×16, and the size of the filter Kn is 3×3. The height Ha1o of the systolic array output SA1o is 1, its width Wa1o may be 16, and its number of channels Ca1o may be 16. In addition, the height Ha1i of the systolic array input SA1i is 3, its width Wa1i is 18, and its number of channels Ca1i is 16. On the other hand, after the first operator 135 separates the systolic array input SA1i from the first partial data, the systolic array input SA2i is obtained. The height Ha2i of the systolic array input SA2i is 3, its width Wa2i is 16, and its number of channels Ca2i is 16. In addition, the height Ha2o of the systolic array output SA2o is 1, its width Wa2o is 14 (i.e., the width Wa2i minus the width of the filter Kn plus 1), and its number of channels Ca2o is 16.
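
The following is a minimal sketch of how the width of one input section could be split into systolic-array-sized tiles that overlap by the kernel width, matching the 18-column and 16-column inputs (16- and 14-column outputs) of the example above. The helper name and the tiling policy are illustrative assumptions, not the patent's exact hardware scheduling:

def split_width_into_tiles(input_width, kernel_width, max_out_cols):
    # Each tile produces at most max_out_cols output columns (the systolic array width).
    # A tile that produces n output columns needs n + kernel_width - 1 input columns,
    # so consecutive tiles overlap by kernel_width - 1 columns.
    tiles = []
    out_start = 0
    total_out_cols = input_width - kernel_width + 1
    while out_start < total_out_cols:
        n_out = min(max_out_cols, total_out_cols - out_start)
        in_start = out_start
        in_cols = n_out + kernel_width - 1
        tiles.append((in_start, in_cols, n_out))
        out_start += n_out
    return tiles

# Width 32 with a 3x3 kernel and a 16-wide systolic array:
print(split_width_into_tiles(32, 3, 16))  # -> [(0, 18, 16), (16, 16, 14)]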

For another example, Tables (1) to (3) list the first, second, and fifteenth pieces of first partial data stored in the feature temporary memory 131 (the remaining pieces follow the same pattern); Tables (1) to (3) are reproduced as images in the original publication.

I(i1, j1, n1) represents the value of the read input data at position (height position i1, width position j1, channel position n1). The first FIFO unit 132 sequentially inputs this data to the first operator 135 from right to left and from top to bottom.

Table (4), reproduced as an image in the original publication, lists the data of the sixteen-channel 3×3 filters used for the convolution operation.

Fdn(i2, j2, n2) represents the value of the read n-th filter at position (height position i2, width position j2, channel position n2).

Table (5), reproduced as an image in the original publication, lists the systolic array output.

A(i3, j3, n3) represents the value of the systolic array output at position (height position i3, width position j3, channel position n3), and its mathematical expression is:

A(i3, j3, n3) = Σ_{p=0}^{2} Σ_{q=0}^{2} Σ_{c=0}^{15} I(i3+p, j3+q, c) × Fdn3(p, q, c) ... (1)

that is, the sum, over the 3×3 filter window and all 16 input channels, of the products of the input values I and the weights Fdn3 of the n3-th filter.
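
As a cross-check on expression (1), the following is a minimal sketch that computes one row of systolic array output with plain nested loops; the array shapes and names (input as (height, width, channels), filters as (n_filters, 3, 3, channels)) are illustrative assumptions, not the hardware's actual data layout:

import numpy as np

def first_operation_row(I, Fd, i3):
    # I:  first partial data, shape (H, W, C), e.g. (3, 32, 16)
    # Fd: filters, shape (N, Kh, Kw, C), e.g. (16, 3, 3, 16)
    # Returns A[i3, :, :] of shape (W - Kw + 1, N), per expression (1).
    H, W, C = I.shape
    N, Kh, Kw, _ = Fd.shape
    out = np.zeros((W - Kw + 1, N))
    for j3 in range(W - Kw + 1):
        for n3 in range(N):
            acc = 0.0
            for p in range(Kh):
                for q in range(Kw):
                    for c in range(C):
                        acc += I[i3 + p, j3 + q, c] * Fd[n3, p, q, c]
            out[j3, n3] = acc
    return out

# Example matching the text: a (3, 32, 16) section and 16 filters of size 3x3x16
I = np.random.rand(3, 32, 16)
Fd = np.random.rand(16, 3, 3, 16)
row = first_operation_row(I, Fd, i3=0)
print(row.shape)  # -> (30, 16), i.e. width 30 and 16 output channels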

FIG. 6A is a schematic diagram of a systolic array output SA3o according to an embodiment of the present invention. Referring to FIG. 6A, the height Ha3o of the systolic array output SA3o is 1, its width Wa3o is 16, and its number of channels Ca3o is 16. In expression (1), the height index i3 is 0, the width index j3 ranges from 0 to 15, and the filter output channel index n3 ranges from 0 to 15.

FIG. 6B is a schematic diagram of a systolic array output SA4o according to an embodiment of the present invention. Referring to FIG. 6B, the height Ha3o of the systolic array output SA4o is 1, its width Wa4o is 14, and its number of channels Ca3o is 16. In expression (1), the height index i3 is 0, the width index j3 ranges from 16 to 29, and the filter output channel index n3 ranges from 0 to 15. In addition, the completed output 601 is the systolic array output SA3o of FIG. 6A, and the currently processed output 602 is the systolic array output SA4o.

Similarly, FIG. 6C is a schematic diagram of a systolic array output SA5o according to an embodiment of the present invention. Referring to FIG. 6C, the height Ha3o of the systolic array output SA5o is 1, its width Wa5o is 14, and its number of channels Ca3o is 16. In expression (1), the height index i3 is 4, the width index j3 ranges from 16 to 29, and the filter output channel index n3 ranges from 0 to 15. In addition, the currently processed output 602 is the systolic array output SA5o. These systolic array outputs SA3o to SA5o may form one or more pieces of first output data.

In one embodiment, the first operation is a convolution operation, and the first operator 135 reads the first partial data from the input data stored in the memory 110 along a first sliding direction. The first operator 135 divides the input data into multiple sections, successively reads the next section along the first sliding direction, which is parallel to the height of the input data, and uses it as the first partial data.

For example, FIG. 7A is a schematic diagram illustrating reading of input data Ffi1, Ffi2 according to an embodiment of the present invention. Referring to FIG. 7A, if the first operation on the first partial data Ffi1 has been completed, the first operator 135 regards this first partial data Ffi1 as the completed input 701, and further reads the first partial data Ffi2 of the next section of the input data F in direction D1 (for example, downward in the figure) as the currently processed input 702.

FIG. 7B is a schematic diagram of the first output data Ffo1, Ffo2 according to an embodiment of the present invention. Referring to FIG. 7B, the first output data Ffo1 is the output of the convolution operation on the first partial data Ffi1 of FIG. 7A and serves as the completed output 703. In addition, the first output data Ffo2 is the output of the convolution operation on the first partial data Ffi2 of FIG. 7A and serves as the currently processed output 704. The first output data Ffo2 is likewise arranged below the first output data Ffo1 according to direction D1 of FIG. 7A.

FIG. 7C is a schematic diagram illustrating reading of input data Ffi3 according to an embodiment of the present invention. Referring to FIG. 7C, if the completed input 701 has reached the bottom of the input data F, the first operator 135 moves in direction D2 (for example, to the right in the figure) and reads, from top to bottom (corresponding to direction D1 of FIG. 7A), the first partial data Ffi3 of the next section of the input data F as the currently processed input 702.

FIG. 7D is a schematic diagram of the first output data Ffo3 according to an embodiment of the present invention. Referring to FIG. 7D, the first output data Ffo3 is the output of the convolution operation on the first partial data Ffi3 of FIG. 7C and serves as the currently processed output 704. Similarly, the currently processed output 704 is arranged to the right of the completed output 703.

FIG. 7E is a schematic diagram illustrating reading of input data Ffi4 according to an embodiment of the present invention. Referring to FIG. 7E, the first partial data Ffi4 serving as the currently processed input 702 is the last section of the input data.

FIG. 7F is a schematic diagram of the first output data Ffo4 according to an embodiment of the present invention. Referring to FIG. 7F, the first output data Ffo4 is the output of the convolution operation on the first partial data Ffi4 of FIG. 7E and serves as the currently processed output 704. Similarly, the currently processed output 704 is arranged below the completed output 703, thereby completing the convolution operation on the input data F.

In another embodiment, the first operator 135 reads the first partial data from the input data stored in the memory 110 along a second sliding direction (different from the first sliding direction). Similarly, the first operator 135 divides the input data into multiple sections, successively reads the next section along the second sliding direction, which is parallel to the width of the input data, and uses it as the first partial data.

For example, FIG. 8 is a schematic diagram illustrating reading of input data according to an embodiment of the present invention. Referring to FIG. 8, if the first operation on the first partial data Ffi1 has been completed, the first operator 135 regards this first partial data Ffi1 as the completed input 701, and further reads the first partial data Ffi6 of the next section of the input data F in direction D2 (for example, to the right in the figure) as the currently processed input 702. Similarly, if the last section of the same row has been read in direction D2, the first operator 135 reads the section below the first partial data Ffi1. In addition, the arrangement of the first partial data Ffi1 and the other first partial data (not shown) may refer to the foregoing description and is not repeated here.
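
The two sliding orders can be summarized as two traversal orders over the grid of sections. The following is a minimal sketch that lists section coordinates for both orders; the section grid and the function name are illustrative assumptions:

def read_order(n_row_sections, n_col_sections, direction_first="D1"):
    # "D1": walk down a column of sections first, then move right (FIG. 7A to FIG. 7F).
    # "D2": walk right along a row of sections first, then move down (FIG. 8).
    order = []
    if direction_first == "D1":
        for col in range(n_col_sections):
            for row in range(n_row_sections):
                order.append((row, col))
    else:
        for row in range(n_row_sections):
            for col in range(n_col_sections):
                order.append((row, col))
    return order

print(read_order(3, 2, "D1"))  # column-major: (0,0),(1,0),(2,0),(0,1),...
print(read_order(3, 2, "D2"))  # row-major:    (0,0),(0,1),(1,0),...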

Referring to FIG. 2, the first operator 135 temporarily stores one or more pieces of first output data in the first buffer area of the first temporary memory 151 (step S250). Specifically, unlike the prior art, which outputs the first output data to the memory 110, in the embodiments of the present invention the first output data is output to the first temporary memory 151 that feeds the second operator 155, thereby reducing the number of accesses to the memory 110.

When the first output data temporarily stored in the first temporary memory 151 (or the first buffer area) exceeds a first predetermined data amount, the second operator 155 performs the second operation on the first output data to obtain second output data (step S270). Specifically, in an existing multi-convolution-layer architecture, the next convolution layer has to wait until the previous convolution layer has operated on all of its input data and written the results to the main memory before it can read those results from the main memory as its own input. Unlike the prior art, in addition to buffering the data in storage other than the memory 110 (for example, the first temporary memory 151 or the second temporary memory 171), the embodiments of the present invention trigger the convolution operation of the next convolution layer whenever the amount of input data required by that layer (i.e., the first predetermined data amount) is satisfied. Meanwhile, if the previous convolution layer has not yet finished operating on all of its input data, the operations of the two convolution layers can proceed at the same time. In other words, while the second operator 155 performs the second operation on the first output data, the first operator 135 continues to perform the first operation on the input data.
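
To illustrate the fusion idea in software terms, the following is a minimal, purely illustrative sketch of a three-stage pipeline in which each stage consumes rows from the previous stage's buffer as soon as enough rows are available, so that only the first stage reads from, and only the last stage writes to, the main memory. The row-based buffering, the stage functions, and their names are assumptions made for illustration and do not describe the actual hardware control logic:

import numpy as np

def fused_three_stage(sections, conv1, depthwise2, pointwise3, kh2):
    # sections:   iterator over input sections read from the main memory (stage 1 input)
    # conv1:      first operation, maps a section to one or more output rows
    # depthwise2: second operation, needs kh2 buffered rows before it can run
    # pointwise3: third operation, runs on every row it receives (1x1 filter)
    buffer1, results = [], []           # buffer1 plays the role of the first buffer area
    for section in sections:            # the first operation keeps running over the input
        buffer1.extend(conv1(section))
        while len(buffer1) >= kh2:      # trigger condition: enough rows for the 2nd filter
            window = np.stack(buffer1[:kh2])   # (kh2, W, C) window of buffered rows
            row2 = depthwise2(window)          # second operation on the buffered rows
            results.append(pointwise3(row2))   # third operation; result goes to memory
            buffer1.pop(0)                     # slide the window down by one row
    return results

# Tiny usage with stand-in operations (identity-like lambdas), just to show the flow:
rows = fused_three_stage(
    sections=[np.ones((3, 8, 4)) for _ in range(4)],
    conv1=lambda s: [s.mean(axis=0)],      # pretend conv: one (8, 4) row per section
    depthwise2=lambda w: w.mean(axis=0),   # pretend depthwise: collapse the window
    pointwise3=lambda r: r.sum(axis=-1),   # pretend pointwise: mix the channels
    kh2=3,
)
print(len(rows), rows[0].shape)  # -> 2 (8,)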

It is worth noting that the second partial data input to the second operation includes the first output data temporarily stored in the first temporary memory 151, and the size of the second output data is related to the size of the second filter of the second operation. Assume that the second operation is a depthwise convolution operation. Each filter of the depthwise convolution operation corresponds to the data of only one channel of the second partial data; that is, any filter of the depthwise convolution operation convolves the data of only one channel. Therefore, the number of filters of the depthwise convolution operation is usually equal to the number of channels of the second partial data. In contrast, each filter of a convolution operation convolves the data of all channels. In addition, as soon as the buffered first output data has grown to the height of the filter and to the width of the filter, the filter can perform the depthwise convolution operation with this buffered first output data (serving as the second partial data).

In one embodiment, assume that each filter used in the depthwise convolution operation has a height of Hkd and a width of Wkd, and that the first output data of each section has a height of Hf1o and a width of Wf1o. When the first output data temporarily stored in the first temporary memory 151 or the first buffer area is larger than Wkd×Hkd, the second operator 155 can perform the second operation. When the first output data temporarily stored in the first temporary memory 151 or the first buffer area exceeds the first predetermined data amount, the buffered first output data forms a combined height of MH×Hf1o and a combined width of MW×Wf1o, where MH and MW are positive-integer multiples, MH×Hf1o is not less than Hkd, and MW×Wf1o is not less than Wkd. In other words, while the height MH×Hf1o of the buffered first output data is less than the filter height Hkd or the width MW×Wf1o of the buffered first output data is less than the filter width Wkd, the second operator 155 keeps waiting for the next piece of first output data or systolic array output, until the height MH×Hf1o of the buffered first output data is greater than or equal to the filter height Hkd and the width MW×Wf1o of the buffered first output data is greater than or equal to the filter width Wkd.
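
A minimal sketch of this trigger check, assuming the buffered coverage is tracked simply by counts (the names are illustrative):

def second_operation_ready(m_h, h_f1o, m_w, w_f1o, h_kd, w_kd):
    # The buffered first output data covers m_h * h_f1o rows and m_w * w_f1o columns;
    # the second operation may start once both reach the second filter's footprint.
    return (m_h * h_f1o >= h_kd) and (m_w * w_f1o >= w_kd)

# Rows of height 1 are stacked until three of them cover a 3x3 depthwise filter:
print(second_operation_ready(2, 1, 1, 30, 3, 3))  # False: only 2 rows buffered
print(second_operation_ready(3, 1, 1, 30, 3, 3))  # True: 3 rows cover the filter height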

For example, FIG. 9A is a schematic diagram illustrating the triggering condition of the second operation according to an embodiment of the present invention, and FIG. 9B is a schematic diagram of the temporarily stored first output data according to an embodiment of the present invention. Referring to FIG. 9A, the completed input 901 of the input data corresponds to the completed output 903 of the first output data. If the currently processed output 904 corresponding to the currently processed input 902, together with the completed output 903, meets the size required by the second operation, the second operation can be triggered.

Referring to FIG. 9B, assume that the completed output 903 and the currently processed output 904 of FIG. 9A form the temporarily stored first output data Ftfo. The size of the systolic array used by the first operator 135 is 16×16, so the width of a systolic array output is either Wtfo1, which may be 16, or Wtfo2, which may be 14. Assume that each filter used in the depthwise convolution operation has a height of 3 and a width of 3; the widths Wtfo1 and Wtfo2 are both already greater than 3. If the fifth systolic array output has been temporarily stored in the first temporary memory 151, then the first through fifth systolic array outputs, each of size (height, width, number of channels) (1, 16, 16) or (1, 14, 16) (that is, the number of channels Ctfo is 16), together already satisfy the 3×3 size. That is, systolic array outputs of height 1 are stacked in three layers, so that the stacked height is 3. At this point, these systolic array outputs can serve as the second partial data and be used by the second operation.

It should be noted that in FIG. 9A and FIG. 9B the second operation is triggered as soon as the number of stacked layers equals the filter height. However, in other embodiments, the number of stacked layers may also be greater than the filter height.

For the depthwise convolution operation, FIG. 10A is a schematic diagram of the second operation according to an embodiment of the present invention. Referring to FIG. 10A, assume that the size (height, width, number of channels) of the second partial data Fsi1 is (5, 30, 16), and that the size of each filter Fd used in the depthwise convolution operation is 3×3. I(i4, j4, n4) represents the value of the second partial data at position (height position i4, width position j4, channel position n4), and Fdn4(i5, j5) represents the value of the read n4-th filter at position (height position i5, width position j5). A(i4, j4, n4) represents the value of the second output data or systolic array output at position (height position i4, width position j4, channel position n4), and its mathematical expression is:

A(i4, j4, n4) = Σ_{p=0}^{2} Σ_{q=0}^{2} I(i4+p, j4+q, n4) × Fdn4(p, q) ... (2)
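
A minimal sketch of expression (2), assuming the second partial data is held as a (height, width, channels) array and the depthwise filters as a (channels, 3, 3) array; the layout and names are illustrative, not the hardware's internal representation:

import numpy as np

def depthwise_second_operation(F_si, Fd):
    # F_si: second partial data, shape (H, W, C), e.g. (5, 30, 16)
    # Fd:   one 2-D filter per channel, shape (C, Kh, Kw), e.g. (16, 3, 3)
    # Each output channel n4 only uses input channel n4, per expression (2).
    H, W, C = F_si.shape
    _, Kh, Kw = Fd.shape
    out = np.zeros((H - Kh + 1, W - Kw + 1, C))
    for i4 in range(H - Kh + 1):
        for j4 in range(W - Kw + 1):
            for n4 in range(C):
                window = F_si[i4:i4 + Kh, j4:j4 + Kw, n4]
                out[i4, j4, n4] = np.sum(window * Fd[n4])
    return out

F_si = np.random.rand(5, 30, 16)
Fd = np.random.rand(16, 3, 3)
print(depthwise_second_operation(F_si, Fd).shape)  # -> (3, 28, 16)

Under these assumptions, each of the three output rows would correspond to one piece of second output data Fso1, Fso2, Fso3 in FIG. 10B to FIG. 10D.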

FIG. 10B is a schematic diagram of the second output data Fso1 according to an embodiment of the present invention. Referring to FIG. 10B, assume that the size (height Hso1, width Wso1, number of channels Cso1) of the currently processed second output data Fso1 is (1, 28, 16). The values in the second output data Fso1 follow expression (2) with i4 = 0, for example:

A(0, 0, n4) = Σ_{p=0}^{2} Σ_{q=0}^{2} I(p, q, n4) × Fdn4(p, q) ... (3)

A(0, 1, n4) = Σ_{p=0}^{2} Σ_{q=0}^{2} I(p, 1+q, n4) × Fdn4(p, q) ... (4)

A(0, 27, n4) = Σ_{p=0}^{2} Σ_{q=0}^{2} I(p, 27+q, n4) × Fdn4(p, q) ... (5)

The remaining values are deduced in the same way and are not repeated here.

FIG. 10C is a schematic diagram of the second output data Fso2 according to an embodiment of the present invention. Referring to FIG. 10C, the completed output 101 is the second output data Fso1 of FIG. 10B. The second output data Fso2 is the currently processed output 102, and its size may be the same as that of the second output data Fso1 of FIG. 10B. The values in the second output data Fso2 follow expression (2) with i4 = 1, for example:

A(1, 0, n4) = Σ_{p=0}^{2} Σ_{q=0}^{2} I(1+p, q, n4) × Fdn4(p, q) ... (6)

A(1, 1, n4) = Σ_{p=0}^{2} Σ_{q=0}^{2} I(1+p, 1+q, n4) × Fdn4(p, q) ... (7)

A(1, 27, n4) = Σ_{p=0}^{2} Σ_{q=0}^{2} I(1+p, 27+q, n4) × Fdn4(p, q) ... (8)

The remaining values are deduced in the same way and are not repeated here.

FIG. 10D is a schematic diagram of the second output data Fso3 according to an embodiment of the present invention. Referring to FIG. 10D, the second output data Fso3 is the currently processed output 102, and its size may be the same as that of the second output data Fso1 of FIG. 10B. The values in the second output data Fso3 follow expression (2) with i4 = 2, for example:

A(2, 0, n4) = Σ_{p=0}^{2} Σ_{q=0}^{2} I(2+p, q, n4) × Fdn4(p, q) ... (9)

A(2, 1, n4) = Σ_{p=0}^{2} Σ_{q=0}^{2} I(2+p, 1+q, n4) × Fdn4(p, q) ... (10)

A(2, 27, n4) = Σ_{p=0}^{2} Σ_{q=0}^{2} I(2+p, 27+q, n4) × Fdn4(p, q) ... (11)

The remaining values are deduced in the same way and are not repeated here.

In one embodiment, the second operator 155 adopts a systolic array structure. The second operator 155 divides the second partial data (i.e., part of the temporarily stored first output data) into multiple second systolic array inputs, and performs the second operation on each of these second systolic array inputs to obtain multiple second systolic array outputs. The size of each second systolic array output is limited by the size of the systolic array. For example, the number of elements of a second systolic array output is less than or equal to the capacity of the systolic array. In addition, the second systolic array outputs based on the same second partial data form the second output data. Taking FIG. 10B as an example, if the size of the systolic array is 16×16, the second output data Fso1 includes second systolic array outputs of 1×16×16 and 1×12×16.

For the next convolution layer, FIG. 11 is a flowchart of a data processing method based on a convolutional neural network according to an embodiment of the present invention. Referring to FIG. 11, in one embodiment, the second operator 155 may temporarily store one or more pieces of second output data in the second buffer area of the second temporary memory 171 (step S111; step S280). Specifically, and similarly, the embodiments of the present invention temporarily store the output of the previous convolution layer in the buffer of the next convolution layer instead of outputting the output data directly to the memory 110.

當第二暫存記憶體171或第二暫存區所暫存的第二輸出資料大於第二預定資料量時，第三運算器175可對第二輸出資料進行第三運算，以取得第三輸出資料(步驟S113)。具體而言，第三運算所輸入的第三部份資料包括第二暫存記憶體171所暫存的第二輸出資料，且第三部份資料的大小相關於第三運算的過濾器大小。假設第三運算是逐點(pointwise)卷積運算。逐點卷積運算的各過濾器的大小僅為1×1。相似於卷積運算，逐點卷積運算的各過濾器也是對所有通道的資料進行卷積運算。此外，只要暫存的那些第二輸出資料的高增加至過濾器的高(為1)且第二輸出資料的寬增加至過濾器的寬(為1)，過濾器即可與這些暫存的第二輸出資料(作為第三部份資料)進行逐點卷積運算。 When the second output data temporarily stored in the second temporary storage memory 171 or the second temporary storage area is greater than the second predetermined data amount, the third operator 175 can perform a third operation on the second output data to obtain the third output data (step S113). Specifically, the third part of the data input by the third operation includes the second output data temporarily stored in the second temporary storage memory 171, and the size of the third part of the data is related to the filter size of the third operation. Assume that the third operation is a pointwise convolution operation. The size of each filter of the pointwise convolution operation is only 1×1. Similar to the convolution operation, each filter of the pointwise convolution operation also performs a convolution operation on the data of all channels. In addition, as long as the height of the temporarily stored second output data increases to the height of the filter (1) and the width of the second output data increases to the width of the filter (1), the filter can perform pointwise convolution operations with these temporarily stored second output data (as the third part of data).
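下列為逐點(1×1)卷積的極簡NumPy草稿，用以說明「每個位置對所有通道做內積」的概念；pointwise_conv為假想函式名稱，非本專利電路的實作。 The following is a minimal NumPy sketch of a pointwise (1×1) convolution, showing that each position is a dot product across all channels; pointwise_conv is a hypothetical name, not the patented circuit.

# Minimal sketch (assumption, not the circuit itself): a pointwise (1x1)
# convolution is a per-position dot product across all input channels.
import numpy as np

def pointwise_conv(x, filters):
    """x: (H, W, C_in); filters: (C_out, C_in) -> output (H, W, C_out)."""
    return np.einsum('hwc,oc->hwo', x, filters)

x = np.random.rand(1, 28, 16)          # one buffered row of second output data
filters = np.random.rand(16, 16)       # 16 filters of size 1x1x16
y = pointwise_conv(x, filters)
print(y.shape)                         # (1, 28, 16)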

在一實施例中,依據圖10B~圖10D所示,每一筆的第二輸出資料皆可滿足逐點卷積運算所需的大小。因此,第二先入先出單元172可依序輸入每一筆第二輸出資料至第三運算器175。第三運算器175可對各暫存的第二輸出資料進行第三運算。 In one embodiment, as shown in FIG. 10B to FIG. 10D , each second output data can satisfy the size required for the point-by-point convolution operation. Therefore, the second first-in first-out unit 172 can sequentially input each second output data to the third operator 175. The third operator 175 can perform the third operation on each temporarily stored second output data.

例如，圖12A是依據本發明一實施例的暫存的第一輸出資料Ftfo的示意圖，且圖12B是依據本發明一實施例的暫存的第二輸出資料Ftso的示意圖。請參照圖12A及圖12B，若第二運算器155已完成對所暫存的多個第一輸出資料中的一部分的卷積運算，則第二暫存記憶體171可暫存大小(高,寬,通道數)為(1,Wso21,Cso2)的第二脈動陣列輸出或(1,Wso21+Wso21,Cso2)的第二輸出資料，並據以成為暫存的第二輸出資料Ftso。其中，通道數Cso2相同於通道數Ctf2。 For example, FIG. 12A is a schematic diagram of the first temporarily stored output data F tfo according to an embodiment of the present invention, and FIG. 12B is a schematic diagram of the second temporarily stored output data F tso according to an embodiment of the present invention. Referring to FIG. 12A and FIG. 12B, if the second operator 155 has completed the convolution operation on a portion of the temporarily stored multiple first output data, the second temporary storage memory 171 can temporarily store the second pulse array output of size (height, width, number of channels) (1, W so21 , C so2 ) or the second output data of (1, W so21 +W so21 , C so2 ), and accordingly become the temporarily stored second output data F tso . Wherein, the number of channels C so2 is the same as the number of channels C tf2 .

圖13A是依據本發明一實施例的第三運算的示意圖。請參照圖13A,第三運算器175將圖12B的暫存的第二輸出資料Ftso作為第三部份資料Fti(其寬Wso3為Wso21+Wso21),並對第三部份資料Fti與逐點卷積運算所用的過濾器Fp進行第三運算。 FIG13A is a schematic diagram of a third operation according to an embodiment of the present invention. Referring to FIG13A, the third operator 175 uses the second temporarily stored output data Ftso of FIG12B as the third partial data Fti (whose width Wso3 is Wso21+ Wso21 ), and performs a third operation on the third partial data Fti and the filter Fp used for the point-by-point convolution operation.

圖13B是依據本發明一實施例的第三輸出資料Fto的示意圖。請參照圖13A及圖13B，第三輸出資料Fto大小等同於第三部份資料Fti。即，寬Wto1相同於寬Wso3，且通道數Cto1相同於通道數Cso2。 FIG13B is a schematic diagram of the third output data F to according to an embodiment of the present invention. Referring to FIG13A and FIG13B , the third output data F to is equal in size to the third partial data F ti . That is, the width W to1 is equal to the width W so3 , and the number of channels C to1 is equal to the number of channels C so2 .

在一實施例中,第三運算器175採用脈動陣列結構。第三運算器175將第三部分資料區分成多個第三脈動陣列輸入,並分別對這些第三脈動陣列輸入進行第三運算,以取得多個第三脈動陣列輸出。各第三脈動陣列輸出的大小將受限於脈動陣列的大小。例如,第三脈動陣列輸出的元素量小於或等於脈動陣列的容量。此外,基於相同第三部份資料(即,部份暫存的第二輸出資料)的這些第三脈動陣列輸出組成第三輸出資料。例如,若第三部份資料的大小為1×28×16,且脈動陣列的大小為16×16,則第三輸出資料包括1×16×16及1×12×16的第三脈動陣列輸出。 In one embodiment, the third operator 175 adopts a pulse array structure. The third operator 175 divides the third portion of data into a plurality of third pulse array inputs, and performs a third operation on these third pulse array inputs respectively to obtain a plurality of third pulse array outputs. The size of each third pulse array output will be limited by the size of the pulse array. For example, the number of elements of the third pulse array output is less than or equal to the capacity of the pulse array. In addition, these third pulse array outputs based on the same third portion of data (i.e., part of the temporarily stored second output data) constitute the third output data. For example, if the size of the third part of the data is 1×28×16 and the size of the pulse array is 16×16, the third output data includes the third pulse array output of 1×16×16 and 1×12×16.
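下列草稿以矩陣乘法的觀點示意上述切割：在假設的16×16脈動陣列下，寬28的第三部份資料被切成寬16與寬12的兩塊，且切割不改變結果（僅為說明用的NumPy假設草稿）。 The sketch below views the same tiling as a matrix product: under an assumed 16×16 systolic array, a width-28 third partial data is split into width-16 and width-12 chunks, and the tiling does not change the result (an illustrative NumPy sketch only).

# Illustrative sketch only: viewing the 1x1 convolution as a matrix product
# that is fed to a 16x16 systolic array in width-wise chunks of at most 16.
import numpy as np

ARRAY_SIZE = 16
x = np.random.rand(28, 16)     # third partial data: 28 positions x 16 channels
w = np.random.rand(16, 16)     # 16 pointwise filters, 16 channels each

chunks = [x[i:i + ARRAY_SIZE] @ w for i in range(0, x.shape[0], ARRAY_SIZE)]
print([c.shape for c in chunks])              # [(16, 16), (12, 16)]
assert np.allclose(np.vstack(chunks), x @ w)  # tiling does not change the result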

例如,圖14A是依據本發明一實施例的脈動陣列輸出 SA6o的示意圖。請參照圖14A,表(6)是儲存在第二暫存記憶體171的第二輸出資料的資料:

For example, FIG14A is a schematic diagram of a pulse array output SA 6o according to an embodiment of the present invention. Referring to FIG14A, Table (6) is the data of the second output data stored in the second temporary memory 171:

I(i6,j6,n6)代表所讀取的輸入資料在位置(高位置i6,寬位置j6,通道位置n6)的值。而第二先入先出單元172由右至左且由上至下依序輸入那些資料至第三運算器175。 I(i6,j6,n6) represents the value of the input data read at the position (high position i6, wide position j6, channel position n6). The second FIFO unit 172 sequentially inputs those data from right to left and from top to bottom to the third operator 175.

表(7)為逐點卷積運算所用16個通道的1×1過濾器的資料:

Table (7) shows the data of the 16-channel 1×1 filter used in the point-by-point convolution operation:

Fdn(i7,j7,n7)代表所讀取的第n過濾器在位置(高位置i7,寬位置j7,通道位置n7)的值。 F dn (i7, j7, n7) represents the value read from the nth filter at position (high position i7, wide position j7, channel position n7).

表(9)為脈動陣列輸出： Table (9) shows the pulse array output:

A(i6,j6,n6)代表脈動陣列輸出在位置(高位置i6,寬位置j6,通道位置n6)的值，且其數學表示式為：A(i6,j6,n6)=I(i6,j6,0)×Fdn6(0,0,0)+I(i6,j6,1)×Fdn6(0,0,1)+…+I(i6,j6,15)×Fdn6(0,0,15)...(12)。 A(i6,j6,n6) represents the value of the pulse array output at the position (high position i6, wide position j6, channel position n6), and its mathematical expression is: A(i6,j6,n6)=I(i6,j6,0)×F dn6 (0,0,0)+I(i6,j6,1)×F dn6 (0,0,1)+…+I(i6,j6,15)×F dn6 (0,0,15)...(12).

因此，脈動陣列輸出SA6o的各值為(n6=0~15)：A(0,0,n6)=I(0,0,0)×Fdn6(0,0,0)+I(0,0,1)×Fdn6(0,0,1)+…+I(0,0,15)×Fdn6(0,0,15)...(13)；A(0,1,n6)=I(0,1,0)×Fdn6(0,0,0)+I(0,1,1)×Fdn6(0,0,1)+…+I(0,1,15)×Fdn6(0,0,15)...(14)。 Therefore, the values of the pulse array output SA 6o are (n6=0~15): A(0,0,n6)=I(0,0,0)× Fdn6 (0,0,0)+I(0,0,1)× Fdn6 (0,0,1)+…+I(0,0,15)× Fdn6 (0,0,15)...(13); A(0,1,n6)=I(0,1,0)× Fdn6 (0,0,0)+I(0,1,1)× Fdn6 (0,0,1)+…+I(0,1,15)× Fdn6 (0,0,15)...(14).

A(0,15,n6)=I(0,15,0)×Fdn6(0,0,0)+I(0,15,1)×Fdn6(0,0,1)+…+I(0,15,15)×Fdn6(0,0,15)...(15),其餘依此類推且不再贅述。 A(0,15,n6)=I(0,15,0)×F dn6 (0,0,0)+I(0,15,1)×F dn6 (0,0,1)+…+I(0,15,15)×F dn6 (0,0,15)…(15), and the rest can be deduced in the same way and will not be elaborated on.
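下列為式(13)~(15)的數值驗證草稿：每個輸出值即為單一位置的16個通道值與對應1×1過濾器權重的內積（僅為假設性的NumPy檢查，I、F為隨機示意資料）。 The following is a hypothetical NumPy check of formulas (13) to (15): each output value is the dot product of one position's 16 channel values with one filter's 16 weights; I and F are random illustrative data.

# A hypothetical numeric check of formulas (13)-(15): each output value is the
# dot product of one 16-channel input position with one 1x1x16 filter.
import numpy as np

I = np.random.rand(3, 28, 16)      # buffered second output data (h, w, c)
F = np.random.rand(16, 16)         # F[n6] holds filter n6's 16 channel weights

A_0_0 = np.array([np.dot(I[0, 0, :], F[n6]) for n6 in range(16)])
A_ref = I[0, 0, :] @ F.T           # same values computed in one step
assert np.allclose(A_0_0, A_ref)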

又例如,圖14B是依據本發明一實施例的脈動陣列輸出SA7o的示意圖。請參照圖14A,表(10)是儲存在第二暫存記憶體171的第二輸出資料的資料:

For another example, FIG. 14B is a schematic diagram of a pulse array output SA 7o according to an embodiment of the present invention. Referring to FIG. 14A , Table (10) is the data of the second output data stored in the second temporary memory 171:

表(11)為脈動陣列輸出:

Table (11) shows the pulse array output:

因此，脈動陣列輸出SA7o的各值為(n6=0~15)：A(0,16,n6)=I(0,16,0)×Fdn6(0,0,0)+I(0,16,1)×Fdn6(0,0,1)+…+I(0,16,15)×Fdn6(0,0,15)...(16)；A(0,17,n6)=I(0,17,0)×Fdn6(0,0,0)+I(0,17,1)×Fdn6(0,0,1)+…+I(0,17,15)×Fdn6(0,0,15)...(17)。 Therefore, the values of the pulse array output SA 7o are (n6=0~15): A(0,16,n6)=I(0,16,0)× Fdn6 (0,0,0)+I(0,16,1)× Fdn6 (0,0,1)+…+I(0,16,15)× Fdn6 (0,0,15)...(16); A(0,17,n6)=I(0,17,0)× Fdn6 (0,0,0)+I(0,17,1)× Fdn6 (0,0,1)+…+I(0,17,15)× Fdn6 (0,0,15)...(17).

A(0,27,n6)=I(0,27,0)×Fdn6(0,0,0)+I(0,27,1)×Fdn6(0,0,1)+…+I(0,27,15)×Fdn6(0,0,15)...(18),其餘依此類推且不再贅述。此外,圖14A的脈動陣列輸出SA6o為已完成輸出141,脈動陣列輸出SA7o為當前處理輸出142。 A(0,27,n6)=I(0,27,0)×F dn6 (0,0,0)+I(0,27,1)×F dn6 (0,0,1)+…+I(0,27,15)×F dn6 (0,0,15)…(18), and the rest is similar and will not be repeated. In addition, the pulse array output SA 6o of FIG. 14A is the completed output 141, and the pulse array output SA 7o is the current processing output 142.

再例如,圖14C是依據本發明一實施例的脈動陣列輸出 SA8o的示意圖。請參照圖14A,表(12)是儲存在第二暫存記憶體171的第二輸出資料的資料:

For another example, FIG14C is a schematic diagram of a pulse array output SA 8o according to an embodiment of the present invention. Referring to FIG14A, Table (12) is the data of the second output data stored in the second temporary memory 171:

表(13)為脈動陣列輸出:

Table (13) shows the pulse array output:

因此，最後一筆當前處理輸出142的脈動陣列輸出SA8o的各值為(n6=0~15)：A(2,16,n6)=I(2,16,0)×Fdn6(0,0,0)+I(2,16,1)×Fdn6(0,0,1)+…+I(2,16,15)×Fdn6(0,0,15)...(19)；A(2,17,n6)=I(2,17,0)×Fdn6(0,0,0)+I(2,17,1)×Fdn6(0,0,1)+…+I(2,17,15)×Fdn6(0,0,15)...(20)。 Therefore, the values of the pulse array output SA 8o of the last current processing output 142 are (n6=0~15): A(2,16,n6)=I(2,16,0)× Fdn6 (0,0,0)+I(2,16,1)× Fdn6 (0,0,1)+…+I(2,16,15)× Fdn6 (0,0,15)...(19); A(2,17,n6)=I(2,17,0)× Fdn6 (0,0,0)+I(2,17,1)× Fdn6 (0,0,1)+…+I(2,17,15)× Fdn6 (0,0,15)...(20).

A(2,27,n6)= I(2,27,0)×Fdn6(0,0,0)+I(2,27,1)×Fdn6(0,0,1)+…+I(2,27,15)×Fdn6(0,0,15)...(21),其餘依此類推且不再贅述。 A(2,27,n6)= I(2,27,0)×F dn6 (0,0,0)+I(2,27,1)×F dn6 (0,0,1)+…+I(2,27,15)×F dn6 (0,0,15)…(21), and the rest can be deduced in the same way and will not be elaborated on here.

在一實施例中，第三運算器175運行第三運算時，第一運算器135及第二運算器155持續分別運行第一運算及第二運算。也就是說，若第一運算器135及第二運算器155對所有輸入資料的運算尚未完成，則第一運算器135、第二運算器155及第三運算器175的運算可一併進行。 In one embodiment, when the third operator 175 performs the third operation, the first operator 135 and the second operator 155 continue to perform the first operation and the second operation respectively. In other words, if the first operator 135 and the second operator 155 have not completed the operation of all input data, the operations of the first operator 135, the second operator 155 and the third operator 175 can be performed together.
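下列以簡化的軟體模型示意三個運算器的管線化並行（僅為概念性假設，非實際電路時序；pipeline、stage1等名稱為示意用）。 The following simplified software model illustrates how the three operators overlap in a pipeline; it is a conceptual assumption, not the circuit's actual timing, and names such as pipeline and stage1 are illustrative.

# A conceptual software model (assumption, not RTL): while operator 3 works on
# tile t, operator 2 works on tile t+1 and operator 1 on tile t+2, so the three
# operations overlap instead of running one after another.
def pipeline(tiles, op1, op2, op3):
    stage1, stage2 = [], []          # small buffers standing in for SRAM areas
    results = []
    in_flight = list(tiles)
    while in_flight or stage1 or stage2:
        if stage2:
            results.append(op3(stage2.pop(0)))   # third operation (pointwise)
        if stage1:
            stage2.append(op2(stage1.pop(0)))    # second operation (depthwise)
        if in_flight:
            stage1.append(op1(in_flight.pop(0))) # first operation (convolution)
    return results

print(pipeline([1, 2, 3], lambda x: x + 10, lambda x: x * 2, lambda x: -x))
# [-22, -24, -26]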

請參照圖2,最後,第三運算器175將第三運算所得出的第三輸出資料輸出至記憶體110(步驟S290)。 Please refer to Figure 2. Finally, the third operator 175 outputs the third output data obtained by the third operation to the memory 110 (step S290).

為了方便理解完整流程,以下再舉一實施例說明。圖15是依據本發明一實施例的MobileNet架構的資料處理方法的流程圖。請參照圖15,第一運算器135自記憶體110中的輸入資料讀取已定義區段的寬的資料以做作第一部分資料(步驟S1501)。第一運算器135判斷當前處理的條數是否大於或等於(第一過濾器的大小-1)且這條數與第一運算所用的第一跨步(stride)相除所得的餘數是否為1(步驟S1503)。若符合步驟S1503的條件,則第一先入先出單元132依序將第一部分資料輸出至第一運算器135(步驟S1505)。第一運算器135自記憶體110讀取卷積運算所用的第一過濾器中的權重(步驟S1511),進行卷積運算(步驟S1513),並將所得的第一輸出資料輸出至第一暫存記憶體151(步驟S1515)。第一運算器135判斷當前區段中的當前條(其大小相同於過濾器的大小)的所有資料是否皆已進行卷積運算(步驟S1517)。若這些資料 尚未完成卷積運算,則第一先入先出單元132繼續將第一部分資料輸出至第一運算器135(步驟S1505)。若這條的所有資料皆已進行卷積運算,則第一運算器135判斷當前區段中的所有條的所有資料是否皆已進行卷積運算(步驟S1518)。若當前區段中的一條或更多條的資料尚未完成卷積運算或步驟S1503的條件為符合,則第一運算器135自記憶體110中的輸入資料繼續處理下一條的資料(步驟S1507)。若當前區段中的每一條的所有資料皆已進行卷積運算,則第一運算器135判斷所有區段的所有資料是否皆已進行卷積運算(步驟S1519)。若尚有區段的資料未完成卷積運算,則第一運算器135繼續處理下一個區段的資料(步驟S1509)。此外,第一運算器135將當前處理的條數歸零,並設定當前處理的寬數設為:原寬數+區段的寬-(第一運算所用的第一跨步-1+第二運算所用的第二跨步-1)。若所有區段的所有資料皆已進行卷積運算,則第一運算器135完成對輸入資料的所有卷積運算(步驟S1520)。 In order to facilitate understanding of the complete process, another embodiment is given below for illustration. Figure 15 is a flow chart of a data processing method of a MobileNet architecture according to an embodiment of the present invention. Referring to Figure 15, the first operator 135 reads the width data of a defined segment from the input data in the memory 110 to make the first part of the data (step S1501). The first operator 135 determines whether the number of items currently being processed is greater than or equal to (the size of the first filter - 1) and whether the remainder obtained by dividing this number by the first stride used in the first operation is 1 (step S1503). If the conditions of step S1503 are met, the first first-in-first-out unit 132 outputs the first part of the data to the first operator 135 in sequence (step S1505). The first operator 135 reads the weights in the first filter used for the convolution operation from the memory 110 (step S1511), performs the convolution operation (step S1513), and outputs the obtained first output data to the first temporary memory 151 (step S1515). The first operator 135 determines whether all the data of the current strip (whose size is the same as the size of the filter) in the current segment have been convolution-operated (step S1517). If the data has not completed the convolution operation, the first FIFO unit 132 continues to output the first part of the data to the first operator 135 (step S1505). If all the data of this piece have been convolved, the first operator 135 determines whether all the data of all the pieces in the current segment have been convolved (step S1518). If one or more pieces of data in the current segment have not completed the convolution operation or the condition of step S1503 is met, the first operator 135 continues to process the next piece of data from the input data in the memory 110 (step S1507). If all the data of each piece in the current segment have been convolved, the first operator 135 determines whether all the data of all the segments have been convolved (step S1519). If there are still segments of data that have not completed the convolution operation, the first operator 135 continues to process the data of the next segment (step S1509). In addition, the first operator 135 resets the number of currently processed items to zero and sets the width of the current processing to: original width + segment width - (first stride used in the first operation - 1 + second stride used in the second operation - 1). 
If all data in all segments have been convolutionally operated, the first operator 135 completes all convolution operations on the input data (step S1520).
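下列為圖15中步驟S1503的判斷條件與區段寬度更新公式的推測性程式碼表述（ready_for_convolution等名稱為假想，僅供對照上述文字）。 The following is a speculative code rendering of the condition in step S1503 and the segment-width update formula of FIG. 15; names such as ready_for_convolution are hypothetical and only mirror the text above.

# A speculative rendering of two bookkeeping rules in FIG. 15
# (step S1503 and the segment-width update); names are illustrative only.
def ready_for_convolution(rows_done, kernel_size, stride1):
    """Step S1503: enough rows buffered and the row count lands on a stride."""
    return rows_done >= kernel_size - 1 and rows_done % stride1 == 1

def next_segment_width_start(width_start, segment_width, stride1, stride2):
    """Width index for the next segment, keeping the overlap both strides need."""
    return width_start + segment_width - ((stride1 - 1) + (stride2 - 1))

print(ready_for_convolution(rows_done=3, kernel_size=3, stride1=2))   # True
print(next_segment_width_start(0, 28, stride1=2, stride2=2))          # 26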

第二運算器155判斷第一暫存記憶體151所暫存的第一輸出資料是否大於第一預定資料量(步驟S1521)。第二過濾器的大小以3×3為例，則第二運算器155判斷所暫存的第一輸出資料是否已有三條。第二運算器155判斷在第一運算已處理的條數與第二運算所用的第二跨步相除所得的餘數是否等於零(步驟S1523)。若餘數為零，則第二運算器155讀取暫存的第一輸出資料並作為第二部分資料(步驟S1525)。 The second operator 155 determines whether the first output data temporarily stored in the first temporary memory 151 is greater than the first predetermined data amount (step S1521). For example, if the size of the second filter is 3×3, the second operator 155 determines whether three rows of first output data have been temporarily stored. The second operator 155 determines whether the remainder obtained by dividing the number of rows processed in the first operation by the second stride used in the second operation is equal to zero (step S1523). If the remainder is zero, the second operator 155 reads the temporarily stored first output data and uses it as the second part of the data (step S1525).
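步驟S1521與S1523的檢查可概括為下列假想的判斷式（僅為示意，非電路實作）。 The checks of steps S1521 and S1523 can be summarized by the following hypothetical predicate (for illustration only, not the circuit implementation).

# Hypothetical sketch of the checks in steps S1521/S1523: start the depthwise
# pass only when enough rows are buffered and the row count matches the stride.
def should_run_depthwise(buffered_rows, filter_height, rows_done, stride2):
    return buffered_rows >= filter_height and rows_done % stride2 == 0

print(should_run_depthwise(buffered_rows=3, filter_height=3, rows_done=4, stride2=2))  # True
print(should_run_depthwise(buffered_rows=2, filter_height=3, rows_done=4, stride2=2))  # False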

若步驟S1521及步驟S1523未符合條件,則第二運算器 155判斷目前處理的資料是否為第一筆第一部分資料(步驟S1531)。若步驟S1531未符合條件,則結束所有第二運算(步驟S1540)。另一方面,第二運算器155自記憶體110讀取逐深度卷積運算所用的第二過濾器中的權重(步驟S1533),進行逐深度卷積運算(步驟S1535),並將所得的第二輸出資料輸出至第二暫存記憶體171(步驟S1537)。第二運算器155判斷當前區段中的所有條的所有資料是否皆已進行逐深度卷積運算(步驟S1538)。若當前區段中的一條或更多條的資料尚未完成卷積運算,則第二運算器155位移至下一個點索引(例如,相隔第二跨步)(步驟S1527)並處理下一筆的資料。直到第一暫存記憶體151中的所有條的所有資料皆已進行逐深度卷積運算,則第二運算器155將當前處理的條數設為:原條數+1,且將當前處理的寬數歸零(步驟S1539)。接著,第二運算器155完成對第二輸入資料的所有逐深度卷積運算(步驟S1540)。 If step S1521 and step S1523 do not meet the conditions, the second operator 155 determines whether the data currently being processed is the first batch of first part data (step S1531). If step S1531 does not meet the conditions, all second operations are terminated (step S1540). On the other hand, the second operator 155 reads the weights in the second filter used for the depth-by-depth convolution operation from the memory 110 (step S1533), performs the depth-by-depth convolution operation (step S1535), and outputs the obtained second output data to the second temporary memory 171 (step S1537). The second operator 155 determines whether all data of all bars in the current segment have been subjected to the depth-by-depth convolution operation (step S1538). If one or more data in the current segment have not completed the convolution operation, the second operator 155 moves to the next point index (for example, the second stride apart) (step S1527) and processes the next data. Until all data in all data in the first temporary memory 151 have been subjected to depth-wise convolution operation, the second operator 155 sets the number of currently processed data to: the original number of data + 1, and returns the currently processed width to zero (step S1539). Then, the second operator 155 completes all depth-wise convolution operations on the second input data (step S1540).
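下列為步驟S1535之逐深度卷積的極簡NumPy草稿（每個通道各自套用一個3×3過濾器），僅為說明用的假設實作，非脈動陣列電路本身。 The following is a minimal NumPy sketch of the depthwise convolution in step S1535 (one 3×3 filter applied per channel); it is an assumed illustration, not the systolic-array circuit itself.

# Minimal NumPy sketch of a depthwise convolution (one kh x kw filter per
# channel, applied channel by channel); an assumption for clarity only.
import numpy as np

def depthwise_conv(x, filters, stride=1):
    """x: (H, W, C); filters: (kh, kw, C) -> (H_out, W_out, C)."""
    kh, kw, c = filters.shape
    h_out = (x.shape[0] - kh) // stride + 1
    w_out = (x.shape[1] - kw) // stride + 1
    out = np.zeros((h_out, w_out, c))
    for i in range(h_out):
        for j in range(w_out):
            patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw, :]
            out[i, j, :] = np.sum(patch * filters, axis=(0, 1))
    return out

y = depthwise_conv(np.random.rand(5, 30, 16), np.random.rand(3, 3, 16), stride=1)
print(y.shape)  # (3, 28, 16)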

第三運算器175判斷第二暫存記憶體171所暫存的第二輸出資料是否已達到一條第二輸出資料(步驟S1541)。第三運算器175讀取暫存的第二輸出資料並作為第三部份資料(步驟S1543)。第三運算器175自記憶體110讀取逐點卷積運算所用的第三過濾器中的權重(步驟S1551),進行逐點卷積運算(步驟S1553),並將所得的第三輸出資料輸出至記憶體110(步驟S1555)。第三運算器175判斷第二暫存記憶體171中的所有條的所有資料是否皆已進行逐點卷積運算。若這些資料尚未完成逐點卷積運算,則第二先入先出單元172繼續將第三部份資料輸出至第三運算器175。若第二 暫存記憶體171中的所有條的所有資料皆已完成逐點卷積運算,則第三運算器175完成對第三部份資料的所有逐點卷積運算(步驟S1560)。 The third operator 175 determines whether the second output data temporarily stored in the second temporary storage memory 171 has reached one second output data (step S1541). The third operator 175 reads the temporarily stored second output data and uses it as the third part of data (step S1543). The third operator 175 reads the weight in the third filter used for the point-by-point convolution operation from the memory 110 (step S1551), performs the point-by-point convolution operation (step S1553), and outputs the obtained third output data to the memory 110 (step S1555). The third operator 175 determines whether all the data in all the pieces in the second temporary storage memory 171 have been subjected to the point-by-point convolution operation. If these data have not completed the point-by-point convolution operation, the second FIFO unit 172 continues to output the third part of the data to the third operator 175. If all the data in all the entries in the second temporary memory 171 have completed the point-by-point convolution operation, the third operator 175 completes all the point-by-point convolution operations on the third part of the data (step S1560).
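下列為步驟S1541~S1555控制流程的粗略草稿：每當第二暫存區累積到一條第二輸出資料，即進行逐點卷積並寫回記憶體（函式與變數名稱皆為假設）。 The following is a rough sketch of the control flow of steps S1541 to S1555: whenever one row of second output data is buffered, a pointwise convolution is run and the result is written back to memory (all function and variable names are assumptions).

# A rough control-flow sketch of steps S1541-S1555 (names are assumptions):
# whenever a full row of second output data is buffered, run the pointwise
# convolution on it and write the result back to main memory.
import numpy as np

def third_operator_drain(second_buffer, pw_filters, memory_out):
    while second_buffer:                       # one buffered row per entry
        row = second_buffer.pop(0)             # (1, W, C_in), e.g. (1, 28, 16)
        result = np.einsum('hwc,oc->hwo', row, pw_filters)
        memory_out.append(result)              # stands in for the write to memory 110

memory = []
buf = [np.random.rand(1, 28, 16) for _ in range(3)]
third_operator_drain(buf, np.random.rand(16, 16), memory)
print(len(memory), memory[0].shape)            # 3 (1, 28, 16)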

本發明實施例更提供一種電腦可讀取儲存媒體(例如,硬碟、光碟、快閃記憶體、固態硬碟(Solid State Disk,SSD)等儲存媒體),並用以儲存程式碼。運算電路100或其他處理器可載入程式碼,並據以執行本發明實施例的一個或更多個資料處理方法的相應流程。這些流程可參酌上文說明,且於此不再贅述。 The present invention further provides a computer-readable storage medium (e.g., a hard disk, an optical disk, a flash memory, a solid state disk (SSD), etc.) for storing program code. The computing circuit 100 or other processor can load the program code and execute the corresponding process of one or more data processing methods of the present invention. These processes can be referred to the above description and will not be repeated here.

綜上所述,在本發明實施例的基於卷積神經網路的運算電路、資料處理方法及電腦可讀取儲存媒體中,暫存第一輸出資料及/或第二輸出資料而不輸出至記憶體,並可在暫存的資料符合第二運算及/或第三運算所需的大小的情況下,開始第二運算及/或第三運算。藉此,可減少記憶體的存取次數,並提升運算效率。 In summary, in the computing circuit, data processing method and computer-readable storage medium based on the convolutional neural network of the embodiment of the present invention, the first output data and/or the second output data are temporarily stored without being output to the memory, and the second operation and/or the third operation can be started when the temporarily stored data meets the size required by the second operation and/or the third operation. In this way, the number of memory accesses can be reduced and the computing efficiency can be improved.

雖然本發明已以實施例揭露如上,然其並非用以限定本發明,任何所屬技術領域中具有通常知識者,在不脫離本發明的精神和範圍內,當可作些許的更動與潤飾,故本發明的保護範圍當視後附的申請專利範圍所界定者為準。 Although the present invention has been disclosed as above by the embodiments, it is not intended to limit the present invention. Anyone with ordinary knowledge in the relevant technical field can make some changes and modifications without departing from the spirit and scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the scope defined by the attached patent application.

S210~S290:步驟 S210~S290: Steps

Claims (13)

一種基於卷積神經網路(Conventional Neural Network,CNN)的資料處理方法,包括:自一記憶體讀取一輸入資料;透過一第一運算器對該輸入資料中的一第一部份資料進行一第一運算,以取得一第一輸出資料,其中該第一運算設有一第一過濾器,該第一輸出資料的大小相關於該第一運算的該第一過濾器的大小及該第一部份資料的大小;暫存該第一輸出資料於一第一暫存區;當該第一暫存區所暫存的該第一輸出資料的量大於一第一預定資料量、MH×Hf1o未小於Hkd且MW×Wf1o未小於Wkd時,透過一第二運算器對該第一輸出資料進行一第二運算,以取得一第二輸出資料,其中該第二運算設有一第二過濾器,該第一預定資料量為該第二過濾器的大小,該第二輸出資料的大小相關於該第二運算的該第二過濾器的大小,該第二過濾器的高(height)為Hkd,該第二過濾器的寬(width)為Wkd,該第二過濾器的大小為Wkd×Hkd,每一區段的該第一輸出資料的高為Hf1o,每一區段的該第一輸出資料的寬為Wf1o,Hkd、Wkd、Hf1o及Wf1o為正整數,該第一暫存區所暫存的該第一輸出資料的量所組成的高為MH×Hf1o且所組成的寬為MW×Wf1o,MH及MW為倍數且為正整數;暫存該第二輸出資料於一第二暫存區;以及對該第二輸出資料進行一第三運算所得出的一第三輸出資料 輸出至該記憶體,其中,對該第一輸出資料進行該第二運算時,持續對該輸入資料進行該第一運算。 A data processing method based on a convolutional neural network (CNN) includes: reading an input data from a memory; performing a first operation on a first part of the input data through a first operator to obtain a first output data, wherein the first operation is provided with a first filter, and the size of the first output data is related to the size of the first filter of the first operation and the size of the first part of the data; temporarily storing the first output data in a first temporary storage area; when the amount of the first output data temporarily stored in the first temporary storage area is greater than a first predetermined amount of data, M H ×H f1o is not less than H kd , and M W ×W f1o is not less than W kd , a second operation is performed on the first output data through a second operator to obtain a second output data, wherein the second operation is provided with a second filter, the first predetermined data amount is the size of the second filter, the size of the second output data is related to the size of the second filter of the second operation, the height of the second filter is H kd , the width of the second filter is W kd , the size of the second filter is W kd ×H kd , the height of the first output data of each segment is H f1o , the width of the first output data of each segment is W f1o , H kd , W kd , H f1o and W f1o are positive integers, and the amount of the first output data temporarily stored in the first buffer area is composed of a height of M H ×H f1o and a width of M W ×W f1o , MH and MW are multiples and positive integers; temporarily storing the second output data in a second temporary storage area; and performing a third operation on the second output data to output a third output data to the memory, wherein when the second operation is performed on the first output data, the first operation is continuously performed on the input data. 如請求項1所述的基於卷積神經網路的資料處理方法,其中該第二運算不同於該第三運算,且暫存該第二輸出資料於該第二暫存區記憶體的步驟包括:當該第二暫存區所暫存的該第二輸出資料的量大於一第二預定資料量時,對該第二輸出資料進行該第三運算,以取得該第三輸出資料,其中該第三運算設有一第三過濾器,且該第三輸出資料的大小相關於該第三過濾器的大小。 The data processing method based on the convolution neural network as described in claim 1, wherein the second operation is different from the third operation, and the step of temporarily storing the second output data in the second buffer memory includes: when the amount of the second output data temporarily stored in the second buffer is greater than a second predetermined amount of data, performing the third operation on the second output data to obtain the third output data, wherein the third operation is provided with a third filter, and the size of the third output data is related to the size of the third filter. 
如請求項2所述的基於卷積神經網路的資料處理方法,其中對該第二輸出資料進行該第三運算所得出的該第三輸出資料輸出至該記憶體的步驟包括:對該第二輸出資料進行該第三運算時,持續進行該第一運算及該第二運算。 As described in claim 2, the data processing method based on the convolutional neural network, wherein the step of outputting the third output data obtained by performing the third operation on the second output data to the memory includes: when performing the third operation on the second output data, continuously performing the first operation and the second operation. 如請求項1所述的基於卷積神經網路的資料處理方法,其中該第二運算為一逐深度(depthwise)卷積運算,且對該第一輸出資料進行該第二運算的步驟包括:當該第一暫存區所暫存的該第一輸出資料的量大於Wkd×Hkd時,進行該第二運算。 A data processing method based on a convolutional neural network as described in claim 1, wherein the second operation is a depthwise convolution operation, and the step of performing the second operation on the first output data includes: performing the second operation when the amount of the first output data stored in the first buffer is greater than W kd ×H kd . 如請求項2所述的基於卷積神經網路的資料處理方法,其中該第三運算為一逐點(pointwise)卷積運算,該第三過濾器 的高及寬皆為1,且對該第三輸入資料進行該第三運算的步驟包括:對每一該暫存的第二輸出資料進行該第三運算。 A data processing method based on a convolutional neural network as described in claim 2, wherein the third operation is a pointwise convolution operation, the height and width of the third filter are both 1, and the step of performing the third operation on the third input data includes: performing the third operation on each of the temporarily stored second output data. 如請求項4所述的基於卷積神經網路的資料處理方法,其中該第一運算為一卷積運算,且自該記憶體讀取該輸入資料的步驟包括:對該輸入資料往一第一滑動方向讀取該第一部分資料,其中該第一滑動方向平行於該輸入資料的高。 As described in claim 4, the data processing method based on the convolution neural network, wherein the first operation is a convolution operation, and the step of reading the input data from the memory includes: reading the first portion of the input data in a first sliding direction, wherein the first sliding direction is parallel to the height of the input data. 如請求項4所述的基於卷積神經網路的資料處理方法,其中該第一運算為一卷積運算,且自該記憶體讀取該輸入資料的步驟包括:對該輸入資料往一第二滑動方向讀取該第一部分資料,其中該第二滑動方向平行於該輸入資料的寬。 A data processing method based on a convolutional neural network as described in claim 4, wherein the first operation is a convolution operation, and the step of reading the input data from the memory includes: reading the first portion of the input data in a second sliding direction, wherein the second sliding direction is parallel to the width of the input data. 如請求項2所述的基於卷積神經網路的資料處理方法,其中對該輸入資料中的該第一部份資料進行該第一運算、對該第一輸出資料進行該第二運算、或對該第二輸出資料進行該第三運算的步驟包括:將該第一部份資料區分成多個第一脈動陣列(systolic array)輸入;以及分別對該些第一脈動陣列輸入進行該第一運算,以取得多個第一脈動陣列輸出,其中該些第一脈動陣列輸出組成該第一輸出資料。 The data processing method based on the convolution neural network as described in claim 2, wherein the step of performing the first operation on the first part of the input data, performing the second operation on the first output data, or performing the third operation on the second output data includes: dividing the first part of the data into a plurality of first systolic array inputs; and performing the first operation on the first systolic array inputs respectively to obtain a plurality of first systolic array outputs, wherein the first systolic array outputs constitute the first output data. 
一種基於卷積神經網路的運算電路,包括:一記憶體,用以儲存一輸入資料;一處理元件,耦接該記憶體,並包括:一第一運算器,用以對該輸入資料中的一第一部份資料進行一第一運算以取得一第一輸出資料,並暫存該第一輸出資料至該處理元件的一第一暫存記憶體,其中該第一運算設有一第一過濾器,該第一輸出資料的大小相關於該第一運算的該第一過濾器的大小及該第一部份資料的大小;一第二運算器,用以當該第一暫存記憶體所暫存的該第一輸出資料的量大於一第一預定資料量、MH×Hf1o未小於Hkd且MW×Wf1o未小於Wkd時,對該第一輸出資料進行該第二運算以取得一第二輸出資料,並暫存該第二輸出資料至一第二暫存記憶體,其中該第二運算設有一第二過濾器,該第一預定資料量為該第二過濾器的大小,該第二輸出資料的大小相關於該第二運算的該第二過濾器的大小,該第二過濾器的高(height)為Hkd,該第二過濾器的寬(width)為Wkd,該第二過濾器的大小為Wkd×Hkd,每一區段的該第一輸出資料的高為Hf1o,每一區段的該第一輸出資料的寬為Wf1o,Hkd、Wkd、Hf1o及Wf1o為正整數,該第一暫存記憶體所暫存的該第一輸出資料的量所組成的高為MH×Hf1o且所組成的寬為MW×Wf1o,MH及MW為倍數且為正整數;該第二暫存記憶體,用以儲存該第二輸出資料;以及一第三運算器,用以將對該第二輸出資料進行一第三運 算所得出的一第三輸出資料輸出至該記憶體,其中,該第二運算器進行該第二運算時,該第一運算器持續進行該第一運算。 A computation circuit based on a convolutional neural network comprises: a memory for storing input data; a processing element coupled to the memory and comprising: a first operator for performing a first operation on a first part of the input data to obtain a first output data, and temporarily storing the first output data in a first temporary storage of the processing element, wherein the first operation is provided with a first filter, and the size of the first output data is related to the size of the first filter of the first operation and the size of the first part of the data; a second operator for performing a first operation on a first part of the input data to obtain a first output data when the amount of the first output data temporarily stored in the first temporary storage is greater than a first predetermined amount of data, M H ×H f1o is not less than H kd , and M W ×W f1o is not less than W kd , the second operation is performed on the first output data to obtain a second output data, and the second output data is temporarily stored in a second temporary storage, wherein the second operation is provided with a second filter, the first predetermined data amount is the size of the second filter, the size of the second output data is related to the size of the second filter of the second operation, the height of the second filter is H kd , the width of the second filter is W kd , the size of the second filter is W kd ×H kd , the height of the first output data of each segment is H f1o , the width of the first output data of each segment is W f1o , H kd , W kd , H f1o and W f1o is a positive integer, the amount of the first output data temporarily stored in the first temporary storage memory has a height of MH ×H f1o and a width of MW ×W f1o , MH and MW are multiples and positive integers; the second temporary storage memory is used to store the second output data; and a third operator is used to output a third output data obtained by performing a third operation on the second output data to the memory, wherein when the second operator performs the second operation, the first operator continues to perform the first operation. 如請求項9所述的基於卷積神經網路的運算電路,其中,該第一運算器在一單位時間內具有一第一最大運算量,該第二運算器在該單位時間內具有一第二最大運算量,該第三運算器在該單位時間內具有一第三最大運算量,該第一最大運算量大於該第二最大運算量,該第一最大運算量大於該第三最大運算量。 The computing circuit based on the convolution neural network as described in claim 9, wherein the first operator has a first maximum computing amount within a unit time, the second operator has a second maximum computing amount within the unit time, the third operator has a third maximum computing amount within the unit time, the first maximum computing amount is greater than the second maximum computing amount, and the first maximum computing amount is greater than the third maximum computing amount. 
如請求項9所述的基於卷積神經網路的運算電路,其中,該第三運算器運行該第三運算時,該第一運算器及該第二運算器持續運行該第一運算及該第二運算。 The computing circuit based on the convolutional neural network as described in claim 9, wherein when the third operator performs the third operation, the first operator and the second operator continue to perform the first operation and the second operation. 如請求項9所述的基於卷積神經網路的運算電路,其中,該第一暫存記憶體以及該第二暫存記憶體為一靜態隨機存取記憶體。 The computing circuit based on the convolutional neural network as described in claim 9, wherein the first temporary storage memory and the second temporary storage memory are static random access memories. 一種電腦可讀取儲存媒體,用於儲存一程式碼,一處理器載入該程式碼以執行如請求項1至8中任一項所述的基於卷積神經網路的資料處理方法。 A computer-readable storage medium for storing a program code, a processor loading the program code to execute a data processing method based on a convolutional neural network as described in any one of claims 1 to 8.
TW110140625A 2021-01-21 2021-11-01 Computing circuit and data processing method based on convolution neural network and computer readable storage medium TWI840715B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210055056.1A CN114781626A (en) 2021-01-21 2022-01-18 Arithmetic circuit, data processing method, and computer-readable storage medium
US17/578,416 US20220230055A1 (en) 2021-01-21 2022-01-18 Computing circuit and data processing method based on convolutional neural network and computer readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163139809P 2021-01-21 2021-01-21
US63/139,809 2021-01-21

Publications (2)

Publication Number Publication Date
TW202230229A TW202230229A (en) 2022-08-01
TWI840715B true TWI840715B (en) 2024-05-01

Family

ID=

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190147318A1 (en) 2017-11-14 2019-05-16 Google Llc Highly Efficient Convolutional Neural Networks

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190147318A1 (en) 2017-11-14 2019-05-16 Google Llc Highly Efficient Convolutional Neural Networks

Similar Documents

Publication Publication Date Title
US11948070B2 (en) Hardware implementation of a convolutional neural network
EP3265907B1 (en) Data processing using resistive memory arrays
CN108573305B (en) Data processing method, equipment and device
CN105117351B (en) To the method and device of buffering write data
CN108629406B (en) Arithmetic device for convolutional neural network
WO2022206556A1 (en) Matrix operation method and apparatus for image data, device, and storage medium
CN108804973B (en) Hardware architecture of target detection algorithm based on deep learning and execution method thereof
JP7381429B2 (en) Storage system and method for accelerating hierarchical sorting around storage
TWI648640B (en) A parallel hardware searching system for building artificial intelligent computer
TWI840715B (en) Computing circuit and data processing method based on convolution neural network and computer readable storage medium
CN108764182B (en) Optimized acceleration method and device for artificial intelligence
US20180173460A1 (en) Contention reduction scheduler for nand flash array with raid
CN116992203A (en) FPGA-based large-scale high-throughput sparse matrix vector integer multiplication method
TW202230229A (en) Computing circuit and data processing method based on convolution neural network and computer readable storage medium
CN109902821B (en) Data processing method and device and related components
CN109800867B (en) Data calling method based on FPGA off-chip memory
CN111047026B (en) Memory chip capable of executing artificial intelligent operation and operation method thereof
Okafor et al. Fusing in-storage and near-storage acceleration of convolutional neural networks
CN114781626A (en) Arithmetic circuit, data processing method, and computer-readable storage medium
JP2024516514A (en) Memory mapping of activations for implementing convolutional neural networks
US20220172032A1 (en) Neural network circuit
CN109002467B (en) Database sorting method and system based on vectorization execution
Ulrich NODF–a FORTRAN program for nestedness analysis
Huai et al. Crossbar-aligned & integer-only neural network compression for efficient in-memory acceleration
TWI788257B (en) Method and non-transitory computer readable medium for compute-in-memory macro arrangement, and electronic device applying the same