WO2020059074A1 - Data construct, information processing device, method, and program - Google Patents
- Publication number
- WO2020059074A1 (PCT/JP2018/034779)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- exponent
- value
- floating
- mantissa
- bit
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F7/38—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
- G06F7/48—Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
- G06F7/483—Computations with numbers represented by a non-linear combination of denominational numbers, e.g. rational numbers, logarithmic number system or floating-point numbers
Definitions
- the present disclosure relates to information processing technology.
- Conventionally, as information processing using floating-point numbers, there have been proposed: a technique for compressing audio data by combining the fixed-point and floating-point methods, using a flag indicating whether the representation is fixed-point or floating-point (see Patent Document 1); a circuit for changing the bit length of data expressed in the IEEE 754 floating-point format (see Patent Document 2); and a technique for reducing the scale of a circuit that handles denormalized numbers in the IEEE 754 floating-point format (see Patent Document 3).
- In order to represent the input/output/weight data of each layer of a convolutional neural network (CNN) in 8-bit fixed-point representation while losing as little recognition accuracy as possible, a method has been proposed in which a calibration data set is first inferred using 32-bit floating-point representation, and a scale factor is calculated so that the information loss between the distribution of each layer's data thus obtained and the distribution obtained by quantizing it is minimized (see Non-Patent Document 1).
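The calibration step above can be illustrated with a small sketch. The specific information-loss measure (KL divergence between a clipped histogram and its re-quantized version), the bin counts, and all function names below are illustrative assumptions for this sketch, not details taken from Non-Patent Document 1:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL divergence between two histograms (normalized inside)."""
    sp, sq = sum(p), sum(q)
    return sum((pi / sp) * math.log((pi / sp + eps) / (qi / sq + eps))
               for pi, qi in zip(p, q) if pi > 0)

def best_threshold(values, num_bins=128, levels=16, candidates=(64, 96, 128)):
    """Pick the clipping threshold whose coarse quantization loses the least
    information: histogram |values|, clip at each candidate bin, re-bin into
    `levels` quantization levels, and keep the candidate with the lowest KL
    loss.  The per-layer scale factor is then qmax / threshold."""
    vmax = max(abs(v) for v in values)
    hist = [0] * num_bins
    for v in values:
        hist[min(int(abs(v) / vmax * num_bins), num_bins - 1)] += 1
    best_t, best_loss = None, float("inf")
    for t in candidates:            # each candidate must be divisible by levels
        clipped = hist[:t]
        clipped[-1] += sum(hist[t:])           # fold outliers into the last bin
        counts = [0] * levels
        for b, h in enumerate(clipped):
            counts[b * levels // t] += h       # quantize into `levels` buckets
        width = t // levels
        expanded = [counts[b * levels // t] / width for b in range(t)]
        loss = kl_divergence(clipped, expanded)
        if loss < best_loss:
            best_t, best_loss = t, loss
    return best_t * vmax / num_bins            # threshold in value units
```

With mostly small activations and a few large outliers, the chosen threshold tends to clip the outliers rather than stretch the quantization grid over them, which is the intuition behind minimizing information loss.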
- As information processing using floating-point numbers with a relatively small bit width, the use of a proprietary floating-point representation (ms-fp8) has been proposed (see Non-Patent Document 2), as has a technique for improving the inference accuracy of CNN operations using the low-precision floating-point representations defined by IEEE (FP8/FP7/FP6) (see Non-Patent Document 3).
- Conventionally, floating-point numbers have been used in information processing, and the precision of the numerical values that can be represented by a floating-point number depends on the bit width of the floating-point type used, particularly the bit width of the mantissa.
- On the other hand, in some information processing using floating-point numbers, an advantage can be obtained by widening the dynamic range that the data can represent, even if the data precision (sampling width) becomes coarser.
- For example, CNN operations use floating-point numbers for data representation, but the inference accuracy of a CNN depends more on the dynamic range of the data than on its precision.
- The present disclosure has been made in view of the above problems, and its object is to provide a floating-point data structure suitable for information processing in which an advantage is obtained by widening the dynamic range that the data can represent, even at reduced data precision.
- One example of the present disclosure is a data structure for recording a floating-point number with a predetermined bit width in a storage device of an information processing device, comprising an exponent part in which the exponent of the floating-point number is recorded and a mantissa part in which the mantissa is recorded, within the predetermined bit width. When the value of the exponent part becomes a predetermined value, a part of the mantissa part becomes an extended exponent part expressing a part of the exponent, and the extended exponent part and the exponent part are combined to represent the exponent of the floating-point number.
- Another example of the present disclosure is an information processing apparatus comprising: receiving means for receiving a first exponent part, a first mantissa part, and an extended exponent part that is formed from a part of the first mantissa part when the value of the first exponent part becomes a predetermined value, the bit width used as the extended exponent part being expanded in order from the lower bits; exponent output means for outputting the exponent calculated with reference to the first exponent part and the extended exponent part, as a value recorded in a second exponent part having a wider bit width than the first exponent part; and mantissa output means for outputting, among the bits of the first mantissa part, the values of the bits not used as the extended exponent part as they are and the values of the bits used as the extended exponent part as 0, as values recorded in a second mantissa part.
- the present disclosure can be understood as an information processing device, a system, a method executed by a computer, or a program executed by a computer.
- the present disclosure can be understood as such a program recorded on a recording medium readable by a computer, another device, a machine, or the like.
- a computer-readable recording medium is a recording medium that stores information such as data and programs by electrical, magnetic, optical, mechanical, or chemical action and can be read from a computer or the like.
- FIG. 1 is a schematic diagram illustrating a hardware configuration of a CNN processing system according to the embodiment.
- FIG. 2 is a diagram illustrating an outline of the functional configuration of the CNN processing system according to the embodiment.
- FIG. 3 is a diagram illustrating an outline of a connection circuit (Rotate circuit) according to the embodiment.
- FIG. 4 is a flowchart (A) illustrating an outline of the flow of the control process of the Rotate circuit according to the embodiment.
- FIG. 5 is a flowchart (B) illustrating an outline of the flow of the control process of the Rotate circuit according to the embodiment.
- FIG. 6 is a diagram illustrating the relationship among the remainder x_mod6, the control signal SEL, and the read signal RD in the embodiment.
- FIG. 7 is a time chart of the signals used in the input buffer and the connection circuit when the control process according to the embodiment is executed.
- FIG. 8 is a diagram illustrating the data structure of the unique 9-bit floating-point type PFU-FP9 used in the embodiment.
- FIG. 9 is a diagram illustrating the data structure of the unique 8-bit floating-point type PFU-FP8 used in the embodiment.
- FIG. 10 is a diagram schematically illustrating the functional configuration of a conversion circuit from the floating-point type PFU-FP8 to the floating-point type PFU-FP9 in the embodiment.
- FIG. 11 is a diagram illustrating a conversion circuit from the floating-point type PFU-FP8 to the floating-point type PFU-FP9 in the embodiment.
- FIG. 12 is a diagram illustrating a conversion circuit from a floating-point type FP8 (IEEE) to a floating-point type FP9 (IEEE) in the prior art.
- FIG. 13 is a diagram illustrating an outline of the functional configuration of a variation of the CNN processing system according to the embodiment.
- FIG. 14 is a diagram illustrating the data structure of the unique 7-bit floating-point type PFU-FP7 used in the embodiment.
- FIG. 15 is a diagram illustrating the data structure of the unique 6-bit floating-point type PFU-FP6 used in the embodiment.
- FIG. 1 is a schematic diagram illustrating a hardware configuration of a convolutional neural network (CNN) processing system 1 according to the present embodiment.
- The CNN processing system 1 includes a CPU (Central Processing Unit) 11, a host-side RAM (Random Access Memory) 12a, an FPGA-side RAM 12b, a storage device 14 such as a ROM (Read Only Memory), an EEPROM (Electrically Erasable Programmable Read-Only Memory), or a hard disk drive (HDD), a communication unit such as a network interface card (NIC) 15, a field-programmable gate array (FPGA) 16, and the like.
- NIC network interface card
- FPGA field-programmable gate array
- image data that is obtained by being imaged by an external camera and includes a plurality of pixels arranged in a predetermined order is used as input data.
- the type of the input data is not limited to the image data, and the elements constituting the input data are not limited to the pixels.
- the CNN processing system 1 can handle various data such as natural language data, game data, and time-series data as learning / inference targets.
- The CNN processing system 1 is a system that uses the FPGA 16 as an accelerator from a host machine on which the CPU 11 is mounted.
- the image data obtained from the external camera is read into the FPGA-side RAM 12b via the host-side RAM 12a, and under the control of the CPU 11, an inference operation or the like is performed in the FPGA.
- the output data as the calculation result is transmitted to the outside by the CPU 11 using the NIC 15, and is utilized.
- FIG. 2 is a diagram schematically illustrating a functional configuration of the CNN processing system 1 according to the present embodiment.
- In the CNN processing system 1, the programs recorded in the storage device 14 are read out to the RAMs 12a and 12b and executed by the CPU 11 and/or the FPGA 16, and the hardware is controlled so that the system functions as an information processing apparatus including the input data reading unit 21, the input buffer 22, the product-sum operation module 23, the output buffer 24, the accumulation addition pipeline 25, the weight data reading unit 26, and the weight buffer 27 shown in FIG. 2.
- In the present embodiment, each function of the CNN processing system 1 is executed by the CPU 11 and/or the FPGA 16, which are general-purpose processors; alternatively, some or all of the functions may be executed by one or more dedicated processors.
- The input data reading unit 21 reads, from the FPGA-side RAM 12b, input data (image data in the present embodiment) including a plurality of elements (pixels in the present embodiment) arranged in a predetermined order, and writes the read data to the input buffer 22.
- The input buffer 22 has α memories 0 to α-1.
- The input data reading unit 21 stores the plurality of elements in the input data one by one, in the predetermined order, in the memories 0 to α-1 starting from the first memory 0; when the last memory α-1 is reached, it returns to the first memory 0 and continues storing the elements in the predetermined order.
- The product-sum operation module 23 receives input data from the input buffer 22 with an even input width α, receives weight data from the weight data reading unit 26 with an odd number of taps r (the weight data width; this corresponds to the kernel width kw in the CNN), performs a Winograd transform process, a weighting process, and an inverse Winograd transform process, and outputs output data with an even output width m.
- More specifically, the product-sum operation module 23 applies the Winograd transform process to the input data and to the weight data.
- the result of the Winograd conversion processing of the weight data is recorded in the weight buffer 27 because it is used a plurality of times.
- the product-sum operation module 23 performs a product-sum operation using the input data and the weight data to which the Winograd conversion process has been applied, performs the Winograd reverse conversion process on the result, and obtains output data.
- the obtained output data may be subjected to a normalization process or a bias process, an activation process using a so-called ReLU (Rectified Linear Unit) function, or the like.
- the output data is rearranged in the output buffer 24 so that the writing order is sequential, and is written to the FPGA-side RAM 12b via the cumulative addition pipeline 25.
- The input width α is preferably set to an even fixed value in order to always obtain the maximum performance by operating all the multipliers.
- an odd value such as 1, 3, 5, 7, 11 is often used for the kernel width kw corresponding to the number of taps r.
- By inserting the same number of padding pixels ((kw-1)/2) at each of the left and right ends of the input image before performing the convolution operation, the size of the input image and the size of the output image can be made the same.
- Since the input width α is often even, the output width m is even, and the number of taps r is odd, the technology according to the present disclosure can be used.
- FIG. 3 is a diagram showing an outline of a connection circuit (Rotate circuit) according to the present embodiment.
- The connection circuit is arranged between the memories 0 to α-1 of the input buffer 22 and the input terminals 0 to α-1 of the product-sum operation module 23 described with reference to FIG. 2.
- The connection circuit connects the memories 0 to α-1 to the input terminals 0 to α-1 through which the product-sum operation module 23 receives input data with the input width α.
- Specifically, the connection circuit connects the odd-numbered input terminals to the odd-numbered memories and the even-numbered input terminals to the even-numbered memories.
- The other connections between memories and input terminals (specifically, the connections between odd-numbered input terminals and even-numbered memories, and between even-numbered input terminals and odd-numbered memories) may be arbitrarily omitted; in the present embodiment, the input terminals 0 to α-1 and the memories 0 to α-1 are connected only odd-to-odd and even-to-even.
- FIG. 3 shows an example of the connection lines when the input width α is 6; RAMs 0 to 5 in the figure correspond to the memories 0 to 5.
- The memory 0 is connected only to the input terminals 0, 2, and 4, and the memory 1 is connected only to the input terminals 1, 3, and 5; it can be seen that the connections between the memory 0 and the input terminals 1, 3, and 5, and between the memory 1 and the input terminals 0, 2, and 4, have been omitted. The same applies to the memories 2 to 5.
- the connection circuit may be a physical circuit or a logic circuit in a programmable device.
- When data from the input buffer 22 is input to the product-sum operation module 23, the control unit (selector) controls which of the memories 0 to α-1 supplies data to which of the input terminals 0 to α-1. Specifically, the number of the memory input to an input terminal in the i-th process is the remainder obtained by dividing "input terminal number + (output width m * (i-1))" by the input width α. In actual control, the content of the control signal output by the control unit may be determined by a calculation formula, or may be determined by referring to a map, a table, or the like. A specific flow of the processing by the control unit will be described later with reference to the flowcharts.
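The selection rule above can be written directly as a formula. A minimal sketch (the function name is illustrative, not part of the disclosure):

```python
def source_memory(terminal: int, i: int, m: int, alpha: int) -> int:
    """Number of the memory feeding `terminal` in the i-th process:
    ("input terminal number" + output width m * (i - 1)) mod input width alpha."""
    return (terminal + m * (i - 1)) % alpha

# With alpha = 6 and m = 4, the mapping rotates by 4 memories per process.
# Because m and alpha are both even, the parity of the memory number always
# matches the parity of the terminal number, which is why the odd/even-only
# wiring of the connection circuit suffices.
assert [source_memory(t, 1, 4, 6) for t in range(6)] == [0, 1, 2, 3, 4, 5]
assert [source_memory(t, 2, 4, 6) for t in range(6)] == [4, 5, 0, 1, 2, 3]
```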
- The memories 0 to 5 are storage areas in the input buffer 22 when the input width α is 6.
- Pixel data D0 to D23 in the image data are stored in the memories 0 to 5.
- As described above, the input data reading unit 21 stores the plurality of elements in the input data one by one, in the predetermined order, in the memories 0 to α-1 starting from the first memory 0, and after the last memory α-1, returns to the first memory 0 and continues. Therefore, the memory 0 stores the pixel data D0, D6, D12, and D18 in order from the top, and the memory 1 stores D1, D7, D13, and D19 in order from the top.
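The round-robin storage described above can be sketched in a few lines (`distribute` is an illustrative name, not part of the disclosure):

```python
def distribute(elements, alpha=6):
    """Store the elements one by one into memories 0..alpha-1 in order,
    wrapping back to memory 0 after the last memory, as the input data
    reading unit 21 does."""
    memories = [[] for _ in range(alpha)]
    for k, element in enumerate(elements):
        memories[k % alpha].append(element)
    return memories

# Pixel data D0..D23 land exactly as described in the text:
mems = distribute([f"D{k}" for k in range(24)])
assert mems[0] == ["D0", "D6", "D12", "D18"]
assert mems[1] == ["D1", "D7", "D13", "D19"]
```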
- Pixel data stored in each of the memories 0 to 5 is specified by a variable indicating an address in the memory.
- variables for indicating the addresses in the memories 0 to 5 are the addresses RA0 to RA5.
- the control unit transmits control signals SEL0 to SEL5 for selecting data to each of the input terminals 0 to 5, so that the pixel data is transmitted from any of the memories 0 to 5 to any of the input terminals 0 to 5. Control what is done.
- The data sent to the input terminals 0 to 5 are the data ROT0 to ROT5 obtained by rotating the pixel data so as to be left-justified (the pixel data with the smallest number comes to I[0]).
- FIGS. 4 and 5 are flowcharts showing the outline of the flow of control processing of the connection circuit (Rotate circuit) between the input buffer 22 and the product-sum operation module 23 according to the present embodiment.
- the processing shown in this flowchart is repeatedly executed during the inference calculation in the CNN processing system 1.
- In step S101, the parameters are initialized. The control unit sets the increment x_inc of the coordinate x to a value corresponding to the kernel width kw used in the convolution operation: 6 when the kernel width kw is 1, 4 when it is 3, and 2 when it is 5.
- Next, the addresses RA0 to RA5 are set. The control unit sets the current value of the quotient x_div6 to each of the addresses RA0 to RA5, which are variables indicating the address within each of the memories 0 to 5 (step S102).
- the control unit compares the current value of the remainder x_mod6 with a predetermined value, and updates the values of the addresses RA0 to RA5 according to the result of the comparison (steps S103 to S112).
- In step S113, the content of the control signal is determined. The control unit selects, by the control signals SEL0 to SEL5, the read signal RD corresponding to the value of the remainder x_mod6, as shown in FIG. 6.
- FIG. 6 is a diagram showing the relationship between the remainder x_mod6, the control signal SEL, and the read signal RD in the present embodiment.
- As described above, the number of the memory input to an input terminal in the i-th process is the remainder obtained by dividing "input terminal number + (output width m * (i-1))" by the input width α. Thereafter, the process proceeds to step S114.
- In step S114, pixel data is read from the input buffer 22 and input to the corresponding input terminals.
- the control unit outputs the values of the addresses RA0 to RA5 to the corresponding memories 0 to 5, and outputs the values of the control signals SEL0 to SEL5 to the connection circuit (Rotate circuit).
- the pixel data at the addresses indicated by the addresses RA0 to RA5 is read from the memories 0 to 5, and is input to the input terminals of the numbers specified by the control signals SEL0 to SEL5. Thereafter, the process proceeds to step S115.
- In steps S115 to S117, the parameters are updated.
- The control unit updates the coordinate x to "the value of the coordinate x before the update + the increment x_inc" (step S115), and further updates the remainder x_mod6 to "the value of the remainder x_mod6 before the update + the increment x_inc" (step S116).
- When the remainder x_mod6 has become equal to or larger than the input width α, the control unit subtracts the input width α from the remainder x_mod6 so that x_mod6 is adjusted to a value smaller than the input width α, and adds 1 to the quotient x_div6 (step S117). Thereafter, the process proceeds to step S118.
- In step S118, it is determined whether or not the processing should be ended.
- the control unit determines whether or not the updated coordinate x updated in step S115 is equal to or larger than the width of the image data in the X-axis direction. If the coordinate x is smaller than the width of the image data in the X-axis direction, the process returns to step S102 because unprocessed pixels remain in the X-axis direction. On the other hand, when the coordinate x is equal to or larger than the width of the image data in the X-axis direction, the processing of this flowchart ends.
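The flow of steps S101 through S118 can be sketched as a short loop. The address adjustment of steps S103 to S112 is omitted for brevity, and the function name and generator form are illustrative assumptions:

```python
def control_steps(image_width, kw, alpha=6):
    """Yield (x_div6, x_mod6) for each iteration of the control process:
    x_div6 gives the base addresses RA0..RA5 (S102) and x_mod6 selects the
    control signals SEL0..SEL5 (S113)."""
    x_inc = {1: 6, 3: 4, 5: 2}[kw]    # S101: increment chosen from kernel width
    x = x_mod6 = x_div6 = 0
    while x < image_width:             # S118: stop when x reaches the image width
        yield x_div6, x_mod6           # S102/S113/S114: set addresses, read data
        x += x_inc                     # S115
        x_mod6 += x_inc                # S116
        if x_mod6 >= alpha:            # S117: keep the remainder below alpha
            x_mod6 -= alpha
            x_div6 += 1

# For a 12-pixel-wide image and kernel width 3 (x_inc = 4), three iterations
# are performed before x reaches the image width:
assert list(control_steps(12, 3)) == [(0, 0), (0, 4), (1, 2)]
```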
- FIG. 7 is a time chart of the signals used in the input buffer 22 and the connection circuit when the control processing according to the present embodiment is executed. From this time chart, it can be seen that if the even-numbered memories and input terminals are connected to each other and the odd-numbered memories and input terminals are connected to each other, the input data can be passed to the product-sum operation module 23 without any problem.
- a plurality of different data types of floating-point numbers are used to represent data.
- Specifically, the host-side RAM 12a, the FPGA-side RAM 12b, the input buffer 22, and the output buffer 24 use PFU-FP8, a unique 8-bit floating-point type, while PFU-FP9, a unique 9-bit floating-point type, and FP32, the single-precision floating-point type of the IEEE 754 standard, are also used (see FIG. 2).
- FIG. 8 is a diagram showing a data structure of the unique 9-bit floating point type PFU-FP9 used in the present embodiment.
- the floating-point type PFU-FP9 has a data structure for recording a floating-point number with a 9-bit width.
- In the floating-point type PFU-FP9, the first bit is a sign bit indicating the sign (positive or negative) of the numerical value, the four bits from the second bit to the fifth bit are an exponent part, and the four bits from the sixth bit to the ninth bit are a mantissa part.
- FIG. 9 is a diagram showing a data structure of the unique 8-bit floating point type PFU-FP8 used in the present embodiment.
- the floating-point type PFU-FP8 has a data structure for recording a floating-point number with an 8-bit width.
- In the floating-point type PFU-FP8, the first bit is a sign bit indicating the sign (positive or negative) of the numerical value, the three bits from the second bit to the fourth bit are an exponent part, and the four bits from the fifth bit to the eighth bit are a mantissa part.
- As in common floating-point data, the exponent is determined for the range from 2^-1 to 2^-7 (the bits of the exponent part ranging from "111" to "001").
- When the bits of the exponent part are "000", the exponent takes a value from 2^-8 to 2^-11; in this case, a part of the mantissa part is used as an extended exponent part expressing a part of the exponent, and the extended exponent part and the exponent part are combined to represent the actual exponent.
- The bit width used as the extended exponent part is extended in order from the lower bit (the 8th bit) in accordance with the value of the exponent to be expressed, and the exponent of the floating-point number is represented by which bit of the extended exponent part carries a flag.
- If the value of the exponent part is "000" and the flag ("1") is in the 8th bit, the exponent is 2^-8; if the flag is in the 7th bit, the exponent is 2^-9; in the 6th bit, 2^-10; and in the 5th bit, 2^-11. With such a unique floating-point type, the precision of the data is coarse, but the representable dynamic range is widened, so that the inference accuracy of the CNN can be improved.
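One consistent reading of the PFU-FP8 description above can be sketched as a decoder. The implicit leading 1, the treatment of an all-zero exponent and mantissa as zero, and the rule that the flag is the lowest set bit of the mantissa are assumptions of this sketch, not statements from the disclosure:

```python
def decode_pfu_fp8(byte: int) -> float:
    """Decode a PFU-FP8 value: bit 1 = sign, bits 2-4 = exponent part,
    bits 5-8 = mantissa part (bit 8 is the lowest bit)."""
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp_field = (byte >> 4) & 0b111
    mant = byte & 0b1111
    if exp_field != 0:
        # Exponent part "111".."001" represents exponents 2^-1 .. 2^-7.
        return sign * (1.0 + mant / 16.0) * 2.0 ** (exp_field - 8)
    if mant == 0:
        return sign * 0.0                       # assumed encoding of zero
    # Exponent part "000": the extended exponent grows upward from the 8th
    # bit, and the position of the flag ("1") selects 2^-8 .. 2^-11.
    flag_pos = (mant & -mant).bit_length() - 1  # lowest set bit = flag
    remaining = mant >> (flag_pos + 1)          # bits above the flag stay mantissa
    frac_bits = 3 - flag_pos
    frac = 1.0 + (remaining / (1 << frac_bits) if frac_bits else 0.0)
    return sign * frac * 2.0 ** (-8 - flag_pos)
```

For instance, 0b0_000_0010 decodes to 2^-9 (flag in the 7th bit) and 0b0_000_1000 decodes to 2^-11 (flag in the 5th bit); each extra step of range costs one mantissa bit of precision.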
- FIG. 10 is a diagram schematically illustrating a functional configuration of a conversion circuit from the floating-point PFU-FP8 to the floating-point PFU-FP9 in the present embodiment.
- the conversion circuit includes a receiving unit 31, an exponent output unit 32, and a mantissa output unit 33.
- the receiving unit 31 receives the sign of the floating-point PFU-FP8, the exponent of the floating-point PFU-FP8, and the mantissa of the floating-point PFU-FP8.
- the exponent output unit 32 converts the exponent calculated with reference to the exponent part and the extended exponent part of the floating-point PFU-FP8 into a floating-point PFU-FP9 having a wider bit width than the exponent part of the floating-point PFU-FP8. Is output as the value recorded in the exponent part of.
- the mantissa output unit 33 outputs, as it is, the value of the bit that is not used as the extension exponent in the mantissa of the floating-point type PFU-FP8, and outputs the value of the bit that is used as the extension exponent as 0. Thus, a value recorded in the mantissa of the floating-point type PFU-FP9 is output.
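The exponent output unit 32 and the mantissa output unit 33 can be sketched together as one pure function. The 4-bit encoding (bias) of the PFU-FP9 exponent field is not restated here, so this sketch returns the exponent as a plain integer; the flag-is-lowest-set-bit rule is the same assumption as above:

```python
def convert_pfu_fp8_to_fp9(sign: int, exp3: int, mant4: int):
    """Sketch of the PFU-FP8 -> PFU-FP9 conversion: returns
    (sign, exponent value, 4-bit mantissa)."""
    if exp3 != 0:
        return sign, exp3 - 8, mant4             # 2^-1 .. 2^-7: mantissa unchanged
    if mant4 == 0:
        return sign, 0, 0                        # zero (assumed encoding)
    flag_pos = (mant4 & -mant4).bit_length() - 1
    # Bits used as the extended exponent (the flag and everything below it)
    # are output as 0; the bits not used pass through as they are.
    mant9 = mant4 & ~((1 << (flag_pos + 1)) - 1)
    return sign, -8 - flag_pos, mant9

assert convert_pfu_fp8_to_fp9(0, 0b111, 0b1010) == (0, -1, 0b1010)
assert convert_pfu_fp8_to_fp9(0, 0b000, 0b0110) == (0, -9, 0b0100)
```

Note that, unlike the IEEE denormal-to-normal conversion of FIG. 12, no bit shift of the mantissa is needed: the surviving mantissa bits keep their positions, which is where the circuit-scale saving comes from.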
- FIG. 11 is a diagram showing a conversion circuit from the floating-point PFU-FP8 to the floating-point PFU-FP9 in the present embodiment.
- FP8_f0 to 3 indicate the inputs of the mantissa of the floating-point type PFU-FP8, FP8_exp0 to 2 indicate the inputs of the exponent of the floating-point type PFU-FP8, and FP8_sign indicates the input of the sign of the floating-point type PFU-FP8.
- FP9_f0 to 3 indicate the outputs of the mantissa of the floating-point type PFU-FP9, FP9_exp0 to 3 indicate the outputs of the exponent of the floating-point type PFU-FP9, and FP9_sign indicates the output of the sign of the floating-point type PFU-FP9.
- FIG. 12 is a diagram showing a conversion circuit from a floating-point type FP8 (IEEE) to a floating-point type FP9 (IEEE) in the prior art.
- FP8_f0 to 3 indicate the inputs of the mantissa of the floating-point type FP8 (IEEE), FP8_exp0 to 2 indicate the inputs of the exponent of the floating-point type FP8 (IEEE), and FP8_sign indicates the input of the sign of the floating-point type FP8 (IEEE).
- FP9_f0 to 3 indicate the outputs of the mantissa of the floating-point type FP9 (IEEE), FP9_exp0 to 3 indicate the outputs of the exponent of the floating-point type FP9 (IEEE), and FP9_sign indicates the output of the sign of the floating-point type FP9 (IEEE).
- The circuits (1) and (5) indicated by broken lines in FIGS. 11 and 12 are circuits for determining whether or not all the bits of the exponent part are 0.
- The circuit (2) and the circuit (6) are circuits that calculate the exponent indicated by the mantissa part when the bits of the exponent part are all 0 (that is, when the number is a denormalized number).
- The circuit (3) and the circuit (7) are circuits for obtaining the value of the exponent part when the denormalized number is expressed as a normalized number in FP9.
- The circuit (4) is a circuit for converting the mantissa part of a denormalized number in the floating-point type FP8 (IEEE) into the mantissa part of a normalized number in the floating-point type FP9 (IEEE), and performs a bit shift.
- The circuit (8) is a circuit for converting the mantissa part of a number expressed with the extended exponent in the floating-point type PFU-FP8 into the mantissa part of a normalized number in the floating-point type PFU-FP9.
- the floating-point type PFU-FP8 and the floating-point type PFU-FP9 are employed as the unique floating-point types, but other floating-point types may be employed.
- FIG. 13 is a diagram schematically illustrating a functional configuration of the CNN processing system 1b according to the present embodiment.
- The functional configuration of the CNN processing system 1b is substantially the same as that of the CNN processing system 1 described with reference to FIG. 2, except that the input buffer 22, the Winograd conversion process, the cumulative addition pipeline 25, and the like are omitted, and that the floating-point types employed are different.
- Specifically, the host-side RAM 12a, the FPGA-side RAM 12b, the input buffer 22, and the output buffer 24 use the unique 6-bit floating-point type PFU-FP6, while PFU-FP7, a unique 7-bit floating-point type, and FP32, the single-precision floating-point type of the IEEE 754 standard, are also used.
- FIG. 14 is a diagram showing a data structure of a unique 7-bit floating point type PFU-FP7 used in the present embodiment.
- The floating-point type PFU-FP7 has a data structure for recording a floating-point number with a 7-bit width; the first bit is a sign bit indicating the sign (positive or negative) of the numerical value, the four bits from the second bit to the fifth bit are an exponent part, and the two bits from the sixth bit to the seventh bit are a mantissa part.
- FIG. 15 is a diagram showing a data structure of a unique 6-bit floating point type PFU-FP6 used in the present embodiment.
- The floating-point type PFU-FP6 has a data structure for recording a floating-point number with a 6-bit width; the first bit is a sign bit indicating the sign (positive or negative) of the numerical value, the three bits from the second bit to the fourth bit are an exponent part, and the two bits from the fifth bit to the sixth bit are a mantissa part.
- The second bit from the bottom of the mantissa part (the seventh bit) is an extended exponent part expressing a part of the exponent (see the part surrounded by the broken line in the figure).
- The exponent of the floating-point number is represented by the combination of the state of the flag in the seventh bit, which is the extended exponent part, and the value of the exponent part.
- Specifically, the values of the exponents 2^-6 to 2^-11 are represented by the 4-bit combinations of the exponent part and the extended exponent part, from "0101" to "0000". With such a unique floating-point type, the precision of the data is coarse, but the representable dynamic range is widened, so that the inference accuracy of the CNN can be improved.
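The 4-bit code mapping quoted above reduces to a one-line rule: since "0101" represents 2^-6 and "0000" represents 2^-11, the exponent equals the code value minus 11. A sketch (the function name and the range check are illustrative; how the 4-bit code splits between the exponent part and the extended exponent part is not restated here):

```python
def extended_exponent(code: int) -> int:
    """Exponent represented by the 4-bit combination of the exponent part
    and the extended exponent part: "0101".."0000" -> 2^-6 .. 2^-11."""
    if not 0b0000 <= code <= 0b0101:
        raise ValueError("code outside the extended-exponent range")
    return code - 11

assert extended_exponent(0b0101) == -6
assert extended_exponent(0b0000) == -11
```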
- According to the present embodiment, the number of connection lines between the memories 0 to α-1 of the input buffer 22 and the input terminals 0 to α-1 of the product-sum operation module 23 is reduced, and with it the circuit scale (whether the connection circuit is a physical circuit or a logic circuit in a programmable device). In the example described above, the circuit scale for inputting data to the arithmetic module is reduced to 1/4.
- Further, according to the present embodiment, a part of the mantissa part becomes an extended exponent part expressing a part of the exponent, and the extended exponent part and the exponent part are combined to represent the exponent; the precision of the data becomes coarse, but the representable dynamic range is widened, and the inference accuracy of the CNN can be improved.
- In addition, the conversion cost of these floating-point types is small, and the conversion circuit may be a physical circuit or a logic circuit. This makes it possible to ease situations where resources such as memory and logic circuits are scarce, and to improve productivity and customizability when a dedicated device such as an ASIC is used.
Abstract
A data construct for recording a floating-point number, said data construct comprising an exponent part in which the exponent of the floating-point number is recorded among a prescribed bit width, and a significand part in which the significand of the floating-point number is recorded among a prescribed bit width, wherein some of the significand part becomes an extended exponent part that expresses some of the exponent if the value of the exponent part reaches a prescribed value, and the extended exponent part and the exponent part combined represent the exponent of the floating-point number.
Description
The present disclosure relates to information processing technology.
Conventionally, as information processing using a floating-point number, a technology for compressing audio data by combining a fixed-point method and a floating-point method using a flag indicating whether the expression is a fixed-point expression or a floating-point expression (see Patent Document 1). (See Patent Document 2), a circuit for changing the bit length of data expressed in the floating point format in the IEEE 754 format (see Patent Document 2), and a technology for reducing the scale of a circuit that handles a denormalized number in the floating point format in the IEEE 754 format (Patent Document 2). 3) has been proposed. In order to represent the input / output / weight value data of each layer of a convolutional neural network (Convolutional Neural Network; hereinafter, referred to as “CNN”) without deteriorating the recognition accuracy as much as possible, an 8-bit fixed-point number representation is used. First, a data set for calibration is inferred using bit-floating-point representation, and the distribution of each layer / data obtained therefrom and the distribution obtained by quantizing them are used to determine a scale factor that minimizes information loss. A calculation method has been proposed (see Non-Patent Document 1). Further, as information processing using a floating point number having a relatively small bit width, a unique floating point number expression (ms-fp8) is used (see Non-Patent Document 2), and a low-precision floating point number defined by IEEE is used. There has been proposed a technique (see Non-Patent Document 3) for improving the inference accuracy of the CNN operation by using expressions (FP8 / FP7 / FP6).
Conventionally, floating-point numbers have been used in information processing, and the precision of the values that a floating-point number can represent depends on the bit width of the floating-point type used, particularly the bit width of its mantissa. On the other hand, some information processing that uses floating-point numbers benefits from a wider dynamic range representable by the data, even if the precision (sampling width) of the data becomes coarser. For example, CNN operations use floating-point numbers for data representation, but the inference accuracy of a CNN depends more on the dynamic range of the data than on its precision.
In view of the above problems, an object of the present disclosure is to provide a data structure for floating-point numbers suited to specific information processing that benefits from a wider dynamic range representable by the data, even if the precision of the data becomes coarser.
One example of the present disclosure is a data structure for recording a floating-point number with a predetermined bit width in a storage device of an information processing device, the data structure comprising: an exponent part, within the predetermined bit width, in which the exponent of the floating-point number is recorded; and a mantissa part, within the predetermined bit width, in which the mantissa of the floating-point number is recorded, wherein, when the value of the exponent part becomes a predetermined value, part of the mantissa part becomes an extended exponent part that expresses part of the exponent, and the extended exponent part and the exponent part are combined to represent the exponent of the floating-point number.
Furthermore, one example of the present disclosure is an information processing device comprising: receiving means for receiving an input of a floating-point number recorded in a data structure having a first exponent part, a first mantissa part, and an extended exponent part that, when the value of the first exponent part becomes a predetermined value, expresses part of the exponent using part of the first mantissa part; exponent output means for outputting, as a value to be recorded in a second exponent part having a wider bit width than the first exponent part, an exponent calculated with reference to the first exponent part and the extended exponent part, the bit width used as the extended exponent part being extended in order from the least significant bit according to the value of the exponent to be expressed; and mantissa output means for outputting a value to be recorded in a second mantissa part by outputting, unchanged, the values of the bits of the first mantissa part not used as the extended exponent part, and outputting the values of the bits used as the extended exponent part as 0.
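The conversion the claim describes can be illustrated with a small sketch. The exact field widths and extended-exponent encoding are not fully specified in this excerpt, so the following assumes, purely for illustration, a 3-bit first exponent part, a 4-bit first mantissa part, and a fixed 2-bit extended exponent in the low-order mantissa bits; it is not the disclosed implementation.

```python
# Illustrative sketch of widening a floating-point value whose mantissa
# bits may double as an extended exponent. ASSUMPTIONS (not from the
# disclosure): 3-bit first exponent ('111' -> 2^-1 ... '001' -> 2^-7),
# 4-bit first mantissa, and a 2-bit extended exponent in the low mantissa
# bits when the exponent field is 000.
def split_fields(bits8):
    sign = (bits8 >> 7) & 1
    exp1 = (bits8 >> 4) & 0x7   # first exponent part (3 bits)
    mant1 = bits8 & 0xF         # first mantissa part (4 bits)
    return sign, exp1, mant1

def convert(bits8):
    sign, exp1, mant1 = split_fields(bits8)
    if exp1 != 0:
        exponent = exp1 - 8      # normal range, assumed mapping
        mant2 = mant1            # mantissa bits pass through unchanged
    else:
        ext = mant1 & 0x3        # assumed 2-bit extended exponent
        exponent = -8 - ext      # combine first and extended exponent parts
        mant2 = mant1 & ~0x3     # extended-exponent bits are output as 0
    return sign, exponent, mant2

print(convert(0b0_000_0110))  # (0, -10, 0b0100): low bits zeroed in output
```

The key point matches the claim: bits of the first mantissa part not used as the extended exponent are passed through unchanged, while the bits that served as the extended exponent are emitted as 0 in the second mantissa part.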
The present disclosure can be understood as an information processing device, a system, a method executed by a computer, or a program to be executed by a computer. The present disclosure can also be understood as such a program recorded on a recording medium readable by a computer, another device, a machine, or the like. Here, a recording medium readable by a computer or the like refers to a recording medium that accumulates information such as data and programs through electrical, magnetic, optical, mechanical, or chemical action, and from which that information can be read by a computer or the like.
According to the present disclosure, it is possible to provide a data structure for floating-point numbers suited to specific information processing that benefits from a wider dynamic range representable by the data, even if the precision of the data becomes coarser.
Hereinafter, embodiments of a data structure, an information processing device, a method, and a program according to the present disclosure will be described with reference to the drawings. The embodiments described below, however, are merely examples, and do not limit the data structure, information processing device, method, and program according to the present disclosure to the specific configurations described. In implementation, specific configurations appropriate to the mode of implementation may be adopted, and various improvements and modifications may be made.
The description of the embodiments covers the case where the data structure, information processing device, method, and program according to the present disclosure are implemented in a system for performing convolutional neural network operations. Note that the data structure, information processing device, method, and program according to the present disclosure can be used widely in information processing, and the scope of application of the present disclosure is not limited to the examples shown in the embodiments.
<System configuration>
FIG. 1 is a schematic diagram illustrating the hardware configuration of a convolutional neural network (CNN) processing system 1 according to the present embodiment. The CNN processing system 1 according to the present embodiment is a computer comprising a CPU (Central Processing Unit) 11, a host-side RAM (Random Access Memory) 12a, an FPGA-side RAM 12b, a ROM (Read Only Memory) 13, a storage device 14 such as an EEPROM (Electrically Erasable and Programmable Read Only Memory) or an HDD (Hard Disk Drive), a communication unit such as a NIC (Network Interface Card) 15, an FPGA (Field-Programmable Gate Array) 16, and the like.
In the present embodiment, an example of processing images input from an external camera connected to the CNN processing system 1 will be described. That is, in the present embodiment, image data obtained by imaging with an external camera, consisting of a plurality of pixels arranged in a predetermined order, is used as input data. However, the type of input data is not limited to image data, and the elements constituting the input data are not limited to pixels. The CNN processing system 1 can handle various data, such as natural-language data, game data, and time-series data, as targets of learning/inference.
The CNN processing system 1 according to the present embodiment is a system in which a host machine equipped with the CPU 11 uses the FPGA as an accelerator. Image data obtained from the external camera is read into the FPGA-side RAM 12b via the host-side RAM 12a, and inference operations and the like are performed in the FPGA under the management of the CPU 11. Output data resulting from the operations is transmitted to the outside by the CPU 11 via the NIC 15 and utilized.
FIG. 2 is a diagram schematically illustrating the functional configuration of the CNN processing system 1 according to the present embodiment. In the CNN processing system 1, a program recorded in the storage device 14 is read out to the RAMs 12a and 12b and executed by the CPU 11 and/or the FPGA 16 to control the hardware of the server 50, whereby the system functions as an information processing device comprising the input data reading unit 21, input buffer 22, product-sum operation module 23, output buffer 24, cumulative addition pipeline 25, weight data reading unit 26, and weight buffer 27 shown in FIG. 2. In the present embodiment and the other embodiments described later, the functions of the CNN processing system 1 are executed by the CPU 11 and/or the FPGA 16, which are general-purpose processors, but some or all of these functions may instead be executed by one or more dedicated processors.
The input data reading unit 21 reads, from the FPGA-side RAM 12b, input data (in the present embodiment, image data) consisting of a plurality of elements (in the present embodiment, pixels) arranged in a predetermined order, and writes it to the input buffer 22.
The input buffer 22 has α memories, memory 0 to memory α-1. The input data reading unit 21 stores the elements of the input data one at a time into memories 0 to α-1 in a predetermined order, starting from the leading memory 0; upon reaching the last memory α-1, it returns to the leading memory 0 and continues storing elements in the predetermined order. In the present embodiment, only processing in the X-axis direction is described for simplicity, but in an actual system the input data may be two-dimensional input data having a width in the X-axis direction and a width in the Y-axis direction. In that case, the size of the input data is α*α (i.e., 6*6 = 36 if α = 6), and the number of memories in the input buffer 22 is also α*α.
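The round-robin placement described above can be sketched as follows (the bank list and element labels are illustrative names, not from the disclosure):

```python
# Round-robin placement of input elements into the α memories of the
# input buffer: element i goes to memory (i mod α), at address (i div α).
ALPHA = 6  # input width α

def place(elements):
    banks = [[] for _ in range(ALPHA)]
    for i, e in enumerate(elements):
        banks[i % ALPHA].append(e)  # wrap back to memory 0 after α-1
    return banks

banks = place([f"D{i}" for i in range(24)])
print(banks[0])  # ['D0', 'D6', 'D12', 'D18']
print(banks[1])  # ['D1', 'D7', 'D13', 'D19']
```

This reproduces the layout used later in FIG. 3, where memory 0 holds D0, D6, D12, D18 and memory 1 holds D1, D7, D13, D19.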
The product-sum operation module 23 receives input data from the input buffer 22 with an even input width α, receives weight data from the weight data reading unit 26 with an odd number of taps r (the width of the weight data, corresponding to the kernel width kw in the CNN), performs Winograd transform processing, weighting processing, and inverse Winograd transform processing, and outputs output data with an even output width m.
More specifically, the product-sum operation module 23 applies the Winograd transform for input data to the input data, and the Winograd transform for weight data to the weight data. Since the Winograd-transformed weight data is used multiple times, it is recorded in the weight buffer 27. The product-sum operation module 23 then performs a product-sum operation using the Winograd-transformed input data and weight data, and applies the inverse Winograd transform to the result to obtain output data. Normalization processing, bias processing, activation processing using a so-called ReLU (Rectified Linear Unit) function, and the like may additionally be applied to the obtained output data. The output data is rearranged in the output buffer 24 so that the write order is sequential, and is written to the FPGA-side RAM 12b via the cumulative addition pipeline 25.
Here, owing to the nature of the Winograd transform, the relationship α = m + r - 1 holds between the input width α, the output width m, and the number of taps r. Even if the output width m or the number of taps r changes, the input width α is preferably a fixed even value, so that maximum performance is always obtained by keeping all multipliers operating. In a CNN, odd values such as 1, 3, 5, 7, and 11 are often used for the kernel width kw corresponding to the number of taps r. Although a CNN with an even kernel width kw could also be designed, an odd kw allows the input image size and the output image size to be made equal by inserting the same number of padding pixels ((kw-1)/2) at both the left and right ends of the input image before the convolution operation. As a result, in Winograd transform processing the input width α is often even, the output width m even, and the number of taps r odd, so the technology according to the present disclosure can be applied.
FIG. 3 is a diagram showing an outline of the connection circuit (Rotate circuit) according to the present embodiment. The connection circuit is arranged between memories 0 to α-1 of the input buffer 22 and input terminals 0 to α-1 of the product-sum operation module 23 described with reference to FIG. 2.
The connection circuit connects memories 0 to α-1 to input terminals 0 to α-1, through which the product-sum operation module 23 receives input data with the input width α. In the present embodiment, the connection circuit connects odd-numbered input terminals to odd-numbered memories, and even-numbered input terminals to even-numbered memories. The remaining connections between memories and input terminals (specifically, connections between odd-numbered input terminals and even-numbered memories, and between even-numbered input terminals and odd-numbered memories) may be omitted as desired; in the present embodiment, input terminals 0 to α-1 and memories 0 to α-1 are connected only between like-parity numbers.
FIG. 3 shows an example of the connections when the input width α is 6; RAMs 0 to 5 in the figure correspond to memories 0 to 5. Memory 0 is connected only to input terminals 0, 2, and 4, and memory 1 only to input terminals 1, 3, and 5; the connections between memory 0 and input terminals 1, 3, and 5, and between memory 1 and input terminals 0, 2, and 4, are omitted. The same applies to memories 2 to 5. Note that the connection circuit may be a physical circuit or a logic circuit in a programmable device.
When data from the input buffer 22 is input to the product-sum operation module 23, the control unit (selector) controls which of memories 0 to α-1 inputs data to which of input terminals 0 to α-1. Specifically, in the i-th processing step, the control unit sets the number of the memory whose data is input to an input terminal to the remainder of the integer division of "input terminal number + (output width m * (i-1))" by the input width α. In actual control, the content of the control signal from the control unit may be determined by a calculation formula, or by referring to a map, a table, or the like. The specific flow of processing by the control unit will be described later using flowcharts.
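The selector rule above can be sketched in a few lines (α = 6 and m = 4, i.e. kw = 3, are chosen as an example):

```python
# Selector rule: in the i-th step, input terminal t is fed from memory
# ((t + m * (i - 1)) mod α).
ALPHA, M = 6, 4  # input width α = 6, output width m = 4 (kw = 3)

def source_memory(terminal, i):
    return (terminal + M * (i - 1)) % ALPHA

print([source_memory(t, 1) for t in range(ALPHA)])  # [0, 1, 2, 3, 4, 5]
print([source_memory(t, 2) for t in range(ALPHA)])  # [4, 5, 0, 1, 2, 3]
```

Because both m and α are even, the parity of the source memory always equals the parity of the terminal, which is why the connection circuit only needs the even-to-even and odd-to-odd wires.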
In FIG. 3, memories 0 to 5 are the storage areas in the input buffer 22 when the input width α is 6, and they store pixel data D0 to D23 of the image data. As described above, the input data reading unit 21 stores the elements of the input data one at a time into memories 0 to α-1 in a predetermined order, starting from the leading memory 0, and upon reaching the last memory α-1, returns to the leading memory 0 and continues storing elements in the predetermined order. Therefore, memory 0 stores pixel data D0, D6, D12, and D18 in order from the top, and memory 1 stores D1, D7, D13, and D19 in order from the top. The same applies to memories 2 to 5 (see FIG. 3). The pixel data stored in each of memories 0 to 5 is designated by a variable indicating the address within the memory. In the present embodiment, the variables indicating the addresses within memories 0 to 5 are addresses RA0 to RA5.
The control unit sends data-selecting control signals SEL0 to SEL5 to input terminals 0 to 5, thereby controlling which of memories 0 to 5 sends pixel data to which of input terminals 0 to 5. Here, the data sent to input terminals 0 to 5 are data ROT0 to ROT5, obtained by rotating the image data so that it is left-justified (the lowest-numbered image data arrives at I[0]).
<Process flow>
Next, the flow of processing executed by the CNN processing system 1 according to the present embodiment will be described. The specific contents and order of the processing described below are one example for implementing the present disclosure; specific processing contents and order may be selected as appropriate according to the embodiment of the present disclosure.
FIGS. 4 and 5 are flowcharts outlining the flow of the control processing for the connection circuit (Rotate circuit) between the input buffer 22 and the product-sum operation module 23 according to the present embodiment. The processing shown in these flowcharts is executed repeatedly during inference computation in the CNN processing system 1.
In step S101, parameters are initialized. The control unit initializes to 0 each of: the coordinate x, indicating the position in the X-axis direction of an element (in the present embodiment, pixel data) in the input data; the remainder x_mod6 of the integer division of the coordinate x by α (in the present embodiment, α = 6); and the quotient x_div6 of that integer division. Further, the control unit sets the increment x_inc of the coordinate x to a value corresponding to the kernel width kw used when executing the convolution operation. Specifically, in the present embodiment, the control unit sets the increment x_inc to 6 when the kernel width kw is 1, to 4 when kw is 3, and to 2 when kw is 5. Here, as described above, the relationship α = m + r - 1 holds between the input width α, the output width m, and the number of taps r, with kernel width kw = number of taps r and increment x_inc = output width m, so the increment x_inc is determined based on the input width α and the kernel width kw. Thereafter, the processing proceeds to step S102.
In steps S102 to S112, addresses RA0 to RA5 are set. The control unit sets the current value of the quotient x_div6 in each of addresses RA0 to RA5, the variables indicating the addresses within memories 0 to 5 (step S102). The control unit then compares the current value of the remainder x_mod6 with predetermined values and updates the values of addresses RA0 to RA5 according to the comparison results (steps S103 to S112). Specifically, the control unit sets address RA0 to "x_div6 + 1" if x_mod6 >= 1, address RA1 to "x_div6 + 1" if x_mod6 >= 2, address RA2 to "x_div6 + 1" if x_mod6 >= 3, address RA3 to "x_div6 + 1" if x_mod6 >= 4, and address RA4 to "x_div6 + 1" if x_mod6 >= 5. Thereafter, the processing proceeds to step S113.
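Steps S102 to S112 condense into a single rule: address RA_j is advanced by one exactly when x_mod6 >= j + 1 (the helper function below is an illustrative condensation, not the flowchart itself):

```python
# Address computation of steps S102-S112: start every RA_j at x_div6
# (step S102), then bump RA_j to x_div6 + 1 when x_mod6 >= j + 1
# (steps S103-S112).
def ra_addresses(x, alpha=6):
    x_div, x_mod = divmod(x, alpha)
    return [x_div + 1 if x_mod >= j + 1 else x_div for j in range(alpha)]

print(ra_addresses(0))  # [0, 0, 0, 0, 0, 0]
print(ra_addresses(4))  # [1, 1, 1, 1, 0, 0]
```

Intuitively, the memories that have already supplied their element for the current position read one address ahead, so that a window starting at x always spans consecutive elements across the banks.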
In step S113, the content of the control signals is determined. The control unit selects the read signals RD selected by the control signals SEL0 to SEL5 according to the value of the remainder x_mod6, as shown in FIG. 6.
FIG. 6 is a diagram showing the relationship between the remainder x_mod6, the control signals SEL, and the read signals RD in the present embodiment. As described above, the relationship α = m + r - 1 holds between the input width α, the output width m, and the number of taps r, with kernel width kw = number of taps r and increment x_inc = output width m; therefore, in the Winograd algorithm with input width α = 6, if the kernel size is 1*1, 3*3, or 5*5, the increment x_inc is always even. Consequently, the value of the remainder x_mod6 of the integer division of the coordinate x by 6 is also always even.
Here, the number of the read signal RD (= memory number) shown in FIG. 6 is the value calculated by adding the input terminal number to the value of the remainder x_mod6 and taking the remainder of the division of the sum by the input width α (6 in the present embodiment). In other words, the memory number whose data is input to an input terminal in the i-th processing step is the remainder of the integer division of "input terminal number + (output width m * (i-1))" by the input width α. Thereafter, the processing proceeds to step S114.
In step S114, pixel data is read from the input buffer 22 and input to the corresponding input terminals. The control unit outputs the values of addresses RA0 to RA5 to the corresponding memories 0 to 5, and outputs the values of the control signals SEL0 to SEL5 to the connection circuit (Rotate circuit). As a result, the pixel data at the addresses indicated by RA0 to RA5 is read from memories 0 to 5 and input to the input terminals designated by the control signals SEL0 to SEL5. Thereafter, the processing proceeds to step S115.
In steps S115 to S117, the parameters are updated. The control unit updates the coordinate x to "value of x before the update + increment x_inc", and further updates the remainder x_mod6 to "value of x_mod6 before the update + increment x_inc" (step S115). If the updated remainder x_mod6 becomes equal to or greater than the input width α (6 in the example shown in these flowcharts) (step S116), the control unit subtracts the input width α from the remainder x_mod6, adjusting it to a value less than the input width α, and adds 1 to the quotient x_div6 (step S117). Thereafter, the processing proceeds to step S118.
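The update loop of steps S115 to S117, together with the termination test of step S118, can be traced for kw = 3 (x_inc = m = 4) over an example row width of 24 (the row width is an illustrative choice):

```python
# Trace of the parameter update in steps S115-S117 for kw = 3
# (x_inc = m = 4, α = 6), over a row of width 24.
ALPHA, X_INC, WIDTH = 6, 4, 24
x = x_mod6 = x_div6 = 0
trace = []
while x < WIDTH:                 # step S118: stop at the image width
    trace.append((x, x_mod6, x_div6))
    x += X_INC                   # step S115
    x_mod6 += X_INC
    if x_mod6 >= ALPHA:          # step S116
        x_mod6 -= ALPHA          # step S117
        x_div6 += 1
print(trace)
# [(0, 0, 0), (4, 4, 0), (8, 2, 1), (12, 0, 2), (16, 4, 2), (20, 2, 3)]
```

The trace confirms the invariant the flowchart maintains without ever performing a division: x_mod6 = x mod 6 and x_div6 = x div 6 at every step.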
In step S118, whether to end the processing is determined. The control unit determines whether the coordinate x updated in step S115 has become equal to or greater than the width of the image data in the X-axis direction. If the coordinate x is less than that width, unprocessed pixels remain in the X-axis direction, and the processing returns to step S102. If the coordinate x is equal to or greater than that width, the processing of these flowcharts ends.
FIG. 7 is a time chart of the signals used in the input buffer 22 and the connection circuit when the control processing according to the present embodiment is executed. This time chart shows that, as long as even-numbered memories are connected to even-numbered input terminals and odd-numbered memories to odd-numbered input terminals, the input data can be passed to the product-sum operation module 23 without any problem.
<Operations using floating-point numbers>
In the CNN processing system 1 according to the present embodiment, floating-point representations of a plurality of different data types are used to represent data. Specifically, the host-side RAM 12a, the FPGA-side RAM 12b, the input buffer 22, and the output buffer 24 use PFU-FP8, a proprietary 8-bit floating-point type, while the product-sum operation module 23 uses PFU-FP9, a proprietary 9-bit floating-point type, and FP32, the single-precision floating-point type of the IEEE 754 standard (see FIG. 2).
FIG. 8 is a diagram showing the data structure of the proprietary 9-bit floating-point type PFU-FP9 used in the present embodiment. PFU-FP9 is a data structure for recording a floating-point number in a 9-bit width: the first bit is a sign bit indicating the sign (positive or negative) of the numerical value, the four bits from the 2nd to the 5th bit are the exponent part, and the four bits from the 6th to the 9th bit are the mantissa part.
FIG. 9 is a diagram showing the data structure of the proprietary 8-bit floating-point type PFU-FP8 used in the present embodiment. PFU-FP8 is a data structure for recording a floating-point number in an 8-bit width: the first bit is a sign bit indicating the sign (positive or negative) of the numerical value, the three bits from the 2nd to the 4th bit are the exponent part, and the four bits from the 5th to the 8th bit are the mantissa part. For exponents in the range 2^-1 to 2^-7 (exponent-part bit patterns "111" to "001"), the exponent is determined in the same way as in common floating-point data. For exponents in the range 2^-8 to 2^-11 (exponent-part bit pattern "000"), however, part of the mantissa part is used as an extended exponent part expressing part of the exponent, and the extended exponent part and the exponent part are combined to represent the actual exponent.
Specifically, in PFU-FP8, when the value of the exponent part is "000", part of the mantissa part becomes an extended exponent part expressing part of the exponent (see the part enclosed by the broken line in the figure). In the extended exponent part, the bit width used as the extended exponent part is extended in order from the lower bit (the 8th bit) according to the value of the exponent to be expressed, and the exponent of the floating-point number is represented by which bit of the extended exponent part the flag is set in.
More specifically, when the value of the exponent part is "000" and the flag ("1") is in the 8th bit, the exponent is 2^-8; when the value of the exponent part is "000" and the flag ("1") is in the 7th bit, the exponent is 2^-9; when the value of the exponent part is "000" and the flag ("1") is in the 6th bit, the exponent is 2^-10; and when the value of the exponent part is "000" and the flag ("1") is in the 5th bit, the exponent is 2^-11. With such a proprietary floating-point type, the precision of the data becomes coarser, but the representable dynamic range widens, so the inference accuracy of the CNN can be improved.
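Assuming the field layout and flag rules described above, the decoding of a PFU-FP8 value may be sketched as follows (the function name and the choice of returning the raw fields as a tuple, rather than reconstructing a real number, are illustrative only; the actual implementation is a circuit):

```python
def decode_pfu_fp8(b):
    """Split an 8-bit PFU-FP8 value into (sign, exponent, mantissa, mantissa_width).

    Bit layout per the description: bit 1 = sign, bits 2-4 = exponent field,
    bits 5-8 = mantissa field.  When the exponent field is "000", the lowest
    set bit of the mantissa field acts as the extended-exponent flag.
    """
    sign = (b >> 7) & 0x1
    exp_field = (b >> 4) & 0x7
    frac = b & 0xF
    if exp_field != 0:
        # Normal case: field values "111".."001" encode exponents 2^-1 .. 2^-7.
        return sign, exp_field - 8, frac, 4
    # Extended-exponent case: the flag position selects 2^-8 .. 2^-11
    # (pos 0 = 8th bit .. pos 3 = 5th bit); bits above the flag remain mantissa.
    for pos in range(4):
        if (frac >> pos) & 1:
            return sign, -8 - pos, frac >> (pos + 1), 3 - pos
    # Exponent field "000" with no flag set is not specified here; treat as zero.
    return sign, None, 0, 0
```

Note how the surviving mantissa bits are simply the bits above the flag; no case requires renormalization.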
Furthermore, PFU-FP8 is easy to convert to other floating-point types. The following explains how adopting the floating-point type according to the present disclosure reduces the cost of conversion to other floating-point types.
FIG. 10 is a diagram schematically showing the functional configuration of a conversion circuit from the floating-point type PFU-FP8 to the floating-point type PFU-FP9 in the present embodiment. The conversion circuit includes a receiving unit 31, an exponent output unit 32, and a mantissa output unit 33.
The receiving unit 31 receives the sign of PFU-FP8, the exponent part of PFU-FP8, and the mantissa part of PFU-FP8.
The exponent output unit 32 outputs the exponent calculated by referring to the exponent part and the extended exponent part of PFU-FP8 as the value to be recorded in the exponent part of PFU-FP9, which has a wider bit width than the exponent part of PFU-FP8.
The mantissa output unit 33 outputs the values of the bits of the PFU-FP8 mantissa part that are not used as the extended exponent part as they are, and outputs the values of the bits that were used as the extended exponent part as 0, thereby outputting the value to be recorded in the mantissa part of PFU-FP9.
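A minimal software sketch of this conversion follows, under one assumption not stated explicitly in the text: that the 4-bit PFU-FP9 exponent field encodes 2^-k as 16 − k, mirroring the 3-bit PFU-FP8 field's encoding of 2^-k as 8 − k.

```python
def pfu_fp8_to_fp9(b):
    """Sketch of the PFU-FP8 -> PFU-FP9 conversion (fields packed into an int)."""
    # Split the PFU-FP8 fields: 1-bit sign, 3-bit exponent, 4-bit mantissa.
    sign = (b >> 7) & 1
    exp_field = (b >> 4) & 0x7
    frac = b & 0xF
    if exp_field != 0:
        # Normal case: same exponent in a wider field, mantissa unchanged.
        exp9 = exp_field + 8            # (exp_field - 8) + 16
        frac9 = frac
    else:
        # Extended-exponent case: the lowest set mantissa bit is the flag.
        pos = next((p for p in range(4) if (frac >> p) & 1), None)
        exp9 = 16 + (-8 - pos) if pos is not None else 0
        # The flag bit and the bits below it are output as 0; the surviving
        # mantissa bits keep their positions, so no bit shift is needed.
        frac9 = frac & ~((1 << (pos + 1)) - 1) & 0xF if pos is not None else 0
    return (sign << 8) | (exp9 << 4) | frac9
```

The extended-exponent branch contains only a mask, never a shift, which is exactly the property the comparison with FIGS. 11 and 12 below turns on.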
FIG. 11 is a diagram showing a conversion circuit from PFU-FP8 to PFU-FP9 in the present embodiment. In FIG. 11, FP8_f0 to FP8_f3 denote the inputs of the mantissa part of PFU-FP8, FP8_exp0 to FP8_exp2 denote the inputs of the exponent part of PFU-FP8, and FP8_sign denotes the input of the sign of PFU-FP8. Likewise, FP9_f0 to FP9_f3 denote the outputs of the mantissa part of PFU-FP9, FP9_exp0 to FP9_exp3 denote the outputs of the exponent part of PFU-FP9, and FP9_sign denotes the output of the sign of PFU-FP9.
FIG. 12 is a diagram showing a conventional conversion circuit from a floating-point type FP8 (IEEE) to a floating-point type FP9 (IEEE). In FIG. 12, FP8_f0 to FP8_f3 denote the inputs of the mantissa part of FP8 (IEEE), FP8_exp0 to FP8_exp2 denote the inputs of the exponent part of FP8 (IEEE), and FP8_sign denotes the input of the sign of FP8 (IEEE). Likewise, FP9_f0 to FP9_f3 denote the outputs of the mantissa part of FP9 (IEEE), FP9_exp0 to FP9_exp3 denote the outputs of the exponent part of FP9 (IEEE), and FP9_sign denotes the output of the sign of FP9 (IEEE).
The circuits (1) and (5), indicated by broken lines in FIGS. 11 and 12, are circuits for determining whether the bits of the exponent part are all 0. The circuits (2) and (6) are circuits that obtain the exponent indicated by the mantissa part when the exponent part is all 0 (that is, when the number is a denormalized number). The circuits (3) and (7) are circuits that obtain the value of the exponent part when the denormalized number is expressed as a normalized number in FP9. The circuit (4) is a circuit for converting the mantissa part of a denormalized number in FP8 (IEEE) into the mantissa part of a normalized number in FP9 (IEEE), and performs a bit shift. The circuit (8) is a circuit for converting the mantissa part used to express a denormalized number in PFU-FP8 into the mantissa part of a normalized number in PFU-FP9.
Comparing FIG. 11 with FIG. 12, it can be seen that the scale of the circuit (8) in FIG. 11 is smaller than that of the circuit (4) in FIG. 12, which shows the conventional technique. This is because, although PFU-FP8 adopts a representation scheme in which part of the mantissa part is denormalized into an extended exponent part, the extended exponent part is extended in order from the lower bit (the 8th bit), so conversion to PFU-FP9 only requires setting the extended exponent part to "0", without processing such as a bit shift. That is, with the PFU-FP8 type described in the present embodiment, the conventional bit-shift circuit becomes unnecessary when converting to another floating-point type, and the conversion cost (circuit scale, program size, memory usage, and so on) can be reduced.
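For contrast, the following sketch shows the work the IEEE-style circuit (4) must perform: a subnormal mantissa has to be shifted left until its leading 1 becomes the implicit bit, with the shift count folded into the wider exponent. The biases 3 and 7 are illustrative assumptions for 3-bit and 4-bit exponent fields, not values taken from the text.

```python
def ieee_subnormal_to_fp9(frac4, bias8=3, bias9=7):
    """Normalize a 4-bit IEEE-style subnormal fraction into a wider format.

    frac4 is the fraction of an FP8 (IEEE) subnormal, whose value is
    0.frac4 * 2^(1 - bias8).  Returns (exp9_field, frac9_field).
    """
    assert 0 < frac4 < 16
    shift = 0
    while not (frac4 & 0b1000):     # find the leading 1 of the fraction
        frac4 <<= 1                 # this is the bit shift that PFU-FP8 avoids
        shift += 1
    frac9 = (frac4 << 1) & 0xF      # the leading 1 becomes the implicit bit
    exp9 = (1 - bias8) - (shift + 1) + bias9
    return exp9, frac9
```

The loop and shifter here are the hardware that circuit (8) in FIG. 11 does not need, since the PFU-FP8 mantissa bits already sit in their final positions.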
<Variations>
In the embodiment described above, PFU-FP8 and PFU-FP9 were adopted as the proprietary floating-point types, but other floating-point types may be adopted instead.
FIG. 13 is a diagram schematically showing the functional configuration of a CNN processing system 1b according to the present embodiment. The functional configuration of the CNN processing system 1b is roughly the same as that of the CNN processing system 1 described with reference to FIG. 2, except that the input buffer 22, the Winograd conversion processing, the cumulative addition pipeline 25, and so on are omitted, and except that the floating-point types adopted are different. In the CNN processing system 1b, the host-side RAM 12a, the FPGA-side RAM 12b, the input buffer 22, and the output buffer 24 use PFU-FP6, a proprietary 6-bit floating-point type, while the product-sum operation module 23 uses PFU-FP7, a proprietary 7-bit floating-point type, and FP32, the single-precision floating-point type of the IEEE 754 standard.
FIG. 14 is a diagram showing the data structure of the proprietary 7-bit floating-point type PFU-FP7 used in the present embodiment. PFU-FP7 is a data structure for recording a floating-point number in a 7-bit width: the first bit is a sign bit indicating the sign (positive or negative) of the numerical value, the four bits from the 2nd to the 5th bit are the exponent part, and the two bits from the 6th to the 7th bit are the mantissa part.
FIG. 15 is a diagram showing the data structure of the proprietary 6-bit floating-point type PFU-FP6 used in the present embodiment. PFU-FP6 is a data structure for recording a floating-point number in a 6-bit width: the first bit is a sign bit indicating the sign (positive or negative) of the numerical value, the three bits from the 2nd to the 4th bit are the exponent part, and the two bits from the 5th to the 6th bit are the mantissa part. For exponents in the range 2^-1 to 2^-5 (exponent-part bit patterns "111" to "011"), the exponent is determined in the same way as in common floating-point data. For exponents in the range 2^-6 to 2^-11, however, part of the mantissa part is used as an extended exponent part expressing part of the exponent, and the extended exponent part and the exponent part are combined to represent the actual exponent.
Specifically, in PFU-FP6, when the value of the exponent part is "010", "001", or "000", the second bit from the bottom of the mantissa part (the 5th bit) becomes an extended exponent part expressing part of the exponent (see the part enclosed by the broken line in the figure). The exponent of the floating-point number is then represented by the combination of the state of the flag in the 5th bit, which is the extended exponent part, and the value of the exponent part.
More specifically, the values of the exponents 2^-6 to 2^-11 are represented by the 4-bit combinations of the exponent part and the extended exponent part, from "0101" to "0000". With such a proprietary floating-point type, the precision of the data becomes coarser, but the representable dynamic range widens, so the inference accuracy of the CNN can be improved.
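Assuming the combination rule just described, namely that the 4-bit combinations "0101" down to "0000" map to 2^-6 down to 2^-11 (exponent = combined value − 11), and that "111" down to "011" map to 2^-1 down to 2^-5 (exponent = field value − 8), PFU-FP6 decoding can be sketched as follows (function name and tuple format are illustrative only):

```python
def decode_pfu_fp6(v):
    """Split a 6-bit PFU-FP6 value into (sign, exponent, mantissa, mantissa_width)."""
    sign = (v >> 5) & 1
    exp_field = (v >> 2) & 0x7
    frac = v & 0x3
    if exp_field >= 0b011:
        # Exponent fields "111".."011" encode 2^-1 .. 2^-5 directly.
        return sign, exp_field - 8, frac, 2
    # Fields "010","001","000": the upper mantissa bit is the extended-exponent bit.
    ext = (frac >> 1) & 1
    combined = (exp_field << 1) | ext   # 4-bit combination "0101".."0000"
    return sign, combined - 11, frac & 1, 1
```

In this range only the low mantissa bit survives as fraction, trading one bit of precision for six additional binades of range.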
<Effects>
According to the embodiment described above, the connection circuits between the memories 0 to α of the input buffer 22 and the input terminals 0 to α of the product-sum operation module 23 are reduced, and the circuit scale (whether a logic circuit or a physical circuit) can be reduced. For example, when performing a two-dimensional Winograd transform with α = 6, the 6 × 6 = 36 connection circuits conventionally required become 3 × 3 = 9 with the technique according to the present embodiment, so the circuit scale for inputting data into the arithmetic module that performs the Winograd transform becomes 1/4. Consequently, for a programmable device such as an FPGA this alleviates the scarcity of resources such as memory and logic circuits, and for a dedicated device such as an ASIC it improves productivity and customizability.
Furthermore, according to the embodiment described above, it becomes easy to reduce the number of operations by using the Winograd transform in a CNN that uses a programmable device such as an FPGA or a dedicated device such as an ASIC, making CNN operations more efficient and faster.
Also, according to the proprietary floating-point types described above, when the value of the exponent part becomes a predetermined value, part of the mantissa part becomes an extended exponent part expressing part of the exponent, and the extended exponent part and the exponent part are combined to represent the exponent. As a result, the precision of the data becomes coarser, but the representable dynamic range widens, and the inference accuracy of the CNN can be improved.
Furthermore, according to the proprietary floating-point types described above, the cost of floating-point conversion is small, and the conversion circuit may be either a physical circuit or a logic circuit. For a programmable device such as an FPGA this alleviates the scarcity of resources such as memory and logic circuits, and for a dedicated device such as an ASIC it improves productivity and customizability.
1 CNN processing system
Claims (12)
- A data structure for recording a floating-point number with a predetermined bit width in a storage device of an information processing device, the data structure comprising: an exponent part, within the predetermined bit width, in which an exponent of the floating-point number is recorded; and a mantissa part, within the predetermined bit width, in which a mantissa of the floating-point number is recorded, wherein a part of the mantissa part becomes an extended exponent part expressing a part of the exponent when the value of the exponent part becomes a predetermined value, and the extended exponent part and the exponent part are combined to represent the exponent of the floating-point number.
- The data structure according to claim 1, wherein a part of the mantissa part becomes an extended exponent part expressing a part of the exponent when the value of the exponent part becomes 0.
- The data structure according to claim 1 or 2, wherein, in the extended exponent part, the bit width changes according to the value of the exponent to be represented, and the exponent of the floating-point number is represented by which bit the flag is set in.
- The data structure according to any one of claims 1 to 3, wherein, in the extended exponent part, the bit width used as the extended exponent part is extended in order from the lower bit according to the value of the exponent to be represented.
- The data structure according to claim 1, wherein the exponent of the floating-point number is represented by the combination of the state of a flag in a predetermined bit serving as the extended exponent part and the value of the exponent part.
- An information processing device comprising: receiving means for receiving an input of a floating-point number recorded in a data structure having a first exponent part, a first mantissa part, and an extended exponent part that expresses a part of the exponent using a part of the first mantissa part when the value of the first exponent part becomes a predetermined value; exponent output means for outputting the exponent calculated by referring to the first exponent part and the extended exponent part, in which the bit width used as the extended exponent part is extended in order from the lower bit according to the value of the exponent to be represented, as a value to be recorded in a second exponent part having a wider bit width than the first exponent part; and mantissa output means for outputting a value to be recorded in a second mantissa part by outputting the values of the bits of the first mantissa part that are not used as the extended exponent part as they are and outputting the values of the bits that were used as the extended exponent part as 0.
- The information processing device according to claim 6, wherein, in the extended exponent part, the exponent of the floating-point number is represented by which bit the flag is set in.
- The information processing device according to claim 6 or 7, wherein the receiving means, the exponent output means, and the mantissa output means are configured as a physical circuit.
- The information processing device according to claim 6 or 7, wherein the receiving means, the exponent output means, and the mantissa output means are configured as a logic circuit in a programmable device.
- The information processing device according to any one of claims 6 to 9, wherein the information processing device performs operations in a convolutional neural network.
- A method in which a computer executes: a receiving step of receiving an input of a floating-point number recorded in a data structure having a first exponent part, a first mantissa part, and an extended exponent part that expresses a part of the exponent using a part of the first mantissa part when the value of the first exponent part becomes a predetermined value; an exponent output step of outputting the exponent calculated by referring to the first exponent part and the extended exponent part, in which the bit width used as the extended exponent part is extended in order from the lower bit according to the value of the exponent to be represented, as a value to be recorded in a second exponent part having a wider bit width than the first exponent part; and a mantissa output step of outputting a value to be recorded in a second mantissa part by outputting the values of the bits of the first mantissa part that are not used as the extended exponent part as they are and outputting the values of the bits that were used as the extended exponent part as 0.
- A program for causing a computer to execute: a receiving step of receiving an input of a floating-point number recorded in a data structure having a first exponent part, a first mantissa part, and an extended exponent part that expresses a part of the exponent using a part of the first mantissa part when the value of the first exponent part becomes a predetermined value; an exponent output step of outputting the exponent calculated by referring to the first exponent part and the extended exponent part, in which the bit width used as the extended exponent part is extended in order from the lower bit according to the value of the exponent to be represented, as a value to be recorded in a second exponent part having a wider bit width than the first exponent part; and a mantissa output step of outputting a value to be recorded in a second mantissa part by outputting the values of the bits of the first mantissa part that are not used as the extended exponent part as they are and outputting the values of the bits that were used as the extended exponent part as 0.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2018/034779 WO2020059074A1 (en) | 2018-09-20 | 2018-09-20 | Data construct, information processing device, method, and program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020059074A1 true WO2020059074A1 (en) | 2020-03-26 |
Family
ID=69888561
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2018/034779 WO2020059074A1 (en) | 2018-09-20 | 2018-09-20 | Data construct, information processing device, method, and program |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2020059074A1 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS63259720A (en) * | 1988-03-25 | 1988-10-26 | Hitachi Ltd | Floating point multiplication circuit |
JP2010027049A (en) * | 2008-07-22 | 2010-02-04 | Internatl Business Mach Corp <Ibm> | Circuit device using floating point execution unit, integrated circuit device, program product, and method (dynamic range adjusting floating point execution unit) |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18933935; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 18933935; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: JP |