WO2017185396A1 - Apparatus and method for performing matrix addition/subtraction operations - Google Patents

Apparatus and method for performing matrix addition/subtraction operations

Info

Publication number
WO2017185396A1
Authority
WO
WIPO (PCT)
Prior art keywords
matrix
instruction
scalar
data
addition
Prior art date
Application number
PCT/CN2016/081117
Other languages
English (en)
French (fr)
Inventor
张潇
刘少礼
陈天石
陈云霁
Original Assignee
北京中科寒武纪科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京中科寒武纪科技有限公司 filed Critical 北京中科寒武纪科技有限公司
Priority to EP21173815.8A priority Critical patent/EP3910503A1/en
Priority to EP16899907.6A priority patent/EP3451163A4/en
Publication of WO2017185396A1 publication Critical patent/WO2017185396A1/zh
Priority to US16/171,926 priority patent/US10860681B2/en
Priority to US16/171,681 priority patent/US20190065436A1/en
Priority to US16/250,123 priority patent/US10891353B2/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003 Arrangements for executing specific machine instructions
    • G06F9/30007 Arrangements for executing specific machine instructions to perform operations on data operands
    • G06F9/3001 Arithmetic instructions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30003 Arrangements for executing specific machine instructions
    • G06F9/30007 Arrangements for executing specific machine instructions to perform operations on data operands
    • G06F9/30036 Instructions to perform operations on packed data, e.g. vector, tile or matrix operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology

Definitions

  • the present disclosure relates to the field of computers, and more particularly to an apparatus and method for performing matrix addition and subtraction operations.
  • a known scheme for performing matrix addition and subtraction operations is to use a general purpose processor that performs general purpose instructions through a general purpose register file and general purpose functions to perform matrix addition and subtraction operations.
  • one of the disadvantages of this method is that a single general purpose processor is mostly used for scalar calculations, so its performance is low when performing matrix operations.
  • when multiple general purpose processors execute in parallel, a small number of processors yields an insignificant speedup, while with a large number of processors the communication between them may become a performance bottleneck.
  • a series of matrix addition and subtraction operations are performed using a graphics processing unit (GPU) in which operations are performed by executing general purpose SIMD instructions using a general purpose register file and a general purpose stream processing unit.
  • the GPU on-chip cache is too small, so large-scale matrix operations require continuous off-chip data transfer, and the off-chip bandwidth becomes the main performance bottleneck.
  • matrix addition and subtraction operations are performed using a specially customized matrix operation device, in which the matrix operations are performed using a custom register file and custom processing units.
  • however, the existing dedicated matrix operation devices are limited by the design of the register file and cannot flexibly support matrix addition and subtraction operations of different lengths.
  • the present disclosure provides an apparatus and method for performing matrix addition and subtraction operations.
  • an apparatus for performing matrix addition and subtraction operations, including:
  • a storage unit for storing matrix data related to a matrix operation instruction;
  • a register unit for storing scalar data related to the matrix operation instruction;
  • a control unit for decoding the matrix operation instruction and controlling the operation process of the matrix operation instruction;
  • a matrix operation unit for performing matrix addition and subtraction operations on an input matrix according to the decoded matrix operation instruction;
  • wherein the matrix operation unit is a customized hardware circuit.
  • an apparatus for performing matrix addition and subtraction operations, including:
  • a fetch module for taking out the next matrix operation instruction to be executed from the instruction sequence and transmitting the matrix operation instruction to the decoding module;
  • a decoding module for decoding the matrix operation instruction and transmitting the decoded matrix operation instruction to the instruction queue module;
  • an instruction queue module for temporarily storing the decoded matrix operation instruction and obtaining the scalar data related to the matrix operation instruction from the matrix operation instruction itself or from the scalar registers; after the scalar data is obtained, the matrix operation instruction is sent to the dependency processing unit;
  • a scalar register file including a plurality of scalar registers for storing scalar data related to the matrix operation instruction;
  • a dependency processing unit for determining whether there is a dependency between the matrix operation instruction and a previously unfinished operation instruction; if there is a dependency, the matrix operation instruction is sent to the storage queue module, and if there is no dependency, the matrix operation instruction is sent to the matrix operation unit;
  • a storage queue module for storing a matrix operation instruction that has a dependency on a previous operation instruction, and sending the matrix operation instruction to the matrix operation unit after the dependency is released;
  • a matrix operation unit for performing matrix addition and subtraction operations on the input matrix according to the received matrix operation instruction;
  • a scratchpad memory for storing the input matrix and the output matrix;
  • an input/output access module for directly accessing the scratchpad memory, responsible for reading the output matrix from and writing the input matrix to the scratchpad memory.
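As a rough sketch of the routing decision the dependency processing unit makes, the toy Python model below sends a decoded instruction straight to the matrix operation unit unless one of its input address ranges overlaps the output range of an earlier instruction, in which case it is parked in the storage queue. All names, data structures, and address ranges here are invented for illustration; they are not the patent's actual microarchitecture.

```python
def overlaps(a, b):
    """True if two scratchpad ranges (addr, length) intersect."""
    return a[0] < b[0] + b[1] and b[0] < a[0] + a[1]

def classify(instrs):
    """Route each decoded instruction: straight to the matrix operation
    unit, or into the storage queue behind the earlier instruction whose
    output data it reads."""
    routed = []
    for i, ins in enumerate(instrs):
        dep = next((prev["name"] for prev in instrs[:i]
                    if any(overlaps(src, prev["dst"]) for src in ins["srcs"])),
                   None)
        routed.append((ins["name"],
                       f"storage queue (waits on {dep})" if dep else "operation unit"))
    return routed

# MA2 reads the range that MA1 writes, so it must wait:
i1 = {"name": "MA1", "srcs": [(0x000, 64), (0x040, 64)], "dst": (0x100, 64)}
i2 = {"name": "MA2", "srcs": [(0x100, 64), (0x300, 64)], "dst": (0x200, 64)}
print(classify([i1, i2]))
# [('MA1', 'operation unit'), ('MA2', 'storage queue (waits on MA1)')]
```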
  • the present disclosure also provides a method of performing matrix addition and subtraction operations.
  • the present disclosure can be applied to the following scenarios (including but not limited to): data processing; robots, computers, printers, scanners, telephones, tablets, smart terminals, mobile phones, driving recorders, navigators, sensors, webcams, cloud servers, cameras, video cameras, projectors, watches, earphones, mobile storage, wearable devices, and other electronic products; aircraft, ships, vehicles, and other means of transportation; televisions, air conditioners, microwave ovens, refrigerators, rice cookers, humidifiers, washing machines, electric lights, gas stoves, range hoods, and other household appliances; as well as various medical devices including nuclear magnetic resonance instruments, B-mode ultrasound scanners, and electrocardiographs.
  • FIG. 1 is a schematic structural diagram of an apparatus for performing matrix addition and subtraction operations according to an embodiment of the present disclosure.
  • FIG. 2 is a schematic diagram of the operation of a matrix operation unit in accordance with an embodiment of the present disclosure.
  • FIG. 3 is a format diagram of an instruction set in accordance with an embodiment of the present disclosure.
  • FIG. 4 is a schematic structural diagram of a matrix operation device according to an embodiment of the present disclosure.
  • FIG. 5 is a flowchart of a matrix operation device executing a matrix addition instruction according to an embodiment of the present disclosure.
  • FIG. 6 is a flow diagram of a matrix operation device executing a matrix-subtract-scalar instruction in accordance with an embodiment of the present disclosure.
  • the present disclosure provides a matrix addition and subtraction operation device, including: a storage unit, a register unit, a control unit, and a matrix operation unit;
  • the storage unit stores a matrix
  • the register unit stores an input matrix address, an input matrix length, and an output matrix address
  • the control unit is configured to perform a decoding operation on the matrix operation instruction, and control each module according to the matrix operation instruction to control the execution process of the matrix addition and subtraction operation;
  • the matrix operation unit obtains the input matrix address, input matrix length, and output matrix address from the instruction or from the register unit, then acquires the corresponding matrix from the storage unit according to the input matrix address, and then performs the matrix operation on the acquired matrix to obtain the matrix operation result.
  • the present disclosure temporarily stores the matrix data participating in the calculation in a storage unit (for example, a scratchpad memory), so that data of different widths can be supported more flexibly and effectively during the matrix operation, improving the execution performance of tasks that include a large number of matrix addition and subtraction operations.
  • the matrix addition and subtraction unit may be implemented as a customized hardware circuit, including but not limited to an FPGA, a CGRA, an application specific integrated circuit ASIC, an analog circuit, a memristor, and the like.
  • FIG. 1 is a schematic structural diagram of an apparatus for performing matrix addition and subtraction operations provided by the present disclosure. As shown in FIG. 1, the apparatus includes:
  • a storage unit for storing matrices; in one embodiment it may be a scratchpad memory capable of supporting matrix data of different sizes.
  • the present disclosure temporarily stores the necessary calculation data in a scratchpad memory, so that the operation device can support data of different widths more flexibly and effectively during the matrix operation.
  • the scratchpad memory can be implemented by a variety of different memory devices such as SRAM, DRAM, eDRAM, memristor, 3D-DRAM, and nonvolatile memory.
  • a register unit for storing matrix addresses, where a matrix address is the address at which a matrix is stored in the storage unit;
  • in one embodiment, the register unit may be a scalar register file, providing the scalar registers required during the operation; the scalar registers store the input matrix address, input matrix length, and output matrix address.
  • when an operation involves both a matrix and a scalar, the matrix operation unit obtains not only the matrix address but also the corresponding scalar from the register unit.
  • control unit for controlling the behavior of various modules in the device.
  • the control unit reads the prepared instruction, decodes it into a plurality of micro-instructions, and sends them to the other modules in the device; the other modules perform the corresponding operations according to the micro-instructions received.
  • a matrix operation unit for acquiring the various addition and subtraction operation instructions, obtaining the matrix address from the register unit according to the instruction, acquiring the corresponding matrix from the storage unit according to that matrix address, performing the operation on the acquired matrix to obtain the matrix operation result, and storing the result in the scratchpad memory.
  • the matrix operation unit is responsible for all the matrix addition and subtraction operations of the device, including but not limited to matrix addition, matrix subtraction, matrix-add-scalar, and matrix-subtract-scalar operations.
  • the matrix addition and subtraction instructions are sent to this operation unit for execution; all of its operation components are parallel vector operation components that can perform the same operation on a whole column of data in parallel within the same clock cycle.
  • FIG. 2 shows an operational schematic of a matrix operation unit in accordance with an embodiment of the present disclosure.
  • 1 is a vector operator composed of a plurality of scalar operators;
  • 2 denotes the storage of matrix A in the scratchpad memory;
  • 3 denotes the storage of matrix B in the scratchpad memory.
  • Both matrices are of size m*n, and the width of the vector operator is k; that is, the vector operator can compute the addition or subtraction of a vector of length k at a time. Each time, the operator obtains vector data of length k from A and from B, performs the addition or subtraction in the operator, and writes the result back.
  • a complete matrix addition or subtraction may require several such calculations, as shown in FIG. 2.
  • the matrix addition and subtraction component is composed of a plurality of parallel scalar addition and subtraction operators.
  • for two matrices of a specified size, the operation unit sequentially reads in data of a certain length; this length is equal to the number of scalar addition and subtraction operators.
  • the corresponding data are added or subtracted in the corresponding scalar operators, computing a part of the matrix data each time, until the addition or subtraction of the entire matrix is completed.
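The chunked processing described above can be sketched as follows. This is a hypothetical Python model of the vector operator; the real device is a parallel hardware circuit, and the widths and data here are invented for illustration.

```python
def matrix_add_chunked(a, b, k):
    """Elementwise add of two flat m*n matrices, k elements per step.

    Mimics a vector operator of width k: each "cycle" it reads a
    length-k slice from A and from B, adds the slices in parallel,
    and writes the partial result back.
    """
    assert len(a) == len(b)
    out = [0] * len(a)
    for i in range(0, len(a), k):      # one pass per length-k slice
        out[i:i + k] = [x + y for x, y in zip(a[i:i + k], b[i:i + k])]
    return out

# A 2x3 matrix stored row-major, processed with operator width k=4
# (so the full addition takes two passes):
A = [1, 2, 3, 4, 5, 6]
B = [10, 20, 30, 40, 50, 60]
print(matrix_add_chunked(A, B, 4))  # [11, 22, 33, 44, 55, 66]
```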
  • for matrix-scalar operations, the operation unit expands the scalar data read from the register into vector data of the same width as the number of scalar operators, as one input of the addition or subtraction; the other input follows the same process as the matrix addition and subtraction described above: a certain length of matrix data is read from the scratchpad memory and added to or subtracted from the scalar-expanded vector.
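The matrix-scalar path can be sketched similarly (a hypothetical Python model; the data and width are invented): the scalar is expanded to the operator width so that the same vector adder path can be reused.

```python
def matrix_add_scalar_chunked(a, s, k):
    """Matrix-plus-scalar: the scalar s is expanded ("broadcast") into a
    length-k vector, then added chunk by chunk to the flat matrix a."""
    s_vec = [s] * k                    # scalar widened to operator width k
    out = [0] * len(a)
    for i in range(0, len(a), k):
        chunk = a[i:i + k]
        out[i:i + k] = [x + y for x, y in zip(chunk, s_vec[:len(chunk)])]
    return out

print(matrix_add_scalar_chunked([1, 2, 3, 4, 5], 10, 4))  # [11, 12, 13, 14, 15]
```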
  • in one embodiment, the matrix addition and subtraction operation device further includes an instruction cache unit for storing matrix operation instructions to be executed. While it is being executed, an instruction is also cached in the instruction cache unit; after the instruction has finished executing, it is committed.
  • the control unit in the apparatus further includes: an instruction queue module for sequentially storing the decoded matrix operation instructions; after the scalar data required by a matrix operation instruction is obtained, the matrix operation instruction and the scalar data are sent to the dependency processing unit.
  • the control unit in the apparatus further includes: a dependency processing unit for determining, before the matrix operation unit acquires an instruction, whether that operation instruction and a previously unfinished operation instruction access the same matrix storage address; if so, the operation instruction is sent to the storage queue module and, after the previous operation instruction has finished executing, the operation instruction in the storage queue is provided to the matrix operation unit; otherwise, the operation instruction is provided to the matrix operation unit directly. Specifically, when matrix operation instructions access the scratchpad memory, consecutive instructions may access the same block of storage space; to ensure the correctness of the execution result, if the current instruction is detected to have a dependency on the data of a previous instruction, it must wait in the storage queue until the dependency is removed.
  • the control unit in the apparatus further includes: a storage queue module, which contains an ordered queue in which instructions that have data dependencies on previous instructions are stored until the dependency is eliminated; after the dependency is eliminated, the operation instruction is provided to the matrix operation unit.
  • the apparatus further includes: an input and output unit configured to store the matrix in the storage unit, or obtain an operation result from the storage unit.
  • the input/output unit can directly access the storage unit, and is responsible for reading the matrix data from the memory to the storage unit or writing the matrix data from the storage unit to the memory.
  • the instruction set for the apparatus of the present disclosure adopts a Load/Store structure, and the matrix operation unit does not operate on data in the memory.
  • This instruction set uses a reduced instruction set architecture.
  • the instruction set only provides the most basic matrix operations; complex matrix operations are composed by combining these simple instructions, so that instructions can be executed in a single cycle at high clock frequencies.
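As an illustration of composing complex operations from the basic instructions, the toy interpreter below chains a matrix addition and a matrix-subtract-scalar to compute (A + B) - s. The MSS mnemonic, the dict-based "scratchpad", and the address scheme are all assumptions made for this sketch, not the patent's actual encoding.

```python
# Toy interpreter for a Load/Store-style matrix instruction sequence.
# Matrices live in a "scratchpad" dict keyed by address; only MA (matrix
# add) and MSS (matrix subtract scalar, an assumed mnemonic) are modeled.
scratchpad = {0x0: [1, 2, 3, 4], 0x1: [10, 20, 30, 40]}

def run(program, mem):
    for op, *args in program:
        if op == "MA":                 # dst <- src0 + src1 (elementwise)
            dst, a, b = args
            mem[dst] = [x + y for x, y in zip(mem[a], mem[b])]
        elif op == "MSS":              # dst <- src - scalar (elementwise)
            dst, a, s = args
            mem[dst] = [x - s for x in mem[a]]
    return mem

# (A + B) - 5, composed from two basic instructions:
run([("MA", 0x2, 0x0, 0x1), ("MSS", 0x3, 0x2, 5)], scratchpad)
print(scratchpad[0x3])  # [6, 17, 28, 39]
```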
  • the device fetches the instruction for decoding, and then sends it to the instruction queue for storage.
  • each parameter in the instruction is then obtained; a parameter may be written directly in the operation field of the instruction, or read from the specified register according to the register number given in the instruction's operation field.
  • the advantage of using registers to store parameters is that most loops can be realized without changing the instruction itself, simply by changing the values in the registers with instructions, which greatly reduces the number of instructions required to solve some practical problems.
  • the dependency processing unit determines whether the data actually needed by the instruction has a dependency relationship with the previous instruction, which determines whether the instruction can be immediately sent to the matrix operation unit for execution.
  • the instruction must wait until the instruction it depends on has been executed before it can be sent to the matrix operation unit for execution.
  • the instruction is then executed quickly, and the result, that is, the generated result matrix, is written back to the address provided by the instruction, after which the instruction is finished.
  • the matrix addition and subtraction operation instruction includes an operation code and at least one operation domain, wherein the operation code is used to indicate the function of the matrix operation instruction.
  • the matrix operation unit can perform different matrix operations by identifying the operation code; the operation field indicates the data information of the matrix operation instruction, which can be an immediate value or a register number. For example, when a matrix is to be acquired, the matrix start address and matrix length can be obtained from the corresponding register according to the register number, and the matrix stored at the corresponding address in the storage unit is then obtained according to that start address and length.
  • a matrix addition instruction (MA), according to which the device extracts matrix data of a specified size from specified addresses of the scratchpad memory, performs the matrix addition in the matrix operation unit, and writes the calculation result back to a specified address of the scratchpad memory; it is worth noting that a vector can be stored in the scratchpad memory as a special form of matrix (a matrix with only one row of elements).
  • a matrix subtraction instruction, according to which the device extracts matrix data of a specified size from specified addresses of the scratchpad memory, performs the matrix subtraction in the matrix operation unit, and writes the calculation result back to a specified address of the scratchpad memory; likewise, a vector can be stored in the scratchpad memory as a special form of matrix (a matrix with only one row of elements).
  • a matrix-add-scalar instruction (MAS), according to which the device fetches matrix data of a specified size from a specified address of the scratchpad memory, fetches scalar data from a specified address of the scalar register file, performs the matrix-add-scalar operation in the matrix operation unit, and writes the calculation result back to a specified address of the scratchpad memory.
  • a matrix-subtract-scalar instruction, according to which the device fetches matrix data of a specified size from a specified address of the scratchpad memory, fetches scalar data from a specified address of the scalar register file, performs the matrix-subtract-scalar operation in the matrix operation unit, and writes the calculation result back to a specified address of the scratchpad memory.
  • the scalar register file not only stores the address of the matrix but also stores the scalar data.
  • the device includes a fetch module, a decoding module, an instruction queue module, a scalar register file, a dependency processing unit, a storage queue module, a matrix operation unit, a scratchpad memory, and an IO memory access module;
  • the fetch module is responsible for fetching the next instruction to be executed from the instruction sequence and passing it to the decoding module;
  • the decoding module is responsible for decoding the instruction and transmitting the decoded instruction to the instruction queue;
  • the instruction queue temporarily stores the decoded matrix operation instruction and obtains the scalar data related to the matrix operation instruction from the instruction itself or from the scalar registers; after the scalar data is obtained, the matrix operation instruction is sent to the dependency processing unit;
  • a scalar register file providing a scalar register required by the device during operation;
  • the scalar register file includes a plurality of scalar registers for storing scalar data associated with the matrix operation instructions;
  • a dependency processing unit, which handles the storage dependencies that may exist between an instruction and the previous instruction.
  • matrix operation instructions access the scratchpad memory, and consecutive instructions may access the same block of memory. That is, the unit detects whether the storage range of the current instruction's input data overlaps with the storage range of the output data of a previously unfinished instruction; an overlap indicates that the instruction logically needs the calculation result of the previous instruction, so it must wait until the instructions it depends on have finished executing. During this time the instruction is temporarily stored in the storage queue described below. To ensure the correctness of the execution result, if the current instruction is detected to have a dependency on the data of a previous instruction, it must wait in the storage queue until the dependency is eliminated.
  • a storage queue module; the module is an ordered queue in which instructions that have data dependencies on previous instructions are stored until the dependency is eliminated;
  • a matrix operation unit, which is responsible for performing the addition and subtraction operations of the matrix;
  • a scratchpad memory; the module is a temporary storage device dedicated to matrix data, capable of supporting matrix data of different sizes, and is mainly used for storing input matrix data and output matrix data;
  • IO memory access module which is used to directly access the scratchpad memory and is responsible for reading data or writing data from the scratchpad memory.
  • FIG. 5 is a flowchart of executing a matrix addition instruction by an arithmetic apparatus according to an embodiment of the present disclosure. As shown in FIG. 5, the process of executing a matrix addition instruction includes:
  • the fetch module extracts the matrix addition instruction and sends the instruction to the decoding module.
  • the decoding module decodes the matrix addition instruction, and sends the matrix addition instruction to the instruction queue.
  • in the instruction queue, the matrix addition instruction obtains the data corresponding to the four operation fields in the instruction, either from the instruction itself or from the scalar register file, including the input matrix addresses, the input matrix length, and the output matrix address.
  • the instruction is sent to the dependency processing unit.
  • the dependency processing unit analyzes whether the instruction has a dependency on the data with the previous instruction that has not been executed. If there is a dependency, the instruction needs to wait in the store queue until it no longer has a dependency on the data with the previous unexecuted instruction.
  • the matrix operation unit extracts the input matrix data from the scratchpad memory according to the addresses and length of the input matrices, reading a fixed width of corresponding data from the two input matrices each time and adding the aligned column data in the matrix addition and subtraction component; this is repeated until the addition of the entire matrix is completed in the matrix operation unit.
  • FIG. 6 is a flowchart of an arithmetic device executing a matrix-subtract-scalar instruction according to an embodiment of the present disclosure. As shown in FIG. 6, the process of executing a matrix-subtract-scalar instruction includes:
  • the fetch module takes out the matrix-subtract-scalar instruction and sends the instruction to the decoding module.
  • the decoding module decodes the matrix-subtract-scalar instruction and sends the instruction to the instruction queue.
  • in the instruction queue, the matrix-subtract-scalar instruction obtains the data corresponding to the four operation fields in the instruction, either from the instruction itself or from the scalar register file, including the input matrix address, the input matrix length, the input scalar, and the output matrix address.
  • the instruction is sent to the dependency processing unit.
  • the dependency processing unit analyzes whether the instruction has a dependency on the data with the previous instruction that has not been executed. If there is a dependency, the instruction needs to wait in the store queue until it no longer has a dependency on the data with the previous unexecuted instruction.
  • after the dependency is resolved, the matrix-subtract-scalar instruction is sent to the matrix operation unit.
  • the matrix operation unit sequentially reads in part of the matrix data, subtracts the scalar data stored in the register from a column of data at a time in the matrix add/subtract-scalar component, and repeats this until the subtract-scalar operation on the entire matrix is completed.
  • compared with existing conventional solutions, the present disclosure provides a matrix computing device, together with corresponding instructions, that can well address the problem that more and more algorithms in the current computer field contain a large number of matrix addition and subtraction operations.
  • the present disclosure may have the advantages of simple instruction, convenient use, flexible matrix scale supported, and sufficient on-chip buffering.
  • the present disclosure can be used in a variety of computing tasks involving a large number of matrix addition and subtraction operations.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Advance Control (AREA)
  • Executing Machine-Instructions (AREA)
  • Complex Calculations (AREA)

Abstract

The present disclosure provides an apparatus for performing matrix addition and subtraction operations, including: a storage unit for storing matrix data related to a matrix operation instruction; a register unit for storing scalar data related to the matrix operation instruction; a control unit for decoding the matrix operation instruction and controlling the operation process of the matrix operation instruction; and a matrix operation unit for performing matrix addition and subtraction operations on an input matrix according to the decoded matrix operation instruction, wherein the matrix operation unit is a customized hardware circuit. The present disclosure also provides a method of performing matrix addition and subtraction operations.

Description

Apparatus and method for performing matrix addition/subtraction operations
Technical Field
The present disclosure relates to the field of computers, and more particularly to an apparatus and method for performing matrix addition and subtraction operations.
Background
In the current computer field, with the maturation of emerging technologies such as big data and machine learning, more and more tasks involve a variety of matrix addition and subtraction operations, especially addition and subtraction of large matrices, and these operations often become the bottleneck for improving the speed and effectiveness of algorithms.
In the prior art, one known scheme for performing matrix addition and subtraction operations is to use a general purpose processor, which executes general purpose instructions through a general purpose register file and general purpose functional units. However, one of the disadvantages of this method is that a single general purpose processor is mostly used for scalar calculations, and its performance is low when performing matrix operations. When multiple general purpose processors execute in parallel, a small number of processors yields an insignificant speedup, while with a large number of processors the communication between them may become a performance bottleneck.
In another prior art, a graphics processing unit (GPU) is used to perform a series of matrix addition and subtraction operations, in which the operations are performed by executing general purpose SIMD instructions using a general purpose register file and general purpose stream processing units. However, in the above scheme, the GPU on-chip cache is too small, so large-scale matrix operations require continuous off-chip data transfer, and the off-chip bandwidth becomes the main performance bottleneck.
In another prior art, a specially customized matrix operation device is used to perform matrix addition and subtraction operations, using a custom register file and custom processing units. However, under this approach the existing dedicated matrix operation devices are limited by the design of the register file and cannot flexibly support matrix addition and subtraction operations of different lengths.
In summary, neither on-chip multi-core general purpose processors, nor inter-chip interconnected general purpose processors (single-core or multi-core), nor inter-chip interconnected graphics processors can perform matrix addition and subtraction operations efficiently, and when dealing with matrix addition and subtraction problems these prior arts suffer from large code size, limited inter-chip communication, insufficient on-chip cache, and insufficiently flexible supported matrix sizes.
发明内容
基于此,本公开提供了一种执行矩阵加减法运算的装置和方法。
根据本公开一方面,提供了一种用于执行矩阵加减运算的装置,其中,包括:
存储单元,用于存储矩阵运算指令相关的矩阵数据;
寄存器单元,用于存储矩阵运算指令相关的标量数据;
控制单元,用于对矩阵运算指令进行译码,并控制矩阵运算指令的运算过程;
矩阵运算单元,用于根据译码后的矩阵运算指令,对输入矩阵进行矩阵加减运算操作;
其中,所述矩阵运算单元为定制的硬件电路。
According to another aspect of the present disclosure, an apparatus for performing matrix addition and subtraction operations is provided, comprising:

an instruction fetch module for fetching the next matrix operation instruction to be executed from an instruction sequence and passing the matrix operation instruction to a decoding module;

a decoding module for decoding the matrix operation instruction and passing the decoded matrix operation instruction to an instruction queue module;

an instruction queue module for temporarily storing the decoded matrix operation instruction and obtaining, from the matrix operation instruction or from scalar registers, the scalar data related to the operation of the matrix operation instruction, and, after the scalar data is obtained, sending the matrix operation instruction to a dependency processing unit;

a scalar register file comprising a plurality of scalar registers for storing scalar data related to matrix operation instructions;

a dependency processing unit for determining whether a dependency exists between the matrix operation instruction and a previously unfinished operation instruction, sending the matrix operation instruction to a store queue module if a dependency exists, and sending it to a matrix operation unit if no dependency exists;

a store queue module for storing a matrix operation instruction that has a dependency on a previous operation instruction and, after the dependency is resolved, sending the matrix operation instruction to the matrix operation unit;

a matrix operation unit for performing matrix addition and subtraction operations on input matrices according to the received matrix operation instruction;

a scratchpad memory for storing input matrices and output matrices; and

an input/output access module for directly accessing the scratchpad memory, responsible for reading output matrices from and writing input matrices to the scratchpad memory.
The present disclosure further provides a method for performing matrix addition and subtraction operations.
The present disclosure can be applied to the following scenarios (including but not limited to): various electronic products such as data processing devices, robots, computers, printers, scanners, telephones, tablet computers, smart terminals, mobile phones, driving recorders, navigators, sensors, webcams, cloud servers, cameras, video cameras, projectors, watches, earphones, mobile storage, and wearable devices; various vehicles such as aircraft, ships, and automobiles; various household appliances such as televisions, air conditioners, microwave ovens, refrigerators, rice cookers, humidifiers, washing machines, electric lamps, gas stoves, and range hoods; and various medical devices including nuclear magnetic resonance instruments, B-mode ultrasound scanners, and electrocardiographs.
Brief Description of the Drawings
Fig. 1 is a schematic structural diagram of an apparatus for performing matrix addition and subtraction operations according to an embodiment of the present disclosure.

Fig. 2 is a schematic diagram of the operation of a matrix operation unit according to an embodiment of the present disclosure.

Fig. 3 is a schematic diagram of the format of an instruction set according to an embodiment of the present disclosure.

Fig. 4 is a schematic structural diagram of a matrix operation apparatus according to an embodiment of the present disclosure.

Fig. 5 is a flowchart of a matrix operation apparatus executing a matrix addition instruction according to an embodiment of the present disclosure.

Fig. 6 is a flowchart of a matrix operation apparatus executing a matrix-minus-scalar instruction according to an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the present disclosure clearer, the present disclosure is described in further detail below with reference to specific embodiments and the accompanying drawings.
The present disclosure provides a matrix addition/subtraction operation apparatus comprising a storage unit, a register unit, a control unit, and a matrix operation unit, wherein:

the storage unit stores matrices;

the register unit stores input matrix addresses, input matrix lengths, and output matrix addresses;

the control unit decodes matrix operation instructions and, according to the matrix operation instructions, controls the respective modules so as to control the execution process of the matrix addition and subtraction operations; and

the matrix operation unit obtains the input matrix addresses, input matrix lengths, and output matrix address from the instruction or from the register unit, fetches the corresponding matrices from the storage unit according to the input matrix addresses, and then performs the matrix operation on the fetched matrices to obtain the matrix operation result.
The present disclosure temporarily stores the matrix data involved in the computation in the storage unit (for example, a scratchpad memory), so that data of different widths can be supported more flexibly and efficiently during matrix operations, improving the execution performance of tasks that contain a large number of matrix addition and subtraction operations.
In the present disclosure, the matrix addition/subtraction operation unit may be implemented as a custom hardware circuit, including but not limited to an FPGA, a CGRA, an application-specific integrated circuit (ASIC), an analog circuit, or a memristor-based circuit.
Fig. 1 is a schematic structural diagram of the apparatus for performing matrix addition and subtraction operations provided by the present disclosure. As shown in Fig. 1, the apparatus comprises:
a storage unit for storing matrices. In one embodiment, the storage unit may be a scratchpad memory capable of supporting matrix data of different sizes. The present disclosure temporarily stores the necessary computation data in the scratchpad memory, so that the operation apparatus can support data of different widths more flexibly and efficiently during matrix operations. The scratchpad memory may be implemented with various storage devices such as SRAM, DRAM, eDRAM, memristors, 3D-DRAM, and non-volatile memory.
a register unit for storing matrix addresses, where a matrix address is the address at which a matrix is stored in the storage unit. In one embodiment, the register unit may be a scalar register file providing the scalar registers needed during the operation; the scalar registers store the input matrix addresses, input matrix lengths, and output matrix addresses. When an operation between a matrix and a scalar is involved, the matrix operation unit obtains not only the matrix addresses but also the corresponding scalar from the register unit.
a control unit for controlling the behavior of the modules in the apparatus. In one embodiment, the control unit reads a prepared instruction, decodes it to generate a plurality of micro-instructions, and sends them to the other modules in the apparatus, which perform the corresponding operations according to the micro-instructions they receive.
a matrix operation unit for obtaining the various addition and subtraction operation instructions, obtaining matrix addresses from the register unit according to the instruction, fetching the corresponding matrices from the storage unit according to those matrix addresses, performing the operation on the fetched matrices to obtain the matrix operation result, and storing the matrix operation result in the scratchpad memory. The matrix operation unit is responsible for all matrix addition and subtraction operations of the apparatus, including but not limited to the matrix-add-matrix operation, the matrix-subtract-matrix operation, the matrix-add-scalar operation, and the matrix-subtract-scalar operation. Matrix addition/subtraction instructions are sent to this unit for execution; all of its functional components are parallel vector operation components that can perform the same operation on an entire column of data in parallel within the same clock cycle.
Fig. 2 shows a schematic diagram of the operation of the matrix operation unit according to an embodiment of the present disclosure, in which 1 denotes a vector operator composed of a plurality of scalar operators, 2 denotes the storage of matrix A in the scratchpad memory, and 3 denotes the storage of matrix B in the scratchpad memory. Both matrices are of size m*n, and the width of the vector operator is k; that is, the vector operator can compute the addition or subtraction result of vectors of length k in a single pass. The operator repeatedly fetches vector data of length k from A and from B, performs the addition or subtraction, and writes the result back; a complete matrix addition or subtraction may require several such passes. As shown in Fig. 2, the matrix addition/subtraction component consists of a plurality of parallel scalar adders/subtractors. When performing a matrix addition or subtraction, for two matrices of a specified size, the operation unit reads in data of a certain length at a time, this length being equal to the number of scalar adders/subtractors. The corresponding data are added or subtracted in the corresponding scalar operators, each pass computing part of the matrix data, until the addition or subtraction of the entire matrix is completed.
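The chunked pass structure described above can be summarized in software as follows. This is a minimal illustrative sketch, not the patented hardware: the flat scratchpad layout, the function name, and the default width k=4 are assumptions introduced only for the example.

```python
def matrix_add(scratch, addr_a, addr_b, addr_out, length, k=4):
    """Add two matrices stored flat in a scratchpad, k elements per pass.

    Each pass mirrors one cycle of the width-k vector operator: it reads
    a length-k slice of A and of B, adds them element-wise, and writes
    the slice of the result back to the output address.
    """
    for base in range(0, length, k):
        n = min(k, length - base)               # the last pass may be partial
        va = scratch[addr_a + base : addr_a + base + n]
        vb = scratch[addr_b + base : addr_b + base + n]
        scratch[addr_out + base : addr_out + base + n] = [
            a + b for a, b in zip(va, vb)
        ]

# Two 2*3 matrices stored flat: A at address 0, B at address 6, output at 12.
scratch = [1, 2, 3, 4, 5, 6] + [10, 20, 30, 40, 50, 60] + [0] * 6
matrix_add(scratch, 0, 6, 12, 6, k=4)
print(scratch[12:18])  # [11, 22, 33, 44, 55, 66]
```

A 6-element matrix with k=4 takes two passes (4 elements, then 2), matching the description that a complete matrix addition may require several passes of the vector operator.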
When performing a matrix-plus/minus-scalar operation, the operation unit expands the scalar data read into the register into vector data as wide as the number of scalar operators, using it as one input of the addition/subtraction. The other input is obtained in the same way as in the matrix addition/subtraction described above: matrix data of a certain length is read from the scratchpad memory, and the addition or subtraction is performed between this matrix data and the vector obtained by expanding the scalar.
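A rough software analogue of this scalar-expansion step is shown below. The function names and the flat scratchpad layout are illustrative assumptions, not the hardware's actual interface; the point is only that the scalar is broadcast to the operator width and then fed through the same chunked add/subtract path as a second matrix would be.

```python
def expand_scalar(s, width):
    """Broadcast a scalar into a vector as wide as the operator bank."""
    return [s] * width

def matrix_op_scalar(scratch, addr_in, addr_out, length, s, k=4, sub=False):
    """Matrix +/- scalar: one operand is the expanded scalar vector,
    the other is a length-k slice of matrix data per pass."""
    for base in range(0, length, k):
        n = min(k, length - base)
        vec = expand_scalar(s, n)                # scalar widened to the pass size
        data = scratch[addr_in + base : addr_in + base + n]
        scratch[addr_out + base : addr_out + base + n] = [
            d - v if sub else d + v for d, v in zip(data, vec)
        ]

# Matrix-minus-scalar: subtract 2 from a 1*4 matrix stored at address 0.
scratch = [5, 6, 7, 8] + [0] * 4
matrix_op_scalar(scratch, 0, 4, 4, 2, sub=True)
print(scratch[4:8])  # [3, 4, 5, 6]
```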
According to one embodiment of the present disclosure, the matrix addition/subtraction operation apparatus further comprises an instruction cache unit for storing matrix operation instructions to be executed. During execution, an instruction is also cached in the instruction cache unit, and after an instruction finishes executing, it is committed.
According to one embodiment of the present disclosure, the control unit in the apparatus further comprises an instruction queue module for storing decoded matrix operation instructions in order and, after the scalar data required by a matrix operation instruction has been obtained, sending the matrix operation instruction together with the scalar data to the dependency processing module.
According to one embodiment of the present disclosure, the control unit in the apparatus further comprises a dependency processing unit for determining, before the matrix operation unit fetches an instruction, whether a dependency exists between that operation instruction and a previously unfinished operation instruction, for example whether they access the same matrix storage address. If so, the operation instruction is sent to the store queue module, and after the previous operation instruction has finished executing, the operation instruction in the store queue is provided to the matrix operation unit; otherwise, the operation instruction is provided to the matrix operation unit directly. Specifically, when matrix operation instructions need to access the scratchpad memory, successive instructions may access the same block of storage. To guarantee the correctness of the execution results, if the current instruction is detected to have a data dependency on a previous instruction, it must wait in the store queue until the dependency is eliminated.
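The address-based dependency check described above amounts to testing whether an instruction's input ranges overlap the output range of any in-flight instruction. The sketch below is an illustrative simplification (half-open address ranges and the dictionary layout are assumptions, not the hardware's representation):

```python
def ranges_overlap(start_a, len_a, start_b, len_b):
    """True if two half-open scratchpad address ranges intersect."""
    return start_a < start_b + len_b and start_b < start_a + len_a

def has_dependency(inst, in_flight):
    """An instruction depends on an earlier, unfinished instruction when one
    of its input ranges overlaps that instruction's output range."""
    return any(
        ranges_overlap(src, n, prev_out, prev_n)
        for src, n in inst["inputs"]
        for prev_out, prev_n in in_flight
    )

in_flight = [(0x100, 64)]                       # earlier write to 0x100..0x140
reader = {"inputs": [(0x120, 16)]}              # reads inside that range
print(has_dependency(reader, in_flight))        # True -> wait in store queue
print(has_dependency({"inputs": [(0x200, 16)]}, in_flight))  # False -> dispatch
```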
According to one embodiment of the present disclosure, the control unit in the apparatus further comprises a store queue module, which comprises an ordered queue in which instructions that have data dependencies on previous instructions are stored until the dependency is eliminated, after which the operation instruction is provided to the matrix operation unit.
According to one embodiment of the present disclosure, the apparatus further comprises an input/output unit for storing matrices in the storage unit or fetching operation results from the storage unit. The input/output unit can directly access the storage unit and is responsible for reading matrix data from memory into the storage unit and writing matrix data from the storage unit into memory.
According to one embodiment of the present disclosure, the instruction set used by the apparatus adopts a Load/Store architecture, so that the matrix operation unit does not operate on data in memory. The instruction set adopts a reduced-instruction-set design: it provides only the most basic matrix operations, and complex matrix operations are emulated by combining these simple instructions, which allows single-cycle instruction execution at a high clock frequency.
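As an illustration of how complex operations decompose into these basic instructions, the sketch below emulates computing (A + B) - s as a matrix-add (MA) instruction followed by a matrix-subtract-scalar (MSS) instruction. The tuple encoding and the tiny interpreter are hypothetical, introduced only to show the composition; they are not the patent's actual instruction encoding.

```python
def run(program, scratch, regs):
    """Tiny interpreter for a Load/Store-style subset: MA and MSS only.

    Each instruction is (opcode, in_addr, in2_addr_or_scalar_reg, out_addr, length).
    """
    for op, a, b, out, n in program:
        if op == "MA":        # matrix + matrix, element by element
            for i in range(n):
                scratch[out + i] = scratch[a + i] + scratch[b + i]
        elif op == "MSS":     # matrix - scalar, scalar taken from a register
            s = regs[b]
            for i in range(n):
                scratch[out + i] = scratch[a + i] - s

# (A + B) - s composed from two simple instructions.
scratch = [1, 2, 3] + [10, 20, 30] + [0] * 3    # A at 0, B at 3, output at 6
regs = {"r0": 5}
run([("MA", 0, 3, 6, 3), ("MSS", 6, "r0", 6, 3)], scratch, regs)
print(scratch[6:9])  # [6, 17, 28]
```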
When this apparatus performs a matrix operation, it fetches an instruction, decodes it, and sends it to the instruction queue for storage. According to the decoding result, the parameters of the instruction are obtained; these parameters may be written directly in the operand fields of the instruction, or read from designated registers according to the register numbers in the operand fields. The advantage of using registers to store parameters is that most loops can be implemented simply by using instructions to change the values in the registers, without changing the instructions themselves, which greatly reduces the number of instructions needed to solve certain practical problems. After all operands are obtained, the dependency processing unit determines whether the data actually needed by the instruction has a dependency on previous instructions, which decides whether this instruction can be dispatched immediately to the matrix operation unit for execution. Once a dependency on earlier data is found, the instruction must wait until the instruction it depends on has finished executing before it can be sent to the matrix operation unit. In the custom matrix operation unit, the instruction executes quickly, and the result, i.e., the generated result matrix, is written back to the address provided by the instruction, whereupon execution of the instruction is complete.
Fig. 3 is a schematic diagram of the format of the matrix addition/subtraction instruction provided by the present disclosure. As shown in Fig. 3, a matrix addition/subtraction instruction comprises an opcode and at least one operand field, wherein the opcode indicates the function of the matrix operation instruction (the matrix operation unit performs different matrix operations by identifying the opcode), and the operand field indicates the data information of the matrix operation instruction, where the data information may be an immediate value or a register number. For example, to obtain a matrix, the matrix start address and matrix length can be obtained from the corresponding register according to the register number, and the matrix stored at the corresponding address can then be obtained from the storage unit according to the matrix start address and matrix length.
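The immediate-versus-register-number distinction in the operand fields can be sketched as a small resolution step. This is an assumed illustration of the idea only; the `("reg", n)` tagging and the list-based register file are not the patent's encoding.

```python
def resolve_operands(fields, regfile):
    """Resolve each operand field: an immediate value is used directly,
    while a register-number field ('reg', n) is read from the register file."""
    resolved = []
    for f in fields:
        if isinstance(f, tuple) and f[0] == "reg":
            resolved.append(regfile[f[1]])      # indirect: read the register
        else:
            resolved.append(f)                  # immediate: use as-is
    return resolved

# r0 holds a matrix start address, r1 its length, r2 the output address.
regfile = [0x100, 6, 0x200]
fields = [("reg", 0), ("reg", 1), ("reg", 2)]
print(resolve_operands(fields, regfile))  # [256, 6, 512]
```

Because the fields can name registers, a loop only needs to rewrite the register values between iterations; the instruction itself never changes, which is the code-size advantage noted above.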
The matrix addition/subtraction instructions include the following:
Matrix-Add instruction (MA): according to this instruction, the apparatus fetches matrix data of a specified size from specified addresses in the scratchpad memory, performs the matrix addition in the matrix operation unit, and writes the computation result back to a specified address in the scratchpad memory. It is worth noting that a vector can be stored in the scratchpad memory as a special form of matrix (a matrix with only one row of elements).

Matrix-Subtract instruction (MS): according to this instruction, the apparatus fetches matrix data of a specified size from specified addresses in the scratchpad memory, performs the matrix subtraction in the matrix operation unit, and writes the computation result back to a specified address in the scratchpad memory. It is worth noting that a vector can be stored in the scratchpad memory as a special form of matrix (a matrix with only one row of elements).

Matrix-Add-Scalar instruction (MAS): according to this instruction, the apparatus fetches matrix data of a specified size from a specified address in the scratchpad memory, fetches scalar data from a specified address in the scalar register file, performs the matrix-plus-scalar operation in the matrix operation unit, and writes the computation result back to a specified address in the scratchpad memory. It should be noted that the scalar register file stores not only matrix addresses but also scalar data.

Matrix-Subtract-Scalar instruction (MSS): according to this instruction, the apparatus fetches matrix data of a specified size from a specified address in the scratchpad memory, fetches scalar data from a specified address in the scalar register file, performs the matrix-minus-scalar operation in the matrix operation unit, and writes the computation result back to a specified address in the scratchpad memory. It should be noted that the scalar register file stores not only matrix addresses but also scalar data.
Fig. 4 is a schematic structural diagram of a matrix operation apparatus provided by an embodiment of the present disclosure. As shown in Fig. 4, the apparatus comprises an instruction fetch module, a decoding module, an instruction queue module, a scalar register file, a dependency processing unit, a store queue module, a matrix operation unit, a scratchpad memory, and an IO memory access module:

the instruction fetch module is responsible for fetching the next instruction to be executed from the instruction sequence and passing it to the decoding module;

the decoding module is responsible for decoding the instruction and passing the decoded instruction to the instruction queue;

the instruction queue temporarily stores the decoded matrix operation instruction and obtains, from the matrix operation instruction or from scalar registers, the scalar data related to the operation of the matrix operation instruction; after the scalar data is obtained, the matrix operation instruction is sent to the dependency processing unit;

the scalar register file provides the scalar registers needed by the apparatus during operation; it comprises a plurality of scalar registers for storing scalar data related to matrix operation instructions;

the dependency processing unit handles possible storage dependencies between an instruction and the preceding instruction. A matrix operation instruction accesses the scratchpad memory, and successive instructions may access the same block of storage. That is, this unit detects whether the storage range of the input data of the current instruction overlaps with the storage range of the output data of a previously unfinished instruction; if so, the instruction logically needs to use the computation result of the preceding instruction, and it must therefore wait until the instruction it depends on has finished executing before it can start. During this time, the instruction is actually buffered in the store queue below. To guarantee the correctness of the execution results, if the current instruction is detected to have a data dependency on a previous instruction, it must wait in the store queue until the dependency is eliminated;

the store queue module is an ordered queue in which instructions that have data dependencies on previous instructions are stored until the dependency is eliminated;

the matrix operation unit is responsible for performing the matrix addition and subtraction operations;

the scratchpad memory is a temporary storage device dedicated to matrix data, capable of supporting matrix data of different sizes, and is mainly used for storing input matrix data and output matrix data; and

the IO memory access module is used to directly access the scratchpad memory and is responsible for reading data from or writing data to the scratchpad memory.
Fig. 5 is a flowchart of the operation apparatus executing a matrix addition instruction provided by an embodiment of the present disclosure. As shown in Fig. 5, the process of executing a matrix addition instruction comprises:

S1, the instruction fetch module fetches the matrix addition instruction and sends it to the decoding module.

S2, the decoding module decodes the matrix addition instruction and sends it to the instruction queue.

S3, in the instruction queue, the matrix addition instruction obtains, from the matrix addition instruction itself or from the scalar register file, the scalar data corresponding to the four operand fields of the instruction, including the input matrix addresses, the input matrix lengths, and the output matrix address.

S4, after the required scalar data is obtained, the instruction is sent to the dependency processing unit. The dependency processing unit analyzes whether the instruction has a data dependency on previous instructions that have not yet finished executing. If a dependency exists, the instruction must wait in the store queue until it no longer has a data dependency on the previous unfinished instructions.

S5, once no dependency exists, the matrix addition instruction is sent to the matrix operation unit.

S6, the matrix operation unit fetches the input matrix data from the scratchpad memory according to the addresses and lengths of the input matrices, reading in corresponding data of a certain bit width from the two input matrices at a time, performs the addition on the aligned columns of data in the matrix adder/subtractor, and repeats this until the addition of the entire matrix is completed in the matrix operation unit.

S7, after the operation is completed, the operation result is written back to the specified address in the scratchpad memory.
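The steps S1-S7 above can be summarized as a single pass through the pipeline stages. The sketch below is a deliberately simplified software model: the stage names, the operand-field order (A address, B address, output address, length), and the assumption that no in-flight writer overlaps the inputs (so S4 never stalls) are all illustrative choices, not the hardware's behavior.

```python
def execute_ma(inst, scratch, regs):
    """Walk one matrix-addition (MA) instruction through stages S1-S7."""
    trace = ["S1 fetch", "S2 decode"]
    # S3: resolve the four operand fields from the scalar register file:
    # address of A, address of B, output address, and matrix length.
    a, b, out, n = (regs[r] for r in inst["fields"])
    trace.append("S3 operands")
    # S4/S5: this sketch assumes no unfinished writer overlaps the inputs,
    # so the instruction skips the store queue and dispatches directly.
    trace.append("S5 dispatch")
    # S6: element-by-element addition passes; S7: results land at `out`.
    for i in range(n):
        scratch[out + i] = scratch[a + i] + scratch[b + i]
    trace.append("S7 writeback")
    return trace

scratch = [1, 2, 3, 4, 5, 6, 0, 0, 0]           # A at 0, B at 3, output at 6
regs = {"r0": 0, "r1": 3, "r2": 6, "r3": 3}
trace = execute_ma({"fields": ["r0", "r1", "r2", "r3"]}, scratch, regs)
print(scratch[6:9])  # [5, 7, 9]
```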
Fig. 6 is a flowchart of the operation apparatus executing a matrix-minus-scalar instruction provided by an embodiment of the present disclosure. As shown in Fig. 6, the process of executing a matrix-minus-scalar instruction comprises:

S1', the instruction fetch module fetches the matrix-minus-scalar instruction and sends it to the decoding module.

S2', the decoding module decodes the matrix-minus-scalar instruction and sends it to the instruction queue.

S3', in the instruction queue, the matrix-minus-scalar instruction obtains, from the instruction itself or from the scalar register file, the scalar data corresponding to the four operand fields of the instruction, including the input matrix address, the input matrix length, the input scalar, and the output matrix address.

S4', after the required scalar data is obtained, the instruction is sent to the dependency processing unit. The dependency processing unit analyzes whether the instruction has a data dependency on previous instructions that have not yet finished executing. If a dependency exists, the instruction must wait in the store queue until it no longer has a data dependency on the previous unfinished instructions.

S5', once no dependency exists, the matrix-minus-scalar instruction is sent to the matrix operation unit.

S6', the matrix operation unit reads in part of the matrix data at a time, and in the matrix-plus/minus-scalar component subtracts the scalar data stored in the register from a column of data simultaneously, repeating this until the entire matrix-minus-scalar operation is completed.

S7', after the operation is completed, the operation result is written back to the specified address in the scratchpad memory.
In summary, the present disclosure provides a matrix operation apparatus which, together with the corresponding instructions, can effectively address the problem that more and more algorithms in the current computing field involve a large number of matrix addition and subtraction operations. Compared with existing conventional solutions, the present disclosure can offer advantages such as a compact instruction set, ease of use, flexibility in the supported matrix sizes, and sufficient on-chip caching. The present disclosure can be used for a variety of computational tasks involving a large number of matrix addition and subtraction operations.
The specific embodiments described above further illustrate the objectives, technical solutions, and beneficial effects of the present disclosure in detail. It should be understood that the above are merely specific embodiments of the present disclosure and are not intended to limit it; any modification, equivalent substitution, improvement, and the like made within the spirit and principles of the present disclosure shall fall within its scope of protection.

Claims (14)

  1. An apparatus for performing matrix addition and subtraction operations, comprising:
    a storage unit for storing matrix data related to matrix operation instructions;
    a register unit for storing scalar data related to matrix operation instructions;
    a control unit for decoding matrix operation instructions and controlling the operation process of the matrix operation instructions; and
    a matrix operation unit for performing matrix addition and subtraction operations on input matrices according to the decoded matrix operation instructions;
    wherein the matrix operation unit is a custom hardware circuit.
  2. The apparatus of claim 1, wherein the scalar data stored in the register unit includes the input matrix addresses, input matrix lengths, and output matrix addresses related to the matrix operation instructions, as well as the scalar data used in matrix-plus/minus-scalar operations.
  3. The apparatus of claim 1, wherein the control unit comprises:
    an instruction queue module for storing decoded matrix operation instructions in order and obtaining the scalar data related to the matrix operation instructions.
  4. The apparatus of claim 1, wherein the control unit comprises:
    a dependency processing unit for determining, before the matrix operation unit fetches the current matrix operation instruction, whether a dependency exists between the current matrix operation instruction and a previously unfinished matrix operation instruction.
  5. The apparatus of claim 1, wherein the control unit comprises:
    a store queue module for temporarily storing the current matrix operation instruction when a dependency exists between it and a previously unfinished operation instruction, and sending the buffered matrix operation instruction to the matrix operation unit when the dependency is eliminated.
  6. The apparatus of any one of claims 1-5, wherein the apparatus further comprises:
    an instruction cache unit for storing matrix operation instructions to be executed; and
    an input/output unit for storing vector data related to matrix operation instructions in the storage unit, or fetching the operation results of matrix operation instructions from the storage unit.
  7. The apparatus of claim 1, wherein a matrix operation instruction comprises an opcode and operand fields;
    the opcode indicates that a matrix operation is to be performed; and
    the operand fields comprise an immediate value and/or a register number indicating the scalar data related to the matrix operation, wherein the register number points to an address in the register unit.
  8. The apparatus of any one of claims 1-5 and 7, wherein the storage unit is a scratchpad memory.
  9. The apparatus of any one of claims 1-5 and 7, wherein the matrix operation unit comprises a plurality of parallel scalar adders/subtractors, wherein:
    when performing a matrix addition or subtraction, for two input matrices of a specified size, the operation unit reads in matrix data of a certain length at a time, this length being equal to the number of scalar adders/subtractors; the corresponding data are added or subtracted in the corresponding scalar operators, each pass computing part of the matrix data, until the addition or subtraction of the entire matrix is completed; and
    when performing a matrix-plus/minus-scalar operation, the apparatus first reads the scalar data directly from the instruction or fetches it from the scalar register file according to the register number provided by the instruction and sends it to the matrix operation unit; the matrix operation unit expands the scalar into vector data as wide as the number of scalar operators, which serves as one input of the scalar adders/subtractors, the other input being matrix data of a certain length read from the storage unit, and the addition or subtraction is performed between this matrix data and the vector data obtained by expanding the scalar.
  10. An apparatus for performing matrix addition and subtraction operations, comprising:
    an instruction fetch module for fetching the next matrix operation instruction to be executed from an instruction sequence and passing the matrix operation instruction to a decoding module;
    a decoding module for decoding the matrix operation instruction and passing the decoded matrix operation instruction to an instruction queue module;
    an instruction queue module for temporarily storing the decoded matrix operation instruction and obtaining, from the matrix operation instruction or from scalar registers, the scalar data related to the operation of the matrix operation instruction, and, after the scalar data is obtained, sending the matrix operation instruction to a dependency processing unit;
    a scalar register file comprising a plurality of scalar registers for storing scalar data related to matrix operation instructions;
    a dependency processing unit for determining whether a dependency exists between the matrix operation instruction and a previously unfinished operation instruction, sending the matrix operation instruction to a store queue module if a dependency exists, and sending it to a matrix operation unit if no dependency exists;
    a store queue module for storing a matrix operation instruction that has a dependency on a previous operation instruction and, after the dependency is resolved, sending the matrix operation instruction to the matrix operation unit;
    a matrix operation unit for performing matrix addition and subtraction operations on input matrices according to the received matrix operation instruction;
    a scratchpad memory for storing input matrices and output matrices; and
    an input/output access module for directly accessing the scratchpad memory, responsible for reading output matrices from and writing input matrices to the scratchpad memory.
  11. The apparatus of claim 10, wherein the matrix operation unit is a custom hardware circuit.
  12. The apparatus of claim 10, wherein the matrix operation unit comprises a plurality of parallel scalar adders/subtractors, wherein:
    when performing a matrix addition or subtraction, for two input matrices of a specified size, the operation unit reads in matrix data of a certain length at a time, this length being equal to the number of scalar adders/subtractors; the corresponding data are added or subtracted in the corresponding scalar operators, each pass computing part of the matrix data, until the addition or subtraction of the entire matrix is completed; and
    when performing a matrix-plus/minus-scalar operation, the apparatus first reads the scalar data directly from the instruction or fetches it from the scalar register file according to the register number provided by the matrix operation instruction and sends it to the operation unit; the matrix operation unit expands the scalar into vector data as wide as the number of scalar operators, which serves as one input of the scalar adders/subtractors, the other input being matrix data of a certain length read from the storage unit, and the addition or subtraction is performed between this matrix data and the vector data obtained by expanding the scalar.
  13. A method for performing a matrix addition operation, the method comprising:
    S1, an instruction fetch module fetches a matrix addition instruction and sends the instruction to a decoding module;
    S2, the decoding module decodes the matrix addition instruction and sends the matrix addition instruction to an instruction queue;
    S3, in the instruction queue, the matrix addition instruction obtains, from the matrix addition instruction itself or from a scalar register file, the scalar data corresponding to the four operand fields of the instruction, including the input matrix addresses, the input matrix lengths, and the output matrix address;
    S4, after the required scalar data is obtained, the instruction is sent to a dependency processing unit; the dependency processing unit analyzes whether the instruction has a data dependency on previous instructions that have not yet finished executing; if a dependency exists, the instruction waits in a store queue until it no longer has a data dependency on the previous unfinished instructions;
    S5, once no dependency exists, the matrix addition instruction is sent to a matrix operation unit;
    S6, the matrix operation unit fetches the input matrix data from a scratchpad memory according to the addresses and lengths of the input matrices, reading in corresponding data of a certain bit width from the two input matrices at a time, performs the addition on the aligned columns of data in a matrix adder/subtractor, and repeats this until the addition of the entire matrix is completed in the matrix operation unit; and
    S7, after the operation is completed, the operation result is written back to the specified address in the scratchpad memory.
  14. A method for performing a matrix-minus-scalar operation, the method comprising:
    S1', an instruction fetch module fetches the matrix-minus-scalar instruction and sends the instruction to a decoding module;
    S2', the decoding module decodes the matrix-minus-scalar instruction and sends the instruction to an instruction queue;
    S3', in the instruction queue, the matrix-minus-scalar instruction obtains, from the instruction itself or from a scalar register file, the scalar data corresponding to the four operand fields of the instruction, including the input matrix address, the input matrix length, the input scalar, and the output matrix address;
    S4', after the required scalar data is obtained, the instruction is sent to a dependency processing unit; the dependency processing unit analyzes whether the instruction has a data dependency on previous instructions that have not yet finished executing; if a dependency exists, the instruction waits in a store queue until it no longer has a data dependency on the previous unfinished instructions;
    S5', once no dependency exists, the matrix-minus-scalar instruction is sent to the matrix operation unit;
    S6', the matrix operation unit reads in part of the matrix data at a time, and in a matrix-plus/minus-scalar component subtracts the scalar data stored in the register from a column of data simultaneously, repeating this until the entire matrix-minus-scalar operation is completed; and
    S7', after the operation is completed, the operation result is written back to the specified address in the scratchpad memory.
PCT/CN2016/081117 2016-04-26 2016-05-05 Apparatus and method for performing matrix addition/subtraction operations WO2017185396A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
EP21173815.8A EP3910503A1 (en) 2016-04-26 2016-05-05 Device and method for executing matrix addition/subtraction operation
EP16899907.6A EP3451163A4 (en) 2016-04-26 2016-05-05 DEVICE AND METHOD FOR USE IN THE MAKING OF MATRIX ADDITION / SUBTRACTION OPERATIONS
US16/171,926 US10860681B2 (en) 2016-04-26 2018-10-26 Apparatus and methods for matrix addition and subtraction
US16/171,681 US20190065436A1 (en) 2016-04-26 2018-10-26 Apparatus and methods for matrix addition and subtraction
US16/250,123 US10891353B2 (en) 2016-04-26 2019-01-17 Apparatus and methods for matrix addition and subtraction

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610266805.X 2016-04-26
CN201610266805.XA CN107315715B (zh) 2016-04-26 2016-04-26 一种用于执行矩阵加/减运算的装置和方法

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/171,681 Continuation US20190065436A1 (en) 2016-04-26 2018-10-26 Apparatus and methods for matrix addition and subtraction

Related Child Applications (3)

Application Number Title Priority Date Filing Date
US16/171,681 Continuation-In-Part US20190065436A1 (en) 2016-04-26 2018-10-26 Apparatus and methods for matrix addition and subtraction
US16/171,926 Continuation-In-Part US10860681B2 (en) 2016-04-26 2018-10-26 Apparatus and methods for matrix addition and subtraction
US16/250,123 Continuation-In-Part US10891353B2 (en) 2016-04-26 2019-01-17 Apparatus and methods for matrix addition and subtraction

Publications (1)

Publication Number Publication Date
WO2017185396A1 true WO2017185396A1 (zh) 2017-11-02

Family

ID=60160565

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/081117 WO2017185396A1 (zh) 2016-04-26 2016-05-05 一种用于执行矩阵加/减运算的装置和方法

Country Status (4)

Country Link
US (3) US20190065436A1 (zh)
EP (2) EP3451163A4 (zh)
CN (3) CN111857820B (zh)
WO (1) WO2017185396A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108388674A (zh) * 2018-03-26 2018-08-10 百度在线网络技术(北京)有限公司 Method and apparatus for pushing information
US10860681B2 (en) 2016-04-26 2020-12-08 Cambricon Technologies Corporation Limited Apparatus and methods for matrix addition and subtraction

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109754062B (zh) * 2017-11-07 2024-05-14 上海寒武纪信息科技有限公司 Execution method of convolution extension instructions and related products
CN108037908B (zh) * 2017-12-15 2021-02-09 中科寒武纪科技股份有限公司 Computing method and related products
CN108121688B (zh) * 2017-12-15 2020-06-23 中科寒武纪科技股份有限公司 Computing method and related products
CN108108189B (zh) * 2017-12-15 2020-10-30 安徽寒武纪信息科技有限公司 Computing method and related products
US11809869B2 (en) * 2017-12-29 2023-11-07 Intel Corporation Systems and methods to store a tile register pair to memory
US11816483B2 (en) * 2017-12-29 2023-11-14 Intel Corporation Systems, methods, and apparatuses for matrix operations
CN108197705A (zh) * 2017-12-29 2018-06-22 国民技术股份有限公司 Convolutional neural network hardware acceleration apparatus, convolution computation method, and storage medium
US11789729B2 (en) 2017-12-29 2023-10-17 Intel Corporation Systems and methods for computing dot products of nibbles in two tile operands
CN111353595A (zh) * 2018-12-20 2020-06-30 上海寒武纪信息科技有限公司 Operation method, apparatus, and related products
CN111078285B (zh) * 2018-10-19 2021-01-26 中科寒武纪科技股份有限公司 Operation method, system, and related products
US11082098B2 (en) 2019-05-11 2021-08-03 Marvell Asia Pte, Ltd. Methods and apparatus for providing an adaptive beamforming antenna for OFDM-based communication systems
US10997116B2 (en) * 2019-08-06 2021-05-04 Microsoft Technology Licensing, Llc Tensor-based hardware accelerator including a scalar-processing unit
CN111158756B (zh) * 2019-12-31 2021-06-29 百度在线网络技术(北京)有限公司 Method and apparatus for processing information
CN111242293B (zh) * 2020-01-13 2023-07-18 腾讯科技(深圳)有限公司 Processing component, data processing method, and electronic device
CN113254078B (zh) * 2021-06-23 2024-04-12 北京中科通量科技有限公司 Dataflow processing method for efficiently executing matrix addition on a GPDPU simulator
CN116685964A (zh) * 2021-12-31 2023-09-01 华为技术有限公司 Processing method for operation acceleration, method of using an operation accelerator, and operation accelerator

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1289212A * 2000-10-27 2001-03-28 清华大学 Hierarchical programmable parallel video signal processor architecture for motion estimation algorithms
US20020198911A1 (en) * 2001-06-06 2002-12-26 Blomgren James S. Rearranging data between vector and matrix forms in a SIMD matrix processor
CN1842779A * 2003-09-08 2006-10-04 飞思卡尔半导体公司 Data processing system and method for performing SIMD operations
CN101122896A * 2006-08-11 2008-02-13 展讯通信(上海)有限公司 Efficient ASIC data processing method based on matrix operations
CN101957743A * 2010-10-12 2011-01-26 中国电子科技集团公司第三十八研究所 Parallel digital signal processor

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4888679A (en) * 1988-01-11 1989-12-19 Digital Equipment Corporation Method and apparatus using a cache and main memory for both vector processing and scalar processing by prefetching cache blocks including vector data elements
KR100331565B1 (ko) * 1999-12-17 2002-04-06 윤종용 Matrix operation apparatus and digital signal processing apparatus having a matrix operation function
JP4042841B2 (ja) * 2002-03-29 2008-02-06 富士通株式会社 Matrix operation processing device
CN100545804C (zh) * 2003-08-18 2009-09-30 上海海尔集成电路有限公司 CISC-based microcontroller and method for implementing its instruction set
GB2411975B (en) * 2003-12-09 2006-10-04 Advanced Risc Mach Ltd Data processing apparatus and method for performing arithmetic operations in SIMD data processing
US7433981B1 (en) * 2005-09-30 2008-10-07 Nvidia Corporation System and method for using co-processor hardware to accelerate highly repetitive actions
CN100424654C (zh) * 2005-11-25 2008-10-08 杭州中天微系统有限公司 Matrix data access method and matrix data storage device therefor
GB2464292A (en) * 2008-10-08 2010-04-14 Advanced Risc Mach Ltd SIMD processor circuit for performing iterative SIMD multiply-accumulate operations
US8984043B2 (en) * 2009-12-23 2015-03-17 Intel Corporation Multiplying and adding matrices
US9600281B2 (en) * 2010-07-12 2017-03-21 International Business Machines Corporation Matrix multiplication operations using pair-wise load and splat operations
CN101980182A (zh) * 2010-10-15 2011-02-23 清华大学 Parallel computing method based on matrix operations
CN102541814B (zh) * 2010-12-27 2015-10-14 北京国睿中数科技股份有限公司 Matrix computation apparatus and method for a data communication processor
US9411585B2 (en) * 2011-09-16 2016-08-09 International Business Machines Corporation Multi-addressable register files and format conversions associated therewith
CN102360344B (zh) * 2011-10-10 2014-03-12 西安交通大学 Matrix processor, its instruction set, and embedded system
WO2013095653A1 (en) * 2011-12-23 2013-06-27 Intel Corporation Systems, apparatuses, and methods for performing a conversion of a writemask register to a list of index values in a vector register
JP5840994B2 (ja) * 2012-03-27 2016-01-06 富士通株式会社 Matrix operation device
US9652245B2 (en) * 2012-07-16 2017-05-16 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Branch prediction for indirect jumps by hashing current and previous branch instruction addresses
US10235180B2 (en) * 2012-12-21 2019-03-19 Intel Corporation Scheduler implementing dependency matrix having restricted entries
JP6225687B2 (ja) * 2013-02-18 2017-11-08 富士通株式会社 Data processing device and data processing method
US9389854B2 (en) * 2013-03-15 2016-07-12 Qualcomm Incorporated Add-compare-select instruction
JP6094356B2 (ja) * 2013-04-22 2017-03-15 富士通株式会社 Arithmetic processing device
CN103336758B (zh) * 2013-06-29 2016-06-01 中国科学院软件研究所 Sparse matrix storage method using compressed sparse rows with local information and SpMV implementation method based thereon
CN103678257B (zh) * 2013-12-20 2016-09-28 上海交通大学 FPGA-based floating-point inverter for positive-definite matrices and its inversion method
CN103970720B (zh) * 2014-05-30 2018-02-02 东南大学 Large-scale coarse-grained embedded reconfigurable system and processing method thereof
CN105302522B (zh) * 2014-06-26 2019-07-26 英特尔公司 Instructions and logic to provide general-purpose GF(256) SIMD cryptographic arithmetic functionality
US10061746B2 (en) * 2014-09-26 2018-08-28 Intel Corporation Instruction and logic for a vector format for processing computations
CN111857820B (zh) 2016-04-26 2024-05-07 中科寒武纪科技股份有限公司 Apparatus and method for performing matrix addition/subtraction operations

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1289212A * 2000-10-27 2001-03-28 清华大学 Hierarchical programmable parallel video signal processor architecture for motion estimation algorithms
US20020198911A1 (en) * 2001-06-06 2002-12-26 Blomgren James S. Rearranging data between vector and matrix forms in a SIMD matrix processor
CN1842779A * 2003-09-08 2006-10-04 飞思卡尔半导体公司 Data processing system and method for performing SIMD operations
CN101122896A * 2006-08-11 2008-02-13 展讯通信(上海)有限公司 Efficient ASIC data processing method based on matrix operations
CN101957743A * 2010-10-12 2011-01-26 中国电子科技集团公司第三十八研究所 Parallel digital signal processor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3451163A4 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10860681B2 (en) 2016-04-26 2020-12-08 Cambricon Technologies Corporation Limited Apparatus and methods for matrix addition and subtraction
US10891353B2 (en) 2016-04-26 2021-01-12 Cambricon Technologies Corporation Limited Apparatus and methods for matrix addition and subtraction
CN108388674A (zh) * 2018-03-26 2018-08-10 百度在线网络技术(北京)有限公司 Method and apparatus for pushing information
CN108388674B (zh) * 2018-03-26 2021-11-26 百度在线网络技术(北京)有限公司 Method and apparatus for pushing information

Also Published As

Publication number Publication date
CN111857820B (zh) 2024-05-07
CN107315715A (zh) 2017-11-03
US10860681B2 (en) 2020-12-08
US20190065437A1 (en) 2019-02-28
CN107315715B (zh) 2020-11-03
CN111857819A (zh) 2020-10-30
CN111857819B (zh) 2024-05-03
US10891353B2 (en) 2021-01-12
EP3910503A1 (en) 2021-11-17
US20190147015A1 (en) 2019-05-16
CN111857820A (zh) 2020-10-30
EP3451163A1 (en) 2019-03-06
EP3451163A4 (en) 2019-11-20
US20190065436A1 (en) 2019-02-28

Similar Documents

Publication Publication Date Title
WO2017185396A1 Apparatus and method for performing matrix addition/subtraction operations
CN109240746B Apparatus and method for performing matrix multiplication operations
WO2017185393A1 Apparatus and method for performing vector inner-product operations
CN107315717B Apparatus and method for performing vector arithmetic (addition, subtraction, multiplication, division)
CN111651201B Apparatus and method for performing vector merge operations
WO2017185384A1 Apparatus and method for performing vector circular-shift operations
CN107315716B Apparatus and method for performing vector outer-product operations
WO2017185404A1 Apparatus and method for performing vector logic operations
WO2017185395A1 Apparatus and method for performing vector comparison operations
WO2017185419A1 Apparatus and method for performing vector maximum/minimum operations
WO2017185388A1 Apparatus and method for generating random vectors obeying a given distribution
KR102467544B1 Computing device and method of operating the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16899907

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2016899907

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2016899907

Country of ref document: EP

Effective date: 20181126