WO2021217502A1 - A computing architecture - Google Patents

A computing architecture

Info

Publication number
WO2021217502A1
WO2021217502A1 (PCT/CN2020/087814)
Authority
WO
WIPO (PCT)
Prior art keywords
data
block
calculation
dependent
blocks
Prior art date
Application number
PCT/CN2020/087814
Other languages
English (en)
French (fr)
Inventor
夏天
任鹏举
赵浩然
李泽华
赵文哲
郑南宁
Original Assignee
西安交通大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 西安交通大学 (Xi'an Jiaotong University)
Publication of WO2021217502A1
Priority to US17/864,014 (published as US11886347B2)

Classifications

    • G06F 12/0207 - Addressing or allocation; relocation with multidimensional access, e.g. row/column, matrix
    • G06F 12/0862 - Addressing of a memory level in which access to the desired data or data block requires associative addressing means, e.g. caches, with prefetch
    • G06F 15/7867 - Architectures of general purpose stored program computers comprising a single central processing unit with reconfigurable architecture
    • G06F 12/0813 - Multiuser, multiprocessor or multiprocessing cache systems with a network or matrix configuration
    • G06F 12/0879 - Cache access modes: burst mode
    • G06F 15/781 - System on chip: on-chip cache; off-chip memory
    • G06F 9/3555 - Indexed addressing using scaling, e.g. multiplication of index
    • G06F 9/3838 - Dependency mechanisms, e.g. register scoreboarding
    • G06F 9/3887 - Concurrent instruction execution using a plurality of independent parallel functional units controlled by a single instruction for multiple data lanes [SIMD]
    • G06F 12/0804 - Caches with main memory updating
    • G06F 2212/1021 - Providing a specific technical effect: hit rate improvement
    • G06F 2212/1024 - Providing a specific technical effect: latency reduction
    • G06F 2212/1048 - Providing a specific technical effect: scalability
    • G06F 2212/454 - Caching of specific data in cache memory: vector or matrix data

Definitions

  • the present disclosure belongs to the technical field of large-scale data processing, and particularly relates to a computing architecture.
  • for large-scale problems, the computing data cannot be completely stored in an on-chip cache (such as a multi-level Cache), so data must be transferred between on-chip storage and off-chip storage (such as DDR memory).
  • for example, for a 4096 x 4096 single-precision floating-point matrix, the data volume is 64 MB, far more than what the on-chip storage can afford.
  • the characteristics of data access in solving equations and matrix operation problems are: 1) poor data locality, 2) irregular data access patterns, and 3) online random reorganization of data structures.
  • the present disclosure provides a computing architecture, including: an off-chip memory, an on-chip cache unit, a transmitting unit, a pre-reorganization network, a post-reorganization network, a main computing array, a data-dependent controller and a global scheduler; wherein,
  • the off-chip memory is used to store all large-scale data in a block format, wherein the large-scale data is divided into multiple blocks of equal size;
  • the on-chip cache unit is used to store part of the data of the block to be calculated and the dependent data required for calculation;
  • the transmitting unit is used to read the data of the corresponding block from the on-chip cache unit in the order specified by the scheduling algorithm and send it to the pre-reorganization network;
  • the main computing array is used to complete the main calculation on the block data;
  • the pre-reorganization network is used to perform arbitrary data reorganization on the data of the block before calculating the data of the block;
  • the post-reorganization network is used to perform arbitrary data reorganization on the data of the block after the data of the block is calculated;
  • the data dependence controller is used to process the data dependence relationship between the data of the block
  • the global scheduler is used to execute a preset scheduling algorithm and to control block-data prefetching, transmission, calculation, data reorganization, and data-dependency processing. The above technical solution changes the data storage mode and calculation strategy of matrix operations to improve the locality of memory accesses, dynamically completes data reorganization through added multi-functional data paths, reduces the impact of irregular data structures and data rearrangement on computing efficiency, maximizes the utilization of the on-chip cache and the computing units, and improves calculation speed.
  • the computing architecture can improve data utilization and increase data processing flexibility, thereby reducing Cache Miss and reducing memory bandwidth pressure.
  • the beneficial effects brought by this technical solution are embodied in the following three aspects:
  • the large-scale matrix is divided into multiple tiles, and the tiles are used as the smallest granularity data for matrix operations.
  • the data of each block is continuously stored in the memory, so the utilization of the cache can be effectively improved.
  • multiple reuse of blocks can be realized, thereby further improving the utilization of the cache and reducing the performance bottleneck caused by the memory bandwidth.
  • multiple blocks are allowed to complete flexible data reorganization and exchange in the data path, so that the data structure can be reorganized according to computing requirements, satisfying to the greatest extent the computing requirements of the computing array and the format requirements of the storage unit.
  • the block data can be arranged for the deployment of the computing array, so as to maximize the efficiency of the computing array.
  • any global row and column exchange in the matrix can be completed efficiently, and this operation is completed while the data is in transit, consuming no additional storage space or delay, thus effectively improving the efficiency of random row and column exchanges in the matrix.
  • any global matrix reorganization can be completed through a limited number of data reorganizations within and between blocks. This greatly improves the scalability and adaptability of the computing system to irregular matrix operations.
  • High reuse rate is the key to improving computing performance.
  • the locality of data is usually weak, because global data dependencies generally exist between iterations; it is therefore difficult to reuse local data repeatedly across iterations, which directly causes on-chip/off-chip data transfer to become the key bottleneck.
  • This technical solution analyzes the dependency relationship between each block in different iterations, and realizes the maximum reuse rate that conforms to the dependency relationship by means of block grouping, and ensures that the matrix operation after block division has good data locality.
  • FIG. 1 is a schematic structural diagram of a computing architecture provided in an embodiment of the present disclosure
  • FIG. 2(a) to FIG. 2(c) show the block division and block grouping of the original matrix, and the distribution of each block's data in off-chip storage, in an embodiment of the present disclosure;
  • FIG. 3 is a diagram of changes produced by multiple blocks after passing through a pre-reorganization network in an embodiment of the present disclosure
  • FIG. 4 is a diagram of operand input and result output of the main calculation array in an embodiment of the present disclosure
  • Figures 5(a) to 5(d) are diagrams illustrating examples of data dependence in an embodiment of the present disclosure
  • FIG. 6 is a dependency relationship diagram between block groups in an embodiment of the present disclosure.
  • FIG. 7 is a schematic structural diagram of another computing architecture provided in an embodiment of the present disclosure.
  • Fig. 8 is a schematic flow chart of the overall calculation process of a block in an embodiment of the present disclosure.
  • Figure 9 is a schematic diagram of producer-consumer block groups divided according to block dependency in an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of the work flow of a data-dependent controller in an embodiment of the present disclosure.
  • Figure 11 is a schematic diagram of the BENES data exchange network structure in an embodiment of the present disclosure.
  • FIG. 12 is an example diagram of a work flow of a data reorganization network module in an embodiment of the present disclosure
  • FIG. 13 is a schematic diagram of matrix global data reorganization in an embodiment of the present disclosure.
  • FIG. 14 is a schematic diagram of the block dependency relationship in the GJE-based matrix inversion calculation in an embodiment of the present disclosure
  • Fig. 15 is a complete calculation flow chart of matrix inversion in an embodiment of the present disclosure.
  • FIG. 16 is a comparison diagram of the speedup ratio of matrix inversion operation of this architecture compared with other computing platforms in an embodiment of the present disclosure
  • FIG. 17 is a comparison diagram of the calculation speedup ratio of solving linear equations of the present architecture compared with other computing platforms in an embodiment of the present disclosure.
  • a computing architecture including: an off-chip memory, an on-chip cache unit, a transmitting unit, a pre-reorganization network, a post-reorganization network, a main computing array, a data-dependent controller and a global scheduler; wherein,
  • the off-chip memory is used to store all large-scale data in a block format, wherein the large-scale data is divided into multiple blocks of equal size;
  • the on-chip cache unit is used to store part of the data of the block to be calculated and the dependent data required for calculation;
  • the transmitting unit is used to read the data of the corresponding block from the on-chip cache unit in the order specified by the scheduling algorithm and send it to the pre-reorganization network;
  • the main computing array is used to complete the main calculation on the block data;
  • the pre-reorganization network is used to perform arbitrary data reorganization on the data of the block before calculating the data of the block;
  • the post-reorganization network is used to perform arbitrary data reorganization on the data of the block after the data of the block is calculated;
  • the data dependence controller is used to process the data dependence relationship between the data of the block
  • the global scheduler is used to execute a preset scheduling algorithm and to control block-data prefetching, transmission, calculation, data reorganization, and data-dependency processing. The above technical solution changes the data storage mode and calculation strategy of matrix operations to improve the locality of memory accesses, dynamically completes data reorganization through added multi-functional data paths, reduces the impact of irregular data structures and data rearrangement on computing efficiency, maximizes the utilization of the on-chip cache and the computing units, and improves calculation speed.
  • the off-chip memory is used to store all large-scale data in a block format.
  • the off-chip storage device is a large-capacity storage device, such as DDR, which is characterized by slower access speed and larger storage capacity.
  • all large-scale matrix data are stored in off-chip storage.
  • the large-scale matrix is divided into multiple equal-sized tiles in advance and stored in the off-chip memory.
  • Block is the smallest granularity data of matrix operation, and it is also the smallest unit of transmission, operation and control.
  • Each block is a partial M*N sub-matrix of the original data, and the element data inside each block is continuously stored in the memory.
  • the data of different blocks is usually stored continuously in a block group, that is, a group of blocks composed of multiple blocks are stored in a continuous storage address space.
  • if the original matrix cannot be divided evenly, its edge is extended (zero-padded) to fit the M*N sub-block division.
  • Figures 2(a) to 2(c) show the block division and block grouping of the original matrix, and the distribution of each block's data in off-chip storage. In the examples of Fig. 2(a) to Fig. 2(c), each block is a sub-matrix of size 3*2, and the original matrix is divided according to this 3*2 size. If the size of the original matrix is not an integer multiple of M*N, zeros are added at the edge (as shown in Figure 2(b)). It can be seen that the elements within each block are stored contiguously in memory, and different blocks are stored contiguously by block group. In addition, vectors that need to be operated on together with the matrix are also stored in M*N blocks and managed uniformly with the matrix blocks, as shown in Figure 2(c).
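  • As an illustration of this layout, the following is a minimal numpy sketch (not the patent's implementation; function and variable names are ours) of zero padding, M*N tiling, and block-major contiguous storage:

```python
import numpy as np

def tile_matrix(a: np.ndarray, m: int, n: int) -> np.ndarray:
    """Return a (num_blocks, m*n) array; row i holds block i contiguously."""
    rows = -(-a.shape[0] // m) * m              # round rows up to a multiple of m
    cols = -(-a.shape[1] // n) * n              # round cols up to a multiple of n
    padded = np.zeros((rows, cols), dtype=a.dtype)
    padded[:a.shape[0], :a.shape[1]] = a        # zero-pad the edges (Fig. 2(b))
    blocks = [padded[i*m:(i+1)*m, j*n:(j+1)*n].ravel()   # one M*N tile ...
              for i in range(rows // m)                  # ... per grid position,
              for j in range(cols // n)]                 # flattened row-major
    return np.stack(blocks)                     # block-major contiguous storage

a = np.arange(1, 26, dtype=np.float32).reshape(5, 5)
tiles = tile_matrix(a, 3, 2)                    # 3*2 blocks as in Fig. 2
print(tiles.shape)                              # (6, 6): a 2x3 grid of 6-element blocks
```

  Each row of `tiles` is one block stored at consecutive addresses, and consecutive rows form a contiguous block group.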
  • the present disclosure is designed for large-scale matrix operations, it can handle matrices of any size when computing resources and storage resources are sufficient.
  • the values of the block dimensions M and N should match the scale of the computing array. Given the scale of current mainstream computing architectures and storage devices, reasonable values of M and N lie between 4 and 32, while the dimension of the matrix to be processed can be between 4 and 50000.
  • the block refers to a sub-matrix at a specific position in the matrix, and the block is a concept relative to the matrix.
  • a matrix is divided into multiple blocks, that is, the range of the sub-matrix area corresponding to each block is determined.
  • the data of a block refers to all the elements in the sub-matrix area contained in a block. Therefore, the entity involved in the calculation is the block data instead of the block. After the block data is calculated, the value of this part of the data may be changed. Therefore, in the matrix calculation, the block data is constantly updated.
  • the block (as the range of a sub-matrix) is constant.
  • the on-chip cache unit is an embedded on-chip storage device that provides faster read and write access speed, but has a lower storage capacity.
  • the on-chip cache is used to store part of the blocks to be calculated and the dependent data required for calculation. Among them, some of the blocks to be calculated refer to the complete data of several blocks. If the on-chip cache unit is large enough, all the blocks of the original matrix can be stored. If the on-chip cache unit is not large enough, the blocks stored therein are only part of the multiple blocks divided by the matrix to be calculated.
  • the block is read from the off-chip storage unit to the on-chip cache unit and the calculation is completed, and then written back to the off-chip storage unit.
  • the data that the calculation depends on refers to the information and values other than the block element itself that the block in the on-chip storage unit needs when performing the calculation. There is a detailed explanation about the dependent data later.
  • the transmitting unit is used to read the data of the corresponding block from the on-chip cache unit and send it to the pre-reorganization network according to the order specified by the global scheduler module.
  • the transmitting unit can read multiple blocks of data from the on-chip cache unit at a time, usually 2-4.
  • the transmitting unit is also used to add a corresponding tag bit to each block when it is transmitted. These tag bits follow the block data packet to flow through all subsequent processing procedures. With the help of the tag bit, the transmitting unit can accurately control the behavior of the transmitted block in the entire calculation process. There is a detailed explanation about the tag bit in the following text.
  • the pre-reorganization network is a non-blocking data exchange network with a data width of k*N*N. This network is used to process the k blocks sent by the transmitting unit, and is responsible for the processing of the blocks before these blocks enter the main computing array.
  • before entering the main computing array, the data undergoes data reorganization; reorganization can occur within a single block or between multiple blocks, and its form can be any row exchange, column exchange, data rearrangement in arbitrary order, data multicast, etc.
  • Figure 3 illustrates several types of changes that occur after multiple blocks pass through the pre-reorganization network. As shown in Figure 3, the network input is a collection of elements from one or more blocks, expanded into a one-dimensional vector and sent to the pre-reorganization network.
  • the output of the pre-reorganization network is also a one-dimensional vector of the same length as the input, and this vector contains the elements of each output block.
  • Data reorganization can be completed between the various elements within the block, and the elements of multiple blocks can be exchanged and rearranged.
  • the operations that the network can perform on the input data are not limited to the examples listed in Figure 3.
  • the pre-reorganization network can be realized by selecting different data exchange networks according to specific reorganization requirements.
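  • Viewed from the outside, the reorganization network behaves like a permutation applied to the flattened block elements. The sketch below (our illustration, independent of the particular switching network chosen) models that behavior: k blocks are concatenated into a k*N*N vector, one control word's permutation is applied, and the result is split back into k blocks; intra-block row/column swaps and inter-block exchanges are all special cases:

```python
import numpy as np

def reorganize(blocks: np.ndarray, perm: np.ndarray) -> np.ndarray:
    """blocks: (k, N, N); perm: a permutation of range(k*N*N)."""
    k, n, _ = blocks.shape
    flat = blocks.reshape(-1)            # concatenate k blocks into one 1-D vector
    return flat[perm].reshape(k, n, n)   # permute, then re-form k blocks

# Example "control word": swap the two blocks and swap rows 0 and 1 in each.
k, n = 2, 4
idx = np.arange(k * n * n).reshape(k, n, n)
perm = idx[::-1, [1, 0, 2, 3], :].reshape(-1)
out = reorganize(np.arange(k * n * n, dtype=np.float32).reshape(k, n, n), perm)
```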
  • the BENES network is used as the pre-reorganization network; its structure is introduced in detail below.
  • the main calculation array is used to complete the calculation of the data of the main block and generate the calculation result.
  • the main computing array includes a parallel calculation array, which can perform calculations on the input block data in parallel.
  • the operand of the calculation array also includes the dependent data required for the calculation.
  • the dependent data will be described in detail later.
  • after the main computing array performs operations on the input block, it uses the calculation results to update the values of the corresponding elements in the block; for some algorithms, it also generates other calculation results. Therefore, the final output of the main computing array includes the updated block data.
  • the example of Fig. 4 shows the operand input and result output of the main computing array; note that Fig. 4 shows only one possible scale and calculation mode of the main computing array.
  • the post-reorganization network is used to perform arbitrary data reorganization on the calculation results generated by the main calculation array after the data calculation of the block, that is, the updated block data; its reorganization function is similar to the pre-reorganization network.
  • the data dependence controller is used to process the data dependence relationship between the data of the block.
  • the data dependence relationship is generated by the calculations and operations required by the block. In many cases, the calculations required by the block cannot be done solely by the elements of the block itself, but other information and values are needed. These extra elements besides the block itself are the dependent data for the calculation of the block.
  • the dependent data can be the values of all the elements of other blocks, the values of some elements, or the intermediate values calculated from other block elements.
  • the existence of dependent data means that there is a dependency relationship between different blocks.
  • the dependency is divided into direct dependency and indirect dependency. If a certain operation requires all elements of multiple blocks to participate simultaneously, then these blocks are directly dependent on each other, because they must all directly participate in the operation.
  • the dependent data of a certain block is part of the elements of one or several other blocks, or the intermediate calculation result derived from these blocks, then this dependence is an indirect dependence.
  • the block that generates the dependent data is the "producer block”
  • the block that uses the dependent data is the "consumer block”.
  • Figures 5(a) to 5(d) list several examples of data dependence: Figure 5(a) is the addition of block A and block B, where block A and block B form a direct dependency; Figure 5(b) shows that block A and block B need to exchange arbitrary rows, so block A and block B form a direct dependency; Figure 5(c) shows that each row of block A needs to subtract a row of elements of block B, so block A and block B form an indirect dependency, where B is the "producer block" and A is the "consumer block"; Figure 5(d) multiplies block C by a row of elements obtained after adding blocks A and B, so block A and blocks B/C form an indirect dependency, blocks B/C are "producer blocks", and A is a "consumer block".
  • the block groups can be further defined, as well as the dependencies between multiple block groups.
  • a block group refers to a collection of multiple blocks. There may be a dependency relationship between multiple blocks in the same group. This kind of dependency data between different blocks in the same group is called “local dependency data”. In addition, some blocks in one block group may form a dependency relationship with some blocks in another block group. This kind of cross-block group dependency data is called “global dependency data”. The block group that generates the “global dependency data” is called the “producer block group”, and the block group that uses the “global dependency data” is called the “consumer block group”. This constitutes a dependency relationship between block groups.
  • Figure 6 shows an example.
  • blocks A, B, C, and D are divided into block group 1, and E, F, and G are divided into block group 2.
  • A is the producer block
  • B, C, and D are consumer blocks
  • the dependency data between them is the local dependency data of block group 1.
  • block E generates local dependency data in block group 2.
  • the A block also generates the dependent data required in the block group 2. Since the data crosses the block group, it is the global dependent data. Since the global dependency data is generated by block group 1, a dependency relationship is formed between block group 2 and block group 1.
  • block group 1 is the "producer block group”
  • block group 2 is the "consumer block group”.
  • the global scheduler is the core control module of this architecture. It is used to execute preset scheduling algorithms and control block-data prefetching, transmission, calculation, data reorganization, and data-dependency processing. Specifically, the global scheduler instructs the transmitting module to read and transmit the blocks in the on-chip cache according to a certain scheduling sequence, and to set different tag bits for different blocks according to its instructions.
  • the tag bits of each block indicate the processing and operations required in each subsequent module, such as the pre-reorganization network, the main computing array, the post-reorganization network, and the data-dependent controller.
  • the global scheduler determines the transmission sequence of the blocks and the operations that the blocks need to complete based on the dependencies between each block and between each block group.
  • the scheduling principle is that the producer block precedes the consumer block, and the producer block group precedes the consumer block group.
  • a possible scheduling sequence is: A->B->C->D->E->F->G.
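  • A minimal sketch of this producer-before-consumer ordering for the Figure 6 example (our illustration, using Python's standard graphlib; the real scheduler additionally interleaves prefetch and block reuse):

```python
from graphlib import TopologicalSorter

# Block-level dependencies from Figure 6: each entry maps a consumer block
# to the producer blocks whose dependent data it needs.
deps = {"B": {"A"}, "C": {"A"}, "D": {"A"},   # local dependencies in group 1
        "E": {"A"},                            # global dependency: group 1 -> group 2
        "F": {"E"}, "G": {"E"}}                # local dependencies in group 2
order = list(TopologicalSorter(deps).static_order())
print(order)   # one valid issue order, e.g. ['A', 'B', 'C', 'D', 'E', 'F', 'G']
```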
  • the global scheduler can be implemented in many forms, including state machines, dynamic lookup tables, MCU processors, and so on.
  • the global scheduler is also responsible for notifying the prefetch module in advance to carry out the block transfer between the off-chip storage unit and the on-chip storage unit according to the processing sequence of the blocks.
  • the global scheduler is responsible for block prefetching, calculation, data reorganization, and dependency processing according to a preset scheduling algorithm.
  • the global scheduler reads data blocks into the on-chip cache by prefetching, and performs calculations in units of blocks.
  • the transmitting module is responsible for reading the corresponding data block from the on-chip cache and sending it to the subsequent processing flow according to the order specified by the global scheduler.
  • the module reads and sends k blocks at a time (k>1); these k blocks pass through the entire arithmetic processing flow in parallel.
  • a block switching network is used to reorganize the data structure.
  • the pre-reorganization network and the post-reorganization network are both non-blocking BENES data exchange networks with a data width of k*N*N. These two networks can perform arbitrary data reorganization on the k blocks before and after calculation.
  • the main calculation array is a set of parallel fixed-point/floating-point arithmetic units, and the operation type is common fixed-point/floating-point.
  • the main computing array is pipelined; k*N*N elements can be input per cycle, completing addition (add), multiplication (multiply) or multiply-accumulate (mac) operations.
  • the data dependency module is responsible for handling the data dependency relationships that may exist between different blocks.
  • the data dependent module manages dependent data, and it can call an auxiliary calculation array to perform calculations dependent on the data.
  • the auxiliary calculation array is a set of parallel fixed-point/floating-point arithmetic units, and its array size and operation type depend on the specific matrix algorithm.
  • the utilization rate of the on-chip cache is very high.
  • the dependency-based block grouping and scheduling algorithm used in this embodiment, together with the management module for dependent data, can minimize the coupling between blocks, increase the reuse rate of blocks, and reduce the access pressure on off-chip storage devices, greatly easing the performance bottleneck caused by memory-access latency and thereby providing high-performance, low-latency matrix calculation.
  • the disclosed computing architecture further includes:
  • the prefetch unit is used to complete the transfer of block data between off-chip storage and on-chip cache;
  • the write-back cache unit is used to write the data of the block back to the on-chip cache unit after the data of the block is calculated;
  • the auxiliary calculation array is used to assist the data dependent controller in the extraction, preprocessing and calculation of dependent data.
  • the prefetch unit is used to complete the transfer of block data between the off-chip storage and the on-chip cache according to the order specified by the global scheduler module.
  • This module performs simple data transfer between two storage devices.
  • the address and length of the transferred data are specified by the global scheduler module.
  • the existing data handling technology can be used to realize the function of this module.
  • the data of the block is continuously stored in the memory.
  • the data of each block is continuously stored in the memory, so the utilization rate of the cache can be effectively improved.
  • the elements of each part of each block are always stored in a continuous address.
  • the data of different blocks is usually stored contiguously in block groups, that is, a block group composed of multiple blocks occupies a continuous storage address space; there can be multiple block groups.
  • the transmitting unit is also used to add corresponding tag bits to each block when transmitting.
  • these tag bits follow the block data packet to flow through all subsequent processing procedures.
  • the transmitting unit can accurately control the behavior of the transmitted block in the entire calculation process.
  • the processing flow of a block is shown in Figure 8. As can be seen from Fig. 8, the block carries different types of tag bits when it is transmitted; these tag bits indicate the processing mode of the block in different modules, and are discarded after the corresponding operation is completed.
  • the tag bits indicate the calculation tasks that the block needs to perform, data dependency information, and block data reorganization information.
  • Table 2 is only a case of tag bit setting, and the specific tag bit content and setting method need to be determined according to the actual calculation task.
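  • As a concrete illustration of what such a tag might carry (the fields and names below are hypothetical; Table 2's actual layout is task-specific), a sketch:

```python
from dataclasses import dataclass

@dataclass
class BlockTag:
    """Hypothetical per-block tag; travels with the block through all modules."""
    block_id: int      # which block is being transmitted
    op: str            # task for the main computing array, e.g. "mac"
    pre_cw_id: int     # control-word ID for the pre-reorganization network
    post_cw_id: int    # control-word ID for the post-reorganization network
    reads_dep: bool    # calculation needs previously stored dependent data
    writes_dep: bool   # block produces dependent data that must be saved
```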
  • the data dependence relationship includes direct dependence and indirect dependence;
  • the direct dependence means that multiple blocks are required to directly participate in the calculation, and the obtained calculation result is directly used to update the block, or as intermediate dependent data;
  • the indirect dependence means that the calculation of a certain block needs to be completed with the help of data of other blocks.
  • the block scheduling algorithm aims to analyze the dependency relationship between different blocks and optimize the reuse efficiency of the blocks. Specifically, the scheduling sequence and scheduling strategy of each block depends on the dependencies between the blocks.
  • Indirect dependence means that the calculation of a certain block needs to be completed by the data information of other blocks.
  • the block used is called the leading block, and the data information used is called the dependent data.
  • the dependent data is used as the intermediate data of the calculation, which can be stored in the on-chip cache and read during the calculation of the relevant block.
  • Direct dependence refers to the need for multiple blocks to directly participate in the calculation, and the obtained calculation result is directly used to update the block, or as intermediate dependent data.
  • the various blocks involved constitute a direct dependency on each other. For example, for data exchange between multiple blocks, these blocks will form a direct dependency. For another example, when searching for the maximum value of a certain column of the matrix, the block to which this column belongs will form a direct dependency.
  • the producer block group will generate two types of dependent data during the calculation process: one type is "local dependent data", which is only used for the calculation of the blocks in the group and is not shared with other block groups.
  • the other type is "global dependency data”. This type of data is not only used for the calculation of blocks in this group, but also needs to be provided to the corresponding "consumer block group" for use.
  • global dependent data at a lower level may serve as local dependent data at an upper level.
  • the producer block and the consumer block can be effectively decoupled.
  • the iterative process of matrix calculation it is no longer necessary to repeatedly load the producer block and consumer block multiple times, which can greatly improve the reuse rate of the block cache on the chip.
  • the producer block can continuously complete multiple calculation iterations on the chip and store the corresponding global cache data. Consumer blocks that are subsequently loaded can also complete multiple iterations continuously on the chip.
  • the block scheduling algorithm is based on the following principles: (1) starting from the underlying "producer-consumer" dependency, the blocks in the producer block group are selected and transmitted first; (2) all blocks with direct dependencies are transmitted consecutively; (3) blocks already in the on-chip cache are repeatedly transmitted and calculated until the dependence conditions are no longer satisfied; (4) the block groups needed next are predicted and prefetched into the on-chip cache in advance (a sketch of this scheduling loop follows).
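  • A high-level sketch of these four principles (our illustration; the predicate and callback names are hypothetical placeholders for the algorithm-specific dependency analysis):

```python
def schedule(groups, is_producer, deps_satisfied, prefetch, issue):
    """groups: iterable of block groups, each an iterable of blocks."""
    ordered = sorted(groups, key=lambda g: not is_producer(g))  # (1) producers first
    for i, group in enumerate(ordered):
        if i + 1 < len(ordered):
            prefetch(ordered[i + 1])      # (4) prefetch the next group in advance
        while deps_satisfied(group):      # (3) reuse cached blocks while possible
            for block in group:           # (2) directly dependent blocks are
                issue(block)              #     transmitted back to back
```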
  • the global scheduler is set as a state machine, used to control block prefetching, transmission, and calculation at each moment, and determines the data-dependent operations that need to be performed. These behaviors are completed through the control interface between the global scheduler and the prefetch module, the transmission module, and the data-dependent controller module.
  • the data-dependent controller is also used to: 1) determine whether the current block contains dependent data on which subsequent blocks depend, and if so, extract, calculate and save the dependent data, where the calculation of dependent data relies on the auxiliary calculation array; 2) determine whether the operation of the current block depends on previously stored block data, and if so, read the relevant dependent data and provide it to the main computing array for calculating the current block.
  • the specific functions of the data-dependent controller are as follows: (1) Manage the storage, reading and clearing of all global dependent data and local dependent data. (2) For each block currently transmitted, if its calculation requires dependent data, the data dependent controller reads the corresponding dependent data from the on-chip cache and sends it to the main computing array. (3) For each block currently transmitted, if the block needs to generate dependent data, the data dependence controller is responsible for caching the corresponding block data and extracting the required dependent data. The extraction of dependent data can be done with the aid of an auxiliary calculation array.
  • the workflow of the data dependent controller is shown in Figure 10.
  • after receiving the tag bits carried by the transmitted block, the data-dependent controller first judges: (1) whether the block corresponding to the tag needs dependent data to complete its calculation; (2) whether the block will generate dependent data that needs to be saved. Note that the two cases may exist at the same time; the data-dependent controller therefore implements two sets of parallel logic that handle data reading and data storage respectively. For the former, the controller calculates the read address of the dependent data, reads it from the on-chip cache, and sends it to the main computing array for calculation. For the latter, the controller further determines whether the dependent data can be obtained directly from the current block data, for example, the value of a certain row/column or a certain element in the block.
  • the controller will call the auxiliary calculation array to complete the corresponding calculations and save the calculation results to the on-chip cache.
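  • A sketch of those two parallel paths (our illustration, reusing the hypothetical BlockTag above; `extract` and `needs_compute` are hypothetical callbacks standing in for the algorithm-specific dependent-data logic):

```python
def handle_block(tag, block, dep_cache, aux_array, main_array,
                 extract, needs_compute):
    # Path 1: the block's calculation consumes stored dependent data.
    if tag.reads_dep:
        dep = dep_cache.read(tag.block_id)    # read from the on-chip cache
        main_array.feed(block, dep)           # extra operand for the main array
    # Path 2: the block produces dependent data that must be saved.
    if tag.writes_dep:
        dep = extract(block)                  # e.g. a row/column or one element
        if needs_compute(dep):                # not directly available, so derive
            dep = aux_array.compute(dep)      # it on the auxiliary array
        dep_cache.write(tag.block_id, dep)    # persist for later consumer blocks
```

  Both paths can be active for the same block, matching the note above that the two operations may exist at the same time.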
  • the dependent data includes local dependent data and global dependent data;
  • the local dependent data refers to intermediate data generated by a certain block group and used only in the calculation of the block group;
  • the global dependent data refers to intermediate data that is generated by a certain block group and needs to be used in the calculation of this block group and other block groups.
  • this type of data does not need to be shared with other block groups. Therefore, the local dependent data is only saved in the calculation phase of the corresponding block group, and is discarded after the calculation is completed.
  • Global dependent data refers to the intermediate data generated by a certain block group and need to be used in the calculation of this block group and other block groups (ie, the corresponding "consumer block group"). This type of data needs to be stored in the on-chip cache for a long time, and the global dependent data can not be discarded until all related dependent blocks have been calculated.
  • the data dependent controller cooperates with the global scheduler to manage the above two types of dependent data. Specifically, the global scheduler determines the data dependence relationship between the blocks, and indicates the data dependence operation that the block needs to complete through the tag when the corresponding block is transmitted. After the data-dependent controller receives the flag bit carried by the block, it completes the operation on the dependent data according to the instruction of the flag bit. An example of the flow of this process can be seen in Figure 10.
  • the pre-recombination network and the post-recombination network are data exchange networks.
  • the network can be a BENES network or other networks with data exchange functions, such as a Batcher-Banyan network.
  • a pre-data reorganization network and a post-data reorganization network, which are respectively deployed before and after the main computing array.
  • These two networks are responsible for completing complex data reorganization tasks within each block or between multiple blocks, including row exchange, column exchange, transposition, and other necessary data rearrangements.
  • the data reorganization network adopts the BENES network with k*N*N input.
  • the schematic diagram of the BENES network is shown in Figure 11.
  • the BENES network consists of several levels of switching units, each of which can complete the direct connection or exchange of two input signals.
  • by applying control signals to the BENES network, arbitrary data rearrangement from the input ports to the output ports can be realized; these control signals are called "control words".
  • an N-input BENES network can be used as two independent N/2-input BENES networks.
  • an 8-input BENES can be used as two independent 4-input BENES networks.
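  • As a size sanity check (derived from the standard BENES construction, not stated in the text): an N-input BENES network has

$$\mathrm{stages}(N) = 2\log_2 N - 1, \qquad \mathrm{switches}(N) = \frac{N}{2}\left(2\log_2 N - 1\right),$$

so the 128-input network of the embodiment below (k = 2 blocks of 8*8 elements, k*N*N = 128) has 2 log2(128) - 1 = 13 stages of 64 switching units each, 832 two-input switches in total.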
  • the k*N*N-input BENES network can not only complete arbitrary data reorganization among the k blocks, but can also complete data reorganization for just one or a few of the blocks.
  • control words are stored in the on-chip ROM, and can be read by the pre-data reorganization network and the post-data reorganization network.
  • the tag bits of the block respectively record the control word ID corresponding to the pre-rearrangement and post-rearrangement operations required by the block.
  • the data reorganization of a block can be completed only within a single block, or it can be completed between multiple blocks that are transmitted in parallel (up to k). For complex data reorganization that requires multiple blocks to complete together, the involved blocks need to be cached in the write-back cache module first, and then the post-data reorganization network processes them in a specified order.
  • Figure 12 shows an example.
  • the blocks that need to exchange data form a direct dependence on each other.
  • the blocks (9, 10, 13, 14) need to be exchanged at the same time, so they constitute a direct dependency of the four blocks.
  • (1, 2) and (5, 6) need to complete column exchange, (11, 12) and (15, 16) need to complete row exchange, and these blocks all constitute a direct dependency relationship.
  • the global scheduler sets the transmission sequence as shown in FIG. 13 according to its dependency relationship.
  • the blocks launched at the same time complete row/column exchange in the data reorganization network.
  • arbitrary data reorganization includes: row swap, column swap, transpose, and data rearrangement.
  • the on-chip cache unit is partitioned into block data, local dependent data, and global dependent data.
  • the size of the partition is preset according to resource constraints and algorithm requirements during system design.
  • the data-dependent controller manages all read and write operations on locally dependent data and global dependent data.
  • the computing architecture can efficiently complete the matrix inversion and linear equation solving algorithms based on Gauss-Jordan Elimination (hereinafter referred to as the GJE algorithm).
  • the GJE algorithm is a classic algorithm in linear algebra and one of the algorithms often used in scientific computing.
  • the GJE algorithm is selected by many parallel computing systems as the basic algorithm for computing linear equations, matrix inversion, and LU decomposition due to its good computing parallelism and relatively simple computing operations.
  • the purpose of the GJE algorithm is to transform any square matrix into an identity matrix through a series of iterative elementary row transformations. For a matrix A of size N*N, the GJE algorithm requires a total of N iterations; at the i-th iteration, GJE converts the i-th column of matrix A into the corresponding column of the identity matrix. For the i-th iteration, the process is as follows:
  • pivot row exchange: swap the position of the pivot row (that is, the k-th row) with the i-th row of the A matrix; the pivot row thereby becomes the i-th row of the A matrix.
  • A and an identity matrix I of the same size can be combined into an augmented matrix [A | I]; as A is eliminated into the identity matrix, I is transformed into the inverse matrix A^-1.
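  • For reference, a minimal numpy sketch of the GJE inversion just described, operating element-wise on the augmented matrix [A | I] with partial pivoting (an illustration only; the architecture performs the same row operations blockwise, with pivot exchanges handled by the reorganization networks and the elimination coefficients by the data-dependent controller):

```python
import numpy as np

def gje_inverse(a: np.ndarray) -> np.ndarray:
    n = a.shape[0]
    aug = np.hstack([a.astype(np.float64), np.eye(n)])  # augmented matrix [A | I]
    for i in range(n):
        k = i + int(np.argmax(np.abs(aug[i:, i])))      # pivot: max |.| in column i
        aug[[i, k]] = aug[[k, i]]                       # pivot-row exchange
        aug[i] /= aug[i, i]                             # normalize the pivot row
        coeff = aug[:, i].copy()                        # per-row elimination
        coeff[i] = 0.0                                  #   coefficients (skip pivot)
        aug -= np.outer(coeff, aug[i])                  # eliminate column i
    return aug[:, n:]                                   # A became I; right half is A^-1

a = np.random.rand(8, 8) + 8 * np.eye(8)                # well-conditioned test matrix
assert np.allclose(gje_inverse(a) @ a, np.eye(8))
```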
  • the matrix is divided into 8 ⁇ 8 blocks.
  • Each row of blocks serves as a block group.
  • the following types of dependent data are involved in the calculation process: pivot row elements, pivot elements, and pivot column elements.
  • the pivot column elements are used to calculate the elimination coefficient of each row of the matrix.
  • all the blocks in each block column are divided into one block group.
  • the dependency relationship between blocks can be obtained, as shown on the right side in FIG. 14.
  • the direct dependency and two indirect dependencies are identified: local data dependency and global data dependency.
  • the local data dependence inside each block group is on the pivot row elements from this column's block group.
  • each block group needs to use the elimination coefficient calculated by the pivot element and pivot column to complete the elimination operation. Therefore, the block group where the pivot element is located assumes the role of the "producer" block group, and the elimination coefficient generated in the calculation is saved as global dependent data and used by other block groups.
  • the global scheduler will follow the scheduling principle to determine the transmission order of each block. That is, the producer block takes precedence over the consumer block, and blocks with direct dependencies are continuously launched. Therefore, the final transmission sequence of the blocks in Figure 14 is: (11,12)->(3,7)->(12,16)->(4,8)->(9,10,13,14) ->(1,2)->(5,6).
  • the producer block group ⁇ 3, 7, 11, 15> can continuously complete the elimination iterations of columns 9-12, and then other block groups can continuously complete multiple elimination iterations.
  • the number of multiplexing for each block is 4.
  • This block group can complete 8 consecutive elimination iterations.
  • the multiplexing factor of each block rises to 8.
  • the optimal block multiplexing times can be set according to factors such as on-chip computing power and off-chip main memory bandwidth, and then the size of the block group can be set.
  • the access time to the off-chip main memory can be completely covered within the on-chip calculation time, and theoretically, it can reach close to 100% of the computing array utilization.
  • the main calculation process is: block transmission -> elimination -> data reorganization -> write-back cache.
  • the block transmission module can transmit up to two blocks in each cycle. According to the scheduling strategy, the same block group can be transmitted multiple times, thereby realizing the multiplexing of block calculations.
  • the main control process includes data dependence control and global scheduling control.
  • dependent-data control mainly concerns the pivot row data and the elimination coefficients corresponding to the pivot column.
  • the pivot row data is local dependent data, which is extracted and saved at the beginning of each block group's calculation and discarded after the block group's calculation ends.
  • the elimination coefficient is globally dependent data and needs to be stored in the cache for a long time.
  • the calculation of the elimination coefficient depends on the value of the pivot element column and the value of the pivot element, and needs to be pre-calculated in the iterative process. That is, during the iteration of the elimination of the kth column, the pivot element and elimination coefficient of the k+1th column are pre-calculated.
  • the data-dependent controller needs to determine whether the block contains the pivot column corresponding to the next iteration (that is, the (k+1)-th column, marked as the next pivot column in the figure). If it does, the next pivot column is cached and its largest element is selected as the pivot element. After that, the data-dependent controller calls the auxiliary calculation array to calculate the elimination coefficients for the next iteration. Finally, the elimination coefficients are stored in the cache as global dependent data. It should be noted that this dependent-data extraction and calculation process runs in parallel with the main calculation process and does not block it.
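  • A small sketch of this look-ahead step (our illustration): given the cached next pivot column, select the pivot and derive the elimination coefficients so that they are ready, as global dependent data, before iteration k+1 starts:

```python
import numpy as np

def precompute_next(next_col: np.ndarray) -> tuple[int, np.ndarray]:
    """next_col: the cached (k+1)-th column restricted to pivot-eligible rows."""
    p = int(np.argmax(np.abs(next_col)))   # pivot = element with largest magnitude
    coeff = next_col / next_col[p]         # per-row elimination coefficients
    return p, coeff                        # cached as global dependent data
```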
  • the workflow of Figure 15 also describes the workflow of the global scheduler.
  • the global scheduler is responsible for generating the transmission order of the blocks and the prefetching order. As described above, in this embodiment, the blocks in each column are divided into a block group.
  • the scheduling strategy of the global controller mainly includes the following factors:
  • (1) and (2) only depend on the matrix size and system resource constraints, and are set offline.
  • (3) and (4) are generated by online dynamic calculation.
  • (3) and (4) both depend on the process of local principal element selection, that is, the row exchange situation of the A matrix. Therefore, the global scheduler needs to obtain the row exchange information of the A matrix in time, and determine the column exchange order of the inverse matrix A -1 to be completed subsequently based on this information.
  • the global scheduler will integrate row switching and column switching requirements to generate block transmission and prefetch sequences. This process can be seen in the flowchart in Figure 15.
  • the performance test in this embodiment is completed by simulation.
  • the simulation experiment is based on RTL code, IP simulation model of DDR/SRAM, and IP model of floating point arithmetic unit.
  • the system parameters of this embodiment are as follows: operating frequency: 800 MHz; block size: 8x8; main computing array: 128 x 32-bit FP MAC units; auxiliary computing array: 8 x 32-bit FP division units; on-chip cache size: 776 KB; BENES network: 128 x 32-bit inputs.
  • the working frequency is obtained by synthesizing the RTL code, together with synthesizable IP simulation models of the DDR/SRAM and the IP model of the floating-point arithmetic unit, using the Synopsys Design Compiler (DC) tool, and can therefore be regarded as a practical working frequency.
  • the test set is a matrix of random floating-point numbers of different sizes.
  • the matrix inversion and linear equation system solving operations are respectively performed on the test set matrix, and the operation delay is recorded.
  • the control group of the test is the current mainstream and commonly used high-performance large-scale matrix computing libraries: MKL, LAPACK and CUBLAS.
  • MKL version 3.8.0
  • LAPACK version 3.8.0
  • CUBLAS version 10.1
  • the parameters of the different platforms in this experiment are shown in Table 3.
  • the test set evaluates performance over matrix sizes ranging from 32 to 2048.
  • the size of Y also affects the overall performance, so we tested the impact of different Y sizes on performance.
  • the size of Y is N*8, N*32 and N*64 respectively.
  • Table 4 lists the delay (unit: second) for completing the matrix inversion operation on different platforms on the matrix of each size
  • Figure 16 lists the speedup ratio of this computing architecture compared to other control groups.
  • the ordinate in Figure 16 is "the acceleration multiple of this computing architecture compared to other platforms". In other words, the ordinate is the ratio of the calculation time of other platforms to the calculation time of this computing architecture.
  • the calculation times of MKL, LAPACK and CUBLAS are respectively 47.8 times, 128 times and 69 times the calculation time of this computing architecture.
  • Table 5 lists the delays (unit: seconds) for completing the linear equation solving operation on different platforms on matrices of various sizes.
  • Figure 17 lists the speedup ratio of the present invention compared to other control groups.
  • this embodiment is significantly better than other computing platforms on matrices of various scales, and still has a high speedup ratio in the calculation of large-scale matrices.
  • MKL is currently the best high-performance scientific computing library.
  • This computing architecture can stably obtain twice the speedup compared to MKL in large-scale matrix operations.
  • the resource consumption of this embodiment is much lower than that of other computing platforms.
  • the on-chip cache of this embodiment is only 1/30 of Intel CPU, and the DDR bandwidth is also much lower than other platforms. This comparison further illustrates that this architecture can achieve high-efficiency use of on-chip cache resources, thereby achieving performance far superior to traditional computing methods with fewer resources.
  • Any matrix computation can be deployed onto this computing architecture by analyzing the dependencies between its blocks and designing a scheduling strategy accordingly. Note that different matrix algorithms may require quite different dependency data computations and block computations, so the corresponding computing modules and pipelines need to be customized for each matrix algorithm; however, the overall structure and computation flow, the scheduling strategy algorithm, and the functions of each module of this architecture do not change.
  • This architecture's support for large-scale matrices depends on the amount of on-chip storage resources and the scale of the computing array. In actual deployment, suitable storage resources and computing arrays can be customized according to the actual algorithm and matrix size.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A computing architecture, comprising: an off-chip memory, an on-chip cache unit, a prefetch unit, a global scheduler, an issue unit, a pre-reorganization network, a post-reorganization network, a main computing array, a write-back cache unit, a data dependency controller, and an auxiliary computing array. The architecture reads data blocks into the on-chip cache by prefetching and performs computation at the granularity of data blocks; during block computation, a block exchange network is used to reorganize the data structure, and a data dependency module is provided to handle the data dependencies that may exist between different blocks. The computing architecture can improve data utilization and data processing flexibility, thereby reducing cache misses and memory bandwidth pressure.

Description

A Computing Architecture

Technical Field

The present disclosure belongs to the technical field of large-scale data processing, and in particular relates to a computing architecture.
Background Art

Solving large-scale systems of linear equations and matrix operations are among the most critical operations in modern scientific and engineering computing. At present, such operations mainly rely on high-performance linear algebra libraries, such as CUBLAS on GPU platforms, and the Linear Algebra Package (LAPACK) and the Intel Math Kernel Library (MKL) on CPU platforms. These libraries generally adopt LU-decomposition-based algorithms for matrix inversion and equation solving, and implement them in the Single Instruction Multiple Data (SIMD) style with highly parallel arithmetic units, in order to maximize the parallelism of data processing. However, for large-scale problems, the operational data cannot be fully stored in the on-chip cache (e.g., multi-level caches), so data must be moved between on-chip storage and off-chip storage (e.g., DDR memory). For example, a 4096 x 4096 single-precision floating-point matrix amounts to 64 MB of data, far more than on-chip storage can afford. Meanwhile, data access in equation solving and matrix operations is characterized by: 1) poor data locality; 2) irregular data access patterns; and 3) the need for on-line random reorganization of data structures. When the data scale is large, these characteristics place enormous pressure on traditional high-performance computing libraries such as CUBLAS and MKL. Specifically, when handling large-scale equation solving and matrix operations, these libraries inevitably suffer from frequent cache misses and low computational efficiency. The extremely low cache utilization and the limited memory bandwidth then become the main bottlenecks that severely constrain overall computing performance.
Summary of the Invention

To solve the above problems, the present disclosure provides a computing architecture, comprising: an off-chip memory, an on-chip cache unit, an issue unit, a pre-reorganization network, a post-reorganization network, a main computing array, a data dependency controller, and a global scheduler; wherein,
the off-chip memory is configured to store all of the large-scale data in block format, wherein the large-scale data is divided into a plurality of equally sized blocks;
the on-chip cache unit is configured to store the data of some of the blocks to be computed, as well as the dependency data required for the computation;
the issue unit is configured to read the data of the corresponding blocks from the on-chip cache unit in the order specified by the scheduling algorithm and send it to the pre-reorganization network;
the main computing array is configured to perform the main computation on the block data;
the pre-reorganization network is configured to perform arbitrary data reorganization on the block data before the block data is computed;
the post-reorganization network is configured to perform arbitrary data reorganization on the block data after the block data is computed;
the data dependency controller is configured to handle the data dependencies between the data of different blocks;
the global scheduler is configured to execute a preset scheduling algorithm and control the prefetching, issuing, computation, data reorganization, and data dependency handling of the block data. This technical solution improves the locality of memory accesses by changing the data storage layout and the computation strategy of matrix operations, and dynamically completes data reorganization through added multi-functional data paths, reducing the impact of irregular data structures and data rearrangement on computational efficiency, maximizing the utilization of the on-chip cache and the computing units, and increasing computation speed.
With the above technical solution, the computing architecture can improve data utilization and data processing flexibility, thereby reducing cache misses and memory bandwidth pressure. The beneficial effects of this technical solution are embodied in the following three aspects:
First, the large-scale matrix is divided into multiple blocks (tiles), and a block serves as the minimum-granularity data unit of matrix operations. The data of each block is stored contiguously in memory, which effectively improves cache utilization. In addition, by constructing appropriate algorithms, each block can be reused multiple times, further improving cache utilization and alleviating the performance bottleneck caused by memory bandwidth.
Second, multiple blocks are allowed to complete flexible data reorganization and exchange in the data path, so that the data structure can be reorganized according to computational needs, maximally satisfying the computation requirements of the computing array and the format requirements of the storage units. For example, block data can be arranged to match the deployment of the computing array, so that the efficiency of the computing array is maximized. Moreover, by supporting data exchange and reorganization among multiple blocks, arbitrary global row/column exchanges in the matrix can be completed efficiently; since this operation is performed while the data is in transit, it consumes no extra storage space or latency, effectively improving the efficiency of random row/column exchanges in the matrix. In theory, any global matrix reorganization can be completed through a finite number of intra-block and inter-block data reorganizations. This greatly improves the scalability and adaptability of the computing system to irregular matrix operations.
Third, the computation is optimally scheduled according to the block dependencies in the matrix operation, achieving a high reuse rate of block processing, further improving cache utilization, and adapting well to existing matrix algorithms. A high reuse rate is the key to improving computing performance. For matrix algorithms with many iterations, data locality is usually weak, because global data dependencies generally exist between iterations, making it difficult to iterate repeatedly over local data; as a direct result, on-chip/off-chip data movement becomes the key bottleneck. This technical solution analyzes the dependencies of each block across different iterations and, by grouping blocks, achieves the maximum reuse rate that is consistent with the dependencies, ensuring good data locality for the blocked matrix operations.
Brief Description of the Drawings

Figure 1 is a schematic structural diagram of a computing architecture provided in an embodiment of the present disclosure;
Figures 2(a) to 2(c) show the block partitioning and block grouping of the original matrix, and the distribution of the data of each block in off-chip storage, in an embodiment of the present disclosure;
Figure 3 shows the changes produced by multiple blocks after passing through the pre-reorganization network in an embodiment of the present disclosure;
Figure 4 shows the operand input and result output of the main computing array in an embodiment of the present disclosure;
Figures 5(a) to 5(d) are examples of data dependencies in an embodiment of the present disclosure;
Figure 6 is a diagram of the dependencies between block groups in an embodiment of the present disclosure;
Figure 7 is a schematic structural diagram of another computing architecture provided in an embodiment of the present disclosure;
Figure 8 is a schematic flowchart of the overall block computation flow in an embodiment of the present disclosure;
Figure 9 is a schematic diagram of producer-consumer block groups divided according to block dependencies in an embodiment of the present disclosure;
Figure 10 is a schematic diagram of the workflow of the data dependency controller in an embodiment of the present disclosure;
Figure 11 is a schematic diagram of the BENES data exchange network structure in an embodiment of the present disclosure;
Figure 12 is an example workflow of the data reorganization network module in an embodiment of the present disclosure;
Figure 13 is a schematic diagram of global matrix data reorganization in an embodiment of the present disclosure;
Figure 14 is a schematic diagram of the block dependencies in GJE-based matrix inversion in an embodiment of the present disclosure;
Figure 15 is a complete flowchart of the matrix inversion computation in an embodiment of the present disclosure;
Figure 16 compares the speedup of this architecture over other computing platforms for matrix inversion in an embodiment of the present disclosure;
Figure 17 compares the speedup of this architecture over other computing platforms for solving systems of linear equations in an embodiment of the present disclosure.
Detailed Description of the Embodiments

In one embodiment, as shown in Figure 1, a computing architecture is disclosed, comprising: an off-chip memory, an on-chip cache unit, an issue unit, a pre-reorganization network, a post-reorganization network, a main computing array, a data dependency controller, and a global scheduler; wherein,
the off-chip memory is configured to store all of the large-scale data in block format, wherein the large-scale data is divided into a plurality of equally sized blocks;
the on-chip cache unit is configured to store the data of some of the blocks to be computed, as well as the dependency data required for the computation;
the issue unit is configured to read the data of the corresponding blocks from the on-chip cache unit in the order specified by the scheduling algorithm and send it to the pre-reorganization network;
the main computing array is configured to perform the main computation on the block data;
the pre-reorganization network is configured to perform arbitrary data reorganization on the block data before the block data is computed;
the post-reorganization network is configured to perform arbitrary data reorganization on the block data after the block data is computed;
the data dependency controller is configured to handle the data dependencies between the data of different blocks;
the global scheduler is configured to execute a preset scheduling algorithm and control the prefetching, issuing, computation, data reorganization, and data dependency handling of the block data. This technical solution improves the locality of memory accesses by changing the data storage layout and the computation strategy of matrix operations, and dynamically completes data reorganization through added multi-functional data paths, reducing the impact of irregular data structures and data rearrangement on computational efficiency, maximizing the utilization of the on-chip cache and the computing units, and increasing computation speed.
For this embodiment, the off-chip memory is used to store all of the large-scale data in block format. The off-chip storage device is a large-capacity storage device such as DDR; devices of this kind feature slow access speed but large storage capacity. In the present disclosure, all the data of the large-scale matrix is stored in off-chip storage. The large-scale matrix is divided in advance into multiple equally sized blocks (tiles) and stored in the off-chip memory. A block is the minimum-granularity data unit of matrix operations, and also the minimum unit of transfer, computation, and control. Each block is a local M*N sub-matrix of the original data, and the element data inside each block is stored contiguously in memory. The data of different blocks is usually stored contiguously in units of block groups, i.e., a group consisting of multiple blocks occupies a contiguous storage address space; there can be multiple block groups. The block size, i.e., the specific values of M and N, depends on the specific problem and the computation scale; in some special cases M=N can be used, so that each block is a local square matrix. Original data that cannot be evenly divided into M*N sub-blocks is zero-extended at the edges so that it satisfies the M*N sub-block partitioning. Figures 2(a) to 2(c) show the block partitioning and block grouping of the original matrix, and the distribution of the data of each block in off-chip storage. In the examples of Figures 2(a), 2(b), and 2(c), M=3 and N=2, so each block is a 3*2 sub-matrix. The original matrix is divided according to the 3*2 size, and if the size of the original matrix is not an integer multiple of M*N, zeros are padded at the edges (as shown in Figure 2(b)). It can be seen that the elements inside each block are stored contiguously in memory, and that different blocks are stored contiguously by block group. In addition, vectors that need to participate in operations with the matrix are also stored as M*N blocks and managed together with the matrix blocks, as shown in Figure 2(c).
Although the present disclosure is designed for large-scale matrix operations, it can handle matrices of any size provided that the computing and storage resources are sufficient. The values of the block dimensions M and N should match the scale of the computing array; given the scale of current mainstream computing architectures and storage devices, reasonable values of M and N lie between 4 and 32, and the dimension of the processed matrix may range from 4 to 50,000.
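As a concrete reference for this storage layout, the following is a minimal NumPy sketch of the tiling and zero-padding scheme described above; the function name `to_tiles` and the row-major ordering of the tiles are illustrative assumptions, not details fixed by the disclosure.

```python
import numpy as np

def to_tiles(A, M=8, N=8):
    """Zero-pad A up to a multiple of the tile size and lay the tiles
    out one after another, so that the M*N elements of every tile
    occupy one contiguous run of the flattened buffer."""
    R = -(-A.shape[0] // M) * M            # rows, rounded up to a multiple of M
    C = -(-A.shape[1] // N) * N            # cols, rounded up to a multiple of N
    P = np.zeros((R, C), dtype=A.dtype)
    P[:A.shape[0], :A.shape[1]] = A        # zero-padding at the edges
    tiles = P.reshape(R // M, M, C // N, N).swapaxes(1, 2)
    return np.ascontiguousarray(tiles).reshape(-1, M, N)

A = np.arange(20.0).reshape(4, 5)          # a 4x5 matrix with M=3, N=2, as in Figure 2
print(to_tiles(A, M=3, N=2).shape)         # (6, 3, 2): six contiguous 3x2 tiles
```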
Note that a block refers to a sub-matrix at a specific position in the matrix; a block is a concept defined relative to the matrix. Dividing a matrix into multiple blocks means determining the sub-matrix region corresponding to each block. The data of a block refers to all the elements in the sub-matrix region covered by that block. The entity participating in operations is therefore the block data rather than the block itself: after the block data has been computed, the values of this data may change. Thus, in matrix computation the block data is constantly updated, while the block (as a sub-matrix region) remains constant.
The on-chip cache unit is an embedded on-chip storage device that provides fast read/write access but lower storage capacity. The on-chip cache is used to store some of the blocks to be computed and the dependency data needed for the computation. Here, "some of the blocks to be computed" refers to the complete data of a number of blocks. If the on-chip cache unit is large enough, it can store all the blocks of the original matrix; if not, the blocks stored in it form only a subset of the blocks into which the matrix to be computed has been divided. Blocks are read from the off-chip storage unit into the on-chip cache unit, computed, and then written back to the off-chip storage unit. The data on which the computation depends refers to the information and values, other than the block elements themselves, that the blocks in the on-chip storage unit need when performing computation. Dependency data is explained in detail below.
The issue unit is configured to read the data of the corresponding blocks from the on-chip cache unit in the order specified by the global scheduler module and send it to the pre-reorganization network. The issue unit can read the data of multiple blocks, usually 2 to 4, from the on-chip cache unit at a time. The issue unit is also configured to attach corresponding tag bits to each block when the block is issued. These tag bits follow the block data packet through all subsequent processing stages; with them, the issue unit can precisely control the behavior of the issued blocks throughout the entire computation flow. Tag bits are explained in detail below.
The pre-reorganization network is a non-blocking data exchange network with a data width of k*N*N. This network processes the k blocks issued by the issue unit and is responsible for reorganizing the block data before the blocks enter the main computing array. Data reorganization can take place within a single block or across multiple blocks, and may take the form of arbitrary row exchanges, column exchanges, rearrangement of the data in any order, data multicast, and so on. Figure 3 illustrates several kinds of changes produced when multiple blocks pass through the pre-reorganization network. As shown in Figure 3, the network input is the set of elements of one or more blocks, unrolled into a one-dimensional vector and fed into the pre-reorganization network; the output is likewise a one-dimensional vector of the same length, containing the elements of the output blocks. Data reorganization can be performed among the elements within a block, and the elements of multiple blocks can be exchanged and rearranged. The operations this network can perform on the input data are not limited to the examples of Figure 3. The pre-reorganization network can be implemented with different data exchange networks according to the specific reorganization requirements; in this embodiment, a BENES network is adopted as the pre-exchange network, whose structure is described in detail below.
The main computing array is configured to perform the main computation on the block data and produce the computation results. The main computing array contains parallel computing units and can compute the input block data in parallel. Generally, besides the input block data, the operands of the computing array also include the dependency data required for the computation; dependency data is described in detail below. After computing on the input blocks, the main computing array uses the results to update the values of the corresponding elements in the blocks and, for some algorithms, also generates other computation results. Therefore, the data finally output by the main computing array includes the updated block data. Figure 4 illustrates the operand input and result output of the main computing array; note that Figure 4 shows only one possible scale and computation mode of the main computing array.
The post-reorganization network is configured to perform arbitrary data reorganization on the computation results produced by the main computing array, i.e., on the updated block data, after the block data has been computed; its reorganization function is similar to that of the pre-reorganization network.
The data dependency controller is configured to handle the data dependencies between the data of different blocks. Data dependencies arise from the operations that the blocks need to perform. In many cases, the operation required by a block cannot be completed with the block's own elements alone but needs other information and values; these extra values beyond the block's own elements constitute the dependency data of that block's operation. Dependency data can be the values of all elements of other blocks, the values of some of their elements, or intermediate values computed from the elements of other blocks. The existence of dependency data means that dependencies exist between different blocks. Dependencies are further divided into direct dependencies and indirect dependencies. If an operation requires all elements of multiple blocks to participate simultaneously, these blocks are directly dependent on each other, because all of them must participate directly in the operation. Correspondingly, if the dependency data of a block consists of some elements of one or several other blocks, or of intermediate computation results derived from those blocks, the dependency is an indirect dependency. In an indirect dependency, the block that produces the dependency data is the "producer block", and the block that uses it is the "consumer block". Figures 5(a) to 5(d) list several examples that produce data dependencies: in Figure 5(a), block A and block B are added, so A and B form a direct dependency; in Figure 5(b), blocks A and B need to perform arbitrary row exchanges, so A and B form a direct dependency; in Figure 5(c), every row of block A must subtract a certain row of block B, so A and B form an indirect dependency in which B is the "producer block" and A is the "consumer block"; in Figure 5(d), block C is multiplied by a certain row of the sum of blocks A and B, so block A forms an indirect dependency with blocks B/C, where B/C are the "producer blocks" and A is the "consumer block".
Based on the dependencies between blocks, block groups and the dependencies between block groups can be further defined. A block group is a set of several blocks. Dependencies may exist among the blocks of one group; the dependency data between different blocks within the same group is called "local dependency data". In addition, some blocks in one block group may form dependencies with blocks in another block group; such cross-group dependency data is called "global dependency data". The block group that produces the "global dependency data" is called the "producer block group", and the block group that uses it is called the "consumer block group"; this constitutes the dependency between block groups. Figure 6 shows an example in which blocks A, B, C, and D are grouped into block group 1, and E, F, and G into block group 2. Within block group 1, A is the producer block and B, C, and D are consumer blocks; the dependency data between them is the local dependency data of block group 1. Similarly, in block group 2, block E produces local dependency data. In addition, block A also produces dependency data needed by block group 2; since this data crosses block groups, it is global dependency data. Because this global dependency data is produced by block group 1, a dependency is formed between block group 2 and block group 1, in which block group 1 is the "producer block group" and block group 2 is the "consumer block group".
In the present disclosure, the extraction, computation, and management of dependency data are all performed by the data dependency controller module. A detailed description of block dependencies can be found in the embodiments below.
The global scheduler is the core control module of this architecture, and is configured to execute the preset scheduling algorithm and control the prefetching, issuing, computation, data reorganization, and data dependency handling of the block data. Specifically, the global scheduler instructs the issue module to read and issue the blocks in the on-chip cache in a certain scheduling order, and different tag bits are set for different blocks according to the global scheduler's instructions. The tag bits of each block indicate the processing and operations it requires in each subsequent module, including the pre-exchange network, the main computing array, the post-exchange network, and the data dependency controller. Based on the dependencies between blocks and between block groups, the global scheduler decides the issue order of the blocks and the operations they need to perform. Simply put, the scheduling principle is that producer blocks precede consumer blocks and producer block groups precede consumer block groups; for the example of Figure 6, one possible scheduling order is A->B->C->D->E->F->G. The global scheduler can be implemented in many forms, including a state machine, a dynamic lookup table, or an MCU processor. In addition, the global scheduler is responsible for notifying the prefetch module in advance, according to the processing order of the blocks, to move blocks between the off-chip storage unit and the on-chip storage unit.
For this embodiment, throughout the computation the global scheduler is responsible for block prefetching, computation, data reorganization, and dependency handling according to the preset scheduling algorithm. The global scheduler reads data blocks into the on-chip cache by prefetching, and computation proceeds in units of blocks; in this embodiment, the block size is M=N=8.
The issue module is responsible for reading the corresponding data blocks from the on-chip cache in the order specified by the global scheduler and sending them to the subsequent processing stages. This module reads and sends k blocks (k>1) at a time. The k blocks can pass through the entire processing pipeline in parallel.
During block computation, a block exchange network is used to reorganize the data structure. In this embodiment, the pre-reorganization network and the post-reorganization network are both non-blocking BENES data exchange networks with a data width of k*N*N. These two networks can perform arbitrary data reorganization on the k blocks before and after computation.
The main computing array is a set of parallel fixed-point/floating-point arithmetic units, and the operation types are the common fixed-point/floating-point operations. In this embodiment, the main computing array is pipelined, accepting k*N*N elements per cycle and performing addition (add), multiplication (multiply), or multiply-accumulate (mac) operations.
The data dependency module is responsible for handling the data dependencies that may exist between different blocks. It manages the dependency data and can invoke the auxiliary computing array to compute dependency data. The auxiliary computing array is a set of parallel fixed-point/floating-point arithmetic units whose scale and operation types depend on the specific matrix algorithm.
In this embodiment, since the block data is contiguously distributed in the storage space and is uniformly prefetched and managed by the global scheduler, the utilization of the on-chip cache is high. The dependency-based block grouping and scheduling algorithm adopted in this embodiment, together with the management module for dependency data, minimizes the coupling between blocks, increases the block reuse rate, relieves the access pressure on the off-chip storage device, and greatly reduces the performance bottleneck caused by memory access latency, thereby providing high-performance, low-latency matrix computation.
In another embodiment, as shown in Figure 7, the disclosed computing architecture further includes:
a prefetch unit, configured to move block data between off-chip storage and the on-chip cache;
a write-back cache unit, configured to write the block data back to the on-chip cache unit after the block data has been computed;
an auxiliary computing array, configured to assist the data dependency controller in extracting, preprocessing, and computing dependency data.
For this embodiment, the prefetch unit is configured to move block data between off-chip storage and the on-chip cache in the order specified by the global scheduler module. This module simply moves data between the two storage devices; the address and length of the data to be moved are specified by the global scheduler module. Any existing data transfer technique can be used to implement the function of this module.
The auxiliary computing array is configured to assist the data dependency controller in extracting, preprocessing, and computing dependency data. Note that the arithmetic units and scale of the auxiliary computing array depend on the specific algorithm, and the array is not a mandatory component; in some matrix computations, the auxiliary computing array is not needed for extracting and computing dependency data. Generally, the scale of the auxiliary computing array is smaller than that of the main computing array.
In another embodiment, the block data is stored contiguously in memory.
For this embodiment, the data of each block is stored contiguously in memory, which effectively improves cache utilization. The elements inside each block are always stored at contiguous addresses, and the data of different blocks is usually stored contiguously in units of block groups, i.e., a group consisting of multiple blocks occupies a contiguous storage address space. There can be multiple block groups.
In another embodiment, the issue unit is further configured to attach corresponding tag bits to each block when the block is issued.
For this embodiment, these tag bits follow the block data packet through all subsequent processing stages. With the tag bits, the issue unit can precisely control the behavior of the issued blocks throughout the entire computation flow. The overall block processing flow is shown in Figure 8. As can be seen from Figure 8, a block carries different types of tag bits when it is issued; these tag bits indicate how the block is to be processed in the different modules, and each is discarded once the corresponding operation has been completed.
In another embodiment, the tag bits indicate the computation task that the block needs to perform, the data dependency information, and the block data reorganization information.
For this embodiment, the specific tag bit settings are shown in Table 1 below.
Table 1 (tag bit definitions; provided as image PCTCN2020087814-appb-000001 in the original publication)
For example, in the computation flow shown in Figure 8, one possible configuration of the tag bits Tag1-4 involved is shown in the table below. Note that Table 2 is only one example of tag bit settings; the specific tag bit contents and the way they are set depend on the actual computation task.
Table 2 (an example tag bit configuration for the flow of Figure 8; provided as image PCTCN2020087814-appb-000002 in the original publication)
In another embodiment, the data dependencies include direct dependencies and indirect dependencies; a direct dependency means that multiple blocks participate directly in an operation, with the result used directly to update the blocks or serving as intermediate dependency data; an indirect dependency means that the computation of a block must be completed with the help of the data of other blocks.
For this embodiment, for matrix algorithms that require many computation iterations, the block scheduling algorithm aims to analyze the dependencies between different blocks and to optimize block reuse efficiency. Specifically, the scheduling order and the scheduling strategy of each block depend on the dependencies between blocks.
An indirect dependency means that the computation of a block must be completed with the help of the data of other blocks. In this kind of dependency, the blocks relied upon are called predecessor blocks, and the data relied upon is called dependency data. As intermediate data of the operation, dependency data can be stored in the on-chip cache and read when the related blocks are computed.
A direct dependency means that multiple blocks participate directly in an operation, with the result used directly to update the blocks or serving as intermediate dependency data. In this case, the blocks involved form direct dependencies with each other. For example, for a data exchange among multiple blocks, these blocks form a direct dependency; likewise, when searching for the maximum element of a certain matrix column, the blocks to which that column belongs form a direct dependency.
Based on these two basic dependency types, for a given matrix algorithm we can analyze and establish the dependencies of all its blocks. Based on the "producer-consumer" model and the indirect dependencies between blocks, all blocks can be recursively grouped into "producer block groups" and "consumer block groups"; the former produce dependency data during computation, and the latter use this dependency data in their computation, as shown in Figure 9. In this model, blocks are divided into "producer block groups" and "consumer block groups", with the former providing the dependency data the latter need for computation. Every indirect dependency produces corresponding dependency data, which must be shared with the consumer block groups so that the computation of the consumer blocks can be completed. A producer block group produces two kinds of dependency data during operation: "local dependency data", used only for the operations of blocks within the group and not shared with other block groups; and "global dependency data", used not only for the computation of blocks within the group but also provided to the corresponding "consumer block groups". Multi-level "producer-consumer" dependencies can also be observed, i.e., lower-level "producer-consumer" dependencies may still exist among the blocks within some producer/consumer block groups. Note that, in a multi-level "producer-consumer" relationship, lower-level "global dependency data" may be upper-level "local dependency data". By caching the "global dependency data" produced by a "producer" block group and later providing it to the "consumer" block groups, producer blocks and consumer blocks can be effectively decoupled; during the iterations of the matrix computation it is no longer necessary to load the producer and consumer blocks over and over again, which greatly increases the reuse rate of the blocks in the on-chip cache. Specifically, producer blocks can complete multiple computation iterations consecutively on chip while the corresponding global cached data is stored; the consumer blocks loaded afterwards can likewise complete multiple iterations consecutively on chip.
Also note that the division into block groups may change dynamically in different iteration stages of the matrix operation. Based on the above model, the block scheduling algorithm follows these principles: (1) starting from the lowest-level "producer-consumer" dependency, preferentially select and issue the blocks of the producer block group; (2) issue all blocks that have direct dependencies consecutively; (3) repeatedly issue and compute the blocks already present in the on-chip cache, until their dependency conditions are no longer satisfied; (4) predict the block groups needed next and prefetch them into the on-chip cache in advance. A minimal sketch of the resulting ordering is given after this paragraph.
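As a minimal sketch of principle (1), assuming the group-level dependencies are already known, the producer-before-consumer issue order can be derived with a standard topological sort; the function and variable names here are illustrative and are not part of the disclosure. Principles (2)-(4) would then refine this order at block granularity.

```python
from collections import deque

def issue_order(groups, produces_for):
    """Order block groups so that every producer group precedes the
    consumer groups that use its global dependency data (Kahn's
    topological sort). `produces_for` maps a producer group id to the
    list of consumer group ids that depend on it."""
    indeg = {g: 0 for g in groups}
    for consumers in produces_for.values():
        for c in consumers:
            indeg[c] += 1
    ready = deque(g for g in groups if indeg[g] == 0)   # pure producers first
    order = []
    while ready:
        g = ready.popleft()
        order.append(g)
        for c in produces_for.get(g, []):
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    return order

# For the dependency of Figure 6, where block group 1 produces the
# global dependency data used by block group 2:
print(issue_order([1, 2], {1: [2]}))   # [1, 2]
```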
In a specific implementation, the dependencies between blocks must be analyzed according to the actual matrix algorithm, the "producer block groups" and "consumer block groups" established, the issue order and scheduling strategy of the blocks determined, and the scheduling strategy of the global scheduler configured accordingly.
The global scheduler is implemented as a state machine that controls the prefetching, issuing, and computation of blocks at every moment and decides which data-dependency-related operations need to be performed. These behaviors are carried out through the control interfaces between the global scheduler and the prefetch module, the issue module, and the data dependency controller module.
In another embodiment, the data dependency controller is further configured to: 1) determine whether the current block contains dependency data on which subsequent blocks depend, and if so, extract, compute, and save that dependency data, where the computation of the dependency data relies on the auxiliary computing array; and 2) determine whether the current block operation depends on previously stored block data, and if so, read the relevant dependency data and provide it to the main computing array for the operation on the current block.
For this embodiment, the specific functions of the data dependency controller are as follows: (1) managing the storage, reading, and clearing of all global and local dependency data; (2) for each currently issued block whose computation requires dependency data, reading the corresponding dependency data from the on-chip cache and sending it to the main computing array; (3) for each currently issued block that needs to produce dependency data, buffering the corresponding block data and extracting the required dependency data, where the extraction can be done with the help of the auxiliary computing array.
The workflow of the data dependency controller is shown in Figure 10. Upon receiving the tag bits carried by an issued block, the data dependency controller first determines: (1) whether the block corresponding to the tag needs dependency data to complete its computation; and (2) whether the block will produce dependency data that needs to be saved. Note that both cases may apply at the same time; the data dependency controller therefore implements two parallel logic paths that handle data reading and data saving separately. For the former, the controller computes the read address of the dependency data, reads it from the on-chip cache, and sends it to the main computing array for computation. For the latter, the controller further determines whether the dependency data can be obtained directly from the current block data, for example as the value of a certain row, column, or element of the block. If so, the dependency data is selected directly from the block and saved into the on-chip cache; if not, the dependency data must be obtained by further computation on the block data, in which case the controller invokes the auxiliary computing array to perform the computation and saves the result into the on-chip cache. A sketch of these two paths follows.
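These two parallel paths can be summarized in the following hypothetical sketch; every name in it (`tag.needs_dependency`, `aux_array.compute`, and so on) is an illustrative assumption made to mirror Figure 10, not an interface defined by the disclosure.

```python
def handle_block(tag, block, cache, aux_array, main_array):
    """Sketch of the data dependency controller's two parallel paths."""
    # Path 1: the block's computation consumes previously saved data.
    if tag.needs_dependency:
        dep = cache.read(tag.dependency_addr)      # fetch from the on-chip cache
        main_array.set_operands(dep)               # forward to the main array
    # Path 2: the block produces dependency data for later blocks.
    if tag.produces_dependency:
        if tag.direct_select:                      # e.g. a row, column, or element
            dep = block.select(tag.selector)       # take it straight from the block
        else:
            dep = aux_array.compute(tag.derive_op, block)  # derive it by computation
        cache.write(tag.save_addr, dep)            # keep it for the consumer blocks
```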
In another embodiment, the dependency data includes local dependency data and global dependency data; the local dependency data refers to intermediate data that is produced by a block group and needed only in the operations of that block group; the global dependency data refers to intermediate data that is produced by a block group and needed in the operations of both that block group and other block groups.
For this embodiment, indirect dependencies may exist between blocks. To decouple the dependencies between blocks, increase the block reuse rate, and reduce data movement between the on-chip cache and the off-chip main memory, the "dependency data" produced while processing a "producer" block group must be cached so that the "consumer" block groups can later use it for their computation. All of this dependency data is managed by the data dependency controller module.
Local dependency data does not need to be shared with other block groups. Therefore, it is kept only during the computation phase of the corresponding block group and is discarded once that computation is completed.
Global dependency data refers to intermediate data produced by a block group and needed in the operations of both that block group and other block groups (i.e., the corresponding "consumer block groups"). This data must be kept in the on-chip cache for a long time; only after all the dependent blocks have been computed can the global dependency data be discarded.
The data dependency controller cooperates with the global scheduler to manage these two kinds of dependency data. Specifically, the global scheduler determines the data dependencies between blocks and, when a block is issued, indicates via the tag bits (Tag) the data dependency operations that the block needs to complete. After receiving the tag bits carried by a block, the data dependency controller performs the operations on the dependency data as indicated. An example of this flow is shown in Figure 10.
In another embodiment, the pre-reorganization network and the post-reorganization network are data exchange networks. The network can be a BENES network, or another network with data exchange capability, such as a Batcher-Banyan network.
For this embodiment, two block exchange networks are deployed on the computation path: the pre-data-reorganization network and the post-data-reorganization network, placed before and after the main computing array, respectively. These two networks are responsible for the complex data reorganization tasks within each block or across multiple blocks, including row exchanges, column exchanges, transposition, and other necessary data rearrangements. The data reorganization networks are implemented with a BENES network with k*N*N inputs.
A schematic of the BENES network is shown in Figure 11. The BENES network consists of several stages of switching units, each of which can either pass its two input signals straight through or exchange them. By applying control signals to the BENES network, any rearrangement of data from the input ports to the output ports can be achieved. These control signals are called "control words". Note that, since the BENES network is built recursively, an N-input BENES network can be used as two independent N/2-input BENES networks; as shown in Figure 11, an 8-input BENES network can serve as two independent 4-input BENES networks. A k*N*N-input BENES network can therefore not only perform arbitrary data reorganization across the k blocks, but can also perform data reorganization within just one or a few of its sub-networks.
In actual use, all required data rearrangement operations must be determined beforehand and their control words precomputed. These control words are stored in an on-chip ROM and can be read by the pre- and post-data-reorganization networks. The tag bits of a block record the control word IDs of the pre- and post-rearrangement operations required by that block. Block data reorganization can be completed within a single block, or across multiple blocks issued in parallel (at most k). For complex data reorganizations that require several blocks together, the blocks involved must first be buffered in the write-back cache module and then processed by the post-data-reorganization network in the specified order. Figure 12 gives an example.
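To make the recursive structure concrete, the following is a minimal Python sketch that routes data through a recursively built BENES network under given control bits (0 = pass-through, 1 = cross); the flat control-bit encoding is an illustrative assumption, and the control words that realize a target permutation are assumed to be precomputed offline, just as this architecture stores them in an on-chip ROM.

```python
def benes(data, ctrl):
    """Route `data` (length a power of two) through a recursive BENES
    network. `ctrl` is a flat list of switch bits consumed in order:
    input column, upper subnetwork, lower subnetwork, output column.
    The control list is consumed destructively."""
    n = len(data)
    if n == 2:                                  # base case: one 2x2 switch
        return [data[1], data[0]] if ctrl.pop(0) else list(data)
    top, bot = [], []
    for i in range(n // 2):                     # input switch column
        a, b = data[2 * i], data[2 * i + 1]
        if ctrl.pop(0):
            a, b = b, a
        top.append(a)
        bot.append(b)
    top = benes(top, ctrl)                      # independent N/2-input BENES
    bot = benes(bot, ctrl)                      # independent N/2-input BENES
    out = []
    for i in range(n // 2):                     # output switch column
        a, b = top[i], bot[i]
        if ctrl.pop(0):
            a, b = b, a
        out.extend([a, b])
    return out

ctrl = [0] * 20                      # an 8-input BENES contains 20 switching units
ctrl[16] = 1                         # cross the first switch of the output column
print(benes(list(range(8)), ctrl))   # [1, 0, 2, 3, 4, 5, 6, 7]
```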
By configuring appropriate block data reorganization patterns and a reasonable block scheduling strategy, many kinds of data reorganization over the whole matrix, such as arbitrary row and column exchanges, can be accomplished. An example of a global row/column exchange over the matrix is given below. In this example, the blocks whose data needs to be exchanged form mutual direct dependencies. Blocks (9, 10, 13, 14) need both row and column exchanges and thus form a four-way direct dependency. In addition, (1, 2) and (5, 6) need column exchanges, and (11, 12) and (15, 16) need row exchanges; these blocks form direct dependencies with each other. Following these dependencies, the global scheduler sets the issue order shown in Figure 13. Blocks issued at the same time complete their row/column exchanges in the data reorganization network. Through these operations, global matrix row/column exchanges can be completed with no extra overhead.
In another embodiment, arbitrary data reorganization includes: row exchange, column exchange, transposition, and data rearrangement.
In another embodiment, the on-chip cache unit is partitioned into regions for block data, local dependency data, and global dependency data.
For this embodiment, the sizes of the partitions are preset at system design time according to the resource constraints and the algorithm requirements. The data dependency controller manages all read and write operations on the local and global dependency data.
In another embodiment, it is shown that this computing architecture can efficiently perform matrix inversion and linear equation solving based on Gauss-Jordan elimination (hereinafter, the GJE algorithm).
The GJE algorithm is a classic algorithm in linear algebra and one of the algorithms most frequently used in scientific computing. Owing to its good computational parallelism and relatively simple operations, GJE has been chosen by many parallel computing systems as the basic algorithm for solving linear systems, inverting matrices, computing LU decompositions, and so on. The goal of the GJE algorithm is to transform an arbitrary square matrix into the identity matrix through a series of iterative elementary row operations. For an N*N matrix A, GJE requires N iterations in total; in the i-th iteration, GJE transforms the i-th column of A into the corresponding column of the identity matrix. The i-th iteration proceeds as follows:
(1) Pivoting: search elements [i:N-1] of the i-th column of matrix A and select the element a_{k,i} with the largest absolute value as the pivot element; the k-th row containing this element is called the pivot row. This process is known as partial pivoting.
(2) Pivot row exchange: exchange the pivot row (i.e., the k-th row) of A with the i-th row. The pivot row now becomes the i-th row of A.
(3) Elimination: for every row other than the pivot row (i.e., the i-th row), each element a_{x,y} is updated according to the formula a_{x,y} = a_{x,y} - (a_{x,i}/a_{i,i}) * a_{i,y}, where (a_{x,i}/a_{i,i}) is called the elimination coefficient. After this update, all elements of the i-th column of A, except the pivot element, have been eliminated to 0.
(4) Normalization: every element a_{i,y} of the pivot row is updated according to a_{i,y} = a_{i,y} / a_{i,i}. After this update, the pivot element has been normalized to 1. This concludes the i-th iteration of the GJE algorithm.
The above iteration is repeated N times, until matrix A has been completely transformed into the identity matrix. A minimal sketch of one full pass follows.
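The following is a minimal NumPy sketch of this procedure applied to the augmented matrix [A|Y], mirroring steps (1)-(4) above; it serves as a mathematical reference only and reflects none of the blocking, scheduling, or dependency handling of this architecture.

```python
import numpy as np

def gje_solve(A, Y):
    """Gauss-Jordan elimination with partial pivoting on [A|Y].
    When A has been reduced to the identity, Y holds the solution X."""
    A = A.astype(float).copy()
    Y = Y.astype(float).copy()
    n = A.shape[0]
    for i in range(n):
        # (1) partial pivoting: largest |a_{k,i}| among rows [i, n-1]
        k = i + int(np.argmax(np.abs(A[i:, i])))
        # (2) pivot row exchange
        A[[i, k]], Y[[i, k]] = A[[k, i]], Y[[k, i]]
        # (3) elimination: zero out column i in every other row
        coef = A[:, i] / A[i, i]      # elimination coefficients a_{x,i}/a_{i,i}
        coef[i] = 0.0                 # leave the pivot row untouched
        A -= np.outer(coef, A[i])
        Y -= np.outer(coef, Y[i])
        # (4) normalization: scale the pivot row so that a_{i,i} becomes 1
        Y[i] /= A[i, i]
        A[i] /= A[i, i]
    return Y

# gje_solve(A, Y) solves AX = Y; gje_solve(A, np.eye(len(A))) returns A^{-1},
# which corresponds to the augmented-matrix formulation [A|I] described below.
```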
The GJE algorithm can be used to compute the solution of a linear system or a matrix inverse.
For a linear system AX=Y, A and Y can be combined into the augmented matrix [A|Y]; the GJE algorithm is then executed on A, with matrix Y following the elementary row operations on A. When A has been eliminated to the identity matrix, Y has been transformed into the solution X of the system.
To obtain the inverse A^{-1} of matrix A, A and an identity matrix I of the same size can be combined into the augmented matrix [A|I]; the GJE algorithm is then executed on A, with I following the elementary row operations on A. When A has been eliminated to the identity matrix, I has been transformed into the inverse A^{-1}.
In each iteration, one column of matrix A is eliminated into a column of the identity matrix, and at the same time one column of the identity matrix on the right side of the augmented matrix is transformed into a column of the inverse A^{-1}. Because of this correspondence, in the actual computation only the non-identity columns need to be stored, which halves the overall matrix storage overhead compared with the original algorithm. However, this optimization has one problem: because of the partial pivoting step, matrix A undergoes random row exchanges, and consequently the columns of the inverse A^{-1} also appear in a random order. Since these columns can only be stored in the order in which they appear, their order in memory is scrambled. Therefore, in the actual computation the columns of the inverse A^{-1} must be restored by reordering them.
In this embodiment, the matrix is divided into 8x8 blocks, and each column of blocks constitutes a block group. From the GJE algorithm it can be seen that, besides the elements of the matrix blocks themselves, the computation involves the following kinds of dependency data: the pivot row elements, the pivot element, and the pivot column elements. The pivot column elements are used to compute the elimination coefficient of each matrix row during elimination.
Taking GJE-based matrix inversion as an example, suppose the matrix on the left of Figure 14 is in the state of the 10th elimination iteration, where the elements on the right belong to matrix A and the elements on the left belong to the inverse A^{-1}. As shown in the figure, to eliminate column 10, the element with the largest absolute value among the elements A[10:16,10] of column 10 must be found as the pivot element; the search yields A[15,10]. In the following iteration, the tasks below must be completed:
(1) exchange row 15 and row 10 of matrix A;
(2) compute the elimination coefficient of each row using the pivot element and the pivot column elements;
(3) perform elimination on each row using the pivot row elements and the elimination coefficients;
(4) normalize the pivot row;
(5) to restore the correct column order of the inverse A^{-1}, exchange column 2 and column 8 of A^{-1}.
In this embodiment, all blocks in each block column are grouped into one block group. Combining this with the computation tasks above yields the dependencies between blocks, shown on the right side of Figure 14, where the direct dependencies and the two kinds of indirect dependencies, local data dependencies and global data dependencies, are marked. It can be seen that the data reorganization between blocks forms direct dependencies. The local data dependencies inside each block group come from the pivot row elements of that block column group. In addition, every block group needs the elimination coefficients computed from the pivot element and the pivot column in order to perform elimination. Therefore, the block group containing the pivot element assumes the role of the "producer" block group: the elimination coefficients it produces during computation are saved as global dependency data for use by the other block groups. Meanwhile, since there is a data exchange between column 2 and column 8, the corresponding blocks form direct dependencies, which causes the two block groups corresponding to these two columns to merge into a single block group. Therefore, at the moment shown in Figure 14, there are one "producer" block group and two "consumer" block groups.
For the dependencies shown in Figure 14, the global scheduler determines the issue order of the blocks following the scheduling principles, namely that producer blocks take priority over consumer blocks and that blocks with direct dependencies are issued consecutively. The final issue order of the blocks in Figure 14 is therefore: (11,15)->(3,7)->(12,16)->(4,8)->(9,10,13,14)->(1,2)->(5,6).
Note that the above scheduling strategy does not yet take block reuse into account. In fact, from the GJE algorithm and the block grouping of Figure 14, the block groups only need to share the elimination coefficients corresponding to the pivot column. Since the pivot column moves from left to right through matrix A as the iterations proceed, over consecutive GJE iterations the pivot column stays within one or more consecutive block groups. In this case, the "producer" block group containing the pivot element remains the "producer" for several consecutive iterations. In other words, the "producer" block group can first be reused for multiple GJE elimination iterations, with the global dependency data produced by each iteration being recorded; the other "consumer" block groups can then likewise be reused over multiple iterations based on this global dependency data.
For the example of Figure 14, the producer block group <3,7,11,15> can consecutively complete the elimination iterations of columns 9-12, after which the other block groups consecutively complete multiple elimination iterations. In this way, each block in the example of Figure 14 is reused 4 times. Going one step further, the block groups <3,7,11,15> and <4,8,12,16> can be merged into one larger block group <3,7,11,15,4,8,12,16>, which can consecutively complete 8 elimination iterations; in this case, the reuse factor of each block rises to 8. Increasing the block reuse rate effectively reduces the data movement between main memory and the on-chip cache and improves computational efficiency. In actual deployment, the optimal block reuse count can be chosen according to factors such as the on-chip computing power and the off-chip main memory bandwidth, and the block group size can be set accordingly. With the optimal block reuse count, the access time to the off-chip main memory can be completely hidden within the on-chip computation time, theoretically achieving close to 100% utilization of the computing array.
Taking matrix inversion as an example, the overall computation flow of this embodiment is shown in Figure 15.
In this embodiment, the main computation flow is block issue, elimination, data reorganization, and write-back to the cache. The block issue module can issue at most two blocks per cycle. According to the scheduling strategy, the same block group can be issued multiple times, thereby reusing block computation.
The main control flows include data dependency control and global scheduling control.
Dependency data control mainly concerns the pivot row data and the elimination coefficients corresponding to the pivot column. The pivot row data is local dependency data: it is extracted and saved at the very beginning of each block group's computation and discarded when that block group's computation ends. The elimination coefficients are global dependency data and must be kept in the cache for a long time. The computation of the elimination coefficients depends on the values of the pivot column elements and of the pivot element, and must be precomputed during the iterations; that is, while eliminating column k, the pivot element and elimination coefficients of column k+1 are precomputed. Therefore, the data dependency controller must determine whether a block contains the pivot column of the next iteration (i.e., column k+1, called the "next pivot column" in the figure). If it does, the next pivot column is buffered and its maximum element is found as the pivot element. After that, the data dependency controller invokes the auxiliary computing array to compute the elimination coefficients for the next iteration. Finally, the elimination coefficients are saved into the cache as global dependency data. Note that this dependency data extraction and computation proceeds in parallel with the main computation flow and does not block it.
The flowchart of Figure 15 also describes the workflow of the global scheduler. The global scheduler is responsible for generating the issue order and the prefetch order of the blocks. As described above, in this embodiment the blocks of each column are grouped into one block group. The scheduling strategy of the global controller mainly involves the following factors:
(1) The dependencies between different block groups, based on the elimination coefficients: the block group containing the pivot column is scheduled before the other block groups, and the same block group is reused multiple times.
(2) The dependencies between different blocks within the same block group, based on the pivot row elements: the block containing the pivot row is scheduled before the other blocks.
(3) Because of partial pivoting, global matrix row exchanges are required. The blocks that need a row exchange form direct dependencies and must be issued at the same time.
(4) Because the columns of the inverse A^{-1} appear out of order, column exchanges of the A^{-1} matrix are required. The blocks that need a column exchange form direct dependencies and must be issued at the same time.
Among these factors, (1) and (2) depend only on the matrix scale and the system resource constraints, and are configured offline, whereas (3) and (4) must be computed and generated dynamically online. From the GJE algorithm introduced earlier, (3) and (4) both depend on the partial pivoting process, i.e., on the row exchanges of matrix A. Therefore, the global scheduler must obtain the row exchange information of A in time and use it to determine the subsequent column exchange order of the inverse A^{-1}. Finally, the global scheduler combines the row and column exchange requirements to generate the issue and prefetch order of the blocks. This process is shown in the flowchart of Figure 15.
In another embodiment, the performance of this embodiment is tested by simulation. The simulation experiments are based on the RTL code, the DDR/SRAM IP simulation models, and the floating-point arithmetic unit IP models. The system parameters of this embodiment are as follows: working frequency: 800 MHz; block size: 8x8; main computing array scale: 128 x 32-bit FP MAC units; auxiliary computing array scale: 8 x 32-bit FP division units; on-chip cache size: 776 KB; BENES network scale: 128 x 32-bit inputs.
The working frequency is obtained by synthesizing the RTL code together with the synthesizable DDR/SRAM IP simulation models and the floating-point arithmetic unit IP models using the Synopsys Design Compiler (DC) tool, and can be regarded as a practically achievable working frequency.
The test set consists of random floating-point matrices of different sizes. In this embodiment, matrix inversion and linear equation solving are performed on the test set matrices, and the operation latency is recorded. The control group of the test comprises the mainstream high-performance large-scale matrix computing libraries in common use: MKL, LAPACK, and CUBLAS. MKL (version 3.8.0) and LAPACK (version 3.8.0) run on an Intel XEON Gold 6146 platform, and CUBLAS (version 10.1) runs on an NVIDIA GPU RTX 2080 Ti platform. The parameters of the different platforms in this experiment are listed in Table 3.
Table 3 (platform parameters; provided as image PCTCN2020087814-appb-000003 in the original publication)
For matrix inversion, the test set covers matrix orders from 32 to 2048. For linear equation solving AX=Y, the test set likewise covers matrix orders from 32 to 2048. Unlike inversion, in equation solving the size of Y also affects the overall performance, so the impact of different Y sizes on performance was tested separately, with Y sizes of N*8, N*32, and N*64.
Table 4 lists the latency (unit: seconds) of completing the matrix inversion operation on matrices of each size on the different platforms, and Figure 16 shows the speedup of this computing architecture over the other control groups. The ordinate of Figure 16 is "the speedup factor of this computing architecture compared with other platforms", i.e., the ratio of another platform's computation time to the computation time of this computing architecture. For example, at matrix order 32 in Figure 16, the computation times of MKL, LAPACK, and CUBLAS are 47.8, 128, and 69 times the computation time of this computing architecture, respectively.
Matrix Order   This architecture   LAPACK    MKL      CUBLAS
32x32          0.0007              0.093     0.034    0.050
64x64          0.0043              0.319     0.061    0.217
128x128        0.0286              1.244     0.144    1.018
256x256        0.2034              8.281     0.615    4.75
512x512        1.4878              61.91     3.267    32.64
1024x1024      11.534              497.21    26.375   268.40
2048x2048      92.274              3920.8    195.91   2213.90
Table 4
Table 5 lists the latency (unit: seconds) of completing the linear equation solving operation on matrices of each size on the different platforms, and Figure 17 shows the speedup of the present invention over the other control groups.
Table 5 (provided as image PCTCN2020087814-appb-000004 in the original publication)
The above experimental results show that this embodiment clearly outperforms the other computing platforms on matrices of all tested scales, and still achieves a high speedup in large-scale matrix computation. It is particularly worth noting that MKL is currently the best-performing high-performance scientific computing library, and that in large-scale matrix operations this computing architecture stably achieves a twofold speedup over MKL. In addition, the resource consumption of this embodiment is far lower than that of the other computing platforms: its on-chip cache is only 1/30 of that of the Intel CPU, and its DDR bandwidth is also far lower than that of the other platforms. This comparison further shows that this architecture makes highly efficient use of the on-chip cache resources, achieving performance far superior to traditional computing methods with fewer resources.
In theory, any matrix computation can be deployed onto this computing architecture by analyzing the dependencies between its blocks and designing a scheduling strategy accordingly. Note that different matrix algorithms may differ considerably in the required dependency data computations and block computations, so the corresponding computing modules and pipelines need to be customized for each matrix algorithm. However, the overall structure and computation flow of this architecture, the scheduling strategy algorithm, and the functions of each module remain unchanged.
At the same time, since a high-reuse scheduling strategy requires more on-chip storage resources to store more global dependency data, this architecture's support for large-scale matrices depends on the amount of on-chip storage resources and on the scale of the computing array. In actual deployment, suitable storage resources and computing arrays can be customized according to the actual algorithm and the matrix size.
Although embodiments of the present invention have been described above with reference to the accompanying drawings, the present invention is not limited to the specific embodiments and application fields described above; the specific embodiments are merely illustrative and instructive, not restrictive. Under the teaching of this specification, and without departing from the scope protected by the claims of the present invention, those of ordinary skill in the art may devise many other forms, all of which fall within the protection of the present invention.

Claims (10)

  1. A computing architecture, comprising: an off-chip memory, an on-chip cache unit, an issue unit, a pre-reorganization network, a post-reorganization network, a main computing array, a data dependency controller, and a global scheduler; wherein,
    the off-chip memory is configured to store all of the large-scale data in block format, wherein the large-scale data is divided into a plurality of equally sized blocks;
    the on-chip cache unit is configured to store the data of some of the blocks to be computed, as well as the dependency data required for the computation;
    the issue unit is configured to read the data of the corresponding blocks from the on-chip cache unit in the order specified by the scheduling algorithm and send it to the pre-reorganization network;
    the main computing array is configured to perform the main computation on the block data;
    the pre-reorganization network is configured to perform arbitrary data reorganization on the block data before the block data is computed;
    the post-reorganization network is configured to perform arbitrary data reorganization on the block data after the block data is computed;
    the data dependency controller is configured to handle the data dependencies between the data of different blocks;
    the global scheduler is configured to execute a preset scheduling algorithm and control the prefetching, issuing, computation, data reorganization, and data dependency handling of the block data.
  2. The computing architecture according to claim 1, further comprising:
    a prefetch unit, configured to move block data between off-chip storage and the on-chip cache;
    a write-back cache unit, configured to write the block data back to the on-chip cache unit after the block data has been computed;
    an auxiliary computing array, configured to assist the data dependency controller in extracting, preprocessing, and computing dependency data.
  3. The computing architecture according to claim 1, wherein the data of the blocks is stored contiguously in memory.
  4. The computing architecture according to claim 1, wherein the issue unit is further configured to attach corresponding tag bits to each block when the block is issued.
  5. The computing architecture according to claim 4, wherein the tag bits indicate the computation task that the block needs to perform, the data dependency information, and the data reorganization information of the block.
  6. The computing architecture according to claim 1, wherein the data dependencies include direct dependencies and indirect dependencies; a direct dependency means that the data of multiple blocks participates directly in an operation, with the result used directly to update the block data or serving as intermediate dependency data; an indirect dependency means that the computation of the data of a block must be completed with the help of the data of other blocks.
  7. The computing architecture according to claim 1, wherein the data dependency controller is further configured to: determine whether the current block operation depends on previously stored block data, and if so, read the relevant dependency data and provide it to the main computing array for the operation on the data of the current block.
  8. The computing architecture according to claim 2, wherein the data dependency controller is further configured to: determine whether the current block contains dependency data on which subsequent blocks depend, and if so, extract, compute, and save that dependency data, wherein the computation of the dependency data relies on the auxiliary computing array.
  9. The computing architecture according to claim 1, wherein the dependency data includes local dependency data and global dependency data; the local dependency data refers to intermediate data that is produced by a block group consisting of multiple blocks and needed only in the operations of that block group; the global dependency data refers to intermediate data that is produced by a block group consisting of multiple blocks and needed in the operations of both that block group and other block groups.
  10. The computing architecture according to claim 1, wherein the on-chip cache unit is partitioned into regions for block data, local dependency data, and global dependency data.
PCT/CN2020/087814 2020-04-27 2020-04-29 A computing architecture WO2021217502A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/864,014 US11886347B2 (en) 2020-04-27 2022-07-13 Large-scale data processing computer architecture

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010343215.9 2020-04-27
CN202010343215.9A CN111522776B (zh) 2020-04-27 2020-04-27 一种计算架构

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/864,014 Continuation US11886347B2 (en) 2020-04-27 2022-07-13 Large-scale data processing computer architecture

Publications (1)

Publication Number Publication Date
WO2021217502A1 true WO2021217502A1 (zh) 2021-11-04

Family

ID=71910852

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/087814 WO2021217502A1 (zh) 2020-04-27 2020-04-29 一种计算架构

Country Status (3)

Country Link
US (1) US11886347B2 (zh)
CN (1) CN111522776B (zh)
WO (1) WO2021217502A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023088119A1 (en) * 2021-11-18 2023-05-25 International Business Machines Corporation Automatic data domain identification

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118409874B (zh) * 2024-07-02 2024-10-18 支付宝(杭州)信息技术有限公司 基于gpu片上内存的数据处理方法、装置及系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106126372A (zh) * 2016-06-16 2016-11-16 上海天玑科技股份有限公司 Heterogeneous disaster recovery apparatus and method for Oracle Exadata all-in-one machines
CN108596331A (zh) * 2018-04-16 2018-09-28 浙江大学 (Zhejiang University) Optimization method for a cellular neural network hardware architecture
CN108769684A (zh) * 2018-06-06 2018-11-06 郑州云海信息技术有限公司 Image processing method and apparatus based on the WebP image compression algorithm
CN109447241A (zh) * 2018-09-29 2019-03-08 西安交通大学 (Xi'an Jiaotong University) Dynamically reconfigurable convolutional neural network accelerator architecture for the Internet of Things field
US20190392297A1 (en) * 2016-12-30 2019-12-26 Intel Corporation Deep learning hardware
CN110727911A (zh) * 2018-07-17 2020-01-24 展讯通信（上海）有限公司 (Spreadtrum Communications (Shanghai) Co., Ltd.) Matrix operation method and apparatus, storage medium, and terminal

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7451297B2 (en) * 2005-06-01 2008-11-11 Microsoft Corporation Computing system and method that determines current configuration dependent on operand input from another configuration
CN100557581C (zh) * 2008-05-15 2009-11-04 中国人民解放军国防科学技术大学 (National University of Defense Technology) Dataflow-oriented cache management method
US10795815B2 (en) * 2016-05-27 2020-10-06 Arm Limited Method and apparatus for maintaining data coherence in a non-uniform compute device
US10146738B2 (en) * 2016-12-31 2018-12-04 Intel Corporation Hardware accelerator architecture for processing very-sparse and hyper-sparse matrix data
CN107729990B (zh) * 2017-07-20 2021-06-08 上海寒武纪信息科技有限公司 (Shanghai Cambricon Information Technology Co., Ltd.) Apparatus and method for performing forward operations supporting discrete data representations
CN108958801B (zh) * 2017-10-30 2021-06-25 上海寒武纪信息科技有限公司 (Shanghai Cambricon Information Technology Co., Ltd.) Neural network processor and method of executing a vector maximum instruction using the processor
US11636327B2 (en) * 2017-12-29 2023-04-25 Intel Corporation Machine learning sparse computation mechanism for arbitrary neural networks, arithmetic compute microarchitecture, and sparsity for training mechanism

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106126372A (zh) * 2016-06-16 2016-11-16 上海天玑科技股份有限公司 Heterogeneous disaster recovery apparatus and method for Oracle Exadata all-in-one machines
US20190392297A1 (en) * 2016-12-30 2019-12-26 Intel Corporation Deep learning hardware
CN108596331A (zh) * 2018-04-16 2018-09-28 浙江大学 (Zhejiang University) Optimization method for a cellular neural network hardware architecture
CN108769684A (zh) * 2018-06-06 2018-11-06 郑州云海信息技术有限公司 Image processing method and apparatus based on the WebP image compression algorithm
CN110727911A (zh) * 2018-07-17 2020-01-24 展讯通信（上海）有限公司 (Spreadtrum Communications (Shanghai) Co., Ltd.) Matrix operation method and apparatus, storage medium, and terminal
CN109447241A (zh) * 2018-09-29 2019-03-08 西安交通大学 (Xi'an Jiaotong University) Dynamically reconfigurable convolutional neural network accelerator architecture for the Internet of Things field

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023088119A1 (en) * 2021-11-18 2023-05-25 International Business Machines Corporation Automatic data domain identification

Also Published As

Publication number Publication date
CN111522776A (zh) 2020-08-11
CN111522776B (zh) 2022-04-05
US11886347B2 (en) 2024-01-30
US20220350745A1 (en) 2022-11-03

Similar Documents

Publication Publication Date Title
Liang et al. Evaluating fast algorithms for convolutional neural networks on FPGAs
CN111291859B General matrix-matrix multiplication dataflow accelerator semiconductor circuit
WO2019128404A1 Matrix multiplier
Kim et al. FPGA-based CNN inference accelerator synthesized from multi-threaded C software
CN106846235B Convolution optimization method and system accelerated using NVIDIA Kepler GPU assembly instructions
US11886347B2 (en) Large-scale data processing computer architecture
CN114970294B PCG parallel optimization method and system for three-dimensional strain simulation based on the Sunway architecture
Yamazaki et al. One-sided dense matrix factorizations on a multicore with multiple GPU accelerators
CN111859277B Vectorized implementation method for sparse matrix-vector multiplication
CN110086602A Fast implementation method of the SM3 cryptographic hash algorithm based on GPU
CN116710912A Matrix multiplier and control method for a matrix multiplier
CN117539546A Sparse matrix-vector multiplication acceleration method and apparatus based on non-empty-column storage
CN117992396B Streaming tensor processor
Shahbahrami et al. FPGA implementation of parallel histogram computation
CN115965067B Neural network accelerator for ReRAM
CN109948787B Operation apparatus, chip, and method for neural network convolutional layers
CN111475205A Coarse-grained reconfigurable array structure design method based on dataflow decoupling
CN116431562A Multi-head attention mechanism fused computation allocation method based on an accelerated processor
CN116227615A Quantum search simulation method and system for supercomputing
CN112052941B Efficient storage-computation system for CNN convolutional layers and operation method thereof
CN115170381A Deep-learning-based visual SLAM acceleration system and method
CN111340224B Accelerated design method for CNN networks for low-resource embedded chips
CN113177877B Schur elimination accelerator for SLAM back-end optimization
CN118171710B NPU acceleration method for sparse matrix multiplication
US11714649B2 (en) RISC-V-based 3D interconnected multi-core processor architecture and working method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20933399

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20933399

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.05.2023)
