CN117581200A - Loading data from memory during dispatch - Google Patents

Loading data from memory during dispatch

Info

Publication number
CN117581200A
CN117581200A
Authority
CN
China
Prior art keywords
memory
data
tile
chiplet
dispatch
Prior art date
Legal status
Pending
Application number
CN202280045776.8A
Other languages
Chinese (zh)
Inventor
D. Vanesko
B. Hornung
T. M. Brewer
Current Assignee
Micron Technology Inc
Original Assignee
Micron Technology Inc
Priority date
Filing date
Publication date
Application filed by Micron Technology Inc filed Critical Micron Technology Inc
Publication of CN117581200A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655: Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/062: Securing storage systems
    • G06F 3/0622: Securing storage systems in relation to access
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671: In-line storage system
    • G06F 3/0673: Single storage device
    • G06F 3/0679: Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Advance Control (AREA)

Abstract

The dispatch element interfaces with the host processor and dispatches threads to one or more tiles of the hybrid thread fabric. A data structure in memory to be used by a tile may be identified by a starting address and size included as parameters provided by the host. The dispatch element sends a command to a memory interface to transfer the identified data to the tile that will use the data. Thus, when the tile begins processing the thread, the data is already available in the local memory of the tile and does not need to be accessed from the memory controller. Data may be transferred by the dispatch element while the tile is performing an operation for another thread, increasing the percentage of operations performed by the tile that perform useful work and decreasing the percentage of operations that merely retrieve data.

Description

Loading data from memory during dispatch
Priority application
The present application claims the benefit of priority from U.S. application Ser. No. 17/360,455, filed on June 28, 2021, which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of the present disclosure relate generally to Hybrid Thread Fabrics (HTFs), and more particularly, to methods of loading data from a memory interface during the assignment of processing threads to tiles in an HTF.
Background
Various computer architectures (e.g., the von Neumann architecture) conventionally use a shared memory for data, a bus for accessing the shared memory, an arithmetic unit, and a program control unit. However, moving data between a processor and memory may require a significant amount of time and energy, which in turn may constrain the performance and capacity of the computer system. In view of these limitations, new computing architectures and devices are desired to enable computing performance beyond the practice of transistor scaling (i.e., Moore's law).
A process may be initiated on a processing element. The processing element issues a memory load instruction to retrieve data to be processed from the memory device.
Drawings
The present disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure. However, the drawings should not be taken to limit the disclosure to the specific embodiments; they are for explanation and understanding only.
To facilitate the discussion of identifying any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element first appears.
FIG. 1 generally illustrates a first example of a first memory computing device in the context of a memory computing system, according to an embodiment.
FIG. 2 generally illustrates an example of a memory subsystem of a memory computing device according to an embodiment.
Fig. 3 generally illustrates an example of a Programmable Atomic Unit (PAU) for a memory controller according to an embodiment.
FIG. 4 illustrates an example of a Hybrid Thread Processor (HTP) accelerator of a memory computing device in accordance with an embodiment.
FIG. 5 illustrates an example of a representation of an HTF of a memory computing device according to an embodiment.
Fig. 6A generally illustrates an example of a chiplet system according to an embodiment.
FIG. 6B generally illustrates a block diagram showing various components in the chiplet system from the example of FIG. 6A.
Fig. 7 generally illustrates an example of a chiplet-based implementation of a memory computing device according to an embodiment.
FIG. 8 illustrates an example tiling of memory computing device chiplets according to an embodiment.
Fig. 9 is a flowchart showing the operations of a method performed by circuitry when loading data from memory during dispatch according to some embodiments of the present disclosure.
Fig. 10 is a flowchart showing the operations of a method performed by circuitry when loading data from memory during dispatch according to some embodiments of the present disclosure.
FIG. 11 illustrates a block diagram of an example machine with which, in which, or by which any one or more of the techniques (e.g., methodologies) discussed herein may be implemented.
Detailed Description
Recent advances in materials, devices, and integration techniques may be utilized to provide memory-centric computing topologies. For example, such topologies may enable improvements in computational efficiency and workload throughput for applications constrained by size, weight, or power requirements. The topologies may be used to facilitate low-latency computation near or within a memory or other data storage element. The approach may be particularly suitable for various computationally intensive operations with sparse lookups, such as in transform computations (e.g., fast Fourier transform (FFT) computations), or in applications such as neural networks or Artificial Intelligence (AI), financial analysis, or simulation or modeling for Computational Fluid Dynamics (CFD), Enhanced Acoustic Simulator for Engineers (EASE), Simulation Program with Integrated Circuit Emphasis (SPICE), and so forth.
The systems, devices, and methods discussed herein may include or use a memory computing system having a processor, or processing capability, provided in, near, or integrated with a memory or data storage component. Such systems are generally referred to herein as compute-near-memory (CNM) systems. A CNM system may be a node-based system in which the individual nodes in the system are coupled using a system scaling fabric. Each node may include or use a dedicated or general-purpose processor and user-accessible accelerators, along with a custom computing fabric, to facilitate intensive operations, particularly in environments where high cache miss rates are expected.
In an example, each node in a CNM system may have one or several host processors. Within each node, a dedicated hybrid thread processor may occupy discrete endpoints of the network on chip. The hybrid thread processor may access some or all of the memory in a particular node of the system, or the hybrid thread processor may access memory across a network of multiple nodes via a system scaling fabric. The custom computing fabric or hybrid thread fabric at each node may have its own processor(s) or accelerator(s) and may operate at a higher bandwidth than the hybrid thread processor. Different nodes in a CNM system may be configured differently, e.g., with different computing capabilities, different types of memory, different interfaces, or other differences. However, the nodes may be commonly coupled to share data and computing resources within a defined address space.
In an example, a CNM system or a node within the system may be configured by a user for custom operations. The user may provide instructions using a high-level programming language (e.g., C/C++), which may be compiled and mapped directly into the dataflow architecture of the system, or of one or more nodes in the CNM system. That is, nodes in the system may include hardware blocks (e.g., memory controllers, atomic units, other client accelerators, etc.) that may be configured to directly implement or support user instructions, thereby enhancing system performance and reducing latency.
In an example, CNM systems may be particularly suited to implementing hierarchies of instructions and nested loops (e.g., loops two, three, or more levels deep, or multidimensional loops). A standard compiler may be used to accept high-level language instructions and, in turn, compile directly into the dataflow architecture of one or more of the nodes. For example, a node in the system may include a hybrid thread fabric accelerator. The hybrid thread fabric accelerator may execute in the user space of the CNM system and may launch its own threads or sub-threads, which may operate in parallel. Each thread may be mapped to a different loop iteration to thereby support a multidimensional loop. With the capability to initiate such nested loops, among other capabilities, CNM systems can achieve significant time savings and latency improvements for computationally intensive operations.
The CNM system, or a node or component of the CNM system, may include or use various memory devices, controllers, interconnects, and the like. In an example, the system may include various interconnected nodes, and a node or group of nodes may be implemented using chiplets. Chiplets are an emerging technology for integrating various processing functionalities. Typically, a chiplet system consists of discrete chips, such as Integrated Circuits (ICs) on different substrates or dies, integrated on an interposer and packaged together. This arrangement is distinct from single chips (e.g., ICs), such as a system on chip (SoC), that contain different device blocks, such as Intellectual Property (IP) blocks, on one substrate (e.g., a single die), and from discrete packaged devices integrated on a board. Generally, chiplets provide more production benefits than single-die chips, including higher yields or reduced development costs. Fig. 6A and 6B, discussed below, generally illustrate examples of chiplet systems that can include, for example, a CNM system.
A dispatch element (e.g., host interface and dispatch module) may interface with the host processor and dispatch threads to one or more tiles of the hybrid thread fabric. The data structure in memory to be used by a tile may be identified by a starting address and size included as parameters provided by the host. The dispatch element sends a command to the memory interface to transfer the identified data to the tile that will use the data. Thus, when the tile begins processing threads, the data is already available in the local memory of the tile and does not need to be accessed from the memory controller.
Using a dispatch element to transfer data rather than having a tile request data reduces the number of commands that must be executed by a tile. For example, if some data to be operated on by a tile is preloaded, the accelerator resource need not request the data itself. The reduction in the number of commands executed by the accelerator resource may increase the responsiveness of the tile, reduce the power consumption of the tile, reduce the time period between thread creation instructions (also referred to as the start-up interval), or any suitable combination thereof. In addition, data may be transferred by the dispatch element while the tile is performing an operation for another thread, increasing the percentage of operations performed by the tile that perform useful work and decreasing the percentage of operations that merely retrieve data.
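The dispatch flow described above can be illustrated with a short sketch. The structure, field names, and helper functions below are hypothetical stand-ins rather than an API defined by this disclosure; the sketch only shows host-supplied start addresses and sizes being turned into memory-interface transfer commands before the thread is started on the tile.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical names throughout; the disclosure does not define this API.
struct Region { uint64_t start; uint64_t size; };

struct DispatchDescriptor {
    uint32_t kernel_id;            // which HTF kernel configuration to run
    std::vector<Region> preload;   // data structures identified by start + size
};

// Stand-ins for the memory-interface transfer command and thread-start signal.
void memory_interface_transfer(uint64_t start, uint64_t size, int tile) {
    std::printf("copy %llu bytes from 0x%llx into tile %d local memory\n",
                (unsigned long long)size, (unsigned long long)start, tile);
}
void start_thread(int tile, uint32_t kernel_id) {
    std::printf("start kernel %u on tile %d\n", kernel_id, tile);
}

// The dispatch element moves each identified region into the tile's local
// memory before the thread begins, so the tile never issues its own loads
// for the preloaded data.
void dispatch(const DispatchDescriptor& d, int target_tile) {
    for (const auto& r : d.preload)
        memory_interface_transfer(r.start, r.size, target_tile);
    start_thread(target_tile, d.kernel_id);
}
```

In this sketch the tile issues no load commands for the preloaded regions, which is the source of the reduced command count and shorter start-up interval discussed above.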
Fig. 1 generally illustrates a first example of a CNM system 102. The example of CNM system 102 includes a plurality of different memory compute nodes, each of which may include, for example, various CNM devices. Each node in the system may operate in its own Operating System (OS) domain (e.g., Linux, etc.). In an example, the nodes may exist collectively in a common OS domain of CNM system 102.
The example of fig. 1 includes an example of a first memory compute node 104 of a CNM system 102. CNM system 102 may have multiple nodes coupled using scaling fabric 106, e.g., including different examples of first memory computing node 104. In an example, the architecture of CNM system 102 may support scaling up to n different memory compute nodes (e.g., n=4096) using scaling fabric 106. As discussed further below, each node in CNM system 102 may be an assembly of multiple devices.
CNM system 102 may include a global controller for various nodes in the system, or a particular memory compute node in the system may optionally act as a host or controller for one or more other memory compute nodes in the same system. Accordingly, the various nodes in CNM system 102 may be similarly or differently configured.
In an example, each node in CNM system 102 may include a host system that uses the specified operating system. The operating system may be common or different among the various nodes in CNM system 102. In the example of fig. 1, the first memory computing node 104 includes a host system 108, a first switch 110, and a first memory computing device 112. Host system 108 may include a processor, which may include, for example, an X86, ARM, RISC-V, or other type of processor. The first switch 110 may be configured to facilitate communication between or among devices of the first memory compute node 104 or of CNM system 102, for example, using a proprietary or other communication protocol, collectively referred to herein as a chip-to-chip protocol interface (CTCPI). That is, CTCPI may include a dedicated interface that is specific to CNM system 102, or may include or use other interfaces, such as a Compute Express Link (CXL) interface, a peripheral component interconnect express (PCIe) interface, or a Chiplet Protocol Interface (CPI), among others. The first switch 110 may include a switch configured to use CTCPI. For example, the first switch 110 may include a CXL switch, a PCIe switch, a CPI switch, or another type of switch. In an example, the first switch 110 may be configured to couple differently configured endpoints. For example, the first switch 110 may be configured to translate packet formats, such as between PCIe and CPI formats, and so on.
CNM system 102 is described herein in various example configurations, such as a system comprising nodes, where each node may comprise various chips (e.g., processors, switches, memory devices, etc.). In an example, the first memory compute node 104 in CNM system 102 may include various chips implemented using chiplets. In the chiplet-based configuration of CNM system 102 described below, inter-chiplet communications, as well as additional communications within the system, may use a CPI network. The CPI network described herein is an example of CTCPI, that is, a chiplet-specific implementation of CTCPI. Thus, the structure, operation, and functionality of CPI described below apply equally to structures, operations, and functions as may be implemented using a non-chiplet-based CTCPI implementation. Unless explicitly indicated otherwise, any discussion herein of CPI applies equally to CTCPI.
The CPI interface includes a packet-based network that supports virtual channels to enable flexible and high-speed interactions between chiplets, such as chiplets that may make up portions of the first memory compute node 104 or CNM system 102. CPI can enable bridging from intra-chip networks to a broader chiplet network. For example, the advanced extensible interface (AXI) is a specification for intra-chip communications. However, the AXI specification covers a variety of physical design options, such as the number of physical channels, signal timing, power, and so on. Within a single chip, these options are typically selected to meet design goals such as power consumption, speed, and the like. However, to achieve flexibility in a chiplet-based memory computing system, an adapter using CPI, for example, may interface between the various AXI design options that can be implemented in the various chiplets. By enabling mapping of physical channels to virtual channels and encapsulating time-based signaling with a packetized protocol, CPI may be used to bridge an intra-chip network, for example within a particular memory compute node, across a broader chiplet network, such as across the first memory compute node 104 or across CNM system 102.
CNM system 102 may be scalable to include multi-node configurations. That is, a plurality of different examples of first memory compute node 104 or other differently configured memory compute nodes may be coupled using scaling fabric 106 to provide a scaled system. Each of the memory compute nodes may run its own operating system and may be configured to jointly coordinate system-wide resource usage.
In the example of fig. 1, a first switch 110 of the first memory computing node 104 is coupled to the scaling fabric 106. Scaling fabric 106 may provide a switch (e.g., CTCPI switch, PCIe switch, CPI switch, or other switch) that may facilitate communications among and between different memory compute nodes. In an example, scaling fabric 106 may facilitate various nodes communicating in a Partitioned Global Address Space (PGAS).
In an example, the first switch 110 from the first memory computing node 104 is coupled to one or more different memory computing devices, including for example the first memory computing device 112. The first memory computing device 112 may include a chiplet-based architecture referred to herein as a CNM chiplet. The packaged version of the first memory computing device 112 may include, for example, one or more CNM chiplets. For high bandwidth and low latency, the chiplets can be communicatively coupled using CTCPI.
In the example of fig. 1, the first memory computing device 112 may include a Network On Chip (NOC) or a first NOC 118. Typically, a NOC is an interconnected network within a device that connects a specific set of endpoints. In fig. 1, the first NOC 118 may provide communications and connectivity between various memories, computing resources, and ports of the first memory computing device 112.
In an example, the first NOC 118 may include a folded Clos topology, such as within each instance of a memory computing device or as a grid coupling multiple memory computing devices in a node. A folded Clos topology may offer various benefits; for example, a plurality of smaller-radix crossbars may be used to provide the functionality associated with a higher-radix crossbar topology. For example, a Clos topology may exhibit consistent latency and bisection bandwidth across the NOC.
The first NOC 118 may include a variety of different switch types, including hub switches, edge switches, and endpoint switches. Each of the switches may be configured as a crossbar that provides substantially uniform latency and bandwidth between input and output nodes. In an example, the endpoint switches and the edge switches may each include two separate crossbars, one for traffic destined for the hub switches and the other for traffic directed away from the hub switches. The hub switches may be configured as a single crossbar that switches all inputs to all outputs.
In an example, the hub switches may each have multiple ports (e.g., four or six ports each), e.g., depending on whether a particular hub switch participates in inter-chip communications. The number of hub switches involved in inter-chip communications may be set according to the inter-chip bandwidth requirements.
The first NOC 118 may support various payloads (e.g., from 8 to 64 byte payloads; other payload sizes may be similarly used) between computing elements and memory. In an example, the first NOC 118 may be optimized for relatively small payloads (e.g., 8-16 bytes) to efficiently handle access to sparse data structures.
In an example, the first NOC 118 may be coupled to an external host via a first physical layer interface 114, a PCIe slave module 116 or endpoint, and a PCIe master module 126 or root port. That is, the first physical layer interface 114 may include an interface to allow an external host processor to be coupled to the first memory computing device 112. The external host processor may optionally be coupled to one or more different memory computing devices, for example using a PCIe switch or other native protocol switch. Communication with the external host processor through a PCIe-based switch may limit device-to-device communication to that supported by the switch. In contrast, communication through a memory computing device-native protocol switch, e.g., using CTCPI, may allow for fuller communication between or among different memory computing devices, including support for a partitioned global address space, e.g., for creating worker threads and sending events.
In an example, the CTCPI protocol may be used by the first NOC 118 in the first memory computing device 112, and the first switch 110 may comprise a CTCPI switch. The CTCPI switch may allow CTCPI packets to be transferred from a source memory computing device (e.g., first memory computing device 112) to a different, destination memory computing device (e.g., on the same node or another node), e.g., without conversion to another packet format.
In an example, the first memory computing device 112 may include an internal host processor 122. The internal host processor 122 may be configured to communicate with the first NOC 118 or other components or modules of the first memory computing device 112, for example, using an internal PCIe master module 126, which may help eliminate the physical layer that would consume time and energy. In an example, the internal host processor 122 may be based on a RISC-V Instruction Set Architecture (ISA) processor, and may use the first physical layer interface 114 to communicate external to the first memory computing device 112, such as with other storage devices, networking devices, or other peripheral devices of the first memory computing device 112. The internal host processor 122 may control the first memory computing device 112 and may act as a proxy for operating system related functionality. The internal host processor 122 may include a relatively small number of processing cores (e.g., 2-4 cores) and a host memory device 124 (e.g., including Dynamic Random Access Memory (DRAM) modules).
In an example, the internal host processor 122 may include a PCI root port. When the internal host processor 122 is in use, one of its root ports may then be connected to the PCIe slave module 116. Another one of the root ports of the internal host processor 122 may be connected to the first physical layer interface 114, for example, to provide communication with external PCI peripheral devices. When the internal host processor 122 is disabled, the PCIe slave module 116 may then be coupled to the first physical layer interface 114 to allow the external host processor to communicate with the first NOC 118. In an example of a system having multiple memory computing devices, the first memory computing device 112 may be configured to act as a system host or controller. In this example, the internal host processor 122 may be in use, and other examples of internal host processors in respective other memory computing devices may be disabled.
The internal host processor 122 may be configured at power-up of the first memory computing device 112, for example, to allow host initialization. In an example, the internal host processor 122 and its associated data paths (e.g., including the first physical layer interface 114, PCIe slave module 116, etc.) may be configured from the input pins to the first memory computing device 112. One or more of the pins may be used to enable or disable the internal host processor 122 and configure the PCI (or other) data path accordingly.
In an example, the first NOC 118 may be connected to the scaling fabric 106 via a scaling fabric interface module 136 and a second physical layer interface 138. The scaling fabric interface module 136, or SIF, may facilitate communication between the first memory computing device 112 and a device space (e.g., a PGAS). The PGAS may be configured such that a particular memory computing device (e.g., the first memory computing device 112) may access memory or other resources on a different memory computing device (e.g., on the same node or a different node), for example, using a load/store paradigm. Various scalable fabric technologies may be used, including CTCPI, CPI, Gen-Z, PCI, or Ethernet bridged over CXL. Scaling fabric 106 may be configured to support various packet formats. In an example, scaling fabric 106 supports out-of-order packet communications or in-order packets, e.g., using a path identifier to spread bandwidth across multiple equivalent paths. Scaling fabric 106 may generally support remote operations such as remote memory reads, writes, and other built-in atomic operations, remote memory computing device send events, and remote memory computing device call and return operations.
In an example, the first NOC 118 may be coupled to one or more different memory modules, including, for example, the first memory device 128. The first memory device 128 may include various types of memory devices, such as low-power double data rate 5 (LPDDR5) synchronous DRAM (SDRAM) or graphics double data rate 6 (GDDR6) DRAM, etc. In the example of fig. 1, the first NOC 118 may coordinate communications with the first memory device 128 via a memory controller 130 that may be dedicated to a particular memory module. In an example, the memory controller 130 may include a memory module cache and an atomic operation module. The atomic operation module may be configured to provide relatively high-throughput atomic operators, including, for example, integer and floating-point operators. The atomic operation module may be configured to apply its operators to data within the memory module cache (e.g., including a Static Random Access Memory (SRAM) memory-side cache), thereby allowing back-to-back atomic operations using the same memory location with minimal throughput degradation.
The memory module cache may provide storage for frequently accessed memory locations, e.g., without having to re-access the first memory device 128. In an example, the memory module cache may be configured to cache only data of a particular instance of the memory controller 130. In an example, the memory controller 130 includes a DRAM controller configured to interface with the first memory device 128 (e.g., including a DRAM device). Memory controller 130 may provide access scheduling and bit error management, among other functions.
In an example, the first NOC 118 may be coupled to HTPs (HTP 140), HTFs (HTF 142), and host interface and dispatch modules (HIF 120). HIF 120 may be configured to facilitate access to host-based command request queues and response queues. In an example, HIF 120 may dispatch a new thread of execution on a processor or computing element of HTP 140 or HTF 142. In an example, HIF 120 may be configured to maintain workload balancing across HTP 140 and HTF 142 modules.
The hybrid thread processor or HTP 140 may include an accelerator, which may be based on the RISC-V instruction set, for example. The HTP 140 may include a highly threaded event driven processor in which threads may execute in a single instruction rotation, e.g., to maintain high instruction throughput. The HTP 140 includes relatively few custom instructions to support low overhead thread capabilities, event send/receive, and shared memory atom operators.
The hybrid thread fabric, or HTF 142, may include an accelerator, e.g., a non-von Neumann, coarse-grained reconfigurable processor. HTF 142 may be optimized for high-level language operations and data types (e.g., integer or floating point). In an example, HTF 142 may support data flow computation. The HTF 142 may be configured to use substantially all of the memory bandwidth available on the first memory computing device 112, such as when executing a memory-constrained compute kernel.
The HTP and HTF accelerators of CNM system 102 may be programmed using a variety of high-level structured programming languages. For example, the HTP and HTF accelerators may be programmed using C/C++, such as by using the LLVM compiler framework. The HTP accelerator may leverage, for example, an open-source compiler environment with various added custom instruction sets configured to improve memory access efficiency, provide a messaging mechanism, and manage events, among other things. In an example, the HTF accelerator may be designed to enable programming of the HTF 142 using a high-level programming language, and the compiler may generate a simulator configuration file or a binary file that runs on the HTF 142 hardware. The HTF 142 may provide a mid-level language for expressing algorithms precisely and concisely while hiding the configuration details of the HTF accelerator itself. In an example, the HTF accelerator tool chain may use an LLVM front-end compiler and the LLVM Intermediate Representation (IR) to interface with the HTF accelerator back end.
FIG. 2 generally illustrates an example of a memory subsystem 200 of a memory computing device according to an embodiment. An example of a memory subsystem 200 includes a controller 202, a PAU 208, and a second NOC 206. Controller 202 may include or use programmable atomic unit 208 to perform operations using information in memory device 204. In an example, the memory subsystem 200 includes a portion of the first memory computing device 112 from the example of fig. 1, such as a portion including the first NOC 118 or the memory controller 130.
In the example of fig. 2, the second NOC 206 is coupled to the controller 202, and the controller 202 may include a memory control module 210, a local cache module 212, and a built-in atomic module 214. In an example, the built-in atom module 214 may be configured to handle relatively simple, single-cycle integer atoms. The built-in atomic module 214 may execute atoms with the same throughput as, for example, normal memory read or write operations. In an example, an atomic memory operation may include a combination of storing data to memory, performing an atomic memory operation, and then responding with load data from memory.
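As a rough illustration of the "store, operate, respond with load data" sequence described for built-in atomics, the toy function below applies a fetch-and-add at a memory location and returns the prior value. The map-based memory model and the function name are assumptions made for the sketch.

```cpp
#include <cstdint>
#include <unordered_map>

// Toy model of a built-in single-cycle integer atomic (fetch-and-add).
std::unordered_map<uint64_t, int64_t> memory;

// The request carries an operand; the module applies the operator at the
// memory location and responds with the previous (load) value.
int64_t builtin_fetch_add(uint64_t address, int64_t operand) {
    int64_t previous = memory[address];   // load
    memory[address] = previous + operand; // operate and store
    return previous;                      // response data to the requester
}
```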
A local cache module 212 (which may include an SRAM cache, for example) may be provided to help reduce latency of repeatedly accessed memory locations. In an example, the local cache module 212 may provide a read buffer for sub-memory line accesses. The local cache module 212 is particularly beneficial for computing elements with relatively little or no data cache. In some example embodiments, the local cache module 212 is a 2 kilobyte read-only cache.
Memory control module 210 (which may include, for example, a DRAM controller) may provide low-level request buffering and scheduling, for example, to provide efficient access to memory devices 204 (which may include, for example, DRAM devices). In an example, the memory device 204 may include or use, for example, a GDDR6 DRAM device having a 16Gb density and a 64Gb/sec peak bandwidth. Other devices may be similarly used.
In an example, the PAU 208 can include single-cycle or multi-cycle operators that can be configured to perform, for example, integer addition or more complex multi-instruction operations (e.g., Bloom filter insert). In an example, the PAU 208 can be configured to perform load and store-to-memory operations. The PAU 208 may be configured to facilitate interaction with the controller 202 to atomically perform user-defined operations using a RISC-V ISA with a set of specialized instructions.
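A Bloom filter insert is one example of the multi-instruction operation mentioned above. The sketch below shows, in plain C++, the kind of read-modify-write sequence a PAU might execute atomically; the filter size, hash choices, and class layout are assumptions for the sketch, not details from this disclosure.

```cpp
#include <array>
#include <cstdint>

// Illustrative 512-bit Bloom filter of the sort a programmable atomic could
// maintain in memory near the controller.
struct BloomFilter512 {
    std::array<uint64_t, 8> bits{};  // 512 bits of filter state

    static uint32_t hash(uint64_t key, uint32_t seed) {
        uint64_t h = key * 0x9E3779B97F4A7C15ull + seed;
        h ^= h >> 33;
        return static_cast<uint32_t>(h % 512);
    }

    // Several dependent loads, bit operations, and stores that would be
    // performed atomically with respect to other requests to the same line.
    void insert(uint64_t key) {
        for (uint32_t seed = 0; seed < 3; ++seed) {
            uint32_t bit = hash(key, seed);
            bits[bit / 64] |= (1ull << (bit % 64));
        }
    }

    bool maybe_contains(uint64_t key) const {
        for (uint32_t seed = 0; seed < 3; ++seed) {
            uint32_t bit = hash(key, seed);
            if (!(bits[bit / 64] & (1ull << (bit % 64)))) return false;
        }
        return true;
    }
};
```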
Programmable atomic requests, such as those received from hosts on or off the node, may be routed to the PAU 208 via the second NOC 206 and the controller 202. In an example, a custom atomic operation (e.g., performed by the PAU 208) may be identical to a built-in atomic operation (e.g., performed by the built-in atomic module 214), except that the programmable atomic operation may be defined or programmed by a user rather than by a system architect. In an example, a programmable atomic request packet may be sent to the controller 202 through the second NOC 206, and the controller 202 may identify the request as a custom atomic. The controller 202 may then forward the identified request to the PAU 208.
Fig. 3 generally illustrates an example of a PAU 302 for use with a memory controller according to an embodiment. In an example, the PAU 302 can include or correspond to the PAU 208 from the example of fig. 2. That is, fig. 3 illustrates components in an example of a PAU 302, such as those described above with respect to fig. 2 (e.g., in PAU 208) or fig. 1 (e.g., in an atomic operations module of memory controller 130). As illustrated in fig. 3, the PAU 302 includes a PAU processor or PAU core 306, a PAU thread control 304, an instruction SRAM 308, a data cache 310, and a memory interface 312 to interface with a memory controller 314. In an example, the memory controller 314 includes an example of the controller 202 from the example of fig. 2.
In an example, the PAU core 306 is a pipelined processor such that multiple stages of different instructions are executed together per clock cycle. The PAU core 306 may include a barrel multithreaded processor in which the thread control 304 circuitry switches between different register files (e.g., register sets containing the current processing state) at each clock cycle. This enables efficient context switching between currently executing threads. In an example, the PAU core 306 supports eight threads, resulting in eight register files. In an example, some or all of the register files are not integrated into the PAU core 306, but instead reside in the local data cache 310 or instruction SRAM 308. This reduces circuit complexity in the PAU core 306 by eliminating conventional flip-flops for registers in such memories.
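The barrel-multithreading behavior described above can be sketched as round-robin selection among per-thread register files. The structure below is a software analogy only; the actual thread control 304 is hardware.

```cpp
#include <array>
#include <cstdint>

// One register file per hardware thread; eight threads, as in the example.
struct RegisterFile { std::array<uint64_t, 32> regs{}; uint64_t pc = 0; };

struct BarrelScheduler {
    std::array<RegisterFile, 8> files;  // one register set per thread
    unsigned next = 0;

    // Called once per clock cycle: the pipeline uses a different register
    // file each cycle, so switching thread context costs nothing.
    RegisterFile& select_for_this_cycle() {
        RegisterFile& rf = files[next];
        next = (next + 1) % files.size();  // round-robin rotation
        return rf;
    }
};
```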
The local PAU memory can include instruction SRAM 308, which can include instructions for various atoms, for example. The instructions include sets of instructions to support atomic operators for various application loads. When an atomic operator is requested, for example, by an application chiplet, a set of instructions corresponding to the atomic operator is executed by the PAU core 306. In an example, the instruction SRAM 308 may be partitioned to establish sets of instructions. In this example, the particular programmable atomic operator requested by the requesting process may identify the programmable atomic operator by a partition number. The partition number may be established when a programmable atomic operator is registered with the PAU 302 (e.g., loaded into the PAU 302). Other metadata for the programmable instructions may be stored in memory local to the PAU 302 (e.g., in a partition table).
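A minimal sketch of the partition-number lookup described above follows. The table layout and function names are hypothetical; the point is only that a request names a partition number and the PAU resolves it to the instruction set registered earlier.

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

// Hypothetical registration table mapping a partition number to the location
// of a programmable atomic's instructions in the PAU instruction SRAM.
struct AtomicPartition {
    uint32_t sram_base;   // first instruction-SRAM entry of the partition
    uint32_t num_instr;   // number of instructions in the set
};

std::unordered_map<uint32_t, AtomicPartition> partition_table;

// Registering a programmable atomic records where its instructions live.
void register_atomic(uint32_t partition, AtomicPartition meta) {
    partition_table[partition] = meta;
}

// A request names only the partition number; the PAU resolves it here.
std::optional<AtomicPartition> lookup(uint32_t partition) {
    auto it = partition_table.find(partition);
    if (it == partition_table.end()) return std::nullopt;
    return it->second;
}
```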
In an example, the atomic operator manipulates a data cache 310, which is typically synchronized (e.g., flushed) when the thread of the atomic operator is completed. Thus, in addition to initial loading from external memory (e.g., from memory controller 314), latency of most memory operations may be reduced during execution of the programmable atomic operator thread.
When an executing thread attempts to issue a memory request, a pipelined processor (e.g., the PAU core 306) may encounter a problem if an underlying hazard condition would block the request. Here, the memory request is to retrieve data from the memory controller 314, whether from a cache on the memory controller 314 or from off-die memory. To address this issue, the PAU core 306 is configured to deny a memory request for a thread. In general, the PAU core 306 or thread control 304 may include circuitry to enable one or more thread rescheduling points in the pipeline. Here, the denial occurs at a point in the pipeline that is beyond (e.g., after) these thread rescheduling points. In an example, the hazard occurred beyond the rescheduling point; that is, an earlier instruction in the thread created the hazard after the memory request instruction passed the last thread rescheduling point prior to the pipeline stage in which the memory request could be made.
In an example, to deny a memory request, the PAU core 306 is configured to determine (e.g., detect) that there is a hazard on the memory indicated in the memory request. Here, a hazard denotes any condition under which allowing (e.g., performing) the memory request would result in an inconsistent state for the thread. In an example, the hazard is an in-flight memory request. Here, whether or not the data cache 310 contains data for the requested memory address, the presence of the in-flight memory request makes it uncertain what the data in the data cache 310 at that address should be. Thus, the thread must wait for the in-flight memory request to complete in order to operate on current data. The hazard is cleared when the in-flight memory request completes.
In an example, the hazard is a dirty cache line in the data cache 310 for the requested memory address. Although dirty cache lines generally indicate that the data in the cache is current and the memory controller version of this data is not current, problems may occur on thread instructions that do not operate from the cache. An example of such an instruction uses built-in atomic operators or other separate hardware blocks of the memory controller 314. In the context of a memory controller, the built-in atomic operator may be separate from the PAU 302 and not be able to access the data cache 310 or instruction SRAM 308 inside the PAU. If the cache line is dirty, the built-in atomic operator will not operate on the most recent data until the data cache 310 is flushed to synchronize the cache with other memory or off-die memory. This same situation may occur on other hardware blocks of the memory controller (e.g., cipher blocks, encoders, etc.).
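The two hazard cases above, an in-flight request to the same address and a dirty cache line that a block bypassing the PAU's data cache would not see, can be summarized in a small check. The data structures below are illustrative only.

```cpp
#include <cstdint>
#include <unordered_map>
#include <unordered_set>

// Simplified hazard model; all structures and names are assumptions.
struct HazardTracker {
    std::unordered_set<uint64_t> in_flight;    // addresses with pending requests
    std::unordered_map<uint64_t, bool> dirty;  // cache line -> dirty flag

    // Returns true if the request must be denied and the thread rescheduled.
    bool has_hazard(uint64_t address, bool bypasses_pau_cache) const {
        if (in_flight.count(address)) return true;  // in-flight request hazard
        auto it = dirty.find(address);
        if (bypasses_pau_cache && it != dirty.end() && it->second)
            return true;  // a non-PAU block would read stale memory
        return false;
    }
};
```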
Fig. 4 illustrates an example of an HTP accelerator, or HTP accelerator 400. According to an embodiment, the HTP accelerator 400 may comprise a portion of a memory computing device. In an example, the HTP accelerator 400 may comprise, or be comprised in, the HTP 140 from the example of fig. 1. The HTP accelerator 400 includes, for example, an HTP core 402, an instruction cache 404, a data cache 406, a translation block 408, a memory interface 410, and a thread controller 412. The HTP accelerator 400 may further include, for example, a dispatch interface 414 and a NOC interface 416 for interfacing with a NOC (e.g., the first NOC 118 from the example of fig. 1, the second NOC 206 from the example of fig. 2, or any other NOC).
In an example, the HTP accelerator 400 includes a module based on the RISC-V instruction set and may include a relatively small number of other or additional custom instructions to support a low-overhead, thread-capable Hybrid Thread (HT) language. The HTP accelerator 400 may include a highly threaded processor core (HTP core 402) in or with which threads may be executed in a single instruction rotation, e.g., to maintain high instruction throughput. In an example, a thread may be suspended while it waits for other pending events to complete. This may allow computing resources to be used efficiently on relevant work rather than on polling. In an example, multithreaded barrier synchronization may use efficient HTP-to-HTP and HTP-to/from-host messaging, e.g., allowing thousands of threads to be initialized or awakened within, e.g., tens of clock cycles.
In an example, the dispatch interface 414 may include functional blocks of the HTP accelerator 400 to handle hardware-based thread management. That is, dispatch interface 414 may manage the dispatch of work to HTP core 402 or other accelerator. However, non-HTP accelerators are typically not able to dispatch work. In an example, work dispatched from a host may use a dispatch queue residing in, for example, host main memory (e.g., DRAM-based memory). On the other hand, work dispatched from the HTP accelerator 400 may use a dispatch queue residing in SRAM, such as in a dispatch to a target HTP accelerator 400 within a particular node.
In an example, the HTP cores 402 may include one or more cores that execute instructions on behalf of threads. That is, the HTP core 402 may include instruction processing blocks. The HTP core 402 may further include or may be coupled to a thread controller 412. Thread controller 412 may provide thread control and status for each active thread within HTP core 402. The data cache 406 may include caches for host processors (e.g., for local and remote memory computing devices, including for the HTP core 402), and the instruction cache 404 may include caches for use by the HTP core 402. In an example, the data cache 406 may be configured for read and write operations and the instruction cache 404 may be configured for read-only operations.
In an example, the data cache 406 is a small cache provided per hardware thread. The data cache 406 may temporarily store data for use by the owning thread. The data cache 406 may be managed by hardware or software in the HTP accelerator 400. For example, as load and store operations are performed by the HTP core 402, the hardware may be configured to automatically allocate or evict lines as needed. Software, such as software using RISC-V instructions, may determine which memory accesses should be cached and when a line should be invalidated or written back to other memory locations.
Caching data on the HTP accelerator 400 has various benefits, including allowing larger, more efficient accesses to the memory controller and helping an executing thread avoid stalling. However, there are situations where using a cache causes inefficiency. Examples include accesses in which data is accessed only once, causing cache lines to thrash. To help address this problem, the HTP accelerator 400 may use a set of custom load instructions that force a load instruction to check for a cache hit and, on a cache miss, to issue a memory request for the requested operand without placing the obtained data into the data cache 406. The HTP accelerator 400 thus supports various different types of load instructions, including non-cached and cache-line loads. A non-cached load instruction uses cached data if dirty data is present in the cache; it ignores clean data in the cache and does not write the accessed data to the data cache. For a cache-line load instruction, a complete data cache line (e.g., comprising 64 bytes) may be loaded from memory into the data cache 406, and the addressed memory may be loaded into a specified register. These loads may use the cached data if clean or dirty data is in the data cache 406. If the referenced memory location is not in the data cache 406, then the entire cache line may be accessed from memory. Using cache-line load instructions may reduce cache misses when sequential memory locations are being referenced (e.g., memory copy operations), but may waste memory and bandwidth at the NOC interface 416 if the referenced memory data is not used.
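A rough software model of the two load flavors follows. It collapses cache lines to single words and uses invented names, so it only mirrors the decision logic described above: a non-cached load consults the cache only for dirty data and does not allocate, while a cache-line load hits on clean or dirty data and fills the line on a miss.

```cpp
#include <cstdint>
#include <unordered_map>

// Toy cache model; structure and names are illustrative only.
struct CacheLine { uint64_t data; bool dirty; };
std::unordered_map<uint64_t, CacheLine> data_cache;

uint64_t read_memory(uint64_t addr) { return addr * 2; /* stand-in for DRAM */ }

// Non-cached load: use the cache only if the line is dirty; otherwise go to
// memory and do not allocate a line for the returned data.
uint64_t load_nc(uint64_t addr) {
    auto it = data_cache.find(addr);
    if (it != data_cache.end() && it->second.dirty) return it->second.data;
    return read_memory(addr);
}

// Cache-line load: hit on clean or dirty data, otherwise fill on the miss.
uint64_t load_cl(uint64_t addr) {
    auto it = data_cache.find(addr);
    if (it != data_cache.end()) return it->second.data;
    uint64_t value = read_memory(addr);
    data_cache[addr] = {value, false};  // allocate a clean line
    return value;
}
```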
In an example, the HTP accelerator 400 includes non-cached custom store instructions. Non-cached store instructions may help avoid thrashing the data cache 406 with write data that is not written sequentially to memory.
In an example, the HTP accelerator 400 further includes a translation block 408. The translation block 408 may include a virtual-to-physical translation block for local memory of the memory computing device. For example, a host processor (e.g., HTP core 402) may execute a load or store instruction, and the instruction may generate a virtual address. The virtual address may be translated to a physical address of the host processor, for example, using a translation table from translation block 408. For example, the memory interface 410 may include an interface between the HTP core 402 and the NOC interface 416.
FIG. 5 illustrates an example of a representation of an HTF 500 of a memory computing device in accordance with an embodiment. In an example, HTF 500 may include or comprise HTF 142 from the example of fig. 1. HTF 500 is a coarse-grained, reconfigurable computing fabric that may be optimized for high-level language operand types and operators (e.g., using C/C++ or another high-level language). In an example, HTF 500 may include configurable, n-bit wide (e.g., 512-bit wide) data paths interconnecting hardened single instruction/multiple data (SIMD) arithmetic units.
In an example, HTF 500 includes HTF cluster 502, which includes a plurality of HTF tiles (including example tile 504, or tile N). Each HTF tile may include one or more computing elements having local memory and arithmetic functions. For example, each tile may include a computation pipeline that supports integer and floating-point operations. In an example, the data paths, computing elements, and other infrastructure may be implemented as hardened IP to provide maximum performance while minimizing power consumption and reconfiguration time.
In the example of fig. 5, the tiles comprising HTF cluster 502 are arranged linearly, and each tile in the cluster may be coupled to one or more other tiles in HTF cluster 502. In the example of fig. 5, example tile 504, or tile N, is coupled to four other tiles: to base tile 510 (e.g., tile N-2) via a port labeled SF IN N-2, to adjacent tile 512 (e.g., tile N-1) via a port labeled SF IN N-1, to tile N+1 via a port labeled SF IN N+1, and to tile N+2 via a port labeled SF IN N+2. Example tile 504 may be coupled to the same tiles or other tiles via respective output ports, such as the output ports labeled SF OUT N-1, SF OUT N-2, SF OUT N+1, and SF OUT N+2. In this example, the ordered list of names of the various tiles is a conceptual indication of the locations of the tiles. In other examples, tiles comprising HTF cluster 502 may be arranged in a grid or other configuration, with each tile similarly coupled to one or several of its nearest neighbors in the grid. Tiles disposed at the edges of a cluster may optionally have fewer connections to neighboring tiles. For example, tile N-2, or base tile 510, in the example of fig. 5 may be coupled only to adjacent tile 512 (tile N-1) and example tile 504 (tile N). Similarly, fewer or additional inter-tile connections may be used.
The HTF cluster 502 may further include a memory interface module including a first memory interface module 506. The memory interface module may couple the HTF cluster 502 to a NOC, such as the first NOC 118. In an example, a memory interface module may allow tiles within a cluster to make requests to other locations in a memory computing system (e.g., in the same or different nodes in the system). That is, the representation of HTF 500 may include a portion of a larger organization that may be distributed across multiple nodes, such as having one or more HTF tiles or HTF clusters at each of the nodes. Requests may be made between tiles or nodes in the context of a larger fabric.
In the example of fig. 5, tiles in HTF cluster 502 are coupled using a Synchronous Fabric (SF). The synchronization fabric may provide communication between a particular tile and its neighboring tiles in the HTF cluster 502, as described above. Each HTF cluster 502 may further include an Asynchronous Fabric (AF) that may provide, for example, communication among tiles in the cluster, memory interfaces in the cluster, and dispatch interfaces 508 in the cluster.
In an example, the synchronous fabric may exchange messages that include data and control information. The control information may include, among other things, instruction RAM address information or a thread identifier. The control information may be used to set up a data path, and a data message field may be selected as a source for the path. In general, the control fields may be provided or received earlier so that they can be used to configure the data path. For example, to help minimize any delay through the synchronous-domain pipeline in a tile, the control information may arrive at a tile a few clock cycles before the data field. Various registers may be provided to help coordinate the timing of the data flow in the pipeline.
In an example, each tile in HTF cluster 502 may include multiple memories. Each memory may have the same width as the data path (e.g., 512 bits) and may have a specified depth, such as in the range of 512 to 1024 elements. The tile memories may be used to store data that supports data path operations. For example, the stored data may include constants loaded as part of a kernel's cluster configuration, or may include variables calculated as part of the data flow. In an example, the tile memories may be written from the asynchronous fabric as a data transfer from another synchronous domain, or may hold, for example, the result of a load operation initiated by another synchronous domain. The tile memory may be read via synchronous data path instruction execution in the synchronous domain.
In an example, each tile in HTF cluster 502 may have a dedicated instruction RAM (INST RAM). In an example of an HTF cluster 502 with 16 tiles and instruction RAM instances with 64 entries, the cluster may allow algorithms to be mapped with up to 1024 multiply-shift and/or Arithmetic Logic Unit (ALU) operations. The various tiles may optionally be pipelined together, e.g., using the synchronous fabric, to allow data flow computation with minimal memory access, thereby minimizing latency and reducing power consumption. In an example, the asynchronous fabric may allow memory references to proceed in parallel with computation, thereby providing more efficient streaming kernels. In an example, the various tiles may include built-in support for loop-based constructs and may support nested looping kernels.
The synchronous fabric may allow multiple tiles to be pipelined, e.g., without data queuing. For example, tiles participating in a synchronization domain may act as a single pipelined data path. The first or base tile of the synchronization domain (e.g., tile N-2 in the example of fig. 5) may initiate a thread of work through the pipelined tiles. The base tile may be responsible for initiating work on a predefined cadence, referred to herein as the spoke count. For example, if the spoke count is 3, then the base tile may initiate work every third clock cycle.
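The spoke-count cadence can be shown with a trivial loop: with a spoke count of 3, a launch occurs on every third cycle. The function below is purely illustrative.

```cpp
#include <cstdio>

// Sketch of the base tile's launch cadence for a given spoke count.
void run_base_tile(int spoke_count, int total_cycles) {
    for (int cycle = 0, spoke = 0; cycle < total_cycles; ++cycle) {
        if (spoke == 0)
            std::printf("cycle %d: base tile launches a thread\n", cycle);
        spoke = (spoke + 1) % spoke_count;  // wraps at the spoke count
    }
}
```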
In an example, the synchronization domain includes a set of connected tiles in the HTF cluster 502. Execution of a thread may begin at the base tile of the domain and may proceed from the base tile to other tiles in the same domain via the synchronous fabric. The base tile may provide the instruction to be executed for the first tile. By default, the first tile may provide the same instruction for the other connected tiles to execute. However, in some examples, the base tile or a subsequent tile may conditionally specify or use an alternate instruction. The alternate instruction may be selected by having the tile's data path produce a Boolean condition value, and the Boolean value may then be used to select between the instruction set of the current tile and the alternate instruction.
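The conditional instruction selection can be reduced to a one-line choice, sketched below with hypothetical field names: the Boolean produced by the data path picks between the instruction handed down from the base tile and the tile's configured alternate.

```cpp
#include <cstdint>

// Illustrative model of per-tile instruction selection in a synchronous domain.
struct TileConfig {
    uint32_t default_instr;    // instruction index supplied by the base tile
    uint32_t alternate_instr;  // alternate instruction configured in the tile
};

uint32_t select_instruction(const TileConfig& cfg, bool condition_from_datapath) {
    return condition_from_datapath ? cfg.alternate_instr : cfg.default_instr;
}
```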
The asynchronous fabric may be used to perform operations that occur asynchronously with respect to the synchronous domain. Each tile in HTF cluster 502 may include an interface to the asynchronous fabric. The inbound interface may include, for example, a first-in/first-out (FIFO) buffer or queue (e.g., AF IN QUEUE) to provide storage for messages that cannot be immediately processed. Similarly, the outbound interface of the asynchronous fabric may include a FIFO buffer or queue (e.g., AF OUT QUEUE) to provide storage for messages that cannot be immediately sent out.
In an example, the messages in the AF may be classified as data messages or control messages. The data message may contain SIMD width data values written to tile memory 0 (mem_0) or memory 1 (mem_1). The control message may be configured to control the thread to create, release resources, or issue external memory references.
The tiles in the HTF cluster 502 may perform various computing operations for the HTF. A computing operation may be performed by configuring the data path within a tile. In an example, a tile includes two functional blocks that perform computing operations for the tile: a multiply and shift operation block (MS OP) and an arithmetic, logic, and bit operation block (ALB OP). The two blocks may be configured to perform pipelined operations, such as multiply and add, or shift and add, among others.
In an example, each instance of a memory computing device in a system may have a complete set of support instructions for its operator blocks (e.g., MS OP and ALB OP). In this case, binary compatibility may be achieved across all devices in the system. However, in some examples, it is helpful to maintain a set of basic functionality and optional instruction set categories, e.g., to meet various design tradeoffs, such as die size. The method may be similar to the manner in which the RISC-V instruction set has a base set and a plurality of optional instruction subsets.
In an example, the example tile 504 may include spoke RAM. The spoke RAM may be used to specify which input (e.g., from among the four SF tile inputs and the base tile input) is the primary input for each clock cycle. The spoke RAM read address input may originate from a counter that counts from zero to the spoke count minus one. In an example, different spoke counts may be used on different tiles (e.g., within the same HTF cluster 502) to allow the number of slices, or unique tile instances, used by an inner loop to determine the performance of a particular application or instruction set. In an example, the spoke RAM may specify when a synchronous input is to be written to tile memory, for instance when multiple inputs for a particular tile instruction are used and one of the inputs arrives before the others. The input arriving early may be written to tile memory and may later be read when all inputs are available. In this example, the tile memory may be accessed as a FIFO memory, and the FIFO read and write pointers may be stored in a register-based memory region or structure in the tile memory.
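The spoke counter and per-cycle input selection can be pictured with the following C sketch; spoke_ram_t and spoke_ram_tick are illustrative names, and the maximum spoke count is an assumption.

#include <stdint.h>

#define MAX_SPOKES 8u /* assumed upper bound, for illustration only */

typedef struct {
    uint8_t spoke_count;               /* cadence at which the base tile starts work */
    uint8_t primary_input[MAX_SPOKES]; /* spoke RAM: primary input per spoke slot    */
    uint8_t counter;                   /* counts 0 .. spoke_count-1, one per clock   */
} spoke_ram_t;

/* Advance one clock cycle: return the primary input selected for this cycle
 * and wrap the counter after spoke_count cycles (spoke_count assumed >= 1). */
static uint8_t spoke_ram_tick(spoke_ram_t *s)
{
    uint8_t input = s->primary_input[s->counter];
    s->counter = (uint8_t)((s->counter + 1u) % s->spoke_count);
    return input;
}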
Fig. 6A and 6B generally illustrate examples of chiplet systems that can be used to implement one or more aspects of CNM system 102. As similarly mentioned above, the nodes in CNM system 102, or devices within the nodes in CNM system 102, may include a chiplet-based architecture or CNM chiplets. A packaged memory computing device may include, for example, one, two, or four CNM chiplets. The chiplets can be interconnected using high-bandwidth, low-latency interconnects (e.g., using CPI interfaces). Generally, a chiplet system is made up of discrete modules (each a "chiplet") that are integrated on an interposer and, in many examples, are interconnected as desired through one or more established networks to provide a system with the desired functionality. The interposer and the included chiplets can be packaged together to facilitate interconnection with other components of a larger system. Each chiplet can include one or more individual ICs or "chips", potentially in combination with discrete circuit components, and can be coupled to a respective substrate to facilitate attachment to the interposer. Most or all chiplets in a system can be individually configured for communication over the established networks.
Configuring a chiplet as an individual module of a system is different than implementing such a system on a single chip containing different device blocks (e.g., IP blocks) on one substrate (e.g., a single die), such as a system-on-chip (SoC), or discrete packaged devices integrated on a Printed Circuit Board (PCB). In general, chiplets provide better performance (e.g., lower power consumption, reduced latency, etc.) than discrete packaged devices, and chiplets provide greater production benefits than single-die chips. These production benefits may include higher yields or reduced development costs and time.
A chiplet system can include, for example, one or more application (or processor) chiplets and one or more support chiplets. Here, the distinction between application chiplets and support chiplets is simply a reference to the likely design scenarios for the chiplet system. Thus, for example only, a synthetic vision chiplet system can include an application chiplet to generate the synthetic vision output along with support chiplets, such as a memory controller chiplet, a sensor interface chiplet, or a communication chiplet. In a typical use case, a synthetic vision designer may design the application chiplet and obtain the support chiplets from other parties. Thus, design expenditure (e.g., in terms of time or complexity) is reduced by avoiding the design and production of functionality embodied in the support chiplets.
Chiplets also support tight integration of IP blocks that might otherwise be difficult to achieve, such as IP blocks fabricated using different processing techniques or using different feature sizes (or using different contact techniques or pitches). Thus, multiple ICs or IC assemblies having different physical, electrical or communication properties may be assembled in a modular manner to provide an assembly having various desired functionalities. The chiplet system can also facilitate adaptation to accommodate the needs of different larger systems into which the chiplet system is to be incorporated. In an example, an IC or other assembly that may be optimized for power, speed, or heat generation of a particular function (as may occur on a sensor) may be more easily integrated with other devices than attempting to integrate with the other devices on a single die. In addition, by reducing the overall size of the die, the yield of the chiplet tends to be higher than that of more complex single-die devices.
Fig. 6A and 6B generally illustrate examples of a chiplet system according to embodiments. Fig. 6A is a representation of a chiplet system 602 mounted on a peripheral board 604 that can be connected to a broader computer system, e.g., over PCIe. The chiplet system 602 includes a package substrate 606, an interposer 608, and four chiplets: an application chiplet 610, a host interface chiplet 612, a memory controller chiplet 614, and a memory device chiplet 616. Other systems may include many additional chiplets to provide additional functionality, as will be apparent from the discussion below. The package of the chiplet system 602 is illustrated with a lid or cover 618, although other packaging techniques and structures for the chiplet system can be used. Fig. 6B is a block diagram labeling the components in the chiplet system for clarity.
The application chiplet 610 is illustrated as including a chiplet system NOC 620 to support a chiplet network 622 for inter-chiplet communications. In an example embodiment, the chiplet system NOC 620 can be included on an application chiplet 610. In an example, the first NOC 118 from the example of fig. 1 may be defined in response to a selected support chiplet (e.g., host interface chiplet 612, memory controller chiplet 614, and memory device chiplet 616), enabling a designer to select an appropriate number of chiplet network connections or switches for the chiplet system NOC 620. In an example, the chiplet system NOC 620 can be located on a separate chiplet or within an interposer 608. In the example as discussed herein, the chiplet system NOC 620 implements a CPI network.
In an example, the chiplet system 602 can include or comprise a portion of the first memory computing node 104 or the first memory computing device 112. That is, the various blocks or components of the first memory computing device 112 may include chiplets that can be mounted on the peripheral board 604, the package substrate 606, and the interposer 608. The interface components of the first memory computing device 112 may generally include the host interface chiplet 612. The memory and memory control related components of the first memory computing device 112 may generally include the memory controller chiplet 614. The various accelerator and processor components of the first memory computing device 112 may generally include the application chiplet 610 or an instance thereof, and so forth.
The CPI interface, which may be used for communication between or among chiplets in a system, for example, is a packet-based network that supports virtual channels to enable flexible and high-speed interaction between chiplets. CPI enables bridging from intra-chiplet networks to the chiplet network 622. For example, AXI is a specification widely used to design intra-chip communications. However, the AXI specification covers a wide variety of physical design options, such as the number of physical channels, signal timing, power, etc. Within a single chip, these options are typically selected to meet design goals, such as power consumption, speed, and the like. However, to achieve flexibility in a chiplet system, an adapter such as CPI is used to interface between the various AXI design options that may be implemented in the various chiplets. By enabling mapping of physical channels to virtual channels and encapsulating time-based signaling with a packetized protocol, CPI bridges intra-chiplet networks across the chiplet network 622.
CPI may use a variety of different physical layers to transmit packets. The physical layer may include simple conductive connections, drivers to increase voltage or otherwise facilitate transmitting signals over longer distances. An example of one such physical layer may include an Advanced Interface Bus (AIB), which may be implemented in the interposer 608 in various examples. The AIB transmits and receives data using a source synchronous data transfer with a forwarded clock. Packets are transferred across the AIB at a Single Data Rate (SDR) or Double Data Rate (DDR) with respect to the transmitted clock. The AIB supports various channel widths. The channel may be configured to have a symmetric number of Transmit (TX) and Receive (RX) input/outputs (I/O) or to have an asymmetric number of transmitters and receivers (e.g., a full transmitter or a full receiver). Depending on which chiplet provides the master clock, the channel may act as either an AIB master channel or a slave channel. The AIB I/O unit supports three clocked modes: asynchronous (i.e., non-clocked), SDR, and DDR. In various examples, a non-clocked mode is used for the clock and some control signals. The SDR mode may use a dedicated SDR-only I/O unit or a dual-purpose SDR/DDR I/O unit.
In an example, the CPI packet protocol (e.g., point-to-point or routable) may use symmetric receive and transmit I/O units within an AIB channel. The CPI streaming protocol allows for more flexible use of the AIB I/O units. In an example, an AIB channel for streaming mode may configure the I/O units as all TX, all RX, or half TX and half RX. The CPI packet protocol may use the AIB channel in SDR or DDR modes of operation. In an example, the AIB channel is configured in increments of 80 I/O units (i.e., 40 TX and 40 RX) for SDR mode and 40 I/O units for DDR mode. The CPI streaming protocol may use the AIB channel in SDR or DDR modes of operation. Here, in an example, the AIB channel is in increments of 40 I/O units for both SDR and DDR modes. In an example, each AIB channel is assigned a unique interface identifier. The identifier is used during CPI reset and initialization to determine paired AIB channels across adjacent chiplets. In an example, the interface identifier is a 20-bit value that includes a 7-bit chiplet identifier, a 7-bit column identifier, and a 6-bit link identifier. The AIB physical layer uses an AIB out-of-band shift register to transmit the interface identifier. Using bits 32 to 51 of the shift register, the 20-bit interface identifier is transferred across the AIB interface in both directions.
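A small C sketch of the 20-bit interface identifier and its placement in the out-of-band shift register follows; the source gives only the field widths and the shift-register bit positions, so the ordering of the three fields within the identifier is an assumption of this sketch.

#include <stdint.h>

/* Pack a 20-bit AIB interface identifier from a 7-bit chiplet identifier,
 * a 7-bit column identifier, and a 6-bit link identifier. The bit ordering
 * chosen here (chiplet high, link low) is assumed for illustration. */
static uint32_t aib_pack_interface_id(uint32_t chiplet, uint32_t column, uint32_t link)
{
    return ((chiplet & 0x7Fu) << 13) | ((column & 0x7Fu) << 6) | (link & 0x3Fu);
}

/* Place the identifier into bits 32..51 of the out-of-band shift register. */
static uint64_t aib_place_in_shift_register(uint64_t shift_reg, uint32_t interface_id)
{
    shift_reg &= ~(0xFFFFFull << 32);                         /* clear bits 32..51 */
    shift_reg |= ((uint64_t)(interface_id & 0xFFFFFu)) << 32;
    return shift_reg;
}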
The AIB defines a set of stacked AIB channels as an AIB channel column. An AIB channel column has some number of AIB channels plus an auxiliary (AUX) channel. The AUX channel contains signals used for AIB initialization. All AIB channels within a column (other than the AUX channel) have the same configuration (e.g., all TX, all RX, or half TX and half RX) and have the same number of data I/O signals. In an example, the AIB channels are numbered in continuously increasing order starting with the AIB channel adjacent to the AUX channel. The AIB channel adjacent to the AUX channel is defined as AIB channel zero.
In general, the CPI interface on individual chiplets may include serialization-deserialization (SERDES) hardware. SERDES interconnects are well suited for scenarios where high-speed signaling with low signal counts is desired. However, SERDES may result in additional power consumption and longer latency for multiplexing and demultiplexing, error detection or correction (e.g., using block-level Cyclic Redundancy Check (CRC)), link-level retry, or forward error correction. However, when low latency or power consumption is a major concern for ultra-short distance chiplet-to-chiplet interconnects, parallel interfaces with clock rates that allow data transfer with minimal latency may be utilized. CPI includes elements to minimize both latency and power consumption in these ultra-short distance chiplet interconnects.
For flow control, CPI employs a credit-based technique. The recipient (e.g., application chiplet 610) provides credit to the sender (e.g., memory controller chiplet 614) indicating the available buffers. In an example, the CPI receiver includes a buffer for each virtual channel within a given transmission time unit. Thus, if the CPI receiver supports five messages in time and a single virtual channel, the receiver has five buffers arranged in five rows (e.g., one row per unit time). If four virtual channels are supported, the recipient has twenty buffers arranged in five rows. Each buffer holds the payload of one CPI packet.
When a sender transmits to a receiver, the sender decrements the available credit based on the transmission. Once all the credits for the recipient are consumed, the sender stops sending packets to the recipient. This ensures that the recipient always has an available buffer to store the transmission.
As the recipient processes the received packet and frees up the buffer, the recipient communicates available buffer space to the sender. The sender may then use this credit return to allow transmission of additional information.
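The credit mechanism described in the preceding three paragraphs can be summarized in a short C sketch; cpi_sender_t and the helper names are assumptions for illustration, and the transmit step itself is abstracted away.

#include <stdbool.h>
#include <stdint.h>

/* Sender-side view of credit-based flow control: one credit per receiver
 * buffer; sending stops at zero credits and resumes when credits return. */
typedef struct {
    uint32_t credits; /* currently available receiver buffers */
} cpi_sender_t;

static bool cpi_try_send(cpi_sender_t *s)
{
    if (s->credits == 0)
        return false; /* no free buffer at the receiver: hold the packet */
    s->credits--;     /* one credit consumed per transmitted packet      */
    /* ... transmit the packet on the physical channel here ...          */
    return true;
}

static void cpi_credit_return(cpi_sender_t *s, uint32_t freed_buffers)
{
    s->credits += freed_buffers; /* receiver freed buffers; replenish credits */
}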
The example of fig. 6A includes a chiplet mesh network 624 using direct chiplet-to-chiplet technology without the need for a chiplet system NOC 620. The chiplet mesh network 624 may be implemented in the CPI or another chiplet-to-chiplet protocol. The chiplet mesh network 624 typically implements a pipeline of chiplets, with one chiplet acting as an interface to the pipeline and the other chiplets in the pipeline only interfacing with themselves.
Additionally, dedicated device interfaces, such as one or more industry standard memory interfaces (e.g., synchronous memory interfaces such as DDR5 or DDR6), may be used to connect a device to a chiplet. Connection of a chiplet system or individual chiplets to external devices (e.g., a larger system) can be made through a desired interface (e.g., a PCIe interface). In an example, this external interface may be implemented by a host interface chiplet 612, which in the depicted example provides a PCIe interface external to the chiplet system. Such a dedicated chiplet interface 626 is typically employed when a convention or standard in the industry has converged on such an interface. The illustrated example of a DDR interface connecting the memory controller chiplet 614 to the DRAM memory device chiplet 616 is an instance of such an industry convention.
Among the many possible support chiplets, the memory controller chiplet 614 is likely to be present in a chiplet system because of the nearly ubiquitous use of storage for computer processing and the sophisticated state of the art for memory devices. Thus, using memory device chiplets 616 and memory controller chiplets 614 produced by others enables chiplet system designers to obtain robust products made by mature manufacturers. In general, the memory controller chiplet 614 provides a memory device specific interface to read, write, or erase data. Often, the memory controller chiplet 614 can provide additional features, such as error detection, error correction, maintenance operations, or atomic operator execution. For some types of memory, maintenance operations tend to be specific to the memory device chiplet 616, such as garbage collection in NAND flash or storage class memories and temperature adjustments (e.g., cross temperature management) in NAND flash memories. In an example, the maintenance operations may include logical-to-physical (L2P) mapping or management to provide a level of indirection between the physical and logical representations of data. In other types of memory, such as DRAM, some memory operations, such as refresh, may at some times be controlled by a host processor or a memory controller, and at other times by a DRAM memory device or by logic associated with one or more DRAM devices, such as an interface chip (in an example, a buffer).
Atomic operators are data manipulations that can be performed, for example, by the memory controller chiplet 614. In other chiplet systems, the atomic operators can be performed by other chiplets. For example, the atomic operator of "increment" may be specified by the application chiplet 610 in a command that includes a memory address and possibly an increment value. Upon receiving the command, the memory controller chiplet 614 retrieves the number from the specified memory address, increments the number by the amount specified in the command, and stores the result. Upon successful completion, the memory controller chiplet 614 provides an indication of the command success to the application chiplet 610. The atomic operators avoid transmitting data across the chiplet mesh 624, enabling lower latency execution of such commands.
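As a sketch of the built-in "increment" atomic operator described above, the following C function models the read-modify-write performed at the memory controller; the word-addressed memory array and the function name are simplifying assumptions.

#include <stdint.h>

/* Illustrative "increment" atomic: fetch the value at the specified address,
 * add the amount given in the command, store the result, and return it so a
 * success indication can be sent back, all without moving the data across
 * the chiplet network. */
static uint64_t atomic_increment(uint64_t *memory, uint64_t word_address, uint64_t amount)
{
    uint64_t value = memory[word_address]; /* read       */
    value += amount;                       /* modify     */
    memory[word_address] = value;          /* write back */
    return value;
}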
Atomic operators may be classified as built-in atomics or programmable (e.g., custom) atomics. Built-in atomics are a finite set of operations that are immutably implemented in hardware. Programmable atomics are small programs that can be executed on a programmable atomic unit (PAU) (e.g., a custom atomic unit (CAU)) of the memory controller chiplet 614.
The memory device chiplet 616 can be a volatile memory device or non-volatile memory, or can include any combination of volatile memory devices or non-volatile memory. Examples of volatile memory devices include, but are not limited to, RAM such as DRAM, SDRAM, GDDR6 SDRAM, and the like. Examples of non-volatile memory devices include, but are not limited to, NAND-type flash memory and storage class memory (e.g., phase change memory or memristor-based technologies), ferroelectric RAM (FeRAM), and the like. The illustrated example includes the memory device chiplet 616 as a chiplet; however, the device may reside elsewhere, such as in a different package on the peripheral board 604. For many applications, multiple memory device chiplets may be provided. In examples, these memory device chiplets can each implement one or more storage technologies and can include integrated compute hosts. In an example, a memory chiplet can include multiple stacked memory dies of different technologies (e.g., one or more SRAM devices stacked together with, or otherwise in communication with, one or more DRAM devices). In an example, the memory controller chiplet 614 can be used to coordinate operations between multiple memory chiplets in the chiplet system 602 (e.g., using one or more memory chiplets in one or more levels of cache storage and using one or more additional memory chiplets as main memory). The chiplet system 602 can include multiple memory controller chiplet 614 instances, as can be used to provide memory control functionality for separate hosts, processors, sensors, networks, and the like. The chiplet architecture in the illustrated system offers the advantage of allowing adaptation to different memory storage technologies and different memory interfaces through updated chiplet configurations, e.g., without requiring redesign of the rest of the system structure.
Fig. 7 generally illustrates an example of a chiplet-based implementation of a memory computing device according to an embodiment. The example includes an implementation with four CNM chiplets, and each of the CNM chiplets can include or comprise a portion of the first memory computing device 112 or the first memory computing node 104 from the example of fig. 1. The various portions may themselves include or comprise respective chiplets. The chiplet-based implementation can include or use CPI-based intra-system communication, as similarly discussed above for the example chiplet system 602 from fig. 6A and 6B.
The example of fig. 7 includes a first CNM package 700 including a plurality of chiplets. First CNM package 700 includes a first chiplet 702, a second chiplet 704, a third chiplet 706, and a fourth chiplet 708 all coupled to a CNM NOC hub 710. Each of the first through fourth chiplets can include instances of the same or substantially the same component or module. For example, the chiplets can each include respective examples of HTP accelerators, HTF accelerators, and memory controllers for accessing internal or external memory.
In the example of fig. 7, the first chiplet 702 includes a first NOC hub edge 714 coupled to the CNM NOC hub 710. The other chiplets in the first CNM package 700 similarly include NOC hub edges or endpoints. The switches in the NOC hub edges facilitate intra-chiplet and inter-chiplet communications via the CNM NOC hub 710.
The first chiplet 702 can further include one or more memory controllers 716. The memory controller 716 may correspond to a respective different NOC endpoint switch interfacing with the first NOC hub edge 714. In an example, the memory controller 716 includes a memory controller chiplet 614, a memory controller 130, a memory subsystem 200, or other memory computing implementation. The memory controller 716 may be coupled to a respective different memory device, such as including a first external memory module 712a or a second external memory module 712b. The external memory module may include, for example, GDDR6 memory selectively accessible by respective different chiplets in the system.
The first chiplet 702 can further include a first HTP chiplet 718 and a second HTP chiplet 720 coupled to the first NOC hub edge 714, e.g., via respective different NOC endpoint switches. The HTP chiplet may correspond to an HTP accelerator, such as HTP 140 from the example of fig. 1, or HTP accelerator 400 from the example of fig. 4. The HTP chiplet can communicate with HTF chiplet 722. The HTF chiplet 722 can correspond to an HTF accelerator, such as HTF 142 from the example of fig. 1, or HTF 500 from the example of fig. 5.
The CNM NOC hub 710 may be coupled to NOC hub instances in other chiplets or other CNM packages by means of various interfaces and switches. For example, the CNM NOC hub 710 may be coupled to the CPI interface by way of multiple different NOC endpoints on the first CNM package 700. Each of the multiple different NOC endpoints may be coupled to, for example, a different node external to the first CNM package 700. In an example, CNM NOC hub 710 may be coupled to other peripheral devices, nodes, or devices using CTCPI or other non-CPI protocols. For example, the first CNM package 700 may include a PCIe scale fabric interface (e.g., a Streaming Fabric Interface (SFI)) or a CXL interface configured to interface the first CNM package 700 with other devices. In an example, devices to which the first CNM package 700 is coupled using the various CPI, PCIe, CXL, or other fabrics may make up a common global address space.
In the example of fig. 7, first CNM package 700 includes host interface 724 (HIF) and host processor (R5). Host interface 724 may correspond to HIF 120, for example, from the example of fig. 1. The host processor or R5 may correspond to the internal host processor 122 from the example of fig. 1. Host interface 724 may include a PCI interface for coupling first CNM package 700 to other external devices or systems. In an example, work may be initiated by host interface 724 on first CNM package 700 or on a tile cluster within first CNM package 700. For example, host interface 724 may be configured to command individual HTF tile clusters, e.g., among the various chiplets in first CNM package 700, to enter and exit power/clock gating modes.
FIG. 8 illustrates an example tiling of a memory computing device according to an embodiment. In fig. 8, the tiled chiplet example 800 includes four instances of different CNM chiplet clusters, where the clusters are coupled together. Each CNM chiplet instance can itself include one or more constituent chiplets (e.g., a host processor chiplet, memory device chiplets, interface chiplets, etc.).
The tiled chiplet example 800 includes instances of the first CNM package 700 from the example of fig. 7 as one or more of its CNM clusters. For example, tiled chiplet example 800 can include a first CNM cluster 802 including a first chiplet 810 (e.g., corresponding to first chiplet 702), a second chiplet 812 (e.g., corresponding to second chiplet 704), a third chiplet 814 (e.g., corresponding to third chiplet 706), and a fourth chiplet 816 (e.g., corresponding to fourth chiplet 708). The chiplets in the first CNM cluster 802 can be coupled to a common NOC hub, which in turn can be coupled to a NOC hub in an adjacent cluster or clusters (e.g., in the second CNM cluster 804 or the fourth CNM cluster 808).
In the example of fig. 8, tiled chiplet example 800 includes first, second, third, and fourth CNM clusters 802, 804, 806, 808. Various CNM chiplets can be configured in a common address space such that the chiplets can allocate and share resources across different tiles. In an example, chiplets in a cluster can communicate with each other. For example, first CNM cluster 802 may be communicatively coupled to second CNM cluster 804 via inter-chip CPI interface 818, and first CNM cluster 802 may be communicatively coupled to fourth CNM cluster 808 via another or the same CPI interface. The second CNM cluster 804 may be communicatively coupled to a third CNM cluster 806 via the same or other CPI interface, and so on.
In an example, one of the CNM chiplets in tiled chiplet example 800 can include a host interface responsible for workload balancing across tiled chiplet example 800 (e.g., corresponding to host interface 724 from the example of fig. 7). The host interface may facilitate access to host-based command request queues and response queues, for example, from outside of tiled chiplet instance 800. The host interface may dispatch new execution threads using a hybrid thread processor and hybrid thread fabric in one or more of the CNM chiplets in tiled chiplet instance 800.
Fig. 9 is a flowchart showing the operations of a method 900 performed by circuitry when loading data from memory during dispatch according to some embodiments of the present disclosure. The method 900 includes operations 910, 920, and 930. By way of example, and not limitation, operations 910-930 are performed by HIF 120 of fig. 1.
In operation 910, HIF 120 receives a dispatch request identifying a tile and an address in memory. The address in memory may be a pointer to a data structure stored in memory. The dispatch request may be received from a host processor and include an identifier of a tile (e.g., tile 510 of fig. 5) of an HTF (e.g., HTF 142 of fig. 1) and an address in memory device 128. In some example embodiments, the dispatch interface request includes the fields in the following table. In various embodiments, additional fields or subsets of the fields in the table are used.
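Because the field table is not reproduced here, the following hypothetical C struct sketches what a dispatch request of this kind might carry, based only on the surrounding description; every field name and width is an assumption, not the actual field list.

#include <stdint.h>

/* Hypothetical dispatch request layout for operation 910; illustrative only. */
typedef struct {
    uint16_t tile_id;        /* identifier of the HTF tile to dispatch to       */
    uint64_t virtual_addr;   /* pointer to the data structure stored in memory  */
    uint32_t data_size;      /* optional size of the data to be transferred     */
    uint8_t  data_msg_count; /* number of asynchronous data messages to send    */
} dispatch_request_t;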
After the thread identifier (TID) is assigned, the message table is referenced using the data message count. An asynchronous data message is sent for each referenced table entry, which allows data to be distributed and/or copied to tile memory. After the data is distributed, a continue or loop message is sent to the base tile specified by the entry's destination to cause the first thread to start. An Ack message is sent from NOC 118 to HIF 120 to allow it to send another dispatch.
In operation 920, HIF 120 requests the transfer of data at the address from memory device 128 to the identified tile. In some example embodiments, requesting data from the memory includes sending a data request to a memory controller chiplet (e.g., memory controller chiplet 614 of FIG. 6) that controls access to the memory. The dispatch interface message from HIF 120 to memory controller 130 may include some or all of the following fields.
In operation 930, HIF 120 initiates a thread on the identified tile in response to receiving the dispatch request. The thread starts executing after the requested data is copied to the tile on which the thread is to execute.
Some or all of the dispatch interface primitive operations in the following table may be used to implement one or more operations 910 through 930.
As described in the table above, the HTF cluster load kernel command identifies a virtual address from which data (e.g., register state) is to be loaded. In some example embodiments, the amount of data loaded from the virtual address is a fixed amount. In other example embodiments, the amount of data loaded from the virtual address is a parameter of the HTF cluster load kernel command, and HIF 120 requests the indicated amount of data from a memory controller that provides access to the physical memory corresponding to the virtual address. Thus, in some example embodiments, the dispatch request received in operation 910 identifies the size of the data to be transferred, and the request to transfer the data from the memory identifies that size in operation 920.
Using method 900, memory copy operations are performed by HIF 120 instead of by the tile executing the thread. Thus, the tile performs fewer memory access operations, enabling the tile to spend a higher percentage of its operations executing the dispatched thread. The improved efficiency increases throughput, reduces power consumption, reduces device size, reduces device weight, or any suitable combination thereof.
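Operations 910 through 930 can be summarized in the following self-contained C sketch; the helper functions stand in for hardware behavior, and their names, like the printed messages, are assumptions for illustration.

#include <stdint.h>
#include <stdio.h>

/* Stand-ins for hardware actions; illustration only. */
static void request_memory_transfer(uint64_t addr, uint32_t size, uint16_t tile)
{
    /* Operation 920: ask the memory controller to move the data at the
     * requested address (and size, when given) into the identified tile. */
    printf("copy %u bytes at 0x%llx into tile %u memory\n",
           (unsigned)size, (unsigned long long)addr, (unsigned)tile);
}

static void initiate_thread(uint16_t tile)
{
    /* Operation 930: start the thread; execution begins once the data
     * has been copied to the tile. */
    printf("initiate thread on tile %u\n", (unsigned)tile);
}

/* Operation 910 receives a dispatch request naming a tile, an address, and
 * (optionally) a size; the HIF then performs operations 920 and 930. */
void hif_handle_dispatch(uint16_t tile_id, uint64_t addr, uint32_t size)
{
    request_memory_transfer(addr, size, tile_id);
    initiate_thread(tile_id);
}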
Fig. 10 is a flowchart showing the operations of a method 1000 performed by circuitry when loading data from memory during dispatch, according to some embodiments of the present disclosure. The method 1000 includes operations 1010, 1020, 1030, 1040, 1050, and 1060. By way of example, and not limitation, operations 1010-1060 are performed by memory controller 130 of fig. 1.
In operation 1010, the memory controller 130 receives a data message indicating data to be transmitted. The data message includes one or more fields. For example, a load operation may be initiated from a tile in a synchronization domain or from the HIF. In some example embodiments, the requesting tile or HIF sends an AfLdAddr message (asynchronous fabric load address) that includes one or more of the fields shown in the following table.
In operation 1020, the memory controller 130 determines the source of the data message received in operation 1010. For example, a hardware bus connecting memory controller 130 to an originating device may include a 1-bit signal indicating whether the originating device is a tile or HIF. As another example, a hardware bus connecting memory controller 130 to an originating device may include a multi-bit signal that provides an identifier of the originating device. By comparing the identifier to the reference data, memory controller 130 determines whether the originating device is a tile or HIF. If the originating device is a tile, method 1000 proceeds to operation 1030. If the originating device is HIF, then method 1000 continues with operation 1040.
In operation 1030, the memory controller 130 determines the tile memory region to which the data is to be written from a field (e.g., the tile memory region/request index field in the table above).
Alternatively, in operation 1040, the memory controller 130 determines the memory request index from a field (e.g., the tile memory region/request index field in the table above). Based on the memory request index, the memory controller 130 determines the tile memory region from an entry in the memory interface message table (operation 1050). For example, the index may be multiplied by a fixed size of each entry in the table to determine the offset within the table from which the data is accessed. Within the entry, a tile memory region field is accessed. In some example embodiments, the memory interface message table includes one or more fields of the following table.
In operation 1060, the memory controller 130 sends the data to the determined tile memory region. Thus, using method 1000, memory controller 130 is enabled to handle data messages that originate from both HIF 120 and tiles, and to send data to a tile memory region in response to the data messages. Method 1000 may be performed by memory controller 130 to service a request from HIF 120 sent in operation 920 of method 900.
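The branch taken in operations 1020 through 1050 can be sketched in C as follows; the table depth, the enum, and the helper name are assumptions for illustration.

#include <stdint.h>

#define MSG_TABLE_ENTRIES 64u /* assumed depth of the memory interface message table */

typedef enum { ORIGIN_TILE, ORIGIN_HIF } msg_origin_t;

typedef struct {
    uint8_t tile_memory_region; /* field read in operation 1050 */
} mem_if_table_entry_t;

static mem_if_table_entry_t memory_interface_table[MSG_TABLE_ENTRIES];

/* A message from a tile carries the tile memory region directly (operation
 * 1030); a message from the HIF carries a request index used to look up the
 * region in the memory interface message table (operations 1040 and 1050). */
static uint8_t resolve_tile_memory_region(msg_origin_t origin, uint8_t region_or_index)
{
    if (origin == ORIGIN_TILE)
        return region_or_index;
    return memory_interface_table[region_or_index % MSG_TABLE_ENTRIES].tile_memory_region;
}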
In some example embodiments, the data type field discussed with respect to fig. 9 and 10 has a value selected from the following table.
FIG. 11 illustrates a block diagram of an example machine 1100 with which, in which, or by which any one or more of the techniques (e.g., methodologies) discussed herein may be implemented. As discussed herein, examples may include, or may operate by, logic or several components or mechanisms in machine 1100. Circuitry (e.g., processing circuitry) is a collection of circuits implemented in tangible entities of machine 1100 that include hardware (e.g., simple circuits, gates, logic, etc.). Circuitry membership may change over time. Circuitry includes members that may, alone or in combination, perform specified operations when operating. In an example, the hardware of the circuitry may be immutably designed to perform a particular operation (e.g., hardwired). In an example, the hardware of the circuitry may include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.), including a machine-readable medium physically modified (e.g., magnetically, electrically, by movable placement of invariant massed particles, etc.) to encode instructions of a particular operation. In connecting the physical components, the underlying electrical properties of the hardware components change, for example, from an insulator to a conductor, or vice versa. The instructions enable embedded hardware (e.g., execution units or a loading mechanism) to create members of the circuitry in hardware via the variable connections to carry out portions of a particular operation when in operation. Thus, in an example, a machine-readable medium element is part of the circuitry or is communicatively coupled to the other components of the circuitry when the device is operating. In an example, any of the physical components may be used in more than one member of more than one circuitry. For example, in operation, an execution unit may be used in a first circuit of a first circuitry at one point in time and reused by a second circuit in the first circuitry, or by a third circuit in a second circuitry, at a different time. Additional examples of these components with respect to machine 1100 follow.
In alternative embodiments, machine 1100 may operate as a stand-alone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine 1100 may operate in the capacity of a server machine, a client machine, or both in a server-client network environment. In an example, machine 1100 may act as a peer machine in a peer-to-peer (P2P) (or other distributed) network environment. Machine 1100 may be a Personal Computer (PC), tablet PC, set-top box (STB), Personal Digital Assistant (PDA), mobile telephone, web appliance, network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Furthermore, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein (e.g., cloud computing, software as a service (SaaS), other computer cluster configurations).
Machine 1100 (e.g., a computer system) may include a hardware processor 1102 (e.g., a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a hardware processor core, or any combination thereof), a main memory 1104, a static memory 1106 (e.g., memory or storage for firmware, microcode, a Basic Input/Output System (BIOS), a Unified Extensible Firmware Interface (UEFI), etc.), and a mass storage device 1108 (e.g., a hard disk drive, tape drive, flash memory device, or other block device), some or all of which may communicate with each other via an interconnection link 1130 (e.g., a bus). The machine 1100 may further include a display device 1110, an alphanumeric input device 1112 (e.g., a keyboard), and a User Interface (UI) navigation device 1114 (e.g., a mouse). In an example, the display device 1110, the input device 1112, and the UI navigation device 1114 may be a touch screen display. The machine 1100 may additionally include a signal generating device 1118 (e.g., a speaker), a network interface device 1120, and one or more sensors 1116, such as a Global Positioning System (GPS) sensor, compass, accelerometer, or other sensor. The machine 1100 may include an output controller 1128, such as a serial (e.g., Universal Serial Bus (USB)), parallel, or other wired or wireless (e.g., Infrared (IR), Near Field Communication (NFC), etc.) connection to communicate with or control one or more peripheral devices (e.g., printer, card reader, etc.).
The registers of the hardware processor 1102, the main memory 1104, the static memory 1106, or the mass storage device 1108 may be or include a machine-readable medium 1122 having stored thereon one or more sets of data structures or instructions 1124 (e.g., software) embodying or used by any one or more of the techniques or functions discussed herein. The instructions 1124 may also reside, completely or at least partially, within any of the registers of the hardware processor 1102, the main memory 1104, the static memory 1106, or the mass storage device 1108 during execution thereof by the machine 1100. In an example, one or any combination of the hardware processor 1102, the main memory 1104, the static memory 1106, or the mass storage device 1108 may constitute a machine-readable medium 1122. While the machine-readable medium 1122 is illustrated as a single medium, the term "machine-readable medium" may include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) configured to store the one or more instructions 1124.
The term "machine-readable medium" can include any medium that is capable of storing, encoding or carrying instructions for execution by the machine 1100 and that cause the machine 1100 to perform any one or more of the techniques of this disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. Non-limiting machine-readable medium examples can include solid state memory, optical media, magnetic media, and signals (e.g., radio frequency signals, other photon-based signals, acoustic signals, etc.). In examples, a non-transitory machine-readable medium includes a machine-readable medium with a plurality of particles having a constant (e.g., stationary) mass, and is thus a composition of matter. Thus, a non-transitory machine-readable medium is a machine-readable medium that does not include a transitory propagating signal. Specific examples of non-transitory machine-readable media may include: nonvolatile memory such as semiconductor memory devices (e.g., electrically Programmable Read Only Memory (EPROM), electrically Erasable Programmable Read Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disk; and CD-ROM and DVD-ROM discs.
In an example, information stored or otherwise provided on the machine-readable medium 1122 may represent instructions 1124, such as the instructions 1124 themselves or a format from which the instructions 1124 may be derived. Such a format from which the instructions 1124 may be derived may include source code, encoded instructions (e.g., in compressed or encrypted form), packaged instructions (e.g., divided into multiple packets), and so forth. Information representing instructions 1124 in machine-readable medium 1122 may be processed by processing circuitry into instructions to implement any of the operations discussed herein. For example, deriving instructions 1124 from information (e.g., processed by processing circuitry) may include: compile (e.g., from source code, object code, etc.), interpret, load, organize (e.g., dynamically or statically linked), encode, decode, encrypt, decrypt, package, unpack information, or otherwise manipulate information into instructions 1124.
In an example, derivation of the instructions 1124 may include assembling, compiling, or interpreting information (e.g., by processing circuitry) to create the instructions 1124 from some intermediate or pre-processing format provided by the machine-readable medium 1122. The information, when provided in multiple parts, may be combined, unpacked, and modified to create instructions 1124. For example, the information may be in multiple compressed source code packages (or object code, or binary executable code, etc.) on one or several remote servers. The source code packets may be encrypted as transmitted over the network and may be decrypted, decompressed, assembled (e.g., linked), compiled or interpreted (e.g., in a library, independently executable file, etc.) at the local machine, and executed by the local machine, as necessary.
The instructions 1124 may be further transmitted or received over a communication network 1126 using a transmission medium via the network interface device 1120 utilizing any one of several transfer protocols, such as frame relay, internet protocol, Transmission Control Protocol (TCP), User Datagram Protocol (UDP), hypertext transfer protocol (HTTP), etc. Example communication networks may include Local Area Networks (LANs), Wide Area Networks (WANs), packet data networks (e.g., the internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., the Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards (known as Wi-Fi), the IEEE 802.16 family of standards (known as WiMax), the IEEE 802.15.4 family of standards, P2P networks, etc.). In an example, the network interface device 1120 may include one or more physical jacks (e.g., Ethernet, coaxial, or telephone jacks) or one or more antennas to connect to the network 1126. In an example, the network interface device 1120 may include multiple antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term "transmission medium" shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine 1100, and includes digital or analog communication signals or other intangible media to facilitate communication of such software. The transmission medium is a machine-readable medium.
To better illustrate the methods and apparatus discussed herein, a non-limiting set of example embodiments is set forth below as numerically identified examples.
Example 1 is a system, comprising: a memory; a plurality of tiles coupled to the memory, each tile of the plurality of tiles including one or more processing elements; and a dispatch interface block coupled to the memory and the plurality of tiles and configured to perform operations comprising: receiving a dispatch request identifying a tile and an address in the memory; requesting transfer of data at the address from the memory to the identified tile; and initiating a thread on the identified tile in response to receiving the dispatch request.
In example 2, the subject matter of example 1 includes, wherein: the dispatch request is received from a host processor.
In example 3, the subject matter of examples 1-2 includes, wherein the requesting the data from the memory includes sending a data request to a memory controller chiplet that controls access to the memory.
In example 4, the subject matter described in examples 1-3 comprises, wherein: the dispatch request further identifies a size of the data to be transferred; and the request to the memory to transfer the data identifies the size.
In example 5, the subject matter of examples 1-4 comprises, wherein: the dispatch request further identifies a number of messages to send the data to the identified tile.
In example 6, the subject matter of examples 1-5 includes wherein the dispatch request further identifies a width of the data.
In example 7, the subject matter of examples 1-6 includes wherein the dispatch request further identifies that the data is to be copied into all single instruction/multiple data (SIMD) lanes.
In example 8, the subject matter of examples 1-7 includes wherein the dispatch request further identifies that the data is to be transferred to a base context of the identified tile instead of a tile memory.
Example 9 is a non-transitory machine-readable medium storing instructions that, when executed by a host interface and dispatch module coupled to a host and to a tile of a hybrid thread fabric, cause the host interface and dispatch module to perform operations comprising: receiving, from the host, a dispatch request identifying the tile and an address in memory; requesting transfer of data at the address from the memory to the identified tile; and initiating a thread on the identified tile in response to receiving the dispatch request.
In example 10, the subject matter of example 9 includes wherein the requesting the data from the memory includes sending a data request to a memory controller chiplet that controls access to the memory.
In example 11, the subject matter of examples 9-10 includes, wherein: the dispatch request further identifies a size of the data to be transferred; and the request to the memory to transfer the data identifies the size.
In example 12, the subject matter of examples 9-11 includes, wherein: the dispatch request further identifies a number of messages to send the data to the identified tile.
In example 13, the subject matter of examples 9-12 includes wherein the dispatch request further identifies a width of the data.
In example 14, the subject matter of examples 9-13 includes wherein the dispatch request further identifies that the data is to be copied into all single instruction/multiple data (SIMD) lanes.
In example 15, the subject matter of examples 9-14 includes wherein the dispatch request further identifies that the data is to be transferred to a base context of the tile instead of a tile memory.
Example 16 is a method comprising: receiving a dispatch request identifying a tile and an address in a memory; requesting transfer of data at the address from the memory to the identified tile; and initiating a thread on the identified tile in response to receiving the dispatch request.
In example 17, the subject matter of example 16 includes, wherein: the dispatch request is received from a host processor.
In example 18, the subject matter of examples 16-17 includes, wherein the requesting the data from the memory includes sending a data request to a memory controller chiplet that controls access to the memory.
In example 19, the subject matter of examples 16-18 includes, wherein: the dispatch request further identifies a size of the data to be transferred; and the request to the memory to transfer the data identifies the size.
In example 20, the subject matter of examples 16-19 includes, wherein: the dispatch request further identifies a number of messages to send the data to the identified tile.
In example 21, the subject matter of examples 16-20 includes wherein the dispatch request further identifies a width of the data.
In example 22, the subject matter of examples 16-21 includes, wherein the dispatch request further identifies that the data is to be copied into all single instruction/multiple data (SIMD) lanes.
In example 23, the subject matter of examples 16-22 includes wherein the dispatch request further identifies that the data is to be transferred to a base context of the tile instead of a tile memory.
Example 24 is at least one machine-readable medium comprising instructions that, when executed by processing circuitry, cause the processing circuitry to perform operations to implement any of examples 1-23.
Example 25 is an apparatus comprising means to implement any of examples 1 to 23.
Example 26 is a system to implement any of examples 1 to 23.
Example 27 is a method to implement any one of examples 1 to 23.
The foregoing detailed description includes references to the accompanying drawings, which form a part thereof. The drawings show, by way of illustration, specific embodiments in which the invention may be practiced. These embodiments are also referred to herein as "examples". Such examples may include elements other than those shown or described. However, the inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the inventors contemplate the use of any combination or permutation of those shown or described elements (or one or more aspects thereof) for a particular example (or one or more aspects thereof), or for other examples (or one or more aspects thereof) shown or described herein.
In this document, the terms "a" or "an" are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of "at least one" or "one or more". In this document, the term "or" is used to refer to a nonexclusive or, such that "A or B" may include "A but not B", "B but not A", and "A and B", unless otherwise indicated. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein". Moreover, in the appended claims, the terms "including" and "comprising" are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Furthermore, in the appended claims, the terms "first", "second", "third", and the like are used merely as labels and are not intended to impose numerical requirements on their objects.
The above description is intended to be illustrative and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments may be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Moreover, in the foregoing detailed description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments may be combined with each other in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims (23)

1. A system, comprising:
a memory;
a plurality of tiles coupled to the memory, each tile of the plurality of tiles including one or more processing elements; and
a dispatch interface block coupled to the memory and the plurality of tiles and configured to perform operations comprising:
receiving a dispatch request identifying a tile and an address in the memory;
requesting transfer of data at the address from the memory to the identified tile; and
initiating a thread on the identified tile in response to receiving the dispatch request.
2. The system of claim 1, wherein:
the dispatch request is received from a host processor.
3. The system of claim 1, wherein the requesting the data from the memory comprises sending a data request to a memory controller chiplet that controls access to the memory.
4. The system of claim 1, wherein:
the dispatch request further identifies a size of the data to be transferred; and
the request to the memory to transfer the data identifies the size.
5. The system of claim 1, wherein:
the dispatch request further identifies a number of messages to send the data to the identified tile.
6. The system of claim 1, wherein the dispatch request further identifies a width of the data.
7. The system of claim 1, wherein the dispatch request further identifies that the data is to be copied into all single instruction/multiple data (SIMD) lanes.
8. The system of claim 1, wherein the dispatch request further identifies that the data is to be transferred to a base context of the identified tile instead of a tile memory.
9. A non-transitory machine-readable medium storing instructions that, when executed by a host interface and dispatch module coupled to a host and to a tile of a hybrid thread fabric, cause the host interface and dispatch module to perform operations comprising:
receiving a dispatch request from the host identifying the tile and an address in memory;
requesting transfer of data at the address from the memory to the identified tile; and
initiating a thread on the identified tile in response to receiving the dispatch request.
10. The non-transitory machine-readable medium of claim 9, wherein the requesting the data from the memory comprises sending a data request to a memory controller chiplet that controls access to the memory.
11. The non-transitory machine-readable medium of claim 9, wherein:
the dispatch request further identifies a size of the data to be transferred; and
the request to the memory to transfer the data identifies the size.
12. The non-transitory machine-readable medium of claim 9, wherein:
the dispatch request further identifies a number of messages to send the data to the identified tile.
13. The non-transitory machine-readable medium of claim 9, wherein the dispatch request further identifies a width of the data.
14. The non-transitory machine-readable medium of claim 9, wherein the dispatch request further identifies that the data is to be copied into all single instruction/multiple data (SIMD) lanes.
15. The non-transitory machine-readable medium of claim 9, wherein the dispatch request further identifies that the data is to be transferred to a base context of the tile instead of a tile memory.
16. A method, comprising:
receiving a dispatch request identifying a tile and an address in a memory;
requesting transfer of data at the address from the memory to the identified tile; and
initiating a thread on the identified tile in response to receiving the dispatch request.
17. The method according to claim 16, wherein:
the dispatch request is received from a host processor.
18. The method of claim 16, wherein the requesting the data from the memory comprises sending a data request to a memory controller chiplet that controls access to the memory.
19. The method according to claim 16, wherein:
the dispatch request further identifies a size of the data to be transferred; and
the request to the memory to transfer the data identifies the size.
20. The method according to claim 16, wherein:
the dispatch request further identifies a number of messages to send the data to the identified tile.
21. The method of claim 16, wherein the dispatch request further identifies a width of the data.
22. The method of claim 16, wherein the dispatch request further identifies that the data is to be copied into all single instruction/multiple data (SIMD) lanes.
23. The method of claim 16, wherein the dispatch request further identifies that the data is to be transferred to a base context of the tile instead of a tile memory.
CN202280045776.8A 2021-06-28 2022-05-12 Loading data from memory during dispatch Pending CN117581200A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US17/360,455 US11789642B2 (en) 2021-06-28 2021-06-28 Loading data from memory during dispatch
US17/360,455 2021-06-28
PCT/US2022/029007 WO2023278015A1 (en) 2021-06-28 2022-05-12 Loading data from memory during dispatch

Publications (1)

Publication Number Publication Date
CN117581200A true CN117581200A (en) 2024-02-20

Family

ID=84543178

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280045776.8A Pending CN117581200A (en) 2021-06-28 2022-05-12 Loading data from memory during dispatch

Country Status (3)

Country Link
US (1) US11789642B2 (en)
CN (1) CN117581200A (en)
WO (1) WO2023278015A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11815935B2 (en) * 2022-03-25 2023-11-14 Micron Technology, Inc. Programming a coarse grained reconfigurable array through description of data flow graphs

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5717882A (en) * 1994-01-04 1998-02-10 Intel Corporation Method and apparatus for dispatching and executing a load operation to memory
US20030158842A1 (en) * 2002-02-21 2003-08-21 Eliezer Levy Adaptive acceleration of retrieval queries
US20060259665A1 (en) 2005-05-13 2006-11-16 Sanjive Agarwala Configurable multiple write-enhanced direct memory access unit
US8122229B2 (en) * 2007-09-12 2012-02-21 Convey Computer Dispatch mechanism for dispatching instructions from a host processor to a co-processor
GB2459331B (en) * 2008-04-24 2012-02-15 Icera Inc Direct Memory Access (DMA) via a serial link
US8296411B2 (en) * 2010-03-01 2012-10-23 International Business Machines Corporation Programmatically determining an execution mode for a request dispatch utilizing historic metrics
JP6083687B2 (en) 2012-01-06 2017-02-22 インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation Distributed calculation method, program, host computer, and distributed calculation system (distributed parallel calculation using accelerator device)
US9802124B2 (en) * 2013-03-15 2017-10-31 Skyera, Llc Apparatus and method for cloning and snapshotting in multi-dimensional to linear address space translation
US9454310B2 (en) * 2014-02-14 2016-09-27 Micron Technology, Inc. Command queuing
US10390114B2 (en) * 2016-07-22 2019-08-20 Intel Corporation Memory sharing for physical accelerator resources in a data center
US10209887B2 (en) * 2016-12-20 2019-02-19 Texas Instruments Incorporated Streaming engine with fetch ahead hysteresis
US10417734B2 (en) * 2017-04-24 2019-09-17 Intel Corporation Compute optimization mechanism for deep neural networks
US10747954B2 (en) * 2017-10-31 2020-08-18 Baidu Usa Llc System and method for performing tasks based on user inputs using natural language processing
US11093251B2 (en) 2017-10-31 2021-08-17 Micron Technology, Inc. System having a hybrid threading processor, a hybrid threading fabric having configurable computing elements, and a hybrid interconnection network
US11487473B2 (en) * 2018-07-23 2022-11-01 SK Hynix Inc. Memory system
US11914860B2 (en) * 2018-08-20 2024-02-27 Macronix International Co., Ltd. Data storage for artificial intelligence-based applications
US11281579B2 (en) 2020-01-28 2022-03-22 Intel Corporation Cryptographic separation of MMIO on device
US11669274B2 (en) * 2021-03-31 2023-06-06 Advanced Micro Devices, Inc. Write bank group mask during arbitration

Also Published As

Publication number Publication date
US20220413742A1 (en) 2022-12-29
WO2023278015A1 (en) 2023-01-05
US11789642B2 (en) 2023-10-17

Similar Documents

Publication Publication Date Title
CN117296048A (en) Transmitting request types with different delays
CN114691354A (en) Dynamic decomposition and thread allocation
CN118043796A (en) Tile-based result buffering in a memory computing system
CN118076944A (en) Data storage during loop execution in a reconfigurable computing fabric
CN118043795A (en) Masking for coarse-grained reconfigurable architecture
CN114691317A (en) Loop execution in reconfigurable computing fabric
CN118043815A (en) Debugging dataflow computer architecture
US20240086324A1 (en) High bandwidth gather cache
CN117581200A (en) Loading data from memory during dispatch
CN117795496A (en) Parallel matrix operations in reconfigurable computing fabrics
CN118043792A (en) Mechanism for providing reliable reception of event messages
US11762661B2 (en) Counter for preventing completion of a thread including a non-blocking external device call with no-return indication
CN117546133A (en) Mitigating memory hotspots on a system having multiple memory controllers
CN117280332A (en) Avoiding deadlock by architecture with multiple systems-on-chip
CN117677927A (en) Efficient complex multiplication and accumulation
US11861366B2 (en) Efficient processing of nested loops for computing device with multiple configurable processing elements using multiple spoke counts
US20240070112A1 (en) Context load mechanism in a coarse-grained reconfigurable array processor
US20240028526A1 (en) Methods and systems for requesting atomic operations in a computing system
US20230055320A1 (en) Loop execution in a reconfigurable compute fabric.
CN117435548A (en) Method and system for communication between hardware components
CN117632256A (en) Apparatus and method for handling breakpoints in a multi-element processor
CN117435549A (en) Method and system for communication between hardware components
CN118140209A (en) Loop execution in a reconfigurable computing fabric
CN118056181A (en) Chained resource locking
CN117632403A (en) Parking threads in a barrel processor for managing hazard cleaning

Legal Events

Date Code Title Description
PB01 Publication