WO2019055675A1 - Directed and interconnected grid dataflow architecture - Google Patents

Directed and interconnected grid dataflow architecture

Info

Publication number
WO2019055675A1
Authority
WO
WIPO (PCT)
Prior art keywords
computing grid
computing
grid
runtime
ports
Prior art date
Application number
PCT/US2018/050910
Other languages
French (fr)
Inventor
Elad RAZ
Ilan Tayari
Original Assignee
Next Silicon, Ltd.
M&B IP Analysts, LLC
Priority date
Filing date
Publication date
Application filed by Next Silicon, Ltd. and M&B IP Analysts, LLC
Priority to EP18856462.9A
Publication of WO2019055675A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/76Architectures of general purpose stored program computers
    • G06F15/78Architectures of general purpose stored program computers comprising a single central processing unit
    • G06F15/7867Architectures of general purpose stored program computers comprising a single central processing unit with reconfigurable architecture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/901Indexing; Data structures therefor; Storage structures
    • G06F16/9024Graphs; Linked lists
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5066Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/54Store-and-forward switching systems 
    • H04L12/56Packet switching systems
    • H04L12/5601Transfer mode dependent, e.g. ATM
    • H04L2012/5672Multiplexing, e.g. coding, scrambling

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Logic Circuits (AREA)

Abstract

A computing grid includes an interconnect network having input ports and output ports; a plurality of egress ports; a plurality of configurable data routing junctions; a plurality of logical elements interconnected using the plurality of configurable data routing junctions; and a plurality of ingress ports. In an embodiment, at least one compute graph is projected onto the computing grid as a configuration of various elements of the computing grid.

Description

DIRECTED AND INTERCONNECTED GRID DATAFLOW ARCHITECTURE
CROSS-REFERENCE TO RELATED APPLICATIONS
[001] This application claims the benefit of US Provisional Application No.
62/558,090 filed on September 13, 2017, the contents of which are hereby incorporated by reference.
TECHNICAL FIELD
[002] The disclosure generally relates to system architectures, and more specifically to embedded computing architectures and reconfigurable computing architectures.
BACKGROUND
[003] As technology advances, the need for stronger processing systems and computing power rapidly increases. Processors are currently expected to deliver high computational throughput while being highly power efficient. Nevertheless, existing processing systems execute sequential streams of instructions. The instructions are retrieved from, and their results are written to, explicit memory or storage. As such, the execution of sequential streams of instructions suffers from, among other things, power inefficiencies.
[004] Specifically, in some existing processing systems, each dynamic instruction must be fetched and decoded even though programs mostly iterate over small static portions of the code. Furthermore, because explicit storage (for example, a register file or memory) is the only channel for communicating data among instructions, intermediate results are transferred repeatedly between the functional units and register files. These inefficiencies dramatically reduce the energy efficiency of modern processing systems.
SUMMARY
[005] A summary of several example embodiments of the disclosure follows. This summary is provided for the convenience of the reader to provide a basic understanding of such embodiments and does not wholly define the breadth of the disclosure. This summary is not an extensive overview of all contemplated embodiments, and is intended neither to identify key or critical elements of all embodiments nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more embodiments in a simplified form as a prelude to the more detailed description that is presented later. For convenience, the term "some embodiments" may be used herein to refer to a single embodiment or multiple embodiments of the disclosure.
[006] Some embodiments disclosed herein include a computing grid, comprising an interconnect network including input ports and output ports; a plurality of egress ports; a plurality of configurable data routing junctions; a plurality of logical elements interconnected using the plurality of configurable data routing junctions; a plurality of ingress ports, wherein at least one compute graph is projected onto the computing grid as a configuration of various elements of the computing grid.
BRIEF DESCRIPTION OF THE DRAWINGS
[007] The subject matter disclosed herein is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features and advantages of the disclosure will be apparent from the following detailed description taken in conjunction with the accompanying drawings.
[008] Figure 1 is a schematic diagram of a computing grid designed according to one embodiment.
[009] Figure 2 is a schematic diagram of a compute graph projected and executed on the disclosed computing grid according to an embodiment.
[0010] Figure 3 is a flowchart illustrating a method for compute graph optimization and reconfiguration on a computing grid, according to an embodiment.
DETAILED DESCRIPTION
[0011] The embodiments disclosed by the invention are only examples of the many possible advantageous uses and implementations of the innovative teachings presented herein. In general, statements made in the specification of the present application do not necessarily limit any of the various claimed embodiments. Moreover, some statements may apply to some inventive features but not to others. In general, unless otherwise indicated, singular elements may be in plural and vice versa with no loss of generality. In the drawings, like numerals refer to like parts through several views.
[0012] The various disclosed embodiments allow for the execution of a software program (including at least one computational task) on reconfigurable hardware, analyzing at runtime what the program requires, and optimizing operation based on that analysis. The disclosed embodiments may be realized by a grid computing architecture designed to enable simplification of compute-intensive algorithms or tasks. In an embodiment, the disclosed architecture includes a directed grid configured to receive data flows for processing. The received data flows are interconnected due to the structure of the grid computing architecture.
[0013] In an example embodiment, the grid computing architecture further enables an asynchronous, clock-less computing implementation. The grid computing architecture is designed to further optimize code execution using a routing topology such as a network, a bus, or both. The routing topology enables routing of execution portions throughout the grid. It should be noted that the grid also enables a synchronized implementation.
[0014] Fig. 1 depicts an example schematic diagram of a directed, interconnected grid computing architecture 100 (hereinafter the "computing grid 100") according to an embodiment. The computing grid 100 includes an interconnect 160, egress ports (Frs) 140 and ingress ports (Wrs) 150, data routing junctions 130, and logical elements 120 (collectively referred to as LEs 120 or individually as a LE 120).
[0015] The LEs 120 are interconnected via the data routing junctions 130. The Frs 140 are connected via the routing junctions 130 to LEs 120 and via the interconnect 160 to Wrs 150, as illustrated in Fig. 1.
[0016] In an embodiment, each LE 120 may perform a unary, binary, and/or ternary operation. Examples for such operations include adding, subtracting, multiplying, negating, incrementing, decrementing, adding with carry, subtraction with borrow, and the like.
[0017] In an embodiment, a LE 120 may be a logical and/or bitwise operator such as AND, OR, NOT, XOR, size-casting (zero or sign extension) or a combination thereof.
[0018] In another embodiment, a LE 120 may be configured to perform a lookup table (LUT) operation.
[0019] In yet another embodiment, a LE 120 may be configured to perform a high-level arithmetic function, such as a fixed point or floating-point number addition, subtraction, multiplication, division, exponent, and the like.
[0020] In yet another embodiment, a LE 120 may perform a shift operation such as a shift left, a bitwise or arithmetic shift right, and so on.
[0021] In yet another embodiment, each LE 120 can execute an operation, do nothing, or pass the information downward to a junction 130 connected thereto. The LE 120 may do nothing when, for example, the condition of a conditional execution configuration (e.g., as determined based on a comparison) is not met.
[0022] In yet another embodiment, a LE 120 may perform selection between possible inputs according to a conditional execution configuration.
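For illustration only, the LE behavior described in paragraphs [0016]-[0022] can be sketched as a small software model. The Python sketch below is not part of the patent; the class name LogicalElement, the OPS table, and the fire() method are illustrative placeholders for an element that applies one operation from a predefined set, does nothing when a conditional-execution predicate is not met, or simply passes its input through.

```python
# Illustrative behavioral model of a logical element (LE).
# All identifiers here are hypothetical and not taken from the patent text.
import operator

# A predefined set of operations an LE may be configured with
OPS = {
    "add": operator.add,       # binary arithmetic
    "sub": operator.sub,
    "and": operator.and_,      # bitwise
    "xor": operator.xor,
    "shl": operator.lshift,    # shift left
    "neg": lambda a: -a,       # unary
    "nop": lambda a: a,        # pass-through (no operation)
}

class LogicalElement:
    def __init__(self, op="nop", lut=None, condition=None):
        self.op = op                # selected operation
        self.lut = lut              # optional lookup table (LUT) configuration
        self.condition = condition  # optional conditional-execution predicate

    def fire(self, *inputs):
        """Apply the configured operation to the inputs, return None when a
        conditional-execution predicate is not met ("do nothing")."""
        if self.condition is not None and not self.condition(*inputs):
            return None
        if self.lut is not None:
            return self.lut[inputs[0]]
        return OPS[self.op](*inputs)

# Example: an LE configured as an adder, and one gated by a condition
adder = LogicalElement(op="add")
assert adder.fire(2, 3) == 5
gated = LogicalElement(op="nop", condition=lambda a: a >= 0)
assert gated.fire(5) == 5 and gated.fire(-1) is None
```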
[0023] In an embodiment, each of the data routing junctions 130 may be realized as a multiplexer, a de-multiplexer, a switch, and the like. Each data routing junction 130 is configured to route data to and from the LEs 120. Without departing from the scope of the disclosed embodiments, a data routing junction is illustrated as a MUX 130 in Fig. 1.
[0024] In yet another embodiment, the LEs 120 may employ flow control semantics to synchronize data movement in the grid.
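The patent does not mandate a particular flow-control protocol. As one plausible reading, a ready/valid (request/acknowledge) handshake can synchronize data movement between LEs; the Channel class below is a sketch of such semantics under that assumption, not a definition from the disclosure.

```python
# Hypothetical ready/valid handshake between a producer LE and a consumer LE.
# Only a sketch of flow-control semantics; the protocol itself is assumed.
class Channel:
    def __init__(self):
        self.data = None
        self.valid = False   # producer has placed a token

    def push(self, value):
        """Producer side: stall (return False) until the consumer has drained
        the previous token, otherwise place the new token."""
        if self.valid:
            return False     # back-pressure: consumer has not acknowledged yet
        self.data, self.valid = value, True
        return True

    def pop(self):
        """Consumer side: take a token if one is present, acknowledging it."""
        if not self.valid:
            return None      # nothing to consume yet
        value, self.data, self.valid = self.data, None, False
        return value

ch = Channel()
assert ch.push(7) is True     # producer writes a token
assert ch.push(8) is False    # second write stalls until the token is read
assert ch.pop() == 7          # consumer acknowledges, freeing the channel
```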
[0025] Typically, every computer program can be represented by a series of basic-blocks, i.e., blocks of consecutive instructions such that there is no jump from or to the middle of a block. Each basic-block may be represented by a compute graph. A typical compute graph may be a directed acyclic graph in which the nodes correspond to operations and edges correspond to data movement.
[0026] The compute graph may be projected onto the computing grid 100 and, specifically, the nodes of the compute graph (i.e., operations) are assigned to LEs 120 as demonstrated in Fig. 2.
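To make the compute-graph notion concrete, the following illustrative sketch builds a tiny directed acyclic graph for a basic block and derives a legal firing order; the Node structure and topo() helper are hypothetical names, not terms used by the patent.

```python
# Minimal compute-graph representation of a basic block.
# Nodes are operations; edges are data movement between them.
from dataclasses import dataclass, field

@dataclass
class Node:
    op: str                                        # e.g. "input", "add", "mul", "output"
    inputs: list = field(default_factory=list)     # upstream nodes (incoming edges)

# Basic block:  t = a + b;  r = t * c
a, b, c = Node("input"), Node("input"), Node("input")
t = Node("add", [a, b])
r = Node("mul", [t, c])
out = Node("output", [r])

def topo(node, seen=None, order=None):
    """Topological order of the DAG, i.e. a legal firing order for the LEs."""
    seen = set() if seen is None else seen
    order = [] if order is None else order
    if id(node) in seen:
        return order
    seen.add(id(node))
    for parent in node.inputs:
        topo(parent, seen, order)
    order.append(node)
    return order

print([n.op for n in topo(out)])   # ['input', 'input', 'add', 'input', 'mul', 'output']
```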
[0027] Fig. 2 is a diagram 200 of an example projection of an optimized compute graph onto a computing grid designed according to an embodiment. Fig. 2 further demonstrates collection of runtime telemetry generated by the computing grid for determining the likely path(s).
[0028] In the example of Fig. 2, the code to be executed is a function (foo) that receives integer arguments 'a' and 'b' and returns their sum. In this example, the optimized compute graph includes three Frs 210-1, 210-2, and 210-3, feeding a return address and the integer parameters 'a' and 'b', respectively. The selected LE 220-1 performs an addition operation and is connected via data routing junctions to the Frs 210-2 and 210-3. In this example, the selected LE 220-4 does not perform any operation, and acts as a pass-through (or NOP).
[0029] The Wr 220-2 returns the sum (a+b) as computed by the LE 220-1. The Wr 220-2 is directly connected via a data routing junction to the LE 220-1, and is further connected to the LE 220-4, which is connected to the Fr 210-1.
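The projection of Fig. 2 can be restated as plain data. The configuration format below is purely illustrative (the patent does not define a configuration syntax); only the reference numerals 210-1 through 220-4 are taken from the figure.

```python
# The source-level function being projected (the Fig. 2 example)
def foo(a, b):
    return a + b

# One possible description of the projected configuration, keyed by the
# reference numerals used in Fig. 2.  The format is purely illustrative.
projection = {
    "Fr 210-1": {"feeds": "return address", "to": "LE 220-4"},
    "Fr 210-2": {"feeds": "a",              "to": "LE 220-1"},
    "Fr 210-3": {"feeds": "b",              "to": "LE 220-1"},
    "LE 220-1": {"op": "add", "outputs_to": "Wr 220-2"},
    "LE 220-4": {"op": "nop", "outputs_to": "Wr 220-2"},   # pass-through
    "Wr 220-2": {"returns": "a + b"},
}

assert foo(2, 3) == 5
```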
[0030] According to the disclosed embodiments, based on a piece of code in a programming language, a likely compute path may be determined. In an embodiment, the likely compute path is determined at runtime based on telemetry collected by the telemetry collection circuit 240.
[0031] In an embodiment, the collected runtime telemetry includes, for example, information on the number of times that each logic path has been taken to compute or execute a given function or program. The telemetry may also include parameters such as flow-control counters (e.g., the number of Ack, Pause, etc.). A path that has been taken more times is statistically more likely to be taken again, and is therefore a good target for optimization. An example likely path is labeled as 230 in Fig. 2.
[0032] In an embodiment, the telemetry collection circuit 240 may be realized as hardware counters at the Wrs 150. In another embodiment, the telemetry collection circuit 240 may be realized as hardware counters at the Frs 140. In another embodiment, the telemetry collection circuit 240 may be realized as hardware counters at the data routing junctions 130. In another embodiment, the telemetry collection circuit 240 may be realized as hardware counters at the interconnect network 160. In another embodiment, the telemetry collection circuit 240 may be realized as hardware counters at the LEs 120 or the flow control mechanism of the LEs 120.
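As a minimal sketch of how such counters could feed path prediction, the following example assumes one counter per logic path and selects the most frequently taken path as the likely compute path; the counter layout and selection rule are illustrative assumptions, not mechanisms specified by the patent.

```python
# Hypothetical telemetry: per-path counters and a likely-path selection.
from collections import Counter

path_counters = Counter()

def record(path_id):
    """Telemetry collection: increment the counter of the path just taken."""
    path_counters[path_id] += 1

# Simulated execution trace of a function with a branch
for taken in [True, True, False, True, True, True]:
    record("then-branch" if taken else "else-branch")

def likely_path(counters):
    """A path taken more often is statistically more likely to be taken
    again, so it is the preferred optimization target."""
    return counters.most_common(1)[0][0]

print(likely_path(path_counters))   # 'then-branch'
```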
[0033] Based on the collected telemetry, the likely compute paths are then optimized, at runtime, by removing bottlenecks in latency, throughput, and the like.
[0034] Returning to Fig. 1, in an embodiment, the likely compute path may be optimized by identifying the available resources, such as the LEs 120 and their respective locations in the computing grid 100, in the interconnect 160, or their proximity to critical hardware resources such as memory, a network interface, a host bus, and the like.
[0035] For example, operations in the compute graph of a basic-block that are memory bound may be relocated in close proximity to a memory of the computational device (not shown). As another example, I/O related operations are relocated in close proximity to I/O devices (not shown) such as network ports, PCI-e bus, and the like. A technique for optimizing the path based on critical hardware resources is disclosed in US Patent Application No. 16/053,382, the contents of which are incorporated herein by reference.
[0036] In an embodiment, the likely compute path may be optimized by identifying the proximity, or lack thereof, of basic-blocks to one another, with respect to the call-graph of the basic-blocks. Proximity, in such cases, may be physical, logical, topological, or otherwise, such that it may affect the overall performance or cost of the computation.
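Both placement criteria above, proximity of operations to critical hardware resources and proximity of related basic-blocks to one another, can be pictured as a cost function over candidate grid locations. The heuristic below is a loose illustration under assumed resource coordinates; it is not the optimization procedure of the cited application.

```python
# Toy placement heuristic: prefer grid locations close to the resources an
# operation is bound to (e.g. memory, I/O) and close to related blocks.
def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

# Hypothetical fixed locations of critical hardware resources on the grid
RESOURCES = {"memory": (0, 0), "network": (7, 0)}

def placement_cost(op_location, bound_resource, neighbor_locations):
    """Cost = distance to the critical resource the operation is bound to,
    plus distance to the basic-blocks it frequently exchanges data with."""
    cost = manhattan(op_location, RESOURCES[bound_resource])
    cost += sum(manhattan(op_location, n) for n in neighbor_locations)
    return cost

# A memory-bound operation is cheaper to place near the memory corner ...
assert placement_cost((1, 1), "memory", []) < placement_cost((6, 6), "memory", [])
# ... and close to the blocks it interacts with per the call-graph.
assert placement_cost((2, 2), "memory", [(2, 3)]) < placement_cost((2, 2), "memory", [(7, 7)])
```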
[0037] Thereafter, the optimized compute graph is projected onto the computing grid 100. It should be noted that an optimized compute graph can be further optimized at runtime and injected into the interconnected LEs again, on-the-fly.
[0038] In some example embodiments, the LEs 120, the data routing junctions 130, the Frs 140 and the Wrs 150 may be implemented in hardware, software, firmware, or any combination thereof. In an exemplary embodiment, the computing grid 100 (and its various elements) is implemented as a semiconductor device. In another embodiment, the computing grid 100 is implemented in part as a semiconductor device and in part as software, firmware, or a combination thereof.
[0039] In an embodiment, the computing grid 100 is configured to accelerate the operation of computational devices. Examples for such devices may include a multi- core central processing unit (CPU), a field-programmable gate array (FPGA), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), a quantum computer, an optical computing device, a neural-network accelerator, or a combination thereof. According to the disclosed embodiments, the acceleration is achieved by, for example, executing program code over the computing grid 100 instead of over a computational device (not shown). Furthermore, the computing grid 100 may be incorporated in a computation device or connected to it.
[0040] Fig. 3 shows an example flowchart 300 of a method for reconfigurable code projection according to an embodiment. In an embodiment, the method is performed with respect to a computing grid (e.g., the computing grid 100, Fig. 1).
[0041] At S310, a likely compute path is determined for a received portion of code or program logic. In an embodiment, the likely compute path is determined using runtime telemetry.
[0042] At S320, the likely compute path is optimized in the compute graph. The optimization is performed at runtime to remove bottlenecks in latency and/or throughput. Various embodiments for performing the optimization are discussed above.
[0043] At S330, the optimized compute graph is projected and injected again into the computing grid. Then, execution returns to S310.
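The flowchart of Fig. 3 amounts to a continuous observe-optimize-reproject loop. The skeleton below restates S310 through S330 in code; all helper callables (collect_telemetry, optimize_graph, project) are placeholders standing in for whatever mechanisms a concrete implementation would provide.

```python
# Skeleton of the runtime reconfiguration loop of Fig. 3 (S310 -> S320 -> S330).
# The helper callables are placeholders, not APIs defined by the patent.
def reconfiguration_loop(grid, compute_graph, collect_telemetry,
                         optimize_graph, project, rounds=3):
    """S310-S330 as a loop: observe telemetry, optimize, re-project."""
    project(grid, compute_graph)                          # initial projection
    for _ in range(rounds):                               # the patent loops indefinitely
        telemetry = collect_telemetry(grid)               # path -> hit count
        likely_path = max(telemetry, key=telemetry.get)   # S310: likely compute path
        compute_graph = optimize_graph(compute_graph, likely_path)  # S320: optimize at runtime
        project(grid, compute_graph)                      # S330: re-project, back to S310

# Tiny mock demonstration with constant stand-ins for the real mechanisms.
reconfiguration_loop(
    grid=object(),
    compute_graph={"nodes": ["add"]},
    collect_telemetry=lambda g: {"path-A": 10, "path-B": 2},
    optimize_graph=lambda graph, path: graph,
    project=lambda g, graph: None,
    rounds=2,
)
```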
[0044] The embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture.
Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units ("CPUs"), a memory, and input/output interfaces.
[0045] The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such computer or processor is explicitly shown.
[0046] In addition, various other peripheral units may be connected to the computer platform such as an additional network fabric, storage unit and a printing unit. Furthermore, a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal.
[0047] All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions.
[0048] It should be understood that any reference to an element herein using a designation such as "first," "second," and so forth does not generally limit the quantity or order of those elements. Rather, these designations are generally used herein as a convenient method of distinguishing between two or more elements or instances of an element. Thus, a reference to first and second elements does not mean that only two elements may be employed there or that the first element must precede the second element in some manner. Also, unless stated otherwise, a set of elements comprises one or more elements.
[0049] As used herein, the phrase "at least one of" followed by a listing of items means that any of the listed items can be utilized individually, or any combination of two or more of the listed items can be utilized. For example, if a system is described as including "at least one of A, B, and C," the system can include A alone; B alone; C alone; A and B in combination; B and C in combination; A and C in combination; or A, B, and C in combination.

Claims

What is claimed is:
1. A computing grid, comprising:
an interconnect network including input ports and output ports;
a plurality of egress ports;
a plurality of configurable data routing junctions;
a plurality of logical elements interconnected using the plurality of configurable data routing junctions;
a plurality of ingress ports,
wherein at least one compute graph is projected onto the computing grid as a configuration of various elements of the computing grid.
2. The computing grid of claim 1, wherein an optimized compute graph is projected onto the interconnected logical elements at runtime.
3. The computing grid of claim 1, wherein each of the plurality of configurable data routing junctions is any one of: a multiplexer, a de-multiplexer, and a switch.
4. The computing grid of claim 1, wherein each of the plurality of logical elements is configured to perform any one of: a unary operation, a binary operation, and a ternary operation.
5. The computing grid of claim 1, wherein each logical element of the plurality of logical elements is configured to perform one operation of a set of predefined operations.
6. The computing grid of claim 5, wherein the computing grid is further adapted to allow re-configuration of the plurality of logical elements at runtime.
7. The computing grid of claim 1, wherein the computing grid further comprises:
a telemetry collection circuit.
8. The computing grid of claim 1, wherein the computing grid is further adapted to:
collect runtime telemetry on at least logical paths; and
determine at least one likely compute path based on the collected runtime telemetry.
9. The computing grid of claim 8, wherein the collected runtime telemetry includes any one of: a number of times that each of the logical paths has been taken, and flow-control parameters.
10. The computing grid of claim 1, wherein the computing grid is further adapted to allow re-configuration of the plurality of configurable data routing junctions at runtime.
11. The computing grid of claim 1, wherein the plurality of ingress ports are connected to some of the plurality of data routing junctions.
12. The computing grid of claim 1, wherein the plurality of egress ports are connected to some of the plurality of data routing junctions.
13. The computing grid of claim 1, wherein the computing grid is configured to accelerate the processing of a computing device.
14. The computing grid of claim 13, wherein the computing device is at least one of: a central processing unit (CPU), a field-programmable gate array (FPGA), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), a quantum computer, a neural network, and an optical computing device.
15. The computing grid of claim 1, wherein the input ports serve as the plurality of egress ports and the output ports serve as the plurality of ingress ports.
16. A method for runtime reconfiguring of code projection on a directed and interconnected computing grid, comprising:
projecting program logic onto the computing grid;
analyzing runtime telemetry from the computing grid to determine a likely compute path;
optimizing, at runtime, the likely compute path; and
re-configuring the computing grid according to a projection of the optimized compute graph.
17. The method of claim 16, wherein the runtime telemetry is gathered in response to the program logic projected on the computing grid.

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP18856462.9A EP3682353A4 (en) 2017-09-13 2018-09-13 Directed and interconnected grid dataflow architecture

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762558090P 2017-09-13 2017-09-13
US62/558,090 2017-09-13

Publications (1)

Publication Number Publication Date
WO2019055675A1 true WO2019055675A1 (en) 2019-03-21

Family

ID=65631099

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/050910 WO2019055675A1 (en) 2017-09-13 2018-09-13 Directed and interconnected grid dataflow architecture

Country Status (3)

Country Link
US (1) US10817344B2 (en)
EP (1) EP3682353A4 (en)
WO (1) WO2019055675A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111222636B (en) * 2020-01-07 2023-06-06 深圳鲲云信息科技有限公司 Deep learning model conversion method, device, server and storage medium
US12001311B2 (en) * 2022-01-06 2024-06-04 Next Silicon Ltd Automatic generation of computation kernels for approximating elementary functions

Family Cites Families (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5347346A (en) 1989-12-25 1994-09-13 Minolta Camera Kabushiki Kaisha Image forming apparatus with improved efficiency of maintenance control
US5321806A (en) 1991-08-21 1994-06-14 Digital Equipment Corporation Method and apparatus for transmitting graphics command in a computer graphics system
US5367653A (en) 1991-12-26 1994-11-22 International Business Machines Corporation Reconfigurable multi-way associative cache memory
US5933642A (en) 1995-04-17 1999-08-03 Ricoh Corporation Compiling system and method for reconfigurable computing
US5970254A (en) 1997-06-27 1999-10-19 Cooke; Laurence H. Integrated processor and programmable data path chip for reconfigurable computing
US6347346B1 (en) 1999-06-30 2002-02-12 Chameleon Systems, Inc. Local memory unit system with global access for use on reconfigurable chips
US6871341B1 (en) 2000-03-24 2005-03-22 Intel Corporation Adaptive scheduling of function cells in dynamic reconfigurable logic
US7269174B2 (en) 2003-03-28 2007-09-11 Modular Mining Systems, Inc. Dynamic wireless network
EP1808774A1 (en) * 2005-12-22 2007-07-18 St Microelectronics S.A. A hierarchical reconfigurable computer architecture
US7904848B2 (en) 2006-03-14 2011-03-08 Imec System and method for runtime placement and routing of a processing array
US8156307B2 (en) 2007-08-20 2012-04-10 Convey Computer Multi-processor system having at least one processor that comprises a dynamically reconfigurable instruction set
US20110213950A1 (en) 2008-06-11 2011-09-01 John George Mathieson System and Method for Power Optimization
JP5294304B2 (en) 2008-06-18 2013-09-18 日本電気株式会社 Reconfigurable electronic circuit device
US20110099562A1 (en) 2008-07-01 2011-04-28 Morphing Machines Pvt Ltd Method and System on Chip (SoC) for Adapting a Reconfigurable Hardware for an Application at Runtime
US8554074B2 (en) * 2009-05-06 2013-10-08 Ciena Corporation Colorless, directionless, and gridless optical network, node, and method
US8230176B2 (en) 2009-06-26 2012-07-24 International Business Machines Corporation Reconfigurable cache
KR101076869B1 (en) 2010-03-16 2011-10-25 광운대학교 산학협력단 Memory centric communication apparatus in coarse grained reconfigurable array
US8601013B2 (en) * 2010-06-10 2013-12-03 Micron Technology, Inc. Analyzing data using a hierarchical structure
US8880866B2 (en) 2010-10-15 2014-11-04 Coherent Logix, Incorporated Method and system for disabling communication paths in a multiprocessor fabric by setting register values to disable the communication paths specified by a configuration
US8621151B2 (en) 2010-11-23 2013-12-31 IP Cube Partners (IPC) Co., Ltd. Active memory processor system
US8504778B2 (en) 2010-11-24 2013-08-06 IP Cube Partners (ICP) Co., Ltd. Multi-core active memory processor system
US8589628B2 (en) 2010-11-29 2013-11-19 IP Cube Partners (ICP) Co., Ltd. Hybrid active memory processor system
US20120284501A1 (en) * 2011-05-06 2012-11-08 Xcelemor, Inc. Computing system with hardware reconfiguration mechanism and method of operation thereof
US9024655B2 (en) 2012-02-21 2015-05-05 Wave Semiconductor, Inc. Multi-threshold flash NCL circuitry
US8767501B2 (en) 2012-07-17 2014-07-01 International Business Machines Corporation Self-reconfigurable address decoder for associative index extended caches
US9563401B2 (en) 2012-12-07 2017-02-07 Wave Computing, Inc. Extensible iterative multiplier
US9588773B2 (en) 2013-01-07 2017-03-07 Wave Computing, Inc. Software based application specific integrated circuit
US9590629B2 (en) 2013-11-02 2017-03-07 Wave Computing, Inc. Logical elements with switchable connections
US9460012B2 (en) 2014-02-18 2016-10-04 National University Of Singapore Fusible and reconfigurable cache architecture
US20150268963A1 (en) 2014-03-23 2015-09-24 Technion Research & Development Foundation Ltd. Execution of data-parallel programs on coarse-grained reconfigurable architecture hardware
US9553818B2 (en) * 2014-06-27 2017-01-24 Adtran, Inc. Link biased data transmission
GB201415796D0 (en) 2014-09-07 2014-10-22 Technion Res & Dev Foundation Logical-to-physical block mapping inside the disk controller: accessing data objects without operating system intervention
US9946832B2 (en) * 2014-11-13 2018-04-17 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Optimized placement design of network and infrastructure components
US9692419B2 (en) 2014-11-15 2017-06-27 Wave Computing, Inc. Compact logic evaluation gates using null convention
US10503524B2 (en) 2016-03-22 2019-12-10 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Interception of a function call, selecting a function from available functions and rerouting the function call
US10884761B2 (en) 2016-03-22 2021-01-05 Lenovo Enterprise Solutions (Singapore) Pte. Ltd Best performance delivery in heterogeneous computing unit environment
US10416999B2 (en) 2016-12-30 2019-09-17 Intel Corporation Processors, methods, and systems with a configurable spatial accelerator
US10469397B2 (en) * 2017-07-01 2019-11-05 Intel Corporation Processors and methods with configurable network-based dataflow operator circuits
US10445451B2 (en) * 2017-07-01 2019-10-15 Intel Corporation Processors, methods, and systems for a configurable spatial accelerator with performance, correctness, and power reduction features
US10467183B2 (en) * 2017-07-01 2019-11-05 Intel Corporation Processors and methods for pipelined runtime services in a spatial array
US10515046B2 (en) * 2017-07-01 2019-12-24 Intel Corporation Processors, methods, and systems with a configurable spatial accelerator
EP3662384A4 (en) 2017-08-03 2021-05-05 Next Silicon Ltd Runtime optimization of configurable hardware
WO2019055675A1 (en) 2017-09-13 2019-03-21 Next Silicon, Ltd. Directed and interconnected grid dataflow architecture

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050265258A1 (en) * 2004-05-28 2005-12-01 Kodialam Muralidharan S Efficient and robust routing independent of traffic pattern variability
US20060242288A1 (en) * 2004-06-24 2006-10-26 Sun Microsystems, Inc. inferential diagnosing engines for grid-based computing systems
US20060294150A1 (en) * 2005-06-27 2006-12-28 Stanfill Craig W Managing metadata for graph-based computations
US20160342396A1 (en) * 2015-05-20 2016-11-24 Ab lnitio Technology LLC Visual program specification and compilation of graph-based computation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3682353A4 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10817309B2 (en) 2017-08-03 2020-10-27 Next Silicon Ltd Runtime optimization of configurable hardware
US10817344B2 (en) 2017-09-13 2020-10-27 Next Silicon Ltd Directed and interconnected grid dataflow architecture
US11269526B2 (en) 2020-04-23 2022-03-08 Next Silicon Ltd Interconnected memory grid with bypassable units
US11644990B2 (en) 2020-04-23 2023-05-09 Next Silicon Ltd Interconnected memory grid with bypassable units

Also Published As

Publication number Publication date
EP3682353A4 (en) 2021-12-08
EP3682353A1 (en) 2020-07-22
US20190079803A1 (en) 2019-03-14
US10817344B2 (en) 2020-10-27

Similar Documents

Publication Publication Date Title
US10817344B2 (en) Directed and interconnected grid dataflow architecture
US10380063B2 (en) Processors, methods, and systems with a configurable spatial accelerator having a sequencer dataflow operator
US10515046B2 (en) Processors, methods, and systems with a configurable spatial accelerator
US10564980B2 (en) Apparatus, methods, and systems for conditional queues in a configurable spatial accelerator
US10565134B2 (en) Apparatus, methods, and systems for multicast in a configurable spatial accelerator
US11307873B2 (en) Apparatus, methods, and systems for unstructured data flow in a configurable spatial accelerator with predicate propagation and merging
CN108268278B (en) Processor, method and system with configurable spatial accelerator
US11086816B2 (en) Processors, methods, and systems for debugging a configurable spatial accelerator
CN109213723B (en) Processor, method, apparatus, and non-transitory machine-readable medium for dataflow graph processing
EP3726389B1 (en) Apparatuses, methods, and systems for memory interface circuit allocation in a configurable spatial accelerator
US10445451B2 (en) Processors, methods, and systems for a configurable spatial accelerator with performance, correctness, and power reduction features
US10445234B2 (en) Processors, methods, and systems for a configurable spatial accelerator with transactional and replay features
US10416999B2 (en) Processors, methods, and systems with a configurable spatial accelerator
CN111566623A (en) Apparatus, method and system for integrated performance monitoring in configurable spatial accelerators
US10853073B2 (en) Apparatuses, methods, and systems for conditional operations in a configurable spatial accelerator
US20200210358A1 (en) Apparatuses, methods, and systems for in-network storage in a configurable spatial accelerator
CN104111818B (en) For the processor of batch thread process, processing method and code generating device
US10817309B2 (en) Runtime optimization of configurable hardware

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18856462

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018856462

Country of ref document: EP

Effective date: 20200414