US20210200706A1 - High Speed, Parallel Configuration of Multiple Field Programmable Gate Arrays

High Speed, Parallel Configuration of Multiple Field Programmable Gate Arrays

Info

Publication number
US20210200706A1
Authority
US
United States
Prior art keywords
configuration bit image, configurable logic circuit, memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US17/201,022
Inventor
Robert Trout
Jeremy B. Chritz
Gregory M. Edvenson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Micron Technology Inc
Original Assignee
Micron Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Priority claimed from US 14/201,824 (now US 9,734,284 B2)
Priority claimed from US 14/213,495 (now US 9,740,798 B2)
Application filed by Micron Technology, Inc.
Priority to US 17/201,022
Publication of US 2021/0200706 A1
Legal status: Pending

Classifications

    • G06F 13/4022: Coupling between buses using switching circuits, e.g. switching matrix, connection or expansion network
    • G06F 13/28: Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access (DMA), cycle steal
    • G06F 13/4282: Bus transfer protocol, e.g. handshake, synchronisation, on a serial bus, e.g. I2C bus, SPI bus
    • G06F 16/90344: Query processing by using string matching techniques
    • G06F 3/061: Improving I/O performance
    • G06F 3/0655: Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0688: Non-volatile semiconductor memory arrays
    • G06F 2213/0026: PCI express

Definitions

  • Exemplary embodiments of the present invention provide numerous advantages, including a very rapid, parallel method for configuring a large number of field programmable gate arrays (“FPGAs”) that largely bypasses local nonvolatile memory such as FLASH.
  • a representative method of configuring a system having at least one host computing system and one or more field programmable gate arrays (“FPGAs”) comprises: using a host processor, storing a first configuration bit image for an application in a host memory; configuring the one or more field programmable gate arrays with a communication functionality, the communication functionality provided in a second configuration bit image stored in a nonvolatile memory; using the host processor, transmitting a message to the one or more field programmable gate arrays, the message comprising a memory address of the first configuration bit image in the host memory; using a DMA engine, for each field programmable gate array, accessing the host memory and obtaining the first configuration bit image; and using the first configuration bit image, configuring the field programmable gate array.
  • the message is transmitted to the one or more field programmable gate arrays through PCIe communication lines.
  • a representative method may further comprise: using the field programmable gate array, transmitting the first configuration bit image to one or more secondary field programmable gate arrays; and using the first configuration bit image, configuring the secondary field programmable gate arrays.
  • the first configuration bit image is transmitted to the one or more secondary field programmable gate arrays through JTAG communication lines.
  • the communication functionality is PCIe.
  • the message further comprises a file size of the configuration bit image.
  • a representative method may further comprise configuring a DMA engine in the one or more field programmable gate arrays, the DMA engine functionality provided in a third configuration bit image stored in the nonvolatile memory.
  • a representative method may further comprise: using the host processor, transmitting a message to the one or more field programmable gate arrays, the message comprising a memory address of a third configuration bit image in the host memory; using a DMA engine, for each field programmable gate array, accessing the host memory and obtaining the third configuration bit image; and using the third configuration bit image, reconfiguring the field programmable gate array.
  • a representative system comprises: a host computing system comprising a host processor and a host memory, a first configuration bit image for an application stored in the host memory, the host processor to transmit a message comprising a memory address of the first configuration bit image in the host memory; one or more nonvolatile memories, each nonvolatile memory storing a second configuration bit image for a communication functionality; and a plurality of primary field programmable gate arrays, each primary field programmable gate array coupled to the host processor and coupled to a nonvolatile memory, each primary field programmable gate array configurable for the communication functionality using the second configuration bit image, each primary field programmable gate array having a DMA engine and, in response to the message, to use the DMA engine to access the host memory and obtain the first configuration bit image, and each primary field programmable gate array configurable for the application using the first configuration bit image.
  • a representative system may further comprise: a plurality of secondary field programmable gate arrays coupled to a corresponding primary field programmable gate array of the plurality of primary field programmable gate arrays, each corresponding primary field programmable gate array to transmit the first configuration bit image to one or more secondary field programmable gate arrays of the plurality of secondary field programmable gate arrays.
  • each secondary field programmable gate array is configurable for the application using the first configuration bit image.
  • a representative system may further comprise at least one tertiary field programmable gate array configured as a non-blocking crossbar switch and coupled to the plurality of primary field programmable gate arrays and to the plurality of secondary field programmable gate arrays.
  • Another representative system may further comprise a plurality of JTAG communication lines coupling a corresponding primary field programmable gate array to the one or more secondary field programmable gate arrays for transmission of the first configuration bit image.
  • a representative system may further comprise: a PCIe switch; and a plurality of PCIe communication lines coupling the plurality of primary field programmable gate arrays through the PCIe switch to the host processor.
  • the communication functionality is PCIe.
  • the message is transmitted to the one or more primary field programmable gate arrays through the plurality of PCIe communication lines.
  • one or more of the primary field programmable gate arrays are configured for the DMA engine functionality using a third configuration bit image stored in the nonvolatile memory.
  • each primary field programmable gate array, in response to a second message transmitted from the host processor and comprising a memory address of a third configuration bit image in the host memory, to access the host memory and obtain the third configuration bit image; and to reconfigure using the third configuration bit image.
  • a representative system may further comprise: a PCIe switch; a plurality of PCIe communication lines coupled to the PCIe switch; a host computing system coupled to at least one PCIe communication line of the plurality of PCIe communication lines, the host computing system comprising a host processor and a host memory, a first configuration bit image for an application stored in the host memory, the host processor to transmit a message on the at least one PCIe communication line, the message comprising a memory address of the first configuration bit image in the host memory; one or more nonvolatile memories, each nonvolatile memory storing a second configuration bit image for a communication functionality; a plurality of JTAG communication lines; a plurality of primary field programmable gate arrays, each primary field programmable gate array coupled to a corresponding PCIe communication line of the plurality of PCIe communication lines and to one or more corresponding JTAG communication lines of the plurality of JTAG communication lines; each primary field programmable gate array coupled to a nonvolatile memory, each primary field programmable gate array configurable for the communication functionality using the second configuration bit image and, in response to the message, to use a DMA engine to access the host memory, obtain the first configuration bit image, and transmit the first configuration bit image over the one or more corresponding JTAG communication lines; and a plurality of secondary field programmable gate arrays, each secondary field programmable gate array coupled to a corresponding primary field programmable gate array through the JTAG communication lines and configurable for the application using the first configuration bit image.
  • FIG. 1 is a block diagram illustrating an exemplary or representative first system embodiment.
  • FIG. 2 is a block diagram illustrating an exemplary or representative second system embodiment.
  • FIG. 3 is a block diagram illustrating an exemplary or representative third system embodiment.
  • FIG. 4 is a block diagram illustrating an exemplary or representative fourth system embodiment.
  • FIG. 5 is a flow diagram illustrating an exemplary or representative configuration method embodiment.
  • FIG. 6 is a block diagram illustrating exemplary or representative fields for a (stream) packet header.
  • FIG. 7 is a flow diagram illustrating an exemplary or representative communication method embodiment.
  • FIG. 1 is a block diagram illustrating an exemplary or representative first system 100 embodiment.
  • FIG. 2 is a block diagram illustrating an exemplary or representative second system 200 embodiment.
  • FIG. 3 is a block diagram illustrating an exemplary or representative third system 300 embodiment and first apparatus embodiment.
  • FIG. 4 is a block diagram illustrating an exemplary or representative fourth system 400 embodiment.
  • the systems 100 , 200 , 300 , 400 include one or more host computing systems 105 , such as a computer or workstation, having one or more central processing units (CPUs) 110 , which may be any type of processor, and host memory 120 , which may be any type of memory, such as a hard drive or a solid state drive, and which may be located with or separate from the host CPU 110 , all for example and without limitation, and as discussed in greater detail below.
  • the memory 120 typically stores data to be utilized in or generated by a selected application, and also generally stores a configuration bit file or image for a selected application.
  • any of the host computing systems 105 may include a plurality of different types of processors, such as graphics processors, multi-core processors, etc., also as discussed in greater detail below.
  • the various systems 100 , 200 , 300 , 400 differ from one another in terms of the arrangements of circuit components (including on or in various modules), types of components, and types of communication between and among the various components, as described in greater detail below.
  • the one or more host computing systems 105 are typically coupled through one or more communication channels or lines, illustrated as PCI express (Peripheral Component Interconnect Express or “PCIe”) lines 130, either directly or through a PCIe switch 125, to one or more configurable logic elements such as one or more FPGAs 150 (including FPGAs 160, 170) (such as a Spartan 6 FPGA or a Kintex-7 FPGA, both available from Xilinx, Inc. of San Jose, Calif., US, or a Stratix 10 or Cyclone V FPGA available from Altera Corp.), each of which in turn is coupled to a nonvolatile memory 140, such as a FLASH memory (such as for storing configuration bit images), and to a plurality of random access memories 190, such as a plurality of DDR3 (SODIMM) memory integrated circuits, such as for data storage for computation, communication, etc., for example and without limitation.
  • each FPGA 150 and corresponding memories 140 , 190 directly coupled to that FPGA 150 are collocated on a corresponding computing module (or circuit board) 175 as a module or board in a rack mounted system having many such computing modules 175 , such as those available from Pico Computing of Seattle, Wash. US.
  • each computing module 175 includes as an option PCIe input and output (I/O) connector(s) 230 to provide the PCIe 130 connections, such as for a rack mounted system.
  • I/O connector(s) 230 , 235 may also include additional coupling functionality, such as JTAG coupling, input power, ground, etc., for example and without limitation, and are illustrated with such additional connectivity in FIG. 4 .
  • the PCIe switch 125 may be located or positioned anywhere in a system 100 , 200 , 300 , 400 , such as on a separate computing module (such as a backplane circuit board, which can be implemented with computing module 195 , for example), or on any of the computing modules 175 , 180 , 185 , 195 , 115 for example and without limitation.
  • other types of communication lines or channels may be utilized to couple the one or more host computing systems 105 to the FPGAs 150 , such as an Ethernet line, which in turn may be coupled to other intervening rack-mounted components to provide communication to and from one or more FPGAs 150 ( 160 , 170 ) and other modules.
  • the various FPGAs 150 may have additional or alternative types of communication between and among the PCIe switch 125 and other FPGAs 150 ( 160 , 170 ), such as via general purpose (GP) I/O lines 131 (illustrated in FIG. 4 ).
  • PCIe switch 125 (e.g., available from PLX Technology, Inc. of Sunnyvale, Calif., US), or one or more of the FPGAs 150 ( 160 , 170 ), may also be configured (as an option) as one or more non-blocking crossbar switches 220 , illustrated in FIG. 1 as part of (or a configuration of) PCIe switch 125 .
  • the non-blocking crossbar switch 220 provides for pairwise and concurrent communication (communication lines 221 ) between and among the FPGAs 150 , 160 , 170 and any of various memories ( 120 , 190 , for example and without limitation), without communication between any given pair of FPGAs 150 , 160 , 170 blocking any other communication between another pair of FPGAs 150 , 160 , 170 .
  • one or more non-blocking crossbar switches 220 are provided (within a PCIe switch 125 ) to have sufficient capacity to enable direct FPGA to FPGA communication between and among all of the FPGAs 150 , 160 , 170 in a selected portion of the system 100 , 200 , 300 , 400 .
  • one or more non-blocking crossbar switches 220 are implemented using one or more FPGAs 150 which have been configured accordingly, as illustrated in FIG. 2 , which may also be considered a tertiary (or third) FPGA 150 when included in the various hierarchical embodiments, such as illustrated in FIG. 2 .
  • one or more non-blocking crossbar switches 220 are implemented using one or more PCIe switches 125 which also have been configured accordingly, illustrated as second PCIe switch 125 A in FIG. 4 .
  • one or more non-blocking crossbar switches 220 are provided internally within any of the one or more FPGAs 150 , 160 , 170 for concurrent accesses to a plurality of memories 190 , for example and without limitation.
  • the system 200 differs insofar as the various FPGAs are hierarchically organized into one or more primary (or central) configurable logic elements, such as one or more primary FPGAs 170, and a plurality of secondary (or remote) configurable logic elements, such as one or more secondary FPGAs 160 (FPGAs 150, 160, 170 may be any type of configurable logic element, such as a Spartan 6 FPGA, a Kintex-7 FPGA, a Stratix 10, or a Cyclone V FPGA as mentioned above, also for example and without limitation).
  • the one or more host computing systems 105 are typically coupled through one or more communication channels or lines, illustrated as PCI express (Peripheral Component Interconnect Express or “PCIe”) lines 130 , either directly or through a PCIe switch 125 , to primary FPGAs 170 , each of which in turn is coupled to a plurality of secondary FPGAs 160 , also through one or more corresponding communication channels, illustrated as a plurality of JTAG lines 145 (Joint Test Action Group (“JTAG”) is the common name for the IEEE 1149.1 Standard Test Access Port and Boundary-Scan Architecture), or through any of the PCIe lines 130 or GP I/O lines 131 .
  • each of the secondary FPGAs 160 is provided on a separate computing module 185 which is couplable (through I/O connector(s) 235 and PCIe lines 130 and/or JTAG lines 145 ) to the computing module 180 having the primary FPGA 170 .
  • the PCIe lines 130 and JTAG lines 145 are illustrated as part of a larger bus (which may also include GP I/O lines 131 ), and typically routed to different pins on the various FPGAs 150 , 160 , 170 , typically via I/O connectors 235 , for example, for the various modular configurations or arrangements.
  • PCIe switch 125 also may be coupled to a separate FPGA, such as an FPGA 150 , such as illustrated in FIG. 1 , which also may be coupled to a nonvolatile memory 140 , for example and without limitation.
  • the PCIe switch 125 may be positioned anywhere in a system 100 , 200 , 300 , 400 , such as on a separate computing module, for example and without limitation, or on one or more of the computing modules 180 having the primary FPGA 170 , as illustrated in FIG. 4 for computing module 195 , which can be utilized to implement a backplane for multiple modules 175 , as illustrated.
  • the PCIe switch 125 is typically located on the backplane of a rack-mounted system (available from Pico Computing, Inc. of Seattle, Wash. US).
  • a PCIe switch 125 may also be collocated on various computing modules (e.g., 195 ), to which many other modules (e.g., 175 ) connect (e.g., through PCIe connector(s) 230 or, more generally, I/O connectors 235 which include PCIe, JTAG, GPIO, power, ground, and other signaling lines).
  • other types of communication lines or channels may be utilized to couple the one or more host computing systems 105 to the primary FPGAs 170 and/or secondary FPGAs 160, such as an Ethernet line, which in turn may be coupled to other intervening rack-mounted components to provide communication to and from one or more primary FPGAs 170 and other modules.
  • the primary and secondary FPGAs 170 and 160 are located on separate computing modules 180 and 185 , also in a rack mounted system having many such computing modules 180 and 185 , also such as those available from Pico Computing of Seattle, Wash. US.
  • the computing modules 180 and 185 may be coupled to each other via any type of communication lines, including PCIe and/or JTAG.
  • each of the secondary FPGAs 160 is located on a modular computing module (or circuit board) 185 which has corresponding I/O connectors 235 to plug into a region or slot of the primary FPGA 170 computing module 180, up to the capacity of the primary FPGA 170 computing module 180, such as one to six modular computing modules 185 having secondary FPGAs 160.
  • the I/O connector(s) 235 may include a wide variety of coupling functionality, such as JTAG coupling, PCIe coupling, GP I/O, input power, ground, etc., for example and without limitation.
  • systems 100 , 200 , 300 , 400 function similarly, and any and all of these system configurations are within the scope of the disclosure.
  • each of the various computing modules 175 , 180 , 185 , 195 , 115 typically include many additional components, such as power supplies, additional memory, additional input and output circuits and connectors, switching components, clock circuitry, etc.
  • the various systems 100 , 200 , 300 , 400 may also be combined into a plurality of system configurations, such as mixing the different types of FPGAs 150 , 160 , 170 and computing modules 175 , 180 , 185 , 195 , 115 into the same system, including within the same rack-mounted system.
  • Additional representative system 300, 400 configurations or arrangements are illustrated in FIGS. 3 and 4.
  • the primary and secondary FPGAs 150 and 160 along with PCIe switch 125 , are all collocated on a dedicated computing module 115 as a large module in a rack mounted system having many such computing modules 115 , such as those available from Pico Computing of Seattle, Wash. US.
  • each of the secondary FPGAs 160 is provided on a separate computing module 175 which is couplable to the computing module 195 having the primary FPGA 170 .
  • PCIe switches 125 are also illustrated as collocated on computing module 195 for communication with secondary FPGAs 160 over PCIe communication lines 130 , although this is not required and such a PCIe switch 125 may be positioned elsewhere in a system 100 , 200 , 300 , 400 , such as on a separate computing module, for example and without limitation.
  • the representative system 300 illustrates some additional features which may be included as options in a computing module, and is further illustrated as an example computing module 115 which does not include the optional nonblocking crossbar switch 220 (e.g., in a PCIe switch 125 or as a configuration of an FPGA 150 , 160 , 170 ).
  • the various secondary FPGAs 160 also have direct communication to each other, with each FPGA 160 coupled through communication lines 210 to its neighboring FPGAs 160 , such as serially or “daisy-chained” to each other.
  • one of the FPGAs 160 has been coupled through high speed serial lines 215 , to a hybrid memory cube (“HMC”) 205 , which incorporates multiple layers of memory and at least one logic layer, with very high memory density capability.
  • the FPGA 160 A has been configured as a memory controller (and potentially a switch or router), providing access and communication to and from the HMC 205 for any of the various FPGAs 160 , 170 .
  • a system 100 , 200 , 300 , 400 comprises one or more host computing systems 105 , couplable through one or more communication lines (such as GP I/O lines 131 or PCIe communication lines ( 130 ), directly or through a PCIe switch 125 ), to one or more FPGAs 150 and/or primary FPGAs 170 .
  • each primary FPGA 170 is coupled through one or more communication lines, such as JTAG lines 145 or PCIe communication lines 130 or GP I/O lines 131 , to one or more secondary FPGAs 160 .
  • each FPGA 150 , 160 , 170 is optionally coupled to a non-blocking crossbar switch 220 (e.g., in a PCIe switch 125 or as a configuration of an FPGA 150 , 160 , 170 ) for pairwise communication with any other FPGA 150 , 160 , 170 .
  • each FPGA 150 , 160 , 170 is typically coupled to one or more nonvolatile memories 140 and one or more random access memories 190 , which may be any type of random access memory.
  • the configuration of the FPGAs 150 , 160 , 170 may be performed in a massively parallel process, allowing significant time savings.
  • the full configurations of the FPGAs 150 , 160 , 170 are not required to be stored in nonvolatile memory 140 (such as FLASH), with corresponding read/write cycles which are comparatively slow, configuration of the FPGAs 150 , 160 , 170 may proceed at a significantly more rapid rate, including providing new or updated configurations.
  • the various FPGAs 150 , 160 , 170 may also be configured as known in the art, such as by loading a complete configuration from nonvolatile memory 140 .
  • Another significant feature of the systems 100 , 200 , 300 , 400 is that only basic (or base) resources for the FPGAs 150 or primary FPGAs 170 are stored in the nonvolatile memory 140 (coupled to a FPGA 150 or a primary FPGA 170 ), such as a configuration for communication over the PCIe lines 130 (and possibly GP I/O lines 131 or JTAG lines 145 , such as for secondary FPGAs 160 ), and potentially also a configuration for one or more DMA engines (depending upon the selected FPGA 150 , 160 , 170 , the FPGA 150 , 160 , 170 may be available with incorporated DMA engines).
  • the configurations required to be loaded into the FPGA 150 or primary FPGA 170 are limited or minimal, namely, communication (e.g., PCIe and possibly JTAG) functionality and/or DMA functionality.
  • the only configuration required to be loaded into the FPGA 150 or a primary FPGA 170 is a communication configuration for PCIe functionality.
  • this base PCIe configuration may be loaded quite rapidly from the nonvolatile memory 140 .
  • use of the nonvolatile memory 140 for FPGA configuration is bypassed entirely, both for loading of an initial configuration or an updated configuration.
  • Configuration of the FPGAs 150 or primary FPGAs 170 and secondary FPGAs 160 begins with the host CPU 110 merely transmitting a message or command to one or more FPGAs 150 or primary FPGAs 170 with a memory address or location in the host memory 120 (and typically also a file size) of the configuration bit image (or file) which has been stored in the host memory 120 , i.e., the host CPU 110 sets the DMA registers of the FPGA 150 or primary FPGA 170 with the memory address and file size for the selected configuration bit image (or file) in the host memory 120 .
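As an illustration of the "load FPGA" command described above, the following minimal C sketch shows a host writing a configuration bit image's host-memory address and file size into an FPGA's base DMA registers over a memory-mapped PCIe BAR. The register names, offsets, and 32-bit layout are assumptions for illustration only; they are not taken from the patent or from any particular FPGA or board.

```c
#include <stdint.h>

/* Assumed offsets of the base DMA registers within the FPGA's PCIe BAR. */
enum {
    REG_DMA_HOST_ADDR_LO = 0x00,  /* low 32 bits of the host memory address  */
    REG_DMA_HOST_ADDR_HI = 0x04,  /* high 32 bits of the host memory address */
    REG_DMA_FILE_SIZE    = 0x08,  /* size of the configuration bit image     */
    REG_DMA_CONTROL      = 0x0C,  /* write 1 to start the DMA fetch          */
};

/* 'bar' is assumed to point at the FPGA's memory-mapped PCIe BAR.
 * After these writes, the FPGA's DMA engine pulls the bit image on its own;
 * the host CPU does not move the configuration data itself. */
void load_fpga(volatile uint32_t *bar,
               uint64_t bit_image_host_addr,
               uint32_t bit_image_size)
{
    bar[REG_DMA_HOST_ADDR_LO / 4] = (uint32_t)bit_image_host_addr;
    bar[REG_DMA_HOST_ADDR_HI / 4] = (uint32_t)(bit_image_host_addr >> 32);
    bar[REG_DMA_FILE_SIZE    / 4] = bit_image_size;
    bar[REG_DMA_CONTROL      / 4] = 1;
}
```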
  • Such a “load FPGA” command is repeated for each of the FPGAs 150 or primary FPGAs 170 (and possibly each secondary FPGA 160 , depending upon the selected embodiment), i.e., continuing until the host CPU 110 does not find any more FPGAs 150 or primary FPGAs 170 (and/or secondary FPGAs 160 ) in the system 100 , 200 , 300 , 400 and an error message may be returned.
  • the host CPU 110 transmits one such message or command to each FPGA 150 or primary FPGA 170 that will be handling a thread of a parallel, multi-threaded computation.
  • the host CPU 110 is then literally done with the configuration process, and is typically notified with an interrupt signal from a FPGA 150 or primary FPGA 170 once configuration is complete.
  • each FPGA 150 or primary FPGA 170 accesses the host memory 120 and obtains the configuration bit image (or file) (which configuration also generally is loaded into the FPGA 150 or primary FPGA 170 ).
  • By using the DMA engine, much larger files may be transferred quite rapidly, particularly compared to any packet- or word-based transmission (which would otherwise have to be assembled by the host CPU 110, a comparatively slow and labor-intensive task). This is generally performed in parallel (or serially, depending upon the capability of the host memory 120) for all of the FPGAs 150 or primary FPGAs 170.
  • each primary FPGA 170 then transmits (typically over JTAG lines 145 or PCIe communication lines 130 ) the configuration bit image (or file) to each of the secondary FPGAs 160 , also typically in parallel.
  • each primary FPGA 170 may re-transmit (typically over JTAG lines 145 or PCIe communication lines 130) the information of the load FPGA message or command to each of the secondary FPGAs 160, namely the memory address in the host memory 120 and the file size, and each secondary FPGA 160 may read or otherwise obtain the configuration bit image, also using DMA engines, for example and without limitation.
  • the host computing system 105 may transmit the load FPGA message or command to each of the FPGAs 150 or primary FPGAs 170 and secondary FPGAs 160 , which then obtain the configuration bit image, also using DMA engines as described above. All such variations are within the scope of the disclosure.
  • the configuration bit image is loaded quite rapidly into not only into each of the FPGAs 150 and primary FPGAs 170 but also into each of the secondary FPGAs 160 .
  • This allows not only for an entire computing module 175 (or computing modules 180 , 185 , 195 ) to be reloaded in seconds, rather than hours, but the entire system 100 , 200 , 300 , 400 may be configured and reconfigured in seconds, also rather than hours.
  • read and write operations to local memory (e.g., nonvolatile memory 140) may be bypassed almost completely in the configuration process, resulting in a huge time savings.
  • the configuration bit image may also be stored locally, such as in nonvolatile memory 140 (and/or nonvolatile memory 190 (e.g., FLASH) associated with computing modules 175 , 180 , 185 , 195 , 115 ).
  • FIG. 5 is a flow diagram illustrating an exemplary or representative method embodiment for system configuration and reconfiguration, and provides a useful summary of this process.
  • beginning with start step 240, and with one or more FPGA 150, 160, 170 configurations (as configuration bit images) having been stored in a host memory 120, the system 100, 200, 300, 400 powers on or otherwise starts up, and the FPGAs 150, 160, 170 load the base communication functionality, such as a PCIe configuration image (and possibly DMA functionality), from nonvolatile memory 140, step 245.
  • Step 245 is optional, as such communication functionality also can be provided to FPGAs 150 , 160 , 170 via GPIO (or GP I/O) lines 131 (general purpose input and output lines), for example and without limitation.
  • the host CPU 110 (or more generally, host computing system 105 ) then generates and transmits a “load FPGA” command or message to one or more FPGAs 150 or primary FPGAs 170 (and/or secondary FPGAs 160 ), step 250 , in which the load FPGA command or message includes a starting memory address (in host memory 120 ) and a file size designation for the selected configuration bit image which is to be utilized.
  • the one or more FPGAs 150 or primary FPGAs 170 obtain the configuration bit image from the host memory 120 , step 255 , and use it to configure. Also depending upon the selected embodiment, the one or more FPGAs 150 or primary FPGAs 170 may also transfer the configuration bit image to each of the secondary FPGAs 160 , step 260 , such as over JTAG lines 145 and bypassing nonvolatile memory 140 , 190 , which the secondary FPGAs 160 also use to configure. Also depending upon the selected embodiment, the configuration bit image may be stored locally, step 265 , as a possible option as mentioned above. Having loaded the configuration bit image into the FPGAs 150 , 160 , 170 , the method may end, return step 270 , such as by generating an interrupt signal back to the host computing system 105 .
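A compact sketch of this FIG. 5 flow from the host's perspective is given below, assuming hypothetical helper functions for device enumeration, the "load FPGA" command, and the completion interrupt; none of these names come from the patent, and steps 255-265 (the DMA fetch, any forwarding to secondary FPGAs over JTAG, and optional local storage) occur inside the FPGAs without host involvement.

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct fpga_dev fpga_dev;                         /* one FPGA 150/170       */
int  enumerate_fpgas(fpga_dev **out, int max);            /* discover FPGAs on PCIe */
void send_load_fpga_cmd(fpga_dev *f, uint64_t host_addr,  /* step 250: address+size */
                        uint32_t size);
bool wait_for_config_done_irq(fpga_dev *f);               /* step 270: interrupt    */

/* Host side of the configuration flow: issue one load command per primary FPGA,
 * then simply wait for the "configuration complete" interrupts. */
int configure_all(uint64_t img_addr, uint32_t img_size)
{
    fpga_dev *devs[64];
    int n = enumerate_fpgas(devs, 64);
    for (int i = 0; i < n; i++)
        send_load_fpga_cmd(devs[i], img_addr, img_size);  /* FPGAs fetch in parallel */
    for (int i = 0; i < n; i++)
        if (!wait_for_config_done_irq(devs[i]))
            return -1;                                    /* configuration failed    */
    return 0;
}
```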
  • the systems 100 , 200 , 300 , 400 enable one of the significant features of the present disclosure, namely, the highly limited involvement of the host CPU 110 in data transfers between the host computing system 105 and any of the FPGAs 150 , 160 , 170 , and their associated memories 190 , and additionally, the highly limited involvement of the host CPU 110 in data transfers between and among any of the FPGAs 150 , 160 , 170 , and their associated memories 190 , all of which frees the host computing system 105 to be engaged in other tasks, and further is a significant departure from prior art systems.
  • the data transfer paths are established by the host CPU 110 (or an FPGA 150 , 160 , 170 configured for this task) merely transmitting a message or command to one or more FPGAs 150 , 160 , 170 to set the base DMA registers within the FPGA 150 , 160 , 170 with a memory 190 address (or address or location in the host memory 120 , as the case may be), optionally a file size of the data file, and a stream number, i.e., the host CPU 110 (or another FPGA 150 , 160 , 170 configured for this task) sets the DMA registers of the FPGA(s) 150 , 160 , 170 with the memory address (and optionally a file size) for the selected data file in the host memory 120 or in one of the memories 190 , and also assigns a stream number, including a tie (or tied) stream number if applicable.
  • the system 100 , 200 , 300 , 400 is initialized for data transfer, and these assignments persist for the duration of the application, and do not need to be re-established for subsequent data transfers.
  • the data to be transferred may originate from anywhere within a system 100 , 200 , 300 , 400 , including real-time generation by any of the FPGAs 150 , 160 , 170 , any of the local memories, including memories 190 , in addition to the host memory 120 , and in addition to reception from an external source, for example and without limitation.
  • the host CPU 110 (or an FPGA 150 , 160 , 170 configured for this task) has therefore established the various data transfer paths between and among the host computing system 105 and the FPGAs 150 , 160 , 170 for the selected application.
  • header information for any data transfer includes not only a system address (e.g., PCIe address) for the FPGA 150 , 160 , 170 and/or its associated memories 190 , but also includes the “stream” designations (or information) and “tie (or tied) stream” designations (or information), and is particularly useful for multi-threaded or other parallel computation tasks.
  • the header (e.g., a PCIe data packet header) for any selected data transfer path includes: (1) bits for an FPGA 150 , 160 , 170 and/or memory 190 address and optionally a file size; (2) additional bits for an assignment a stream number to the data transfer (which stream number can be utilized repeatedly for additional data to be transferred subsequently for ongoing computations); and (3) additional bits for any “tie stream” designations, if any are utilized or needed.
  • each FPGA 150 , 160 , 170 may be coupled to a plurality of memories 190
  • each memory address typically also includes a designation of which memory 190 associated with the designated FPGA 150 , 160 , 170 .
  • FIG. 6 is a block diagram illustrating exemplary or representative fields for a (stream) packet header 350 , comprising a plurality of bits designating a first memory address (field 305 ) (typically a memory 190 address), a plurality of bits designating a file size (field 310 ) (as an optional field), a plurality of bits designating a (first) stream number (field 315 ), and as may be necessary or desirable, two additional and optional tie stream fields, namely, a plurality of bits designating the (second) memory 190 address for the tied stream (field 320 ) and a plurality of bits designating a tie (or tied) stream number (field 325 ).
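One way to picture the packet header 350 of FIG. 6 is as a simple C structure. The field ordering follows the figure description above, but the field widths are not specified in the text, so the integer types chosen here are assumptions.

```c
#include <stdint.h>

/* Stream packet header 350 (FIG. 6); widths are illustrative assumptions. */
typedef struct {
    uint64_t mem_addr;        /* field 305: (first) memory 190 address            */
    uint32_t file_size;       /* field 310: file size (optional field)            */
    uint32_t stream_num;      /* field 315: (first) stream number                 */
    uint64_t tie_mem_addr;    /* field 320: memory 190 address of the tied stream */
    uint32_t tie_stream_num;  /* field 325: tie (or tied) stream number           */
} stream_packet_header;
```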
  • Any application may then merely write to the selected stream number or read from the selected stream number for the selected memory 190 address (or FPGA 150 , 160 , 170 address), without any involvement by the host computing system 105 , for as long as the application is running on the system 100 , 200 , 300 , 400 .
  • data transfer in one stream may be tied to a data transfer of another stream, allowing two separate processes to occur without involvement of the host computing system 105 .
  • the first “tie stream” process allows the “daisy chaining” of data transfers, so a data transfer to a first stream number for a selected memory 190 (or FPGA 150, 160, 170 process) on a first computing module 175, 180, 185, 195, 115 may be tied or chained to a subsequent transfer of the same data to another, second stream number for a selected memory 190 (or FPGA 150, 160, 170 process) on a second computing module 175, 180, 185, 195, 115, e.g., data transferred from the host computing system 105 or from a first memory 190 on a first computing module 175, 180, 185, 195, 115 (e.g., card “A”) (stream “1”) to a second memory 190 on a second computing module 175, 180, 185, 195, 115 (e.g., card “B”) will also be further transmitted from the second memory 190 on the second computing module (e.g., card “B”) to the memory 190 (or FPGA process) designated by the tied stream number, such as a third memory 190 on a third computing module (e.g., card “C”).
  • the second “tie stream” process allows the chaining or sequencing of data transfers between and among any of the FPGAs 150 , 160 , 170 without any involvement of the host computing system 105 after the initial setup of the DMA registers in the FPGAs 150 , 160 , 170 .
  • a data result output from a first stream number for a selected memory 190 (or FPGA 150, 160, 170 process) on a first computing module 175, 180, 185, 195, 115 may be tied or chained to be input data for another, second stream number for a selected memory 190 (or FPGA 150, 160, 170 process) on a second computing module 175, 180, 185, 195, 115, e.g., stream “3” data transferred from a first memory 190 on a first computing module 175, 180, 185, 195, 115 (e.g., card “A”) will be transferred as a stream “4” to a second memory 190 on a second computing module 175, 180, 185, 195, 115 (e.g., card “B”), thereby tying streams 3 and 4, not only for the current data transfer, but for the entire duration of the application (or until changed).
  • any of these various data transfers may occur through any of the various communication channels of the systems 100 , 200 , 300 , 400 , and to and from any available internal or external resource, in addition to transmission over the PCIe network (PCIe switch 125 with PCIe communication lines 130 ), including through the non-blocking crossbar switch 220 (as an option) and over the JTAG lines 145 and/or GP I/O lines 131 and/or communication lines 210 , depending upon the selected system 100 , 200 , 300 , 400 configuration. All of these various mechanisms provide for several types of direct FPGA-to-FPGA communication, without any ongoing involvement by host computing system 105 once the DMA registers have been established.
  • the host CPU 110 is then literally done with the data transfer process, and from the perspective of the host computing system 105 , following transmission of the DMA setup messages having a designation of a memory 190 address, a file size (as an option), and a stream number, the data transfer configuration process is complete.
  • This is a huge advance over prior art methods of data transfer in supercomputing systems utilizing FPGAs.
  • each FPGA 150 , 160 , 170 accesses the host memory 120 , or a memory 190 , or any other data source, and obtains the data file for a read operation, or performs a corresponding write operation, all using the established address and stream number.
  • by using the DMA engine, much larger files may be transferred quite rapidly, particularly compared to any packet- or word-based transmission. This is generally performed in parallel (or serially, depending upon the application) for all of the FPGAs 150, 160, 170.
  • FIG. 7 is a flow diagram illustrating an exemplary or representative method embodiment for data transfer within a system 100 , 200 , 300 , 400 and provides a useful summary.
  • one or more DMA registers associated with any of the FPGAs 150, 160, 170 and their associated memories 190 are set up, step 410, with a memory (120, 190) address, a file size (as an option, and not necessarily required), a stream number, and any tie (or tied) stream number.
  • data is transferred between and among the FPGAs 150, 160, 170 using the designated addresses and stream numbers, step 415.
  • when a tie (or tied) stream has been designated, step 420, the data is transferred to the next, tied stream, step 425, as the case may be.
  • when there are additional data transfers, step 430, the method returns to step 415, and the process iterates. Otherwise, the method determines whether the application is complete, step 435, and if not, returns to step 415 and iterates as well.
  • when another application is to be performed, step 440, the method returns to step 410 to set up the DMA registers for the next application, and iterates.
  • otherwise, the method may end, return step 445.
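The FIG. 7 loop can be summarized in C as follows, with hypothetical helper functions standing in for the DMA-register setup and the stream transfers; only the step structure (410-445) follows the text above.

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct stream stream;
stream *setup_stream(uint64_t mem_addr, uint32_t file_size,   /* step 410 */
                     int stream_num, int tie_stream_num);
void    transfer(stream *s);                                  /* step 415 */
bool    has_tied_stream(stream *s);                           /* step 420 */
void    transfer_tied(stream *s);                             /* step 425 */
bool    more_transfers(void);                                 /* step 430 */
bool    app_complete(void);                                   /* step 435 */
bool    next_application(void);                               /* step 440 */

void run_data_transfers(uint64_t mem_addr, uint32_t file_size,
                        int stream_num, int tie_stream_num)
{
    do {
        /* Step 410: set the DMA registers once; the assignment persists
         * for the duration of the application. */
        stream *s = setup_stream(mem_addr, file_size, stream_num, tie_stream_num);
        do {
            transfer(s);                       /* step 415: read/write by stream */
            if (has_tied_stream(s))
                transfer_tied(s);              /* step 425: daisy-chain/pipeline */
        } while (more_transfers() || !app_complete());   /* steps 430, 435       */
    } while (next_application());                        /* step 440: next app   */
    /* return step 445 */
}
```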
  • Coupled means and includes any direct or indirect electrical, structural or magnetic coupling, connection or attachment, or adaptation or capability for such a direct or indirect electrical, structural or magnetic coupling, connection or attachment, including integrally formed components and components which are coupled via or through another component.
  • a CPU or “processor” 110 may be any type of processor, and may be embodied as one or more processors 110 , configured, designed, programmed or otherwise adapted to perform the functionality discussed herein.
  • a processor 110 may include use of a single integrated circuit (“IC”), or may include use of a plurality of integrated circuits or other components connected, arranged or grouped together, such as controllers, microprocessors, digital signal processors (“DSPs”), parallel processors, multiple core processors, custom ICs, application specific integrated circuits (“ASICs”), field programmable gate arrays (“FPGAs”), adaptive computing ICs, associated memory (such as RAM, DRAM and ROM), and other ICs and components, whether analog or digital.
  • “processor” should be understood to equivalently mean and include a single IC, or an arrangement of custom ICs, ASICs, processors, microprocessors, controllers, FPGAs, adaptive computing ICs, or some other grouping of integrated circuits which perform the functions discussed below, with associated memory, such as microprocessor memory or additional RAM, DRAM, SDRAM, SRAM, MRAM, ROM, FLASH, EPROM or E2PROM.
  • a processor (such as processor 110 ), with its associated memory, may be adapted or configured (via programming, FPGA interconnection, or hard-wiring) to perform the methodology of the invention, as discussed above.
  • the methodology may be programmed and stored, in a processor 110 with its associated memory (and/or memory 120 ) and other equivalent components, as a set of program instructions or other code (or equivalent configuration or other program) for subsequent execution when the processor is operative (i.e., powered on and functioning).
  • when the processor 110 is implemented in whole or part as FPGAs, custom ICs and/or ASICs, the FPGAs, custom ICs or ASICs also may be designed, configured and/or hard-wired to implement the methodology of the invention.
  • the processor 110 may be implemented as an arrangement of analog and/or digital circuits, controllers, microprocessors, DSPs and/or ASICs, collectively referred to as a “controller”, which are respectively hard-wired, programmed, designed, adapted or configured to implement the methodology of the invention, including possibly in conjunction with a memory 120 .
  • the memory 120, which may include a data repository (or database), may be embodied in any number of forms, including within any computer or other machine-readable data storage medium, memory device or other storage or communication device for storage or communication of information, currently known or which becomes available in the future, including, but not limited to, a memory integrated circuit (“IC”), or memory portion of an integrated circuit (such as the resident memory within a processor 110), whether volatile or non-volatile, whether removable or non-removable, including without limitation RAM, FLASH, DRAM, SDRAM, SRAM, MRAM, FeRAM, ROM, EPROM or E2PROM, or any other form of memory device, such as a magnetic hard drive, an optical drive, a magnetic disk or tape drive, a hard disk drive, other machine-readable storage or memory media such as a floppy disk, a CDROM, a CD-RW, digital versatile disk (DVD) or other optical memory, or any other type of memory, storage medium, or data storage apparatus or circuit, which is known or which becomes known, depending upon the selected embodiment.
  • the processor 110 is hard-wired or programmed, using software and data structures of the invention, for example, to perform the methodology of the present invention.
  • the system and method of the present invention may be embodied as software which provides such programming or other instructions, such as a set of instructions and/or metadata embodied within a non-transitory computer readable medium, discussed above.
  • metadata may also be utilized to define the various data structures of a look up table or a database.
  • Such software may be in the form of source or object code, by way of example and without limitation. Source code further may be compiled into some form of instructions or object code (including assembly language instructions or configuration information).
  • the software, source code or metadata of the present invention may be embodied as any type of code, such as C, C++, SystemC, LISA, XML, Java, Brew, SQL and its variations (e.g., SQL 99 or proprietary versions of SQL), DB2, Oracle, or any other type of programming language which performs the functionality discussed herein, including various hardware definition or hardware modeling languages (e.g., Verilog, VHDL, RTL) and resulting database files (e.g., GDSII).
  • a “construct”, “program construct”, “software construct” or “software”, as used equivalently herein, means and refers to any programming language, of any kind, with any syntax or signatures, which provides or can be interpreted to provide the associated functionality or methodology specified (when instantiated or loaded into a processor or computer and executed, including the processor 110 , for example).
  • the software, metadata, or other source code of the present invention and any resulting bit file may be embodied within any tangible, non-transitory storage medium, such as any of the computer or other machine-readable data storage media, as computer-readable instructions, data structures, program modules or other data, such as discussed above with respect to the memory 120 , e.g., a floppy disk, a CDROM, a CD-RW, a DVD, a magnetic hard drive, an optical drive, or any other type of data storage apparatus or medium, as mentioned above.
  • any signal arrows in the drawings/Figures should be considered only exemplary, and not limiting, unless otherwise specifically noted. Combinations of components or steps will also be considered within the scope of the present invention, particularly where the ability to separate or combine is unclear or foreseeable.
  • the disjunctive term “or”, as used herein and throughout the claims that follow, is generally intended to mean “and/or”, having both conjunctive and disjunctive meanings (and is not confined to an “exclusive or” meaning), unless otherwise indicated.
  • “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise.
  • the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.

Abstract

Representative embodiments are disclosed for a rapid and highly parallel configuration process for field programmable gate arrays (FPGAs). In a representative method embodiment, using a host processor, a first configuration bit image for an application is stored in a host memory; one or more FPGAs are configured with a communication functionality such as PCIe using a second configuration bit image stored in a nonvolatile memory; a message is transmitted by the host processor to the FPGAs, usually via PCIe lines, with the message comprising a memory address and also a file size of the first configuration bit image in the host memory; using a DMA engine, each FPGA obtains the first configuration bit image from the host memory and is then configured using the first configuration bit image. Primary FPGAs may further transmit the first configuration bit image to additional, secondary FPGAs, such as via JTAG lines, for their configuration.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of and claims the benefit of and priority to U.S. patent application Ser. No. 15/594,627, filed May 14, 2017, inventors Robert Trout et al., titled “High Speed, Parallel Configuration of Multiple Field Programmable Gate Arrays”, which is a continuation of and claims the benefit of and priority to U.S. patent application Ser. No. 14/608,414, filed Jan. 29, 2015 and issued May 23, 2017 as U.S. Pat. No. 9,658,977, inventors Robert Trout et al., titled “High Speed, Parallel Configuration of Multiple Field Programmable Gate Arrays”, which is a nonprovisional of and, under 35 U.S.C. Section 119, further claims the benefit of and priority to U.S. Provisional Patent Application No. 61/940,009, filed Feb. 14, 2014, inventors Jeremy B. Chritz et al., titled “High Speed, Parallel Configuration of Multiple Field Programmable Gate Arrays”, which are commonly assigned herewith, the entire contents of which are incorporated herein by reference with the same full force and effect as if set forth in their entireties herein, and with priority claimed for all commonly disclosed subject matter.
  • U.S. patent application Ser. No. 14/608,414 also is a nonprovisional of and, under 35 U.S.C. Section 119, further claims the benefit of and priority to U.S. Provisional Patent Application No. 61/940,472, filed Feb. 16, 2014, inventors Jeremy B. Chritz et al., titled “System and Method for Independent, Direct and Parallel Communication Among Multiple Field Programmable Gate Arrays”, which is commonly assigned herewith, the entire contents of which are incorporated herein by reference with the same full force and effect as if set forth in its entirety herein, and with priority claimed for all commonly disclosed subject matter.
  • U.S. patent application Ser. No. 14/608,414 also is a continuation-in-part of and further claims priority to U.S. patent application Ser. No. 14/213,495, filed Mar. 14, 2014, inventors Paul T. Draghicescu, Gregory M. Edvenson, and Corey B. Olson, titled “Inexact Search Acceleration”, which is a continuation-in-part of and further claims priority to U.S. patent application Ser. No. 14/201,824, filed Mar. 8, 2014, inventor Corey B. Olson, titled “Hardware Acceleration of Short Read Mapping for Genomic and Other Types of Analyses”, both of which further claim priority to and the benefit of U.S. Provisional Patent Application No. 61/940,472 and U.S. Provisional Patent Application No. 61/940,009 as referenced above, and further claim priority to and the benefit under 35 U.S.C. Section 119 of U.S. Provisional Patent Application No. 61/790,407, filed Mar. 15, 2013, inventor Corey B. Olson, titled “Hardware Acceleration of Short Read Mapping”, and of U.S. Provisional Patent Application No. 61/790,720, filed Mar. 15, 2013, inventors Paul T. Draghicescu, Gregory M. Edvenson, and Corey B. Olson, titled “Inexact Search Acceleration on FPGAs Using the Burrows-Wheeler Transform”, which are commonly assigned herewith, the entire contents of which are incorporated herein by reference with the same full force and effect as if set forth in their entireties herein, and with priority claimed for all commonly disclosed subject matter.
  • FIELD OF THE INVENTION
  • The present invention relates generally to computing applications, and more specifically to the parallel loading of one or more configuration bit images from a host device into configurable logic circuits such as a plurality of FPGAs.
  • BACKGROUND
  • Configurable logic circuits such as field programmable gate arrays (“FPGAs”) historically may take a considerable time period to load one or more configurations, typically referred to as configuration bit images (or bit images), which are typically stored in adjacent FLASH memory (as a type of nonvolatile memory). This problem is magnified when many FPGAs require configuration, such as upon system power up or when another, different application is to be performed by the FPGAs, especially for supercomputing applications.
  • Storing a bit image in a nonvolatile memory (such as a FLASH memory) local to a given FPGA also may be comparatively slow for configuring an FPGA, and in addition, such local storage does not help when a bit image for an application is modified or updated, or when another new or different application is to be performed on the FPGA. For such systems, the FLASH memory must be updated, which also may take a considerable period of time, i.e., minutes rather than seconds, and the updated bit image must be reloaded into the FPGA, both of which again are compounded when multiple FPGAs with local nonvolatile memory are to be configured, such as 50-1000 FPGAs, for example and without limitation.
  • Accordingly, a need remains for a system having both hardware and software co-design to provide for rapid loading and updating of FPGA configurations or configuration bit images. Such a system should further provide for minimal host involvement, and for significantly parallel and rapid configuration.
  • SUMMARY OF THE INVENTION
  • The exemplary embodiments of the present invention provide numerous advantages. Exemplary embodiments provide a very rapid and parallel method for configuring a large number of field programmable gate arrays (“FPGAs”), largely bypassing local nonvolatile memory such as FLASH.
  • A representative method of configuring a system having at least one host computing system and one or more field programmable gate arrays (“FPGAs”), comprises: using a host processor, storing a first configuration bit image for an application in a host memory; configuring the one or more field programmable gate arrays with a communication functionality, the communication functionality provided in a second configuration bit image stored in a nonvolatile memory; using the host processor, transmitting a message to the one or more field programmable gate arrays, the message comprising a memory address of the first configuration bit image in the host memory; using a DMA engine, for each field programmable gate array, accessing the host memory and obtaining the first configuration bit image; and using the first configuration bit image, configuring the field programmable gate array.
  • In a representative embodiment, the message is transmitted to the one or more field programmable gate arrays through PCIe communication lines.
  • A representative method may further comprise: using the field programmable gate array, transmitting the first configuration bit image to one or more secondary field programmable gate arrays; and using the first configuration bit image, configuring the secondary field programmable gate arrays.
  • In a representative embodiment, the first configuration bit image is transmitted to the one or more secondary field programmable gate arrays through JTAG communication lines. In a representative embodiment, the communication functionality is PCIe. Also in a representative embodiment, the message further comprises a file size of the configuration bit image.
  • A representative method may further comprise configuring a DMA engine in the one or more field programmable gate arrays, the DMA engine functionality provided in a third configuration bit image stored in the nonvolatile memory.
  • A representative method may further comprise: using the host processor, transmitting a message to the one or more field programmable gate arrays, the message comprising a memory address of a third configuration bit image in the host memory; using a DMA engine, for each field programmable gate array, accessing the host memory and obtaining the third configuration bit image; and using the third configuration bit image, reconfiguring the field programmable gate array.
  • A representative system comprises: a host computing system comprising a host processor and a host memory, a first configuration bit image for an application stored in the host memory, the host processor to transmit a message comprising a memory address of the first configuration bit image in the host memory; one or more nonvolatile memories, each nonvolatile memory storing a second configuration bit image for a communication functionality; and a plurality of primary field programmable gate arrays, each primary field programmable gate array coupled to the host processor and coupled to a nonvolatile memory, each primary field programmable gate array configurable for the communication functionality using the second configuration bit image, each primary field programmable gate array having a DMA engine and, in response to the message, to use the DMA engine to access the host memory and obtain the first configuration bit image, and each primary field programmable gate array configurable for the application using the first configuration bit image.
  • A representative system may further comprise: a plurality of secondary field programmable gate arrays coupled to a corresponding primary field programmable gate array of the plurality of primary field programmable gate arrays, each corresponding primary field programmable gate array to transmit the first configuration bit image to one or more secondary field programmable gate arrays of the plurality of secondary field programmable gate arrays. In a representative embodiment, each secondary field programmable gate array is configurable for the application using the first configuration bit image. A representative system may further comprise at least one tertiary field programmable gate array configured as a non-blocking crossbar switch and coupled to the plurality of primary field programmable gate arrays and to the plurality of secondary field programmable gate arrays. Another representative system may further comprise a plurality of JTAG communication lines coupling a corresponding primary field programmable gate array to the one or more secondary field programmable gate arrays for transmission of the first configuration bit image.
  • A representative system may further comprise: a PCIe switch; and a plurality of PCIe communication lines coupling the plurality of primary field programmable gate arrays through the PCIe switch to the host processor. In a representative embodiment, the communication functionality is PCIe. Also in a representative embodiment, the message is transmitted to the one or more primary field programmable gate arrays through the plurality of PCIe communication lines.
  • In a representative embodiment, one or more of the primary field programmable gate arrays are configured for the DMA engine functionality using a third configuration bit image stored in the nonvolatile memory.
  • Also in a representative embodiment, each primary field programmable gate array, in response to a second message transmitted from the host processor and comprising a memory address of a third configuration bit image in the host memory, is to access the host memory and obtain the third configuration bit image, and to reconfigure itself using the third configuration bit image.
  • A representative system may further comprise: a PCIe switch; a plurality of PCIe communication lines coupled to the PCIe switch; a host computing system coupled to at least one PCIe communication line of the plurality of PCIe communication lines, the host computing system comprising a host processor and a host memory, a first configuration bit image for an application stored in the host memory, the host processor to transmit a message on the at least one PCIe communication line, the message comprising a memory address of the first configuration bit image in the host memory; one or more nonvolatile memories, each nonvolatile memory storing a second configuration bit image for a communication functionality; a plurality of JTAG communication lines; a plurality of primary field programmable gate arrays, each primary field programmable gate array coupled to a corresponding PCIe communication line of the plurality of PCIe communication lines and to one or more corresponding JTAG communication lines of the plurality of JTAG communication lines; each primary field programmable gate array coupled to a nonvolatile memory, each primary field programmable gate array configurable for the communication functionality using the second configuration bit image, each primary field programmable gate array having a DMA engine and, in response to the message, to use the DMA engine to access the host memory and obtain the first configuration bit image, each primary field programmable gate array configurable for the application using the first configuration bit image; each primary field programmable gate array to transmit the first configuration bit image over the one or more corresponding JTAG communication lines; and a plurality of secondary field programmable gate arrays, each secondary field programmable gate array coupled to a JTAG communication line of the plurality of JTAG communication lines, each secondary field programmable gate array configurable for the application using the first configuration bit image transmitted over the plurality of JTAG communication lines.
  • Numerous other advantages and features of the present invention will become readily apparent from the following detailed description of the invention and the embodiments thereof, from the claims and from the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The objects, features and advantages of the present invention will be more readily appreciated upon reference to the following disclosure when considered in conjunction with the accompanying drawings, wherein like reference numerals are used to identify identical components in the various views, and wherein reference numerals with alphabetic characters are utilized to identify additional types, instantiations or variations of a selected component embodiment in the various views, in which:
  • FIG. 1 is a block diagram illustrating an exemplary or representative first system embodiment.
  • FIG. 2 is a block diagram illustrating an exemplary or representative second system embodiment.
  • FIG. 3 is a block diagram illustrating an exemplary or representative third system embodiment.
  • FIG. 4 is a block diagram illustrating an exemplary or representative fourth system embodiment.
  • FIG. 5 is a flow diagram illustrating an exemplary or representative configuration method embodiment.
  • FIG. 6 is a block diagram illustrating exemplary or representative fields for a (stream) packet header.
  • FIG. 7 is a flow diagram illustrating an exemplary or representative communication method embodiment.
  • DETAILED DESCRIPTION OF REPRESENTATIVE EMBODIMENTS
  • While the present invention is susceptible of embodiment in many different forms, there are shown in the drawings and will be described herein in detail specific exemplary embodiments thereof, with the understanding that the present disclosure is to be considered as an exemplification of the principles of the invention and is not intended to limit the invention to the specific embodiments illustrated. In this respect, before explaining at least one embodiment consistent with the present invention in detail, it is to be understood that the invention is not limited in its application to the details of construction and to the arrangements of components set forth above and below, illustrated in the drawings, or as described in the examples. Methods and apparatuses consistent with the present invention are capable of other embodiments and of being practiced and carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein, as well as the abstract included below, are for the purposes of description and should not be regarded as limiting.
  • FIG. 1 is a block diagram illustrating an exemplary or representative first system 100 embodiment. FIG. 2 is a block diagram illustrating an exemplary or representative second system 200 embodiment. FIG. 3 is a block diagram illustrating an exemplary or representative third system 300 embodiment and first apparatus embodiment. FIG. 4 is a block diagram illustrating an exemplary or representative fourth system 400 embodiment.
  • As illustrated in FIGS. 1-4, the systems 100, 200, 300, 400 include one or more host computing systems 105, such as a computer or workstation, having one or more central processing units (CPUs) 110, which may be any type of processor, and host memory 120, which may be any type of memory, such as a hard drive or a solid state drive, and which may be located with or separate from the host CPU 110, all for example and without limitation, and as discussed in greater detail below. The memory 120 typically stores data to be utilized in or generated by a selected application, and also generally stores a configuration bit file or image for the selected application. Not separately illustrated, any of the host computing systems 105 may include a plurality of different types of processors, such as graphics processors, multi-core processors, etc., also as discussed in greater detail below. The various systems 100, 200, 300, 400 differ from one another in terms of the arrangements of circuit components (including on or in various modules), types of components, and types of communication between and among the various components, as described in greater detail below.
  • The one or more host computing systems 105 are typically coupled through one or more communication channels or lines, illustrated as PCI express (Peripheral Component Interconnect Express or “PCIe”) lines 130, either directly or through a PCIe switch 125, to one or more configurable logic elements such as one or more FPGAs 150 (including FPGAs 160, 170) (such as a Spartan 6 FPGA or a Kintex-7 FPGA, both available from Xilinx, Inc. of San Jose, Calif., US, or a Stratix 10 or Cyclone V FPGA available from Altera Corp. of San Jose, Calif., US, for example and without limitation), each of which in turn is coupled to a nonvolatile memory 140, such as a FLASH memory (such as for storing configuration bit images), and to a plurality of random access memories 190, such as a plurality of DDR3 (SODIMM) memory integrated circuits, such as for data storage for computation, communication, etc., for example and without limitation. In a first embodiment as illustrated, each FPGA 150 and corresponding memories 140, 190 directly coupled to that FPGA 150 are collocated on a corresponding computing module (or circuit board) 175 as a module or board in a rack mounted system having many such computing modules 175, such as those available from Pico Computing of Seattle, Wash. US. As illustrated, each computing module 175 includes as an option PCIe input and output (I/O) connector(s) 230 to provide the PCIe 130 connections, such as for a rack mounted system. In representative embodiments, the I/O connector(s) 230, 235 may also include additional coupling functionality, such as JTAG coupling, input power, ground, etc., for example and without limitation, and are illustrated with such additional connectivity in FIG. 4. The PCIe switch 125 may be located or positioned anywhere in a system 100, 200, 300, 400, such as on a separate computing module (such as a backplane circuit board, which can be implemented with computing module 195, for example), or on any of the computing modules 175, 180, 185, 195, 115 for example and without limitation. In addition, other types of communication lines or channels may be utilized to couple the one or more host computing systems 105 to the FPGAs 150, such as an Ethernet line, which in turn may be coupled to other intervening rack-mounted components to provide communication to and from one or more FPGAs 150 (160, 170) and other modules. Also in addition, the various FPGAs 150 (160, 170) may have additional or alternative types of communication between and among the PCIe switch 125 and other FPGAs 150 (160, 170), such as via general purpose (GP) I/O lines 131 (illustrated in FIG. 4).
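  • Purely as an illustration of the arrangement described above, and without limitation, the following C sketch models a host computing system 105 coupled over PCIe to several FPGAs 150, each with an attached nonvolatile memory 140 and random access memories 190; all type names, field names, and sizes here are hypothetical assumptions for illustration and are not taken from the present disclosure.

```c
#include <stdint.h>
#include <stdio.h>

#define MAX_RAMS_PER_FPGA 4

typedef struct {
    uint32_t fpga_id;                       /* index of an FPGA 150 (or 160, 170)          */
    uint64_t flash_bytes;                   /* size of the attached nonvolatile memory 140 */
    uint64_t ram_bytes[MAX_RAMS_PER_FPGA];  /* attached random access memories 190         */
    int      num_rams;
} fpga_node;                                /* hypothetical name */

typedef struct {
    int       has_pcie_switch;              /* PCIe switch 125 present, or direct PCIe lines 130 */
    fpga_node fpgas[8];
    int       num_fpgas;
} system_topology;                          /* hypothetical name */

int main(void) {
    system_topology sys = { .has_pcie_switch = 1, .num_fpgas = 2 };
    for (int i = 0; i < sys.num_fpgas; i++) {
        sys.fpgas[i].fpga_id = (uint32_t)i;
        sys.fpgas[i].flash_bytes = 16u << 20;       /* e.g., 16 MB of FLASH      */
        sys.fpgas[i].num_rams = 2;
        sys.fpgas[i].ram_bytes[0] = 4ull << 30;     /* e.g., two 4 GB DDR3 parts */
        sys.fpgas[i].ram_bytes[1] = 4ull << 30;
    }
    printf("topology: %d FPGA(s), PCIe switch: %s\n",
           sys.num_fpgas, sys.has_pcie_switch ? "yes" : "no");
    return 0;
}
```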
  • PCIe switch 125 (e.g., available from PLX Technology, Inc. of Sunnyvale, Calif., US), or one or more of the FPGAs 150 (160, 170), may also be configured (as an option) as one or more non-blocking crossbar switches 220, illustrated in FIG. 1 as part of (or a configuration of) PCIe switch 125. The non-blocking crossbar switch 220 provides for pairwise and concurrent communication (communication lines 221) between and among the FPGAs 150, 160, 170 and any of various memories (120, 190, for example and without limitation), without communication between any given pair of FPGAs 150, 160, 170 blocking any other communication between another pair of FPGAs 150, 160, 170. In an exemplary embodiment, one or more non-blocking crossbar switches 220 are provided (within a PCIe switch 125) to have sufficient capacity to enable direct FPGA to FPGA communication between and among all of the FPGAs 150, 160, 170 in a selected portion of the system 100, 200, 300, 400. In another representative embodiment, one or more non-blocking crossbar switches 220 are implemented using one or more FPGAs 150 which have been configured accordingly, as illustrated in FIG. 2, which may also be considered a tertiary (or third) FPGA 150 when included in the various hierarchical embodiments, such as illustrated in FIG. 2. In another representative embodiment, one or more non-blocking crossbar switches 220 are implemented using one or more PCIe switches 125 which also have been configured accordingly, illustrated as second PCIe switch 125A in FIG. 4. In another exemplary embodiment not separately illustrated, one or more non-blocking crossbar switches 220 are provided internally within any of the one or more FPGAs 150, 160, 170 for concurrent accesses to a plurality of memories 190, for example and without limitation.
  • Referring to FIG. 2, the system 200 differs insofar as the various FPGAs are hierarchically organized into one or more primary (or central) configurable logic elements such as one or more primary FPGAs 170 and a plurality of secondary (or remote) configurable logic elements such as one or more secondary FPGAs 160 (FPGAs 150, 160, 170 may be any type of configurable logic element, such as a Spartan 6 FPGA, a Kintex-7 FPGA, a Stratix 10 FPGA, or a Cyclone V FPGA as mentioned above, also for example and without limitation). The one or more host computing systems 105 are typically coupled through one or more communication channels or lines, illustrated as PCI express (Peripheral Component Interconnect Express or “PCIe”) lines 130, either directly or through a PCIe switch 125, to primary FPGAs 170, each of which in turn is coupled to a plurality of secondary FPGAs 160, also through one or more corresponding communication channels, illustrated as a plurality of JTAG lines 145 (Joint Test Action Group (“JTAG”) is the common name for the IEEE 1149.1 Standard Test Access Port and Boundary-Scan Architecture), or through any of the PCIe lines 130 or GP I/O lines 131. In this embodiment (illustrated in FIG. 2), each of the secondary FPGAs 160 is provided on a separate computing module 185 which is couplable (through I/O connector(s) 235 and PCIe lines 130 and/or JTAG lines 145) to the computing module 180 having the primary FPGA 170. In various embodiments, the PCIe lines 130 and JTAG lines 145 are illustrated as part of a larger bus (which may also include GP I/O lines 131), and typically routed to different pins on the various FPGAs 150, 160, 170, typically via I/O connectors 235, for example, for the various modular configurations or arrangements. As mentioned above, other lines, such as for power, ground, clocking (in some embodiments), etc., also may be provided to a computing module 185 via I/O connectors 235, for example and without limitation. Not separately illustrated in FIG. 2, PCIe switch 125 also may be coupled to a separate FPGA, such as an FPGA 150, such as illustrated in FIG. 1, which also may be coupled to a nonvolatile memory 140, for example and without limitation.
  • The PCIe switch 125 may be positioned anywhere in a system 100, 200, 300, 400, such as on a separate computing module, for example and without limitation, or on one or more of the computing modules 180 having the primary FPGA 170, as illustrated in FIG. 4 for computing module 195, which can be utilized to implement a backplane for multiple modules 175, as illustrated. In an exemplary embodiment, due to a significantly large fan out of the PCIe lines 130 to other modules and cards in the various systems 100, 200, 300, 400, the PCIe switch 125 is typically located on the backplane of a rack-mounted system (available from Pico Computing, Inc. of Seattle, Wash. US). A PCIe switch 125 may also be collocated on various computing modules (e.g., 195), to which many other modules (e.g., 175) connect (e.g., through PCIe connector(s) 230 or, more generally, I/O connectors 235 which include PCIe, JTAG, GPIO, power, ground, and other signaling lines). In addition, other types of communication lines or channels may be utilized to couple the one or more host computing systems 105 to the primary FPGAs 170 and/or secondary FPGAs 160, such as an Ethernet line, which in turn may be coupled to other intervening rack-mounted components to provide communication to and from one or more primary FPGAs 170 and other modules.
  • In this system 200 embodiment, the primary and secondary FPGAs 170 and 160 are located on separate computing modules 180 and 185, also in a rack mounted system having many such computing modules 180 and 185, also such as those available from Pico Computing of Seattle, Wash. US. The computing modules 180 and 185 may be coupled to each other via any type of communication lines, including PCIe and/or JTAG. For example, in an exemplary embodiment, each of the secondary FPGAs 160 is located on a modular computing module (or circuit board) 185 which has corresponding I/O connectors 235 to plug into a region or slot of the primary FPGA 170 computing module 180, up to the capacity of the primary FPGA 170 computing module 180, such as one to six modular computing modules 185 having secondary FPGAs 160. In representative embodiments, the I/O connector(s) 235 may include a wide variety of coupling functionality, such as JTAG coupling, PCIe coupling, GP I/O, input power, ground, etc., for example and without limitation. For purposes of the present disclosure, systems 100, 200, 300, 400 function similarly, and any and all of these system configurations are within the scope of the disclosure.
  • Not separately illustrated in FIGS. 1-4, each of the various computing modules 175, 180, 185, 195, 115 typically includes many additional components, such as power supplies, additional memory, additional input and output circuits and connectors, switching components, clock circuitry, etc.
  • The various systems 100, 200, 300, 400 may also be combined into a plurality of system configurations, such as mixing the different types of FPGAs 150, 160, 170 and computing modules 175, 180, 185, 195, 115 into the same system, including within the same rack-mounted system.
  • Additional representative system 300, 400 configurations or arrangements are illustrated in FIGS. 3 and 4. In the system 300 embodiment, the primary and secondary FPGAs 150 and 160, along with PCIe switch 125, are all collocated on a dedicated computing module 115 as a large module in a rack mounted system having many such computing modules 115, such as those available from Pico Computing of Seattle, Wash. US. In the system 400 embodiment (illustrated in FIG. 4), each of the secondary FPGAs 160 is provided on a separate computing module 175 which is couplable to the computing module 195 having the primary FPGA 170. PCIe switches 125 are also illustrated as collocated on computing module 195 for communication with secondary FPGAs 160 over PCIe communication lines 130, although this is not required and such a PCIe switch 125 may be positioned elsewhere in a system 100, 200, 300, 400, such as on a separate computing module, for example and without limitation.
  • The representative system 300 illustrates some additional features which may be included as options in a computing module, and is further illustrated as an example computing module 115 which does not include the optional nonblocking crossbar switch 220 (e.g., in a PCIe switch 125 or as a configuration of an FPGA 150, 160, 170). As illustrated in FIG. 3, the various secondary FPGAs 160 also have direct communication to each other, with each FPGA 160 coupled through communication lines 210 to its neighboring FPGAs 160, such as serially or “daisy-chained” to each other. Also, one of the FPGAs 160, illustrated as FPGA 160A, has been coupled through high speed serial lines 215, to a hybrid memory cube (“HMC”) 205, which incorporates multiple layers of memory and at least one logic layer, with very high memory density capability. For this system 300, the FPGA 160A has been configured as a memory controller (and potentially a switch or router), providing access and communication to and from the HMC 205 for any of the various FPGAs 160, 170.
  • As a consequence, for purposes of the present disclosure, a system 100, 200, 300, 400 comprises one or more host computing systems 105, couplable through one or more communication lines (such as GP I/O lines 131 or PCIe communication lines 130, directly or through a PCIe switch 125), to one or more FPGAs 150 and/or primary FPGAs 170. In turn, each primary FPGA 170 is coupled through one or more communication lines, such as JTAG lines 145 or PCIe communication lines 130 or GP I/O lines 131, to one or more secondary FPGAs 160. Depending upon the selected embodiment, each FPGA 150, 160, 170 is optionally coupled to a non-blocking crossbar switch 220 (e.g., in a PCIe switch 125 or as a configuration of an FPGA 150, 160, 170) for pairwise communication with any other FPGA 150, 160, 170. In addition, each FPGA 150, 160, 170 is typically coupled to one or more nonvolatile memories 140 and one or more random access memories 190, which may be any type of random access memory.
  • A significant feature enabled in the system 100, 200, 300, 400, as an option, is the highly limited involvement of the host CPU 110 in configuring any and all of the FPGAs 150, 160, 170, which frees the host computing system 105 to be engaged in other tasks. In addition, the configuration of the FPGAs 150, 160, 170 may be performed in a massively parallel process, allowing significant time savings. Moreover, because the full configurations of the FPGAs 150, 160, 170 are not required to be stored in nonvolatile memory 140 (such as FLASH), with corresponding read/write cycles which are comparatively slow, configuration of the FPGAs 150, 160, 170 may proceed at a significantly more rapid rate, including providing new or updated configurations. The various FPGAs 150, 160, 170 may also be configured as known in the art, such as by loading a complete configuration from nonvolatile memory 140.
  • Another significant feature of the systems 100, 200, 300, 400 is that only basic (or base) resources for the FPGAs 150 or primary FPGAs 170 are stored in the nonvolatile memory 140 (coupled to a FPGA 150 or a primary FPGA 170), such as a configuration for communication over the PCIe lines 130 (and possibly GP I/O lines 131 or JTAG lines 145, such as for secondary FPGAs 160), and potentially also a configuration for one or more DMA engines (depending upon the selected FPGA 150, 160, 170, the FPGA 150, 160, 170 may be available with incorporated DMA engines). As a result, upon system 100, 200, 300, 400 startup, the configuration required to be loaded into the FPGA 150 or primary FPGA 170 is limited or minimal, namely, communication (e.g., PCIe and possibly JTAG) functionality and/or DMA functionality. In a representative embodiment, upon system 100, 200, 300, 400 startup, the only configuration required to be loaded into the FPGA 150 or a primary FPGA 170 is a communication configuration for PCIe functionality. As a consequence, this base PCIe configuration may be loaded quite rapidly from the nonvolatile memory 140. Stated another way, except for loading of the base communication configuration for PCIe functionality, use of the nonvolatile memory 140 for FPGA configuration is bypassed entirely, whether for loading of an initial configuration or an updated configuration.
  • Instead of a host CPU 110 “bit banging” or transferring a very large configuration bit image to each FPGA 150 or primary FPGA 170, configuration of the system 100, 200, 300, 400 occurs rapidly and in parallel when implemented in representative embodiments. Configuration of the FPGAs 150 or primary FPGAs 170 and secondary FPGAs 160 begins with the host CPU 110 merely transmitting a message or command to one or more FPGAs 150 or primary FPGAs 170 with a memory address or location in the host memory 120 (and typically also a file size) of the configuration bit image (or file) which has been stored in the host memory 120, i.e., the host CPU 110 sets the DMA registers of the FPGA 150 or primary FPGA 170 with the memory address and file size for the selected configuration bit image (or file) in the host memory 120. Such a “load FPGA” command is repeated for each of the FPGAs 150 or primary FPGAs 170 (and possibly each secondary FPGA 160, depending upon the selected embodiment), i.e., continuing until the host CPU 110 does not find any more FPGAs 150 or primary FPGAs 170 (and/or secondary FPGAs 160) in the system 100, 200, 300, 400 and an error message may be returned. Typically, the host CPU 110 transmits one such message or command to each FPGA 150 or primary FPGA 170 that will be handling a thread of a parallel, multi-threaded computation. In the representative embodiments, the host CPU 110 is then literally done with the configuration process, and is typically notified with an interrupt signal from a FPGA 150 or primary FPGA 170 once configuration is complete. Stated another way, from the perspective of the host computing system 105, following transmission of generally a single message or command having a designation of a memory address (and possibly a file size), the configuration process is complete. This is a huge advance over prior art methods of FPGA configuration in supercomputing systems.
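  • As a minimal, hedged sketch of this “load FPGA” step (and without limitation), the following C example shows a host writing a hypothetical per-FPGA DMA register block with the host-memory address and file size of the configuration bit image; the register layout and helper names are assumptions for illustration only, not a documented interface.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-FPGA DMA register block; the text above only states that
 * the host writes a memory address (and typically a file size) and that the
 * FPGA's DMA engine then fetches the image. */
typedef struct {
    volatile uint64_t src_addr;   /* host memory 120 address of the bit image */
    volatile uint32_t file_size;  /* size of the configuration bit image      */
    volatile uint32_t control;    /* bit 0 = start the "load FPGA" transfer   */
} dma_regs;

/* Stand-in for a PCIe write to the FPGA's registers; here it only records the request. */
static void load_fpga(dma_regs *regs, uint64_t image_addr, uint32_t image_size) {
    regs->src_addr  = image_addr;
    regs->file_size = image_size;
    regs->control   = 1u;
    printf("load FPGA: image at 0x%llx, %u bytes\n",
           (unsigned long long)image_addr, image_size);
}

int main(void) {
    static uint8_t bit_image[1024];   /* stands in for the image stored in host memory 120   */
    static dma_regs fpga[4];          /* one register block per FPGA 150 / primary FPGA 170  */
    for (int i = 0; i < 4; i++)       /* one "load FPGA" message per FPGA                    */
        load_fpga(&fpga[i], (uint64_t)(uintptr_t)bit_image, (uint32_t)sizeof bit_image);
    return 0;
}
```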
  • Using a DMA engine, along with communication lines such as PCIe lines 130 which support communication of large bit streams, each FPGA 150 or primary FPGA 170 then accesses the host memory 120 and obtains the configuration bit image (or file) (which configuration also generally is loaded into the FPGA 150 or primary FPGA 170). By using the DMA engine, much larger files may be transferred quite rapidly, particularly compared to any packet- or word-based transmission (which would otherwise have to be assembled by the host CPU 110, a comparatively slow and labor-intensive task). This is generally performed in parallel (or serially, depending upon the capability of the host memory 120) for all of the FPGAs 150 or primary FPGAs 170. In turn, each primary FPGA 170 then transmits (typically over JTAG lines 145 or PCIe communication lines 130) the configuration bit image (or file) to each of the secondary FPGAs 160, also typically in parallel. Alternatively, each primary FPGA 170 may re-transmit (typically over JTAG lines 145 or PCIe communication lines 130) the information of the load FPGA message or command to each of the secondary FPGAs 160, namely the memory address in the host memory 120 and the file size, and each secondary FPGA 160 may read or otherwise obtain the configuration bit image, also using DMA engines, for example and without limitation. As another alternative, the host computing system 105 may transmit the load FPGA message or command to each of the FPGAs 150 or primary FPGAs 170 and secondary FPGAs 160, which then obtain the configuration bit image, also using DMA engines as described above. All such variations are within the scope of the disclosure.
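  • The following simulation-only C sketch illustrates, without limitation, the fetch-and-forward variation described above: a primary FPGA obtains the configuration bit image from host memory using its DMA engine and then forwards the same image to its secondary FPGAs, with plain memory copies standing in for the PCIe/DMA and JTAG transfers; all buffers and function names are hypothetical.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define IMAGE_SIZE 1024
#define NUM_SECONDARIES 3

static uint8_t host_memory[IMAGE_SIZE];                    /* host memory 120     */
static uint8_t primary_cfg[IMAGE_SIZE];                    /* primary FPGA 170    */
static uint8_t secondary_cfg[NUM_SECONDARIES][IMAGE_SIZE]; /* secondary FPGAs 160 */

static void dma_fetch(uint8_t *dst, const uint8_t *src, size_t n) {
    memcpy(dst, src, n);   /* stands in for a PCIe/DMA burst transfer                */
}

static void jtag_forward(uint8_t *dst, const uint8_t *src, size_t n) {
    memcpy(dst, src, n);   /* stands in for streaming the image over JTAG lines 145  */
}

int main(void) {
    dma_fetch(primary_cfg, host_memory, IMAGE_SIZE);         /* primary obtains the bit image    */
    for (int i = 0; i < NUM_SECONDARIES; i++)                /* then forwards it to secondaries  */
        jtag_forward(secondary_cfg[i], primary_cfg, IMAGE_SIZE);
    printf("configured 1 primary and %d secondary FPGAs\n", NUM_SECONDARIES);
    return 0;
}
```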
  • By using communication lines such as PCIe lines 130 and JTAG lines 145 with the design of the system 100, 200, 300, 400, the configuration bit image is loaded quite rapidly not only into each of the FPGAs 150 and primary FPGAs 170 but also into each of the secondary FPGAs 160. This allows not only for an entire computing module 175 (or computing modules 180, 185, 195) to be reloaded in seconds, rather than hours, but also for the entire system 100, 200, 300, 400 to be configured and reconfigured in seconds, also rather than hours. As a result, read and write operations to local memory (e.g., nonvolatile memory 140) may be bypassed almost completely in the configuration process, resulting in a huge time savings. In selected embodiments, if desired but certainly not required, the configuration bit image (or file) may also be stored locally, such as in nonvolatile memory 140 (and/or nonvolatile memory 190 (e.g., FLASH) associated with computing modules 175, 180, 185, 195, 115).
  • As a result of this ultrafast loading of configurations, another significant advantage of the system 100, 200, 300, 400 is the corresponding capability, using the same process, for ultrafast reconfiguration of the entire system 100, 200, 300, 400. This is particularly helpful for the design, testing and optimization of system 100, 200, 300, 400 configurations for any given application, including various computationally intensive applications such as bioinformatics applications (e.g., gene sequencing).
  • FIG. 5 is a flow diagram illustrating an exemplary or representative method embodiment for system configuration and reconfiguration, and provides a useful summary of this process. Beginning with start step 240 and one or more FPGA 150, 160, 170 configurations (as configuration bit images) having been stored in a host memory 120, the system 100, 200, 300, 400 powers on or otherwise starts up, and the FPGAs 150, 160, 170 load the base communication functionality such as a PCIe configuration image (and possibly DMA functionality) from nonvolatile memory 140, step 245. Step 245 is optional, as such communication functionality also can be provided to FPGAs 150, 160, 170 via GPIO (or GP I/O) lines 131 (general purpose input and output lines), for example and without limitation. The host CPU 110 (or more generally, host computing system 105) then generates and transmits a “load FPGA” command or message to one or more FPGAs 150 or primary FPGAs 170 (and/or secondary FPGAs 160), step 250, in which the load FPGA command or message includes a starting memory address (in host memory 120) and a file size designation for the selected configuration bit image which is to be utilized. Using the DMA engines, and depending upon the selected variation (of any of the variations described above), the one or more FPGAs 150 or primary FPGAs 170 (and/or secondary FPGAs 160) obtain the configuration bit image from the host memory 120, step 255, and use it to configure. Also depending upon the selected embodiment, the one or more FPGAs 150 or primary FPGAs 170 may also transfer the configuration bit image to each of the secondary FPGAs 160, step 260, such as over JTAG lines 145 and bypassing nonvolatile memory 140, 190, which the secondary FPGAs 160 also use to configure. Also depending upon the selected embodiment, the configuration bit image may be stored locally, step 265, as a possible option as mentioned above. Having loaded the configuration bit image into the FPGAs 150, 160, 170, the method may end, return step 270, such as by generating an interrupt signal back to the host computing system 105.
  • The systems 100, 200, 300, 400 enable one of the significant features of the present disclosure, namely, the highly limited involvement of the host CPU 110 in data transfers between the host computing system 105 and any of the FPGAs 150, 160, 170, and their associated memories 190, and additionally, the highly limited involvement of the host CPU 110 in data transfers between and among any of the FPGAs 150, 160, 170, and their associated memories 190, all of which frees the host computing system 105 to be engaged in other tasks, and further is a significant departure from prior art systems. Once data transfer directions or routes have been established for a given or selected application within the systems 100, 200, 300, 400, moreover, these data communication paths are persistent for the duration of the application, continuing without any further involvement by the host computing system 105, which is also a sharp contrast with prior art systems.
  • Instead of a host CPU 110 “bit banging” or transferring a data file, including a very large data file, to each FPGA 150, 160, 170 or its associated memories 190, data transfers within the system 100, 200, 300, 400 occur rapidly and in parallel, and following setup of the DMA registers in the various FPGAs 150, 160, 170, largely without involvement of the host computing system 105. The data transfer paths are established by the host CPU 110 (or an FPGA 150, 160, 170 configured for this task) merely transmitting a message or command to one or more FPGAs 150, 160, 170 to set the base DMA registers within the FPGA 150, 160, 170 with a memory 190 address (or address or location in the host memory 120, as the case may be), optionally a file size of the data file, and a stream number, i.e., the host CPU 110 (or another FPGA 150, 160, 170 configured for this task) sets the DMA registers of the FPGA(s) 150, 160, 170 with the memory address (and optionally a file size) for the selected data file in the host memory 120 or in one of the memories 190, and also assigns a stream number, including a tie (or tied) stream number if applicable. Once this is established, the system 100, 200, 300, 400 is initialized for data transfer, and these assignments persist for the duration of the application, and do not need to be re-established for subsequent data transfers. It should be noted that the data to be transferred may originate from anywhere within a system 100, 200, 300, 400, including real-time generation by any of the FPGAs 150, 160, 170, any of the local memories, including memories 190, in addition to the host memory 120, and in addition to reception from an external source, for example and without limitation.
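  • As a hedged illustration of this one-time setup, and without limitation, the following C sketch gathers the values the text above says are written to the DMA registers (a memory address, an optional file size, a stream number, and any tied stream number) into a hypothetical structure; the structure and function names are assumptions for illustration, not an actual register map.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t mem_addr;     /* address in a memory 190 or in host memory 120   */
    uint32_t file_size;    /* optional; 0 if unused                           */
    uint16_t stream;       /* stream number assigned to this transfer path    */
    uint16_t tie_stream;   /* tied stream number; 0 if the stream is not tied */
} stream_dma_setup;        /* hypothetical register image */

static void setup_stream(stream_dma_setup *regs, uint64_t addr,
                         uint32_t size, uint16_t stream, uint16_t tie) {
    regs->mem_addr   = addr;
    regs->file_size  = size;
    regs->stream     = stream;
    regs->tie_stream = tie;
    printf("stream %u -> addr 0x%llx (tie: %u)\n",
           (unsigned)stream, (unsigned long long)addr, (unsigned)tie);
}

int main(void) {
    stream_dma_setup fpga_a, fpga_b;
    /* Illustrative only: stream 1 on card "A" tied to stream 2 on card "B". */
    setup_stream(&fpga_a, 0x100000000ull, 0, 1, 2);
    setup_stream(&fpga_b, 0x200000000ull, 0, 2, 0);
    return 0;
}
```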
  • The host CPU 110 (or an FPGA 150, 160, 170 configured for this task) has therefore established the various data transfer paths between and among the host computing system 105 and the FPGAs 150, 160, 170 for the selected application. As data is then transferred throughout the system 100, 200, 300, 400, header information for any data transfer includes not only a system address (e.g., PCIe address) for the FPGA 150, 160, 170 and/or its associated memories 190, but also includes the “stream” designations (or information) and “tie (or tied) stream” designations (or information), and is particularly useful for multi-threaded or other parallel computation tasks. The header (e.g., a PCIe data packet header) for any selected data transfer path includes: (1) bits for an FPGA 150, 160, 170 and/or memory 190 address and optionally a file size; (2) additional bits for an assignment of a stream number to the data transfer (which stream number can be utilized repeatedly for additional data to be transferred subsequently for ongoing computations); and (3) additional bits for any “tie stream” designations, if any are utilized or needed. In addition, as each FPGA 150, 160, 170 may be coupled to a plurality of memories 190, each memory address typically also includes a designation of which memory 190 is associated with the designated FPGA 150, 160, 170.
  • FIG. 6 is a block diagram illustrating exemplary or representative fields for a (stream) packet header 350, comprising a plurality of bits designating a first memory address (field 305) (typically a memory 190 address), a plurality of bits designating a file size (field 310) (as an optional field), a plurality of bits designating a (first) stream number (field 315), and as may be necessary or desirable, two additional and optional tie stream fields, namely, a plurality of bits designating the (second) memory 190 address for the tied stream (field 320) and a plurality of bits designating a tie (or tied) stream number (field 325).
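  • For illustration only, the following C sketch declares a structure carrying the FIG. 6 fields (305, 310, 315, 320, 325); the field widths and ordering are assumptions, as the present disclosure specifies which fields are present but not their sizes or byte layout.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t mem_addr;        /* field 305: first memory 190 address         */
    uint32_t file_size;       /* field 310: optional file size               */
    uint16_t stream;          /* field 315: (first) stream number            */
    uint64_t tie_mem_addr;    /* field 320: memory 190 address of tied stream*/
    uint16_t tie_stream;      /* field 325: tie (or tied) stream number      */
} stream_packet_header;       /* hypothetical layout of packet header 350    */

int main(void) {
    stream_packet_header hdr = {
        .mem_addr = 0x100000000ull, .file_size = 4096,
        .stream = 3, .tie_mem_addr = 0x200000000ull, .tie_stream = 4,
    };
    printf("stream %u: %u bytes at 0x%llx, tied to stream %u at 0x%llx\n",
           (unsigned)hdr.stream, (unsigned)hdr.file_size,
           (unsigned long long)hdr.mem_addr,
           (unsigned)hdr.tie_stream, (unsigned long long)hdr.tie_mem_addr);
    return 0;
}
```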
  • Any application may then merely write to the selected stream number or read from the selected stream number for the selected memory 190 address (or FPGA 150, 160, 170 address), without any involvement by the host computing system 105, for as long as the application is running on the system 100, 200, 300, 400. In addition, for data transfer throughout the systems 100, 200, 300, 400, data transfer in one stream may be tied to a data transfer of another stream, allowing two separate processes to occur without involvement of the host computing system 105. The first “tie stream” process allows the “daisy chaining” of data transfers, so a data transfer to a first stream number for a selected memory 190 (or FPGA 150, 160, 170 process) on a first computing module 175, 180, 185, 195, 115 may be tied or chained to a subsequent transfer of the same data to another, second stream number for a selected memory 190 (or FPGA 150, 160, 170 process) on a second computing module 175, 180, 185, 195, 115, e.g., data transferred from the host computing system 105 or from a first memory 190 on a first computing module 175, 180, 185, 195, 115 (e.g., card “A”) (stream “1”) to a second memory 190 on a second computing module 175, 180, 185, 195, 115 (e.g., card “B”) will also be further transmitted from the second computing module 175, 180, 185, 195, 115 (e.g., card “B”) as a stream “2” to a third memory 190 on a third computing module 175, 180, 185, 195, 115 (e.g., card “C”), thereby tying streams 1 and 2, not only for the current data transfer, but for the entire duration of the application (until changed by the host computing system 105).
  • The second “tie stream” process allows the chaining or sequencing of data transfers between and among any of the FPGAs 150, 160, 170 without any involvement of the host computing system 105 after the initial setup of the DMA registers in the FPGAs 150, 160, 170. As a result, a data result output from a first stream number for a selected memory 190 (or FPGA 150, 160, 170 process) on a first computing module 175, 180, 185, 195, 115 may be tied or chained to be input data for another, second stream number for a selected memory 190 (or FPGA 150, 160, 170 process) on a second computing module 175, 180, 185, 195, 115, e.g., stream “3” data transferred from a first memory 190 on a first computing module 175, 180, 185, 195, 115 (e.g., card “A”) will be transferred as a stream “4” to a second memory 190 on a second computing module 175, 180, 185, 195, 115 (e.g., card “B”), thereby tying streams 3 and 4, not only for the current data transfer, but for the entire duration of the application (also until changed by the host computing system 105).
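  • The following simulation-only C sketch loosely illustrates the tie-stream idea of the preceding two paragraphs, using plain memory copies in place of the actual transfers: data written on one stream is forwarded on its tied stream without further host involvement; all buffer and function names are hypothetical.

```c
#include <string.h>
#include <stdio.h>

#define N 256

static char card_a[N], card_b[N], card_c[N];   /* stand-ins for memories 190 on cards A, B, C */

/* A transfer on one stream; if a tied destination exists, the same data is
 * forwarded on the tied stream without host involvement. */
static void stream_write(char *dst, const char *src, char *tie_dst) {
    memcpy(dst, src, N);
    if (tie_dst)
        memcpy(tie_dst, dst, N);   /* tied stream forwards the data onward */
}

int main(void) {
    strcpy(card_a, "input data");
    stream_write(card_b, card_a, card_c);   /* e.g., stream 1 tied to stream 2: A -> B -> C */
    printf("card C received: %s\n", card_c);
    return 0;
}
```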
  • Any of these various data transfers may occur through any of the various communication channels of the systems 100, 200, 300, 400, and to and from any available internal or external resource, in addition to transmission over the PCIe network (PCIe switch 125 with PCIe communication lines 130), including through the non-blocking crossbar switch 220 (as an option) and over the JTAG lines 145 and/or GP I/O lines 131 and/or communication lines 210, depending upon the selected system 100, 200, 300, 400 configuration. All of these various mechanisms provide for several types of direct FPGA-to-FPGA communication, without any ongoing involvement by host computing system 105 once the DMA registers have been established. Stated another way, in the representative embodiments, the host CPU 110 is then literally done with the data transfer process, and from the perspective of the host computing system 105, following transmission of the DMA setup messages having a designation of a memory 190 address, a file size (as an option), and a stream number, the data transfer configuration process is complete. This is a huge advance over prior art methods of data transfer in supercomputing systems utilizing FPGAs.
  • Using a DMA engine, along with communication lines such as PCIe lines 130 which support communication of large bit streams, each FPGA 150, 160, 170 then accesses the host memory 120, or a memory 190, or any other data source, and obtains the data file for a read operation, or performs a corresponding write operation, all using the established address and stream number. By using the DMA engine, much larger files may be transferred quite rapidly, particularly compared to any packet- or word-based transmission. This is generally performed in parallel (or serially, depending upon the application) for all of the FPGAs 150, 160, 170.
  • By using communication lines such as PCIe lines 130 and JTAG lines 145 with the design of the system 100, 200, 300, 400, data transfer occurs quite rapidly, not only into each of the FPGAs 150 or primary FPGAs 170 but also into each of the secondary FPGAs 160, and their associated memories 190. As a result, resources, including memory 190, may be shared across the entire system 100, 200, 300, 400, with any FPGA 150, 160, 170 being able to access any resource anywhere in the system 100, 200, 300, 400, including any of the memories 190 on any of the computing modules or cards (modules) 175, 180, 185, 195, 115.
  • FIG. 7 is a flow diagram illustrating an exemplary or representative method embodiment for data transfer within a system 100, 200, 300, 400 and provides a useful summary. Beginning with start step 405, one or more DMA registers associated with any of the FPGAs 150, 160, 170 and their associated memories 190 are set up, step 410, with a memory (120, 190) address, a file size (as an option, and not necessarily required), a stream number, and any tie (or tied) stream number. Using the DMA engines for read and write operations, or using other available configurations within FPGAs 150, 160, 170, data is transferred between and among the FPGAs 150, 160, 170 using the designated addresses and stream numbers, step 415. When there are any tied streams, step 420, then the data is transferred to the next tied stream, step 425, as the case may be. When there are additional data transfers, step 430, the method returns to step 415, and the process iterates. Otherwise, the method determines whether the application is complete, step 435, and if not, returns to step 415 and iterates as well. When the application is complete in step 435, and there is another application to be run, step 440, the method returns to step 410 to set up the DMA registers for the next application, and iterates. When there are no more applications to be run, the method may end, return step 445.
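  • As a loose, simulation-only walk-through of the FIG. 7 flow, and without limitation, the following C sketch performs the one-time register setup (step 410) and then iterates transfers, forwarding to a tied stream when one is designated (steps 415-425); the structure and the transfer stub are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t mem_addr;    /* memory (120, 190) address              */
    uint32_t file_size;   /* optional                               */
    uint16_t stream;      /* assigned stream number                 */
    uint16_t tie_stream;  /* 0 = no tied stream                     */
} dma_setup;              /* hypothetical DMA register image        */

static void transfer(const dma_setup *s) {                       /* step 415 */
    printf("transfer on stream %u to 0x%llx\n",
           (unsigned)s->stream, (unsigned long long)s->mem_addr);
    if (s->tie_stream)                                            /* steps 420/425 */
        printf("  forwarding to tied stream %u\n", (unsigned)s->tie_stream);
}

int main(void) {
    dma_setup s = { .mem_addr = 0x100000000ull, .file_size = 0,
                    .stream = 1, .tie_stream = 2 };               /* step 410 */
    for (int i = 0; i < 3; i++)   /* steps 430/435: iterate while transfers remain */
        transfer(&s);
    return 0;                     /* step 445 */
}
```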
  • The present disclosure is to be considered as an exemplification of the principles of the invention and is not intended to limit the invention to the specific embodiments illustrated. In this respect, it is to be understood that the invention is not limited in its application to the details of construction and to the arrangements of components set forth above and below, illustrated in the drawings, or as described in the examples. Systems, methods and apparatuses consistent with the present invention are capable of other embodiments and of being practiced and carried out in various ways.
  • Although the invention has been described with respect to specific embodiments thereof, these embodiments are merely illustrative and not restrictive of the invention. In the description herein, numerous specific details are provided, such as examples of electronic components, electronic and structural connections, materials, and structural variations, to provide a thorough understanding of embodiments of the present invention. One skilled in the relevant art will recognize, however, that an embodiment of the invention can be practiced without one or more of the specific details, or with other apparatus, systems, assemblies, components, materials, parts, etc. In other instances, well-known structures, materials, or operations are not specifically shown or described in detail to avoid obscuring aspects of embodiments of the present invention. In addition, the various Figures are not drawn to scale and should not be regarded as limiting.
  • Reference throughout this specification to “one embodiment”, “an embodiment”, or a specific “embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention and not necessarily in all embodiments, and further, are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics of any specific embodiment of the present invention may be combined in any suitable manner and in any suitable combination with one or more other embodiments, including the use of selected features without corresponding use of other features. In addition, many modifications may be made to adapt a particular application, situation or material to the essential scope and spirit of the present invention. It is to be understood that other variations and modifications of the embodiments of the present invention described and illustrated herein are possible in light of the teachings herein and are to be considered part of the spirit and scope of the present invention.
  • It will also be appreciated that one or more of the elements depicted in the Figures can also be implemented in a more separate or integrated manner, or even removed or rendered inoperable in certain cases, as may be useful in accordance with a particular application. Integrally formed combinations of components are also within the scope of the invention, particularly for embodiments in which a separation or combination of discrete components is unclear or indiscernible. In addition, use of the term “coupled” herein, including in its various forms such as “coupling” or “couplable”, means and includes any direct or indirect electrical, structural or magnetic coupling, connection or attachment, or adaptation or capability for such a direct or indirect electrical, structural or magnetic coupling, connection or attachment, including integrally formed components and components which are coupled via or through another component.
  • A CPU or “processor” 110 may be any type of processor, and may be embodied as one or more processors 110, configured, designed, programmed or otherwise adapted to perform the functionality discussed herein. As the term processor is used herein, a processor 110 may include use of a single integrated circuit (“IC”), or may include use of a plurality of integrated circuits or other components connected, arranged or grouped together, such as controllers, microprocessors, digital signal processors (“DSPs”), parallel processors, multiple core processors, custom ICs, application specific integrated circuits (“ASICs”), field programmable gate arrays (“FPGAs”), adaptive computing ICs, associated memory (such as RAM, DRAM and ROM), and other ICs and components, whether analog or digital. As a consequence, as used herein, the term processor should be understood to equivalently mean and include a single IC, or arrangement of custom ICs, ASICs, processors, microprocessors, controllers, FPGAs, adaptive computing ICs, or some other grouping of integrated circuits which perform the functions discussed below, with associated memory, such as microprocessor memory or additional RAM, DRAM, SDRAM, SRAM, MRAM, ROM, FLASH, EPROM or E2PROM. A processor (such as processor 110), with its associated memory, may be adapted or configured (via programming, FPGA interconnection, or hard-wiring) to perform the methodology of the invention, as discussed above. For example, the methodology may be programmed and stored, in a processor 110 with its associated memory (and/or memory 120) and other equivalent components, as a set of program instructions or other code (or equivalent configuration or other program) for subsequent execution when the processor is operative (i.e., powered on and functioning). Equivalently, when the processor 110 may be implemented in whole or in part as FPGAs, custom ICs and/or ASICs, the FPGAs, custom ICs or ASICs also may be designed, configured and/or hard-wired to implement the methodology of the invention. For example, the processor 110 may be implemented as an arrangement of analog and/or digital circuits, controllers, microprocessors, DSPs and/or ASICs, collectively referred to as a “controller”, which are respectively hard-wired, programmed, designed, adapted or configured to implement the methodology of the invention, including possibly in conjunction with a memory 120.
  • The memory 120, which may include a data repository (or database), may be embodied in any number of forms, including within any computer or other machine-readable data storage medium, memory device or other storage or communication device for storage or communication of information, currently known or which becomes available in the future, including, but not limited to, a memory integrated circuit (“IC”), or memory portion of an integrated circuit (such as the resident memory within a processor 110), whether volatile or non-volatile, whether removable or non-removable, including without limitation RAM, FLASH, DRAM, SDRAM, SRAM, MRAM, FeRAM, ROM, EPROM or E2PROM, or any other form of memory device, such as a magnetic hard drive, an optical drive, a magnetic disk or tape drive, a hard disk drive, other machine-readable storage or memory media such as a floppy disk, a CDROM, a CD-RW, digital versatile disk (DVD) or other optical memory, or any other type of memory, storage medium, or data storage apparatus or circuit, which is known or which becomes known, depending upon the selected embodiment. The memory 120 may be adapted to store various look up tables, parameters, coefficients, other information and data, programs or instructions (of the software of the present invention), and other types of tables such as database tables.
  • As indicated above, the processor 110 is hard-wired or programmed, using software and data structures of the invention, for example, to perform the methodology of the present invention. As a consequence, the system and method of the present invention may be embodied as software which provides such programming or other instructions, such as a set of instructions and/or metadata embodied within a non-transitory computer readable medium, discussed above. In addition, metadata may also be utilized to define the various data structures of a look up table or a database. Such software may be in the form of source or object code, by way of example and without limitation. Source code further may be compiled into some form of instructions or object code (including assembly language instructions or configuration information). The software, source code or metadata of the present invention may be embodied as any type of code, such as C, C++, SystemC, LISA, XML, Java, Brew, SQL and its variations (e.g., SQL 99 or proprietary versions of SQL), DB2, Oracle, or any other type of programming language which performs the functionality discussed herein, including various hardware definition or hardware modeling languages (e.g., Verilog, VHDL, RTL) and resulting database files (e.g., GDSII). As a consequence, a “construct”, “program construct”, “software construct” or “software”, as used equivalently herein, means and refers to any programming language, of any kind, with any syntax or signatures, which provides or can be interpreted to provide the associated functionality or methodology specified (when instantiated or loaded into a processor or computer and executed, including the processor 110, for example).
  • The software, metadata, or other source code of the present invention and any resulting bit file (object code, database, or look up table) may be embodied within any tangible, non-transitory storage medium, such as any of the computer or other machine-readable data storage media, as computer-readable instructions, data structures, program modules or other data, such as discussed above with respect to the memory 120, e.g., a floppy disk, a CDROM, a CD-RW, a DVD, a magnetic hard drive, an optical drive, or any other type of data storage apparatus or medium, as mentioned above.
  • Furthermore, any signal arrows in the drawings/Figures should be considered only exemplary, and not limiting, unless otherwise specifically noted. Combinations of components or steps will also be considered within the scope of the present invention, particularly where the ability to separate or combine is unclear or foreseeable. The disjunctive term “or”, as used herein and throughout the claims that follow, is generally intended to mean “and/or”, having both conjunctive and disjunctive meanings (and is not confined to an “exclusive or” meaning), unless otherwise indicated. As used in the description herein and throughout the claims that follow, “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise. Also as used in the description herein and throughout the claims that follow, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.
  • The foregoing description of illustrated embodiments of the present invention, including what is described in the summary or in the abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed herein. From the foregoing, it will be observed that numerous variations, modifications and substitutions are intended and may be effected without departing from the spirit and scope of the novel concept of the invention. It is to be understood that no limitation with respect to the specific methods and apparatus illustrated herein is intended or should be inferred. It is, of course, intended to cover by the appended claims all such modifications as fall within the scope of the claims.
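As a purely illustrative sketch of such a software embodiment, the host-side portion of the methodology might be expressed in C as a small routine that stages a configuration bit image in host memory and prepares the address-and-size message described in the claims that follow. The structure layout, function names, and file handling below are assumptions made for illustration only; they are not the message format, driver interface, or program code of the actual system.

    /* Illustrative sketch only: the field layout and names here are assumptions,
     * not the message format or software of the claimed system. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical "first message": the memory address and file size of the
     * first configuration bit image staged in the memory circuit. */
    struct config_message {
        uint64_t image_addr;   /* first memory address of the bit image */
        uint32_t image_size;   /* file size of the bit image, in bytes  */
    };

    /* Read a configuration bit image from disk into a host buffer and fill in
     * the message the processor circuit would transmit to the configurable
     * logic circuit (e.g., over PCIe). In a real system the address would be
     * a bus address obtained from the platform's DMA-mapping facilities, not
     * a virtual pointer as shown here. */
    static int stage_bit_image(const char *path, struct config_message *msg)
    {
        FILE *f = fopen(path, "rb");
        if (!f)
            return -1;
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        rewind(f);

        uint8_t *buf = malloc((size_t)size);
        if (!buf) {
            fclose(f);
            return -1;
        }
        if (fread(buf, 1, (size_t)size, f) != (size_t)size) {
            fclose(f);
            free(buf);
            return -1;
        }
        fclose(f);

        msg->image_addr = (uint64_t)(uintptr_t)buf;  /* staging buffer left allocated for the DMA engine */
        msg->image_size = (uint32_t)size;
        return 0;
    }

    int main(int argc, char **argv)
    {
        struct config_message msg;
        if (argc < 2 || stage_bit_image(argv[1], &msg) != 0) {
            fprintf(stderr, "usage: %s <bit-image-file>\n", argv[0]);
            return 1;
        }
        printf("bit image staged at 0x%llx, %u bytes\n",
               (unsigned long long)msg.image_addr, msg.image_size);
        return 0;
    }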

Claims (20)

It is claimed:
1. A method of configuring a computing system having a first configuration bit image for a first application stored in a memory circuit, the method comprising:
configuring a first configurable logic circuit with a communication functionality, the communication functionality provided in a second configuration bit image stored in a nonvolatile memory;
using a processor circuit, transmitting a first message to the first configurable logic circuit, the first message comprising a first memory address of the first configuration bit image in the memory circuit;
using a DMA engine of the first configurable logic circuit, accessing the memory circuit and obtaining the first configuration bit image;
using the first configuration bit image, the first configurable logic circuit self-configuring for the first application; and
using the first configurable logic circuit, transmitting the first configuration bit image to at least one second configurable logic circuit.
2. The method of claim 1, further comprising:
using the first configurable logic circuit, transmitting the first configuration bit image to a second configurable logic circuit.
3. The method of claim 2, further comprising:
using the first configuration bit image transmitted from the first configurable logic circuit, configuring the second configurable logic circuit.
4. The method of claim 1, wherein the first configuration bit image is transmitted to the second configurable logic circuit through one or more JTAG communication lines.
5. The method of claim 1, wherein the communication functionality is PCIe.
6. The method of claim 1, wherein the first message further comprises a file size of the first configuration bit image.
7. The method of claim 1, further comprising:
configuring the DMA engine in the first configurable logic circuit, the DMA engine functionality provided in a third configuration bit image stored in the nonvolatile memory.
8. The method of claim 1, wherein the first message is transmitted to the first configurable logic circuit through a plurality of PCIe communication lines.
9. The method of claim 1, further comprising:
using the processor circuit, transmitting a second message to the first configurable logic circuit, the second message comprising a second memory address in the memory circuit of a third configuration bit image for a second application;
using the DMA engine of the first configurable logic circuit, accessing the memory circuit and obtaining the third configuration bit image; and
using the third configuration bit image, the first configurable logic circuit self-reconfiguring for the second application.
10. A computing system comprising:
a memory circuit storing a first configuration bit image for a first application;
a processor circuit adapted to transmit a first message comprising a first memory address of the first configuration bit image in the memory circuit;
one or more nonvolatile memories, each nonvolatile memory storing a second configuration bit image for a communication functionality; and
a first configurable logic circuit coupled to the processor circuit and coupled to a nonvolatile memory of the one or more nonvolatile memories, the first configurable logic circuit configured for the communication functionality using the second configuration bit image, the first configurable logic circuit having a DMA engine and, in response to the first message, configured to use the DMA engine to access the memory circuit, obtain the first configuration bit image, and to self-configure for the first application using the first configuration bit image.
11. The computing system of claim 10, further comprising:
at least one second configurable logic circuit coupled to the first configurable logic circuit, the first configurable logic circuit configured to transmit the first configuration bit image to the at least one second configurable logic circuit.
12. The computing system of claim 11, wherein the at least one second configurable logic circuit is configured for the first application using the first configuration bit image.
13. The computing system of claim 11, further comprising:
one or more JTAG communication lines coupling the first configurable logic circuit to the at least one second configurable logic circuit for transmission of the first configuration bit image.
14. The computing system of claim 10, further comprising:
a PCIe switch; and
a plurality of PCIe communication lines coupling the first configurable logic circuit through the PCIe switch to the processor circuit.
15. The computing system of claim 14, wherein the communication functionality is PCIe.
16. The computing system of claim 14, wherein the first message is transmitted to the first configurable logic circuit through the plurality of PCIe communication lines.
17. The computing system of claim 10, wherein the first message further comprises a file size of the first configuration bit image.
18. The computing system of claim 10, wherein the first configurable logic circuit is configured for the DMA engine functionality using a third configuration bit image stored in the nonvolatile memory.
19. The computing system of claim 10, wherein in response to a second message transmitted from the processor circuit, the second message comprising a second memory address in the memory circuit for a third configuration bit image for a second application, the first configurable logic circuit is configured to access the memory circuit, obtain the third configuration bit image, and to reconfigure for the second application using the third configuration bit image.
20. A configurable computing system comprising:
a memory circuit storing a first configuration bit image for an application;
one or more nonvolatile memories, each nonvolatile memory storing a second configuration bit image for a communication functionality;
a first configurable logic circuit coupled to a nonvolatile memory of the one or more nonvolatile memories, the first configurable logic circuit configured for the communication functionality using the second configuration bit image, the first configurable logic circuit having a DMA engine and, in response to a message comprising a memory address of the first configuration bit image in the memory circuit, configured to use the DMA engine to access the memory circuit, obtain the first configuration bit image, and self-configure for the application using the first configuration bit image; the first configurable logic circuit further configured to transmit the first configuration bit image over one or more communication lines; and
at least one second configurable logic circuit coupled to the one or more communication lines, the at least one second configurable logic circuit configured for the application using the first configuration bit image transmitted over the one or more communication lines.
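Purely as a non-limiting illustration, the sequence recited in claims 1-3 and 10-13 above can be summarized by the following C outline. Every function in it is a hypothetical stub standing in for hardware behavior (loading the communication image from nonvolatile memory, PCIe message delivery, DMA reads, JTAG forwarding); none of these names, addresses, or sizes are taken from the specification.

    /* Outline of the claimed configuration flow; all functions are stubs that
     * merely print the step they represent. */
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical message from the processor circuit (claims 1 and 6). */
    struct config_message {
        uint64_t image_addr;
        uint32_t image_size;
    };

    static void load_comm_image_from_flash(void)
    {
        puts("FPGA 1: PCIe/DMA communication image loaded from nonvolatile memory");
    }

    static void dma_read(uint64_t addr, uint32_t size)
    {
        printf("FPGA 1: DMA read of %u bytes at 0x%llx from the memory circuit\n",
               size, (unsigned long long)addr);
    }

    static void self_configure(void)
    {
        puts("FPGA 1: self-configured for the first application");
    }

    static void jtag_forward(int target)
    {
        printf("FPGA 1: configuration bit image forwarded over JTAG to FPGA %d\n", target);
    }

    int main(void)
    {
        /* 1. The first configurable logic circuit powers up with the
         *    communication functionality (second configuration bit image)
         *    loaded from its nonvolatile memory.                           */
        load_comm_image_from_flash();

        /* 2. The processor circuit transmits the first message over PCIe:
         *    the memory address and file size of the first configuration
         *    bit image (the values here are placeholders).                 */
        struct config_message msg = { 0x100000000ULL, 32u * 1024u * 1024u };

        /* 3. The DMA engine fetches the bit image and the circuit
         *    self-configures for the first application.                    */
        dma_read(msg.image_addr, msg.image_size);
        self_configure();

        /* 4. The same bit image is forwarded over JTAG to the second
         *    configurable logic circuit, which configures itself in turn
         *    (claims 2-4 and 11-13).                                        */
        jtag_forward(2);
        return 0;
    }

In a system with more than two devices, this forwarding step could be repeated or fanned out so that every configurable logic circuit receives the image, consistent with the parallel configuration approach named in the title.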
US17/201,022 2013-03-15 2021-03-15 High Speed, Parallel Configuration of Multiple Field Programmable Gate Arrays Pending US20210200706A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/201,022 US20210200706A1 (en) 2013-03-15 2021-03-15 High Speed, Parallel Configuration of Multiple Field Programmable Gate Arrays

Applications Claiming Priority (9)

Application Number Priority Date Filing Date Title
US201361790407P 2013-03-15 2013-03-15
US201361790720P 2013-03-15 2013-03-15
US201461940009P 2014-02-14 2014-02-14
US201461940472P 2014-02-16 2014-02-16
US14/201,824 US9734284B2 (en) 2013-03-15 2014-03-08 Hardware acceleration of short read mapping for genomic and other types of analyses
US14/213,495 US9740798B2 (en) 2013-03-15 2014-03-14 Inexact search acceleration
US14/608,414 US9658977B2 (en) 2013-03-15 2015-01-29 High speed, parallel configuration of multiple field programmable gate arrays
US15/594,627 US10990551B2 (en) 2013-03-15 2017-05-14 High speed, parallel configuration of multiple field programmable gate arrays
US17/201,022 US20210200706A1 (en) 2013-03-15 2021-03-15 High Speed, Parallel Configuration of Multiple Field Programmable Gate Arrays

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/594,627 Continuation US10990551B2 (en) 2013-03-15 2017-05-14 High speed, parallel configuration of multiple field programmable gate arrays

Publications (1)

Publication Number Publication Date
US20210200706A1 true US20210200706A1 (en) 2021-07-01

Family

ID=53174462

Family Applications (3)

Application Number Title Priority Date Filing Date
US14/608,414 Active US9658977B2 (en) 2013-03-15 2015-01-29 High speed, parallel configuration of multiple field programmable gate arrays
US15/594,627 Active 2036-02-06 US10990551B2 (en) 2013-03-15 2017-05-14 High speed, parallel configuration of multiple field programmable gate arrays
US17/201,022 Pending US20210200706A1 (en) 2013-03-15 2021-03-15 High Speed, Parallel Configuration of Multiple Field Programmable Gate Arrays

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US14/608,414 Active US9658977B2 (en) 2013-03-15 2015-01-29 High speed, parallel configuration of multiple field programmable gate arrays
US15/594,627 Active 2036-02-06 US10990551B2 (en) 2013-03-15 2017-05-14 High speed, parallel configuration of multiple field programmable gate arrays

Country Status (1)

Country Link
US (3) US9658977B2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114217953A (en) * 2021-11-18 2022-03-22 成都理工大学 Target positioning system and identification method based on FPGA image processing
US11836128B1 (en) 2023-07-21 2023-12-05 Sadram, Inc. Self-addressing memory

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017136648A1 (en) * 2016-02-03 2017-08-10 Commscope Technologies Llc Priority based reconfiguration scheme for remote units
US10540186B1 (en) 2017-04-18 2020-01-21 Amazon Technologies, Inc. Interception of identifier from client configurable hardware logic
CN107122222B (en) * 2017-04-20 2019-02-19 深圳大普微电子科技有限公司 A kind of search system and method for character string
US11238203B2 (en) * 2017-06-30 2022-02-01 Intel Corporation Systems and methods for accessing storage-as-memory
US10474600B2 (en) * 2017-09-14 2019-11-12 Samsung Electronics Co., Ltd. Heterogeneous accelerator for highly efficient learning systems
CN108196798B (en) * 2018-01-31 2021-03-23 新华三信息技术有限公司 RAID control card configuration method and device
CN108319465B (en) * 2018-04-09 2024-04-05 中国科学院微电子研究所 Circuit and method for upgrading FPGA configuration data
KR102570581B1 (en) 2018-06-07 2023-08-24 삼성전자 주식회사 Storage device set including storage device and reconfigurable logic chip, and storage system including storage device set
CN111273923B (en) * 2018-12-05 2022-04-12 华为技术有限公司 FPGA (field programmable Gate array) upgrading method based on PCIe (peripheral component interface express) interface
CN109710560A (en) * 2018-12-25 2019-05-03 杭州迪普科技股份有限公司 A kind of method and apparatus that CPU interacts confirmation with FPGA
CN109785224B (en) * 2019-01-29 2021-09-17 华中科技大学 Graph data processing method and system based on FPGA
CN110704365A (en) * 2019-08-20 2020-01-17 浙江大华技术股份有限公司 Reconstruction device based on FPGA
CN110825674B (en) * 2019-10-30 2021-02-12 北京计算机技术及应用研究所 PCIE DMA (peripheral component interface express) interaction system and interaction method based on FPGA (field programmable Gate array)
KR20210072503A (en) 2019-12-09 2021-06-17 삼성전자주식회사 Storage device set including storage device and reconfigurable logic chip, and storage system including storage device set
CN111221759B (en) * 2020-01-17 2021-05-28 深圳市风云实业有限公司 Data processing system and method based on DMA
DE102020116872A1 (en) * 2020-03-27 2021-09-30 Dspace Digital Signal Processing And Control Engineering Gmbh Method for programming a programmable gate array in a distributed computer system
US20230205420A1 (en) * 2021-12-29 2023-06-29 Advanced Micro Devices, Inc. Flexible memory system

Also Published As

Publication number Publication date
US20150143003A1 (en) 2015-05-21
US10990551B2 (en) 2021-04-27
US20170249274A1 (en) 2017-08-31
US9658977B2 (en) 2017-05-23

Similar Documents

Publication Publication Date Title
US20210200706A1 (en) High Speed, Parallel Configuration of Multiple Field Programmable Gate Arrays
US20210209045A1 (en) System and Method for Independent, Direct and Parallel Communication Among Multiple Field Programmable Gate Arrays
US11677662B2 (en) FPGA-efficient directional two-dimensional router
US20210202036A1 (en) Hardware Acceleration of Short Read Mapping for Genomic and Other Types of Analyses
US8063660B1 (en) Method and apparatus for configurable address translation
US9698790B2 (en) Computer architecture using rapidly reconfigurable circuits and high-bandwidth memory interfaces
JP7279064B2 (en) Memory organization for tensor data
US20190012089A1 (en) Interconnect systems and methods using memory links to send packetized data over different endpoints of a data handling device
US20210200817A1 (en) Inexact Search Acceleration
US11080449B2 (en) Modular periphery tile for integrated circuit device
US11860782B2 (en) Compensating for DRAM activation penalties
EP3497722B1 (en) Standalone interface for stacked silicon interconnect (ssi) technology integration
WO2016191304A1 (en) Directional two-dimensional router and interconnection network for field programmable gate arrays, and other circuits, and applications of the router and network
US20180358313A1 (en) High bandwidth memory (hbm) bandwidth aggregation switch
US11670589B2 (en) Fabric die to fabric die interconnect for modularized integrated circuit devices
TWI616764B (en) Layouts for memory and logic circuits in a system-on-chip
CN103577347A (en) Method for operating memory device, and system for memory operation
US10126361B1 (en) Processing of a circuit design for debugging
US20230325345A1 (en) Mesh network-on-a-chip (noc) with heterogeneous routers
US20230053664A1 (en) Full Die and Partial Die Tape Outs from Common Design
Liu Design of Configurable and Extensible Accelerator Architecture for Machine Learning Algorithms
KR20220113515A (en) Repurposed Byte Enable as Clock Enable to Save Power

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION