US20190065281A1 - Technologies for auto-migration in accelerated architectures - Google Patents
- Publication number
- US20190065281A1 (U.S. application Ser. No. 15/859,385)
- Authority
- US
- United States
- Prior art keywords
- compute
- sled
- hardware threads
- sleds
- migrate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/18—Packaging or power distribution
- G06F1/183—Internal mounting support structures, e.g. for printed circuit boards, internal connecting means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
- G06F9/5088—Techniques for rebalancing the load in a distributed system involving task migration
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J15/00—Gripping heads and other end effectors
- B25J15/0014—Gripping heads and other end effectors having fork, comb or plate shaped means for engaging the lower surface on a object to be transported
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/20—Cooling means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/3003—Monitoring arrangements specially adapted to the computing system or computing system component being monitored
- G06F11/3006—Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3409—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3442—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for planning or managing the needed capacity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3466—Performance evaluation by tracing or monitoring
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/06—Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
- G06F12/0615—Address space extension
- G06F12/0623—Address space extension for memory modules
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/16—Handling requests for interconnection or transfer for access to memory bus
- G06F13/1605—Handling requests for interconnection or transfer for access to memory bus based on arbitration
- G06F13/1652—Handling requests for interconnection or transfer for access to memory bus based on arbitration in a multiprocessor architecture
- G06F13/1657—Access to multiple memories
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/16—Handling requests for interconnection or transfer for access to memory bus
- G06F13/1668—Details of memory controller
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/20—Handling requests for interconnection or transfer for access to input/output bus
- G06F13/28—Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/20—Handling requests for interconnection or transfer for access to input/output bus
- G06F13/28—Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
- G06F13/30—Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal with priority control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/42—Bus transfer protocol, e.g. handshake; Synchronisation
- G06F13/4204—Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus
- G06F13/4221—Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being an input/output bus, e.g. ISA bus, EISA bus, PCI bus, SCSI bus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
- G06F15/161—Computing infrastructure, e.g. computer clusters, blade chassis or hardware partitioning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/76—Architectures of general purpose stored program computers
- G06F15/78—Architectures of general purpose stored program computers comprising a single central processing unit
- G06F15/7807—System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/76—Architectures of general purpose stored program computers
- G06F15/78—Architectures of general purpose stored program computers comprising a single central processing unit
- G06F15/7867—Architectures of general purpose stored program computers comprising a single central processing unit with reconfigurable architecture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5072—Grid computing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0813—Configuration setting characterised by the conditions triggering a change of settings
- H04L41/0816—Configuration setting characterised by the conditions triggering a change of settings the condition being an adaptation, e.g. in response to network events
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0896—Bandwidth or capacity management, i.e. automatically increasing or decreasing capacities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/50—Network service management, e.g. ensuring proper service fulfilment according to agreements
- H04L41/5003—Managing SLA; Interaction between SLA and QoS
- H04L41/5019—Ensuring fulfilment of SLA
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/50—Network service management, e.g. ensuring proper service fulfilment according to agreements
- H04L41/5003—Managing SLA; Interaction between SLA and QoS
- H04L41/5019—Ensuring fulfilment of SLA
- H04L41/5025—Ensuring fulfilment of SLA by proactively reacting to service quality change, e.g. by reconfiguration after service quality degradation or upgrade
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/04—Processing captured monitoring data, e.g. for logfile generation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/06—Generation of reports
- H04L43/065—Generation of reports related to network devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L43/0876—Network utilisation, e.g. volume of load or congestion level
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/25—Flow control; Congestion control with rate being modified by the source upon detecting a change of network conditions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/70—Admission control; Resource allocation
- H04L47/76—Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions
- H04L47/762—Admission control; Resource allocation using dynamic resource allocation, e.g. in-call renegotiation requested by the user or requested by the network in response to changing network conditions triggered by the network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/70—Admission control; Resource allocation
- H04L47/83—Admission control; Resource allocation based on usage prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/40—Constructional details, e.g. power supply, mechanical construction or backplane
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/1008—Server selection for load balancing based on parameters of servers, e.g. available memory or workload
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/16—Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04Q—SELECTING
- H04Q1/00—Details of selecting apparatus or arrangements
- H04Q1/02—Constructional details
- H04Q1/10—Exchange station construction
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05K—PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
- H05K7/00—Constructional details common to different types of electric apparatus
- H05K7/14—Mounting supporting structure in casing or on frame or rack
- H05K7/1485—Servers; Data center rooms, e.g. 19-inch computer racks
- H05K7/1488—Cabinets therefor, e.g. chassis or racks or mechanical interfaces between blades and support structures
- H05K7/1489—Cabinets therefor, e.g. chassis or racks or mechanical interfaces between blades and support structures characterized by the mounting of blades therein, e.g. brackets, rails, trays
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05K—PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
- H05K7/00—Constructional details common to different types of electric apparatus
- H05K7/14—Mounting supporting structure in casing or on frame or rack
- H05K7/1485—Servers; Data center rooms, e.g. 19-inch computer racks
- H05K7/1498—Resource management, Optimisation arrangements, e.g. configuration, identification, tracking, physical location
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05K—PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
- H05K7/00—Constructional details common to different types of electric apparatus
- H05K7/18—Construction of rack or frame
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05K—PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
- H05K7/00—Constructional details common to different types of electric apparatus
- H05K7/20—Modifications to facilitate cooling, ventilating, or heating
- H05K7/20009—Modifications to facilitate cooling, ventilating, or heating using a gaseous coolant in electronic enclosures
- H05K7/20209—Thermal management, e.g. fan control
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05K—PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
- H05K7/00—Constructional details common to different types of electric apparatus
- H05K7/20—Modifications to facilitate cooling, ventilating, or heating
- H05K7/20709—Modifications to facilitate cooling, ventilating, or heating for server racks or cabinets; for data centers, e.g. 19-inch computer racks
- H05K7/20718—Forced ventilation of a gaseous coolant
- H05K7/20736—Forced ventilation of a gaseous coolant within cabinets for removing heat from server blades
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/40—Bus structure
- G06F13/4004—Coupling between buses
- G06F13/4022—Coupling between buses using switching circuits, e.g. switching matrix, connection or expansion network
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/10—Protecting distributed programs or content, e.g. vending or licensing of copyrighted material ; Digital rights management [DRM]
- G06F21/105—Arrangements for software license management or administration, e.g. for managing licenses at corporate level
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2200/00—Indexing scheme relating to G06F1/04 - G06F1/32
- G06F2200/20—Indexing scheme relating to G06F1/20
- G06F2200/201—Cooling arrangements using cooling fluid
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/86—Event-based monitoring
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/885—Monitoring specific for caches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/485—Task life-cycle, e.g. stopping, restarting, resuming execution
- G06F9/4856—Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0631—Resource planning, allocation, distributing or scheduling for enterprises or organisations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0283—Price estimation or determination
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/04—Network management architectures or arrangements
- H04L41/044—Network management architectures or arrangements comprising hierarchical management structures
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/16—Threshold monitoring
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/04—Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
- H04L63/0428—Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
Definitions
- certain application functionality may be offloaded to a field-programmable gate array (FPGA).
- the hardware threads of the application are paused while waiting for the compute kernels to execute in the FPGA.
- the software stack is required to make the decision to pause/resume, which can be an ineffective solution under certain conditions.
- pausing/resuming the application threads presently operates on the order of milliseconds (e.g., driven by software interactions) and considers only a binary solution (i.e., pause/resume), which can be an inflexible solution in certain computing environments.
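- To make the limitation described above concrete, the following is a minimal illustrative sketch (not taken from the patent) of the conventional software-driven offload pattern, in which an application thread submits a compute kernel to an FPGA and then blocks in a binary pause/resume fashion. The FpgaDevice class, its method names, and the simulated kernel runtime are hypothetical placeholders used only to show why software-driven pause/resume is both millisecond-scale and inflexible.

```python
import time

class FpgaDevice:
    """Hypothetical software-stack handle to an FPGA; names are illustrative only."""

    def __init__(self) -> None:
        self._jobs: dict[int, float] = {}
        self._next_id = 0

    def submit_kernel(self, bitstream_id: str, args: dict) -> int:
        # In a real stack this would configure the FPGA and queue the kernel;
        # here we simply record a completion time to simulate execution.
        job = self._next_id
        self._next_id += 1
        self._jobs[job] = time.monotonic() + 0.05  # pretend the kernel takes 50 ms
        return job

    def is_done(self, job: int) -> bool:
        return time.monotonic() >= self._jobs[job]

    def fetch_results(self, job: int) -> dict:
        return {"job": job, "status": "complete"}


def offload_and_wait(device: FpgaDevice, bitstream_id: str, args: dict) -> dict:
    # The calling application thread is effectively "paused" here: it submits
    # the kernel, then blocks and polls through the software stack. Each poll
    # is driven by software interactions, so the pause/resume granularity is
    # on the order of milliseconds, and the only two states are paused/resumed.
    job = device.submit_kernel(bitstream_id, args)
    while not device.is_done(job):
        time.sleep(0.001)  # millisecond-scale, software-driven wait
    return device.fetch_results(job)


print(offload_and_wait(FpgaDevice(), "matrix_multiply_kernel", {"n": 1024}))
```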
- FIG. 1 is a simplified diagram of at least one embodiment of a data center for executing workloads with disaggregated resources
- FIG. 2 is a simplified diagram of at least one embodiment of a pod of the data center of FIG. 1 ;
- FIG. 3 is a perspective view of at least one embodiment of a rack that may be included in the pod of FIG. 2 ;
- FIG. 4 is a side plan elevation view of the rack of FIG. 3 ;
- FIG. 5 is a perspective view of the rack of FIG. 3 having a sled mounted therein;
- FIG. 6 is a simplified block diagram of at least one embodiment of a top side of the sled of FIG. 5 ;
- FIG. 7 is a simplified block diagram of at least one embodiment of a bottom side of the sled of FIG. 6 ;
- FIG. 8 is a simplified block diagram of at least one embodiment of a compute sled usable in the data center of FIG. 1 ;
- FIG. 9 is a top perspective view of at least one embodiment of the compute sled of FIG. 8 ;
- FIG. 10 is a simplified block diagram of at least one embodiment of an accelerator sled usable in the data center of FIG. 1 ;
- FIG. 11 is a top perspective view of at least one embodiment of the accelerator sled of FIG. 10 ;
- FIG. 12 is a simplified block diagram of at least one embodiment of a storage sled usable in the data center of FIG. 1 ;
- FIG. 13 is a top perspective view of at least one embodiment of the storage sled of FIG. 12 ;
- FIG. 14 is a simplified block diagram of at least one embodiment of a memory sled usable in the data center of FIG. 1 ;
- FIG. 15 is a simplified block diagram of a system that may be established within the data center of FIG. 1 to execute workloads with managed nodes composed of disaggregated resources.
- FIG. 16 is a simplified block diagram of at least one embodiment of a system for auto-migration in accelerated architectures which includes multiple compute sleds, a storage sled, multiple accelerator sleds, a network switch, and a resource manager server;
- FIG. 17 is a simplified block diagram of at least one embodiment of one of the compute sleds of the system of FIG. 16 ;
- FIG. 18 is a simplified block diagram of at least one embodiment of an environment that may be established by one of the compute sleds of FIGS. 16 and 17 ;
- FIG. 19 is a simplified block diagram of at least one embodiment of the network switch of the system of FIG. 16 ;
- FIG. 20 is a simplified flow diagram of at least one embodiment of a method for offloading a compute kernel to a field-programmable gate array (FPGA) that may be performed by an application presently executing on one or more compute sleds of the system of FIG. 16 ;
- FIGS. 21A and 21B are a simplified flow diagram of at least one embodiment of a method for auto-migration in accelerated architectures that may be performed by one of the compute sleds of FIGS. 16-18 ;
- FIGS. 22A and 22B are simplified block diagrams of at least one embodiment of an auto-migration of an application being consolidated with another application in one of the compute sleds of the system of FIG. 16 having a high-performance central processing unit (CPU);
- FIGS. 23A and 23B are simplified block diagrams of at least one embodiment of an auto-migration of an application being migrated from a high-performance CPU of one of the compute sleds of the system of FIG. 16 to another of the compute sleds having a low-performance CPU;
- FIGS. 24A and 24B are simplified block diagrams of at least one embodiment of an auto-migration of an application and a compute kernel to one of the accelerator sleds of the system of FIG. 16 .
- references in the specification to “one embodiment,” “an embodiment,” “an illustrative embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
- items included in a list in the form of “at least one A, B, and C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
- items listed in the form of “at least one of A, B, or C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
- the disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof.
- the disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors.
- a machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).
- a data center 100 in which disaggregated resources may cooperatively execute one or more workloads includes multiple pods 110 , 120 , 130 , 140 , each of which includes one or more rows of racks.
- each rack houses multiple sleds, which each may be embodied as a compute device, such as a server, that is primarily equipped with a particular type of resource (e.g., memory devices, data storage devices, accelerator devices, general purpose processors).
- the sleds in each pod 110 , 120 , 130 , 140 are connected to multiple pod switches (e.g., switches that route data communications to and from sleds within the pod).
- the pod switches connect with spine switches 150 that switch communications among pods (e.g., the pods 110 , 120 , 130 , 140 ) in the data center 100 .
- the sleds may be connected with a fabric using Intel Omni-Path technology.
- resources within sleds in the data center 100 may be allocated to a group (referred to herein as a “managed node”) containing resources from one or more other sleds to be collectively utilized in the execution of a workload.
- the workload can execute as if the resources belonging to the managed node were located on the same sled.
- the resources in a managed node may even belong to sleds belonging to different racks, and even to different pods 110 , 120 , 130 , 140 .
- Some resources of a single sled may be allocated to one managed node while other resources of the same sled are allocated to a different managed node (e.g., one processor assigned to one managed node and another processor of the same sled assigned to a different managed node).
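- As one way to visualize the managed-node concept described above, the sketch below (not part of the patent) models a managed node as a collection of resource handles drawn from different sleds, racks, and pods; the field names and the compose_node helper are hypothetical illustrations of how an orchestrator might group disaggregated resources.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class ResourceHandle:
    """One disaggregated resource (e.g., a processor or memory module) on a sled."""
    pod: str         # e.g., "pod-110"
    rack: str        # e.g., "rack-240-3"
    sled: str        # e.g., "compute-sled-800-1"
    kind: str        # "compute", "memory", "storage", or "accelerator"
    identifier: str  # device-local identifier, e.g., "cpu0"

@dataclass
class ManagedNode:
    """A group of resources, possibly from different sleds, racks, and pods,
    that collectively execute a workload as if co-located on one sled."""
    name: str
    resources: List[ResourceHandle] = field(default_factory=list)

def compose_node(name: str, resources: List[ResourceHandle]) -> ManagedNode:
    # A managed node may mix resources from different racks and pods, and two
    # resources on the same sled may belong to different managed nodes.
    return ManagedNode(name=name, resources=list(resources))

# Example: a node combining a processor from one sled with memory from another.
node = compose_node("workload-A", [
    ResourceHandle("pod-110", "rack-240-1", "compute-sled-800-1", "compute", "cpu0"),
    ResourceHandle("pod-120", "rack-240-7", "memory-sled-1400-2", "memory", "dimm3"),
])
print(node)
```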
- By disaggregating resources to sleds comprised predominantly of a single type of resource (e.g., compute sleds comprising primarily compute resources, memory sleds containing primarily memory resources), and selectively allocating and deallocating the disaggregated resources to form a managed node assigned to execute a workload, the data center 100 provides more efficient resource usage than typical data centers comprised of hyperconverged servers that contain compute, memory, storage, and perhaps additional resources. As such, the data center 100 may provide greater performance (e.g., throughput, operations per second, latency, etc.) than a typical data center that has the same number of resources.
- the pod 110 in the illustrative embodiment, includes a set of rows 200 , 210 , 220 , 230 of racks 240 .
- Each rack 240 may house multiple sleds (e.g., sixteen sleds) and provide power and data connections to the housed sleds, as described in more detail herein.
- the racks in each row 200 , 210 , 220 , 230 are connected to multiple pod switches 250 , 260 .
- the pod switch 250 includes a set of ports 252 to which the sleds of the racks of the pod 110 are connected and another set of ports 254 that connect the pod 110 to the spine switches 150 to provide connectivity to other pods in the data center 100 .
- the pod switch 260 includes a set of ports 262 to which the sleds of the racks of the pod 110 are connected and a set of ports 264 that connect the pod 110 to the spine switches 150 . As such, the use of the pair of switches 250 , 260 provides an amount of redundancy to the pod 110 .
- the switches 150 , 250 , 260 may be embodied as dual-mode optical switches, capable of routing both Ethernet protocol communications carrying Internet Protocol (IP) packets and communications according to a second, high-performance link-layer protocol (e.g., Intel's Omni-Path Architecture, InfiniBand) via optical signaling media of an optical fabric.
- each of the other pods 120 , 130 , 140 may be similarly structured as, and have components similar to, the pod 110 shown in and described in regard to FIG. 2 (e.g., each pod may have rows of racks housing multiple sleds as described above). Additionally, while two pod switches 250 , 260 are shown, it should be understood that in other embodiments, each pod 110 , 120 , 130 , 140 may be connected to a different number of pod switches (e.g., providing even more failover capacity).
- each illustrative rack 240 of the data center 100 includes two elongated support posts 302 , 304 , which are arranged vertically.
- the elongated support posts 302 , 304 may extend upwardly from a floor of the data center 100 when deployed.
- the rack 240 also includes one or more horizontal pairs 310 of elongated support arms 312 (identified in FIG. 3 via a dashed ellipse) configured to support a sled of the data center 100 as discussed below.
- One elongated support arm 312 of the pair of elongated support arms 312 extends outwardly from the elongated support post 302 and the other elongated support arm 312 extends outwardly from the elongated support post 304 .
- each sled of the data center 100 is embodied as a chassis-less sled. That is, each sled has a chassis-less circuit board substrate on which physical resources (e.g., processors, memory, accelerators, storage, etc.) are mounted as discussed in more detail below.
- the rack 240 is configured to receive the chassis-less sleds.
- each pair 310 of elongated support arms 312 defines a sled slot 320 of the rack 240 , which is configured to receive a corresponding chassis-less sled.
- each illustrative elongated support arm 312 includes a circuit board guide 330 configured to receive the chassis-less circuit board substrate of the sled.
- Each circuit board guide 330 is secured to, or otherwise mounted to, a top side 332 of the corresponding elongated support arm 312 .
- each circuit board guide 330 is mounted at a distal end of the corresponding elongated support arm 312 relative to the corresponding elongated support post 302 , 304 .
- not every circuit board guide 330 may be referenced in each Figure.
- Each circuit board guide 330 includes an inner wall that defines a circuit board slot 380 configured to receive the chassis-less circuit board substrate of a sled 400 when the sled 400 is received in the corresponding sled slot 320 of the rack 240 .
- a user aligns the chassis-less circuit board substrate of an illustrative chassis-less sled 400 to a sled slot 320 .
- the user or robot may then slide the chassis-less circuit board substrate forward into the sled slot 320 such that each side edge 414 of the chassis-less circuit board substrate is received in a corresponding circuit board slot 380 of the circuit board guides 330 of the pair 310 of elongated support arms 312 that define the corresponding sled slot 320 as shown in FIG. 4 .
- each type of resource can be upgraded independently of each other and at their own optimized refresh rate.
- the sleds are configured to blindly mate with power and data communication cables in each rack 240 , enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced.
- the data center 100 may operate (e.g., execute workloads, undergo maintenance and/or upgrades, etc.) without human involvement on the data center floor.
- a human may facilitate one or more maintenance or upgrade operations in the data center 100 .
- each circuit board guide 330 is dual sided. That is, each circuit board guide 330 includes an inner wall that defines a circuit board slot 380 on each side of the circuit board guide 330 . In this way, each circuit board guide 330 can support a chassis-less circuit board substrate on either side. As such, a single additional elongated support post may be added to the rack 240 to turn the rack 240 into a two-rack solution that can hold twice as many sled slots 320 as shown in FIG. 3 .
- the illustrative rack 240 includes seven pairs 310 of elongated support arms 312 that define a corresponding seven sled slots 320 , each configured to receive and support a corresponding sled 400 as discussed above.
- the rack 240 may include additional or fewer pairs 310 of elongated support arms 312 (i.e., additional or fewer sled slots 320 ). It should be appreciated that because the sled 400 is chassis-less, the sled 400 may have an overall height that is different than typical servers. As such, in some embodiments, the height of each sled slot 320 may be shorter than the height of a typical server (e.g., shorter than a single rack unit, "1U").
- each of the elongated support posts 302 , 304 may have a length of six feet or less.
- the rack 240 may have different dimensions.
- the rack 240 does not include any walls, enclosures, or the like. Rather, the rack 240 is an enclosure-less rack that is opened to the local environment.
- an end plate may be attached to one of the elongated support posts 302 , 304 in those situations in which the rack 240 forms an end-of-row rack in the data center 100 .
- each elongated support post 302 , 304 includes an inner wall that defines an inner chamber in which the interconnect may be located.
- the interconnects routed through the elongated support posts 302 , 304 may be embodied as any type of interconnects including, but not limited to, data or communication interconnects to provide communication connections to each sled slot 320 , power interconnects to provide power to each sled slot 320 , and/or other types of interconnects.
- the rack 240 in the illustrative embodiment, includes a support platform on which a corresponding optical data connector (not shown) is mounted.
- Each optical data connector is associated with a corresponding sled slot 320 and is configured to mate with an optical data connector of a corresponding sled 400 when the sled 400 is received in the corresponding sled slot 320 .
- optical connections between components (e.g., sleds, racks, and switches) in the data center 100 are made with a blind mate optical connection.
- a door on each cable may prevent dust from contaminating the fiber inside the cable.
- the door is pushed open when the end of the cable enters the connector mechanism. Subsequently, the optical fiber inside the cable enters a gel within the connector mechanism and the optical fiber of one cable comes into contact with the optical fiber of another cable within the gel inside the connector mechanism.
- the illustrative rack 240 also includes a fan array 370 coupled to the cross-support arms of the rack 240 .
- the fan array 370 includes one or more rows of cooling fans 372 , which are aligned in a horizontal line between the elongated support posts 302 , 304 .
- the fan array 370 includes a row of cooling fans 372 for each sled slot 320 of the rack 240 .
- each sled 400 does not include any on-board cooling system in the illustrative embodiment and, as such, the fan array 370 provides cooling for each sled 400 received in the rack 240 .
- Each rack 240 also includes a power supply associated with each sled slot 320 .
- Each power supply is secured to one of the elongated support arms 312 of the pair 310 of elongated support arms 312 that define the corresponding sled slot 320 .
- the rack 240 may include a power supply coupled or secured to each elongated support arm 312 extending from the elongated support post 302 .
- Each power supply includes a power connector configured to mate with a power connector of the sled 400 when the sled 400 is received in the corresponding sled slot 320 .
- the sled 400 does not include any on-board power supply and, as such, the power supplies provided in the rack 240 supply power to corresponding sleds 400 when mounted to the rack 240 .
- each sled 400 in the illustrative embodiment, is configured to be mounted in a corresponding rack 240 of the data center 100 as discussed above.
- each sled 400 may be optimized or otherwise configured for performing particular tasks, such as compute tasks, acceleration tasks, data storage tasks, etc.
- the sled 400 may be embodied as a compute sled 800 as discussed below in regard to FIGS. 8-9 , an accelerator sled 1000 as discussed below in regard to FIGS. 10-11 , a storage sled 1200 as discussed below in regard to FIGS. 12-13 , or as a sled optimized or otherwise configured to perform other specialized tasks, such as a memory sled 1400 , discussed below in regard to FIG. 14 .
- the illustrative sled 400 includes a chassis-less circuit board substrate 602 , which supports various physical resources (e.g., electrical components) mounted thereon.
- the circuit board substrate 602 is “chassis-less” in that the sled 400 does not include a housing or enclosure. Rather, the chassis-less circuit board substrate 602 is open to the local environment.
- the chassis-less circuit board substrate 602 may be formed from any material capable of supporting the various electrical components mounted thereon.
- the chassis-less circuit board substrate 602 is formed from an FR-4 glass-reinforced epoxy laminate material. Of course, other materials may be used to form the chassis-less circuit board substrate 602 in other embodiments.
- the chassis-less circuit board substrate 602 includes multiple features that improve the thermal cooling characteristics of the various electrical components mounted on the chassis-less circuit board substrate 602 .
- the chassis-less circuit board substrate 602 does not include a housing or enclosure, which may improve the airflow over the electrical components of the sled 400 by reducing those structures that may inhibit air flow.
- the chassis-less circuit board substrate 602 is not positioned in an individual housing or enclosure, there is no backplane (e.g., a backplate of the chassis) to the chassis-less circuit board substrate 602 , which could inhibit air flow across the electrical components.
- the chassis-less circuit board substrate 602 has a geometric shape configured to reduce the length of the airflow path across the electrical components mounted to the chassis-less circuit board substrate 602 .
- the illustrative chassis-less circuit board substrate 602 has a width 604 that is greater than a depth 606 of the chassis-less circuit board substrate 602 .
- the chassis-less circuit board substrate 602 has a width of about 21 inches and a depth of about 9 inches, compared to a typical server that has a width of about 17 inches and a depth of about 39 inches.
- an airflow path 608 that extends from a front edge 610 of the chassis-less circuit board substrate 602 toward a rear edge 612 has a shorter distance relative to typical servers, which may improve the thermal cooling characteristics of the sled 400 .
- the various physical resources mounted to the chassis-less circuit board substrate 602 are mounted in corresponding locations such that no two substantively heat-producing electrical components shadow each other as discussed in more detail below.
- no two electrical components which produce appreciable heat during operation (i.e., greater than a nominal amount of heat sufficient to adversely impact the cooling of another electrical component) are mounted to the chassis-less circuit board substrate 602 linearly in-line with each other along the direction of the airflow path 608 (i.e., along a direction extending from the front edge 610 toward the rear edge 612 of the chassis-less circuit board substrate 602 ).
- the illustrative sled 400 includes one or more physical resources 620 mounted to a top side 650 of the chassis-less circuit board substrate 602 .
- the physical resources 620 may be embodied as any type of processor, controller, or other compute circuit capable of performing various tasks such as compute functions and/or controlling the functions of the sled 400 depending on, for example, the type or intended functionality of the sled 400 .
- the physical resources 620 may be embodied as high-performance processors in embodiments in which the sled 400 is embodied as a compute sled, as accelerator co-processors or circuits in embodiments in which the sled 400 is embodied as an accelerator sled, storage controllers in embodiments in which the sled 400 is embodied as a storage sled, or a set of memory devices in embodiments in which the sled 400 is embodied as a memory sled.
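- The following small sketch (names are hypothetical, not from the patent) simply mirrors the specialization just described, mapping each sled variant to the primary physical resource it hosts, as an orchestrator might model it.

```python
from enum import Enum

class SledType(Enum):
    COMPUTE = "compute"          # e.g., compute sled 800 (FIGS. 8-9)
    ACCELERATOR = "accelerator"  # e.g., accelerator sled 1000 (FIGS. 10-11)
    STORAGE = "storage"          # e.g., storage sled 1200 (FIGS. 12-13)
    MEMORY = "memory"            # e.g., memory sled 1400 (FIG. 14)

# Hypothetical mapping used only to illustrate the single-resource-type
# specialization of each sled described above.
PRIMARY_RESOURCE = {
    SledType.COMPUTE: "high-performance processors",
    SledType.ACCELERATOR: "accelerator co-processors or circuits (e.g., FPGAs)",
    SledType.STORAGE: "storage controllers and data storage devices",
    SledType.MEMORY: "pooled memory devices",
}

for sled_type, resource in PRIMARY_RESOURCE.items():
    print(f"{sled_type.value} sled -> {resource}")
```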
- the sled 400 also includes one or more additional physical resources 630 mounted to the top side 650 of the chassis-less circuit board substrate 602 .
- the additional physical resources include a network interface controller (NIC) as discussed in more detail below.
- the physical resources 630 may include additional or other electrical components, circuits, and/or devices in other embodiments.
- the physical resources 620 are communicatively coupled to the physical resources 630 via an input/output (I/O) subsystem 622 .
- the I/O subsystem 622 may be embodied as circuitry and/or components to facilitate input/output operations with the physical resources 620 , the physical resources 630 , and/or other components of the sled 400 .
- the I/O subsystem 622 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations.
- the I/O subsystem 622 is embodied as, or otherwise includes, a double data rate 4 (DDR4) data bus or a DDR5 data bus.
- the sled 400 may also include a resource-to-resource interconnect 624 .
- the resource-to-resource interconnect 624 may be embodied as any type of communication interconnect capable of facilitating resource-to-resource communications.
- the resource-to-resource interconnect 624 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622 ).
- the resource-to-resource interconnect 624 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to resource-to-resource communications.
- the sled 400 also includes a power connector 640 configured to mate with a corresponding power connector of the rack 240 when the sled 400 is mounted in the corresponding rack 240 .
- the sled 400 receives power from a power supply of the rack 240 via the power connector 640 to supply power to the various electrical components of the sled 400 . That is, the sled 400 does not include any local power supply (i.e., an on-board power supply) to provide power to the electrical components of the sled 400 .
- the exclusion of a local or on-board power supply facilitates the reduction in the overall footprint of the chassis-less circuit board substrate 602 , which may increase the thermal cooling characteristics of the various electrical components mounted on the chassis-less circuit board substrate 602 as discussed above.
- power is provided to the processors 820 through vias directly under the processors 820 (e.g., through the bottom side 750 of the chassis-less circuit board substrate 602 ), providing an increased thermal budget, additional current and/or voltage, and better voltage control over typical boards.
- the sled 400 may also include mounting features 642 configured to mate with a mounting arm, or other structure, of a robot to facilitate the placement of the sled 400 in a rack 240 by the robot.
- the mounting features 642 may be embodied as any type of physical structures that allow the robot to grasp the sled 400 without damaging the chassis-less circuit board substrate 602 or the electrical components mounted thereto.
- the mounting features 642 may be embodied as non-conductive pads attached to the chassis-less circuit board substrate 602 .
- the mounting features may be embodied as brackets, braces, or other similar structures attached to the chassis-less circuit board substrate 602 .
- the particular number, shape, size, and/or make-up of the mounting features 642 may depend on the design of the robot configured to manage the sled 400 .
- in addition to the physical resources 630 mounted to the top side 650 of the chassis-less circuit board substrate 602 , the sled 400 also includes one or more memory devices 720 mounted to a bottom side 750 of the chassis-less circuit board substrate 602 . That is, the chassis-less circuit board substrate 602 is embodied as a double-sided circuit board.
- the physical resources 620 are communicatively coupled to the memory devices 720 via the I/O subsystem 622 .
- the physical resources 620 and the memory devices 720 may be communicatively coupled by one or more vias extending through the chassis-less circuit board substrate 602 .
- Each physical resource 620 may be communicatively coupled to a different set of one or more memory devices 720 in some embodiments. Alternatively, in other embodiments, each physical resource 620 may be communicatively coupled to each memory device 720 .
- the memory devices 720 may be embodied as any type of memory device capable of storing data for the physical resources 620 during operation of the sled 400 , such as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory.
- Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium.
- Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM).
- DRAM of a memory component may comply with a standard promulgated by JEDEC, such as JESD79F for DDR SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4 (these standards are available at www.jedec.org).
- Such standards may be referred to as DDR-based standards and communication interfaces of the storage devices that implement such standards may be referred to as DDR-based interfaces.
- the memory device is a block addressable memory device, such as those based on NAND or NOR technologies.
- a memory device may also include next-generation nonvolatile devices, such as Intel 3D XPoint™ memory or other byte addressable write-in-place nonvolatile memory devices.
- the memory device may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, resistive memory including the metal oxide base, the oxygen vacancy base and the conductive bridge Random Access Memory (CB-RAM), or spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory.
- the memory device may refer to the die itself and/or to a packaged memory product.
- the memory device may comprise a transistor-less stackable cross point architecture in which memory cells sit at the intersection of word lines and bit lines and are individually addressable and in which bit storage is based on a change in bulk resistance.
- the sled 400 may be embodied as a compute sled 800 .
- the compute sled 800 is optimized, or otherwise configured, to perform compute tasks.
- the compute sled 800 may rely on other sleds, such as acceleration sleds and/or storage sleds, to perform such compute tasks.
- the compute sled 800 includes various physical resources (e.g., electrical components) similar to the physical resources of the sled 400 , which have been identified in FIG. 8 using the same reference numbers.
- the description of such components provided above in regard to FIGS. 6 and 7 applies to the corresponding components of the compute sled 800 and is not repeated herein for clarity of the description of the compute sled 800 .
- the physical resources 620 are embodied as processors 820 . Although only two processors 820 are shown in FIG. 8 , it should be appreciated that the compute sled 800 may include additional processors 820 in other embodiments.
- the processors 820 are embodied as high-performance processors 820 and may be configured to operate at a relatively high power rating. Although the processors 820 generate additional heat operating at power ratings greater than typical processors (which operate at around 155-230 W), the enhanced thermal cooling characteristics of the chassis-less circuit board substrate 602 discussed above facilitate the higher power operation.
- the processors 820 are configured to operate at a power rating of at least 250 W. In some embodiments, the processors 820 may be configured to operate at a power rating of at least 350 W.
- the compute sled 800 may also include a processor-to-processor interconnect 842 .
- the processor-to-processor interconnect 842 may be embodied as any type of communication interconnect capable of facilitating processor-to-processor communications.
- the processor-to-processor interconnect 842 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622 ).
- processor-to-processor interconnect 842 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to processor-to-processor communications.
- the compute sled 800 also includes a communication circuit 830 .
- the illustrative communication circuit 830 includes a network interface controller (NIC) 832 , which may also be referred to as a host fabric interface (HFI).
- the NIC 832 may be embodied as, or otherwise include, any type of integrated circuit, discrete circuits, controller chips, chipsets, add-in-boards, daughtercards, network interface cards, or other devices that may be used by the compute sled 800 to connect with another compute device (e.g., with other sleds 400 ).
- the NIC 832 may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors.
- the NIC 832 may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC 832 .
- the local processor of the NIC 832 may be capable of performing one or more of the functions of the processors 820 .
- the local memory of the NIC 832 may be integrated into one or more components of the compute sled at the board level, socket level, chip level, and/or other levels.
- the communication circuit 830 is communicatively coupled to an optical data connector 834 .
- the optical data connector 834 is configured to mate with a corresponding optical data connector of the rack 240 when the compute sled 800 is mounted in the rack 240 .
- the optical data connector 834 includes a plurality of optical fibers which lead from a mating surface of the optical data connector 834 to an optical transceiver 836 .
- the optical transceiver 836 is configured to convert incoming optical signals from the rack-side optical data connector to electrical signals and to convert electrical signals to outgoing optical signals to the rack-side optical data connector.
- the optical transceiver 836 may form a portion of the communication circuit 830 in other embodiments.
- the compute sled 800 may also include an expansion connector 840 .
- the expansion connector 840 is configured to mate with a corresponding connector of an expansion chassis-less circuit board substrate to provide additional physical resources to the compute sled 800 .
- the additional physical resources may be used, for example, by the processors 820 during operation of the compute sled 800 .
- the expansion chassis-less circuit board substrate may be substantially similar to the chassis-less circuit board substrate 602 discussed above and may include various electrical components mounted thereto. The particular electrical components mounted to the expansion chassis-less circuit board substrate may depend on the intended functionality of the expansion chassis-less circuit board substrate.
- the expansion chassis-less circuit board substrate may provide additional compute resources, memory resources, and/or storage resources.
- the additional physical resources of the expansion chassis-less circuit board substrate may include, but are not limited to, processors, memory devices, storage devices, and/or accelerator circuits including, for example, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), security co-processors, graphics processing units (GPUs), machine learning circuits, or other specialized processors, controllers, devices, and/or circuits.
- the processors 820 , communication circuit 830 , and optical data connector 834 are mounted to the top side 650 of the chassis-less circuit board substrate 602 .
- Any suitable attachment or mounting technology may be used to mount the physical resources of the compute sled 800 to the chassis-less circuit board substrate 602 .
- the various physical resources may be mounted in corresponding sockets (e.g., a processor socket), holders, or brackets.
- some of the electrical components may be directly mounted to the chassis-less circuit board substrate 602 via soldering or similar techniques.
- the individual processors 820 and communication circuit 830 are mounted to the top side 650 of the chassis-less circuit board substrate 602 such that no two heat-producing, electrical components shadow each other.
- the processors 820 and communication circuit 830 are mounted in corresponding locations on the top side 650 of the chassis-less circuit board substrate 602 such that no two of those physical resources are linearly in-line with others along the direction of the airflow path 608 .
- although the optical data connector 834 is in-line with the communication circuit 830 , the optical data connector 834 produces no or nominal heat during operation.
- the memory devices 720 of the compute sled 800 are mounted to the bottom side 750 of the chassis-less circuit board substrate 602 as discussed above in regard to the sled 400 . Although mounted to the bottom side 750 , the memory devices 720 are communicatively coupled to the processors 820 located on the top side 650 via the I/O subsystem 622 . Because the chassis-less circuit board substrate 602 is embodied as a double-sided circuit board, the memory devices 720 and the processors 820 may be communicatively coupled by one or more vias, connectors, or other mechanisms extending through the chassis-less circuit board substrate 602 . Of course, each processor 820 may be communicatively coupled to a different set of one or more memory devices 720 in some embodiments.
- each processor 820 may be communicatively coupled to each memory device 720 .
- the memory devices 720 may be mounted to one or more memory mezzanines on the bottom side of the chassis-less circuit board substrate 602 and may interconnect with a corresponding processor 820 through a ball-grid array.
- Each of the processors 820 includes a heatsink 850 secured thereto. Due to the mounting of the memory devices 720 to the bottom side 750 of the chassis-less circuit board substrate 602 (as well as the vertical spacing of the sleds 400 in the corresponding rack 240 ), the top side 650 of the chassis-less circuit board substrate 602 includes additional “free” area or space that facilitates the use of heatsinks 850 having a larger size relative to traditional heatsinks used in typical servers. Additionally, due to the improved thermal cooling characteristics of the chassis-less circuit board substrate 602 , none of the processor heatsinks 850 include cooling fans attached thereto. That is, each of the heatsinks 850 is embodied as a fan-less heatsink.
- the sled 400 may be embodied as an accelerator sled 1000 .
- the accelerator sled 1000 is optimized, or otherwise configured, to perform specialized compute tasks, such as machine learning, encryption, hashing, or other computation-intensive tasks.
- a compute sled 800 may offload tasks to the accelerator sled 1000 during operation.
- the accelerator sled 1000 includes various components similar to components of the sled 400 and/or compute sled 800 , which have been identified in FIG. 10 using the same reference numbers. The description of such components provided above in regard to FIGS. 6, 7, and 8 applies to the corresponding components of the accelerator sled 1000 and is not repeated herein for clarity of the description of the accelerator sled 1000 .
- the physical resources 620 are embodied as accelerator circuits 1020 .
- the accelerator sled 1000 may include additional accelerator circuits 1020 in other embodiments.
- the accelerator sled 1000 may include four accelerator circuits 1020 in some embodiments.
- the accelerator circuits 1020 may be embodied as any type of processor, co-processor, compute circuit, or other device capable of performing compute or processing operations.
- the accelerator circuits 1020 may be embodied as, for example, field programmable gate arrays (FPGA), application-specific integrated circuits (ASICs), security co-processors, graphics processing units (GPUs), machine learning circuits, or other specialized processors, controllers, devices, and/or circuits.
- the accelerator sled 1000 may also include an accelerator-to-accelerator interconnect 1042 . Similar to the resource-to-resource interconnect 624 of the sled 400 discussed above, the accelerator-to-accelerator interconnect 1042 may be embodied as any type of communication interconnect capable of facilitating accelerator-to-accelerator communications. In the illustrative embodiment, the accelerator-to-accelerator interconnect 1042 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622 ).
- the accelerator-to-accelerator interconnect 1042 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to accelerator-to-accelerator communications.
- the accelerator circuits 1020 may be daisy-chained with a primary accelerator circuit 1020 connected to the NIC 832 and memory 720 through the I/O subsystem 622 and a secondary accelerator circuit 1020 connected to the NIC 832 and memory 720 through a primary accelerator circuit 1020 .
- Referring now to FIG. 11 , an illustrative embodiment of the accelerator sled 1000 is shown.
- the accelerator circuits 1020 , communication circuit 830 , and optical data connector 834 are mounted to the top side 650 of the chassis-less circuit board substrate 602 .
- the individual accelerator circuits 1020 and communication circuit 830 are mounted to the top side 650 of the chassis-less circuit board substrate 602 such that no two heat-producing, electrical components shadow each other as discussed above.
- the memory devices 720 of the accelerator sled 1000 are mounted to the bottom side 750 of the chassis-less circuit board substrate 602 as discussed above in regard to the sled 400 .
- each of the accelerator circuits 1020 may include a heatsink 1070 that is larger than a traditional heatsink used in a server. As discussed above with reference to the heatsinks 850 , the heatsinks 1070 may be larger than traditional heatsinks because of the “free” area provided by the memory devices 720 being located on the bottom side 750 of the chassis-less circuit board substrate 602 rather than on the top side 650 .
- the sled 400 may be embodied as a storage sled 1200 .
- the storage sled 1200 is optimized, or otherwise configured, to store data in a data storage 1250 local to the storage sled 1200 .
- a compute sled 800 or an accelerator sled 1000 may store and retrieve data from the data storage 1250 of the storage sled 1200 .
- the storage sled 1200 includes various components similar to components of the sled 400 and/or the compute sled 800 , which have been identified in FIG. 12 using the same reference numbers. The description of such components provided above in regard to FIGS. 6, 7, and 8 applies to the corresponding components of the storage sled 1200 and is not repeated herein for clarity of the description of the storage sled 1200 .
- the physical resources 620 are embodied as storage controllers 1220 . Although only two storage controllers 1220 are shown in FIG. 12 , it should be appreciated that the storage sled 1200 may include additional storage controllers 1220 in other embodiments.
- the storage controllers 1220 may be embodied as any type of processor, controller, or control circuit capable of controlling the storage and retrieval of data into the data storage 1250 based on requests received via the communication circuit 830 .
- the storage controllers 1220 are embodied as relatively low-power processors or controllers.
- the storage controllers 1220 may be configured to operate at a power rating of about 75 watts.
- the storage sled 1200 may also include a controller-to-controller interconnect 1242 .
- the controller-to-controller interconnect 1242 may be embodied as any type of communication interconnect capable of facilitating controller-to-controller communications.
- the controller-to-controller interconnect 1242 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622 ).
- the controller-to-controller interconnect 1242 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to controller-to-controller communications.
- the data storage 1250 is embodied as, or otherwise includes, a storage cage 1252 configured to house one or more solid state drives (SSDs) 1254 .
- the storage cage 1252 includes a number of mounting slots 1256 , each of which is configured to receive a corresponding solid state drive 1254 .
- Each of the mounting slots 1256 includes a number of drive guides 1258 that cooperate to define an access opening 1260 of the corresponding mounting slot 1256 .
- the storage cage 1252 is secured to the chassis-less circuit board substrate 602 such that the access openings face away from (i.e., toward the front of) the chassis-less circuit board substrate 602 .
- the solid state drives 1254 are accessible while the storage sled 1200 is mounted in a corresponding rack 240 .
- a solid state drive 1254 may be swapped out of a rack 240 (e.g., via a robot) while the storage sled 1200 remains mounted in the corresponding rack 240 .
- the storage cage 1252 illustratively includes sixteen mounting slots 1256 and is capable of mounting and storing sixteen solid state drives 1254 .
- the storage cage 1252 may be configured to store additional or fewer solid state drives 1254 in other embodiments.
- the solid state drives 1254 are mounted vertically in the storage cage 1252 , but may be mounted in the storage cage 1252 in a different orientation in other embodiments.
- Each solid state drive 1254 may be embodied as any type of data storage device capable of storing long-term data. To do so, the solid state drives 1254 may include the volatile and non-volatile memory devices discussed above.
- the storage controllers 1220 , the communication circuit 830 , and the optical data connector 834 are illustratively mounted to the top side 650 of the chassis-less circuit board substrate 602 .
- any suitable attachment or mounting technology may be used to mount the electrical components of the storage sled 1200 to the chassis-less circuit board substrate 602 including, for example, sockets (e.g., a processor socket), holders, brackets, soldered connections, and/or other mounting or securing techniques.
- the individual storage controllers 1220 and the communication circuit 830 are mounted to the top side 650 of the chassis-less circuit board substrate 602 such that no two heat-producing, electrical components shadow each other.
- the storage controllers 1220 and the communication circuit 830 are mounted in corresponding locations on the top side 650 of the chassis-less circuit board substrate 602 such that no two of those electrical components are linearly in-line with others along the direction of the airflow path 608 .
- the memory devices 720 of the storage sled 1200 are mounted to the bottom side 750 of the chassis-less circuit board substrate 602 as discussed above in regard to the sled 400 . Although mounted to the bottom side 750 , the memory devices 720 are communicatively coupled to the storage controllers 1220 located on the top side 650 via the I/O subsystem 622 . Again, because the chassis-less circuit board substrate 602 is embodied as a double-sided circuit board, the memory devices 720 and the storage controllers 1220 may be communicatively coupled by one or more vias, connectors, or other mechanisms extending through the chassis-less circuit board substrate 602 . Each of the storage controllers 1220 includes a heatsink 1270 secured thereto.
- none of the heatsinks 1270 includes cooling fans attached thereto. That is, each of the heatsinks 1270 is embodied as a fan-less heatsink.
- the sled 400 may be embodied as a memory sled 1400 .
- the memory sled 1400 is optimized, or otherwise configured, to provide other sleds 400 (e.g., compute sleds 800 , accelerator sleds 1000 , etc.) with access to a pool of memory (e.g., in two or more sets 1430 , 1432 of memory devices 720 ) local to the memory sled 1400 .
- a compute sled 800 or an accelerator sled 1000 may remotely write to and/or read from one or more of the memory sets 1430 , 1432 of the memory sled 1400 using a logical address space that maps to physical addresses in the memory sets 1430 , 1432 .
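- As a minimal sketch of the logical-to-physical mapping described above (not an implementation defined by this disclosure), the following Python fragment assumes two memory sets with hypothetical base addresses and a 1 GiB size per set, chosen only for illustration.

```python
# Hypothetical sketch: map a remote sled's logical address into one of two
# memory sets on the memory sled. All bases and sizes are assumed values.
MEMORY_SET_1430_BASE = 0x0000_0000
MEMORY_SET_1432_BASE = 0x4000_0000
MEMORY_SET_SIZE = 0x4000_0000  # 1 GiB per set, assumed for illustration


def logical_to_physical(logical_addr: int) -> tuple:
    """Translate a logical address used by a compute/accelerator sled into a
    (memory set, physical address) pair on the memory sled."""
    if logical_addr < MEMORY_SET_SIZE:
        return ("memory set 1430", MEMORY_SET_1430_BASE + logical_addr)
    if logical_addr < 2 * MEMORY_SET_SIZE:
        return ("memory set 1432",
                MEMORY_SET_1432_BASE + (logical_addr - MEMORY_SET_SIZE))
    raise ValueError("logical address outside of the pooled memory region")


print(logical_to_physical(0x1000))        # lands in memory set 1430
print(logical_to_physical(0x4000_2000))   # lands in memory set 1432
```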
- the memory sled 1400 includes various components similar to components of the sled 400 and/or the compute sled 800 , which have been identified in FIG. 14 using the same reference numbers. The description of such components provided above in regard to FIGS. 6, 7, and 8 applies to the corresponding components of the memory sled 1400 and is not repeated herein for clarity of the description of the memory sled 1400 .
- the physical resources 620 are embodied as memory controllers 1420 . Although only two memory controllers 1420 are shown in FIG. 14 , it should be appreciated that the memory sled 1400 may include additional memory controllers 1420 in other embodiments.
- the memory controllers 1420 may be embodied as any type of processor, controller, or control circuit capable of controlling the writing and reading of data into the memory sets 1430 , 1432 based on requests received via the communication circuit 830 .
- each memory controller 1420 is connected to a corresponding memory set 1430 , 1432 to write to and read from memory devices 720 within the corresponding memory set 1430 , 1432 and to enforce any permissions (e.g., read, write, etc.) associated with the sled 400 that has sent a request to the memory sled 1400 to perform a memory access operation (e.g., read or write).
- the memory sled 1400 may also include a controller-to-controller interconnect 1442 .
- the controller-to-controller interconnect 1442 may be embodied as any type of communication interconnect capable of facilitating controller-to-controller communications.
- the controller-to-controller interconnect 1442 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622 ).
- the controller-to-controller interconnect 1442 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to controller-to-controller communications.
- a memory controller 1420 may access, through the controller-to-controller interconnect 1442 , memory that is within the memory set 1432 associated with another memory controller 1420 .
- a scalable memory controller is made of multiple smaller memory controllers, referred to herein as “chiplets”, on a memory sled (e.g., the memory sled 1400 ).
- the chiplets may be interconnected (e.g., using EMIB (Embedded Multi-Die Interconnect Bridge)).
- the combined chiplet memory controller may scale up to a relatively large number of memory controllers and I/O ports (e.g., up to 16 memory channels).
- the memory controllers 1420 may implement a memory interleave (e.g., one memory address is mapped to the memory set 1430 , the next memory address is mapped to the memory set 1432 , and the third address is mapped to the memory set 1430 , etc.).
- the interleaving may be managed within the memory controllers 1420 , or from CPU sockets (e.g., of the compute sled 800 ) across network links to the memory sets 1430 , 1432 , and may improve the latency associated with performing memory access operations as compared to accessing contiguous memory addresses from the same memory device.
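- The interleave described above can be illustrated with a minimal sketch; the 64-byte granule and the strictly alternating two-way split are assumptions chosen for illustration and are not mandated by this disclosure.

```python
# Minimal sketch of memory interleaving across two memory sets: even
# granule-sized blocks map to memory set 1430, odd blocks to memory set 1432.
INTERLEAVE_GRANULE = 64  # bytes, assumed granule size


def interleave_target(address: int) -> str:
    block = address // INTERLEAVE_GRANULE
    return "memory set 1430" if block % 2 == 0 else "memory set 1432"


assert interleave_target(0) == "memory set 1430"
assert interleave_target(64) == "memory set 1432"
assert interleave_target(128) == "memory set 1430"
```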
- the memory sled 1400 may be connected to one or more other sleds 400 (e.g., in the same rack 240 or an adjacent rack 240 ) through a waveguide, using the waveguide connector 1480 .
- the waveguides are 64 millimeter waveguides that provide 16 Rx (i.e., receive) lanes and 16 Tx (i.e., transmit) lanes.
- Each lane, in the illustrative embodiment, is either 16 GHz or 32 GHz. In other embodiments, the frequencies may be different.
- Using a waveguide may provide high throughput access to the memory pool (e.g., the memory sets 1430 , 1432 ) to another sled (e.g., a sled 400 in the same rack 240 or an adjacent rack 240 as the memory sled 1400 ) without adding to the load on the optical data connector 834 .
- the system 1510 includes an orchestrator server 1520 , which may be embodied as a managed node comprising a compute device (e.g., a compute sled 800 ) executing management software (e.g., a cloud operating environment, such as OpenStack) that is communicatively coupled to multiple sleds 400 including a large number of compute sleds 1530 (e.g., each similar to the compute sled 800 ), memory sleds 1540 (e.g., each similar to the memory sled 1400 ), accelerator sleds 1550 (e.g., each similar to the accelerator sled 1000 ), and storage sleds 1560 (e.g., each similar to the storage sled 1200 ).
- One or more of the sleds 1530 , 1540 , 1550 , 1560 may be grouped into a managed node 1570 , such as by the orchestrator server 1520 , to collectively perform a workload (e.g., an application 1532 executed in a virtual machine or in a container).
- the managed node 1570 may be embodied as an assembly of physical resources 620 , such as processors 820 , memory resources 720 , accelerator circuits 1020 , or data storage 1250 , from the same or different sleds 400 .
- the managed node may be established, defined, or “spun up” by the orchestrator server 1520 at the time a workload is to be assigned to the managed node or at any other time, and may exist regardless of whether any workloads are presently assigned to the managed node.
- the orchestrator server 1520 may selectively allocate and/or deallocate physical resources 620 from the sleds 400 and/or add or remove one or more sleds 400 from the managed node 1570 as a function of quality of service (QoS) targets (e.g., performance targets associated with a throughput, latency, instructions per second, etc.) associated with a service level agreement for the workload (e.g., the application 1532 ).
- the orchestrator server 1520 may receive telemetry data indicative of performance conditions (e.g., throughput, latency, instructions per second, etc.) in each sled 400 of the managed node 1570 and compare the telemetry data to the quality of service targets to determine whether the quality of service targets are being satisfied. If so, the orchestrator server 1520 may additionally determine whether one or more physical resources may be deallocated from the managed node 1570 while still satisfying the QoS targets, thereby freeing up those physical resources for use in another managed node (e.g., to execute a different workload). Alternatively, if the QoS targets are not presently satisfied, the orchestrator server 1520 may determine to dynamically allocate additional physical resources to assist in the execution of the workload (e.g., the application 1532 ) while the workload is executing.
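- A hedged sketch of this allocate/deallocate decision follows; the telemetry field names, the QoS keys, and the 1.2x headroom factor are illustrative assumptions rather than elements of the orchestrator server 1520.

```python
# Sketch: compare received telemetry against QoS targets and decide whether
# resources can be released or additional resources must be allocated.
def adjust_managed_node(telemetry: dict, qos_targets: dict) -> str:
    meets_targets = (
        telemetry["throughput"] >= qos_targets["throughput"]
        and telemetry["latency"] <= qos_targets["latency"]
    )
    if meets_targets:
        # Headroom check: if comfortably above target, consider deallocating.
        if telemetry["throughput"] >= 1.2 * qos_targets["throughput"]:
            return "deallocate-candidate"
        return "steady-state"
    return "allocate-additional-resources"


print(adjust_managed_node({"throughput": 900, "latency": 12},
                          {"throughput": 1000, "latency": 10}))
```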
- the orchestrator server 1520 may identify trends in the resource utilization of the workload (e.g., the application 1532 ), such as by identifying phases of execution (e.g., time periods in which different operations, each having different resource utilization characteristics, are performed) of the workload (e.g., the application 1532 ) and pre-emptively identifying available resources in the data center 100 and allocating them to the managed node 1570 (e.g., within a predefined time period of the associated phase beginning).
- the orchestrator server 1520 may model performance based on various latencies and a distribution scheme to place workloads among compute sleds and other resources (e.g., accelerator sleds, memory sleds, storage sleds) in the data center 100 .
- the orchestrator server 1520 may utilize a model that accounts for the performance of resources on the sleds 400 (e.g., FPGA performance, memory access latency, etc.) and the performance (e.g., congestion, latency, bandwidth) of the path through the network to the resource (e.g., FPGA).
- the orchestrator server 1520 may determine which resource(s) should be used with which workloads based on the total latency associated with each potential resource available in the data center 100 (e.g., the latency associated with the performance of the resource itself in addition to the latency associated with the path through the network between the compute sled executing the workload and the sled 400 on which the resource is located).
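- The total-latency comparison described above can be sketched as follows; the candidate dictionary keys and the example sled names are hypothetical, and the selection rule (minimum of resource latency plus path latency) is a simplified reading of the paragraph above.

```python
# Sketch: select the resource whose service latency plus network path latency
# (between the compute sled running the workload and the sled hosting the
# resource) is lowest.
def pick_resource(candidates):
    return min(candidates,
               key=lambda c: c["service_latency_us"] + c["network_latency_us"])


best = pick_resource([
    {"name": "fpga-sled-3", "service_latency_us": 40, "network_latency_us": 25},
    {"name": "fpga-sled-7", "service_latency_us": 55, "network_latency_us": 5},
])
print(best["name"])  # fpga-sled-7: lower total latency despite a slower FPGA
```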
- the orchestrator server 1520 may generate a map of heat generation in the data center 100 using telemetry data (e.g., temperatures, fan speeds, etc.) reported from the sleds 400 and allocate resources to managed nodes as a function of the map of heat generation and predicted heat generation associated with different workloads, to maintain a target temperature and heat distribution in the data center 100 .
- the orchestrator server 1520 may organize received telemetry data into a hierarchical model that is indicative of a relationship between the managed nodes (e.g., a spatial relationship such as the physical locations of the resources of the managed nodes within the data center 100 and/or a functional relationship, such as groupings of the managed nodes by the customers the managed nodes provide services for, the types of functions typically performed by the managed nodes, managed nodes that typically share or exchange workloads among each other, etc.). Based on differences in the physical locations and resources in the managed nodes, a given workload may exhibit different resource utilizations (e.g., cause a different internal temperature, use a different percentage of processor or memory capacity) across the resources of different managed nodes.
- the orchestrator server 1520 may determine the differences based on the telemetry data stored in the hierarchical model and factor the differences into a prediction of future resource utilization of a workload if the workload is reassigned from one managed node to another managed node, to accurately balance resource utilization in the data center 100 .
- the orchestrator server 1520 may send self-test information to the sleds 400 to enable each sled 400 to locally (e.g., on the sled 400 ) determine whether telemetry data generated by the sled 400 satisfies one or more conditions (e.g., an available capacity that satisfies a predefined threshold, a temperature that satisfies a predefined threshold, etc.). Each sled 400 may then report back a simplified result (e.g., yes or no) to the orchestrator server 1520 , which the orchestrator server 1520 may utilize in determining the allocation of resources to managed nodes.
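- The following sketch illustrates one way a sled could evaluate such self-test conditions locally and report only a yes/no result; the condition names and thresholds are assumptions for illustration.

```python
# Sketch: orchestrator-supplied conditions are evaluated on the sled itself,
# and only a boolean is reported back to the orchestrator server 1520.
SELF_TEST_CONDITIONS = {
    "available_capacity_pct": lambda v: v >= 20,  # assumed capacity threshold
    "temperature_c":          lambda v: v <= 85,  # assumed thermal limit
}


def run_self_test(local_telemetry: dict) -> bool:
    return all(check(local_telemetry[name])
               for name, check in SELF_TEST_CONDITIONS.items())


print(run_self_test({"available_capacity_pct": 35, "temperature_c": 70}))  # True
```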
- a system 1600 for auto-migration in accelerated architectures may be implemented in accordance with the data center 100 described above with reference to FIG. 1 .
- the illustrative system 1600 includes a resource hardware manager 1608 communicatively coupled via a network switch 1612 to multiple compute sleds 1602 , a storage sled 1614 , and multiple accelerator sleds 1618 .
- various software applications are executed on the compute sleds 1602 to perform required computations on data.
- At least a portion of the application workloads can be offloaded to a field-programmable gate array (FPGA) (see, e.g., the FPGAs 1622 ( a ) and 1622 ( b )) of an accelerator sled 1618 .
- applications being executed in a host (e.g., one of the compute sleds 1602 , one of the accelerator sleds 1618 , etc.) may be monitored against a required instructions per cycle (IPC) target, as described below.
- a phase detection logic unit 1610 of each compute sled 1602 collects telemetry data (e.g., top-down microarchitecture analysis method (TMAM) metrics) indicative of a resource usage and/or performance condition of the respective sleds as application workloads are being performed on the respective sleds.
- the phase detection logic unit 1610 which will be described in further detail below, is configured to analyze the collected data to identify when a given application, executing a set of hardware threads on a central processing unit (CPU) of a compute sled 1602 or an accelerator sled 1618 , changes to a different phase, such as one of a compute bound phase, an FPGA bound phase, a memory bound phase, etc.
- the phase detection logic unit 1610 is configured to determine whether a given application needs to be migrated to another CPU of the compute sled 1602 or the accelerator sled 1618 on which the application is presently being executed, or migrated to another CPU of a different compute sled 1602 or accelerator sled 1618 . To do so, the phase detection logic unit 1610 is further configured to determine whether the likelihood of staying in the new, present phase is high enough to migrate the hardware threads and/or an associated compute kernel to another sled, or sleds. Such a determination may depend on an anticipated duration of the present phase or on another prediction algorithm.
- if the phase detection logic unit 1610 determines that the hardware threads and/or the compute kernel are to be migrated, the phase detection logic unit 1610 orchestrates the migration process and either offlines the previously used CPU or returns the previously used CPU to the operating system of the applicable sled. It should be appreciated that, as illustratively shown, the phase detection logic unit 1610 , or at least a portion thereof, may reside on each of the compute sleds 1602 , the network switch 1612 , and/or the resource hardware manager 1608 , depending on the embodiment.
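- A simplified sketch of this decision flow is shown below; the telemetry field names, the phase-classification thresholds, and the amortization factor are illustrative assumptions and are not part of the phase detection logic unit 1610 as described herein.

```python
# Sketch: classify the current phase from telemetry, then migrate only if the
# phase changed and is predicted to last long enough to amortize the migration.
MIGRATION_COST_MS = 50  # assumed fixed cost of pausing/moving/resuming threads


def classify_phase(telemetry: dict) -> str:
    if telemetry["memory_bound_pct"] > 60:
        return "memory-bound"
    if telemetry["fpga_offload_pct"] > 60:
        return "fpga-bound"
    return "compute-bound"


def should_migrate(telemetry: dict, expected_phase_duration_ms: float,
                   current_phase: str) -> bool:
    new_phase = classify_phase(telemetry)
    if new_phase == current_phase:
        return False
    # Migrate only if the new phase is predicted to persist long enough
    # to justify the migration cost.
    return expected_phase_duration_ms > 10 * MIGRATION_COST_MS


print(should_migrate({"memory_bound_pct": 75, "fpga_offload_pct": 5},
                     expected_phase_duration_ms=2000,
                     current_phase="compute-bound"))  # True
```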
- Each of the compute sleds 1602 may be embodied as any type of compute device capable of performing the functions described herein.
- the illustrative network switch 1612 includes a compute engine 1702 , an input/output (I/O) subsystem 1708 , one or more data storage devices 1710 , communication circuitry 1712 , and one or more peripheral devices 1716 .
- the compute sleds 1602 may include other or additional components, such as those commonly found in a computing device. Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component.
- the compute engine 1702 may be embodied as any type of device or collection of devices capable of performing the various compute functions as described herein.
- the compute engine 1702 may be embodied as a single device such as an integrated circuit, an embedded system, an FPGA, a system-on-a-chip (SOC), an application specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein.
- the compute engine 1702 may include, or may be embodied as, a processor 1704 (i.e., a central processing unit (CPU)) and memory 1706 .
- the processor 1704 may be embodied as any type of processor capable of performing the functions described herein.
- the processor 1704 may be embodied as one or more single-core processors, multi-core processors, digital signal processors, microcontrollers, or other processor(s) or processing/controlling circuit(s).
- the processor 1704 may be embodied as, include, or otherwise be coupled to a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein.
- the processor 1704 may include the phase detection logic unit 1610 described with reference to FIG. 16 .
- the phase detection logic unit 1610 may be embodied as a specialized device, such as a co-processor, an FPGA, or an ASIC, for performing the automatic migration operations described herein (e.g., collecting and analyzing telemetry data indicative of performance conditions of the sleds as workloads are being performed thereon and analyzing the telemetry data to determine whether an automatic migration is to be performed as a result of a detected phase change).
- the memory 1706 may be embodied as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory or data storage capable of performing the functions described herein. It should be appreciated that the memory 1706 may include main memory (i.e., a primary memory) and/or cache memory (i.e., memory that can be accessed more quickly than the main memory). Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium. Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM).
- DRAM of a memory component may comply with a standard promulgated by JEDEC, such as JESD79F for DDR SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4 (these standards are available at www.jedec.org).
- Such standards may be referred to as DDR-based standards and communication interfaces of the storage devices that implement such standards may be referred to as DDR-based interfaces.
- the memory device is a block addressable memory device, such as those based on NAND or NOR technologies.
- a memory device may also include future generation nonvolatile devices, such as a three dimensional crosspoint memory device (e.g., Intel 3D XPoint™ memory), or other byte addressable write-in-place nonvolatile memory devices.
- the memory device may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, resistive memory including the metal oxide base, the oxygen vacancy base and the conductive bridge Random Access Memory (CB-RAM), or spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory.
- the memory device may refer to the die itself and/or to a packaged memory product.
- 3D crosspoint memory may comprise a transistor-less stackable cross point architecture in which memory cells sit at the intersection of word lines and bit lines and are individually addressable and in which bit storage is based on a change in bulk resistance.
- all or a portion of the memory 1706 may be integrated into the processor 1704 .
- the memory 1706 may store various software and data used during operation such as job request data, kernel map data, telemetry data, applications, programs, libraries, and drivers.
- the compute engine 1702 is communicatively coupled to other components of the network switch 1612 via the I/O subsystem 1708 , which may be embodied as circuitry and/or components to facilitate input/output operations with the processor 1704 , the memory 1706 , and other components of the network switch 1612 .
- the I/O subsystem 1708 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations.
- the I/O subsystem 1708 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with one or more of the processor 1704 , the memory 1706 , and other components of the network switch 1612 , on a single integrated circuit chip.
- the one or more data storage devices 1710 may be embodied as any type of storage device(s) configured for short-term or long-term storage of data, such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices.
- Each data storage device 1710 may include a system partition that stores data and firmware code for the data storage device 1710 .
- Each data storage device 1710 may also include an operating system partition that stores data files and executables for an operating system.
- the communication circuitry 1712 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications between the network switch 1612 and other compute devices (e.g., the compute sleds 1602 , the storage sled 1614 , the accelerator sleds 1618 , the resource hardware manager 1608 , etc.). Accordingly, the communication circuitry 1712 may be configured to use any one or more communication technology (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication.
- the illustrative communication circuitry 1712 includes a network interface controller (NIC) 1714 , which may also be referred to as a host fabric interface (HFI).
- the NIC 1714 may be embodied as one or more add-in-boards, daughtercards, network interface cards, controller chips, chipsets, or other devices that may be used by the network switch 1612 to connect with another compute device (e.g., one of the compute sleds 1602 of FIG. 16 ).
- the NIC 1714 may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors.
- the NIC 1714 may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC 1714 .
- the local processor of the NIC 1714 may be capable of performing one or more of the functions of the processor 1704 described herein.
- the local memory of the NIC 1714 may be integrated into one or more components of the network switch 1612 at the board level, socket level, chip level, and/or other levels.
- the one or more peripheral devices 1716 may include any type of device that is usable to input information into the network switch 1612 and/or receive information from the network switch 1612 .
- the peripheral devices 1716 may be embodied as any auxiliary device usable to input information into the network switch 1612 , such as a keyboard, a mouse, a microphone, a barcode reader, an image scanner, etc., or output information from the network switch 1612 , such as a display, a speaker, graphics circuitry, a printer, a projector, etc. It should be appreciated that, in some embodiments, one or more of the peripheral devices 1716 may function as both an input device and an output device (e.g., a touchscreen display, a digitizer on top of a display screen, etc.).
- the particular peripheral devices 1716 connected to the network switch 1612 may depend on, for example, the type and/or intended use of the network switch 1612 . Additionally or alternatively, in some embodiments, the peripheral devices 1716 may include one or more ports, such as a USB port, for example, for connecting external peripheral devices to the network switch 1612 .
- a compute sled 1602 may establish an environment 1800 during operation.
- the illustrative environment 1800 includes a network connection manager 1810 and the phase detection logic unit 1610 of FIG. 16 .
- Each of the components of the environment 1800 may be embodied as hardware, firmware, software, or a combination thereof.
- one or more of the components of the environment 1800 may be embodied as circuitry or a collection of electrical devices (e.g., network connection management circuitry 1810 , phase detection logic circuitry 1610 , etc.).
- one or both of the network connection management circuitry 1810 and the phase detection logic circuitry 1610 may form a portion of one or more of the compute engine 1702 , the one or more data storage devices 1710 , the communication circuitry 1712 , and/or any other components of the network switch 1612 .
- the environment 1800 additionally includes telemetry data 1802 , phase change data 1804 , and migration policy data 1806 , each of which may be embodied as any data established by the network switch 1612 .
- the telemetry data 1802 may include any data usable to identify resource usage and/or performance of a computing element (e.g., a CPU) of a compute sled 1602 or an accelerator sled 1618 .
- the telemetry data 1802 may also include information about network traffic passing through the network switch 1612 , including network congestion information and frequencies of data access requests and responses to/from the compute sleds 1602 , the accelerator sleds 1618 , the storage sled 1614 , etc.
- the phase change data 1804 may include any data usable to identify phase changes (e.g., thresholds, expected durations, historical information, etc.) of various applications.
- the migration policy data 1806 may include any data (e.g., rules or policies) usable to instruct the network switch 1612 how/where to migrate hardware threads and/or compute kernels (e.g., under certain conditions).
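- One possible representation of the telemetry data 1802, the phase change data 1804, and the migration policy data 1806 is sketched below; the field names and default values are assumptions made only to illustrate the kind of information each data set might carry.

```python
# Sketch: illustrative (assumed) data structures for the environment 1800.
from dataclasses import dataclass, field


@dataclass
class TelemetryData:            # telemetry data 1802
    cpu_utilization_pct: float = 0.0
    instructions_per_cycle: float = 0.0
    network_congestion: float = 0.0


@dataclass
class PhaseChangeData:          # phase change data 1804
    ipc_threshold: float = 1.0
    expected_phase_duration_ms: float = 0.0


@dataclass
class MigrationPolicyData:      # migration policy data 1806
    allow_cross_sled_migration: bool = True
    preferred_sled_types: list = field(
        default_factory=lambda: ["compute", "accelerator"])
```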
- the network connection manager 1810 which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to facilitate inbound and outbound network communications (e.g., network traffic, network packets, network flows, etc.) to and from the network switch 1612 , respectively.
- the network connection manager 1810 is configured to receive and process data packets from one system or computing device (e.g., one of the compute sleds 1602 , the resource hardware manager 1608 , the storage sled 1614 , one of the accelerator sleds 1618 , etc.) and to prepare and send data packets to another computing device or system (e.g., one of the compute sleds 1602 , the resource hardware manager 1608 , the storage sled 1614 , one of the accelerator sleds 1618 , etc.). Accordingly, in some embodiments, at least a portion of the functionality of the network connection manager 1810 may be performed by the communication circuitry 1712 , or more particularly by the NIC 1714 .
- the phase detection logic unit 1610 is configured to analyze collected telemetry data to determine a phase change and orchestrate a migration of an application (i.e., the hardware threads of an application) and, under certain conditions, a compute kernel (i.e., a routine compiled for high throughput accelerators) associated with the migrated application.
- the illustrative phase detection logic unit 1610 includes a telemetry data collector 1812 , a phase change detector 1814 , and a migration manager 1816 .
- the telemetry data collector 1812 , which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof, is configured to collect telemetry data (e.g., the telemetry data 1802 ) reported by the compute sleds 1602 and the accelerator sleds 1618 as the workloads are executed thereon and compute kernels are offloaded therefrom.
- it should be appreciated that, in some embodiments, at least a portion of the functionality of the phase detection logic unit 1610 may be performed by the network switch 1612 and/or the resource hardware manager 1608 .
- the telemetry data may be destined for the resource hardware manager 1608 and collected upon receipt.
- in embodiments in which the telemetry data is collected by the network switch 1612 , for example, as network packets containing the telemetry data pass through the network switch 1612 (e.g., through the network connection manager 1810 ), the telemetry data collector 1812 identifies those network packets and stores the telemetry data locally in the network switch 1612 .
- an association with an identifier of the corresponding sled, or more particularly a corresponding compute element (i.e., a CPU, an FPGA, etc.) of that sled, is stored with the telemetry data.
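- A minimal sketch of this collection-and-association step follows; the packet fields and the compute element identifier format are hypothetical and used only to illustrate storing telemetry keyed by its originating compute element.

```python
# Sketch: telemetry observed in passing network packets is stored locally,
# keyed by an identifier of the compute element (CPU, FPGA, etc.) that produced it.
from collections import defaultdict


class TelemetryDataCollector:
    def __init__(self):
        self._store = defaultdict(list)  # compute element id -> telemetry samples

    def observe_packet(self, packet: dict) -> None:
        if packet.get("kind") == "telemetry":
            self._store[packet["compute_element_id"]].append(packet["payload"])

    def samples_for(self, compute_element_id: str) -> list:
        return self._store[compute_element_id]


collector = TelemetryDataCollector()
collector.observe_packet({"kind": "telemetry",
                          "compute_element_id": "sled-1602-cpu-0",
                          "payload": {"ipc": 1.4}})
print(collector.samples_for("sled-1602-cpu-0"))
```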
- the phase change detector 1814 which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof, is configured to detect a phase change of an application subsequent to having executed a compute kernel.
- the phases include, but are not limited to, a CPU bound phase, an FPGA bound phase, and a memory bound phase.
- the phase change detector 1814 is configured to detect when the application changes its behavior from a CPU bound phase to a different phase after the compute kernel execution has started. To do so, the phase change detector 1814 is configured to analyze the collected telemetry data (e.g., the telemetry data 1802 ) to determine whether a certain condition, or conditions, exists which indicates a phase change.
- the phase change detector 1814 may be configured to compare an IPC value to a threshold peak IPC value. In another example, the phase change detector 1814 may be configured to identify an amount of time a particular phase has taken historically to determine whether efficiencies can be realized by migrating the application's hardware threads to another compute element.
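- The IPC comparison mentioned above can be sketched as follows; the 0.5 ratio is an assumed threshold chosen only for illustration.

```python
# Sketch: if measured IPC falls well below a peak-IPC threshold while a compute
# kernel is running, treat it as a possible phase change.
def detect_phase_change(measured_ipc: float, peak_ipc: float,
                        ratio: float = 0.5) -> bool:
    return measured_ipc < ratio * peak_ipc


print(detect_phase_change(measured_ipc=0.6, peak_ipc=2.0))  # True, likely phase change
print(detect_phase_change(measured_ipc=1.8, peak_ipc=2.0))  # False
```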
- the migration manager 1816 , which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof, is configured to migrate hardware threads associated with an application to another compute element. To do so, the migration manager 1816 is configured to receive an indication of a detected phase change (e.g., from the phase change detector 1814 ) that indicates the application is to be migrated. The migration manager 1816 is additionally configured to identify the other compute element to which the application is to be migrated by transmitting a compute element identification request to the resource hardware manager 1608 , which is usable by the resource hardware manager 1608 to identify the other compute element (e.g., based on requirements of the workload associated with the hardware threads).
- the migration manager 1816 is configured to pause the running hardware threads, migrate their status to the identified other compute element, and resume the hardware threads. Additionally, the migration manager 1816 is configured to notify the appropriate operating system and/or the resource hardware manager 1608 of the completed hardware thread migration.
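- A high-level sketch of this pause/migrate/resume sequence is shown below; the helper objects and method names (e.g., identify_compute_element, capture_state, resume_on) are hypothetical placeholders, not interfaces defined by this disclosure.

```python
# Sketch: request a target compute element, pause the hardware threads, transfer
# their state, resume them on the target, and notify the operating system.
def migrate_hardware_threads(threads, resource_hw_manager, notify_os):
    # Ask the resource hardware manager to identify a suitable target
    # compute element for the workload (hypothetical call).
    target = resource_hw_manager.identify_compute_element(
        requirements={"threads": len(threads)})
    for t in threads:
        t.pause()
    for t in threads:
        target.receive_thread_state(t.capture_state())
    for t in threads:
        t.resume_on(target)
    notify_os(event="hardware_thread_migration_complete",
              target=target.identifier)
    return target
```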
- each of the telemetry data collector 1812 , the phase change detector 1814 , and the migration manager 1816 may be separately embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof.
- the telemetry data collector 1812 may be embodied as a hardware component
- the phase change detector 1814 and/or the migration manager 1816 may be embodied as virtualized hardware components or as some other combination of hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof.
- the compute sleds 1602 and/or the resource hardware manager 1608 may include at least a portion of the phase detection logic unit 1610 and may therefore establish an environment similar to the environment 1800 described herein.
- the resource hardware manager 1608 may be embodied as any type of computing device capable of monitoring and managing resources of the compute sleds 1602 , as well as performing the other functions described herein.
- the resource hardware manager 1608 may be embodied as a computer, a distributed computing system, one or more sleds (e.g., the sleds 204-1, 204-2, 204-3, 204-4, etc.), a server (e.g., stand-alone, rack-mounted, blade, etc.), a multiprocessor system, a network appliance (e.g., physical or virtual), a desktop computer, a workstation, a laptop computer, a notebook computer, or a processor-based system.
- an illustrative resource hardware manager 1608 has similar components to that of the network switch 1612 of FIG. 17 , including a compute engine 1902 with a processor 1904 and a memory 1906 , an I/O subsystem 1908 , communication circuitry 1912 with a NIC 1914 , and, in some embodiments, one or more data storage devices 1910 and/or one or more peripheral devices 1916 . Accordingly, the similar or like components are not described herein to preserve clarity of the description.
- the compute sleds 1602 may include other or additional components, such as those commonly found in a computing device. Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component.
- the network switch 1612 may be embodied as any type of networking device capable of performing the functions described herein, including switching network packets between the compute sleds 1602 , the resource hardware manager 1608 , the storage sled 1614 , and the accelerator sleds 1618 , as well as any other computing devices communicatively coupled to the network switch 1612 .
- the network switch 1612 may be embodied as a top-of-rack switch, a middle-of-rack switch, or other Ethernet switch. It should be appreciated that the network switch 1612 may include components similar to those described in the illustrative compute sled 1602 of FIG.
- the network switch 1612 may include alternative and/or additional components, such as those commonly found in a packet-switching network device (e.g., various input/output devices and/or other components).
- the storage sled 1614 may be embodied as any type of storage device capable of performing the functions described herein, such as managing a pool of storage devices 1616 (e.g., physical storage resources 205-1). To do so, the storage sled 1614 may include a memory pool controller (not shown) embodied as virtual and/or physical hardware, firmware, software, or a combination thereof, which is configured to manage data into and out of the storage devices 1616. It should be appreciated that while only a single storage sled 1614 is shown, other embodiments may include more than one storage sled 1614.
- the storage devices 1616 may be embodied as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory or data storage capable of performing the functions described herein.
- Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium.
- Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM).
- DRAM of a memory component may comply with a standard promulgated by JEDEC, such as JESD79F for DDR SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4 (these standards are available at www.jedec.org).
- Such standards may be referred to as DDR-based standards and communication interfaces of the storage devices that implement such standards may be referred to as DDR-based interfaces.
- the storage devices 1616 may be embodied as a block addressable memory device, such as those based on NAND or NOR technologies.
- a memory device may also include future generation nonvolatile devices, such as a three dimensional (3D) crosspoint memory device (e.g., Intel 3D XPoint™ memory), or other byte addressable write-in-place nonvolatile memory devices.
- the storage devices 1616 may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, resistive memory including the metal oxide base, the oxygen vacancy base and the conductive bridge Random Access Memory (CB-RAM), or spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory.
- the memory device may refer to the die itself and/or to a packaged memory product.
- the compute sleds 1602 may be pooled, as illustratively shown in the high-performance processing sleds 1134 of FIG. 11 .
- the illustrative compute sleds 1602 include a first compute sled, designated as compute sled (1) 1602 a , a second compute sled, designated as compute sled (2) 1602 b , and a third compute sled, designated as compute sled (N) 1602 c (e.g., in which the compute sled (N) 1602 c represents the “Nth” compute sled 1602 and “N” is a positive integer).
- the illustrative compute sled (1) 1602 a includes one or more high-performance CPUs 1604 .
- the illustrative compute sled (2) 1602 b includes one or more low-performance CPUs 1606.
- the high-performance CPUs 1604 are “high-performance” relative to comparable benchmark test results of features of the low-performance CPUs 1606 .
- a high-performance CPU may be defined as a CPU having a clock frequency above a threshold value, a number of cores above a threshold value, a total power rating above a threshold value, and/or other CPU performance metric that is above a corresponding reference threshold value.
- a high-performance CPU 1604 may be embodied as a high-performance Intel® Xeon® processor and a low-performance CPU 1606 may be embodied as a low-performance Intel® Xeon® processor.
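- As a non-limiting illustration of the threshold-based definition of a "high-performance" CPU given above, a small sketch follows; the metric names and threshold values are placeholders, not values specified herein.

```python
from dataclasses import dataclass


@dataclass
class CpuMetrics:
    clock_ghz: float
    cores: int
    power_rating_w: float


def is_high_performance(cpu: CpuMetrics,
                        min_clock_ghz: float = 2.5,
                        min_cores: int = 16,
                        min_power_w: float = 100.0) -> bool:
    """Treat a CPU as high-performance when at least one performance metric
    exceeds its reference threshold; the threshold values here are
    placeholders, not values taken from the description."""
    return (cpu.clock_ghz >= min_clock_ghz
            or cpu.cores >= min_cores
            or cpu.power_rating_w >= min_power_w)
```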
- the accelerator sleds 1618 may be pooled, as illustratively shown in the pooled accelerator sleds 1130 of FIG. 11 .
- the accelerator sleds 1618 include a first accelerator sled, designated as accelerator sled (1) 1618 a, a second accelerator sled, designated as accelerator sled (2) 1618 b, and a third accelerator sled, designated as accelerator sled (N) 1618 c (e.g., in which the accelerator sled (N) 1618 c represents the “Nth” accelerator sled 1618 and “N” is a positive integer).
- the illustrative accelerator sled (1) 1618 a includes an FPGA 1622, designated as FPGA 1622 a.
- the illustrative accelerator sled (2) 1618 b includes another FPGA 1622, designated as FPGA 1622 b, as well as a low-performance CPU 1620.
- the low-performance CPU 1620 of the illustrative accelerator sled (2) 1618 b may be the same or similar “low-performance” CPU to the low-performance CPU 1606 of the illustrative compute sled (2) 1602 b.
- one or more of the compute sleds 1602 and/or accelerator sleds 1618 may be grouped into a managed node, such as by the resource hardware manager 1608 , to collectively perform a workload, such as an application.
- a managed node may be embodied as an assembly of resources, such as compute resources, memory resources, storage resources, or other resources from the same or different sleds or racks.
- a managed node may be established, defined, or “spun up” by the resource hardware manager 1608 at the time a workload is to be assigned to the managed node or at any other time, and may exist regardless of whether any workloads are presently assigned to the managed node.
- the resource hardware manager 1608 may, in some embodiments, perform one or more orchestration operations in support of a cloud operating environment, such as OpenStack, and managed nodes established by the resource hardware manager 1608 may execute one or more applications or processes (i.e., workloads), such as in the VMs or containers, on behalf of a user of a client device (not shown) communicatively coupled to the resource hardware manager 1608 (e.g., via a network).
- one of the compute sleds 1602 may execute a method 2000 for offloading a compute kernel (see, e.g., the compute kernel 2204 of FIGS. 22A-B, 2304 of FIGS. 23A-B, and 2404 of FIGS. 24A-B) to an FPGA 1622 of one of the accelerator sleds 1618 by an application (see, e.g., the applications 2202, 2302, and 2402 of FIGS. 22A-24B) presently executing on the compute sled 1602.
- the method 2000 begins in block 2002 , in which the application determines whether to offload the compute kernel (i.e., a routine compiled for high throughput accelerators).
- the method 2000 advances to block 2004 , in which the application identifies an FPGA of one of the accelerator sleds 1618 to offload the compute kernel (e.g., an accelerator function unit) to.
- the application may transmit an FPGA identification request to the resource hardware manager 1608 , which may perform the actual identification of the FPGA and notify the compute sled 1602 of the identified FPGA.
- the application executes the compute kernel on the identified FPGA.
- the application notifies a phase detection logic unit of the execution of the compute kernel.
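- As a non-limiting illustration, the application-side offload flow of the method 2000 (determine whether to offload, request identification of an FPGA, execute the compute kernel, notify the phase detection logic unit) might be sketched as follows; the callables and identifiers are hypothetical stand-ins.

```python
def offload_compute_kernel(should_offload, identify_fpga, execute_on,
                           notify_phase_logic, kernel):
    """Sketch of the application-side flow: decide whether to offload the
    kernel (block 2002), send an FPGA identification request to the resource
    hardware manager (block 2004), execute the kernel on the identified
    FPGA, and notify the phase detection logic unit. The callables are
    hypothetical stand-ins for those interactions."""
    if not should_offload(kernel):
        return None
    fpga = identify_fpga(kernel)       # resource hardware manager picks the FPGA
    handle = execute_on(fpga, kernel)  # compute kernel starts on the accelerator sled
    notify_phase_logic(kernel, fpga)   # phase detection logic unit begins monitoring
    return handle


# usage with trivial stand-ins
handle = offload_compute_kernel(
    should_offload=lambda k: True,
    identify_fpga=lambda k: "accelerator_sled_1/fpga_1622a",
    execute_on=lambda fpga, k: f"{k} running on {fpga}",
    notify_phase_logic=lambda k, fpga: None,
    kernel="compute_kernel_2204",
)
```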
- a compute sled (e.g., one of the compute sleds 1602 of FIG. 16 ), or more particularly the phase detection logic unit 1610 of the compute sled 1602 , may execute a method 2100 for auto-migration in accelerated architectures.
- the method 2100 begins in block 2102 , in which the compute sled 1602 determines whether to monitor an application (i.e., the hardware threads associated with the application). For example, the compute sled 1602 may receive an indication from the application which indicates the execution of a compute kernel that was offloaded by the application to an FPGA 1622 .
- the method 2100 advances to block 2104 , in which the compute sled 1602 monitors hardware threads associated with the application to be monitored. To do so, in block 2106 , the compute sled 1602 collects telemetry data corresponding to the hardware threads to be monitored. For example, in block 2108 , the compute sled 1602 collects resource usage information. Additionally, in block 2110 , the compute sled 1602 collects CPU core performance data, such as a number of instructions per cycle being executed at a given point in time and other metrics related to whether the performance of the application is being CPU bound, memory bound, or Acceleration bound.
- the compute sled 1602 analyzes the collected telemetry to identify a phase change. To do so, in block 2114, the compute sled 1602 compares at least a portion of the telemetry data to one or more corresponding thresholds. For example, the compute sled 1602 may compare an IPC value against a peak IPC threshold for a particular compute element (e.g., a CPU). In block 2116, the compute sled 1602 determines whether a phase change has been detected as a result of the analysis performed in block 2112. As described previously, the phases include, but are not limited to, a CPU bound phase, an FPGA bound phase, and a memory bound phase.
- the method 2100 returns to block 2104 to continue to monitor the hardware threads; otherwise, if a phase change has been detected (e.g., from a CPU bound phase to another phase) the method 2100 advances to block 2118 .
- the compute sled 1602 identifies a new compute element to migrate the hardware threads to.
- the compute sled 1602 may not be capable of identifying the new compute element (e.g., due to the compute sled 1602 not having the necessary resource information available to do so). Accordingly, in such embodiments, the compute sled 1602 may transmit a request (e.g., a compute element identification request) to the resource hardware manager 1608 requesting the resource hardware manager 1608 to identify the new compute element and return the identified compute element.
- the resource hardware manager 1608 may be configured to identify the new compute element based on available resources of the compute sled 1602 on which the hardware threads are presently executing, the available resources of the other compute sleds 1602 , and resource requirements of the workload associated with the hardware threads.
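- As a non-limiting illustration, one simple selection policy the resource hardware manager 1608 might apply when answering a compute element identification request is sketched below; the data structures and the ranking rule are assumptions introduced for the example, as the embodiments do not prescribe a particular algorithm.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class CandidateElement:
    element_id: str
    element_type: str        # e.g., "high_perf_cpu", "low_perf_cpu", "fpga"
    available_cores: int
    available_mem_gb: float


@dataclass
class WorkloadRequirements:
    preferred_type: str
    cores: int
    mem_gb: float


def identify_compute_element(candidates: List[CandidateElement],
                             req: WorkloadRequirements) -> Optional[CandidateElement]:
    """Keep only candidates with enough free resources, prefer the requested
    element type, then pick the candidate with the most headroom. This
    ranking rule is illustrative only."""
    feasible = [c for c in candidates
                if c.available_cores >= req.cores and c.available_mem_gb >= req.mem_gb]
    if not feasible:
        return None
    feasible.sort(key=lambda c: (c.element_type != req.preferred_type,
                                 -(c.available_cores + c.available_mem_gb)))
    return feasible[0]
```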
- the compute sled 1602 migrates the hardware threads to the identified new compute element. To do so, in block 2122 , the compute sled 1602 pauses the hardware threads running on the present compute element. Additionally, in block 2124 , the compute sled 1602 migrates the hardware thread states to the other new compute element. Further, in block 2126 , the compute sled 1602 resumes the migrated hardware threads. Finally, in block 2128 , the compute sled 1602 takes the previously used compute element offline. In block 2130 , the compute sled 1602 notifies the respective operating system associated with the application of the migration. In some embodiments, the compute sled 1602 may return the offlined compute element to the respective operating system.
- the compute sled 1602 determines whether to also migrate the compute kernel associated with the migrated application from the FPGA 1622 on which the compute kernel is presently executing to a different FPGA 1622 (e.g., of a different one of the accelerator sleds 1618). If not, the method 2100 branches to block 2144 of FIG. 21B, which is described below; otherwise, if the compute sled 1602 determines to migrate the compute kernel, the method 2100 branches to block 2134 of FIG. 21B. In block 2134, the compute sled 1602 determines another FPGA to migrate the compute kernel to.
- the compute sled 1602 may not be capable of determining the FPGA, in some embodiments. Accordingly, in such embodiments, the compute sled 1602 may transmit a request (e.g., an FPGA identification request) to the resource hardware manager 1608 requesting the resource hardware manager 1608 to identify the new FPGA and return the identified FPGA. It should be appreciated that, in such embodiments, the resource hardware manager 1608 may be configured to identify the new FPGA based on available resources of the accelerator sled 1618 on which the compute kernel is presently executing, the available resources of the other accelerator sleds 1618 , and resource requirements of the compute kernel.
- the compute sled 1602 migrates the compute kernel to the determined new FPGA.
- the compute sled 1602 notifies the application associated with the compute kernel of the compute kernel's migration to the new FPGA.
- the compute sled 1602 monitors a completion status of the compute kernel.
- the compute sled 1602 monitors a phase of the corresponding application.
- the compute sled 1602 determines whether to migrate the application from the new compute element which the hardware thread was migrated to in block 2120 . To do so, for example, the compute sled 1602 may determine to migrate the application in response to having determined the compute kernel operation has completed, or is about to complete. Additionally or alternatively, the compute sled 1602 may determine to migrate the application in response to having detected the phase has changed back to a CPU bound phase.
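- As a non-limiting illustration, the decision of whether to migrate the application off the interim compute element might be sketched as follows; the progress metric and cutoff value are assumptions introduced for the example.

```python
def should_migrate_application_back(kernel_progress: float,
                                    current_phase: str,
                                    near_completion: float = 0.95) -> bool:
    """Migrate the application off the interim compute element when the
    offloaded compute kernel has completed (or is about to complete), or
    when the application has returned to a CPU bound phase. The progress
    fraction and the 0.95 cutoff are illustrative assumptions."""
    kernel_done_or_nearly = kernel_progress >= near_completion
    back_to_cpu_bound = current_phase == "cpu_bound"
    return kernel_done_or_nearly or back_to_cpu_bound
```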
- if the compute sled 1602 determines not to migrate the application, the method 2100 returns to block 2140 to continue monitoring the completion status of the compute kernel, as well as to continue monitoring the phase of the corresponding application in block 2142. Otherwise, if the compute sled 1602 determines to migrate the application, the method 2100 advances to block 2146 in which the compute sled 1602 identifies another new compute element to migrate the hardware threads to. As noted previously, the compute sled 1602 may rely on the resource hardware manager 1608 to identify the other new compute element and notify the compute sled 1602 of the identified other new compute element. In block 2148, the compute sled 1602 migrates the hardware threads to the identified other new compute element.
- the compute sled 1602 pauses the hardware threads running on the present compute element. Additionally, in block 2152 , the compute sled 1602 migrates the hardware thread states to the other new compute element. Further, in block 2154 , the compute sled 1602 resumes the migrated hardware threads. Finally, in block 2156 , the compute sled 1602 offlines the previously used compute element. In block 2158 , the compute sled 1602 notifies the respective operating system and the associated application of the successful migration. Accordingly, the application can make any reconfiguration changes to the application's software/network parameters as may be required as a result of the migration.
- phase detection logic unit 1610 may be in one or more of the compute sleds 1602 , the resource hardware manager 1608 , and the network switch 1612 , in other embodiments. Accordingly, it should be appreciated that, in such embodiments, at least a portion of the method 2100 may be performed by the network switch 1612 and/or the resource hardware manager 1608 in addition or alternatively to the compute sleds 1602 as described herein.
- the functions described herein may be performed, in other embodiments, by a platform including a local multi-processor computing device and at least one FPGA, or a configurable platform (e.g., the Intel® Discrete Configurable Platform) having multiple processors and at least one FPGA.
- an application presently executing on a compute element may offload a compute kernel to an FPGA 1622 of an accelerator sled 1618 (e.g., the FPGA 1622 a of the accelerator sled (1) 1618 a, the FPGA 1622 b of the accelerator sled (2) 1618 b, etc.). Further, the application may notify a phase detection logic unit 1610 of the offload such that, as described in the method 2100 of FIG. 21, the phase detection logic unit 1610 can monitor a phase of the application via the collection/analysis of telemetry data associated with the resources being used by the application.
- the phase detection logic unit 1610 may determine to migrate the application and, under certain conditions, the associated compute kernel. Accordingly, each of FIGS. 22A and 22B, 23A and 23B, and 24A and 24B illustrate non-limiting example application/compute kernel migrations.
- the compute sled (1) 1602 a includes a first high-performance CPU 1604 , designated as high-performance CPU 1604 a , and a second high-performance CPU 1604 , designated as high-performance CPU 1604 b .
- Each of the high-performance CPUs 1604 a and 1604 b is illustratively shown running an application 2202.
- a first application 2202 is presently being executed on the high-performance CPU 1604 a .
- a second application 2202 is presently being executed on the high-performance CPU 1604 b .
- the accelerator sled (1) 1618 a is shown having a compute kernel 2204 presently executing in the FPGA 1622 a of the accelerator sled (1) 1618 a .
- the compute kernel 2204 is associated with (i.e., was offloaded by) application (1) 2202 a.
- the application (1) 2202 a has been migrated from the high-performance CPU (1) 1604 a to the high-performance CPU (2) 1604 b (i.e., consolidated with the application (2) 2202 b). Additionally, the compute kernel 2204 has been migrated from the FPGA 1622 a of the accelerator sled (1) 1618 a to the FPGA 1622 b of the accelerator sled (2) 1618 b.
- the phase detection logic unit 1610 is configured to identify resource usage/performance of a workload of the application (2) 2202 b while the compute kernel 2204 is executing and compare the identified resource usage to a corresponding usage/performance threshold, as well as the phase of the application.
- the phase detection logic unit 1610 may be configured to identify a present IPC value and compare the identified present IPC value to an IPC peak threshold value, as well as identify the present phase (e.g., FPGA bound in FIG. 22A and CPU bound in FIG. 22B ).
- referring now to FIGS. 23A and 23B, an illustrative example for auto-migration of an application is shown which includes an application being migrated from the high-performance CPU 1604 of the compute sled 1602 a to the low-performance CPU 1606 of the compute sled 1602 b.
- an application 2302 is presently being executed by the high-performance CPU 1604 of the compute sled (1) 1602 a .
- a compute kernel 2304 is presently executing on the FPGA 1622 a of the accelerator sled (1) 1618 a .
- as illustratively shown post-migration in FIG. 23B, the application 2302 has been migrated to the low-performance CPU 1606 of the compute sled (2) 1602 b and the compute kernel has not been migrated.
- referring now to FIGS. 24A and 24B, an illustrative example is shown for auto-migration of an application and a compute kernel, both of which are migrated to the same accelerator sled (e.g., one of the accelerator sleds 1618).
- an application 2402 is presently being executed by the high-performance CPU 1604 of the compute sled (1) 1602 a and a compute kernel 2404 is presently executing on the FPGA 1622 a of the accelerator sled (1) 1618 a .
- the application 2402 has been migrated to the low-performance CPU 1620 of the accelerator sled (2) 1618 b and the compute kernel 2404 has been migrated from the FPGA 1622 a of the accelerator sled (1) 1618 a to the FPGA 1622 b of the accelerator sled (2) 1618 b.
- An embodiment of the technologies disclosed herein may include any one or more, and any combination of, the examples described below.
- Example 1 includes a compute sled for auto-migration in accelerated architectures, the compute sled comprising a compute engine to receive, from an application executed on a first compute element of a compute sled of a plurality of compute sleds, an indication that a compute kernel associated with the application has been offloaded to a field-programmable gate array (FPGA) of an accelerator sled of a plurality of accelerator sleds, wherein each of the plurality of accelerator sleds and the plurality of compute sleds are communicatively coupled to the compute sled; monitor a plurality of hardware threads associated with the application; detect whether a phase change has been detected as a function of the monitored hardware threads; and migrate, in response to a detection of the phase change, the hardware threads to a second compute element.
- Example 2 includes the subject matter of Example 1, and wherein to monitor the plurality of hardware threads comprises to collect telemetry data corresponding to one or more hardware resources used by the hardware threads during execution.
- Example 3 includes the subject matter of any of Examples 1 and 2, and wherein to collect the telemetry data includes to collect an instructions per cycle (IPC) value of the first compute element.
- Example 4 includes the subject matter of any of Examples 1-3, and wherein to detect whether the phase change has been detected comprises to compare the IPC value of the first compute element to a peak IPC threshold value.
- Example 5 includes the subject matter of any of Examples 1-4, and wherein to detect whether the phase change has been detected comprises to identify a previous phase as a central processing unit (CPU) bound phase and identify a present phase as an FPGA bound phase.
- Example 6 includes the subject matter of any of Examples 1-5, and wherein to detect whether the phase change has been detected comprises to identify a previous phase as a central processing unit (CPU) bound phase and identify a present phase as a memory bound phase.
- Example 7 includes the subject matter of any of Examples 1-6, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein to migrate the hardware threads to the second compute element in response to having detected the phase change comprises to migrate the hardware threads to another high-performance CPU of the compute sled.
- Example 8 includes the subject matter of any of Examples 1-7, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein to migrate the hardware threads to the second compute element in response to having detected the phase change comprises to migrate the hardware threads to a high-performance CPU of another compute sled of the plurality of compute sleds.
- Example 9 includes the subject matter of any of Examples 1-8, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein to migrate the hardware threads to the second compute element in response to having detected the phase change comprises to migrate the hardware threads to a low-performance CPU of another compute sled of the plurality of compute sleds.
- Example 10 includes the subject matter of any of Examples 1-9, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein to migrate the hardware threads to the second compute element in response to having detected the phase change comprises to migrate the hardware threads to a low-performance CPU of the accelerator sled.
- Example 11 includes the subject matter of any of Examples 1-10, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein to migrate the hardware threads to the second compute element in response to having detected the phase change comprises to migrate the hardware threads to a low-performance CPU of another accelerator sled of the plurality of accelerator sleds.
- Example 12 includes the subject matter of any of Examples 1-11, and wherein to migrate the hardware threads to the second compute element comprises to pause the hardware threads at the first compute element, migrate states of the hardware threads from the first compute element to the second compute element, resume the migrated hardware threads at the second compute element, and offline the first compute element.
- Example 13 includes the subject matter of any of Examples 1-12, and wherein the compute engine is further to migrate the compute kernel to another FPGA of another accelerator sled of the plurality of accelerator sleds.
- Example 14 includes the subject matter of any of Examples 1-13, and wherein the compute engine is further to receive an indication that indicates the compute kernel has completed; and migrate, in response to having received the indication, the application to a third compute element.
- Example 15 includes the subject matter of any of Examples 1-14, and wherein to migrate the application to the third compute element comprises to migrate the application to a high-performance CPU of one of the plurality of compute sleds.
- Example 16 includes the subject matter of any of Examples 1-15, and wherein to migrate the hardware threads to the third compute element comprises to pause the hardware threads at the second compute element, migrate states of the hardware threads from the second compute element to the third compute element, resume the migrated hardware threads at the third compute element, and offline the second compute element.
- Example 17 includes a method for auto-migration in accelerated architectures, the method comprising receiving, by a compute sled, from an application executed on a first compute element of a compute sled of a plurality of compute sleds, an indication that a compute kernel associated with the application has been offloaded to a field-programmable gate array (FPGA) of an accelerator sled of a plurality of accelerator sleds, wherein each of the plurality of accelerator sleds and the plurality of compute sleds are communicatively coupled to the compute sled; monitoring, by the compute sled, a plurality of hardware threads associated with the application; detecting, by the compute sled, whether a phase change has been detected as a function of the monitored hardware threads; and migrating, by the compute sled and in response to a detection of the phase change, the hardware threads to a second compute element.
- Example 18 includes the subject matter of Example 17, and wherein monitoring the plurality of hardware threads comprises collecting telemetry data corresponding to one or more hardware resources used by the hardware threads during execution.
- Example 19 includes the subject matter of any of Examples 17 and 18, and wherein collecting the telemetry data includes collecting an instructions per cycle (IPC) value of the first compute element.
- Example 20 includes the subject matter of any of Examples 17-19, and wherein detecting whether the phase change has been detected comprises comparing the IPC value of the first compute element to a peak IPC threshold value.
- Example 21 includes the subject matter of any of Examples 17-20, and wherein detecting whether the phase change has been detected comprises identifying a previous phase as a central processing unit (CPU) bound phase and identifying a present phase as an FPGA bound phase.
- Example 22 includes the subject matter of any of Examples 17-21, and wherein detecting whether the phase change has been detected comprises identifying a previous phase as a central processing unit (CPU) bound phase and identifying a present phase as a memory bound phase.
- Example 23 includes the subject matter of any of Examples 17-22, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein migrating the hardware threads to the second compute element in response to having detected the phase change comprises migrating the hardware threads to another high-performance CPU of the compute sled.
- Example 24 includes the subject matter of any of Examples 17-23, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein migrating the hardware threads to the second compute element in response to having detected the phase change comprises migrating the hardware threads to a high-performance CPU of another compute sled of the plurality of compute sleds.
- Example 25 includes the subject matter of any of Examples 17-24, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein migrating the hardware threads to the second compute element in response to having detected the phase change comprises migrating the hardware threads to a low-performance CPU of another compute sled of the plurality of compute sleds.
- Example 26 includes the subject matter of any of Examples 17-25, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein migrating the hardware threads to the second compute element in response to having detected the phase change comprises migrating the hardware threads to a low-performance CPU of the accelerator sled.
- Example 27 includes the subject matter of any of Examples 17-26, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein migrating the hardware threads to the second compute element in response to having detected the phase change comprises migrating the hardware threads to a low-performance CPU of another accelerator sled of the plurality of accelerator sleds.
- Example 28 includes the subject matter of any of Examples 17-27, and wherein migrating the hardware threads to the second compute element comprises pausing the hardware threads at the first compute element; migrating states of the hardware threads from the first compute element to the second compute element; resuming the migrated hardware threads at the second compute element; and offlining the first compute element.
- Example 29 includes the subject matter of any of Examples 17-28, and further including migrating, by the compute sled, the compute kernel to another FPGA of another accelerator sled of the plurality of accelerator sleds.
- Example 30 includes the subject matter of any of Examples 17-29, and further including receiving, by the compute sled, an indication that indicates the compute kernel has completed; and migrating, by the compute sled and in response to having received the indication, the application to a third compute element.
- Example 31 includes the subject matter of any of Examples 17-30, and wherein migrating the application to the third compute element comprises migrating the application to a high-performance CPU of one of the plurality of compute sleds.
- Example 32 includes the subject matter of any of Examples 17-31, and wherein migrating the hardware threads to the third compute element comprises pausing the hardware threads at the second compute element; migrating states of the hardware threads from the second compute element to the third compute element; resuming the migrated hardware threads at the third compute element; and offlining the second compute element.
- Example 33 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause a compute sled to perform the method of any of Examples 17-32.
- Example 34 includes a compute sled for auto-migration in accelerated architectures, the compute sled comprising one or more processors; one or more memory devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the compute sled to perform the method of any of Examples 17-32.
- Example 35 includes a compute sled for auto-migration in accelerated architectures, the compute sled comprising phase detection logic circuitry to receive, from an application executed on a first compute element of a compute sled of a plurality of compute sleds, an indication that a compute kernel associated with the application has been offloaded to a field-programmable gate array (FPGA) of an accelerator sled of a plurality of accelerator sleds, wherein each of the plurality of accelerator sleds and the plurality of compute sleds are communicatively coupled to the compute sled; monitor a plurality of hardware threads associated with the application; detect whether a phase change has been detected as a function of the monitored hardware threads; and migrate, in response to a detection of the phase change, the hardware threads to a second compute element.
- Example 36 includes the subject matter of Example 35, and wherein to monitor the plurality of hardware threads comprises to collect telemetry data corresponding to one or more hardware resources used by the hardware threads during execution.
- Example 37 includes the subject matter of any of Examples 35 and 36, and wherein to collect the telemetry data includes to collect an instructions per cycle (IPC) value of the first compute element.
- Example 38 includes the subject matter of any of Examples 35-37, and wherein to detect whether the phase change has been detected comprises to compare the IPC value of the first compute element to a peak IPC threshold value.
- Example 39 includes the subject matter of any of Examples 35-38, and wherein to detect whether the phase change has been detected comprises to identify a previous phase as a central processing unit (CPU) bound phase and identify a present phase as an FPGA bound phase.
- Example 40 includes the subject matter of any of Examples 35-39, and wherein to detect whether the phase change has been detected comprises to identify a previous phase as a central processing unit (CPU) bound phase and identify a present phase as a memory bound phase.
- Example 41 includes the subject matter of any of Examples 35-40, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein to migrate the hardware threads to the second compute element in response to having detected the phase change comprises to migrate the hardware threads to another high-performance CPU of the compute sled.
- Example 42 includes the subject matter of any of Examples 35-41, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein to migrate the hardware threads to the second compute element in response to having detected the phase change comprises to migrate the hardware threads to a high-performance CPU of another compute sled of the plurality of compute sleds.
- Example 43 includes the subject matter of any of Examples 35-42, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein to migrate the hardware threads to the second compute element in response to having detected the phase change comprises to migrate the hardware threads to a low-performance CPU of another compute sled of the plurality of compute sleds.
- Example 44 includes the subject matter of any of Examples 35-43, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein to migrate the hardware threads to the second compute element in response to having detected the phase change comprises to migrate the hardware threads to a low-performance CPU of the accelerator sled.
- Example 45 includes the subject matter of any of Examples 35-44, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein to migrate the hardware threads to the second compute element in response to having detected the phase change comprises to migrate the hardware threads to a low-performance CPU of another accelerator sled of the plurality of accelerator sleds.
- Example 46 includes the subject matter of any of Examples 35-45, and wherein to migrate the hardware threads to the second compute element comprises to pause the hardware threads at the first compute element, migrate states of the hardware threads from the first compute element to the second compute element, resume the migrated hardware threads at the second compute element, and offline the first compute element.
- Example 47 includes the subject matter of any of Examples 35-46, and wherein the phase detection logic circuitry is further to migrate the compute kernel to another FPGA of another accelerator sled of the plurality of accelerator sleds.
- Example 48 includes the subject matter of any of Examples 35-47, and wherein the phase detection logic circuitry is further to receive an indication that indicates the compute kernel has completed; and migrate, in response to having received the indication, the application to a third compute element.
- Example 49 includes the subject matter of any of Examples 35-48, and wherein to migrate the application to the third compute element comprises to migrate the application to a high-performance CPU of one of the plurality of compute sleds.
- Example 50 includes the subject matter of any of Examples 35-49, and wherein to migrate the hardware threads to the third compute element comprises to pause the hardware threads at the second compute element, migrate states of the hardware threads from the second compute element to the third compute element, resume the migrated hardware threads at the third compute element, and offline the second compute element.
- Example 35 includes a compute sled for auto-migration in accelerated architectures, the compute sled comprising circuitry for receiving, from an application executed on a first compute element of a compute sled of a plurality of compute sleds, an indication that a compute kernel associated with the application has been offloaded to a field-programmable gate array (FPGA) of an accelerator sled of a plurality of accelerator sleds, wherein each of the plurality of accelerator sleds and the plurality of compute sleds are communicatively coupled to the compute sled; means for monitoring a plurality of hardware threads associated with the application; means for detecting whether a phase change has been detected as a function of the monitored hardware threads; and circuitry for migrating, in response to a detection of the phase change, the hardware threads to a second compute element.
- Example 36 includes the subject matter of Example 35, and wherein the means for monitoring the plurality of hardware threads comprises means for collecting telemetry data corresponding to one or more hardware resources used by the hardware threads during execution.
- Example 37 includes the subject matter of any of Examples 35 and 36, and wherein the means for collecting the telemetry data includes means for collecting an instructions per cycle (IPC) value of the first compute element.
- Example 38 includes the subject matter of any of Examples 35-37, and wherein the means for detecting whether the phase change has been detected comprises means for comparing the IPC value of the first compute element to a peak IPC threshold value.
- Example 39 includes the subject matter of any of Examples 35-38, and wherein the means for detecting whether the phase change has been detected comprises means for identifying a previous phase as a central processing unit (CPU) bound phase and identifying a present phase as an FPGA bound phase.
- Example 40 includes the subject matter of any of Examples 35-39, and wherein the means for detecting whether the phase change has been detected comprises means for identifying a previous phase as a central processing unit (CPU) bound phase and identifying a present phase as a memory bound phase.
- Example 41 includes the subject matter of any of Examples 35-40, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein migrating the hardware threads to the second compute element in response to having detected the phase change comprises migrating the hardware threads to another high-performance CPU of the compute sled.
- Example 42 includes the subject matter of any of Examples 35-41, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein migrating the hardware threads to the second compute element in response to having detected the phase change comprises migrating the hardware threads to a high-performance CPU of another compute sled of the plurality of compute sleds.
- Example 43 includes the subject matter of any of Examples 35-42, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein migrating the hardware threads to the second compute element in response to having detected the phase change comprises migrating the hardware threads to a low-performance CPU of another compute sled of the plurality of compute sleds.
- Example 44 includes the subject matter of any of Examples 35-43, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein migrating the hardware threads to the second compute element in response to having detected the phase change comprises migrating the hardware threads to a low-performance CPU of the accelerator sled.
- Example 45 includes the subject matter of any of Examples 35-44, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein migrating the hardware threads to the second compute element in response to having detected the phase change comprises migrating the hardware threads to a low-performance CPU of another accelerator sled of the plurality of accelerator sleds.
- Example 46 includes the subject matter of any of Examples 35-45, and wherein the circuitry for migrating the hardware threads to the second compute element comprises circuitry for pausing the hardware threads at the first compute element; circuitry for migrating states of the hardware threads from the first compute element to the second compute element; circuitry for resuming the migrated hardware threads at the second compute element; and circuitry for offlining the first compute element.
- Example 47 includes the subject matter of any of Examples 35-46, and further including circuitry for migrating, by the compute sled, the compute kernel to another FPGA of another accelerator sled of the plurality of accelerator sleds.
- Example 48 includes the subject matter of any of Examples 35-47, and further including circuitry for receiving, by the compute sled, an indication that indicates the compute kernel has completed; and circuitry for migrating, by the compute sled and in response to having received the indication, the application to a third compute element.
- Example 49 includes the subject matter of any of Examples 35-48, and wherein the circuitry for migrating the application to the third compute element comprises circuitry for migrating the application to a high-performance CPU of one of the plurality of compute sleds.
- Example 50 includes the subject matter of any of Examples 35-49, and wherein the circuitry for migrating the hardware threads to the third compute element comprises circuitry for pausing the hardware threads at the second compute element; circuitry for migrating states of the hardware threads from the second compute element to the third compute element; circuitry for resuming the migrated hardware threads at the third compute element; and circuitry for offlining the second compute element.
Abstract
Description
- The present application claims the benefit of Indian Provisional Patent Application No. 201741030632, filed Aug. 30, 2017 and U.S. Provisional Patent Application No. 62/584,401, filed Nov. 10, 2017.
- Network operators and service providers typically rely on various accelerator technologies to accelerate workloads in complex, large-scale computing environments, such as high-performance computing (HPC) and cloud computing environments. These accelerators can be configured to perform special-purpose computations (e.g., searching, pattern matching, signal and image processing, encryption, etc.) in an efficient, parallel manner. One such accelerator technology is a field-programmable gate array (FPGA), which consists of an array of logic gates that can be hardware-programmed to perform specific tasks. In particular, FPGAs operate at relatively low clock rates compared to processor clock rates, which means FPGAs are generally more power efficient than processor cores.
- Accordingly, in some computing architectures, while hardware threads of an application are being executed by a processor, certain application functionality (e.g., compute kernels) may be offloaded to an FPGA. Typically, the hardware threads of the application are paused while waiting for the compute kernels to execute in the FPGA. However, because the hardware threads are merely paused, the software stack is required to make the pause/resume decision, which can be ineffective under certain conditions. Additionally, pausing/resuming the application threads presently operates on the order of milliseconds (e.g., driven by software interactions) and considers only a binary choice (i.e., pause or resume), which can be inflexible in certain computing environments.
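- For context, this conventional software-driven offload pattern can be sketched as follows. The fpga handle and its submit/wait/read_results calls are hypothetical stand-ins rather than the interface of any particular FPGA runtime; the blocking wait is the binary, millisecond-scale pause/resume behavior discussed above.

```python
# Minimal sketch of the conventional pause/resume offload pattern, assuming a
# hypothetical fpga handle; not the API of any particular FPGA runtime.
import threading


def offload_kernel(fpga, kernel, data, done: threading.Event) -> None:
    """Submit a compute kernel to the FPGA and signal completion (hypothetical API)."""
    fpga.submit(kernel, data)   # hand the kernel off to the gate array
    fpga.wait()                 # block until the hardware finishes
    done.set()


def run_with_offload(fpga, kernel, data):
    """Run an application thread that offloads a kernel and pauses until it completes."""
    done = threading.Event()
    worker = threading.Thread(target=offload_kernel, args=(fpga, kernel, data, done))
    worker.start()

    # The application's hardware threads are effectively paused here: a
    # software-driven, all-or-nothing wait with no option other than to
    # pause now and resume later.
    done.wait()

    worker.join()
    return fpga.read_results()  # resume: consume the kernel's output and continue
```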
- The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.
- FIG. 1 is a simplified diagram of at least one embodiment of a data center for executing workloads with disaggregated resources;
- FIG. 2 is a simplified diagram of at least one embodiment of a pod of the data center of FIG. 1;
- FIG. 3 is a perspective view of at least one embodiment of a rack that may be included in the pod of FIG. 2;
- FIG. 4 is a side plan elevation view of the rack of FIG. 3;
- FIG. 5 is a perspective view of the rack of FIG. 3 having a sled mounted therein;
- FIG. 6 is a simplified block diagram of at least one embodiment of a top side of the sled of FIG. 5;
- FIG. 7 is a simplified block diagram of at least one embodiment of a bottom side of the sled of FIG. 6;
- FIG. 8 is a simplified block diagram of at least one embodiment of a compute sled usable in the data center of FIG. 1;
- FIG. 9 is a top perspective view of at least one embodiment of the compute sled of FIG. 8;
- FIG. 10 is a simplified block diagram of at least one embodiment of an accelerator sled usable in the data center of FIG. 1;
- FIG. 11 is a top perspective view of at least one embodiment of the accelerator sled of FIG. 10;
- FIG. 12 is a simplified block diagram of at least one embodiment of a storage sled usable in the data center of FIG. 1;
- FIG. 13 is a top perspective view of at least one embodiment of the storage sled of FIG. 12;
- FIG. 14 is a simplified block diagram of at least one embodiment of a memory sled usable in the data center of FIG. 1; and
- FIG. 15 is a simplified block diagram of a system that may be established within the data center of FIG. 1 to execute workloads with managed nodes composed of disaggregated resources.
- FIG. 16 is a simplified block diagram of at least one embodiment of a system for auto-migration in accelerated architectures which includes multiple compute sleds, a storage sled, multiple accelerator sleds, a network switch, and a resource manager server;
- FIG. 17 is a simplified block diagram of at least one embodiment of one of the compute sleds of the system of FIG. 16;
- FIG. 18 is a simplified block diagram of at least one embodiment of an environment that may be established by one of the compute sleds of FIGS. 16 and 17;
- FIG. 19 is a simplified block diagram of at least one embodiment of the network switch of the system of FIG. 16;
- FIG. 20 is a simplified flow diagram of at least one embodiment of a method for offloading a compute kernel to a field-programmable gate array (FPGA) that may be performed by an application presently executing on one or more compute sleds of the system of FIG. 16;
- FIGS. 21A and 21B are a simplified flow diagram of at least one embodiment of a method for auto-migration in accelerated architectures that may be performed by one of the compute sleds of FIGS. 16-18;
- FIGS. 22A and 22B are simplified block diagrams of at least one embodiment of an auto-migration of an application being consolidated with another application in one of the compute sleds of the system of FIG. 16 having a high-performance central processing unit (CPU);
- FIGS. 23A and 23B are simplified block diagrams of at least one embodiment of an auto-migration of an application being migrated from a high-performance CPU of one of the compute sleds of the system of FIG. 16 to another of the compute sleds having a low-performance CPU; and
- FIGS. 24A and 24B are simplified block diagrams of at least one embodiment of an auto-migration of an application and a compute kernel to one of the accelerator sleds of the system of FIG. 16.
- While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
- References in the specification to “one embodiment,” “an embodiment,” “an illustrative embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of “at least one A, B, and C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of “at least one of A, B, or C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
- The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).
- In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.
- Referring now to
FIG. 1 , adata center 100 in which disaggregated resources may cooperatively execute one or more workloads (e.g., applications on behalf of customers) includesmultiple pods pod spine switches 150 that switch communications among pods (e.g., thepods data center 100. In some embodiments, the sleds may be connected with a fabric using Intel Omni-Path technology. As described in more detail herein, resources within sleds in thedata center 100 may be allocated to a group (referred to herein as a “managed node”) containing resources from one or more other sleds to be collectively utilized in the execution of a workload. The workload can execute as if the resources belonging to the managed node were located on the same sled. The resources in a managed node may even belong to sleds belonging to different racks, and even todifferent pods data center 100 provides more efficient resource usage over typical data centers comprised of hyperconverged servers containing compute, memory, storage and perhaps additional resources). As such, thedata center 100 may provide greater performance (e.g., throughput, operations per second, latency, etc.) than a typical data center that has the same number of resources. - Referring now to
FIG. 2 , thepod 110, in the illustrative embodiment, includes a set ofrows racks 240. Eachrack 240 may house multiple sleds (e.g., sixteen sleds) and provide power and data connections to the housed sleds, as described in more detail herein. In the illustrative embodiment, the racks in eachrow pod switch 250 includes a set ofports 252 to which the sleds of the racks of thepod 110 are connected and another set ofports 254 that connect thepod 110 to the spine switches 150 to provide connectivity to other pods in thedata center 100. Similarly, thepod switch 260 includes a set ofports 262 to which the sleds of the racks of thepod 110 are connected and a set ofports 264 that connect thepod 110 to the spine switches 150. As such, the use of the pair ofswitches pod 110. For example, if either of theswitches pod 110 may still maintain data communication with the remainder of the data center 100 (e.g., sleds of other pods) through theother switch switches - It should be appreciated that each of the
other pods pod 110 shown in and described in regard toFIG. 2 (e.g., each pod may have rows of racks housing multiple sleds as described above). Additionally, while twopod switches pod - Referring now to
FIGS. 3-5 , eachillustrative rack 240 of thedata center 100 includes two elongated support posts 302, 304, which are arranged vertically. For example, the elongated support posts 302, 304 may extend upwardly from a floor of thedata center 100 when deployed. Therack 240 also includes one or morehorizontal pairs 310 of elongated support arms 312 (identified inFIG. 3 via a dashed ellipse) configured to support a sled of thedata center 100 as discussed below. Oneelongated support arm 312 of the pair ofelongated support arms 312 extends outwardly from theelongated support post 302 and the otherelongated support arm 312 extends outwardly from theelongated support post 304. - In the illustrative embodiments, each sled of the
data center 100 is embodied as a chassis-less sled. That is, each sled has a chassis-less circuit board substrate on which physical resources (e.g., processors, memory, accelerators, storage, etc.) are mounted as discussed in more detail below. As such, therack 240 is configured to receive the chassis-less sleds. For example, eachpair 310 ofelongated support arms 312 defines asled slot 320 of therack 240, which is configured to receive a corresponding chassis-less sled. To do so, each illustrativeelongated support arm 312 includes acircuit board guide 330 configured to receive the chassis-less circuit board substrate of the sled. Eachcircuit board guide 330 is secured to, or otherwise mounted to, atop side 332 of the correspondingelongated support arm 312. For example, in the illustrative embodiment, eachcircuit board guide 330 is mounted at a distal end of the correspondingelongated support arm 312 relative to the correspondingelongated support post circuit board guide 330 may be referenced in each Figure. - Each
circuit board guide 330 includes an inner wall that defines acircuit board slot 380 configured to receive the chassis-less circuit board substrate of asled 400 when thesled 400 is received in thecorresponding sled slot 320 of therack 240. To do so, as shown inFIG. 4 , a user (or robot) aligns the chassis-less circuit board substrate of anillustrative chassis-less sled 400 to asled slot 320. The user, or robot, may then slide the chassis-less circuit board substrate forward into thesled slot 320 such that eachside edge 414 of the chassis-less circuit board substrate is received in a correspondingcircuit board slot 380 of the circuit board guides 330 of thepair 310 ofelongated support arms 312 that define thecorresponding sled slot 320 as shown inFIG. 4 . By having robotically accessible and robotically manipulable sleds comprising disaggregated resources, each type of resource can be upgraded independently of each other and at their own optimized refresh rate. Furthermore, the sleds are configured to blindly mate with power and data communication cables in eachrack 240, enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. As such, in some embodiments, thedata center 100 may operate (e.g., execute workloads, undergo maintenance and/or upgrades, etc.) without human involvement on the data center floor. In other embodiments, a human may facilitate one or more maintenance or upgrade operations in thedata center 100. - It should be appreciated that each
circuit board guide 330 is dual sided. That is, each circuit board guide 330 includes an inner wall that defines a circuit board slot 380 on each side of the circuit board guide 330. In this way, each circuit board guide 330 can support a chassis-less circuit board substrate on either side. As such, a single additional elongated support post may be added to the rack 240 to turn the rack 240 into a two-rack solution that can hold twice as many sled slots 320 as shown in FIG. 3. The illustrative rack 240 includes seven pairs 310 of elongated support arms 312 that define a corresponding seven sled slots 320, each configured to receive and support a corresponding sled 400 as discussed above. Of course, in other embodiments, the rack 240 may include additional or fewer pairs 310 of elongated support arms 312 (i.e., additional or fewer sled slots 320). It should be appreciated that because the sled 400 is chassis-less, the sled 400 may have an overall height that is different than typical servers. As such, in some embodiments, the height of each sled slot 320 may be shorter than the height of a typical server (e.g., shorter than a single rack unit, "1 U"). That is, the vertical distance between each pair 310 of elongated support arms 312 may be less than a standard rack unit "1 U." Additionally, due to the relative decrease in height of the sled slots 320, the overall height of the rack 240 in some embodiments may be shorter than the height of traditional rack enclosures. For example, in some embodiments, each of the elongated support posts 302, 304 may have a length of six feet or less. Again, in other embodiments, the rack 240 may have different dimensions. Further, it should be appreciated that the rack 240 does not include any walls, enclosures, or the like. Rather, the rack 240 is an enclosure-less rack that is open to the local environment. Of course, in some cases, an end plate may be attached to one of the elongated support posts 302, 304 in those situations in which the rack 240 forms an end-of-row rack in the data center 100. - In some embodiments, various interconnects may be routed upwardly or downwardly through the elongated support posts 302, 304. To facilitate such routing, each
elongated support post sled slot 320, power interconnects to provide power to eachsled slot 320, and/or other types of interconnects. - The
rack 240, in the illustrative embodiment, includes a support platform on which a corresponding optical data connector (not shown) is mounted. Each optical data connector is associated with acorresponding sled slot 320 and is configured to mate with an optical data connector of acorresponding sled 400 when thesled 400 is received in thecorresponding sled slot 320. In some embodiments, optical connections between components (e.g., sleds, racks, and switches) in thedata center 100 are made with a blind mate optical connection. For example, a door on each cable may prevent dust from contaminating the fiber inside the cable. In the process of connecting to a blind mate optical connector mechanism, the door is pushed open when the end of the cable enters the connector mechanism. Subsequently, the optical fiber inside the cable enters a gel within the connector mechanism and the optical fiber of one cable comes into contact with the optical fiber of another cable within the gel inside the connector mechanism. - The
illustrative rack 240 also includes afan array 370 coupled to the cross-support arms of therack 240. Thefan array 370 includes one or more rows of coolingfans 372, which are aligned in a horizontal line between the elongated support posts 302, 304. In the illustrative embodiment, thefan array 370 includes a row of coolingfans 372 for eachsled slot 320 of therack 240. As discussed above, eachsled 400 does not include any on-board cooling system in the illustrative embodiment and, as such, thefan array 370 provides cooling for eachsled 400 received in therack 240. Eachrack 240, in the illustrative embodiment, also includes a power supply associated with eachsled slot 320. Each power supply is secured to one of theelongated support arms 312 of thepair 310 ofelongated support arms 312 that define thecorresponding sled slot 320. For example, therack 240 may include a power supply coupled or secured to eachelongated support arm 312 extending from theelongated support post 302. Each power supply includes a power connector configured to mate with a power connector of thesled 400 when thesled 400 is received in thecorresponding sled slot 320. In the illustrative embodiment, thesled 400 does not include any on-board power supply and, as such, the power supplies provided in therack 240 supply power to correspondingsleds 400 when mounted to therack 240. - Referring now to
FIG. 6 , thesled 400, in the illustrative embodiment, is configured to be mounted in acorresponding rack 240 of thedata center 100 as discussed above. In some embodiments, eachsled 400 may be optimized or otherwise configured for performing particular tasks, such as compute tasks, acceleration tasks, data storage tasks, etc. For example, thesled 400 may be embodied as acompute sled 800 as discussed below in regard toFIGS. 8-9 , anaccelerator sled 1000 as discussed below in regard toFIGS. 10-11 , astorage sled 1200 as discussed below in regard toFIGS. 12-13 , or as a sled optimized or otherwise configured to perform other specialized tasks, such as amemory sled 1400, discussed below in regard toFIG. 14 . - As discussed above, the
illustrative sled 400 includes a chassis-lesscircuit board substrate 602, which supports various physical resources (e.g., electrical components) mounted thereon. It should be appreciated that thecircuit board substrate 602 is “chassis-less” in that thesled 400 does not include a housing or enclosure. Rather, the chassis-lesscircuit board substrate 602 is open to the local environment. The chassis-lesscircuit board substrate 602 may be formed from any material capable of supporting the various electrical components mounted thereon. For example, in an illustrative embodiment, the chassis-lesscircuit board substrate 602 is formed from an FR-4 glass-reinforced epoxy laminate material. Of course, other materials may be used to form the chassis-lesscircuit board substrate 602 in other embodiments. - As discussed in more detail below, the chassis-less
circuit board substrate 602 includes multiple features that improve the thermal cooling characteristics of the various electrical components mounted on the chassis-lesscircuit board substrate 602. As discussed, the chassis-lesscircuit board substrate 602 does not include a housing or enclosure, which may improve the airflow over the electrical components of thesled 400 by reducing those structures that may inhibit air flow. For example, because the chassis-lesscircuit board substrate 602 is not positioned in an individual housing or enclosure, there is no backplane (e.g., a backplate of the chassis) to the chassis-lesscircuit board substrate 602, which could inhibit air flow across the electrical components. Additionally, the chassis-lesscircuit board substrate 602 has a geometric shape configured to reduce the length of the airflow path across the electrical components mounted to the chassis-lesscircuit board substrate 602. For example, the illustrative chassis-lesscircuit board substrate 602 has awidth 604 that is greater than adepth 606 of the chassis-lesscircuit board substrate 602. In one particular embodiment, for example, the chassis-lesscircuit board substrate 602 has a width of about 21 inches and a depth of about 9 inches, compared to a typical server that has a width of about 17 inches and a depth of about 39 inches. As such, anairflow path 608 that extends from afront edge 610 of the chassis-lesscircuit board substrate 602 toward arear edge 612 has a shorter distance relative to typical servers, which may improve the thermal cooling characteristics of thesled 400. Furthermore, although not illustrated inFIG. 6 , the various physical resources mounted to the chassis-lesscircuit board substrate 602 are mounted in corresponding locations such that no two substantively heat-producing electrical components shadow each other as discussed in more detail below. That is, no two electrical components, which produce appreciable heat during operation (i.e., greater than a nominal heat sufficient enough to adversely impact the cooling of another electrical component), are mounted to the chassis-lesscircuit board substrate 602 linearly in-line with each other along the direction of the airflow path 608 (i.e., along a direction extending from thefront edge 610 toward therear edge 612 of the chassis-less circuit board substrate 602). - As discussed above, the
illustrative sled 400 includes one or morephysical resources 620 mounted to atop side 650 of the chassis-lesscircuit board substrate 602. Although twophysical resources 620 are shown inFIG. 6 , it should be appreciated that thesled 400 may include one, two, or morephysical resources 620 in other embodiments. Thephysical resources 620 may be embodied as any type of processor, controller, or other compute circuit capable of performing various tasks such as compute functions and/or controlling the functions of thesled 400 depending on, for example, the type or intended functionality of thesled 400. For example, as discussed in more detail below, thephysical resources 620 may be embodied as high-performance processors in embodiments in which thesled 400 is embodied as a compute sled, as accelerator co-processors or circuits in embodiments in which thesled 400 is embodied as an accelerator sled, storage controllers in embodiments in which thesled 400 is embodied as a storage sled, or a set of memory devices in embodiments in which thesled 400 is embodied as a memory sled. - The
sled 400 also includes one or more additionalphysical resources 630 mounted to thetop side 650 of the chassis-lesscircuit board substrate 602. In the illustrative embodiment, the additional physical resources include a network interface controller (NIC) as discussed in more detail below. Of course, depending on the type and functionality of thesled 400, thephysical resources 630 may include additional or other electrical components, circuits, and/or devices in other embodiments. - The
physical resources 620 are communicatively coupled to the physical resources 630 via an input/output (I/O) subsystem 622. The I/O subsystem 622 may be embodied as circuitry and/or components to facilitate input/output operations with the physical resources 620, the physical resources 630, and/or other components of the sled 400. For example, the I/O subsystem 622 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In the illustrative embodiment, the I/O subsystem 622 is embodied as, or otherwise includes, a double data rate 4 (DDR4) data bus or a DDR5 data bus. - In some embodiments, the
sled 400 may also include a resource-to-resource interconnect 624. The resource-to-resource interconnect 624 may be embodied as any type of communication interconnect capable of facilitating resource-to-resource communications. In the illustrative embodiment, the resource-to-resource interconnect 624 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the resource-to-resource interconnect 624 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to resource-to-resource communications. - The
sled 400 also includes apower connector 640 configured to mate with a corresponding power connector of therack 240 when thesled 400 is mounted in thecorresponding rack 240. Thesled 400 receives power from a power supply of therack 240 via thepower connector 640 to supply power to the various electrical components of thesled 400. That is, thesled 400 does not include any local power supply (i.e., an on-board power supply) to provide power to the electrical components of thesled 400. The exclusion of a local or on-board power supply facilitates the reduction in the overall footprint of the chassis-lesscircuit board substrate 602, which may increase the thermal cooling characteristics of the various electrical components mounted on the chassis-lesscircuit board substrate 602 as discussed above. In some embodiments, power is provided to theprocessors 820 through vias directly under the processors 820 (e.g., through thebottom side 750 of the chassis-less circuit board substrate 602), providing an increased thermal budget, additional current and/or voltage, and better voltage control over typical boards. - In some embodiments, the
sled 400 may also include mountingfeatures 642 configured to mate with a mounting arm, or other structure, of a robot to facilitate the placement of the sled 600 in arack 240 by the robot. The mounting features 642 may be embodied as any type of physical structures that allow the robot to grasp thesled 400 without damaging the chassis-lesscircuit board substrate 602 or the electrical components mounted thereto. For example, in some embodiments, the mounting features 642 may be embodied as non-conductive pads attached to the chassis-lesscircuit board substrate 602. In other embodiments, the mounting features may be embodied as brackets, braces, or other similar structures attached to the chassis-lesscircuit board substrate 602. The particular number, shape, size, and/or make-up of the mountingfeature 642 may depend on the design of the robot configured to manage thesled 400. - Referring now to
FIG. 7 , in addition to thephysical resources 630 mounted on thetop side 650 of the chassis-lesscircuit board substrate 602, thesled 400 also includes one ormore memory devices 720 mounted to abottom side 750 of the chassis-lesscircuit board substrate 602. That is, the chassis-lesscircuit board substrate 602 is embodied as a double-sided circuit board. Thephysical resources 620 are communicatively coupled to thememory devices 720 via the I/O subsystem 622. For example, thephysical resources 620 and thememory devices 720 may be communicatively coupled by one or more vias extending through the chassis-lesscircuit board substrate 602. Eachphysical resource 620 may be communicatively coupled to a different set of one ormore memory devices 720 in some embodiments. Alternatively, in other embodiments, eachphysical resource 620 may be communicatively coupled to eachmemory devices 720. - The
memory devices 720 may be embodied as any type of memory device capable of storing data for thephysical resources 620 during operation of thesled 400, such as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory. Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium. Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM). One particular type of DRAM that may be used in a memory module is synchronous dynamic random access memory (SDRAM). In particular embodiments, DRAM of a memory component may comply with a standard promulgated by JEDEC, such as JESD79F for DDR SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4 (these standards are available at www.jedec.org). Such standards (and similar standards) may be referred to as DDR-based standards and communication interfaces of the storage devices that implement such standards may be referred to as DDR-based interfaces. - In one embodiment, the memory device is a block addressable memory device, such as those based on NAND or NOR technologies. A memory device may also include next-generation nonvolatile devices, such as Intel 3D XPoint™ memory or other byte addressable write-in-place nonvolatile memory devices. In one embodiment, the memory device may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, resistive memory including the metal oxide base, the oxygen vacancy base and the conductive bridge Random Access Memory (CB-RAM), or spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory. The memory device may refer to the die itself and/or to a packaged memory product. In some embodiments, the memory device may comprise a transistor-less stackable cross point architecture in which memory cells sit at the intersection of word lines and bit lines and are individually addressable and in which bit storage is based on a change in bulk resistance.
- Referring now to
FIG. 8 , in some embodiments, thesled 400 may be embodied as acompute sled 800. Thecompute sled 800 is optimized, or otherwise configured, to perform compute tasks. Of course, as discussed above, thecompute sled 800 may rely on other sleds, such as acceleration sleds and/or storage sleds, to perform such compute tasks. Thecompute sled 800 includes various physical resources (e.g., electrical components) similar to the physical resources of thesled 400, which have been identified inFIG. 8 using the same reference numbers. The description of such components provided above in regard toFIGS. 6 and 7 applies to the corresponding components of thecompute sled 800 and is not repeated herein for clarity of the description of thecompute sled 800. - In the
illustrative compute sled 800, thephysical resources 620 are embodied asprocessors 820. Although only twoprocessors 820 are shown inFIG. 8 , it should be appreciated that thecompute sled 800 may includeadditional processors 820 in other embodiments. Illustratively, theprocessors 820 are embodied as high-performance processors 820 and may be configured to operate at a relatively high power rating. Although theprocessors 820 generate additional heat operating at power ratings greater than typical processors (which operate at around 155-230 W), the enhanced thermal cooling characteristics of the chassis-lesscircuit board substrate 602 discussed above facilitate the higher power operation. For example, in the illustrative embodiment, theprocessors 820 are configured to operate at a power rating of at least 250 W. In some embodiments, theprocessors 820 may be configured to operate at a power rating of at least 350 W. - In some embodiments, the
compute sled 800 may also include a processor-to-processor interconnect 842. Similar to the resource-to-resource interconnect 624 of thesled 400 discussed above, the processor-to-processor interconnect 842 may be embodied as any type of communication interconnect capable of facilitating processor-to-processor interconnect 842 communications. In the illustrative embodiment, the processor-to-processor interconnect 842 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the processor-to-processor interconnect 842 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to processor-to-processor communications. - The
compute sled 800 also includes acommunication circuit 830. Theillustrative communication circuit 830 includes a network interface controller (NIC) 832, which may also be referred to as a host fabric interface (HFI). TheNIC 832 may be embodied as, or otherwise include, any type of integrated circuit, discrete circuits, controller chips, chipsets, add-in-boards, daughtercards, network interface cards, other devices that may be used by thecompute sled 800 to connect with another compute device (e.g., with other sleds 400). In some embodiments, theNIC 832 may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors. In some embodiments, theNIC 832 may include a local processor (not shown) and/or a local memory (not shown) that are both local to theNIC 832. In such embodiments, the local processor of theNIC 832 may be capable of performing one or more of the functions of theprocessors 820. Additionally or alternatively, in such embodiments, the local memory of theNIC 832 may be integrated into one or more components of the compute sled at the board level, socket level, chip level, and/or other levels. - The
communication circuit 830 is communicatively coupled to anoptical data connector 834. Theoptical data connector 834 is configured to mate with a corresponding optical data connector of therack 240 when thecompute sled 800 is mounted in therack 240. Illustratively, theoptical data connector 834 includes a plurality of optical fibers which lead from a mating surface of theoptical data connector 834 to anoptical transceiver 836. Theoptical transceiver 836 is configured to convert incoming optical signals from the rack-side optical data connector to electrical signals and to convert electrical signals to outgoing optical signals to the rack-side optical data connector. Although shown as forming part of theoptical data connector 834 in the illustrative embodiment, theoptical transceiver 836 may form a portion of thecommunication circuit 830 in other embodiments. - In some embodiments, the
compute sled 800 may also include anexpansion connector 840. In such embodiments, theexpansion connector 840 is configured to mate with a corresponding connector of an expansion chassis-less circuit board substrate to provide additional physical resources to thecompute sled 800. The additional physical resources may be used, for example, by theprocessors 820 during operation of thecompute sled 800. The expansion chassis-less circuit board substrate may be substantially similar to the chassis-lesscircuit board substrate 602 discussed above and may include various electrical components mounted thereto. The particular electrical components mounted to the expansion chassis-less circuit board substrate may depend on the intended functionality of the expansion chassis-less circuit board substrate. For example, the expansion chassis-less circuit board substrate may provide additional compute resources, memory resources, and/or storage resources. As such, the additional physical resources of the expansion chassis-less circuit board substrate may include, but is not limited to, processors, memory devices, storage devices, and/or accelerator circuits including, for example, field programmable gate arrays (FPGA), application-specific integrated circuits (ASICs), security co-processors, graphics processing units (GPUs), machine learning circuits, or other specialized processors, controllers, devices, and/or circuits. - Referring now to
FIG. 9 , an illustrative embodiment of thecompute sled 800 is shown. As shown, theprocessors 820,communication circuit 830, andoptical data connector 834 are mounted to thetop side 650 of the chassis-lesscircuit board substrate 602. Any suitable attachment or mounting technology may be used to mount the physical resources of thecompute sled 800 to the chassis-lesscircuit board substrate 602. For example, the various physical resources may be mounted in corresponding sockets (e.g., a processor socket), holders, or brackets. In some cases, some of the electrical components may be directly mounted to the chassis-lesscircuit board substrate 602 via soldering or similar techniques. - As discussed above, the
individual processors 820 andcommunication circuit 830 are mounted to thetop side 650 of the chassis-lesscircuit board substrate 602 such that no two heat-producing, electrical components shadow each other. In the illustrative embodiment, theprocessors 820 andcommunication circuit 830 are mounted in corresponding locations on thetop side 650 of the chassis-lesscircuit board substrate 602 such that no two of those physical resources are linearly in-line with others along the direction of theairflow path 608. It should be appreciated that, although theoptical data connector 834 is in-line with thecommunication circuit 830, theoptical data connector 834 produces no or nominal heat during operation. - The
memory devices 720 of the compute sled 800 are mounted to the bottom side 750 of the chassis-less circuit board substrate 602 as discussed above in regard to the sled 400. Although mounted to the bottom side 750, the memory devices 720 are communicatively coupled to the processors 820 located on the top side 650 via the I/O subsystem 622. Because the chassis-less circuit board substrate 602 is embodied as a double-sided circuit board, the memory devices 720 and the processors 820 may be communicatively coupled by one or more vias, connectors, or other mechanisms extending through the chassis-less circuit board substrate 602. Of course, each processor 820 may be communicatively coupled to a different set of one or more memory devices 720 in some embodiments. Alternatively, in other embodiments, each processor 820 may be communicatively coupled to each memory device 720. In some embodiments, the memory devices 720 may be mounted to one or more memory mezzanines on the bottom side of the chassis-less circuit board substrate 602 and may interconnect with a corresponding processor 820 through a ball-grid array. - Each of the
processors 820 includes a heatsink 850 secured thereto. Due to the mounting of the memory devices 720 to the bottom side 750 of the chassis-less circuit board substrate 602 (as well as the vertical spacing of the sleds 400 in the corresponding rack 240), the top side 650 of the chassis-less circuit board substrate 602 includes additional "free" area or space that facilitates the use of heatsinks 850 having a larger size relative to traditional heatsinks used in typical servers. Additionally, due to the improved thermal cooling characteristics of the chassis-less circuit board substrate 602, none of the processor heatsinks 850 include cooling fans attached thereto. That is, each of the heatsinks 850 is embodied as a fan-less heatsink. - Referring now to
FIG. 10 , in some embodiments, thesled 400 may be embodied as anaccelerator sled 1000. Theaccelerator sled 1000 is optimized, or otherwise configured, to perform specialized compute tasks, such as machine learning, encryption, hashing, or other computational-intensive task. In some embodiments, for example, acompute sled 800 may offload tasks to theaccelerator sled 1000 during operation. Theaccelerator sled 1000 includes various components similar to components of thesled 400 and/or computesled 800, which have been identified inFIG. 10 using the same reference numbers. The description of such components provided above in regard toFIGS. 6, 7, and 8 apply to the corresponding components of theaccelerator sled 1000 and is not repeated herein for clarity of the description of theaccelerator sled 1000. - In the
illustrative accelerator sled 1000, thephysical resources 620 are embodied asaccelerator circuits 1020. Although only twoaccelerator circuits 1020 are shown inFIG. 10 , it should be appreciated that theaccelerator sled 1000 may includeadditional accelerator circuits 1020 in other embodiments. For example, as shown inFIG. 11 , theaccelerator sled 1000 may include fouraccelerator circuits 1020 in some embodiments. Theaccelerator circuits 1020 may be embodied as any type of processor, co-processor, compute circuit, or other device capable of performing compute or processing operations. For example, theaccelerator circuits 1020 may be embodied as, for example, field programmable gate arrays (FPGA), application-specific integrated circuits (ASICs), security co-processors, graphics processing units (GPUs), machine learning circuits, or other specialized processors, controllers, devices, and/or circuits. - In some embodiments, the
accelerator sled 1000 may also include an accelerator-to-accelerator interconnect 1042. Similar to the resource-to-resource interconnect 624 of the sled 600 discussed above, the accelerator-to-accelerator interconnect 1042 may be embodied as any type of communication interconnect capable of facilitating accelerator-to-accelerator communications. In the illustrative embodiment, the accelerator-to-accelerator interconnect 1042 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the accelerator-to-accelerator interconnect 1042 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to processor-to-processor communications. In some embodiments, theaccelerator circuits 1020 may be daisy-chained with aprimary accelerator circuit 1020 connected to theNIC 832 andmemory 720 through the I/O subsystem 622 and asecondary accelerator circuit 1020 connected to theNIC 832 andmemory 720 through aprimary accelerator circuit 1020. - Referring now to
FIG. 11, an illustrative embodiment of the accelerator sled 1000 is shown. As discussed above, the accelerator circuits 1020, communication circuit 830, and optical data connector 834 are mounted to the top side 650 of the chassis-less circuit board substrate 602. Again, the individual accelerator circuits 1020 and communication circuit 830 are mounted to the top side 650 of the chassis-less circuit board substrate 602 such that no two heat-producing, electrical components shadow each other as discussed above. The memory devices 720 of the accelerator sled 1000 are mounted to the bottom side 750 of the chassis-less circuit board substrate 602 as discussed above in regard to the sled 400. Although mounted to the bottom side 750, the memory devices 720 are communicatively coupled to the accelerator circuits 1020 located on the top side 650 via the I/O subsystem 622 (e.g., through vias). Further, each of the accelerator circuits 1020 may include a heatsink 1070 that is larger than a traditional heatsink used in a server. As discussed above with reference to the heatsinks 850, the heatsinks 1070 may be larger than traditional heatsinks because of the "free" area provided by the memory devices 720 being located on the bottom side 750 of the chassis-less circuit board substrate 602 rather than on the top side 650. - Referring now to
FIG. 12 , in some embodiments, thesled 400 may be embodied as astorage sled 1200. Thestorage sled 1200 is optimized, or otherwise configured, to store data in adata storage 1250 local to thestorage sled 1200. For example, during operation, acompute sled 800 or anaccelerator sled 1000 may store and retrieve data from thedata storage 1250 of thestorage sled 1200. Thestorage sled 1200 includes various components similar to components of thesled 400 and/or thecompute sled 800, which have been identified inFIG. 12 using the same reference numbers. The description of such components provided above in regard toFIGS. 6, 7 , and 8 apply to the corresponding components of thestorage sled 1200 and is not repeated herein for clarity of the description of thestorage sled 1200. - In the
illustrative storage sled 1200, thephysical resources 620 are embodied asstorage controllers 1220. Although only twostorage controllers 1220 are shown inFIG. 12 , it should be appreciated that thestorage sled 1200 may includeadditional storage controllers 1220 in other embodiments. Thestorage controllers 1220 may be embodied as any type of processor, controller, or control circuit capable of controlling the storage and retrieval of data into thedata storage 1250 based on requests received via thecommunication circuit 830. In the illustrative embodiment, thestorage controllers 1220 are embodied as relatively low-power processors or controllers. For example, in some embodiments, thestorage controllers 1220 may be configured to operate at a power rating of about 75 watts. - In some embodiments, the
storage sled 1200 may also include a controller-to-controller interconnect 1242. Similar to the resource-to-resource interconnect 624 of thesled 400 discussed above, the controller-to-controller interconnect 1242 may be embodied as any type of communication interconnect capable of facilitating controller-to-controller communications. In the illustrative embodiment, the controller-to-controller interconnect 1242 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the controller-to-controller interconnect 1242 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to processor-to-processor communications. - Referring now to
FIG. 13 , an illustrative embodiment of thestorage sled 1200 is shown. In the illustrative embodiment, thedata storage 1250 is embodied as, or otherwise includes, astorage cage 1252 configured to house one or more solid state drives (SSDs) 1254. To do so, thestorage cage 1252 includes a number of mountingslots 1256, each of which is configured to receive a correspondingsolid state drive 1254. Each of the mountingslots 1256 includes a number of drive guides 1258 that cooperate to define anaccess opening 1260 of thecorresponding mounting slot 1256. Thestorage cage 1252 is secured to the chassis-lesscircuit board substrate 602 such that the access openings face away from (i.e., toward the front of) the chassis-lesscircuit board substrate 602. As such, solid state drives 1254 are accessible while thestorage sled 1200 is mounted in a corresponding rack 204. For example, asolid state drive 1254 may be swapped out of a rack 240 (e.g., via a robot) while thestorage sled 1200 remains mounted in thecorresponding rack 240. - The
storage cage 1252 illustratively includes sixteen mounting slots 1256 and is capable of mounting and storing sixteen solid state drives 1254. Of course, the storage cage 1252 may be configured to store additional or fewer solid state drives 1254 in other embodiments. Additionally, in the illustrative embodiment, the solid state drives 1254 are mounted vertically in the storage cage 1252, but may be mounted in the storage cage 1252 in a different orientation in other embodiments. Each solid state drive 1254 may be embodied as any type of data storage device capable of storing long-term data. To do so, the solid state drives 1254 may include volatile and non-volatile memory devices discussed above. - As shown in
FIG. 13 , thestorage controllers 1220, thecommunication circuit 830, and theoptical data connector 834 are illustratively mounted to thetop side 650 of the chassis-lesscircuit board substrate 602. Again, as discussed above, any suitable attachment or mounting technology may be used to mount the electrical components of thestorage sled 1200 to the chassis-lesscircuit board substrate 602 including, for example, sockets (e.g., a processor socket), holders, brackets, soldered connections, and/or other mounting or securing techniques. - As discussed above, the
individual storage controllers 1220 and the communication circuit 830 are mounted to the top side 650 of the chassis-less circuit board substrate 602 such that no two heat-producing, electrical components shadow each other. For example, the storage controllers 1220 and the communication circuit 830 are mounted in corresponding locations on the top side 650 of the chassis-less circuit board substrate 602 such that no two of those electrical components are linearly in-line with each other along the direction of the airflow path 608. - The
memory devices 720 of the storage sled 1200 are mounted to the bottom side 750 of the chassis-less circuit board substrate 602 as discussed above in regard to the sled 400. Although mounted to the bottom side 750, the memory devices 720 are communicatively coupled to the storage controllers 1220 located on the top side 650 via the I/O subsystem 622. Again, because the chassis-less circuit board substrate 602 is embodied as a double-sided circuit board, the memory devices 720 and the storage controllers 1220 may be communicatively coupled by one or more vias, connectors, or other mechanisms extending through the chassis-less circuit board substrate 602. Each of the storage controllers 1220 includes a heatsink 1270 secured thereto. As discussed above, due to the improved thermal cooling characteristics of the chassis-less circuit board substrate 602 of the storage sled 1200, none of the heatsinks 1270 include cooling fans attached thereto. That is, each of the heatsinks 1270 is embodied as a fan-less heatsink. - Referring now to
FIG. 14, in some embodiments, the sled 400 may be embodied as a memory sled 1400. The memory sled 1400 is optimized, or otherwise configured, to provide other sleds 400 (e.g., compute sleds 800, accelerator sleds 1000, etc.) with access to a pool of memory (e.g., in two or more sets 1430, 1432 of memory devices 720) local to the memory sled 1400. For example, during operation, a compute sled 800 or an accelerator sled 1000 may remotely write to and/or read from one or more of the memory sets 1430, 1432 of the memory sled 1400 using a logical address space that maps to physical addresses in the memory sets 1430, 1432. The memory sled 1400 includes various components similar to components of the sled 400 and/or the compute sled 800, which have been identified in FIG. 14 using the same reference numbers. The description of such components provided above in regard to FIGS. 6, 7, and 8 applies to the corresponding components of the memory sled 1400 and is not repeated herein for clarity of the description of the memory sled 1400. - In the
illustrative memory sled 1400, the physical resources 620 are embodied as memory controllers 1420. Although only two memory controllers 1420 are shown in FIG. 14, it should be appreciated that the memory sled 1400 may include additional memory controllers 1420 in other embodiments. The memory controllers 1420 may be embodied as any type of processor, controller, or control circuit capable of controlling the writing and reading of data into the memory sets 1430, 1432 based on requests received via the communication circuit 830. In the illustrative embodiment, each memory controller 1420 is connected to a corresponding memory set 1430, 1432 to write to and read from memory devices 720 within the corresponding memory set 1430, 1432 on behalf of the sled 400 that has sent a request to the memory sled 1400 to perform a memory access operation (e.g., read or write). - In some embodiments, the
memory sled 1400 may also include a controller-to-controller interconnect 1442. Similar to the resource-to-resource interconnect 624 of the sled 400 discussed above, the controller-to-controller interconnect 1442 may be embodied as any type of communication interconnect capable of facilitating controller-to-controller communications. In the illustrative embodiment, the controller-to-controller interconnect 1442 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the controller-to-controller interconnect 1442 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to processor-to-processor communications. As such, in some embodiments, a memory controller 1420 may access, through the controller-to-controller interconnect 1442, memory that is within the memory set 1432 associated with another memory controller 1420. In some embodiments, a scalable memory controller is made of multiple smaller memory controllers, referred to herein as "chiplets", on a memory sled (e.g., the memory sled 1400). The chiplets may be interconnected (e.g., using EMIB (Embedded Multi-Die Interconnect Bridge)). The combined chiplet memory controller may scale up to a relatively large number of memory controllers and I/O ports (e.g., up to 16 memory channels). In some embodiments, the memory controllers 1420 may implement a memory interleave (e.g., one memory address is mapped to the memory set 1430, the next memory address is mapped to the memory set 1432, and the third address is mapped to the memory set 1430, etc.). The interleaving may be managed within the memory controllers 1420, or from CPU sockets (e.g., of the compute sled 800) across network links to the memory sets 1430, 1432, and may improve the latency associated with performing memory access operations as compared to accessing contiguous memory addresses from the same memory device.
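- The interleave described above amounts to a modulo mapping of consecutive addresses across the memory sets 1430, 1432. The following sketch is illustrative only; the 64-byte granularity and the helper names are assumptions made for illustration, not details of the memory controllers 1420.

```python
# Hypothetical illustration of interleaving addresses across two memory sets.
from typing import Tuple

INTERLEAVE_GRANULARITY = 64  # bytes per interleave unit; an assumed value for illustration
MEMORY_SETS = ("memory set 1430", "memory set 1432")


def map_address(address: int) -> Tuple[str, int]:
    """Map a physical address to (memory set, offset within that set)."""
    unit = address // INTERLEAVE_GRANULARITY
    target = MEMORY_SETS[unit % len(MEMORY_SETS)]  # alternate 1430, 1432, 1430, ...
    offset = (unit // len(MEMORY_SETS)) * INTERLEAVE_GRANULARITY + address % INTERLEAVE_GRANULARITY
    return target, offset


if __name__ == "__main__":
    for addr in (0, 64, 128, 192):
        print(addr, "->", map_address(addr))
    # Consecutive 64-byte units alternate between the two memory sets, so
    # accesses to contiguous addresses are served by different sets.
```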
- Further, in some embodiments, the memory sled 1400 may be connected to one or more other sleds 400 (e.g., in the same rack 240 or an adjacent rack 240) through a waveguide, using the waveguide connector 1480. In the illustrative embodiment, the waveguides are 64 millimeter waveguides that provide 16 Rx (i.e., receive) lanes and 16 Tx (i.e., transmit) lanes. Each lane, in the illustrative embodiment, is either 16 GHz or 32 GHz. In other embodiments, the frequencies may be different. Using a waveguide may provide high throughput access to the memory pool (e.g., the memory sets 1430, 1432) to another sled (e.g., a sled 400 in the same rack 240 or an adjacent rack 240 as the memory sled 1400) without adding to the load on the optical data connector 834. - Referring now to
FIG. 15, a system for executing one or more workloads (e.g., applications) may be implemented in accordance with the data center 100. In the illustrative embodiment, the system 1510 includes an orchestrator server 1520, which may be embodied as a managed node comprising a compute device (e.g., a compute sled 800) executing management software (e.g., a cloud operating environment, such as OpenStack) that is communicatively coupled to multiple sleds 400 including a large number of compute sleds 1530 (e.g., each similar to the compute sled 800), memory sleds 1540 (e.g., each similar to the memory sled 1400), accelerator sleds 1550 (e.g., each similar to the accelerator sled 1000), and storage sleds 1560 (e.g., each similar to the storage sled 1200). One or more of the sleds 1530, 1540, 1550, 1560 may be grouped into a managed node 1570, such as by the orchestrator server 1520, to collectively perform a workload (e.g., an application 1532 executed in a virtual machine or in a container). The managed node 1570 may be embodied as an assembly of physical resources 620, such as processors 820, memory resources 720, accelerator circuits 1020, or data storage 1250, from the same or different sleds 400. Further, the managed node may be established, defined, or "spun up" by the orchestrator server 1520 at the time a workload is to be assigned to the managed node or at any other time, and may exist regardless of whether any workloads are presently assigned to the managed node. In the illustrative embodiment, the orchestrator server 1520 may selectively allocate and/or deallocate physical resources 620 from the sleds 400 and/or add or remove one or more sleds 400 from the managed node 1570 as a function of quality of service (QoS) targets (e.g., performance targets associated with a throughput, latency, instructions per second, etc.) associated with a service level agreement for the workload (e.g., the application 1532). In doing so, the orchestrator server 1520 may receive telemetry data indicative of performance conditions (e.g., throughput, latency, instructions per second, etc.) in each sled 400 of the managed node 1570 and compare the telemetry data to the quality of service targets to determine whether the quality of service targets are being satisfied. If so, the orchestrator server 1520 may additionally determine whether one or more physical resources may be deallocated from the managed node 1570 while still satisfying the QoS targets, thereby freeing up those physical resources for use in another managed node (e.g., to execute a different workload). Alternatively, if the QoS targets are not presently satisfied, the orchestrator server 1520 may determine to dynamically allocate additional physical resources to assist in the execution of the workload (e.g., the application 1532) while the workload is executing.
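- The allocation logic just described (compare telemetry against the quality of service targets of the service level agreement, then grow or shrink the managed node 1570 accordingly) reduces to a small control loop. The sketch below is a simplified, hypothetical illustration; the node methods and threshold fields are assumed names, not the orchestrator server 1520's actual interface.

```python
# Simplified sketch of QoS-driven allocation for a managed node; all names and
# thresholds are assumptions made for illustration.
from dataclasses import dataclass


@dataclass
class QoSTargets:
    min_throughput_ops: float   # e.g., operations per second required by the SLA
    max_latency_ms: float


@dataclass
class Telemetry:
    throughput_ops: float
    latency_ms: float


def adjust_managed_node(node, telemetry: Telemetry, targets: QoSTargets) -> str:
    """Decide, for one evaluation interval, whether to grow or shrink the managed node."""
    satisfied = (telemetry.throughput_ops >= targets.min_throughput_ops
                 and telemetry.latency_ms <= targets.max_latency_ms)
    if satisfied:
        # Targets met: release a resource only if the node would still meet them.
        if node.can_release_resource_and_still_meet(targets):
            node.release_least_utilized_resource()
            return "deallocated one physical resource"
        return "no change"
    # Targets not met: dynamically allocate additional physical resources.
    node.allocate_additional_resource()
    return "allocated one additional physical resource"
```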
- Additionally, in some embodiments, the orchestrator server 1520 may identify trends in the resource utilization of the workload (e.g., the application 1532), such as by identifying phases of execution (e.g., time periods in which different operations, each having different resource utilization characteristics, are performed) of the workload (e.g., the application 1532) and pre-emptively identifying available resources in the data center 100 and allocating them to the managed node 1570 (e.g., within a predefined time period of the associated phase beginning). In some embodiments, the orchestrator server 1520 may model performance based on various latencies and a distribution scheme to place workloads among compute sleds and other resources (e.g., accelerator sleds, memory sleds, storage sleds) in the data center 100. For example, the orchestrator server 1520 may utilize a model that accounts for the performance of resources on the sleds 400 (e.g., FPGA performance, memory access latency, etc.) and the performance (e.g., congestion, latency, bandwidth) of the path through the network to the resource (e.g., FPGA). As such, the orchestrator server 1520 may determine which resource(s) should be used with which workloads based on the total latency associated with each potential resource available in the data center 100 (e.g., the latency associated with the performance of the resource itself in addition to the latency associated with the path through the network between the compute sled executing the workload and the sled 400 on which the resource is located). - In some embodiments, the
orchestrator server 1520 may generate a map of heat generation in thedata center 100 using telemetry data (e.g., temperatures, fan speeds, etc.) reported from thesleds 400 and allocate resources to managed nodes as a function of the map of heat generation and predicted heat generation associated with different workloads, to maintain a target temperature and heat distribution in thedata center 100. Additionally or alternatively, in some embodiments, theorchestrator server 1520 may organize received telemetry data into a hierarchical model that is indicative of a relationship between the managed nodes (e.g., a spatial relationship such as the physical locations of the resources of the managed nodes within thedata center 100 and/or a functional relationship, such as groupings of the managed nodes by the customers the managed nodes provide services for, the types of functions typically performed by the managed nodes, managed nodes that typically share or exchange workloads among each other, etc.). Based on differences in the physical locations and resources in the managed nodes, a given workload may exhibit different resource utilizations (e.g., cause a different internal temperature, use a different percentage of processor or memory capacity) across the resources of different managed nodes. Theorchestrator server 1520 may determine the differences based on the telemetry data stored in the hierarchical model and factor the differences into a prediction of future resource utilization of a workload if the workload is reassigned from one managed node to another managed node, to accurately balance resource utilization in thedata center 100. - To reduce the computational load on the
orchestrator server 1520 and the data transfer load on the network, in some embodiments, the orchestrator server 1520 may send self-test information to the sleds 400 to enable each sled 400 to locally (e.g., on the sled 400) determine whether telemetry data generated by the sled 400 satisfies one or more conditions (e.g., an available capacity that satisfies a predefined threshold, a temperature that satisfies a predefined threshold, etc.). Each sled 400 may then report back a simplified result (e.g., yes or no) to the orchestrator server 1520, which the orchestrator server 1520 may utilize in determining the allocation of resources to managed nodes.
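- As a non-limiting illustration, the sled-local evaluation described above might be sketched in Python as follows; the condition encoding (a metric name mapped to a comparison operator and a threshold) is an assumption of this sketch, not a defined message format.

def evaluate_self_test(telemetry: dict, conditions: dict) -> bool:
    # conditions maps a metric name to (operator, threshold), for example
    # {"temperature_c": ("<=", 75.0), "available_capacity": (">=", 0.2)}.
    for metric, (op, threshold) in conditions.items():
        value = telemetry.get(metric)
        if value is None:
            return False                      # missing data fails closed
        if op == "<=":
            if not value <= threshold:
                return False
        elif op == ">=":
            if not value >= threshold:
                return False
        else:
            return False                      # unknown operator fails closed
    return True

The sled would report only the boolean result of evaluate_self_test to the orchestrator server, rather than streaming the raw telemetry over the network.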
- Referring now to FIG. 16, a system 1600 for auto-migration in accelerated architectures may be implemented in accordance with the data center 100 described above with reference to FIG. 1. The illustrative system 1600 includes a resource hardware manager 1608 communicatively coupled via a network switch 1612 to multiple compute sleds 1602, a storage sled 1614, and multiple accelerator sleds 1618. In use, various software applications are executed on the compute sleds 1602 to perform required computations on data. To accelerate certain computations (e.g., reduce the amount of compute power, computation time, etc.), at least a portion of the application workloads (e.g., computationally intensive compute kernels) can be offloaded to a field-programmable gate array (FPGA) (see, e.g., the FPGAs 1622(a) and 1622(b)) of an accelerator sled 1618. As such, applications being executed in a host (e.g., one of the compute sleds 1602, one of the accelerator sleds 1618, etc.) can reduce the amount of required instructions per cycle (IPC) while waiting for the compute kernel(s) (i.e., routines compiled for high throughput accelerators) to complete. - Further, a phase
detection logic unit 1610 of eachcompute sled 1602 collects telemetry data (e.g., top-down microarchitecture analysis method (TMAM) metrics) indicative of a resource usage and/or performance condition of the respective sleds as application workloads are being performed on the respective sleds. The phasedetection logic unit 1610, which will be described in further detail below, is configured to analyze the collected data to identify when a given application, executing a set of hardware threads on a central processing unit (CPU) of acompute sled 1602 or anaccelerator sled 1618, changes to a different phase, such as one of a compute bound phase, an FPGA bound phase, a memory bound phase, etc. - Additionally, the phase
detection logic unit 1610 is configured to determine whether a given application needs to be migrated to another CPU of the compute sled 1602 or the accelerator sled 1618 on which the application is presently being executed, or migrated to another CPU of a different compute sled 1602 or accelerator sled 1618. To do so, the phase detection logic unit 1610 is further configured to determine whether the likelihood of staying in the new, present phase is high enough to migrate the hardware threads and/or an associated compute kernel to another sled, or sleds. Such a determination may depend on an anticipated duration of the present phase or another prediction algorithm. If the phase detection logic unit 1610 determines that the hardware threads and/or the compute kernel are to be migrated, the phase detection logic unit 1610 orchestrates the migration process and either offlines the previously used CPU or returns the previously used CPU to the operating system of the applicable sled. It should be appreciated that, as illustratively shown, the phase detection logic unit 1610, or at least a portion thereof, may reside on each of the compute sleds 1602, the network switch 1612, and/or the resource hardware manager 1608, depending on the embodiment. It should be further appreciated that, while illustratively shown in a disaggregated architecture, the functions described herein may be performed on a local multi-processor computing device or configurable platform (e.g., the Intel® Discrete Configurable Platform) in other embodiments.
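- By way of illustration only, the phase classification and the migrate-or-not determination described above might be sketched in Python as follows; the metric names, thresholds, and the duration-based heuristic are assumptions of this sketch rather than the claimed detection logic.

def classify_phase(ipc: float, peak_ipc: float,
                   memory_stall_fraction: float,
                   fpga_wait_fraction: float) -> str:
    # Classify the dominant bottleneck from simplified telemetry metrics.
    if fpga_wait_fraction > 0.5:
        return "fpga-bound"
    if memory_stall_fraction > 0.5:
        return "memory-bound"
    if ipc >= 0.8 * peak_ipc:
        return "compute-bound"
    return "indeterminate"

def should_migrate(new_phase: str, previous_phase: str,
                   expected_phase_duration_s: float,
                   migration_cost_s: float) -> bool:
    # Migrate only when the phase actually changed and is expected to last
    # long enough to amortize the cost of pausing and moving the threads.
    return (new_phase != previous_phase
            and new_phase != "indeterminate"
            and expected_phase_duration_s > 10.0 * migration_cost_s)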
- Each of the compute sleds 1602 may be embodied as any type of compute device capable of performing the functions described herein. As shown in FIG. 17, the illustrative network switch 1612 includes a compute engine 1702, an input/output (I/O) subsystem 1708, one or more data storage devices 1710, communication circuitry 1712, and one or more peripheral devices 1716. In some embodiments, the compute sleds 1602 may include other or additional components, such as those commonly found in a computing device. Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. - The
compute engine 1702 may be embodied as any type of device or collection of devices capable of performing the various compute functions as described herein. In some embodiments, thecompute engine 1702 may be embodied as a single device such as an integrated circuit, an embedded system, an FPGA, a system-on-a-chip (SOC), an application specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein. Additionally, in some embodiments, thecompute engine 1702 may include, or may be embodied as, a processor 1704 (i.e., a central processing unit (CPU)) andmemory 1706. - The
processor 1704 may be embodied as any type of processor capable of performing the functions described herein. For example, theprocessor 1704 may be embodied as one or more single-core processors, multi-core processors, digital signal processors, microcontrollers, or other processor(s) or processing/controlling circuit(s). In some embodiments, theprocessor 1704 may be embodied as, include, or otherwise be coupled to a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein. - As illustratively shown, the
processor 1704 may include the phase detection logic unit 1610 described with reference to FIG. 16. The phase detection logic unit 1610 may be embodied as a specialized device, such as a co-processor, an FPGA, or an ASIC, for performing the automatic migration operations described herein (e.g., collecting telemetry data indicative of performance conditions of the sleds as workloads are being performed thereon and analyzing the telemetry data to determine whether an automatic migration is to be performed as a result of a detected phase change). - The
memory 1706 may be embodied as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory or data storage capable of performing the functions described herein. It should be appreciated that thememory 1706 may include main memory (i.e., a primary memory) and/or cache memory (i.e., memory that can be accessed more quickly than the main memory). Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium. Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM). - One particular type of DRAM that may be used in a memory module is synchronous dynamic random access memory (SDRAM). In particular embodiments, DRAM of a memory component may comply with a standard promulgated by JEDEC, such as JESD79F for DDR SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4 (these standards are available at www.jedec.org). Such standards (and similar standards) may be referred to as DDR-based standards and communication interfaces of the storage devices that implement such standards may be referred to as DDR-based interfaces.
- In one embodiment, the memory device is a block addressable memory device, such as those based on NAND or NOR technologies. A memory device may also include future generation nonvolatile devices, such as a three dimensional crosspoint memory device (e.g., Intel 3D XPoint™ memory), or other byte addressable write-in-place nonvolatile memory devices. In one embodiment, the memory device may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, resistive memory including the metal oxide base, the oxygen vacancy base and the conductive bridge Random Access Memory (CB-RAM), or spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory. The memory device may refer to the die itself and/or to a packaged memory product.
- In some embodiments, 3D crosspoint memory (e.g., Intel 3D XPoint™ memory) may comprise a transistor-less stackable cross point architecture in which memory cells sit at the intersection of word lines and bit lines and are individually addressable and in which bit storage is based on a change in bulk resistance. In some embodiments, all or a portion of the
memory 1706 may be integrated into theprocessor 1704. In operation, thememory 1706 may store various software and data used during operation such as job request data, kernel map data, telemetry data, applications, programs, libraries, and drivers. - The
compute engine 1702 is communicatively coupled to other components of thenetwork switch 1612 via the I/O subsystem 1708, which may be embodied as circuitry and/or components to facilitate input/output operations with theprocessor 1704, thememory 1706, and other components of thenetwork switch 1612. For example, the I/O subsystem 1708 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 1708 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with one or more of theprocessor 1704, thememory 1706, and other components of thenetwork switch 1612, on a single integrated circuit chip. - The one or more
data storage devices 1710 may be embodied as any type of storage device(s) configured for short-term or long-term storage of data, such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices. Eachdata storage device 1710 may include a system partition that stores data and firmware code for thedata storage device 1710. Eachdata storage device 1710 may also include an operating system partition that stores data files and executables for an operating system. - The
communication circuitry 1712 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications between thenetwork switch 1612 and other compute devices (e.g., the compute sleds 1602, thestorage sled 1614, the accelerator sleds 1618, theresource hardware manager 1608, etc.). Accordingly, thecommunication circuitry 1712 may be configured to use any one or more communication technology (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication. - The
illustrative communication circuitry 1712 includes a network interface controller (NIC) 1714, which may also be referred to as a host fabric interface (HFI). TheNIC 1714 may be embodied as one or more add-in-boards, daughtercards, network interface cards, controller chips, chipsets, or other devices that may be used by thenetwork switch 1612 to connect with another compute device (e.g., one of the compute sleds 1602 ofFIG. 16 ). In some embodiments, theNIC 1714 may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors. In some embodiments, theNIC 1714 may include a local processor (not shown) and/or a local memory (not shown) that are both local to theNIC 1714. In such embodiments, the local processor of theNIC 1714 may be capable of performing one or more of the functions of theprocessor 1704 described herein. Additionally or alternatively, in such embodiments, the local memory of theNIC 1714 may be integrated into one or more components of thenetwork switch 1612 at the board level, socket level, chip level, and/or other levels. - The one or more peripheral devices 1716 may include any type of device that is usable to input information into the
network switch 1612 and/or receive information from the network switch 1612. The peripheral devices 1716 may be embodied as any auxiliary device usable to input information into the network switch 1612, such as a keyboard, a mouse, a microphone, a barcode reader, an image scanner, etc., or output information from the network switch 1612, such as a display, a speaker, graphics circuitry, a printer, a projector, etc. It should be appreciated that, in some embodiments, one or more of the peripheral devices 1716 may function as both an input device and an output device (e.g., a touchscreen display, a digitizer on top of a display screen, etc.). It should be further appreciated that the types of peripheral devices 1716 connected to the network switch 1612 may depend on, for example, the type and/or intended use of the network switch 1612. Additionally or alternatively, in some embodiments, the peripheral devices 1716 may include one or more ports, such as a USB port, for example, for connecting external peripheral devices to the network switch 1612. - Referring now to
FIG. 18 , acompute sled 1602 may establish anenvironment 1800 during operation. Theillustrative environment 1800 includes anetwork connection manager 1810 and the phasedetection logic unit 1610 ofFIG. 16 . Each of the components of theenvironment 1800 may be embodied as hardware, firmware, software, or a combination thereof. As such, in some embodiments, one or more of the components of theenvironment 1800 may be embodied as circuitry or a collection of electrical devices (e.g., networkconnection management circuitry 1810, phasedetection logic circuitry 1610, etc.). It should be appreciated that, in such embodiments, one or both of the networkconnection management circuitry 1810 and the phasedetection logic circuitry 1610 may form a portion of one or more of thecompute engine 1702, the one or moredata storage devices 1710, thecommunication circuitry 1712, and/or any other components of thenetwork switch 1612. - In the illustrative embodiment, the
environment 1800 additionally includes telemetry data 1802, phase change data 1804, and migration policy data 1806, each of which may be embodied as any data established by the network switch 1612. The telemetry data 1802 may include any data usable to identify resource usage and/or performance of a computing element (e.g., a CPU) of a compute sled 1602 or an accelerator sled 1618. In some embodiments, the telemetry data 1802 may also include information about network traffic passing through the network switch 1612, including network congestion information and frequencies of data access requests and responses to/from the compute sleds 1602, the accelerator sleds 1618, the storage sled 1614, etc. The phase change data 1804 may include any data usable to identify phase changes (e.g., thresholds, expected durations, historical information, etc.) of various applications. The migration policy data 1806 may include any data (e.g., rules or policies) usable to instruct the network switch 1612 how/where to migrate hardware threads and/or compute kernels (e.g., under certain conditions).
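- As a non-limiting illustration, the telemetry data 1802, phase change data 1804, and migration policy data 1806 might be represented by structures such as the following Python sketch; the field names are assumptions chosen for readability, not a defined data layout.

from dataclasses import dataclass, field

@dataclass
class TelemetryRecord:
    compute_element_id: str          # the CPU or FPGA the sample refers to
    ipc: float
    memory_stall_fraction: float
    network_congestion: float

@dataclass
class PhaseChangeRule:
    phase: str                       # e.g., "compute-bound", "memory-bound", "fpga-bound"
    ipc_threshold: float
    expected_duration_s: float       # historical duration of this phase

@dataclass
class MigrationPolicy:
    allow_cross_sled: bool = True
    allow_kernel_migration: bool = True
    preferred_targets: list = field(default_factory=list)  # e.g., ["low-performance-cpu"]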
- The network connection manager 1810, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to facilitate inbound and outbound network communications (e.g., network traffic, network packets, network flows, etc.) to and from the network switch 1612, respectively. To do so, the network connection manager 1810 is configured to receive and process data packets from one system or computing device (e.g., one of the compute sleds 1602, the resource hardware manager 1608, the storage sled 1614, one of the accelerator sleds 1618, etc.) and to prepare and send data packets to another computing device or system (e.g., one of the compute sleds 1602, the resource hardware manager 1608, the storage sled 1614, one of the accelerator sleds 1618, etc.). Accordingly, in some embodiments, at least a portion of the functionality of the network connection manager 1810 may be performed by the communication circuitry 1712, or more particularly by the NIC 1714. - As described previously, the phase
detection logic unit 1610 is configured to analyze collected telemetry data to determine a phase change and orchestrate a migration of an application (i.e., the hardware threads of an application) and, under certain conditions, a compute kernel (i.e., a routine compiled for high throughput accelerators) associated with the migrated application. To do so, the illustrative phase detection logic unit 1610 includes a telemetry data collector 1812, a phase change detector 1814, and a migration manager 1816. The telemetry data collector 1812, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof, is configured to collect telemetry data (e.g., the telemetry data 1802) reported by the compute sleds 1602 on which the workloads are executed and by the accelerator sleds 1618 to which the compute kernels have been offloaded. - As described previously, at least a portion of the phase
detection logic unit 1610 may be performed by the network switch 1612 and/or the resource hardware manager 1608. In such embodiments in which the telemetry data collection is performed by the resource hardware manager 1608, for example, the telemetry data may be destined for the resource hardware manager 1608 and collected upon receipt. In such embodiments in which the telemetry data is collected by the network switch 1612, for example, the network switch 1612 identifies the network packets containing the telemetry data as they pass through the network switch 1612 (e.g., through the network connection manager 1810) and stores the telemetry data locally in the network switch 1612. It should be appreciated that, in either embodiment, an association with an identifier of the corresponding sled, or more particularly a corresponding compute element (i.e., a CPU, an FPGA, etc.) of that sled, is stored with the telemetry data.
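- By way of illustration only, the switch-side collection path described above might be sketched in Python as follows; the packet representation and field names are assumptions of this sketch and do not correspond to any defined wire format.

telemetry_store = {}   # maps a compute element identifier to its collected samples

def is_telemetry_packet(packet: dict) -> bool:
    return packet.get("type") == "telemetry"

def on_packet(packet: dict) -> None:
    # Ordinary traffic is simply forwarded; telemetry-bearing packets are also
    # recorded locally, keyed by the reporting compute element (CPU or FPGA).
    if not is_telemetry_packet(packet):
        return
    key = packet["compute_element_id"]
    telemetry_store.setdefault(key, []).append(packet["metrics"])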
- The phase change detector 1814, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof, is configured to detect a phase change of an application subsequent to having executed a compute kernel. As described previously, the phases include, but are not limited to, a CPU bound phase, an FPGA bound phase, and a memory bound phase. For example, the phase change detector 1814 is configured to detect when the application changes its behavior from a CPU bound phase to a different phase after the compute kernel execution has started. To do so, the phase change detector 1814 is configured to analyze the collected telemetry data (e.g., the telemetry data 1802) to determine whether a certain condition, or conditions, exists which indicates a phase change. For example, the phase change detector 1814 may be configured to compare an IPC value to a threshold peak IPC value. In another example, the phase change detector 1814 may be configured to identify an amount of time a particular phase has taken historically to determine whether efficiencies can be realized by migrating the application's hardware threads to another compute element. - The
migration manager 1816, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof, is configured to migrate hardware threads associated with an application to another compute element. To do so, the migration manager 1816 is configured to receive an indication of a detected phase change (e.g., from the phase change detector 1814) that indicates the application is to be migrated. The migration manager 1816 is additionally configured to identify the other compute element the application is to be migrated to by transmitting a compute element identification request to the resource hardware manager 1608, which is usable by the resource hardware manager 1608 to identify the other compute element (e.g., based on requirements of the workload associated with the hardware threads). Further, the migration manager 1816 is configured to pause the running hardware threads, migrate their states to the identified other compute element, and resume the hardware threads. Additionally, the migration manager 1816 is configured to notify the appropriate operating system and/or the resource hardware manager 1608 of the completed hardware thread migration.
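- As a non-limiting illustration, the pause, transfer, resume, and notify sequence performed by the migration manager 1816 might be sketched in Python as follows; the helper callables are placeholders for the platform mechanisms that would actually quiesce threads and copy their state, and are assumptions of this sketch.

def migrate_hardware_threads(threads, source, target,
                             request_target, transfer_state, notify):
    if target is None:
        # Ask the resource hardware manager to identify a suitable compute
        # element for the workload associated with these hardware threads.
        target = request_target(threads)
    for t in threads:
        t.pause()                               # quiesce execution on the source
    transfer_state(threads, source, target)     # copy thread states to the target
    for t in threads:
        t.resume(target)                        # continue execution on the target
    source.offline()                            # or return it to the operating system
    notify(threads, target)                     # report the completed migration
    return target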
- It should be appreciated that each of the telemetry data collector 1812, the phase change detector 1814, and the migration manager 1816 may be separately embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof. For example, the telemetry data collector 1812 may be embodied as a hardware component, while the phase change detector 1814 and/or the migration manager 1816 may be embodied as virtualized hardware components or as some other combination of hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof. Further, it should be appreciated that, in other embodiments, the compute sleds 1602 and/or the resource hardware manager 1608 may include at least a portion of the phase detection logic unit 1610 and may therefore establish an environment similar to the environment 1800 described herein. - Referring back to
FIG. 16, the resource hardware manager 1608 may be embodied as any type of computing device capable of monitoring and managing resources of the compute sleds 1602, as well as performing the other functions described herein. For example, the resource hardware manager 1608 may be embodied as a computer, a distributed computing system, one or more sleds (e.g., the sleds 204-1, 204-2, 204-3, 204-4, etc.), a server (e.g., stand-alone, rack-mounted, blade, etc.), a multiprocessor system, a network appliance (e.g., physical or virtual), a desktop computer, a workstation, a laptop computer, a notebook computer, or a processor-based system. As shown in FIG. 19, an illustrative resource hardware manager 1608 has similar components to that of the network switch 1612 of FIG. 17, including a compute engine 1902 with a processor 1904 and a memory 1906, an I/O subsystem 1908, communication circuitry 1912 with a NIC 1914, and, in some embodiments, one or more data storage devices 1910 and/or one or more peripheral devices 1916. Accordingly, the similar or like components are not described herein to preserve clarity of the description. In some embodiments, the resource hardware manager 1608 may include other or additional components, such as those commonly found in a computing device. Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. - The
network switch 1612 may be embodied as any type of networking device capable of performing the functions described herein, including switching network packets between the compute sleds 1602, the resource hardware manager 1608, the storage sled 1614, and the accelerator sleds 1618, as well as any other computing devices communicatively coupled to the network switch 1612. Depending on the deployment environment, the network switch 1612 may be embodied as a top-of-rack switch, a middle-of-rack switch, or other Ethernet switch. It should be appreciated that the network switch 1612 may include components similar to those described in the illustrative resource hardware manager 1608 of FIG. 19 (e.g., a compute engine 1902 with one or more processors 1904, a memory 1906, an I/O subsystem 1908, one or more data storage devices 1910, communication circuitry 1912 with a NIC 1914, one or more peripheral devices 1916, etc.). Accordingly, the similar or like components are not described herein to preserve clarity of the description. It should be further appreciated that the network switch 1612 may include alternative and/or additional components, such as those commonly found in a packet-switching network device (e.g., various input/output devices and/or other components). - The
storage sled 1614 may be embodied as any type of storage device capable of performing the functions described herein, such as managing a pool of storage devices 1616 (e.g., physical storage resources 205-1). To do so, the storage sled 1614 may include a memory pool controller (not shown) embodied as virtual and/or physical hardware, firmware, software, or a combination thereof, which is configured to manage data into and out of the storage devices 1616. It should be appreciated that while only a single storage sled 1614 is shown, other embodiments may include more than one storage sled 1614. - The
storage devices 1616 may be embodied as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory or data storage capable of performing the functions described herein. Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium. Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM). - One particular type of DRAM that may be used in a memory module is synchronous dynamic random access memory (SDRAM). In particular embodiments, DRAM of a memory component may comply with a standard promulgated by JEDEC, such as JESD79F for DDR SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4 (these standards are available at www.jedec.org). Such standards (and similar standards) may be referred to as DDR-based standards and communication interfaces of the storage devices that implement such standards may be referred to as DDR-based interfaces.
- In one embodiment, the
storage devices 1616 may be embodied as a block addressable memory device, such as those based on NAND or NOR technologies. A memory device may also include future generation nonvolatile devices, such as a three dimensional (3D) crosspoint memory device (e.g., Intel 3D XPoint™ memory), or other byte addressable write-in-place nonvolatile memory devices. In such embodiments, the 3D crosspoint memory (e.g., Intel 3D XPoint™ memory) may comprise a transistor-less stackable cross point architecture in which memory cells sit at the intersection of word lines and bit lines and are individually addressable and in which bit storage is based on a change in bulk resistance. - In another embodiment, the
storage devices 1616 may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, resistive memory including the metal oxide base, the oxygen vacancy base and the conductive bridge Random Access Memory (CB-RAM), or spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory. The memory device may refer to the die itself and/or to a packaged memory product. - As described previously, the compute sleds 1602 may be pooled, as illustratively shown in the high-performance processing sleds 1134 of
FIG. 11. The illustrative compute sleds 1602 include a first compute sled, designated as compute sled (1) 1602 a, a second compute sled, designated as compute sled (2) 1602 b, and a third compute sled, designated as compute sled (N) 1602 c (e.g., in which the compute sled (N) 1602 c represents the “Nth” compute sled 1602 and “N” is a positive integer). The illustrative compute sled (1) 1602 a includes one or more high-performance CPUs 1604. The illustrative compute sled (2) 1602 b includes one or more low-performance CPUs 1606. It should be appreciated that the high-performance CPUs 1604 are “high-performance” relative to comparable benchmark test results of features of the low-performance CPUs 1606. For example, a high-performance CPU may be defined as a CPU having a clock frequency above a threshold value, a number of cores above a threshold value, a total power rating above a threshold value, and/or other CPU performance metric that is above a corresponding reference threshold value. In an illustrative example, a high-performance CPU 1604 may be embodied as a high-performance Intel® Xeon® processor and a low-performance CPU 1606 may be embodied as a low-performance Intel® Xeon® processor.
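- As a non-limiting illustration, the relative classification described above might be sketched in Python as follows; the reference thresholds are assumptions of this sketch and not values defined by the described embodiments.

def is_high_performance(clock_ghz: float, cores: int, tdp_watts: float,
                        ref_clock_ghz: float = 2.5,
                        ref_cores: int = 16,
                        ref_tdp_watts: float = 120.0) -> bool:
    # A CPU is treated as "high-performance" when at least one of its metrics
    # exceeds the corresponding reference threshold value.
    return (clock_ghz > ref_clock_ghz
            or cores > ref_cores
            or tdp_watts > ref_tdp_watts)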
- As described previously, the accelerator sleds 1618 may be pooled, as illustratively shown in the pooled accelerator sleds 1130 of FIG. 11. As shown in the illustrative system 1600, the accelerator sleds 1618 include a first accelerator sled, designated as accelerator sled (1) 1618 a, a second accelerator sled, designated as accelerator sled (2) 1618 b, and a third accelerator sled, designated as accelerator sled (N) 1618 c (e.g., in which the accelerator sled (N) 1618 c represents the “Nth” accelerator sled 1618 and “N” is a positive integer). The illustrative accelerator sled (1) 1618 a includes an FPGA 1622, designated as FPGA 1622 a. The illustrative accelerator sled (2) 1618 b includes another FPGA 1622, designated as FPGA 1622 b, as well as a low-performance CPU 1620. It should be appreciated that the low-performance CPU 1620 of the illustrative accelerator sled (2) 1618 b may be the same or similar “low-performance” CPU to the low-performance CPU 1606 of the illustrative compute sled (2) 1602 b. - It should be appreciated that, in some embodiments, one or more of the compute sleds 1602 and/or
accelerator sleds 1618 may be grouped into a managed node, such as by theresource hardware manager 1608, to collectively perform a workload, such as an application. A managed node may be embodied as an assembly of resources, such as compute resources, memory resources, storage resources, or other resources from the same or different sleds or racks. - Further, a managed node may be established, defined, or “spun up” by the
resource hardware manager 1608 at the time a workload is to be assigned to the managed node or at any other time, and may exist regardless of whether any workloads are presently assigned to the managed node. Theresource hardware manager 1608 may, in some embodiments, perform one or more orchestration operations in support of a cloud operating environment, such as OpenStack, and managed nodes established by theresource hardware manager 1608 may execute one or more applications or processes (i.e., workloads), such as in the VMs or containers, on behalf of a user of a client device (not shown) communicatively coupled to the resource hardware manager 1608 (e.g., via a network). - Referring now to
FIG. 20, in use, one of the compute sleds 1602 may execute a method 2000 for offloading a compute kernel (see, e.g., the compute kernel 2204 of FIGS. 22A-B, 2304 of FIGS. 23A-B, and 2404 of FIGS. 24A-B) to an FPGA 1622 of one of the accelerator sleds 1618 by an application (see, e.g., the applications 2202 of FIGS. 22A-B, 2302 of FIGS. 23A-B, and 2402 of FIGS. 24A-B) presently executing on the compute sled 1602. The method 2000 begins in block 2002, in which the application determines whether to offload the compute kernel (i.e., a routine compiled for high throughput accelerators). If so, the method 2000 advances to block 2004, in which the application identifies an FPGA of one of the accelerator sleds 1618 to offload the compute kernel (e.g., an accelerator function unit) to. It should be appreciated that, in some embodiments, to identify the FPGA, the application may transmit an FPGA identification request to the resource hardware manager 1608, which may perform the actual identification of the FPGA and notify the compute sled 1602 of the identified FPGA. In block 2006, the application executes the compute kernel on the identified FPGA. In block 2008, the application notifies a phase detection logic unit of the execution of the compute kernel.
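- By way of illustration only, the offload flow of the method 2000 might be sketched in Python as follows; the helper callables stand in for the FPGA identification request, the kernel launch, and the notification of blocks 2004 through 2008, and are assumptions of this sketch.

def offload_compute_kernel(kernel, should_offload,
                           request_fpga, launch_on_fpga,
                           notify_phase_detection):
    if not should_offload(kernel):          # block 2002
        return None
    fpga = request_fpga(kernel)             # block 2004, possibly via the
                                            # resource hardware manager
    launch_on_fpga(kernel, fpga)            # block 2006
    notify_phase_detection(kernel, fpga)    # block 2008
    return fpga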
- Referring now to FIGS. 21A and 21B, in use, a compute sled (e.g., one of the compute sleds 1602 of FIG. 16), or more particularly the phase detection logic unit 1610 of the compute sled 1602, may execute a method 2100 for auto-migration in accelerated architectures. The method 2100 begins in block 2102, in which the compute sled 1602 determines whether to monitor an application (i.e., the hardware threads associated with the application). For example, the compute sled 1602 may receive an indication from the application which indicates the execution of a compute kernel that was offloaded by the application to an FPGA 1622. If the compute sled 1602 determines the application is to be monitored, the method 2100 advances to block 2104, in which the compute sled 1602 monitors hardware threads associated with the application to be monitored. To do so, in block 2106, the compute sled 1602 collects telemetry data corresponding to the hardware threads to be monitored. For example, in block 2108, the compute sled 1602 collects resource usage information. Additionally, in block 2110, the compute sled 1602 collects CPU core performance data, such as a number of instructions per cycle being executed at a given point in time and other metrics related to whether the performance of the application is being CPU bound, memory bound, or acceleration bound. - In
block 2112, thecompute sled 1602 analyzes the collected telemetry to identify a phase change. To do so, in block 2114, thecompute sled 1602 compares at least a portion of the telemetry data to one or more corresponding thresholds. For example, thecompute sled 1602 may compare an IPC value against a peak IPC threshold for a particular compute element (e.g., a CPU). In block 2116, thecompute sled 1602 determines whether a phase change has been detected as a result of the analysis performed inblock 2112. As described previously the phases include, but are not limited to, a CPU bound phase, an FPGA bound phase, and a memory bound phase. If not, themethod 2100 returns to block 2104 to continue to monitor the hardware threads; otherwise, if a phase change has been detected (e.g., from a CPU bound phase to another phase) themethod 2100 advances to block 2118. Inblock 2118, thecompute sled 1602 identifies a new compute element to migrate the hardware threads to. - It should be appreciated that, in some embodiments, the
compute sled 1602 may not be capable of identifying the new compute element (e.g., due to the compute sled 1602 not having the necessary resource information available to do so). Accordingly, in such embodiments, the compute sled 1602 may transmit a request (e.g., a compute element identification request) to the resource hardware manager 1608 requesting the resource hardware manager 1608 to identify the new compute element and return the identified compute element. It should be appreciated that, in such embodiments, the resource hardware manager 1608 may be configured to identify the new compute element based on available resources of the compute sled 1602 on which the hardware threads are presently executing, the available resources of the other compute sleds 1602, and resource requirements of the workload associated with the hardware threads.
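- As a non-limiting illustration, the identification performed by the resource hardware manager 1608 might be sketched in Python as follows; the candidate and requirement fields, and the headroom-based tie-breaking, are assumptions of this sketch rather than a defined selection algorithm.

def identify_compute_element(candidates, requirements):
    # candidates: iterable of dicts such as
    #   {"id": "sled2/cpu0", "free_cores": 8, "free_memory_gb": 32, "class": "low-performance"}
    # requirements: dict such as {"cores": 4, "memory_gb": 16, "class": "low-performance"}
    feasible = [c for c in candidates
                if c["free_cores"] >= requirements["cores"]
                and c["free_memory_gb"] >= requirements["memory_gb"]
                and c["class"] == requirements.get("class", c["class"])]
    if not feasible:
        return None
    # Prefer the candidate that retains the most headroom after placement.
    return max(feasible, key=lambda c: c["free_cores"] - requirements["cores"])["id"]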
- In block 2120, the compute sled 1602 migrates the hardware threads to the identified new compute element. To do so, in block 2122, the compute sled 1602 pauses the hardware threads running on the present compute element. Additionally, in block 2124, the compute sled 1602 migrates the hardware thread states to the new compute element. Further, in block 2126, the compute sled 1602 resumes the migrated hardware threads. Finally, in block 2128, the compute sled 1602 takes the previously used compute element offline. In block 2130, the compute sled 1602 notifies the respective operating system associated with the application of the migration. In some embodiments, the compute sled 1602 may return the offlined compute element to the respective operating system. - In
block 2132, the compute sled 1602 determines whether to also migrate the compute kernel associated with the migrated application from the FPGA 1622 on which the compute kernel is presently executing to a different FPGA 1622 (e.g., of a different one of the accelerator sleds 1618). If not, the method 2100 branches to block 2144 of FIG. 21B, which is described below; otherwise, if the compute sled 1602 determines to migrate the compute kernel, the method 2100 branches to block 2134 of FIG. 21B. In block 2134, the compute sled 1602 determines another FPGA to migrate the compute kernel to. Similar to identifying the new compute element to migrate the hardware threads to, the compute sled 1602 may not be capable of determining the FPGA, in some embodiments. Accordingly, in such embodiments, the compute sled 1602 may transmit a request (e.g., an FPGA identification request) to the resource hardware manager 1608 requesting the resource hardware manager 1608 to identify the new FPGA and return the identified FPGA. It should be appreciated that, in such embodiments, the resource hardware manager 1608 may be configured to identify the new FPGA based on available resources of the accelerator sled 1618 on which the compute kernel is presently executing, the available resources of the other accelerator sleds 1618, and resource requirements of the compute kernel. - In
block 2136, the compute sled 1602 migrates the compute kernel to the determined new FPGA. In block 2138, the compute sled 1602 notifies the application associated with the compute kernel of the compute kernel's migration to the new FPGA. In block 2140, the compute sled 1602 monitors a completion status of the compute kernel. In block 2142, the compute sled 1602 monitors a phase of the corresponding application. In block 2144, the compute sled 1602 determines whether to migrate the application from the new compute element to which the hardware threads were migrated in block 2120. To do so, for example, the compute sled 1602 may determine to migrate the application in response to having determined the compute kernel operation has completed, or is about to complete. Additionally or alternatively, the compute sled 1602 may determine to migrate the application in response to having detected the phase has changed back to a CPU bound phase.
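- By way of illustration only, the check performed around blocks 2140 through 2144 might be sketched in Python as follows; the progress representation and the completion threshold are assumptions of this sketch.

def should_migrate_application_back(kernel_progress: float,
                                    current_phase: str,
                                    completion_threshold: float = 0.95) -> bool:
    # Migrate the application again once the offloaded kernel is complete or
    # nearly complete, or once the workload has returned to a CPU bound phase.
    kernel_done_or_almost = kernel_progress >= completion_threshold
    back_to_cpu_bound = current_phase == "cpu-bound"
    return kernel_done_or_almost or back_to_cpu_bound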
- If the compute sled 1602 determines not to migrate the application, the method 2100 returns to block 2140 to continue monitoring the completion status of the compute kernel, as well as to continue monitoring the phase of the corresponding application in block 2142. Otherwise, if the compute sled 1602 determines to migrate the application, the method 2100 advances to block 2146 in which the compute sled 1602 identifies another new compute element to migrate the hardware threads to. As noted previously, the compute sled 1602 may rely on the resource hardware manager 1608 to identify the other new compute element and notify the compute sled 1602 of the identified other new compute element. In block 2148, the compute sled 1602 migrates the hardware threads to the identified other new compute element. To do so, in block 2150, the compute sled 1602 pauses the hardware threads running on the present compute element. Additionally, in block 2152, the compute sled 1602 migrates the hardware thread states to the other new compute element. Further, in block 2154, the compute sled 1602 resumes the migrated hardware threads. Finally, in block 2156, the compute sled 1602 offlines the previously used compute element. In block 2158, the compute sled 1602 notifies the respective operating system and the associated application of the successful migration. Accordingly, the application can make any reconfiguration changes to the application's software/network parameters as may be required as a result of the migration. - As noted previously, at least a portion of the phase
detection logic unit 1610 may reside in one or more of the compute sleds 1602, the resource hardware manager 1608, and the network switch 1612, in other embodiments. Accordingly, it should be appreciated that, in such embodiments, at least a portion of the method 2100 may be performed by the network switch 1612 and/or the resource hardware manager 1608 in addition to, or alternatively to, the compute sleds 1602 as described herein. It should be further appreciated that while the method 2100 has been illustratively described as being performed by a disaggregated architecture, the functions described herein may be performed, in other embodiments, by a platform including a local multi-processor computing device and at least one FPGA, or a configurable platform (e.g., the Intel® Discrete Configurable Platform) having multiple processors and at least one FPGA. - As described with respect to the
method 2000 of FIG. 20, an application presently executing on a compute element (e.g., a high-performance CPU 1604) may offload a compute kernel to an FPGA 1622 of an accelerator sled 1618 (e.g., the FPGA 1622 a of the accelerator sled (1) 1618 a, the FPGA 1622 b of the accelerator sled (2) 1618 b, etc.). Further, the application may notify a phase detection logic unit 1610 of the offload such that, as described in the method 2100 of FIGS. 21A and 21B, the phase detection logic unit 1610 can monitor a phase of the application via the collection/analysis of telemetry data associated with the resources being used by the application. Additionally, upon detecting a phase change, the phase detection logic unit 1610 may determine to migrate the application and, under certain conditions, the associated compute kernel. Accordingly, each of FIGS. 22A and 22B, 23A and 23B, and 24A and 24B illustrates non-limiting example application/compute kernel migrations. - Referring now to
FIGS. 22A and 22B, an illustrative example for auto-migration of an application is shown in which an application is consolidated with another application in one of the compute sleds 1602. As illustratively shown in pre-migration FIG. 22A, the compute sled (1) 1602 a includes a first high-performance CPU 1604, designated as high-performance CPU 1604 a, and a second high-performance CPU 1604, designated as high-performance CPU 1604 b. Each of the high-performance CPUs 1604 a, 1604 b is executing an application 2202. A first application 2202, designated as application (1) 2202 a, is presently being executed on the high-performance CPU 1604 a. A second application 2202, designated as application (2) 2202 b, is presently being executed on the high-performance CPU 1604 b. Additionally, the accelerator sled (1) 1618 a is shown having a compute kernel 2204 presently executing in the FPGA 1622 a of the accelerator sled (1) 1618 a. For the purposes of the illustrative example, it should be appreciated that the compute kernel 2204 is associated with (i.e., was offloaded by) application (1) 2202 a. - As illustratively shown in post-migration
FIG. 22B, the application (1) 2202 a has been migrated from the high-performance CPU (1) 1604 a to the high-performance CPU (2) 1604 b (i.e., consolidated with the application (2) 2202 b). Additionally, the compute kernel is migrated from the FPGA 1622 a of the accelerator sled (1) 1618 a to the FPGA 1622 b of the accelerator sled (2) 1618 b. As described previously, in determining whether to migrate the application (1) 2202 a to the high-performance CPU (2) 1604 b and whether to migrate the compute kernel 2204, the phase detection logic unit 1610 is configured to identify resource usage/performance of a workload of the application (1) 2202 a while the compute kernel 2204 is executing and compare the identified resource usage to a corresponding usage/performance threshold, as well as the phase of the application. For example, as described previously, the phase detection logic unit 1610 may be configured to identify a present IPC value and compare the identified present IPC value to an IPC peak threshold value, as well as identify the present phase (e.g., FPGA bound in FIG. 22A and CPU bound in FIG. 22B). - Referring now to
FIGS. 23A and 23B, an illustrative example for auto-migration of an application is shown in which an application is migrated from the high-performance CPU 1604 of the compute sled 1602 a to the low-performance CPU 1606 of the compute sled 1602 b. As illustratively shown in pre-migration FIG. 23A, an application 2302 is presently being executed by the high-performance CPU 1604 of the compute sled (1) 1602 a. Additionally, a compute kernel 2304 is presently executing on the FPGA 1622 a of the accelerator sled (1) 1618 a. As illustratively shown in post-migration FIG. 23B, the application 2302 has been migrated to the low-performance CPU 1606 of the compute sled (2) 1602 b and the compute kernel has not been migrated. - Referring now to
FIGS. 24A and 24B, an illustrative example for auto-migration is shown in which an application and a compute kernel are both migrated to the same accelerator sled (e.g., one of the accelerator sleds 1618). As illustratively shown in pre-migration FIG. 24A, an application 2402 is presently being executed by the high-performance CPU 1604 of the compute sled (1) 1602 a and a compute kernel 2404 is presently executing on the FPGA 1622 a of the accelerator sled (1) 1618 a. As illustratively shown in post-migration FIG. 24B, the application 2402 has been migrated to the low-performance CPU 1620 of the accelerator sled (2) 1618 b and the compute kernel 2404 has been migrated from the FPGA 1622 a of the accelerator sled (1) 1618 a to the FPGA 1622 b of the accelerator sled (2) 1618 b. - Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below.
- Example 1 includes a compute sled for auto-migration in accelerated architectures, the compute sled comprising a compute engine to receive, from an application executed on a first compute element of a compute sled of a plurality of compute sleds, an indication that a compute kernel associated with the application has been offloaded to a field-programmable gate array (FPGA) of an accelerator sled of a plurality of accelerator sleds, wherein each of the plurality of accelerator sleds and the plurality of compute sleds are communicatively coupled to the compute sled; monitor a plurality of hardware threads associated with the application; detect whether a phase change has been detected as a function of the monitored hardware threads; and migrate, in response to detection of the phase change, the hardware threads to a second compute element.
- Example 2 includes the subject matter of Example 1, and wherein to monitor the plurality of hardware threads comprises to collect telemetry data corresponding to one or more hardware resources used by the hardware threads during execution.
- Example 3 includes the subject matter of any of Examples 1 and 2, and wherein to collect the telemetry data includes to collect an instructions per cycle (IPC) value of the first compute element.
- Example 4 includes the subject matter of any of Examples 1-3, and wherein to detect whether the phase change has been detected comprises to compare the IPC value of the first compute element to a peak IPC threshold value.
- Example 5 includes the subject matter of any of Examples 1-4, and wherein to detect whether the phase change has been detected comprises to identify a previous phase as a central processing unit (CPU) bound phase and identify a present phase as an FPGA bound phase.
- Example 6 includes the subject matter of any of Examples 1-5, and wherein to detect whether the phase change has been detected comprises to identify a previous phase as a central processing unit (CPU) bound phase and identify a present phase as a memory bound phase.
- Example 7 includes the subject matter of any of Examples 1-6, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein to migrate the hardware threads to the second compute element in response to having detected the phase change comprises to migrate the hardware threads to another high-performance CPU of the compute sled.
- Example 8 includes the subject matter of any of Examples 1-7, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein to migrate the hardware threads to the second compute element in response to having detected the phase change comprises to migrate the hardware threads to a high-performance CPU of another compute sled of the plurality of compute sleds.
- Example 9 includes the subject matter of any of Examples 1-8, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein to migrate the hardware threads to the second compute element in response to having detected the phase change comprises to migrate the hardware threads to a low-performance CPU of another compute sled of the plurality of compute sleds.
- Example 10 includes the subject matter of any of Examples 1-9, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein to migrate the hardware threads to the second compute element in response to having detected the phase change comprises to migrate the hardware threads to a low-performance CPU of the accelerator sled.
- Example 11 includes the subject matter of any of Examples 1-10, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein to migrate the hardware threads to the second compute element in response to having detected the phase change comprises to migrate the hardware threads to a low-performance CPU of another accelerator sled of the plurality of accelerator sleds.
- Example 12 includes the subject matter of any of Examples 1-11, and wherein to migrate the hardware threads to the second compute element comprises to pause the hardware threads at the first compute element, migrate states of the hardware threads from the first compute element to the second compute element, resume the migrated hardware threads at the second compute element, and offline the first compute element.
- Example 13 includes the subject matter of any of Examples 1-12, and wherein the compute engine is further to migrate the compute kernel to another FPGA of another accelerator sled of the plurality of accelerator sleds.
- Example 14 includes the subject matter of any of Examples 1-13, and wherein the compute engine is further to receive an indication that indicates the compute kernel has completed; and migrate, in response to having received the indication, the application to a third compute element.
- Example 15 includes the subject matter of any of Examples 1-14, and wherein to migrate the application to the third compute element comprises to migrate the application to a high-performance CPU of one of the plurality of compute sleds.
- Example 16 includes the subject matter of any of Examples 1-15, and wherein to migrate the hardware threads to the third compute element comprises to pause the hardware threads at the second compute element, migrate states of the hardware threads from the second compute element to the third compute element, resume the migrated hardware threads at the third compute element, and offline the second compute element.
- Example 17 includes a method for auto-migration in accelerated architectures, the method comprising receiving, by a compute sled, from an application executed on a first compute element of a compute sled of a plurality of compute sleds, an indication that a compute kernel associated with the application has been offloaded to a field-programmable gate array (FPGA) of an accelerator sled of a plurality of accelerator sleds, wherein each of the plurality of accelerator sleds and the plurality of compute sleds are communicatively coupled to the compute sled; monitoring, by the compute sled, a plurality of hardware threads associated with the application; detecting, by the compute sled, whether a phase change has been detected as a function of the monitored hardware threads; and migrating, by the compute sled and in response to detection of the phase change, the hardware threads to a second compute element.
- Example 18 includes the subject matter of Example 17, and wherein monitoring the plurality of hardware threads comprises collecting telemetry data corresponding to one or more hardware resources used by the hardware threads during execution.
- Example 19 includes the subject matter of any of Examples 17 and 18, and wherein collecting the telemetry data includes collecting an instructions per cycle (IPC) value of the first compute element.
- Example 20 includes the subject matter of any of Examples 17-19, and wherein detecting whether the phase change has been detected comprises comparing the IPC value of the first compute element to a peak IPC threshold value.
- Example 21 includes the subject matter of any of Examples 17-20, and wherein detecting whether the phase change has been detected comprises identifying a previous phase as a central processing unit (CPU) bound phase and identifying a present phase as an FPGA bound phase.
- Example 22 includes the subject matter of any of Examples 17-21, and wherein detecting whether the phase change has been detected comprises identifying a previous phase as a central processing unit (CPU) bound phase and identifying a present phase as a memory bound phase.
- Example 23 includes the subject matter of any of Examples 17-22, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein migrating the hardware threads to the second compute element in response to having detected the phase change comprises migrating the hardware threads to another high-performance CPU of the compute sled.
- Example 24 includes the subject matter of any of Examples 17-23, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein migrating the hardware threads to the second compute element in response to having detected the phase change comprises migrating the hardware threads to a high-performance CPU of another compute sled of the plurality of compute sleds.
- Example 25 includes the subject matter of any of Examples 17-24, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein migrating the hardware threads to the second compute element in response to having detected the phase change comprises migrating the hardware threads to a low-performance CPU of another compute sled of the plurality of compute sleds.
- Example 26 includes the subject matter of any of Examples 17-25, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein migrating the hardware threads to the second compute element in response to having detected the phase change comprises migrating the hardware threads to a low-performance CPU of the accelerator sled.
- Example 27 includes the subject matter of any of Examples 17-26, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein migrating the hardware threads to the second compute element in response to having detected the phase change comprises migrating the hardware threads to a low-performance CPU of another accelerator sled of the plurality of accelerator sleds.
- Example 28 includes the subject matter of any of Examples 17-27, and wherein migrating the hardware threads to the second compute element comprises pausing the hardware threads at the first compute element; migrating states of the hardware threads from the first compute element to the second compute element; resuming the migrated hardware threads at the second compute element; and offlining the first compute element.
- Example 29 includes the subject matter of any of Examples 17-28, and further including migrating, by the compute sled, the compute kernel to another FPGA of another accelerator sled of the plurality of accelerator sleds.
- Example 30 includes the subject matter of any of Examples 17-29, and further including receiving, by the compute sled, an indication that indicates the compute kernel has completed; and migrating, by the compute sled and in response to having received the indication, the application to a third compute element.
- Example 31 includes the subject matter of any of Examples 17-30, and wherein migrating the application to the third compute element comprises migrating the application to a high-performance CPU of one of the plurality of compute sleds.
- Example 32 includes the subject matter of any of Examples 17-31, and wherein migrating the hardware threads to the third compute element comprises pausing the hardware threads at the second compute element; migrating states of the hardware threads from the second compute element to the third compute element; resuming the migrated hardware threads at the third compute element; and offlining the second compute element.
- Example 33 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause a compute sled to perform the method of any of Examples 17-32.
- Example 34 includes a compute sled for auto-migration in accelerated architectures, the compute sled comprising one or more processors; one or more memory devices having stored therein a plurality of instructions that, when executed by the one or more processors, cause the compute sled to perform the method of any of Examples 17-32.
- Example 35 includes a compute sled for auto-migration in accelerated architectures, the compute sled comprising phase detection logic circuitry to receive, from an application executed on a first compute element of a compute sled of a plurality of compute sleds, an indication that a compute kernel associated with the application has been offloaded to a field-programmable gate array (FPGA) of an accelerator sled of a plurality of accelerator sleds, wherein each of the plurality of accelerator sleds and the plurality of compute sleds are communicatively coupled to the compute sled; monitor a plurality of hardware threads associated with the application; detect whether a phase change has been detected as a function of the monitored hardware threads; and migrate, in response to detection of the phase change, the hardware threads to a second compute element.
- Example 36 includes the subject matter of Example 35, and wherein to monitor the plurality of hardware threads comprises to collect telemetry data corresponding to one or more hardware resources used by the hardware threads during execution.
- Example 37 includes the subject matter of any of Examples 35 and 36, and wherein to collect the telemetry data includes to collect an instructions per cycle (IPC) value of the first compute element.
- Example 38 includes the subject matter of any of Examples 35-37, and wherein to detect whether the phase change has been detected comprises to compare the IPC value of the first compute element to a peak IPC threshold value.
- Example 39 includes the subject matter of any of Examples 35-38, and wherein to detect whether the phase change has been detected comprises to identify a previous phase as a central processing unit (CPU) bound phase and identify a present phase as an FPGA bound phase.
- Example 40 includes the subject matter of any of Examples 35-39, and wherein to detect whether the phase change has been detected comprises to identify a previous phase as a central processing unit (CPU) bound phase and identify a present phase as a memory bound phase.
- Example 41 includes the subject matter of any of Examples 35-40, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein to migrate the hardware threads to the second compute element in response to having detected the phase change comprises to migrate the hardware threads to another high-performance CPU of the compute sled.
- Example 42 includes the subject matter of any of Examples 35-41, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein to migrate the hardware threads to the second compute element in response to having detected the phase change comprises to migrate the hardware threads to a high-performance CPU of another compute sled of the plurality of compute sleds.
- Example 43 includes the subject matter of any of Examples 35-42, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein to migrate the hardware threads to the second compute element in response to having detected the phase change comprises to migrate the hardware threads to a low-performance CPU of another compute sled of the plurality of compute sleds.
- Example 44 includes the subject matter of any of Examples 35-43, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein to migrate the hardware threads to the second compute element in response to having detected the phase change comprises to migrate the hardware threads to a low-performance CPU of the accelerator sled.
- Example 45 includes the subject matter of any of Examples 35-44, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein to migrate the hardware threads to the second compute element in response to having detected the phase change comprises to migrate the hardware threads to a low-performance CPU of another accelerator sled of the plurality of accelerator sleds.
- Example 46 includes the subject matter of any of Examples 35-45, and wherein to migrate the hardware threads to the second compute element comprises to pause the hardware threads at the first compute element, migrate states of the hardware threads from the first compute element to the second compute element, resume the migrated hardware threads at the second compute element, and offline the first compute element.
- Example 47 includes the subject matter of any of Examples 35-46, and wherein the phase detection logic circuitry is further to migrate the compute kernel to another FPGA of another accelerator sled of the plurality of accelerator sleds.
- Example 48 includes the subject matter of any of Examples 35-47, and wherein the phase detection logic circuitry is further to receive an indication that indicates the compute kernel has completed; and migrate, in response to having received the indication, the application to a third compute element.
- Example 49 includes the subject matter of any of Examples 35-48, and wherein to migrate the application to the third compute element comprises to migrate the application to a high-performance CPU of one of the plurality of compute sleds.
- Example 50 includes the subject matter of any of Examples 35-49, and wherein to migrate the hardware threads to the third compute element comprises to pause the hardware threads at the second compute element, migrate states of the hardware threads from the second compute element to the third compute element, resume the migrated hardware threads at the third compute element, and offline the second compute element.
- Example 51 includes a compute sled for auto-migration in accelerated architectures, the compute sled comprising circuitry for receiving, from an application executed on a first compute element of a compute sled of a plurality of compute sleds, an indication that a compute kernel associated with the application has been offloaded to a field-programmable gate array (FPGA) of an accelerator sled of a plurality of accelerator sleds, wherein each of the plurality of accelerator sleds and the plurality of compute sleds are communicatively coupled to the compute sled; means for monitoring a plurality of hardware threads associated with the application; means for detecting whether a phase change has been detected as a function of the monitored hardware threads; and circuitry for migrating, in response to detection of the phase change, the hardware threads to a second compute element.
- Example 52 includes the subject matter of Example 51, and wherein the means for monitoring the plurality of hardware threads comprises means for collecting telemetry data corresponding to one or more hardware resources used by the hardware threads during execution.
- Example 53 includes the subject matter of any of Examples 51 and 52, and wherein the means for collecting the telemetry data includes means for collecting an instructions per cycle (IPC) value of the first compute element.
- Example 54 includes the subject matter of any of Examples 51-53, and wherein the means for detecting whether the phase change has been detected comprises means for comparing the IPC value of the first compute element to a peak IPC threshold value.
- Example 55 includes the subject matter of any of Examples 51-54, and wherein the means for detecting whether the phase change has been detected comprises means for identifying a previous phase as a central processing unit (CPU) bound phase and identifying a present phase as an FPGA bound phase.
- Example 56 includes the subject matter of any of Examples 51-55, and wherein the means for detecting whether the phase change has been detected comprises means for identifying a previous phase as a central processing unit (CPU) bound phase and identifying a present phase as a memory bound phase.
- Example 57 includes the subject matter of any of Examples 51-56, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein migrating the hardware threads to the second compute element in response to having detected the phase change comprises migrating the hardware threads to another high-performance CPU of the compute sled.
- Example 58 includes the subject matter of any of Examples 51-57, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein migrating the hardware threads to the second compute element in response to having detected the phase change comprises migrating the hardware threads to a high-performance CPU of another compute sled of the plurality of compute sleds.
- Example 59 includes the subject matter of any of Examples 51-58, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein migrating the hardware threads to the second compute element in response to having detected the phase change comprises migrating the hardware threads to a low-performance CPU of another compute sled of the plurality of compute sleds.
- Example 60 includes the subject matter of any of Examples 51-59, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein migrating the hardware threads to the second compute element in response to having detected the phase change comprises migrating the hardware threads to a low-performance CPU of the accelerator sled.
- Example 61 includes the subject matter of any of Examples 51-60, and wherein the first compute element on which the application is presently executing comprises a high-performance central processing unit (CPU), and wherein migrating the hardware threads to the second compute element in response to having detected the phase change comprises migrating the hardware threads to a low-performance CPU of another accelerator sled of the plurality of accelerator sleds.
- Example 62 includes the subject matter of any of Examples 51-61, and wherein the circuitry for migrating the hardware threads to the second compute element comprises circuitry for pausing the hardware threads at the first compute element; circuitry for migrating states of the hardware threads from the first compute element to the second compute element; circuitry for resuming the migrated hardware threads at the second compute element; and circuitry for offlining the first compute element.
- Example 63 includes the subject matter of any of Examples 51-62, and further including circuitry for migrating, by the compute sled, the compute kernel to another FPGA of another accelerator sled of the plurality of accelerator sleds.
- Example 64 includes the subject matter of any of Examples 51-63, and further including circuitry for receiving, by the compute sled, an indication that indicates the compute kernel has completed; and circuitry for migrating, by the compute sled and in response to having received the indication, the application to a third compute element.
- Example 65 includes the subject matter of any of Examples 51-64, and wherein the circuitry for migrating the application to the third compute element comprises circuitry for migrating the application to a high-performance CPU of one of the plurality of compute sleds.
- Example 66 includes the subject matter of any of Examples 51-65, and wherein the circuitry for migrating the hardware threads to the third compute element comprises circuitry for pausing the hardware threads at the second compute element; circuitry for migrating states of the hardware threads from the second compute element to the third compute element; circuitry for resuming the migrated hardware threads at the third compute element; and circuitry for offlining the second compute element.
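The phase detection recited in Examples 18-22 (and mirrored in Examples 36-40 and 52-56) amounts to classifying the monitored hardware threads from collected telemetry, such as an instructions per cycle (IPC) value compared against a peak IPC threshold, and reporting a change of classification. The Python sketch below is a minimal illustrative reading of that logic, not an implementation defined by this disclosure; the names PhaseDetector, peak_ipc_threshold, and kernel_offloaded are assumptions introduced for the example. IPC at or above the peak threshold is treated as a CPU bound phase, while low IPC is treated as an FPGA bound phase when a compute kernel is offloaded and as a memory bound phase otherwise.

```python
from enum import Enum


class Phase(Enum):
    CPU_BOUND = "cpu-bound"
    MEMORY_BOUND = "memory-bound"
    FPGA_BOUND = "fpga-bound"


class PhaseDetector:
    """Classifies the present phase of an application's hardware threads from
    IPC telemetry and reports whether the phase has changed."""

    def __init__(self, peak_ipc_threshold, kernel_offloaded=False):
        self.peak_ipc_threshold = peak_ipc_threshold
        self.kernel_offloaded = kernel_offloaded  # set once the compute kernel is offloaded to an FPGA
        self.previous_phase = None

    def classify(self, ipc):
        # IPC at or above the peak threshold: the threads are doing useful CPU work.
        if ipc >= self.peak_ipc_threshold:
            return Phase.CPU_BOUND
        # Low IPC: waiting on the offloaded kernel, or otherwise stalled on memory.
        return Phase.FPGA_BOUND if self.kernel_offloaded else Phase.MEMORY_BOUND

    def detect_phase_change(self, ipc):
        present = self.classify(ipc)
        changed = self.previous_phase is not None and present != self.previous_phase
        self.previous_phase = present
        return changed, present


if __name__ == "__main__":
    detector = PhaseDetector(peak_ipc_threshold=2.0, kernel_offloaded=True)
    for ipc_sample in (2.3, 2.1, 0.4, 0.3):  # telemetry samples for the monitored threads
        changed, phase = detector.detect_phase_change(ipc_sample)
        print(f"ipc={ipc_sample:.1f} phase={phase.value} phase_change={changed}")
```

In this toy run, the drop from an IPC of 2.1 to 0.4 is reported as a phase change from a CPU bound phase to an FPGA bound phase, which is the condition under which the examples above contemplate migrating the hardware threads to a different compute element.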
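Examples 12, 16, 28, 32, 46, and 62 all recite the same four-step migration sequence: pause the hardware threads at the source compute element, migrate their states to the destination, resume them there, and offline the source. The sketch below illustrates that sequence with an in-process toy model; ComputeElement, migrate_hardware_threads, and the other identifiers are hypothetical names, and a real implementation would transfer architectural thread state between sleds rather than copy dictionaries in memory. The usage at the bottom also mirrors Examples 14-15 and 30-31, in which the application is migrated back to a high-performance CPU of a compute sled once the offloaded kernel completes.

```python
from dataclasses import dataclass, field


@dataclass
class ComputeElement:
    """A CPU (high- or low-performance) on a compute sled or an accelerator sled."""
    name: str
    online: bool = True
    thread_states: dict = field(default_factory=dict)  # thread id -> saved state

    def pause_threads(self, thread_ids):
        print(f"[{self.name}] pausing threads {thread_ids}")

    def capture_states(self, thread_ids):
        # A real system would snapshot per-thread architectural context here.
        return {tid: self.thread_states.get(tid, {}) for tid in thread_ids}

    def restore_states(self, states):
        self.thread_states.update(states)

    def resume_threads(self, thread_ids):
        print(f"[{self.name}] resuming threads {thread_ids}")

    def offline(self):
        self.online = False
        print(f"[{self.name}] taken offline")


def migrate_hardware_threads(src, dst, thread_ids):
    """Pause at the source, move the thread states to the destination,
    resume there, then offline the source compute element."""
    src.pause_threads(thread_ids)
    dst.restore_states(src.capture_states(thread_ids))
    dst.resume_threads(thread_ids)
    src.offline()


if __name__ == "__main__":
    high_perf = ComputeElement("high-performance CPU, compute sled 0")
    low_perf = ComputeElement("low-performance CPU, accelerator sled 1")
    threads = [0, 1, 2, 3]

    # Phase change detected while the kernel runs on the FPGA: move the
    # hardware threads toward the accelerator.
    migrate_hardware_threads(high_perf, low_perf, threads)

    # Kernel completion reported: migrate the application back to a
    # high-performance CPU of one of the compute sleds.
    migrate_hardware_threads(low_perf, ComputeElement("high-performance CPU, compute sled 2"), threads)
```

Offlining the source element after each migration reflects the examples' assumption that the vacated CPU can be powered down or repurposed once its threads have been handed off.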
Claims (25)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/859,385 US20190065281A1 (en) | 2017-08-30 | 2017-12-30 | Technologies for auto-migration in accelerated architectures |
CN201811004878.7A CN109426568A (en) | 2017-08-30 | 2018-08-30 | Technologies for auto-migration in accelerated architectures |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IN201741030632 | 2017-08-30 | ||
IN201741030632 | 2017-08-30 | ||
US201762584401P | 2017-11-10 | 2017-11-10 | |
US15/859,385 US20190065281A1 (en) | 2017-08-30 | 2017-12-30 | Technologies for auto-migration in accelerated architectures |
Publications (1)
Publication Number | Publication Date |
---|---|
US20190065281A1 (en) | 2019-02-28 |
Family
ID=65434219
Family Applications (24)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/850,325 Abandoned US20190068466A1 (en) | 2017-08-30 | 2017-12-21 | Technologies for auto-discovery of fault domains |
US15/858,305 Abandoned US20190068464A1 (en) | 2017-08-30 | 2017-12-29 | Technologies for machine learning schemes in dynamic switching between adaptive connections and connection optimization |
US15/858,286 Abandoned US20190068523A1 (en) | 2017-08-30 | 2017-12-29 | Technologies for allocating resources across data centers |
US15/858,288 Abandoned US20190068521A1 (en) | 2017-08-30 | 2017-12-29 | Technologies for automated network congestion management |
US15/858,316 Abandoned US20190065260A1 (en) | 2017-08-30 | 2017-12-29 | Technologies for kernel scale-out |
US15/858,549 Abandoned US20190065401A1 (en) | 2017-08-30 | 2017-12-29 | Technologies for providing efficient memory access on an accelerator sled |
US15/858,542 Active 2039-10-02 US11748172B2 (en) | 2017-08-30 | 2017-12-29 | Technologies for providing efficient pooling for a hyper converged infrastructure |
US15/858,748 Active 2039-08-11 US11614979B2 (en) | 2017-08-30 | 2017-12-29 | Technologies for configuration-free platform firmware |
US15/858,557 Abandoned US20190065083A1 (en) | 2017-08-30 | 2017-12-29 | Technologies for providing efficient access to pooled accelerator devices |
US15/859,368 Active 2040-02-21 US11422867B2 (en) | 2017-08-30 | 2017-12-30 | Technologies for composing a managed node based on telemetry data |
US15/859,385 Abandoned US20190065281A1 (en) | 2017-08-30 | 2017-12-30 | Technologies for auto-migration in accelerated architectures |
US15/859,364 Active 2039-07-30 US11392425B2 (en) | 2017-08-30 | 2017-12-30 | Technologies for providing a split memory pool for full rack connectivity |
US15/859,394 Active 2040-04-27 US11467885B2 (en) | 2017-08-30 | 2017-12-30 | Technologies for managing a latency-efficient pipeline through a network interface controller |
US15/859,366 Abandoned US20190065261A1 (en) | 2017-08-30 | 2017-12-30 | Technologies for in-processor workload phase detection |
US15/859,388 Abandoned US20190065231A1 (en) | 2017-08-30 | 2017-12-30 | Technologies for migrating virtual machines |
US15/859,363 Abandoned US20190068444A1 (en) | 2017-08-30 | 2017-12-30 | Technologies for providing efficient transfer of results from accelerator devices in a disaggregated architecture |
US15/916,394 Abandoned US20190065415A1 (en) | 2017-08-30 | 2018-03-09 | Technologies for local disaggregation of memory |
US15/933,855 Active 2039-05-07 US11030017B2 (en) | 2017-08-30 | 2018-03-23 | Technologies for efficiently booting sleds in a disaggregated architecture |
US15/942,108 Abandoned US20190067848A1 (en) | 2017-08-30 | 2018-03-30 | Memory mezzanine connectors |
US15/942,101 Active 2040-07-19 US11416309B2 (en) | 2017-08-30 | 2018-03-30 | Technologies for dynamic accelerator selection |
US16/023,803 Active 2038-07-17 US10888016B2 (en) | 2017-08-30 | 2018-06-29 | Technologies for automated servicing of sleds of a data center |
US16/022,962 Active 2038-12-31 US11055149B2 (en) | 2017-08-30 | 2018-06-29 | Technologies for providing workload-based sled position adjustment |
US16/642,523 Abandoned US20200257566A1 (en) | 2017-08-30 | 2018-08-30 | Technologies for managing disaggregated resources in a data center |
US16/642,520 Abandoned US20200192710A1 (en) | 2017-08-30 | 2018-08-30 | Technologies for enabling and metering the utilization of features on demand |
Country Status (5)
Country | Link |
---|---|
US (24) | US20190068466A1 (en) |
EP (1) | EP3676708A4 (en) |
CN (8) | CN109426316A (en) |
DE (1) | DE112018004798T5 (en) |
WO (5) | WO2019045928A1 (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190356729A1 (en) * | 2018-05-17 | 2019-11-21 | International Business Machines Corporation | Optimizing dynamic resource allocations for storage-dependent workloads in disaggregated data centers |
US10601903B2 (en) | 2018-05-17 | 2020-03-24 | International Business Machines Corporation | Optimizing dynamical resource allocations based on locality of resources in disaggregated data centers |
US10684887B2 (en) * | 2018-05-25 | 2020-06-16 | Vmware, Inc. | Live migration of a virtualized compute accelerator workload |
US10785549B2 (en) | 2016-07-22 | 2020-09-22 | Intel Corporation | Technologies for switching network traffic in a data center |
US10795713B2 (en) | 2018-05-25 | 2020-10-06 | Vmware, Inc. | Live migration of a virtualized compute accelerator workload |
US10841367B2 (en) | 2018-05-17 | 2020-11-17 | International Business Machines Corporation | Optimizing dynamical resource allocations for cache-dependent workloads in disaggregated data centers |
EP3757784A1 (en) * | 2019-06-28 | 2020-12-30 | Intel Corporation | Technologies for managing accelerator resources |
US10893096B2 (en) | 2018-05-17 | 2021-01-12 | International Business Machines Corporation | Optimizing dynamical resource allocations using a data heat map in disaggregated data centers |
US10936374B2 (en) | 2018-05-17 | 2021-03-02 | International Business Machines Corporation | Optimizing dynamic resource allocations for memory-dependent workloads in disaggregated data centers |
US10963176B2 (en) | 2016-11-29 | 2021-03-30 | Intel Corporation | Technologies for offloading acceleration task scheduling operations to accelerator sleds |
US10977085B2 (en) | 2018-05-17 | 2021-04-13 | International Business Machines Corporation | Optimizing dynamical resource allocations in disaggregated data centers |
US11003479B2 (en) * | 2019-04-29 | 2021-05-11 | Intel Corporation | Device, system and method to communicate a kernel binary via a network |
US11221886B2 (en) | 2018-05-17 | 2022-01-11 | International Business Machines Corporation | Optimizing dynamical resource allocations for cache-friendly workloads in disaggregated data centers |
US11263122B2 (en) * | 2019-04-09 | 2022-03-01 | Vmware, Inc. | Implementing fine grain data coherency of a shared memory region |
EP3974984A1 (en) * | 2020-09-25 | 2022-03-30 | INTEL Corporation | Technologies for scaling inter-kernel technologies for accelerator device kernels |
US11995330B2 (en) | 2017-08-30 | 2024-05-28 | Intel Corporation | Technologies for providing accelerated functions as a service in a disaggregated architecture |
Families Citing this family (113)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9948724B2 (en) * | 2015-09-10 | 2018-04-17 | International Business Machines Corporation | Handling multi-pipe connections |
CN109891908A (en) * | 2016-11-29 | 2019-06-14 | 英特尔公司 | Technologies for millimeter wave rack interconnects |
US10425491B2 (en) * | 2017-01-30 | 2019-09-24 | Centurylink Intellectual Property Llc | Method and system for implementing application programming interface (API) to provide network metrics and network resource control to users |
US10346315B2 (en) | 2017-05-26 | 2019-07-09 | Oracle International Corporation | Latchless, non-blocking dynamically resizable segmented hash index |
US10574580B2 (en) * | 2017-07-04 | 2020-02-25 | Vmware, Inc. | Network resource management for hyper-converged infrastructures |
US11119835B2 (en) | 2017-08-30 | 2021-09-14 | Intel Corporation | Technologies for providing efficient reprovisioning in an accelerator device |
US11106427B2 (en) * | 2017-09-29 | 2021-08-31 | Intel Corporation | Memory filtering for disaggregate memory architectures |
US11650598B2 (en) * | 2017-12-30 | 2023-05-16 | Telescent Inc. | Automated physical network management system utilizing high resolution RFID, optical scans and mobile robotic actuator |
US10511690B1 (en) * | 2018-02-20 | 2019-12-17 | Intuit, Inc. | Method and apparatus for predicting experience degradation events in microservice-based applications |
US20210056426A1 (en) * | 2018-03-26 | 2021-02-25 | Hewlett-Packard Development Company, L.P. | Generation of kernels based on physical states |
US10761726B2 (en) * | 2018-04-16 | 2020-09-01 | VMware, Inc. | Resource fairness control in distributed storage systems using congestion data |
US11315013B2 (en) * | 2018-04-23 | 2022-04-26 | EMC IP Holding Company LLC | Implementing parameter server in networking infrastructure for high-performance computing |
US10599553B2 (en) * | 2018-04-27 | 2020-03-24 | International Business Machines Corporation | Managing cloud-based hardware accelerators |
US11042406B2 (en) * | 2018-06-05 | 2021-06-22 | Intel Corporation | Technologies for providing predictive thermal management |
US11431648B2 (en) | 2018-06-11 | 2022-08-30 | Intel Corporation | Technologies for providing adaptive utilization of different interconnects for workloads |
US20190384376A1 (en) * | 2018-06-18 | 2019-12-19 | American Megatrends, Inc. | Intelligent allocation of scalable rack resources |
US11388835B1 (en) * | 2018-06-27 | 2022-07-12 | Amazon Technologies, Inc. | Placement of custom servers |
US11436113B2 (en) * | 2018-06-28 | 2022-09-06 | Twitter, Inc. | Method and system for maintaining storage device failure tolerance in a composable infrastructure |
US10977193B2 (en) | 2018-08-17 | 2021-04-13 | Oracle International Corporation | Remote direct memory operations (RDMOs) for transactional processing systems |
US11347678B2 (en) * | 2018-08-06 | 2022-05-31 | Oracle International Corporation | One-sided reliable remote direct memory operations |
US11188348B2 (en) * | 2018-08-31 | 2021-11-30 | International Business Machines Corporation | Hybrid computing device selection analysis |
US11012423B2 (en) | 2018-09-25 | 2021-05-18 | International Business Machines Corporation | Maximizing resource utilization through efficient component communication in disaggregated datacenters |
US11163713B2 (en) | 2018-09-25 | 2021-11-02 | International Business Machines Corporation | Efficient component communication through protocol switching in disaggregated datacenters |
US11650849B2 (en) * | 2018-09-25 | 2023-05-16 | International Business Machines Corporation | Efficient component communication through accelerator switching in disaggregated datacenters |
US11182322B2 (en) | 2018-09-25 | 2021-11-23 | International Business Machines Corporation | Efficient component communication through resource rewiring in disaggregated datacenters |
US11138044B2 (en) * | 2018-09-26 | 2021-10-05 | Micron Technology, Inc. | Memory pooling between selected memory resources |
US10901893B2 (en) * | 2018-09-28 | 2021-01-26 | International Business Machines Corporation | Memory bandwidth management for performance-sensitive IaaS |
EP3861489A4 (en) * | 2018-10-03 | 2022-07-06 | Rigetti & Co, LLC | Parcelled quantum resources |
US10962389B2 (en) * | 2018-10-03 | 2021-03-30 | International Business Machines Corporation | Machine status detection |
US10768990B2 (en) * | 2018-11-01 | 2020-09-08 | International Business Machines Corporation | Protecting an application by autonomously limiting processing to a determined hardware capacity |
US11055186B2 (en) * | 2018-11-27 | 2021-07-06 | Red Hat, Inc. | Managing related devices for virtual machines using robust passthrough device enumeration |
US10901918B2 (en) * | 2018-11-29 | 2021-01-26 | International Business Machines Corporation | Constructing flexibly-secure systems in a disaggregated environment |
US11275622B2 (en) * | 2018-11-29 | 2022-03-15 | International Business Machines Corporation | Utilizing accelerators to accelerate data analytic workloads in disaggregated systems |
US10831975B2 (en) | 2018-11-29 | 2020-11-10 | International Business Machines Corporation | Debug boundaries in a hardware accelerator |
US11048318B2 (en) * | 2018-12-06 | 2021-06-29 | Intel Corporation | Reducing microprocessor power with minimal performance impact by dynamically adapting runtime operating configurations using machine learning |
US10771344B2 (en) * | 2018-12-21 | 2020-09-08 | Servicenow, Inc. | Discovery of hyper-converged infrastructure devices |
US10970107B2 (en) * | 2018-12-21 | 2021-04-06 | Servicenow, Inc. | Discovery of hyper-converged infrastructure |
US11269593B2 (en) * | 2019-01-23 | 2022-03-08 | Sap Se | Global number range generation |
US11271804B2 (en) * | 2019-01-25 | 2022-03-08 | Dell Products L.P. | Hyper-converged infrastructure component expansion/replacement system |
US11429440B2 (en) * | 2019-02-04 | 2022-08-30 | Hewlett Packard Enterprise Development Lp | Intelligent orchestration of disaggregated applications based on class of service |
US10817221B2 (en) * | 2019-02-12 | 2020-10-27 | International Business Machines Corporation | Storage device with mandatory atomic-only access |
US10949101B2 (en) * | 2019-02-25 | 2021-03-16 | Micron Technology, Inc. | Storage device operation orchestration |
US11443018B2 (en) * | 2019-03-12 | 2022-09-13 | Xilinx, Inc. | Locking execution of cores to licensed programmable devices in a data center |
US11294992B2 (en) * | 2019-03-12 | 2022-04-05 | Xilinx, Inc. | Locking execution of cores to licensed programmable devices in a data center |
JP7176455B2 (en) * | 2019-03-28 | 2022-11-22 | オムロン株式会社 | Monitoring system, setting device and monitoring method |
US11531869B1 (en) * | 2019-03-28 | 2022-12-20 | Xilinx, Inc. | Neural-network pooling |
US11243817B2 (en) * | 2019-03-29 | 2022-02-08 | Intel Corporation | Technologies for data migration between edge accelerators hosted on different edge locations |
US11055256B2 (en) * | 2019-04-02 | 2021-07-06 | Intel Corporation | Edge component computing system having integrated FaaS call handling capability |
US11089137B2 (en) * | 2019-04-02 | 2021-08-10 | International Business Machines Corporation | Dynamic data transmission |
WO2020206370A1 (en) | 2019-04-05 | 2020-10-08 | Cisco Technology, Inc. | Discovering trustworthy devices using attestation and mutual attestation |
US11416294B1 (en) * | 2019-04-17 | 2022-08-16 | Juniper Networks, Inc. | Task processing for management of data center resources |
CN110053650B (en) * | 2019-05-06 | 2022-06-07 | 湖南中车时代通信信号有限公司 | Automatic train operation system, automatic train operation system architecture and module management method of automatic train operation system |
CN110203600A (en) * | 2019-06-06 | 2019-09-06 | 北京卫星环境工程研究所 | Suitable for spacecraft material be automatically stored and radio frequency |
US11481117B2 (en) * | 2019-06-17 | 2022-10-25 | Hewlett Packard Enterprise Development Lp | Storage volume clustering based on workload fingerprints |
US10949362B2 (en) * | 2019-06-28 | 2021-03-16 | Intel Corporation | Technologies for facilitating remote memory requests in accelerator devices |
US10877817B1 (en) * | 2019-06-28 | 2020-12-29 | Intel Corporation | Technologies for providing inter-kernel application programming interfaces for an accelerated architecture |
WO2021026094A1 (en) * | 2019-08-02 | 2021-02-11 | Jpmorgan Chase Bank, N.A. | Systems and methods for provisioning a new secondary identityiq instance to an existing identityiq instance |
US11082411B2 (en) * | 2019-08-06 | 2021-08-03 | Advanced New Technologies Co., Ltd. | RDMA-based data transmission method, network interface card, server and medium |
US10925166B1 (en) * | 2019-08-07 | 2021-02-16 | Quanta Computer Inc. | Protection fixture |
US20220281105A1 (en) * | 2019-08-22 | 2022-09-08 | Nec Corporation | Robot control system, robot control method, and recording medium |
US10999403B2 (en) | 2019-09-27 | 2021-05-04 | Red Hat, Inc. | Composable infrastructure provisioning and balancing |
CN110650609B (en) * | 2019-10-10 | 2020-12-01 | 珠海与非科技有限公司 | Cloud server of distributed storage |
CA3151195A1 (en) * | 2019-10-10 | 2021-04-15 | Channel One Holdings Inc. | Methods and systems for time-bounding execution of computing workflows |
US11200046B2 (en) * | 2019-10-22 | 2021-12-14 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | Managing composable compute system infrastructure with support for decoupled firmware updates |
DE102020127704A1 (en) | 2019-10-29 | 2021-04-29 | Nvidia Corporation | TECHNIQUES FOR EFFICIENT TRANSFER OF DATA TO A PROCESSOR |
US11080051B2 (en) | 2019-10-29 | 2021-08-03 | Nvidia Corporation | Techniques for efficiently transferring data to a processor |
CN112749121A (en) * | 2019-10-31 | 2021-05-04 | 中兴通讯股份有限公司 | Multi-chip interconnection system based on PCIE bus |
US11342004B2 (en) * | 2019-11-07 | 2022-05-24 | Quantum Corporation | System and method for rapid replacement of robotic media mover in automated media library |
US10747281B1 (en) * | 2019-11-19 | 2020-08-18 | International Business Machines Corporation | Mobile thermal balancing of data centers |
US11782810B2 (en) * | 2019-11-22 | 2023-10-10 | Dell Products, L.P. | Systems and methods for automated field replacement component configuration |
US11263105B2 (en) | 2019-11-26 | 2022-03-01 | Lucid Software, Inc. | Visualization tool for components within a cloud infrastructure |
US11861219B2 (en) | 2019-12-12 | 2024-01-02 | Intel Corporation | Buffer to reduce write amplification of misaligned write operations |
US11789878B2 (en) * | 2019-12-19 | 2023-10-17 | Intel Corporation | Adaptive fabric allocation for local and remote emerging memories based prediction schemes |
US11321259B2 (en) * | 2020-02-14 | 2022-05-03 | Sony Interactive Entertainment Inc. | Network architecture providing high speed storage access through a PCI express fabric between a compute node and a storage server |
US11636503B2 (en) | 2020-02-26 | 2023-04-25 | At&T Intellectual Property I, L.P. | System and method for offering network slice as a service |
US11122123B1 (en) | 2020-03-09 | 2021-09-14 | International Business Machines Corporation | Method for a network of storage devices |
US11121941B1 (en) | 2020-03-12 | 2021-09-14 | Cisco Technology, Inc. | Monitoring communications to identify performance degradation |
US20210304025A1 (en) * | 2020-03-24 | 2021-09-30 | Facebook, Inc. | Dynamic quality of service management for deep learning training communication |
US11115497B2 (en) * | 2020-03-25 | 2021-09-07 | Intel Corporation | Technologies for providing advanced resource management in a disaggregated environment |
US11630696B2 (en) | 2020-03-30 | 2023-04-18 | International Business Machines Corporation | Messaging for a hardware acceleration system |
US11509079B2 (en) * | 2020-04-06 | 2022-11-22 | Hewlett Packard Enterprise Development Lp | Blind mate connections with different sets of datums |
US11177618B1 (en) * | 2020-05-14 | 2021-11-16 | Dell Products L.P. | Server blind-mate power and signal connector dock |
US11295135B2 (en) * | 2020-05-29 | 2022-04-05 | Corning Research & Development Corporation | Asset tracking of communication equipment via mixed reality based labeling |
US11374808B2 (en) * | 2020-05-29 | 2022-06-28 | Corning Research & Development Corporation | Automated logging of patching operations via mixed reality based labeling |
US11947971B2 (en) * | 2020-06-11 | 2024-04-02 | Hewlett Packard Enterprise Development Lp | Remote resource configuration mechanism |
US11687629B2 (en) * | 2020-06-12 | 2023-06-27 | Baidu Usa Llc | Method for data protection in a data processing cluster with authentication |
US11360789B2 (en) | 2020-07-06 | 2022-06-14 | International Business Machines Corporation | Configuration of hardware devices |
CN111824668B (en) * | 2020-07-08 | 2022-07-19 | 北京极智嘉科技股份有限公司 | Robot and robot-based container storage and retrieval method |
US11681557B2 (en) * | 2020-07-31 | 2023-06-20 | International Business Machines Corporation | Systems and methods for managing resources in a hyperconverged infrastructure cluster |
EP4193302A1 (en) | 2020-08-05 | 2023-06-14 | Avesha, Inc. | Performing load balancing self adjustment within an application environment |
US11314687B2 (en) * | 2020-09-24 | 2022-04-26 | Commvault Systems, Inc. | Container data mover for migrating data between distributed data storage systems integrated with application orchestrators |
US11405451B2 (en) * | 2020-09-30 | 2022-08-02 | Jpmorgan Chase Bank, N.A. | Data pipeline architecture |
US11379402B2 (en) | 2020-10-20 | 2022-07-05 | Micron Technology, Inc. | Secondary device detection using a synchronous interface |
US20220129601A1 (en) * | 2020-10-26 | 2022-04-28 | Oracle International Corporation | Techniques for generating a configuration for electrically isolating fault domains in a data center |
US11803493B2 (en) * | 2020-11-30 | 2023-10-31 | Dell Products L.P. | Systems and methods for management controller co-processor host to variable subsystem proxy |
US20210092069A1 (en) * | 2020-12-10 | 2021-03-25 | Intel Corporation | Accelerating multi-node performance of machine learning workloads |
US11948014B2 (en) * | 2020-12-15 | 2024-04-02 | Google Llc | Multi-tenant control plane management on computing platform |
US11662934B2 (en) * | 2020-12-15 | 2023-05-30 | International Business Machines Corporation | Migration of a logical partition between mutually non-coherent host data processing systems |
US11645104B2 (en) * | 2020-12-22 | 2023-05-09 | Reliance Jio Infocomm Usa, Inc. | Intelligent data plane acceleration by offloading to distributed smart network interfaces |
CN114661457A (en) * | 2020-12-23 | 2022-06-24 | 英特尔公司 | Memory controller for managing QoS enforcement and migration between memories |
US11445028B2 (en) | 2020-12-30 | 2022-09-13 | Dell Products L.P. | System and method for providing secure console access with multiple smart NICs using NC-SI and SPDM |
US11803216B2 (en) | 2021-02-03 | 2023-10-31 | Hewlett Packard Enterprise Development Lp | Contiguous plane infrastructure for computing systems |
US11785735B2 (en) * | 2021-02-19 | 2023-10-10 | CyberSecure IPS, LLC | Intelligent cable patching of racks to facilitate cable installation |
US11503743B2 (en) * | 2021-03-12 | 2022-11-15 | Baidu Usa Llc | High availability fluid connector for liquid cooling |
US11470015B1 (en) * | 2021-03-22 | 2022-10-11 | Amazon Technologies, Inc. | Allocating workloads to heterogenous worker fleets |
US20220321403A1 (en) * | 2021-04-02 | 2022-10-06 | Nokia Solutions And Networks Oy | Programmable network segmentation for multi-tenant fpgas in cloud infrastructures |
US20220342688A1 (en) * | 2021-04-26 | 2022-10-27 | Dell Products L.P. | Systems and methods for migration of virtual computing resources using smart network interface controller acceleration |
US20220350675A1 (en) | 2021-05-03 | 2022-11-03 | Avesha, Inc. | Distributed computing system with multi tenancy based on application slices |
US11714775B2 (en) | 2021-05-10 | 2023-08-01 | Zenlayer Innovation LLC | Peripheral component interconnect (PCI) hosting device |
IT202100017564A1 (en) * | 2021-07-02 | 2023-01-02 | Fastweb S P A | Robotic apparatus to carry out maintenance operations on an electronic component |
US11863385B2 (en) * | 2022-01-21 | 2024-01-02 | International Business Machines Corporation | Optimizing container executions with network-attached hardware components of a composable disaggregated infrastructure |
US11921582B2 (en) | 2022-04-29 | 2024-03-05 | Microsoft Technology Licensing, Llc | Out of band method to change boot firmware configuration |
CN115052055B (en) * | 2022-08-17 | 2022-11-11 | 北京左江科技股份有限公司 | Network message checksum offloading method based on FPGA |
Family Cites Families (192)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2704350B1 (en) * | 1993-04-22 | 1995-06-02 | Bull Sa | Physical structure of a mass memory subsystem. |
JP3320344B2 (en) * | 1997-09-19 | 2002-09-03 | 富士通株式会社 | Cartridge transfer robot for library device and library device |
US6158000A (en) * | 1998-09-18 | 2000-12-05 | Compaq Computer Corporation | Shared memory initialization method for system having multiple processor capability |
US6230265B1 (en) * | 1998-09-30 | 2001-05-08 | International Business Machines Corporation | Method and system for configuring resources in a data processing system utilizing system power control information |
US7287096B2 (en) * | 2001-05-19 | 2007-10-23 | Texas Instruments Incorporated | Method for robust, flexible reconfiguration of transceive parameters for communication systems |
US7536715B2 (en) * | 2001-05-25 | 2009-05-19 | Secure Computing Corporation | Distributed firewall system and method |
US6901580B2 (en) * | 2001-06-22 | 2005-05-31 | Intel Corporation | Configuration parameter sequencing and sequencer |
US7415723B2 (en) * | 2002-06-11 | 2008-08-19 | Pandya Ashish A | Distributed network security system and a hardware processor therefor |
US7408876B1 (en) * | 2002-07-02 | 2008-08-05 | Extreme Networks | Method and apparatus for providing quality of service across a switched backplane between egress queue managers |
US20040073834A1 (en) * | 2002-10-10 | 2004-04-15 | Kermaani Kaamel M. | System and method for expanding the management redundancy of computer systems |
US7386889B2 (en) * | 2002-11-18 | 2008-06-10 | Trusted Network Technologies, Inc. | System and method for intrusion prevention in a communications network |
US7031154B2 (en) * | 2003-04-30 | 2006-04-18 | Hewlett-Packard Development Company, L.P. | Louvered rack |
US7238104B1 (en) * | 2003-05-02 | 2007-07-03 | Foundry Networks, Inc. | System and method for venting air from a computer casing |
US7146511B2 (en) * | 2003-10-07 | 2006-12-05 | Hewlett-Packard Development Company, L.P. | Rack equipment application performance modification system and method |
US20050132084A1 (en) * | 2003-12-10 | 2005-06-16 | Heung-For Cheng | Method and apparatus for providing server local SMBIOS table through out-of-band communication |
US7552217B2 (en) | 2004-04-07 | 2009-06-23 | Intel Corporation | System and method for Automatic firmware image recovery for server management operational code |
US7809836B2 (en) | 2004-04-07 | 2010-10-05 | Intel Corporation | System and method for automating bios firmware image recovery using a non-host processor and platform policy to select a donor system |
US7421535B2 (en) * | 2004-05-10 | 2008-09-02 | International Business Machines Corporation | Method for demoting tracks from cache |
JP4335760B2 (en) * | 2004-07-08 | 2009-09-30 | 富士通株式会社 | Rack mount storage unit and rack mount disk array device |
US7685319B2 (en) * | 2004-09-28 | 2010-03-23 | Cray Canada Corporation | Low latency communication via memory windows |
CN101542453A (en) * | 2005-01-05 | 2009-09-23 | 极端数据公司 | Systems and methods for providing co-processors to computing systems |
US20110016214A1 (en) * | 2009-07-15 | 2011-01-20 | Cluster Resources, Inc. | System and method of brokering cloud computing resources |
US7634584B2 (en) * | 2005-04-27 | 2009-12-15 | Solarflare Communications, Inc. | Packet validation in virtual network interface architecture |
US9135074B2 (en) * | 2005-05-19 | 2015-09-15 | Hewlett-Packard Development Company, L.P. | Evaluating performance of workload manager based on QoS to representative workload and usage efficiency of shared resource for plurality of minCPU and maxCPU allocation values |
US8799980B2 (en) * | 2005-11-16 | 2014-08-05 | Juniper Networks, Inc. | Enforcement of network device configuration policies within a computing environment |
TW200720941A (en) * | 2005-11-18 | 2007-06-01 | Inventec Corp | Host computer memory configuration data remote access method and system |
US7493419B2 (en) * | 2005-12-13 | 2009-02-17 | International Business Machines Corporation | Input/output workload fingerprinting for input/output schedulers |
US8713551B2 (en) * | 2006-01-03 | 2014-04-29 | International Business Machines Corporation | Apparatus, system, and method for non-interruptively updating firmware on a redundant hardware controller |
US20070271560A1 (en) * | 2006-05-18 | 2007-11-22 | Microsoft Corporation | Deploying virtual machine to host based on workload characterizations |
US7472211B2 (en) * | 2006-07-28 | 2008-12-30 | International Business Machines Corporation | Blade server switch module using out-of-band signaling to detect the physical location of an active drive enclosure device |
US8098658B1 (en) * | 2006-08-01 | 2012-01-17 | Hewlett-Packard Development Company, L.P. | Power-based networking resource allocation |
US8010565B2 (en) * | 2006-10-16 | 2011-08-30 | Dell Products L.P. | Enterprise rack management method, apparatus and media |
US8068351B2 (en) * | 2006-11-10 | 2011-11-29 | Oracle America, Inc. | Cable management system |
US20090089564A1 (en) * | 2006-12-06 | 2009-04-02 | Brickell Ernie F | Protecting a Branch Instruction from Side Channel Vulnerabilities |
US8112524B2 (en) * | 2007-01-15 | 2012-02-07 | International Business Machines Corporation | Recommending moving resources in a partitioned computer |
US7738900B1 (en) | 2007-02-15 | 2010-06-15 | Nextel Communications Inc. | Systems and methods of group distribution for latency sensitive applications |
US8140719B2 (en) * | 2007-06-21 | 2012-03-20 | Sea Micro, Inc. | Dis-aggregated and distributed data-center architecture using a direct interconnect fabric |
CN101431432A (en) * | 2007-11-06 | 2009-05-13 | 联想(北京)有限公司 | Blade server |
US8078865B2 (en) * | 2007-11-20 | 2011-12-13 | Dell Products L.P. | Systems and methods for configuring out-of-band bios settings |
US8214467B2 (en) * | 2007-12-14 | 2012-07-03 | International Business Machines Corporation | Migrating port-specific operating parameters during blade server failover |
US20100267376A1 (en) * | 2007-12-17 | 2010-10-21 | Nokia Corporation | Accessory Configuration and Management |
US8645965B2 (en) * | 2007-12-31 | 2014-02-04 | Intel Corporation | Supporting metered clients with manycore through time-limited partitioning |
US8225159B1 (en) * | 2008-04-25 | 2012-07-17 | Netapp, Inc. | Method and system for implementing power savings features on storage devices within a storage subsystem |
US8166263B2 (en) * | 2008-07-03 | 2012-04-24 | Commvault Systems, Inc. | Continuous data protection over intermittent connections, such as continuous data backup for laptops or wireless devices |
US20100125695A1 (en) * | 2008-11-15 | 2010-05-20 | Nanostar Corporation | Non-volatile memory storage system |
US20100091458A1 (en) * | 2008-10-15 | 2010-04-15 | Mosier Jr David W | Electronics chassis with angled card cage |
US8954977B2 (en) * | 2008-12-09 | 2015-02-10 | Intel Corporation | Software-based thread remapping for power savings |
US8798045B1 (en) * | 2008-12-29 | 2014-08-05 | Juniper Networks, Inc. | Control plane architecture for switch fabrics |
US20100229175A1 (en) * | 2009-03-05 | 2010-09-09 | International Business Machines Corporation | Moving Resources In a Computing Environment Having Multiple Logically-Partitioned Computer Systems |
WO2010108165A1 (en) * | 2009-03-20 | 2010-09-23 | The Trustees Of Princeton University | Systems and methods for network acceleration and efficient indexing for caching file systems |
US8321870B2 (en) * | 2009-08-14 | 2012-11-27 | General Electric Company | Method and system for distributed computation having sub-task processing and sub-solution redistribution |
US20110055838A1 (en) * | 2009-08-28 | 2011-03-03 | Moyes William A | Optimized thread scheduling via hardware performance monitoring |
WO2011045863A1 (en) * | 2009-10-16 | 2011-04-21 | 富士通株式会社 | Electronic device and casing for electronic device |
CN101706802B (en) * | 2009-11-24 | 2013-06-05 | 成都市华为赛门铁克科技有限公司 | Method, device and server for writing, modifying and restoring data |
US9129052B2 (en) * | 2009-12-03 | 2015-09-08 | International Business Machines Corporation | Metering resource usage in a cloud computing environment |
CN102135923A (en) * | 2010-01-21 | 2011-07-27 | 鸿富锦精密工业(深圳)有限公司 | Method for integrating operating system into BIOS (Basic Input/Output System) chip and method for starting operating system |
US8638553B1 (en) * | 2010-03-31 | 2014-01-28 | Amazon Technologies, Inc. | Rack system cooling with inclined computing devices |
US8601297B1 (en) * | 2010-06-18 | 2013-12-03 | Google Inc. | Systems and methods for energy proportional multiprocessor networks |
US8171142B2 (en) * | 2010-06-30 | 2012-05-01 | Vmware, Inc. | Data center inventory management using smart racks |
IT1401647B1 (en) * | 2010-07-09 | 2013-08-02 | Campatents B V | METHOD FOR MONITORING CHANGES OF CONFIGURATION OF A MONITORING DEVICE FOR AN AUTOMATIC MACHINE |
US8259450B2 (en) * | 2010-07-21 | 2012-09-04 | Birchbridge Incorporated | Mobile universal hardware platform |
US9428336B2 (en) * | 2010-07-28 | 2016-08-30 | Par Systems, Inc. | Robotic storage and retrieval systems |
WO2012021380A2 (en) * | 2010-08-13 | 2012-02-16 | Rambus Inc. | Fast-wake memory |
US8914805B2 (en) * | 2010-08-31 | 2014-12-16 | International Business Machines Corporation | Rescheduling workload in a hybrid computing environment |
US8489939B2 (en) * | 2010-10-25 | 2013-07-16 | At&T Intellectual Property I, L.P. | Dynamically allocating multitier applications based upon application requirements and performance and reliability of resources |
US9078251B2 (en) * | 2010-10-28 | 2015-07-07 | Lg Electronics Inc. | Method and apparatus for transceiving a data frame in a wireless LAN system |
US8838286B2 (en) * | 2010-11-04 | 2014-09-16 | Dell Products L.P. | Rack-level modular server and storage framework |
US8762668B2 (en) * | 2010-11-18 | 2014-06-24 | Hitachi, Ltd. | Multipath switching over multiple storage systems |
US9563479B2 (en) * | 2010-11-30 | 2017-02-07 | Red Hat, Inc. | Brokering optimized resource supply costs in host cloud-based network using predictive workloads |
CN102693181A (en) * | 2011-03-25 | 2012-09-26 | 鸿富锦精密工业(深圳)有限公司 | Firmware update-write system and method |
US9405550B2 (en) * | 2011-03-31 | 2016-08-02 | International Business Machines Corporation | Methods for the transmission of accelerator commands and corresponding command structure to remote hardware accelerator engines over an interconnect link |
US20120303322A1 (en) * | 2011-05-23 | 2012-11-29 | Rego Charles W | Incorporating memory and io cycle information into compute usage determinations |
EP2712443B1 (en) * | 2011-07-01 | 2019-11-06 | Hewlett-Packard Enterprise Development LP | Method of and system for managing computing resources |
US9317336B2 (en) * | 2011-07-27 | 2016-04-19 | Alcatel Lucent | Method and apparatus for assignment of virtual resources within a cloud environment |
US8713257B2 (en) * | 2011-08-26 | 2014-04-29 | Lsi Corporation | Method and system for shared high speed cache in SAS switches |
US8755176B2 (en) * | 2011-10-12 | 2014-06-17 | Xyratex Technology Limited | Data storage system, an energy module and a method of providing back-up power to a data storage system |
US9237107B2 (en) * | 2011-11-15 | 2016-01-12 | New Jersey Institute Of Technology | Fair quantized congestion notification (FQCN) to mitigate transport control protocol (TCP) throughput collapse in data center networks |
US20140304713A1 (en) * | 2011-11-23 | 2014-10-09 | Telefonaktiebolaget L M Ericsson (publ) | Method and apparatus for distributed processing tasks |
DE102011119693A1 (en) * | 2011-11-29 | 2013-05-29 | Universität Heidelberg | System, computer-implemented method and computer program product for direct communication between hardware accelerators in a computer cluster |
US20130185729A1 (en) * | 2012-01-13 | 2013-07-18 | Rutgers, The State University Of New Jersey | Accelerating resource allocation in virtualized environments using workload classes and/or workload signatures |
US8732291B2 (en) * | 2012-01-13 | 2014-05-20 | Accenture Global Services Limited | Performance interference model for managing consolidated workloads in QOS-aware clouds |
US9336061B2 (en) * | 2012-01-14 | 2016-05-10 | International Business Machines Corporation | Integrated metering of service usage for hybrid clouds |
US9367360B2 (en) * | 2012-01-30 | 2016-06-14 | Microsoft Technology Licensing, Llc | Deploying a hardware inventory as a cloud-computing stamp |
TWI462017B (en) * | 2012-02-24 | 2014-11-21 | Wistron Corp | Server deployment system and method for updating data |
US9749413B2 (en) * | 2012-05-29 | 2017-08-29 | Intel Corporation | Peer-to-peer interrupt signaling between devices coupled via interconnects |
CN102694863B (en) * | 2012-05-30 | 2015-08-26 | 电子科技大学 | Implementation method of a distributed storage system based on load adjustment and system fault tolerance |
JP5983045B2 (en) * | 2012-05-30 | 2016-08-31 | 富士通株式会社 | Library device |
US8832268B1 (en) * | 2012-08-16 | 2014-09-09 | Amazon Technologies, Inc. | Notification and resolution of infrastructure issues |
GB2525982B (en) * | 2012-10-08 | 2017-08-30 | Fisher Rosemount Systems Inc | Configurable user displays in a process control system |
US9202040B2 (en) | 2012-10-10 | 2015-12-01 | Globalfoundries Inc. | Chip authentication using multi-domain intrinsic identifiers |
US9047417B2 (en) * | 2012-10-29 | 2015-06-02 | Intel Corporation | NUMA aware network interface |
US20140185225A1 (en) * | 2012-12-28 | 2014-07-03 | Joel Wineland | Advanced Datacenter Designs |
US9130824B2 (en) | 2013-01-08 | 2015-09-08 | American Megatrends, Inc. | Chassis management implementation by management instance on baseboard management controller managing multiple computer nodes |
TWI568335B (en) * | 2013-01-15 | 2017-01-21 | 英特爾股份有限公司 | A rack assembly structure |
US9201837B2 (en) * | 2013-03-13 | 2015-12-01 | Futurewei Technologies, Inc. | Disaggregated server architecture for data centers |
US9582010B2 (en) * | 2013-03-14 | 2017-02-28 | Rackspace Us, Inc. | System and method of rack management |
US9634958B2 (en) * | 2013-04-02 | 2017-04-25 | Amazon Technologies, Inc. | Burst capacity for user-defined pools |
US9104562B2 (en) * | 2013-04-05 | 2015-08-11 | International Business Machines Corporation | Enabling communication over cross-coupled links between independently managed compute and storage networks |
CN103281351B (en) * | 2013-04-19 | 2016-12-28 | 武汉方寸科技有限公司 | A high-efficiency cloud service platform for remote sensing data processing and analysis |
US20140317267A1 (en) * | 2013-04-22 | 2014-10-23 | Advanced Micro Devices, Inc. | High-Density Server Management Controller |
US20140337496A1 (en) * | 2013-05-13 | 2014-11-13 | Advanced Micro Devices, Inc. | Embedded Management Controller for High-Density Servers |
CN103294521B (en) * | 2013-05-30 | 2016-08-10 | 天津大学 | A method for reducing data center traffic load and energy consumption |
US9436600B2 (en) * | 2013-06-11 | 2016-09-06 | Svic No. 28 New Technology Business Investment L.L.P. | Non-volatile memory storage for multi-channel memory system |
US20150033222A1 (en) | 2013-07-25 | 2015-01-29 | Cavium, Inc. | Network Interface Card with Virtual Switch and Traffic Flow Policy Enforcement |
US10069686B2 (en) * | 2013-09-05 | 2018-09-04 | Pismo Labs Technology Limited | Methods and systems for managing a device through a manual information input module |
US9306861B2 (en) * | 2013-09-26 | 2016-04-05 | Red Hat Israel, Ltd. | Automatic promiscuous forwarding for a bridge |
US9413713B2 (en) * | 2013-12-05 | 2016-08-09 | Cisco Technology, Inc. | Detection of a misconfigured duplicate IP address in a distributed data center network fabric |
US9792243B2 (en) * | 2013-12-26 | 2017-10-17 | Intel Corporation | Computer architecture to provide flexibility and/or scalability |
US9705798B1 (en) * | 2014-01-07 | 2017-07-11 | Google Inc. | Systems and methods for routing data through data centers using an indirect generalized hypercube network |
US9444695B2 (en) * | 2014-01-30 | 2016-09-13 | Xerox Corporation | Methods and systems for scheduling a task |
CN105940378B (en) * | 2014-02-27 | 2019-08-13 | 英特尔公司 | Techniques for allocating configurable computing resources |
US10404547B2 (en) * | 2014-02-27 | 2019-09-03 | Intel Corporation | Workload optimization, scheduling, and placement for rack-scale architecture computing systems |
US9363926B1 (en) * | 2014-03-17 | 2016-06-07 | Amazon Technologies, Inc. | Modular mass storage system with staggered backplanes |
US9925492B2 (en) * | 2014-03-24 | 2018-03-27 | Mellanox Technologies, Ltd. | Remote transactional memory |
US10218645B2 (en) * | 2014-04-08 | 2019-02-26 | Mellanox Technologies, Ltd. | Low-latency processing in a network node |
US9503391B2 (en) * | 2014-04-11 | 2016-11-22 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and system for network function placement |
US9544233B2 (en) * | 2014-04-28 | 2017-01-10 | New Jersey Institute Of Technology | Congestion management for datacenter network |
US9081828B1 (en) * | 2014-04-30 | 2015-07-14 | Igneous Systems, Inc. | Network addressable storage controller with storage drive profile comparison |
TWI510933B (en) * | 2014-05-13 | 2015-12-01 | Acer Inc | Method for remotely accessing data and local apparatus using the method |
WO2015176262A1 (en) * | 2014-05-22 | 2015-11-26 | 华为技术有限公司 | Node interconnection apparatus, resource control node and server system |
US9477279B1 (en) * | 2014-06-02 | 2016-10-25 | Datadirect Networks, Inc. | Data storage system with active power management and method for monitoring and dynamical control of power sharing between devices in data storage system |
US9602351B2 (en) * | 2014-06-06 | 2017-03-21 | Microsoft Technology Licensing, Llc | Proactive handling of network faults |
US10180889B2 (en) * | 2014-06-23 | 2019-01-15 | Liqid Inc. | Network failover handling in modular switched fabric based data storage systems |
US10382279B2 (en) * | 2014-06-30 | 2019-08-13 | Emc Corporation | Dynamically composed compute nodes comprising disaggregated components |
US10122605B2 (en) * | 2014-07-09 | 2018-11-06 | Cisco Technology, Inc | Annotation of network activity through different phases of execution |
US9892079B2 (en) * | 2014-07-25 | 2018-02-13 | Rajiv Ganth | Unified converged network, storage and compute system |
US9262144B1 (en) * | 2014-08-20 | 2016-02-16 | International Business Machines Corporation | Deploying virtual machine instances of a pattern to regions of a hierarchical tier using placement policies and constraints |
US9684531B2 (en) * | 2014-08-21 | 2017-06-20 | International Business Machines Corporation | Combining blade servers based on workload characteristics |
CN104168332A (en) * | 2014-09-01 | 2014-11-26 | 广东电网公司信息中心 | Load balance and node state monitoring method in high performance computing |
US9858104B2 (en) * | 2014-09-24 | 2018-01-02 | Pluribus Networks, Inc. | Connecting fabrics via switch-to-switch tunneling transparent to network servers |
US10630767B1 (en) * | 2014-09-30 | 2020-04-21 | Amazon Technologies, Inc. | Hardware grouping based computing resource allocation |
US10061599B1 (en) * | 2014-10-16 | 2018-08-28 | American Megatrends, Inc. | Bus enumeration acceleration |
US9098451B1 (en) * | 2014-11-21 | 2015-08-04 | Igneous Systems, Inc. | Shingled repair set for writing data |
US9886306B2 (en) * | 2014-11-21 | 2018-02-06 | International Business Machines Corporation | Cross-platform scheduling with long-term fairness and platform-specific optimization |
WO2016090485A1 (en) * | 2014-12-09 | 2016-06-16 | Cirba Ip Inc. | System and method for routing computing workloads based on proximity |
US20160173600A1 (en) | 2014-12-15 | 2016-06-16 | Cisco Technology, Inc. | Programmable processing engine for a virtual interface controller |
US10057186B2 (en) * | 2015-01-09 | 2018-08-21 | International Business Machines Corporation | Service broker for computational offloading and improved resource utilization |
EP3046028B1 (en) * | 2015-01-15 | 2020-02-19 | Alcatel Lucent | Load-balancing and scaling of cloud resources by migrating a data session |
US9965351B2 (en) * | 2015-01-27 | 2018-05-08 | Quantum Corporation | Power savings in cold storage |
US10234930B2 (en) * | 2015-02-13 | 2019-03-19 | Intel Corporation | Performing power management in a multicore processor |
JP2016167143A (en) * | 2015-03-09 | 2016-09-15 | 富士通株式会社 | Information processing system and control method of the same |
US9276900B1 (en) * | 2015-03-19 | 2016-03-01 | Igneous Systems, Inc. | Network bootstrapping for a distributed storage system |
US10848408B2 (en) * | 2015-03-26 | 2020-11-24 | Vmware, Inc. | Methods and apparatus to control computing resource utilization of monitoring agents |
US10606651B2 (en) * | 2015-04-17 | 2020-03-31 | Microsoft Technology Licensing, Llc | Free form expression accelerator with thread length-based thread assignment to clustered soft processor cores that share a functional circuit |
US10019388B2 (en) * | 2015-04-28 | 2018-07-10 | Liqid Inc. | Enhanced initialization for data storage assemblies |
US9910664B2 (en) * | 2015-05-04 | 2018-03-06 | American Megatrends, Inc. | System and method of online firmware update for baseboard management controller (BMC) devices |
US20160335209A1 (en) * | 2015-05-11 | 2016-11-17 | Quanta Computer Inc. | High-speed data transmission using pcie protocol |
US9696781B2 (en) * | 2015-05-28 | 2017-07-04 | Cisco Technology, Inc. | Automated power control for reducing power usage in communications networks |
US9792248B2 (en) * | 2015-06-02 | 2017-10-17 | Microsoft Technology Licensing, Llc | Fast read/write between networked computers via RDMA-based RPC requests |
US11203486B2 (en) * | 2015-06-02 | 2021-12-21 | Alert Innovation Inc. | Order fulfillment system |
US9606836B2 (en) * | 2015-06-09 | 2017-03-28 | Microsoft Technology Licensing, Llc | Independently networkable hardware accelerators for increased workflow optimization |
CN204887839U (en) * | 2015-07-23 | 2015-12-16 | 中兴通讯股份有限公司 | Single-board module-level water cooling system |
US10055218B2 (en) * | 2015-08-11 | 2018-08-21 | Quanta Computer Inc. | System and method for adding and storing groups of firmware default settings |
US10348574B2 (en) * | 2015-08-17 | 2019-07-09 | Vmware, Inc. | Hardware management systems for disaggregated rack architectures in virtual server rack deployments |
US10736239B2 (en) * | 2015-09-22 | 2020-08-04 | Z-Impact, Inc. | High performance computing rack and storage system with forced cooling |
US10387209B2 (en) * | 2015-09-28 | 2019-08-20 | International Business Machines Corporation | Dynamic transparent provisioning of resources for application specific resources |
US10162793B1 (en) * | 2015-09-29 | 2018-12-25 | Amazon Technologies, Inc. | Storage adapter device for communicating with network storage |
US9888607B2 (en) * | 2015-09-30 | 2018-02-06 | Seagate Technology Llc | Self-biasing storage device sled |
US10216643B2 (en) * | 2015-11-23 | 2019-02-26 | International Business Machines Corporation | Optimizing page table manipulations |
US9811347B2 (en) * | 2015-12-14 | 2017-11-07 | Dell Products, L.P. | Managing dependencies for human interface infrastructure (HII) devices |
US10028401B2 (en) * | 2015-12-18 | 2018-07-17 | Microsoft Technology Licensing, Llc | Sidewall-accessible dense storage rack |
US20170180220A1 (en) * | 2015-12-18 | 2017-06-22 | Intel Corporation | Techniques to Generate Workload Performance Fingerprints for Cloud Infrastructure Elements |
US10452467B2 (en) | 2016-01-28 | 2019-10-22 | Intel Corporation | Automatic model-based computing environment performance monitoring |
US10581711B2 (en) * | 2016-01-28 | 2020-03-03 | Oracle International Corporation | System and method for policing network traffic flows using a ternary content addressable memory in a high performance computing environment |
WO2017146618A1 (en) * | 2016-02-23 | 2017-08-31 | Telefonaktiebolaget Lm Ericsson (Publ) | Methods and modules relating to allocation of host machines |
US20170257970A1 (en) * | 2016-03-04 | 2017-09-07 | Radisys Corporation | Rack having uniform bays and an optical interconnect system for shelf-level, modular deployment of sleds enclosing information technology equipment |
US9811281B2 (en) * | 2016-04-07 | 2017-11-07 | International Business Machines Corporation | Multi-tenant memory service for memory pool architectures |
US10701141B2 (en) * | 2016-06-30 | 2020-06-30 | International Business Machines Corporation | Managing software licenses in a disaggregated environment |
US11706895B2 (en) * | 2016-07-19 | 2023-07-18 | Pure Storage, Inc. | Independent scaling of compute resources and storage resources in a storage system |
US10234833B2 (en) * | 2016-07-22 | 2019-03-19 | Intel Corporation | Technologies for predicting power usage of a data center |
US10034407B2 (en) * | 2016-07-22 | 2018-07-24 | Intel Corporation | Storage sled for a data center |
US20180034908A1 (en) * | 2016-07-27 | 2018-02-01 | Alibaba Group Holding Limited | Disaggregated storage and computation system |
US10365852B2 (en) * | 2016-07-29 | 2019-07-30 | Vmware, Inc. | Resumable replica resynchronization |
US10193997B2 (en) | 2016-08-05 | 2019-01-29 | Dell Products L.P. | Encoded URI references in restful requests to facilitate proxy aggregation |
US10127107B2 (en) * | 2016-08-14 | 2018-11-13 | Nxp Usa, Inc. | Method for performing data transaction that selectively enables memory bank cuts and memory device therefor |
US10108560B1 (en) * | 2016-09-14 | 2018-10-23 | Evol1-Ip, Llc | Ethernet-leveraged hyper-converged infrastructure |
US10303458B2 (en) * | 2016-09-29 | 2019-05-28 | Hewlett Packard Enterprise Development Lp | Multi-platform installer |
US10776342B2 (en) * | 2016-11-18 | 2020-09-15 | Tuxera, Inc. | Systems and methods for recovering lost clusters from a mounted volume |
US10726131B2 (en) * | 2016-11-21 | 2020-07-28 | Facebook, Inc. | Systems and methods for mitigation of permanent denial of service attacks |
US20180150256A1 (en) * | 2016-11-29 | 2018-05-31 | Intel Corporation | Technologies for data deduplication in disaggregated architectures |
CN109891908A (en) * | 2016-11-29 | 2019-06-14 | 英特尔公司 | Technologies for millimeter wave rack interconnects |
US10503671B2 (en) * | 2016-12-29 | 2019-12-10 | Oath Inc. | Controlling access to a shared resource |
US10282549B2 (en) * | 2017-03-07 | 2019-05-07 | Hewlett Packard Enterprise Development Lp | Modifying service operating system of baseboard management controller |
EP3592493A4 (en) * | 2017-03-08 | 2020-12-02 | BWXT Nuclear Energy, Inc. | Apparatus and method for baffle bolt repair |
US20180288152A1 (en) * | 2017-04-01 | 2018-10-04 | Anjaneya R. Chagam Reddy | Storage dynamic accessibility mechanism method and apparatus |
US10331581B2 (en) * | 2017-04-10 | 2019-06-25 | Hewlett Packard Enterprise Development Lp | Virtual channel and resource assignment |
US10355939B2 (en) * | 2017-04-13 | 2019-07-16 | International Business Machines Corporation | Scalable data center network topology on distributed switch |
US10467052B2 (en) * | 2017-05-01 | 2019-11-05 | Red Hat, Inc. | Cluster topology aware container scheduling for efficient data transfer |
US10303615B2 (en) * | 2017-06-16 | 2019-05-28 | Hewlett Packard Enterprise Development Lp | Matching pointers across levels of a memory hierarchy |
US20190166032A1 (en) * | 2017-11-30 | 2019-05-30 | American Megatrends, Inc. | Utilization based dynamic provisioning of rack computing resources |
US10447273B1 (en) * | 2018-09-11 | 2019-10-15 | Advanced Micro Devices, Inc. | Dynamic virtualized field-programmable gate array resource control for performance and reliability |
US11201818B2 (en) * | 2019-04-04 | 2021-12-14 | Cisco Technology, Inc. | System and method of providing policy selection in a network |
2017
- 2017-12-21 US US15/850,325 US20190068466A1 (Abandoned)
- 2017-12-29 US US15/858,305 US20190068464A1 (Abandoned)
- 2017-12-29 US US15/858,286 US20190068523A1 (Abandoned)
- 2017-12-29 US US15/858,288 US20190068521A1 (Abandoned)
- 2017-12-29 US US15/858,316 US20190065260A1 (Abandoned)
- 2017-12-29 US US15/858,549 US20190065401A1 (Abandoned)
- 2017-12-29 US US15/858,542 US11748172B2 (Active)
- 2017-12-29 US US15/858,748 US11614979B2 (Active)
- 2017-12-29 US US15/858,557 US20190065083A1 (Abandoned)
- 2017-12-30 US US15/859,368 US11422867B2 (Active)
- 2017-12-30 US US15/859,385 US20190065281A1 (Abandoned)
- 2017-12-30 US US15/859,364 US11392425B2 (Active)
- 2017-12-30 US US15/859,394 US11467885B2 (Active)
- 2017-12-30 US US15/859,366 US20190065261A1 (Abandoned)
- 2017-12-30 US US15/859,388 US20190065231A1 (Abandoned)
- 2017-12-30 US US15/859,363 US20190068444A1 (Abandoned)
2018
- 2018-03-09 US US15/916,394 US20190065415A1 (Abandoned)
- 2018-03-23 US US15/933,855 US11030017B2 (Active)
- 2018-03-30 US US15/942,108 US20190067848A1 (Abandoned)
- 2018-03-30 US US15/942,101 US11416309B2 (Active)
- 2018-06-29 US US16/023,803 US10888016B2 (Active)
- 2018-06-29 US US16/022,962 US11055149B2 (Active)
- 2018-07-27 CN CN201810845565.8A CN109426316A (Pending)
- 2018-07-27 CN CN201810843475.5A CN109428841A (Pending)
- 2018-07-30 EP EP18852427.6A EP3676708A4 (Pending)
- 2018-07-30 DE DE112018004798.9T DE112018004798T5 (Pending)
- 2018-07-30 WO PCT/US2018/044363 WO2019045928A1 (Application Filing)
- 2018-07-30 WO PCT/US2018/044365 WO2019045929A1 (Application Filing)
- 2018-07-30 WO PCT/US2018/044366 WO2019045930A1 (status unknown)
- 2018-08-30 US US16/642,523 US20200257566A1 (Abandoned)
- 2018-08-30 CN CN201811002563.9A CN109428843A (Pending)
- 2018-08-30 CN CN201811004878.7A CN109426568A (Pending)
- 2018-08-30 CN CN201811004869.8A CN109426633A (Pending)
- 2018-08-30 CN CN201811005041.4A CN109426646A (Pending)
- 2018-08-30 US US16/642,520 US20200192710A1 (Abandoned)
- 2018-08-30 CN CN201811001590.4A CN109428889A (Pending)
- 2018-08-30 CN CN201811004916.9A CN109426630A (Pending)
- 2018-08-30 WO PCT/US2018/048917 WO2019046620A1 (Application Filing)
- 2018-08-30 WO PCT/US2018/048946 WO2019046639A1 (Application Filing)
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10802229B2 (en) | 2016-07-22 | 2020-10-13 | Intel Corporation | Technologies for switching network traffic in a data center |
US10785549B2 (en) | 2016-07-22 | 2020-09-22 | Intel Corporation | Technologies for switching network traffic in a data center |
US10791384B2 (en) | 2016-07-22 | 2020-09-29 | Intel Corporation | Technologies for switching network traffic in a data center |
US11595277B2 (en) | 2016-07-22 | 2023-02-28 | Intel Corporation | Technologies for switching network traffic in a data center |
US11128553B2 (en) | 2016-07-22 | 2021-09-21 | Intel Corporation | Technologies for switching network traffic in a data center |
US11977923B2 (en) | 2016-11-29 | 2024-05-07 | Intel Corporation | Cloud-based scale-up system composition |
US11907557B2 (en) | 2016-11-29 | 2024-02-20 | Intel Corporation | Technologies for dividing work across accelerator devices |
US10990309B2 (en) * | 2016-11-29 | 2021-04-27 | Intel Corporation | Technologies for coordinating disaggregated accelerator device resources |
US11429297B2 (en) | 2016-11-29 | 2022-08-30 | Intel Corporation | Technologies for dividing work across accelerator devices |
US11137922B2 (en) | 2016-11-29 | 2021-10-05 | Intel Corporation | Technologies for providing accelerated functions as a service in a disaggregated architecture |
US10963176B2 (en) | 2016-11-29 | 2021-03-30 | Intel Corporation | Technologies for offloading acceleration task scheduling operations to accelerator sleds |
US11029870B2 (en) | 2016-11-29 | 2021-06-08 | Intel Corporation | Technologies for dividing work across accelerator devices |
US11995330B2 (en) | 2017-08-30 | 2024-05-28 | Intel Corporation | Technologies for providing accelerated functions as a service in a disaggregated architecture |
US10977085B2 (en) | 2018-05-17 | 2021-04-13 | International Business Machines Corporation | Optimizing dynamical resource allocations in disaggregated data centers |
US10841367B2 (en) | 2018-05-17 | 2020-11-17 | International Business Machines Corporation | Optimizing dynamical resource allocations for cache-dependent workloads in disaggregated data centers |
US20190356729A1 (en) * | 2018-05-17 | 2019-11-21 | International Business Machines Corporation | Optimizing dynamic resource allocations for storage-dependent workloads in disaggregated data centers |
US10936374B2 (en) | 2018-05-17 | 2021-03-02 | International Business Machines Corporation | Optimizing dynamic resource allocations for memory-dependent workloads in disaggregated data centers |
US10893096B2 (en) | 2018-05-17 | 2021-01-12 | International Business Machines Corporation | Optimizing dynamical resource allocations using a data heat map in disaggregated data centers |
US11221886B2 (en) | 2018-05-17 | 2022-01-11 | International Business Machines Corporation | Optimizing dynamical resource allocations for cache-friendly workloads in disaggregated data centers |
US10601903B2 (en) | 2018-05-17 | 2020-03-24 | International Business Machines Corporation | Optimizing dynamical resource allocations based on locality of resources in disaggregated data centers |
US11330042B2 (en) * | 2018-05-17 | 2022-05-10 | International Business Machines Corporation | Optimizing dynamic resource allocations for storage-dependent workloads in disaggregated data centers |
US10795713B2 (en) | 2018-05-25 | 2020-10-06 | Vmware, Inc. | Live migration of a virtualized compute accelerator workload |
US10684887B2 (en) * | 2018-05-25 | 2020-06-16 | Vmware, Inc. | Live migration of a virtualized compute accelerator workload |
US11263122B2 (en) * | 2019-04-09 | 2022-03-01 | Vmware, Inc. | Implementing fine grain data coherency of a shared memory region |
US11003479B2 (en) * | 2019-04-29 | 2021-05-11 | Intel Corporation | Device, system and method to communicate a kernel binary via a network |
EP3757784A1 (en) * | 2019-06-28 | 2020-12-30 | Intel Corporation | Technologies for managing accelerator resources |
EP3974984A1 (en) * | 2020-09-25 | 2022-03-30 | INTEL Corporation | Technologies for scaling inter-kernel technologies for accelerator device kernels |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20190065281A1 (en) | Technologies for auto-migration in accelerated architectures | |
US11888967B2 (en) | Technologies for dynamic accelerator selection | |
US11467873B2 (en) | Technologies for RDMA queue pair QOS management | |
US20200241999A1 (en) | Performance monitoring for short-lived functions | |
US11861424B2 (en) | Technologies for providing efficient reprovisioning in an accelerator device | |
US20190250857A1 (en) | Technologies for automatic workload detection and cache QoS policy application | |
EP3731090A1 (en) | Technologies for providing resource health based node composition and management | |
US20210334138A1 (en) | Technologies for pre-configuring accelerators by predicting bit-streams | |
EP3731063B1 (en) | Technologies for providing adaptive power management in an accelerator sled | |
US10783100B2 (en) | Technologies for flexible I/O endpoint acceleration | |
US10579547B2 (en) | Technologies for providing I/O channel abstraction for accelerator device kernels | |
US20190319892A1 (en) | Technologies for managing burst bandwidth requirements | |
EP3739448B1 (en) | Technologies for compressing communication for accelerator devices | |
EP3757784A1 (en) | Technologies for managing accelerator resources | |
CN111492348A (en) | Techniques for achieving guaranteed network quality with hardware acceleration | |
US20210073161A1 (en) | Technologies for establishing communication channel between accelerator device kernels | |
EP3731095A1 (en) | Technologies for providing inter-kernel communication abstraction to support scale-up and scale-out |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: INTEL CORPORATION, CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BERNAT, FRANCESC GUIM;CUSTODIO, EVAN;BALLE, SUSANNE M.;AND OTHERS;SIGNING DATES FROM 20180122 TO 20180126;REEL/FRAME:044983/0674 |
| STCT | Information on status: administrative procedure adjustment | Free format text: PROSECUTION SUSPENDED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| STCV | Information on status: appeal procedure | Free format text: APPEAL BRIEF (OR SUPPLEMENTAL BRIEF) ENTERED AND FORWARDED TO EXAMINER |
| STCV | Information on status: appeal procedure | Free format text: EXAMINER'S ANSWER TO APPEAL BRIEF MAILED |
| STCV | Information on status: appeal procedure | Free format text: ON APPEAL -- AWAITING DECISION BY THE BOARD OF APPEALS |
| STCV | Information on status: appeal procedure | Free format text: BOARD OF APPEALS DECISION RENDERED |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION |