US20200257566A1 - Technologies for managing disaggregated resources in a data center
- Publication number: US20200257566A1 (application US16/642,523)
- Authority: US (United States)
- Prior art keywords: resources, compute device, compute, sled, threads
- Prior art date
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- B25J15/0014 — Gripping heads and other end effectors having fork, comb or plate shaped means for engaging the lower surface of an object to be transported
- G06F1/183 — Internal mounting support structures, e.g. for printed circuit boards, internal connecting means
- G06F1/20 — Cooling means
- G06F11/3006 — Monitoring arrangements adapted to distributed computing systems, e.g. networked systems, clusters, multiprocessor systems
- G06F11/3409 — Recording or statistical evaluation of computer activity for performance assessment
- G06F11/3442 — Recording or statistical evaluation of computer activity for planning or managing the needed capacity
- G06F11/3466 — Performance evaluation by tracing or monitoring
- G06F12/0623 — Address space extension for memory modules
- G06F13/1652 — Memory-bus access arbitration in a multiprocessor architecture
- G06F13/1657 — Access to multiple memories
- G06F13/1668 — Details of memory controller
- G06F13/28 — Access to input/output bus using burst mode transfer, e.g. direct memory access (DMA), cycle steal
- G06F13/30 — Burst mode transfer with priority control
- G06F13/4022 — Coupling between buses using switching circuits, e.g. switching matrix, connection or expansion network
- G06F13/4221 — Bus transfer protocol on a parallel input/output bus, e.g. ISA, EISA, PCI, SCSI
- G06F15/161 — Computing infrastructure, e.g. computer clusters, blade chassis or hardware partitioning
- G06F15/7807 — System on chip, i.e. computer system on a single chip; system in package
- G06F15/7867 — Architectures of general purpose stored program computers with reconfigurable architecture
- G06F21/105 — Arrangements for software license management or administration
- G06F9/44 — Arrangements for executing specific programs
- G06F9/4856 — Task life-cycle with resumption on a different machine, e.g. task migration, virtual machine migration
- G06F9/505 — Allocation of resources to service a request, the resource being a machine, considering the load
- G06F9/5061 — Partitioning or combining of resources
- G06F9/5072 — Grid computing
- G06F9/5083 — Techniques for rebalancing the load in a distributed system
- G06F9/5088 — Load rebalancing involving task migration
- G06F2200/201 — Cooling arrangements using cooling fluid
- G06F2201/86 — Event-based monitoring
- G06F2201/885 — Monitoring specific for caches
- G06N3/063 — Physical realisation of neural networks using electronic means
- G06Q10/0631 — Resource planning, allocation, distributing or scheduling for enterprises or organisations
- G06Q30/0283 — Price estimation or determination
- H04L41/044 — Network management architectures comprising hierarchical management structures
- H04L41/0816 — Configuration setting triggered by an adaptation, e.g. in response to network events
- H04L41/0896 — Bandwidth or capacity management
- H04L41/14 — Network analysis or design
- H04L41/5019 — Ensuring fulfilment of SLA
- H04L41/5025 — Ensuring fulfilment of SLA by proactively reacting to service quality change
- H04L43/065 — Generation of reports related to network devices
- H04L43/0876 — Network utilisation, e.g. volume of load or congestion level
- H04L43/16 — Threshold monitoring
- H04L47/25 — Flow or congestion control with rate modified by the source upon detecting a change of network conditions
- H04L47/762 — Dynamic resource allocation triggered by the network
- H04L47/83 — Admission control or resource allocation based on usage prediction
- H04L49/40 — Packet switching elements: constructional details, e.g. power supply, mechanical construction or backplane
- H04L63/0428 — Confidential data exchange wherein the data content is protected, e.g. by encrypting or encapsulating the payload
- H04L67/10 — Protocols in which an application is distributed across nodes in the network
- H04L67/1008 — Server selection for load balancing based on parameters of servers, e.g. available memory or workload
- H04L69/16 — Implementation or adaptation of Internet protocol [IP], transmission control protocol [TCP] or user datagram protocol [UDP]
- H04Q1/10 — Exchange station construction
- H05K7/1489 — Cabinets, chassis or racks characterized by the mounting of blades therein, e.g. brackets, rails, trays
- H05K7/1498 — Resource management, optimisation arrangements, e.g. configuration, identification, tracking, physical location
- H05K7/18 — Construction of rack or frame
- H05K7/20209 — Thermal management, e.g. fan control
- H05K7/20736 — Forced ventilation of a gaseous coolant within cabinets for removing heat from server blades
- Y02D30/00 — Reducing energy consumption in communication networks
Definitions
- Data centers that execute workloads (e.g., services, applications, processes, sets of operations, etc.) on behalf of clients (e.g., customers, tenants, etc.) typically contain racks of compute devices to execute the various workloads.
- Data centers are evolving toward a disaggregated architecture in which storage media can be shared, so as to address underutilization of the capacity and/or throughput of storage devices caused by imbalanced requirements across applications and over time.
- Each compute device may have multiple components located in various positions on the compute device that perform different functions (e.g., memory to access data, compute components to execute operations, etc.) and each data center may include thousands to tens-of-thousands of such compute devices.
- Scaling the management control plane generally requires additional switching mechanisms, such as a spine switch in data centers that employ top-of-rack switches.
- Client workloads are executed in virtualized or containerized clouds (e.g., using OpenStack). Accordingly, the composition and teardown of client networks, the spawning of racks, flow management, and routing across resources must all be managed. Generally, such control plane management requires packet telemetry gathering and analysis at scale, in addition to management of the hardware switches with actions that influence the client virtual networks.
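- To illustrate the telemetry-gathering step, the following is a minimal sketch, assuming a hypothetical controller and invented names (`PortTelemetry`, `find_congested_links`, the 80% threshold); the patent does not prescribe any particular implementation:

```python
from dataclasses import dataclass


@dataclass
class PortTelemetry:
    """One utilization sample reported by a switch port."""
    switch_id: str
    port_id: int
    bytes_per_sec: float
    capacity_bytes_per_sec: float


def find_congested_links(samples, threshold=0.8):
    """Return (switch, port, utilization) triples above the threshold.

    A real controller would stream these samples continuously and feed
    the results into flow management and rerouting decisions; this
    sketch shows only the aggregation step.
    """
    congested = []
    for s in samples:
        utilization = s.bytes_per_sec / s.capacity_bytes_per_sec
        if utilization > threshold:
            congested.append((s.switch_id, s.port_id, utilization))
    return congested


samples = [
    PortTelemetry("pod-switch-250", 1, 9.1e9, 10e9),  # 91% utilized
    PortTelemetry("pod-switch-250", 2, 2.0e9, 10e9),  # 20% utilized
]
print(find_congested_links(samples))  # [('pod-switch-250', 1, 0.91)]
```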
- Examples include spawning a storage volume from disaggregated, distributed disks, or provisioning disaggregated, pooled field-programmable gate arrays (FPGAs) for machine learning or deep learning deployments (e.g., using the Theano library, the Caffe deep learning framework, etc.).
- FIG. 1 is a simplified diagram of at least one embodiment of a data center for executing workloads with disaggregated resources;
- FIG. 2 is a simplified diagram of at least one embodiment of a pod that may be included in the data center of FIG. 1;
- FIG. 3 is a perspective view of at least one embodiment of a rack that may be included in the pod of FIG. 2;
- FIG. 4 is a side elevation view of the rack of FIG. 3;
- FIG. 5 is a perspective view of the rack of FIG. 3 having a sled mounted therein;
- FIG. 6 is a simplified block diagram of at least one embodiment of a top side of the sled of FIG. 5;
- FIG. 7 is a simplified block diagram of at least one embodiment of a bottom side of the sled of FIG. 6;
- FIG. 8 is a simplified block diagram of at least one embodiment of a compute sled usable in the data center of FIG. 1;
- FIG. 9 is a top perspective view of at least one embodiment of the compute sled of FIG. 8;
- FIG. 10 is a simplified block diagram of at least one embodiment of an accelerator sled usable in the data center of FIG. 1;
- FIG. 11 is a top perspective view of at least one embodiment of the accelerator sled of FIG. 10;
- FIG. 12 is a simplified block diagram of at least one embodiment of a storage sled usable in the data center of FIG. 1;
- FIG. 13 is a top perspective view of at least one embodiment of the storage sled of FIG. 12;
- FIG. 14 is a simplified block diagram of at least one embodiment of a memory sled usable in the data center of FIG. 1;
- FIG. 15 is a simplified block diagram of a system that may be established within the data center of FIG. 1 to execute workloads with managed nodes composed of disaggregated resources;
- FIG. 16 is a simplified diagram of at least one embodiment of a system for managing disaggregated resources in a data center that includes a controller compute device;
- FIG. 17 is a simplified block diagram of at least one embodiment of the controller compute device of the system of FIG. 16;
- FIG. 18 is a simplified block diagram of at least one embodiment of a method for managing disaggregated resources in a data center that may be performed by the controller compute device of FIGS. 16 and 17;
- FIG. 19 is a simplified block diagram of at least one embodiment of a communication data flow for performing a hardware lifecycle management operation that may be performed by the controller compute device of FIGS. 16 and 17;
- FIG. 20 is a simplified block diagram of at least one embodiment of a communication data flow for scheduling and managing groups of nodes that may be performed by the controller compute device of FIGS. 16 and 17;
- FIG. 21 is a simplified block diagram of at least one embodiment of a communication data flow for managing an underlay network that may be performed by the controller compute device of FIGS. 16 and 17;
- FIG. 22 is a simplified block diagram of at least one embodiment of a communication data flow for allocating a network slice that may be performed by the controller compute device of FIGS. 16 and 17.
- references in the specification to “one embodiment,” “an embodiment,” “an illustrative embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
- items included in a list in the form of “at least one A, B, and C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
- items listed in the form of “at least one of A, B, or C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
- the disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof.
- the disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors.
- a machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).
- a data center 100 (e.g., a facility used to house computer systems and associated components, such as telecommunications and storage systems) in which disaggregated resources may cooperatively execute one or more workloads (e.g., applications on behalf of customers) includes multiple pods 110 , 120 , 130 , 140 , each of which includes one or more rows of racks.
- A pod, by one definition, is a group of racks.
- each rack houses multiple sleds, each of which may be primarily equipped with a particular type of resource (e.g., memory devices, data storage devices, accelerator devices, general purpose processors), i.e., resources that can be logically coupled to form a composed node, which can act as, for example, a server.
- the sleds in each pod 110 , 120 , 130 , 140 are connected to multiple pod switches (e.g., switches that route data communications to and from sleds within the pod).
- the pod switches connect with spine switches 150 that switch communications among pods (e.g., the pods 110 , 120 , 130 , 140 ) in the data center 100 .
- the sleds may be connected with a fabric using Intel® Omni-Path technology. In other embodiments, the sleds may be connected with other fabrics, such as InfiniBand or Ethernet.
- resources within sleds in the data center 100 may be allocated to a group (referred to herein as a “managed node”) containing resources from one or more sleds to be collectively utilized in the execution of a workload.
- the workload can execute as if the resources belonging to the managed node were located on the same sled.
- the resources in a managed node may belong to sleds belonging to different racks, and even to different pods 110 , 120 , 130 , 140 .
- some resources of a single sled may be allocated to one managed node while other resources of the same sled are allocated to a different managed node (e.g., one processor assigned to one managed node and another processor of the same sled assigned to a different managed node).
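- To make this composition concrete, here is a minimal sketch, under the assumption of a hypothetical data model (the `Resource` and `ManagedNode` names and fields are invented for illustration, not taken from the patent), of a managed node holding resources drawn from different sleds, racks, and pods:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Resource:
    """A single disaggregated resource on a sled."""
    kind: str        # "compute", "memory", "storage", "accelerator"
    sled_id: str
    rack_id: str
    pod_id: str
    capacity: float  # e.g., cores, GiB, or TiB, depending on kind


@dataclass
class ManagedNode:
    """Resources from one or more sleds, used collectively for a workload."""
    node_id: str
    resources: list = field(default_factory=list)

    def add(self, resource: Resource):
        self.resources.append(resource)

    def spans_racks(self) -> bool:
        return len({r.rack_id for r in self.resources}) > 1


# Resources in one managed node may come from different racks and pods.
node = ManagedNode("node-0")
node.add(Resource("compute", "sled-3", "rack-240", "pod-110", capacity=32))
node.add(Resource("memory", "sled-7", "rack-241", "pod-120", capacity=512))
print(node.spans_racks())  # True
```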
- A data center comprising disaggregated resources can be used in a wide variety of contexts, such as enterprise, government, cloud service provider, and communications service provider (e.g., Telco's), as well as in a wide variety of sizes, from cloud service provider mega-data centers that consume over 100,000 sq. ft. to single- or multi-rack installations for use in base stations.
- The disaggregation of resources to sleds comprised predominantly of a single type of resource (e.g., compute sleds comprising primarily compute resources, memory sleds containing primarily memory resources), together with the selective allocation and deallocation of those disaggregated resources to form a managed node assigned to execute a workload, improves the operation and resource usage of the data center 100 relative to typical data centers comprised of hyperconverged servers containing compute, memory, storage, and perhaps additional resources in a single chassis.
- resources of a given type can be upgraded independently of other resources.
- Because different resource types typically have different refresh rates, greater resource utilization and reduced total cost of ownership may be achieved.
- a data center operator can upgrade the processors throughout their facility by only swapping out the compute sleds.
- accelerator and storage resources may not be contemporaneously upgraded and, rather, may be allowed to continue operating until those resources are scheduled for their own refresh.
- Resource utilization may also increase. For example, if managed nodes are composed (e.g., resources collectively combined to provide certain functionality) based on requirements of the workloads that will be running on them, resources within a node are more likely to be fully utilized. Such utilization may allow for more managed nodes to run in a data center with a given set of resources, or for a data center expected to run a given set of workloads, to be built using fewer resources.
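- As a hedged illustration of composing a managed node from workload requirements, a simple greedy matcher might draw free resources from a pool until each requirement is met; this reuses the hypothetical `Resource`/`ManagedNode` model from the sketch above and is not an algorithm specified by the patent:

```python
def compose_node(node_id, requirements, free_pool):
    """Greedily compose a managed node from a pool of free resources.

    requirements: dict mapping resource kind -> capacity needed,
                  e.g. {"compute": 48, "memory": 768}.
    free_pool:    list of Resource objects not yet allocated.
    Returns (ManagedNode, remaining_pool); raises if unsatisfiable.
    """
    node = ManagedNode(node_id)
    remaining = list(free_pool)
    for kind, needed in requirements.items():
        acquired = 0.0
        # Iterate over a snapshot so we can safely remove from `remaining`.
        for r in [c for c in remaining if c.kind == kind]:
            if acquired >= needed:
                break
            node.add(r)
            remaining.remove(r)
            acquired += r.capacity
        if acquired < needed:
            raise RuntimeError(f"not enough {kind} to satisfy {needed}")
    return node, remaining


# Hypothetical usage: 48 cores of compute and 768 GiB of memory,
# possibly spanning several sleds.
# node, pool = compose_node("node-1", {"compute": 48, "memory": 768}, pool)
```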
- the pod 110 in the illustrative embodiment, includes a set of rows 200 , 210 , 220 , 230 of racks 240 .
- Each rack 240 may house multiple sleds (e.g., sixteen sleds) and provide power and data connections to the housed sleds, as described in more detail herein.
- the racks in each row 200 , 210 , 220 , 230 are connected to multiple pod switches 250 , 260 .
- the pod switch 250 includes a set of ports 252 to which the sleds of the racks of the pod 110 are connected and another set of ports 254 that connect the pod 110 to the spine switches 150 to provide connectivity to other pods in the data center 100 .
- the pod switch 260 includes a set of ports 262 to which the sleds of the racks of the pod 110 are connected and a set of ports 264 that connect the pod 110 to the spine switches 150 . As such, the use of the pair of switches 250 , 260 provides an amount of redundancy to the pod 110 .
- the switches 150, 250, 260 may be embodied as dual-mode optical switches, capable of routing both Ethernet protocol communications carrying Internet Protocol (IP) packets and communications according to a second, high-performance link-layer protocol (e.g., Intel's Omni-Path Architecture, InfiniBand, PCI Express) via optical signaling media of an optical fabric.
- each of the other pods 120 , 130 , 140 may be similarly structured as, and have components similar to, the pod 110 shown in and described in regard to FIG. 2 (e.g., each pod may have rows of racks housing multiple sleds as described above).
- each pod 110 , 120 , 130 , 140 may be connected to a different number of pod switches, providing even more failover capacity.
- pods may be arranged differently than the rows-of-racks configuration shown in FIGS. 1-2 .
- a pod may be embodied as multiple sets of racks in which each set of racks is arranged radially, i.e., the racks are equidistant from a center switch.
- each illustrative rack 240 of the data center 100 includes two elongated support posts 302 , 304 , which are arranged vertically.
- the elongated support posts 302 , 304 may extend upwardly from a floor of the data center 100 when deployed.
- the rack 240 also includes one or more horizontal pairs 310 of elongated support arms 312 (identified in FIG. 3 via a dashed ellipse) configured to support a sled of the data center 100 as discussed below.
- One elongated support arm 312 of the pair of elongated support arms 312 extends outwardly from the elongated support post 302 and the other elongated support arm 312 extends outwardly from the elongated support post 304 .
- each sled of the data center 100 is embodied as a chassis-less sled. That is, each sled has a chassis-less circuit board substrate on which physical resources (e.g., processors, memory, accelerators, storage, etc.) are mounted as discussed in more detail below.
- the rack 240 is configured to receive the chassis-less sleds.
- each pair 310 of elongated support arms 312 defines a sled slot 320 of the rack 240 , which is configured to receive a corresponding chassis-less sled.
- each illustrative elongated support arm 312 includes a circuit board guide 330 configured to receive the chassis-less circuit board substrate of the sled.
- Each circuit board guide 330 is secured to, or otherwise mounted to, a top side 332 of the corresponding elongated support arm 312 .
- each circuit board guide 330 is mounted at a distal end of the corresponding elongated support arm 312 relative to the corresponding elongated support post 302 , 304 .
- not every circuit board guide 330 may be referenced in each Figure.
- Each circuit board guide 330 includes an inner wall that defines a circuit board slot 380 configured to receive the chassis-less circuit board substrate of a sled 400 when the sled 400 is received in the corresponding sled slot 320 of the rack 240 .
- a user aligns the chassis-less circuit board substrate of an illustrative chassis-less sled 400 to a sled slot 320 .
- the user, or robot, may then slide the chassis-less circuit board substrate forward into the sled slot 320 such that each side edge 414 of the chassis-less circuit board substrate is received in a corresponding circuit board slot 380 of the circuit board guides 330 of the pair 310 of elongated support arms 312 that define the corresponding sled slot 320, as shown in FIG. 4.
- each type of resource can be upgraded independently of each other and at their own optimized refresh rate.
- the sleds are configured to blindly mate with power and data communication cables in each rack 240 , enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced.
- the data center 100 may operate (e.g., execute workloads, undergo maintenance and/or upgrades, etc.) without human involvement on the data center floor.
- a human may facilitate one or more maintenance or upgrade operations in the data center 100 .
- each circuit board guide 330 is dual sided. That is, each circuit board guide 330 includes an inner wall that defines a circuit board slot 380 on each side of the circuit board guide 330 . In this way, each circuit board guide 330 can support a chassis-less circuit board substrate on either side. As such, a single additional elongated support post may be added to the rack 240 to turn the rack 240 into a two-rack solution that can hold twice as many sled slots 320 as shown in FIG. 3 .
- the illustrative rack 240 includes seven pairs 310 of elongated support arms 312 that define a corresponding seven sled slots 320 , each configured to receive and support a corresponding sled 400 as discussed above.
- the rack 240 may include additional or fewer pairs 310 of elongated support arms 312 (i.e., additional or fewer sled slots 320). It should be appreciated that because the sled 400 is chassis-less, the sled 400 may have an overall height that is different than typical servers. As such, in some embodiments, the height of each sled slot 320 may be shorter than the height of a typical server (e.g., shorter than a single rack unit, "1 U").
- the vertical distance between each pair 310 of elongated support arms 312 may be less than a standard rack unit "1 U." Additionally, due to the relative decrease in height of the sled slots 320, the overall height of the rack 240 in some embodiments may be shorter than the height of traditional rack enclosures. For example, in some embodiments, each of the elongated support posts 302, 304 may have a length of six feet or less. Again, in other embodiments, the rack 240 may have different dimensions. For example, in some embodiments, the vertical distance between each pair 310 of elongated support arms 312 may be greater than a standard rack unit "1 U".
- the increased vertical distance between the sleds allows for larger heat sinks to be attached to the physical resources and for larger fans to be used (e.g., in the fan array 370 described below) for cooling each sled, which in turn can allow the physical resources to operate at increased power levels.
- the rack 240 does not include any walls, enclosures, or the like. Rather, the rack 240 is an enclosure-less rack that is opened to the local environment.
- an end plate may be attached to one of the elongated support posts 302 , 304 in those situations in which the rack 240 forms an end-of-row rack in the data center 100 .
- each elongated support post 302 , 304 includes an inner wall that defines an inner chamber in which interconnects may be located.
- the interconnects routed through the elongated support posts 302 , 304 may be embodied as any type of interconnects including, but not limited to, data or communication interconnects to provide communication connections to each sled slot 320 , power interconnects to provide power to each sled slot 320 , and/or other types of interconnects.
- the rack 240 in the illustrative embodiment, includes a support platform on which a corresponding optical data connector (not shown) is mounted.
- Each optical data connector is associated with a corresponding sled slot 320 and is configured to mate with an optical data connector of a corresponding sled 400 when the sled 400 is received in the corresponding sled slot 320 .
- optical connections between components (e.g., sleds, racks, and switches) in the data center 100 are made with a blind mate optical connection.
- a door on each cable may prevent dust from contaminating the fiber inside the cable.
- the door is pushed open when the end of the cable approaches or enters the connector mechanism. Subsequently, the optical fiber inside the cable may enter a gel within the connector mechanism and the optical fiber of one cable comes into contact with the optical fiber of another cable within the gel inside the connector mechanism.
- the illustrative rack 240 also includes a fan array 370 coupled to the cross-support arms of the rack 240 .
- the fan array 370 includes one or more rows of cooling fans 372 , which are aligned in a horizontal line between the elongated support posts 302 , 304 .
- the fan array 370 includes a row of cooling fans 372 for each sled slot 320 of the rack 240 .
- each sled 400 does not include any on-board cooling system in the illustrative embodiment and, as such, the fan array 370 provides cooling for each sled 400 received in the rack 240 .
- Each rack 240 also includes a power supply associated with each sled slot 320 .
- Each power supply is secured to one of the elongated support arms 312 of the pair 310 of elongated support arms 312 that define the corresponding sled slot 320 .
- the rack 240 may include a power supply coupled or secured to each elongated support arm 312 extending from the elongated support post 302 .
- Each power supply includes a power connector configured to mate with a power connector of the sled 400 when the sled 400 is received in the corresponding sled slot 320 .
- the sled 400 does not include any on-board power supply and, as such, the power supplies provided in the rack 240 supply power to corresponding sleds 400 when mounted to the rack 240 .
- Each power supply is configured to satisfy the power requirements for its associated sled, which can vary from sled to sled.
- the power supplies provided in the rack 240 can operate independent of each other. That is, within a single rack, a first power supply providing power to a compute sled can provide power levels that are different than power levels supplied by a second power supply providing power to an accelerator sled.
- the power supplies may be controllable at the sled level or rack level, and may be controlled locally by components on the associated sled or remotely, such as by another sled or an orchestrator.
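- As a sketch of that control model, assuming a hypothetical `RackPowerController` interface (the patent describes the capability but defines no API), per-slot power supplies might be driven locally or by a remote orchestrator like this:

```python
class RackPowerController:
    """Toy model of per-slot power supplies set independently per sled.

    In the rack described above, each sled slot has its own power supply,
    so a compute sled and an accelerator sled in the same rack can run
    at different power levels.
    """

    def __init__(self, num_slots):
        self.power_watts = {slot: 0.0 for slot in range(num_slots)}

    def set_slot_power(self, slot, watts):
        # A locally- or remotely-initiated change (e.g., requested by
        # components on the sled itself or by an orchestrator).
        self.power_watts[slot] = watts

    def rack_total(self):
        return sum(self.power_watts.values())


rack = RackPowerController(num_slots=7)
rack.set_slot_power(0, 450.0)  # compute sled
rack.set_slot_power(1, 900.0)  # accelerator sled at a higher power level
print(rack.rack_total())       # 1350.0
```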
- each sled 400 in the illustrative embodiment, is configured to be mounted in a corresponding rack 240 of the data center 100 as discussed above.
- each sled 400 may be optimized or otherwise configured for performing particular tasks, such as compute tasks, acceleration tasks, data storage tasks, etc.
- the sled 400 may be embodied as a compute sled 800 as discussed below in regard to FIGS. 8-9 , an accelerator sled 1000 as discussed below in regard to FIGS. 10-11 , a storage sled 1200 as discussed below in regard to FIGS. 12-13 , or as a sled optimized or otherwise configured to perform other specialized tasks, such as a memory sled 1400 , discussed below in regard to FIG. 14 .
- the illustrative sled 400 includes a chassis-less circuit board substrate 602 , which supports various physical resources (e.g., electrical components) mounted thereon.
- the circuit board substrate 602 is “chassis-less” in that the sled 400 does not include a housing or enclosure. Rather, the chassis-less circuit board substrate 602 is open to the local environment.
- the chassis-less circuit board substrate 602 may be formed from any material capable of supporting the various electrical components mounted thereon.
- the chassis-less circuit board substrate 602 is formed from an FR-4 glass-reinforced epoxy laminate material. Of course, other materials may be used to form the chassis-less circuit board substrate 602 in other embodiments.
- the chassis-less circuit board substrate 602 includes multiple features that improve the thermal cooling characteristics of the various electrical components mounted on the chassis-less circuit board substrate 602 .
- the chassis-less circuit board substrate 602 does not include a housing or enclosure, which may improve the airflow over the electrical components of the sled 400 by reducing those structures that may inhibit air flow.
- the chassis-less circuit board substrate 602 is not positioned in an individual housing or enclosure, there is no vertically-arranged backplane (e.g., a backplate of the chassis) attached to the chassis-less circuit board substrate 602 , which could inhibit air flow across the electrical components.
- the chassis-less circuit board substrate 602 has a geometric shape configured to reduce the length of the airflow path across the electrical components mounted to the chassis-less circuit board substrate 602 .
- the illustrative chassis-less circuit board substrate 602 has a width 604 that is greater than a depth 606 of the chassis-less circuit board substrate 602 .
- the chassis-less circuit board substrate 602 has a width of about 21 inches and a depth of about 9 inches, compared to a typical server that has a width of about 17 inches and a depth of about 39 inches.
- an airflow path 608 that extends from a front edge 610 of the chassis-less circuit board substrate 602 toward a rear edge 612 has a shorter distance relative to typical servers, which may improve the thermal cooling characteristics of the sled 400 .
- the various physical resources mounted to the chassis-less circuit board substrate 602 are mounted in corresponding locations such that no two substantively heat-producing electrical components shadow each other as discussed in more detail below.
- no two electrical components which produce appreciable heat during operation (i.e., greater than a nominal heat sufficient enough to adversely impact the cooling of another electrical component), are mounted to the chassis-less circuit board substrate 602 linearly in-line with each other along the direction of the airflow path 608 (i.e., along a direction extending from the front edge 610 toward the rear edge 612 of the chassis-less circuit board substrate 602 ).
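- The no-shadowing rule can be expressed as a simple placement check. Below is a minimal sketch, assuming a hypothetical component model and lane tolerance (the patent states only the constraint, not a verification procedure):

```python
def violates_airflow_shadowing(components, lane_tolerance=0.5):
    """Check whether two heat-producing components are in-line along
    the airflow path (front edge toward rear edge of the board).

    components: list of (name, x_position_across_board, produces_heat),
                where x is measured perpendicular to the airflow path.
    Two heat producers whose x positions fall within lane_tolerance of
    each other sit in the same airflow lane, so the downstream one
    would receive air pre-heated by the upstream one.
    """
    heat_producers = [(n, x) for n, x, hot in components if hot]
    for i, (name_a, x_a) in enumerate(heat_producers):
        for name_b, x_b in heat_producers[i + 1:]:
            if abs(x_a - x_b) < lane_tolerance:
                return (name_a, name_b)  # offending pair
    return None


layout = [
    ("processor-0", 2.0, True),
    ("processor-1", 6.0, True),  # different lane: no shadowing
    ("nic", 2.1, False),         # low heat: allowed behind processor-0
]
print(violates_airflow_shadowing(layout))  # None
```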
- the illustrative sled 400 includes one or more physical resources 620 mounted to a top side 650 of the chassis-less circuit board substrate 602 .
- the physical resources 620 may be embodied as any type of processor, controller, or other compute circuit capable of performing various tasks such as compute functions and/or controlling the functions of the sled 400 depending on, for example, the type or intended functionality of the sled 400 .
- the physical resources 620 may be embodied as high-performance processors in embodiments in which the sled 400 is embodied as a compute sled, as accelerator co-processors or circuits in embodiments in which the sled 400 is embodied as an accelerator sled, storage controllers in embodiments in which the sled 400 is embodied as a storage sled, or a set of memory devices in embodiments in which the sled 400 is embodied as a memory sled.
- the sled 400 also includes one or more additional physical resources 630 mounted to the top side 650 of the chassis-less circuit board substrate 602 .
- the additional physical resources include a network interface controller (NIC) as discussed in more detail below.
- the physical resources 630 may include additional or other electrical components, circuits, and/or devices in other embodiments.
- the physical resources 620 are communicatively coupled to the physical resources 630 via an input/output (I/O) subsystem 622 .
- the I/O subsystem 622 may be embodied as circuitry and/or components to facilitate input/output operations with the physical resources 620 , the physical resources 630 , and/or other components of the sled 400 .
- the I/O subsystem 622 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, waveguides, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations.
- the I/O subsystem 622 is embodied as, or otherwise includes, a double data rate 4 (DDR4) data bus or a DDR5 data bus.
- the sled 400 may also include a resource-to-resource interconnect 624 .
- the resource-to-resource interconnect 624 may be embodied as any type of communication interconnect capable of facilitating resource-to-resource communications.
- the resource-to-resource interconnect 624 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622 ).
- the resource-to-resource interconnect 624 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to resource-to-resource communications.
- the sled 400 also includes a power connector 640 configured to mate with a corresponding power connector of the rack 240 when the sled 400 is mounted in the corresponding rack 240 .
- the sled 400 receives power from a power supply of the rack 240 via the power connector 640 to supply power to the various electrical components of the sled 400 . That is, the sled 400 does not include any local power supply (i.e., an on-board power supply) to provide power to the electrical components of the sled 400 .
- the exclusion of a local or on-board power supply facilitates the reduction in the overall footprint of the chassis-less circuit board substrate 602 , which may increase the thermal cooling characteristics of the various electrical components mounted on the chassis-less circuit board substrate 602 as discussed above.
- voltage regulators are placed on a bottom side 750 (see FIG. 7 ) of the chassis-less circuit board substrate 602 directly opposite of the processors 820 (see FIG. 8 ), and power is routed from the voltage regulators to the processors 820 by vias extending through the circuit board substrate 602 .
- Such a configuration provides an increased thermal budget, additional current and/or voltage, and better voltage control relative to typical printed circuit boards in which processor power is delivered from a voltage regulator, in part, by printed circuit traces.
- the sled 400 may also include mounting features 642 configured to mate with a mounting arm, or other structure, of a robot to facilitate the placement of the sled 400 in a rack 240 by the robot.
- the mounting features 642 may be embodied as any type of physical structures that allow the robot to grasp the sled 400 without damaging the chassis-less circuit board substrate 602 or the electrical components mounted thereto.
- the mounting features 642 may be embodied as non-conductive pads attached to the chassis-less circuit board substrate 602 .
- the mounting features may be embodied as brackets, braces, or other similar structures attached to the chassis-less circuit board substrate 602 .
- the particular number, shape, size, and/or make-up of the mounting feature 642 may depend on the design of the robot configured to manage the sled 400 .
- in addition to the physical resources 630 mounted on the top side 650 of the chassis-less circuit board substrate 602 , the sled 400 also includes one or more memory devices 720 mounted to a bottom side 750 of the chassis-less circuit board substrate 602 . That is, the chassis-less circuit board substrate 602 is embodied as a double-sided circuit board.
- the physical resources 620 are communicatively coupled to the memory devices 720 via the I/O subsystem 622 .
- the physical resources 620 and the memory devices 720 may be communicatively coupled by one or more vias extending through the chassis-less circuit board substrate 602 .
- Each physical resource 620 may be communicatively coupled to a different set of one or more memory devices 720 in some embodiments. Alternatively, in other embodiments, each physical resource 620 may be communicatively coupled to each memory device 720 .
- the memory devices 720 may be embodied as any type of memory device capable of storing data for the physical resources 620 during operation of the sled 400 , such as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory.
- Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium.
- Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM).
- One particular type of DRAM that may be used in a memory module is synchronous dynamic random access memory (SDRAM).
- DRAM of a memory component may comply with a standard promulgated by JEDEC, such as JESD79F for DDR SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4.
- Such standards may be referred to as DDR-based standards and communication interfaces of the storage devices that implement such standards may be referred to as DDR-based interfaces.
- the memory device is a block addressable memory device, such as those based on NAND or NOR technologies.
- a memory device may also include next-generation nonvolatile devices, such as Intel 3D XPoint™ memory or other byte addressable write-in-place nonvolatile memory devices.
- the memory device may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, resistive memory including the metal oxide base, the oxygen vacancy base and the conductive bridge Random Access Memory (CB-RAM), or spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory.
- the memory device may refer to the die itself and/or to a packaged memory product.
- the memory device may comprise a transistor-less stackable cross point architecture in which memory cells sit at the intersection of word lines and bit lines and are individually addressable and in which bit storage is based on a change in bulk resistance.
- the sled 400 may be embodied as a compute sled 800 .
- the compute sled 800 is optimized, or otherwise configured, to perform compute tasks.
- the compute sled 800 may rely on other sleds, such as acceleration sleds and/or storage sleds, to perform such compute tasks.
- the compute sled 800 includes various physical resources (e.g., electrical components) similar to the physical resources of the sled 400 , which have been identified in FIG. 8 using the same reference numbers.
- the description of such components provided above in regard to FIGS. 6 and 7 applies to the corresponding components of the compute sled 800 and is not repeated herein for clarity of the description of the compute sled 800 .
- the physical resources 620 are embodied as processors 820 . Although only two processors 820 are shown in FIG. 8 , it should be appreciated that the compute sled 800 may include additional processors 820 in other embodiments.
- the processors 820 are embodied as high-performance processors 820 and may be configured to operate at a relatively high power rating. Although the processors 820 generate additional heat operating at power ratings greater than typical processors (which operate at around 155-230 W), the enhanced thermal cooling characteristics of the chassis-less circuit board substrate 602 discussed above facilitate the higher power operation.
- the processors 820 are configured to operate at a power rating of at least 250 W. In some embodiments, the processors 820 may be configured to operate at a power rating of at least 350 W.
- the compute sled 800 may also include a processor-to-processor interconnect 842 .
- the processor-to-processor interconnect 842 may be embodied as any type of communication interconnect capable of facilitating processor-to-processor communications.
- the processor-to-processor interconnect 842 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622 ).
- processor-to-processor interconnect 842 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to processor-to-processor communications.
- the compute sled 800 also includes a communication circuit 830 .
- the illustrative communication circuit 830 includes a network interface controller (NIC) 832 , which may also be referred to as a host fabric interface (HFI).
- the NIC 832 may be embodied as, or otherwise include, any type of integrated circuit, discrete circuits, controller chips, chipsets, add-in-boards, daughtercards, network interface cards, or other devices that may be used by the compute sled 800 to connect with another compute device (e.g., with other sleds 400 ).
- the NIC 832 may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors.
- the NIC 832 may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC 832 .
- the local processor of the NIC 832 may be capable of performing one or more of the functions of the processors 820 .
- the local memory of the NIC 832 may be integrated into one or more components of the compute sled at the board level, socket level, chip level, and/or other levels.
- the communication circuit 830 is communicatively coupled to an optical data connector 834 .
- the optical data connector 834 is configured to mate with a corresponding optical data connector of the rack 240 when the compute sled 800 is mounted in the rack 240 .
- the optical data connector 834 includes a plurality of optical fibers which lead from a mating surface of the optical data connector 834 to an optical transceiver 836 .
- the optical transceiver 836 is configured to convert incoming optical signals from the rack-side optical data connector to electrical signals and to convert electrical signals to outgoing optical signals to the rack-side optical data connector.
- the optical transceiver 836 may form a portion of the communication circuit 830 in other embodiments.
- the compute sled 800 may also include an expansion connector 840 .
- the expansion connector 840 is configured to mate with a corresponding connector of an expansion chassis-less circuit board substrate to provide additional physical resources to the compute sled 800 .
- the additional physical resources may be used, for example, by the processors 820 during operation of the compute sled 800 .
- the expansion chassis-less circuit board substrate may be substantially similar to the chassis-less circuit board substrate 602 discussed above and may include various electrical components mounted thereto. The particular electrical components mounted to the expansion chassis-less circuit board substrate may depend on the intended functionality of the expansion chassis-less circuit board substrate.
- the expansion chassis-less circuit board substrate may provide additional compute resources, memory resources, and/or storage resources.
- the additional physical resources of the expansion chassis-less circuit board substrate may include, but are not limited to, processors, memory devices, storage devices, and/or accelerator circuits including, for example, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), security co-processors, graphics processing units (GPUs), machine learning circuits, or other specialized processors, controllers, devices, and/or circuits.
- the processors 820 , communication circuit 830 , and optical data connector 834 are mounted to the top side 650 of the chassis-less circuit board substrate 602 .
- Any suitable attachment or mounting technology may be used to mount the physical resources of the compute sled 800 to the chassis-less circuit board substrate 602 .
- the various physical resources may be mounted in corresponding sockets (e.g., a processor socket), holders, or brackets.
- some of the electrical components may be directly mounted to the chassis-less circuit board substrate 602 via soldering or similar techniques.
- the individual processors 820 and communication circuit 830 are mounted to the top side 650 of the chassis-less circuit board substrate 602 such that no two heat-producing electrical components shadow each other.
- the processors 820 and communication circuit 830 are mounted in corresponding locations on the top side 650 of the chassis-less circuit board substrate 602 such that no two of those physical resources are linearly in-line with others along the direction of the airflow path 608 .
- although the optical data connector 834 is in-line with the communication circuit 830 , the optical data connector 834 produces no or nominal heat during operation.
- the memory devices 720 of the compute sled 800 are mounted to the bottom side 750 of the chassis-less circuit board substrate 602 as discussed above in regard to the sled 400 . Although mounted to the bottom side 750 , the memory devices 720 are communicatively coupled to the processors 820 located on the top side 650 via the I/O subsystem 622 . Because the chassis-less circuit board substrate 602 is embodied as a double-sided circuit board, the memory devices 720 and the processors 820 may be communicatively coupled by one or more vias, connectors, or other mechanisms extending through the chassis-less circuit board substrate 602 . Of course, each processor 820 may be communicatively coupled to a different set of one or more memory devices 720 in some embodiments.
- each processor 820 may be communicatively coupled to each memory device 720 .
- the memory devices 720 may be mounted to one or more memory mezzanines on the bottom side of the chassis-less circuit board substrate 602 and may interconnect with a corresponding processor 820 through a ball-grid array.
- Each of the processors 820 includes a heatsink 850 secured thereto. Due to the mounting of the memory devices 720 to the bottom side 750 of the chassis-less circuit board substrate 602 (as well as the vertical spacing of the sleds 400 in the corresponding rack 240 ), the top side 650 of the chassis-less circuit board substrate 602 includes additional “free” area or space that facilitates the use of heatsinks 850 having a larger size relative to traditional heatsinks used in typical servers. Additionally, due to the improved thermal cooling characteristics of the chassis-less circuit board substrate 602 , none of the processor heatsinks 850 include cooling fans attached thereto. That is, each of the heatsinks 850 is embodied as a fan-less heatsink. In some embodiments, the heat sinks 850 mounted atop the processors 820 may overlap with the heat sink attached to the communication circuit 830 in the direction of the airflow path 608 due to their increased size, as illustratively suggested by FIG. 9 .
- the sled 400 may be embodied as an accelerator sled 1000 .
- the accelerator sled 1000 is configured to perform specialized compute tasks, such as machine learning, encryption, hashing, or other computationally-intensive tasks.
- a compute sled 800 may offload tasks to the accelerator sled 1000 during operation.
- the accelerator sled 1000 includes various components similar to components of the sled 400 and/or compute sled 800 , which have been identified in FIG. 10 using the same reference numbers. The description of such components provided above in regard to FIGS. 6, 7, and 8 applies to the corresponding components of the accelerator sled 1000 and is not repeated herein for clarity of the description of the accelerator sled 1000 .
- the physical resources 620 are embodied as accelerator circuits 1020 .
- the accelerator sled 1000 may include additional accelerator circuits 1020 in other embodiments.
- the accelerator sled 1000 may include four accelerator circuits 1020 in some embodiments.
- the accelerator circuits 1020 may be embodied as any type of processor, co-processor, compute circuit, or other device capable of performing compute or processing operations.
- the accelerator circuits 1020 may be embodied as, for example, field programmable gate arrays (FPGA), application-specific integrated circuits (ASICs), security co-processors, graphics processing units (GPUs), neuromorphic processor units, quantum computers, machine learning circuits, or other specialized processors, controllers, devices, and/or circuits.
- the accelerator sled 1000 may also include an accelerator-to-accelerator interconnect 1042 . Similar to the resource-to-resource interconnect 624 of the sled 400 discussed above, the accelerator-to-accelerator interconnect 1042 may be embodied as any type of communication interconnect capable of facilitating accelerator-to-accelerator communications. In the illustrative embodiment, the accelerator-to-accelerator interconnect 1042 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622 ).
- the accelerator-to-accelerator interconnect 1042 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to processor-to-processor communications.
- the accelerator circuits 1020 may be daisy-chained with a primary accelerator circuit 1020 connected to the NIC 832 and memory 720 through the I/O subsystem 622 and a secondary accelerator circuit 1020 connected to the NIC 832 and memory 720 through a primary accelerator circuit 1020 .
- Referring now to FIG. 11 , an illustrative embodiment of the accelerator sled 1000 is shown.
- the accelerator circuits 1020 , communication circuit 830 , and optical data connector 834 are mounted to the top side 650 of the chassis-less circuit board substrate 602 .
- the individual accelerator circuits 1020 and communication circuit 830 are mounted to the top side 650 of the chassis-less circuit board substrate 602 such that no two heat-producing electrical components shadow each other as discussed above.
- the memory devices 720 of the accelerator sled 1000 are mounted to the bottom side 750 of the chassis-less circuit board substrate 602 as discussed above in regard to the sled 400 .
- each of the accelerator circuits 1020 may include a heatsink 1070 that is larger than a traditional heatsink used in a server. As discussed above with reference to the heatsinks 850 , the heatsinks 1070 may be larger than traditional heatsinks because of the “free” area provided by the memory resources 720 being located on the bottom side 750 of the chassis-less circuit board substrate 602 rather than on the top side 650 .
- the sled 400 may be embodied as a storage sled 1200 .
- the storage sled 1200 is configured to store data in a data storage 1250 local to the storage sled 1200 .
- a compute sled 800 or an accelerator sled 1000 may store and retrieve data from the data storage 1250 of the storage sled 1200 .
- the storage sled 1200 includes various components similar to components of the sled 400 and/or the compute sled 800 , which have been identified in FIG. 12 using the same reference numbers. The description of such components provided above in regard to FIGS. 6, 7, and 8 applies to the corresponding components of the storage sled 1200 and is not repeated herein for clarity of the description of the storage sled 1200 .
- the physical resources 620 are embodied as storage controllers 1220 . Although only two storage controllers 1220 are shown in FIG. 12 , it should be appreciated that the storage sled 1200 may include additional storage controllers 1220 in other embodiments.
- the storage controllers 1220 may be embodied as any type of processor, controller, or control circuit capable of controlling the storage and retrieval of data into the data storage 1250 based on requests received via the communication circuit 830 .
- the storage controllers 1220 are embodied as relatively low-power processors or controllers.
- the storage controllers 1220 may be configured to operate at a power rating of about 75 watts.
- the storage sled 1200 may also include a controller-to-controller interconnect 1242 .
- the controller-to-controller interconnect 1242 may be embodied as any type of communication interconnect capable of facilitating controller-to-controller communications.
- the controller-to-controller interconnect 1242 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622 ).
- controller-to-controller interconnect 1242 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to processor-to-processor communications.
- the data storage 1250 is embodied as, or otherwise includes, a storage cage 1252 configured to house one or more solid state drives (SSDs) 1254 .
- the storage cage 1252 includes a number of mounting slots 1256 , each of which is configured to receive a corresponding solid state drive 1254 .
- Each of the mounting slots 1256 includes a number of drive guides 1258 that cooperate to define an access opening 1260 of the corresponding mounting slot 1256 .
- the storage cage 1252 is secured to the chassis-less circuit board substrate 602 such that the access openings face away from (i.e., toward the front of) the chassis-less circuit board substrate 602 .
- the solid state drives 1254 are accessible while the storage sled 1200 is mounted in a corresponding rack 240 .
- a solid state drive 1254 may be swapped out of a rack 240 (e.g., via a robot) while the storage sled 1200 remains mounted in the corresponding rack 240 .
- the storage cage 1252 illustratively includes sixteen mounting slots 1256 and is capable of mounting and storing sixteen solid state drives 1254 .
- the storage cage 1252 may be configured to store additional or fewer solid state drives 1254 in other embodiments.
- the solid state drives 1254 are mounted vertically in the storage cage 1252 , but may be mounted in the storage cage 1252 in a different orientation in other embodiments.
- Each solid state drive 1254 may be embodied as any type of data storage device capable of storing long term data. To do so, the solid state drives 1254 may include volatile and non-volatile memory devices discussed above.
- the storage controllers 1220 , the communication circuit 830 , and the optical data connector 834 are illustratively mounted to the top side 650 of the chassis-less circuit board substrate 602 .
- any suitable attachment or mounting technology may be used to mount the electrical components of the storage sled 1200 to the chassis-less circuit board substrate 602 including, for example, sockets (e.g., a processor socket), holders, brackets, soldered connections, and/or other mounting or securing techniques.
- the individual storage controllers 1220 and the communication circuit 830 are mounted to the top side 650 of the chassis-less circuit board substrate 602 such that no two heat-producing electrical components shadow each other.
- the storage controllers 1220 and the communication circuit 830 are mounted in corresponding locations on the top side 650 of the chassis-less circuit board substrate 602 such that no two of those electrical components are linearly in-line with each other along the direction of the airflow path 608 .
- the memory devices 720 of the storage sled 1200 are mounted to the bottom side 750 of the chassis-less circuit board substrate 602 as discussed above in regard to the sled 400 . Although mounted to the bottom side 750 , the memory devices 720 are communicatively coupled to the storage controllers 1220 located on the top side 650 via the I/O subsystem 622 . Again, because the chassis-less circuit board substrate 602 is embodied as a double-sided circuit board, the memory devices 720 and the storage controllers 1220 may be communicatively coupled by one or more vias, connectors, or other mechanisms extending through the chassis-less circuit board substrate 602 . Each of the storage controllers 1220 includes a heatsink 1270 secured thereto.
- none of the heatsinks 1270 includes cooling fans attached thereto. That is, each of the heatsinks 1270 is embodied as a fan-less heatsink.
- the sled 400 may be embodied as a memory sled 1400 .
- the memory sled 1400 is optimized, or otherwise configured, to provide other sleds 400 (e.g., compute sleds 800 , accelerator sleds 1000 , etc.) with access to a pool of memory (e.g., in two or more sets 1430 , 1432 of memory devices 720 ) local to the memory sled 1400 .
- a compute sled 800 or an accelerator sled 1000 may remotely write to and/or read from one or more of the memory sets 1430 , 1432 of the memory sled 1400 using a logical address space that maps to physical addresses in the memory sets 1430 , 1432 .
- the memory sled 1400 includes various components similar to components of the sled 400 and/or the compute sled 800 , which have been identified in FIG. 14 using the same reference numbers. The description of such components provided above in regard to FIGS. 6, 7, and 8 applies to the corresponding components of the memory sled 1400 and is not repeated herein for clarity of the description of the memory sled 1400 .
- the physical resources 620 are embodied as memory controllers 1420 . Although only two memory controllers 1420 are shown in FIG. 14 , it should be appreciated that the memory sled 1400 may include additional memory controllers 1420 in other embodiments.
- the memory controllers 1420 may be embodied as any type of processor, controller, or control circuit capable of controlling the writing and reading of data into the memory sets 1430 , 1432 based on requests received via the communication circuit 830 .
- each memory controller 1420 is connected to a corresponding memory set 1430 , 1432 to write to and read from memory devices 720 within the corresponding memory set 1430 , 1432 and enforce any permissions (e.g., read, write, etc.) associated with the sled 400 that has sent a request to the memory sled 1400 to perform a memory access operation (e.g., read or write).
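- As a rough illustration of such permission enforcement, a memory controller might consult a per-sled permission table before servicing a remote request. The sketch below is hypothetical; the table layout, request fields, and names are assumptions, not the patent's protocol.

```python
PERMISSIONS = {                       # requesting sled -> allowed operations
    "compute-sled-800": {"read", "write"},
    "accelerator-sled-1000": {"read"},
}

def service_request(sled_id: str, op: str, address: int) -> None:
    """Check the requesting sled's permission before touching the memory set."""
    if op not in PERMISSIONS.get(sled_id, set()):
        raise PermissionError(f"{sled_id} lacks '{op}' permission")
    # ...otherwise perform the read or write against the corresponding memory set...

service_request("compute-sled-800", "write", 0x1000)   # allowed
```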
- the memory sled 1400 may also include a controller-to-controller interconnect 1442 .
- the controller-to-controller interconnect 1442 may be embodied as any type of communication interconnect capable of facilitating controller-to-controller communications.
- the controller-to-controller interconnect 1442 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622 ).
- the controller-to-controller interconnect 1442 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to processor-to-processor communications.
- a memory controller 1420 may access, through the controller-to-controller interconnect 1442 , memory that is within the memory set 1432 associated with another memory controller 1420 .
- a scalable memory controller is made of multiple smaller memory controllers, referred to herein as “chiplets”, on a memory sled (e.g., the memory sled 1400 ).
- the chiplets may be interconnected (e.g., using EMIB (Embedded Multi-Die Interconnect Bridge)).
- the combined chiplet memory controller may scale up to a relatively large number of memory controllers and I/O ports (e.g., up to 16 memory channels).
- the memory controllers 1420 may implement a memory interleave (e.g., one memory address is mapped to the memory set 1430 , the next memory address is mapped to the memory set 1432 , and the third address is mapped to the memory set 1430 , etc.).
- the interleaving may be managed within the memory controllers 1420 , or from CPU sockets (e.g., of the compute sled 800 ) across network links to the memory sets 1430 , 1432 , and may improve the latency associated with performing memory access operations as compared to accessing contiguous memory addresses from the same memory device.
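- A minimal sketch of the two-way interleave described above follows; the 64-byte interleave unit and the set names are illustrative assumptions. Consecutive units alternate between the two memory sets, so a streaming access engages both memory controllers in parallel rather than queuing on one device.

```python
LINE = 64  # bytes per interleave unit (assumed)

def route(address: int, sets=("memory-set-1430", "memory-set-1432")):
    """Map a flat address to (memory set, offset within that set)."""
    line = address // LINE
    target = sets[line % len(sets)]             # alternate sets line by line
    offset = (line // len(sets)) * LINE + address % LINE
    return target, offset

assert route(0)[0] == "memory-set-1430"
assert route(64)[0] == "memory-set-1432"
assert route(128) == ("memory-set-1430", 64)
```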
- the memory sled 1400 may be connected to one or more other sleds 400 (e.g., in the same rack 240 or an adjacent rack 240 ) through a waveguide, using the waveguide connector 1480 .
- the waveguides are 64 millimeter waveguides that provide 16 Rx (i.e., receive) lanes and 16 Tx (i.e., transmit) lanes.
- Each lane, in the illustrative embodiment, is either 16 GHz or 32 GHz. In other embodiments, the frequencies may be different.
- Using a waveguide may provide high throughput access to the memory pool (e.g., the memory sets 1430 , 1432 ) to another sled (e.g., a sled 400 in the same rack 240 or an adjacent rack 240 as the memory sled 1400 ) without adding to the load on the optical data connector 834 .
- the system 1510 includes an orchestrator server 1520 , which may be embodied as a managed node comprising a compute device (e.g., a processor 820 on a compute sled 800 ) executing management software (e.g., a cloud operating environment, such as OpenStack) that is communicatively coupled to multiple sleds 400 including a large number of compute sleds 1530 (e.g., each similar to the compute sled 800 ), memory sleds 1540 (e.g., each similar to the memory sled 1400 ), accelerator sleds 1550 (e.g., each similar to the accelerator sled 1000 ), and storage sleds 1560 (e.g., each similar to the storage sled 1200 ).
- One or more of the sleds 1530 , 1540 , 1550 , 1560 may be grouped into a managed node 1570 , such as by the orchestrator server 1520 , to collectively perform a workload (e.g., an application 1532 executed in a virtual machine or in a container).
- the managed node 1570 may be embodied as an assembly of physical resources 620 , such as processors 820 , memory resources 720 , accelerator circuits 1020 , or data storage 1250 , from the same or different sleds 400 .
- the managed node may be established, defined, or “spun up” by the orchestrator server 1520 at the time a workload is to be assigned to the managed node or at any other time, and may exist regardless of whether any workloads are presently assigned to the managed node.
- the orchestrator server 1520 may selectively allocate and/or deallocate physical resources 620 from the sleds 400 and/or add or remove one or more sleds 400 from the managed node 1570 as a function of quality of service (QoS) targets (e.g., performance targets associated with a throughput, latency, instructions per second, etc.) associated with a service level agreement for the workload (e.g., the application 1532 ).
- the orchestrator server 1520 may receive telemetry data indicative of performance conditions (e.g., throughput, latency, instructions per second, etc.) in each sled 400 of the managed node 1570 and compare the telemetry data to the quality of service targets to determine whether the quality of service targets are being satisfied.
- the orchestrator server 1520 may additionally determine whether one or more physical resources may be deallocated from the managed node 1570 while still satisfying the QoS targets, thereby freeing up those physical resources for use in another managed node (e.g., to execute a different workload).
- the orchestrator server 1520 may determine to dynamically allocate additional physical resources to assist in the execution of the workload (e.g., the application 1532 ) while the workload is executing. Similarly, the orchestrator server 1520 may determine to dynamically deallocate physical resources from a managed node if the orchestrator server 1520 determines that deallocating the physical resource would result in QoS targets still being met.
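- The allocate/deallocate decision described above amounts to a control loop over telemetry and QoS targets. The sketch below is a hypothetical illustration; the telemetry fields, the 1.5x headroom threshold, and the simple grow/shrink policy are assumptions, not the patent's algorithm.

```python
def reconcile(node_resources: list, telemetry: dict, targets: dict,
              free_pool: list) -> None:
    """Grow the managed node when QoS targets are missed; shrink it when
    there is comfortable headroom, returning resources to the free pool."""
    missing_qos = (telemetry["latency_ms"] > targets["latency_ms"] or
                   telemetry["throughput"] < targets["throughput"])
    if missing_qos and free_pool:
        node_resources.append(free_pool.pop())        # allocate more
    elif not missing_qos and telemetry["throughput"] > 1.5 * targets["throughput"]:
        if len(node_resources) > 1:
            free_pool.append(node_resources.pop())    # deallocate a spare

node, pool = ["cpu-a", "cpu-b"], ["cpu-c"]
reconcile(node, {"latency_ms": 9.0, "throughput": 120.0},
          {"latency_ms": 5.0, "throughput": 100.0}, pool)
assert node == ["cpu-a", "cpu-b", "cpu-c"]            # latency target missed
```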
- the orchestrator server 1520 may identify trends in the resource utilization of the workload (e.g., the application 1532 ), such as by identifying phases of execution (e.g., time periods in which different operations, each having different resource utilizations characteristics, are performed) of the workload (e.g., the application 1532 ) and pre-emptively identifying available resources in the data center 100 and allocating them to the managed node 1570 (e.g., within a predefined time period of the associated phase beginning).
- the orchestrator server 1520 may model performance based on various latencies and a distribution scheme to place workloads among compute sleds and other resources (e.g., accelerator sleds, memory sleds, storage sleds) in the data center 100 .
- the orchestrator server 1520 may utilize a model that accounts for the performance of resources on the sleds 400 (e.g., FPGA performance, memory access latency, etc.) and the performance (e.g., congestion, latency, bandwidth) of the path through the network to the resource (e.g., FPGA).
- the orchestrator server 1520 may determine which resource(s) should be used with which workloads based on the total latency associated with each potential resource available in the data center 100 (e.g., the latency associated with the performance of the resource itself in addition to the latency associated with the path through the network between the compute sled executing the workload and the sled 400 on which the resource is located).
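- In other words, each candidate resource can be scored by the sum of its own service latency and the network path latency from the requesting compute sled. A minimal sketch, with all field names assumed for illustration:

```python
def pick_resource(candidates):
    """candidates: iterable of dicts with 'resource_latency_ms' and
    'path_latency_ms'; return the candidate with the lowest total latency."""
    return min(candidates,
               key=lambda c: c["resource_latency_ms"] + c["path_latency_ms"])

fpgas = [{"sled": "rack2/sled5", "resource_latency_ms": 0.8, "path_latency_ms": 0.3},
         {"sled": "rack7/sled1", "resource_latency_ms": 0.6, "path_latency_ms": 0.9}]
assert pick_resource(fpgas)["sled"] == "rack2/sled5"   # 1.1 ms beats 1.5 ms
```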
- the orchestrator server 1520 may generate a map of heat generation in the data center 100 using telemetry data (e.g., temperatures, fan speeds, etc.) reported from the sleds 400 and allocate resources to managed nodes as a function of the map of heat generation and predicted heat generation associated with different workloads, to maintain a target temperature and heat distribution in the data center 100 .
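- Heat-aware placement along these lines could pick the coolest sled whose reported temperature, plus the workload's predicted heat contribution, stays under the target. The sketch below is hypothetical; the additive heat model and all names are assumptions.

```python
def place_by_heat(sled_temps_c: dict, predicted_heat_c: float,
                  target_c: float) -> str:
    """sled_temps_c: sled id -> current temperature in Celsius."""
    viable = {sled: t for sled, t in sled_temps_c.items()
              if t + predicted_heat_c <= target_c}
    if not viable:
        raise RuntimeError("no sled stays within the target temperature")
    return min(viable, key=viable.get)          # coolest viable sled

assert place_by_heat({"sled-1": 58.0, "sled-2": 49.0}, 10.0, 65.0) == "sled-2"
```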
- the orchestrator server 1520 may organize received telemetry data into a hierarchical model that is indicative of a relationship between the managed nodes (e.g., a spatial relationship such as the physical locations of the resources of the managed nodes within the data center 100 and/or a functional relationship, such as groupings of the managed nodes by the customers the managed nodes provide services for, the types of functions typically performed by the managed nodes, managed nodes that typically share or exchange workloads among each other, etc.). Based on differences in the physical locations and resources in the managed nodes, a given workload may exhibit different resource utilizations (e.g., cause a different internal temperature, use a different percentage of processor or memory capacity) across the resources of different managed nodes.
- the orchestrator server 1520 may determine the differences based on the telemetry data stored in the hierarchical model and factor the differences into a prediction of future resource utilization of a workload if the workload is reassigned from one managed node to another managed node, to accurately balance resource utilization in the data center 100 .
- the orchestrator server 1520 may send self-test information to the sleds 400 to enable each sled 400 to locally (e.g., on the sled 400 ) determine whether telemetry data generated by the sled 400 satisfies one or more conditions (e.g., an available capacity that satisfies a predefined threshold, a temperature that satisfies a predefined threshold, etc.). Each sled 400 may then report back a simplified result (e.g., yes or no) to the orchestrator server 1520 , which the orchestrator server 1520 may utilize in determining the allocation of resources to managed nodes.
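- The sled-local self-test keeps raw telemetry off the network: the sled evaluates the conditions pushed down by the orchestrator against its own measurements and reports only a boolean. A minimal sketch, with the condition format assumed for illustration:

```python
def self_test(telemetry: dict, conditions: list) -> bool:
    """conditions: list of {'metric', 'op', 'threshold'} dicts (assumed)."""
    ops = {">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b}
    return all(ops[c["op"]](telemetry[c["metric"]], c["threshold"])
               for c in conditions)

ok = self_test({"free_capacity": 0.35, "temp_c": 61.0},
               [{"metric": "free_capacity", "op": ">=", "threshold": 0.2},
                {"metric": "temp_c", "op": "<=", "threshold": 70.0}])
assert ok is True          # the sled reports back a simple yes
```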
- a system 1600 for managing disaggregated resources in a data center includes a cloud orchestrator server 1602 , similar to the orchestrator server 1520 of FIG. 15 , which is communicatively coupled to a controller compute device 1604 .
- the controller compute device 1604 is configured to function as a software defined infrastructure controller and resource manager for racks (e.g., the rack 240 in the data center 100 of FIG. 1 ) and pods (e.g., the pods 110 , 120 , 130 , 140 in the data center 100 of FIG. 1 ).
- the controller compute device 1604 is configured to employ a system-level composable services framework and protocol to allow the controller compute device 1604 to function as an undercloud manager, including a task manager (e.g., functioning as a task scheduler and queue manager) with queues for asynchronous compute, network, and storage management.
- the controller compute device 1604 is configured to perform hardware lifecycle management operations (e.g., discovery, composition, configuration, release, etc.) of the various disaggregated resources, dynamically and at scale. To do so, the controller compute device 1604 is configured to use a protocol of communication that can discover hardware capabilities, leverage composable services to map requests received from the orchestrator server 1520 , service hardware composability requests, and perform telemetry-based autonomous actions.
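- A hardware composability request serviced by such a controller might look like the hypothetical, declarative payload below; the schema, field names, and stubbed composition step are illustrative assumptions, not the patent's protocol.

```python
compose_request = {
    "node_name": "managed-node-1",
    "resources": {
        "compute": {"processors": 2, "min_power_w": 250},
        "memory":  {"capacity_gib": 512, "pooled": True},
        "storage": {"capacity_tib": 4, "media": "ssd"},
    },
    "qos": {"latency_ms": 1.0, "throughput_gbps": 25},
}

def compose(request: dict, inventory: list) -> dict:
    """Map the declarative request onto discovered hardware (stubbed):
    discovery and composition would match request['resources'] against
    the inventory of disaggregated resources and bind them into one node."""
    return {"node": request["node_name"], "bound": inventory[:2]}
```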
- the illustrative system 1600 additionally includes managed nodes 1622 communicatively coupled to the controller compute device 1604 .
- the illustrative managed nodes 1622 include a first managed node designated as managed node ( 1 ) 1622 a and a second managed node designated as managed node (N) 1622 b , in which the managed node (N) 1622 b represents the "Nth" managed node 1622 and "N" is a positive integer.
- Each of the managed nodes 1622 may include one or more sleds (e.g., one or more of the sleds 1530 , 1540 , 1550 , 1560 of FIG. 15 ).
- the managed nodes 1622 may be embodied as an assembly of resources, such as compute resources, memory resources, storage resources, or other resources, from the same or different sleds or racks. Further, any of the managed nodes 1622 may be established, defined, or “spun up” by a respective pod manager service at the time a workload is to be assigned to a managed node 1622 or at any other time, and may exist regardless of whether any workloads are presently assigned to the managed node 1622 .
- the controller compute device 1604 may be embodied as any type of computation or computer device capable of performing the functions described herein, including, without limitation, a computer, a server (e.g., stand-alone, rack-mounted, blade, etc.), a sled (e.g., a compute sled, an accelerator sled, a storage sled, a memory sled, etc.), an enhanced or smart NIC (e.g., a host fabric interface (HFI)), a network appliance (e.g., physical or virtual), a router, a switch (e.g., a disaggregated switch, a rack-mounted switch, a standalone switch, a fully managed switch, a partially managed switch, a full-duplex switch, and/or a half-duplex communication mode enabled switch), a web appliance, a distributed computing system, a processor-based system, and/or a multiprocessor system.
- the illustrative controller compute device 1604 includes a compute engine 1606 , an I/O subsystem 1612 , one or more data storage devices 1614 , communication circuitry 1616 , and, in some embodiments, one or more peripheral devices 1620 . It should be appreciated that the controller compute device 1604 may include other or additional components, such as those commonly found in a typical computing device (e.g., various input/output devices and/or other components), in other embodiments. Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component.
- the compute engine 1606 may be embodied as any type of device or collection of devices capable of performing the various compute functions as described herein.
- the compute engine 1606 may be embodied as a single device such as an integrated circuit, an embedded system, a field-programmable gate array (FPGA), a system-on-a-chip (SOC), an application specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein.
- the compute engine 1606 may include, or may otherwise be embodied as, one or more processors 1608 (i.e., one or more central processing units (CPUs)) and memory 1610 .
- the processor(s) 1608 may be embodied as any type of processor(s) capable of performing the functions described herein.
- the processor(s) 1608 may be embodied as one or more single-core processors, multi-core processors, digital signal processors (DSPs), microcontrollers, or other processor(s) or processing/controlling circuit(s).
- the processor(s) 1608 may be embodied as, include, or otherwise be coupled to an FPGA (e.g., reconfigurable circuitry), an ASIC, reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein.
- the memory 1610 may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. It should be appreciated that the memory 1610 may include main memory (i.e., a primary memory) and/or cache memory (i.e., memory that can be accessed more quickly than the main memory). Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium. Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM). One particular type of DRAM that may be used in a memory module is synchronous dynamic random access memory (SDRAM).
- DRAM of a memory component may comply with a standard promulgated by JEDEC, such as JESD79F for DDR SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4.
- Such standards may be referred to as DDR-based standards and communication interfaces of the storage devices that implement such standards may be referred to as DDR-based interfaces.
- the memory device is a block addressable memory device, such as those based on NAND or NOR technologies.
- a memory device may also include a three dimensional crosspoint memory device (e.g., Intel 3D XPoint™ memory), or other byte addressable write-in-place nonvolatile memory devices.
- the memory device may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, resistive memory including the metal oxide base, the oxygen vacancy base and the conductive bridge Random Access Memory (CB-RAM), or spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory.
- the memory device may refer to the die itself and/or to a packaged memory product.
- the memory 1610 may comprise a transistor-less stackable cross point architecture in which memory cells sit at the intersection of word lines and bit lines and are individually addressable and in which bit storage is based on a change in bulk resistance. In some embodiments, all or a portion of the memory 1610 may be integrated into a processor 1608 . In operation, the memory 1610 may store various software and data used during operation such as applications, data operated on by the applications, routing rules, libraries, and drivers.
- the compute engine 1606 is communicatively coupled to other components of the controller compute device 1604 via the I/O subsystem 1612 , which may be embodied as circuitry and/or components to facilitate input/output operations with the processor 1608 , the memory 1610 , and other components of the controller compute device 1604 .
- the I/O subsystem 1612 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations.
- the I/O subsystem 1612 may form a portion of a SoC and be incorporated, along with one or more of the processor 1608 , the memory 1610 , and other components of the controller compute device 1604 , on a single integrated circuit chip.
- the one or more data storage devices 1614 may be embodied as any type of storage device(s) configured for short-term or long-term storage of data, such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices.
- Each data storage device 1614 may include a system partition that stores data and firmware code for the data storage device 1614 .
- Each data storage device 1614 may also include an operating system partition that stores data files and executables for an operating system.
- the communication circuitry 1616 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications between the controller compute device 1604 and other computing devices, such as the source compute device 102 , as well as any network communication enabling devices, such as an access point, network switch/router, etc., to allow communication over the network 104 . Accordingly, the communication circuitry 1616 may be configured to use any one or more communication technologies (e.g., wireless or wired communication technologies) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, LTE, 5G, etc.) to effect such communication.
- the communication circuitry 1616 may include specialized circuitry, hardware, or combination thereof to perform pipeline logic (e.g., hardware algorithms) for performing the functions described herein, including processing network packets (e.g., parse received network packets, determine destination computing devices for each received network packets, forward the network packets to a particular buffer queue of a respective host buffer of the controller compute device 1604 , etc.), performing computational functions, etc.
- performance of one or more of the functions of communication circuitry 1616 as described herein may be performed by specialized circuitry, hardware, or combination thereof of the communication circuitry 1616 , which may be embodied as a SoC or otherwise form a portion of a SoC of the controller compute device 1604 (e.g., incorporated on a single integrated circuit chip along with a processor 1608 , the memory 1610 , and/or other components of the controller compute device 1604 ).
- the specialized circuitry, hardware, or combination thereof may be embodied as one or more discrete processing units of the controller compute device 1604 , each of which may be capable of performing one or more of the functions described herein.
- the illustrative communication circuitry 1616 includes a network interface controller (NIC) 1618 , which may also be referred to as a host fabric interface (HFI) in some embodiments (e.g., high performance computing (HPC) environments).
- the NIC 1618 may be embodied as any type of firmware, hardware, software, or any combination thereof that facilitates communications access between the controller compute device 1604 and a network (e.g., the network 104 ).
- the NIC 1618 may be embodied as one or more add-in-boards, daughtercards, network interface cards, controller chips, chipsets, or other devices that may be used by the controller compute device 1604 to connect with another compute device (e.g., the source compute device 102 ).
- the NIC 1618 may be embodied as part of a SoC that includes one or more processors, or included on a multichip package that also contains one or more processors. Additionally or alternatively, in some embodiments, the NIC 1618 may include one or more processing cores (not shown) local to the NIC 1618 . In such embodiments, the processing core(s) may be capable of performing one or more of the functions described herein. In some embodiments, the NIC 1618 may additionally include a local memory (not shown). In such embodiments, the local memory of the NIC 1618 may be integrated into one or more components of the controller compute device 1604 at the board level, socket level, chip level, and/or other levels.
- the one or more peripheral devices 1620 may include any type of device that is usable to input information into the controller compute device 1604 and/or receive information from the controller compute device 1604 .
- the peripheral devices 1620 may be embodied as any auxiliary device usable to input information into the controller compute device 1604 , such as a keyboard, a mouse, a microphone, a barcode reader, an image scanner, etc., or output information from the controller compute device 1604 , such as a display, a speaker, graphics circuitry, a printer, a projector, etc.
- one or more of the peripheral devices 1620 may function as both an input device and an output device (e.g., a touchscreen display, a digitizer on top of a display screen, etc.).
- the particular number and/or type of peripheral devices 1620 connected to the controller compute device 1604 may depend on, for example, the type and/or intended use of the controller compute device 1604 .
- the peripheral devices 1620 may include one or more ports, such as a universal serial bus (USB) port, for example, for connecting external peripheral devices to the controller compute device 1604 .
- the cloud orchestrator server 1602 may have components similar to those described in reference to the illustrative controller compute device 1604 . As such, the description of those like and/or similar components of the controller compute device 1604 are equally applicable to the description of components of the orchestrator server 1602 , and are not repeated herein for clarity of the description. Further, it should be appreciated that the orchestrator server 1602 may include other components, sub-components, and devices commonly found in a computing device, which are not discussed above in reference to the controller compute device 1604 and not discussed herein for clarity of the description.
- the controller compute device 1604 establishes an environment 1700 during operation.
- the illustrative environment 1700 includes a network traffic ingress/egress manager 1708 , an application programming interface (API) manager 1710 , a task manager 1714 , a microservice resource controller 1716 , and a microtask resource controller 1718 .
- the various components of the environment 1700 may be embodied as hardware, firmware, software, or a combination thereof.
- one or more of the components of the environment 1700 may be embodied as circuitry or a collection of electrical devices (e.g., network traffic ingress/egress management circuitry 1708 , API management circuitry 1710 , task management circuitry 1714 , microservice resource controller circuitry 1716 , microtask resource controller circuitry 1718 , etc.).
- each of the one or more functions described herein as being performed by the controller compute device 1604 may be performed, at least in part, by one or more components of the controller compute device 1604 , such as the compute engine 1606 , the I/O subsystem 1612 , and/or other components of the controller compute device 1604 .
- one or more of the functions described herein as being performed by the controller compute device 1604 may be performed by the orchestrator server 1602 in other embodiments.
- one or more of the network traffic ingress/egress management circuitry 1708 , the API management circuitry 1710 , the task management circuitry 1714 , the microservice resource controller circuitry 1716 , and the microtask resource controller circuitry 1718 may reside on the orchestrator server 1602 .
- one or more of the illustrative components may form a portion of another component and/or one or more of the illustrative components may be independent of one another.
- one or more of the components of the environment 1700 may be embodied as virtualized hardware components or emulated architecture, which may be established and maintained by the compute engine 1606 , the NIC 1618 , and/or other software/hardware components of the controller compute device 1604 .
- the controller compute device 1604 may include other components, sub-components, modules, sub-modules, logic, sub-logic, and/or devices commonly found in a computing device (e.g., device drivers, interfaces, etc.), which are not illustrated in FIG. 17 for clarity of the description.
- the controller compute device 1604 additionally includes pod manager data 1702 , task data 1704 , and compose service data 1706 , each of which may be accessed by the various components and/or sub-components of the controller compute device 1604 . Additionally, it should be appreciated that in some embodiments the data stored in, or otherwise represented by, each of the pod manager data 1702 , the task data 1704 , and the compose service data 1706 may not be mutually exclusive relative to each other.
- data stored in the pod manager data 1702 may also be stored as a portion of one or more of the task data 1704 and/or the compose service data 1706 , or in another alternative arrangement.
- although the various data utilized by the controller compute device 1604 is described herein as particular discrete data, such data may be combined, aggregated, and/or otherwise form portions of a single or multiple data sets, including duplicative copies, in other embodiments.
- the network traffic ingress/egress manager 1708 which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to receive inbound and route/transmit outbound network traffic. To do so, the illustrative network traffic ingress/egress manager 1708 is configured to facilitate inbound network communications (e.g., network traffic, network packets, network flows, etc.) to the controller compute device 1604 .
- the network traffic ingress/egress manager 1708 is configured to manage (e.g., create, modify, delete, etc.) connections to physical and virtual network ports (i.e., virtual network interfaces) of the controller compute device 1604 (e.g., via the communication circuitry 1616 ), as well as the ingress buffers/queues associated therewith.
- the API manager 1710 which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to manage the API service 1712 instance to perform the functions as described herein. To do so, the API manager 1710 may be configured to instantiate the API service 1712 based on one or more characteristics, such as supported protocols (e.g., Representational State Transfer (REST), Extensible Markup Language (XML), etc.), libraries, etc.
- the API service 1712 is configured to provide multiple points for inbound API calls, and perform a translation thereof into corresponding message(s), as necessary, for internal consumption (e.g., by the task manager 1714 ).
- the API service 1712 is additionally configured to generate outbound calls (e.g., to the cloud orchestrator server 1602 ) based on messages received internally (e.g., from the task manager 1714 ).
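- As a hedged illustration of the translation role described above, the following sketch (in Python, with every class and field name invented for this example) converts an inbound REST-style call into an internal message for the task manager, and converts an internal completion message back into an outbound response body:

    from dataclasses import dataclass, field
    from typing import Any, Dict

    @dataclass
    class InternalMessage:
        destination: str                          # e.g., "task_manager"
        source: str                               # e.g., "api_service"
        body: Dict[str, Any] = field(default_factory=dict)

    class ApiService:
        def translate_inbound(self, verb: str, path: str,
                              payload: Dict[str, Any]) -> InternalMessage:
            # Map an inbound API call onto a message the task manager can
            # interpret (the POST-means-initialize rule is an assumption).
            action = "initialize" if verb == "POST" else "query"
            return InternalMessage(destination="task_manager",
                                   source="api_service",
                                   body={"action": action, "path": path, **payload})

        def translate_outbound(self, msg: InternalMessage) -> Dict[str, Any]:
            # Convert an internal completion message into an outbound call
            # body (e.g., for the cloud orchestrator server 1602 ).
            return {"status": "complete", **msg.body}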
- the task manager 1714 which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to schedule tasks received at the controller compute device 1604 , such as may be received from an orchestrator server (e.g., the cloud orchestrator server 1602 of FIG. 16 ) communicatively coupled to the controller compute device 1604 .
- workload processing requests may be transmitted between the task manager 1714 and a pooled system management engine (PSME).
- certain processing tasks may be coordinated between the task manager 1714 and the applicable PSME (e.g., via a corresponding pod manager service) for fulfillment by one or more devices associated with the PSME.
- PSME is nomenclature used by Intel Corporation and is used herein merely for convenience.
- the PSME may be embodied as any sled-level, rack-level, or tray-level management engine.
- the task manager 1714 is configured to receive an indication that a request for the initiation of a service managed by the controller compute device 1604 has been received by the controller compute device 1604 .
- the task manager 1714 may receive such initialization requests via the API service 1712 .
- the task manager 1714 is further configured to create tasks and any messages associated therewith that are usable to identify information associated with the created task (e.g., for the execution thereof).
- the task manager 1714 may be configured to create a task-related message (e.g., based on a task management application protocol) that includes a header, a destination service, a source service, a task identifier, a task timestamp, a state of the task, and a request body. It should be appreciated that such tasks may be performed synchronously or asynchronously, and such task related information may be stored in the task data 1704 .
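- A minimal sketch of such a task-related message is given below; every field name, type, and default is an illustrative guess at the header, destination/source service, task identifier, timestamp, state, and request body enumerated above, not the patent's actual message format:

    import time
    import uuid
    from dataclasses import dataclass, field
    from typing import Any, Dict

    @dataclass
    class TaskMessage:
        header: str                       # protocol/version header
        destination_service: str          # e.g., "microservice_resource_controller"
        source_service: str               # e.g., "task_manager"
        task_id: str = field(default_factory=lambda: str(uuid.uuid4()))
        timestamp: float = field(default_factory=time.time)
        state: str = "created"            # e.g., created -> queued -> running -> completed
        request_body: Dict[str, Any] = field(default_factory=dict)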
- the task manager 1714 is further configured to post the created tasks to the appropriate task queue. Accordingly, the task manager 1714 is further configured to manage the queue of created tasks and the messages associated therewith for performing the tasks. In other words, the task manager 1714 is additionally configured to function as a task queue manager.
- the microservice resource controller 1716 which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to control the composition of disaggregated service resources (i.e., microservice resources) to compose a host (e.g., a node that can function as a server) to perform the requested services.
- a microservice is a software development technique that structures an application as a collection of loosely coupled services.
- services are fine-grained and the protocols are lightweight.
- such services are often processes that communicate over a network to fulfill a goal using technology-agnostic protocols or inter-process communication mechanisms (e.g., shared memory).
- services in a microservice architecture are independently deployable, easy to replace, organized around capabilities, small in size, messaging enabled, bounded by contexts, autonomously developed, and decentralized.
- the microservice resource controller 1716 is additionally configured to initialize any additional services associated with the operation of the composed nodes, such as a communication network, for example. To do so, the microservice resource controller 1716 is configured to manage disaggregated network resources, compute resources, storage resources, accelerator resources, etc., using an associated microservice (e.g., a network service, a storage service, a compute service, etc.). Accordingly, the microservice resource controller 1716 is configured to pick up (e.g., retrieve from the applicable task queue) and execute the tasks. It should be appreciated that each service controlled by the controller compute device 1604 comprises one or more microservices capable of providing one or more services thereof.
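- One possible reading of this pick-up-and-execute behavior is sketched below with an in-process queue; the registry, handler signature, and resource-type keys are assumptions rather than the patent's actual interfaces:

    import queue

    class MicroserviceResourceController:
        def __init__(self):
            self.task_queue = queue.Queue()   # stand-in for the applicable task queue
            self.services = {}                # resource type -> microservice handler

        def register(self, resource_type, handler):
            # e.g., register("network", network_service), register("storage", ...)
            self.services[resource_type] = handler

        def run_once(self):
            task = self.task_queue.get()              # retrieve the next task
            handler = self.services[task["resource"]] # pick the associated microservice
            handler(task)                             # execute the task via that service
            self.task_queue.task_done()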
- the illustrative microservice resource controller 1716 includes a network service 1718 , a storage service 1720 , a compute service 1722 , and a telemetry service 1724 .
- the network service 1718 is configured to use network-related resources to perform a particular task associated with a requested controller service.
- the storage service 1720 is configured to use storage resources to perform particular storage-related tasks associated with a requested controller service.
- the compute service 1722 is configured to use compute and/or accelerator resources to perform particular compute-related tasks associated with a requested controller service.
- the telemetry service 1724 is configured to collect/store telemetry data in accordance with a requested controller service. It should be appreciated that the microservice resource controller 1716 may include additional and/or alternative services in other embodiments. In some embodiments, information associated with the composed resources may be stored in the compose service data 1706 .
- the microtask resource controller 1726 which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to control the composition of disaggregated microservice resources (i.e., microtask resources) to perform the requested services.
- the illustrative microtask resource controller 1726 includes a database service 1728 configured to manage one or more databases, a timestamp service 1730 configured to apply timestamps, a compose service 1732 configured to compose services via the microtask resources, and a resource allocator 1734 .
- the resource allocator 1734 is configured to allocate resources for each microtask, as may be requested by the other microtasks (e.g., the database service 1728 , the timestamp service 1730 , the compose service 1732 , etc.) via a corresponding thread. It should be appreciated that the resource allocator 1734 is at the lowest level in the hierarchy of microtasks.
- the compose service 1732 is configured to manage the composable hardware dynamically as necessary to scale up or down. Accordingly, the compose service 1732 can call the resource allocator 1734 to compose (e.g., configure, group, etc.) various resources, such as by workload for a particular service.
- the compose service 1732 may be configured to initiate a discovery operation, create zones, provision a network, compose a host, release a host, provision storage, provision a node, etc.
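- The division of labor between the compose service and the resource allocator might be sketched as follows; the allocate() call and the workload dictionary keys are hypothetical stand-ins for the composable-hardware interfaces named above:

    class ResourceAllocator:
        # Lowest level in the microtask hierarchy: reserves raw resources.
        def allocate(self, kind, quantity):
            return [f"{kind}-{i}" for i in range(quantity)]

    class ComposeService:
        def __init__(self, allocator):
            self.allocator = allocator

        def compose_host(self, workload):
            # Group resources by workload to compose a host, scaling the
            # request up or down as the workload dictates.
            return {
                "cpus": self.allocator.allocate("cpu", workload["cpus"]),
                "volumes": self.allocator.allocate("volume", workload["volumes"]),
                "nics": self.allocator.allocate("nic", workload["nics"]),
            }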
- a PSME may be configured to detect resources (e.g., via a discovery that may be initiated by the controller compute device 1604 ), such that information related thereto (e.g., processing power, configuration, specialized functionality, average utilization, or the like) can be retrieved and provided to the resource allocator 1734 .
- each sled (e.g., one of the compute sleds 1530 ) equipped with a PSME may detect device resources (e.g., NICs, ports, memory, CPUs, etc.) within the data center (e.g., the system 1510 ), including discovering information about each detected device (e.g., processing power, configuration, specialized functionality, average utilization, and/or the like) that is usable to schedule one or more portions (e.g., tasks) of an application to be processed by device(s) available in the system 1510 suited to performing the respective task.
- the resource data may be stored in the pod manager data 1702 .
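- The per-device discovery record suggested by the parenthetical above could take a shape such as the following; the field names and the device-naming scheme are invented for illustration:

    from dataclasses import dataclass
    from typing import Any, Dict

    @dataclass
    class DiscoveredResource:
        device_id: str               # e.g., "sled-3/nic-0" (hypothetical naming)
        processing_power: float      # capability measure used for scheduling
        configuration: Dict[str, Any]
        specialized_function: str    # e.g., "crypto-offload" or "none"
        average_utilization: float   # 0.0 .. 1.0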
- the controller compute device 1604 may execute a method 1800 for managing disaggregated resources in a data center.
- managing the disaggregated resources may include managing distributed pooled storage, distributed pooled compute, distributed pooled accelerators, etc.
- the method 1800 begins in block 1802 , in which the controller compute device 1604 determines whether to initialize a service. It should be appreciated that initializing the service may include initializing a service to perform a particular function and/or composing one or more nodes to execute the initialized service. If so, the method 1800 advances to block 1804 , in which the controller compute device 1604 stores information associated with the service to be initialized. In block 1806 , the controller compute device 1604 creates a task based on the service to be initialized. In block 1808 , the controller compute device 1604 inserts the created task into a corresponding task message queue.
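- Blocks 1802 through 1808 can be read as the following control flow, sketched with generic store and queue objects that stand in for the task data 1704 and the task message queue:

    def handle_initialization(request, task_data_store, task_queue):
        if request is None:                         # block 1802: no service to initialize
            return False
        task_data_store[request["service"]] = request               # block 1804
        task = {"service": request["service"], "state": "created"}  # block 1806
        task_queue.put(task)                        # block 1808: enqueue the created task
        return True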
- the controller compute device 1604 determines whether the created task is to be processed, such as may be determinable when the created task is at a head of the task message queue. In other words, the controller compute device 1604 determines whether to compose the requested service. If so, the method 1800 advances to block 1812 , in which the controller compute device 1604 creates a microservice to perform the created task. It should be appreciated that the controller compute device 1604 may create the microservice to be hosted on more than one host device. Accordingly, under such conditions, the host devices may be configured to communicate between the pod manager services (e.g., using a main service over Advanced Message Queuing Protocol (AMQP)).
- the controller compute device 1604 composes the microservice as a collection of services, which can be instantiated within their respective namespace by any service, for example the illustrative network service 1718 , storage service 1720 , compute service 1722 , and/or telemetry service 1724 of FIG. 17 , or any other service/microservice that may be associated with the microservice resource controller 1716 of FIG. 17 in other embodiments.
- the controller compute device 1604 creates each of the collection of services as a collection of microtasks (e.g., via the resource allocator 1734 of FIG. 17 ), such as, for example, the illustrative database service 1728 , timestamp service 1730 , and/or compose service 1732 of FIG. 17 , or any other service/microtask that may be associated with the microtask resource controller 1726 of FIG. 17 in other embodiments.
- the method 1800 advances to blocks 1818 and 1822 , which may be performed either serially or in parallel.
- the controller compute device 1604 creates one or more threads to perform any asynchronous task(s) associated with the created microservice, including any hardware management lifecycle operations, network management operations, network slice allocations, etc., as described herein (see, e.g., the communication flows 1900 of FIG. 19, 2000 of FIG. 20, 2100 of FIG. 21, and 2200 of FIG. 22 ).
- the controller compute device 1604 completes any asynchronous tasks associated with the created threads in the background.
- the controller compute device 1604 completes any synchronous task(s) associated with the created microservice.
- it should be appreciated that one or more microtasks may be required to be spun up in support of that microservice. It should be further appreciated that such spun-up microtasks can either be shut down when the work has completed or maintain a live state for future use, depending on the embodiment.
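- A hedged sketch of blocks 1818 and 1822 follows: asynchronous microtasks run on background threads while synchronous tasks complete inline, and spun-up workers are either joined when the work finishes or left alive for reuse. The keep_alive flag and the task callables are illustrative only:

    import threading

    def run_microservice(async_tasks, sync_tasks, keep_alive=False):
        threads = [threading.Thread(target=t, daemon=True) for t in async_tasks]
        for t in threads:
            t.start()          # blocks 1818/1820: complete async tasks in the background
        for task in sync_tasks:
            task()             # block 1822: complete synchronous tasks inline
        if not keep_alive:
            for t in threads:
                t.join()       # shut spun-up microtasks down once the work completes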
- an embodiment of an illustrative communication flow 1900 for performing hardware lifecycle management operations includes the cloud orchestrator server 1602 and the controller compute device 1604 of FIG. 16 .
- the illustrative controller compute device 1604 includes the API service 1712 , the task manager 1714 , the microservice resource controller 1716 , and the microtask resource controller 1726 of FIG. 17 .
- the illustrative communication flow 1900 includes a number of data flows, some of which may be executed separately or together, depending on the embodiment.
- the cloud orchestrator server 1602 determines to orchestrate a VM to run an application which requires various compute resources.
- the cloud orchestrator server 1602 transmits to the API service 1712 a service orchestration request (e.g., via an applicable API call) that includes a list of resources (e.g., compute resources, storage resources, network resources, accelerator resources, etc.) usable to identify which resources are required to compose a node.
- the API service 1712 forwards the received service orchestration request to the task manager 1714 .
- the API service 1712 may generate a new message that effectively translates the received service orchestration request into a message interpretable by the task manager 1714 to perform the requested service orchestration.
- the task manager 1714 determines that a node is to be composed and storage allocated thereto. As such, in data flow 1908 , the task manager 1714 generates and enqueues a task to orchestrate the requested node.
- the microservice resource controller 1716 allocates the task (e.g., via one or more services) to initiate composition of the requested node and, in data flow 1912 , transmits a notification of the allocated task to the microtask resource controller 1726 .
- the microtask resource controller 1726 (e.g., via the compose service 1732 of FIG. 17 ) allocates a thread (e.g., from a thread pool) to call a pod manager (e.g., via a pod manager service) to discover resources of the associated hardware cluster.
- the microtask resource controller 1726 (e.g., via the resource allocator 1734 of FIG. 17 ) allocates a thread to call a compose API of the pod manager to compose a portion of the discovered resources.
- the microtask resource controller 1726 (e.g., via the resource allocator 1734 of FIG. 17 ) allocates a thread to deploy a storage volume.
- the microtask resource controller 1726 transmits a notification of the composed node resources to the microservice resource controller 1716 .
- the microservice resource controller 1716 transmits a notification of completion to the cloud orchestrator server 1602 (e.g., via the task manager 1714 and the API service 1712 ) that includes an identifier of the composed node.
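- Collapsing these data flows into code gives roughly the following; the pod_manager client and its discover/compose/attach_volume methods are hypothetical stand-ins, not an actual pod manager API:

    def compose_node(pod_manager, resource_list):
        inventory = pod_manager.discover()             # discover cluster resources
        selected = {kind: inventory[kind][:count]      # choose a portion of them
                    for kind, count in resource_list.items()}
        node_id = pod_manager.compose(selected)        # call the compose API
        pod_manager.attach_volume(node_id)             # deploy a storage volume
        return node_id                                 # reported back with the completion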
- an embodiment of an illustrative communication flow 2000 for scheduling and managing groups of nodes is shown that includes the cloud orchestrator server 1602 and the controller compute device 1604 of FIG. 16 .
- the illustrative controller compute device 1604 includes the API service 1712 , the task manager 1714 , the microservice resource controller 1716 , and the microtask resource controller 1726 of FIG. 17 .
- the illustrative communication flow 2000 includes a number of data flows, some of which may be executed separately or together, depending on the embodiment.
- in data flow 2002 , the cloud orchestrator server 1602 determines that a group of nodes is to be composed and provisioned for a set of VMs or containers.
- the cloud orchestrator server 1602 transmits a service orchestration request (e.g., via an applicable API call) to the API service 1712 that includes a list of required node characteristics (e.g., compute resources, storage resources, network resources, accelerator resources, configuration settings, etc.).
- the API service 1712 forwards the received service orchestration request to the task manager 1714 with a system group notification. It should be appreciated that the API service 1712 may generate a new message that effectively translates the received service orchestration request into a message interpretable by the task manager 1714 to perform the requested service orchestration.
- the task manager 1714 determines that a group of nodes are to be composed and storage allocated thereto. As such, in data flow 2008 , the task manager 1714 spawns and enqueues multiple tasks to orchestrate the requested nodes.
- the microservice resource controller 1716 allocates each task (e.g., via one or more services) to initiate composition of the requested nodes and, in data flow 2012 , transmits a notification of the allocated tasks to the microtask resource controller 1726 .
- the microtask resource controller 1726 (e.g., via the compose service 1732 of FIG. 17 ) allocates threads (e.g., from a thread pool) to call a pod manager to discover resources of the associated hardware cluster.
- the microtask resource controller 1726 (e.g., via the resource allocator 1734 of FIG. 17 ) allocates threads to call a compose API of the pod manager to compose a portion of the discovered resources.
- the microtask resource controller 1726 (e.g., via the resource allocator 1734 of FIG. 17 ) allocates threads to deploy storage volumes, as necessary.
- the microtask resource controller 1726 transmits a notification of the composed nodes to the microservice resource controller 1716 .
- the microservice resource controller 1716 transmits a notification of completion to the cloud orchestrator server 1602 (e.g., via the task manager 1714 and the API service 1712 ) that includes an identifier of each of the composed nodes and an identifier of the group of composed nodes.
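- Flow 2000 differs from flow 1900 mainly in its fan-out; reusing the compose_node sketch above, the group case might look like the following, with the pool size and group-identifier scheme as assumptions:

    import uuid
    from concurrent.futures import ThreadPoolExecutor

    def compose_node_group(pod_manager, node_specs):
        # One thread per requested node, each running the flow-1900 steps.
        with ThreadPoolExecutor(max_workers=len(node_specs)) as pool:
            node_ids = list(pool.map(lambda spec: compose_node(pod_manager, spec),
                                     node_specs))
        group_id = str(uuid.uuid4())   # identifier for the group of composed nodes
        return group_id, node_ids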
- an embodiment of an illustrative communication flow 2100 for managing an underlay network includes the cloud orchestrator server 1602 and the controller compute device 1604 of FIG. 16 .
- the illustrative controller compute device 1604 includes the API service 1712 , the task manager 1714 , the microservice resource controller 1716 , and the microtask resource controller 1726 of FIG. 17 .
- the illustrative communication flow 2100 includes a number of data flows, some of which may be executed separately or together, depending on the embodiment.
- the cloud orchestrator server 1602 determines that an underlay network (e.g., a virtual local area network (VLAN), a Virtual Extensible LAN (VxLAN), etc.) is required with specific configurations for particular network traffic associated with a tenant (e.g., based on a service-level agreement (SLA)).
- the cloud orchestrator server 1602 transmits a service orchestration request (e.g., via an applicable API call) to the API service 1712 that includes a set of configuration settings of the underlay network.
- the API service 1712 forwards the request to the task manager 1714 .
- the API service 1712 may generate a new message that effectively translates the received service orchestration request into a message interpretable by the task manager 1714 to perform the requested service orchestration.
- the task manager 1714 spawns a task to compose network resources (e.g., via the network service 1718 ).
- the microservice resource controller 1716 starts one or more threads (e.g., one or more master threads) to allocate the requested network resources.
- the microtask resource controller 1726 allocates a thread to configure one or more ports of a switch of the system (e.g., the switch 150 of FIG. 1 or one of the switches 250 , 260 of FIG. 2 ). In data flow 2114 , the microtask resource controller 1726 allocates another thread to configure one or more ports of one or more end-hosts (e.g., composed node(s)). In some embodiments, the configuration may be performed via a communicatively coupled pod manager (not shown), such as may be performed via one or more pod manager network API calls.
- the task manager 1714 may be configured to spawn additional tasks for the microservice resource controller 1716 to start additional threads to provision other network services (e.g., a VLAN on the host and switch). Accordingly, in such embodiments, the pod manager network API and a switch API may be called at that time.
- the microservice resource controller 1716 transmits a notification of completion to the cloud orchestrator server 1602 (e.g., via the task manager 1714 and the API service 1712 ) that includes a completion code and an identifier of the underlay network.
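- The master/child thread structure of flow 2100 can be sketched as follows; the two configuration functions are stubs standing in for the pod manager network API and switch API calls mentioned above, and the returned identifier and completion code are illustrative values:

    import threading

    def configure_switch_ports(ports):
        pass   # stand-in for a switch API call (data flow 2112)

    def configure_host_ports(hosts):
        pass   # stand-in for a pod manager network API call (data flow 2114)

    def provision_underlay(switch_ports, host_ports):
        children = [
            threading.Thread(target=configure_switch_ports, args=(switch_ports,)),
            threading.Thread(target=configure_host_ports, args=(host_ports,)),
        ]
        for c in children:
            c.start()
        for c in children:
            c.join()
        return {"completion_code": 0, "underlay_id": "underlay-0"}  # illustrative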
- an embodiment of an illustrative communication flow 2200 for allocating a network slice is shown that includes the cloud orchestrator server 1602 and the controller compute device 1604 of FIG. 16 .
- the illustrative controller compute device 1604 includes the API service 1712 , the task manager 1714 , the microservice resource controller 1716 , and the microtask resource controller 1726 of FIG. 17 .
- the illustrative communication flow 2200 includes a number of data flows, some of which may be executed separately or together, depending on the embodiment.
- the cloud orchestrator server 1602 determines that a network slice is required with specific configurations for a telecommunications network.
- network slicing is a form of virtual network architecture that allows multiple logical networks to run on top of a shared physical network infrastructure, using the same/similar principles as software defined network (SDN) and network function virtualization (NFV) architectures in fixed networks.
- the cloud orchestrator server 1602 transmits a service orchestration request (e.g., via an applicable API call) to the API service 1712 that includes a list of required resources for the network slice, such as a specific accelerator resource for performing certain functions of the network slice.
- the API service 1712 transmits a request to the task manager 1714 to attach resources to a composed node (see, e.g., the communication flow 1900 of FIG. 19 ).
- the API service 1712 transmits a request to the task manager 1714 to start a service and network to communicate with the attached resources.
- the task manager 1714 spawns the appropriate tasks to attach the resources, and start the communication service and network.
- the microservice resource controller 1716 allocates tasks to initiate composition of the requested resources and services.
- the microservice resource controller 1716 transmits a notification of the allocated tasks. Accordingly, in data flow 2216 , the microtask resource controller 1726 allocates a thread to call a pod manager (e.g., via the applicable API calls to that pod manager) to attach the required resources to a composed node. Additionally, in data flow 2218 , the microtask resource controller 1726 allocates a thread to communicate with the switch (e.g., the switch 150 of FIG. 1 or one of the switches 250 , 260 of FIG. 2 ) to create a network to the attached resources of the composed node. In data flow 2220 , the microtask resource controller 1726 transmits a notification that the allocation threads have completed.
- the microservice resource controller 1716 transmits a notification of completion to the cloud orchestrator server 1602 (e.g., via the task manager 1714 and the API service 1712 ) that includes an identifier of each allocated resource.
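- Flow 2200 reduces to an attach-then-connect pair of steps; in this sketch both the pod_manager and switch clients, and their attach/create_network methods, are invented for illustration:

    def allocate_network_slice(pod_manager, switch, node_id, required_resources):
        # Data flow 2216: attach each required resource (e.g., an accelerator)
        # to the existing composed node via the pod manager.
        attached = [pod_manager.attach(node_id, resource)
                    for resource in required_resources]
        # Data flow 2218: create a network to the attached resources via the switch.
        switch.create_network(node_id, attached)
        return attached   # identifiers of each allocated resource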
- An embodiment of the technologies disclosed herein may include any one or more, and any combination of, the examples described below.
- Example 1 includes a compute device for managing disaggregated resources in a data center, the compute device comprising microservice resource controller circuitry to (i) determine that a service related task has been generated and (ii) create one or more microservices to perform the determined service related task using at least one of a plurality of services managed by the microservice resource controller circuitry; and microtask resource controller circuitry to generate one or more microtasks to compose at least one service based on the one or more microservices.
- Example 2 includes the subject matter of Example 1, and wherein to generate the one or more microtasks comprises to create one or more threads for each of the one or more microservices, and wherein each of the one or more threads is to execute a respective one of the one or more microtasks.
- Example 3 includes the subject matter of any of Examples 1 and 2, and wherein to create the one or more threads comprises to allocate a first thread to call a pod manager of the data center to discover resources of a hardware cluster of the data center.
- Example 4 includes the subject matter of any of Examples 1-3, and wherein to create the one or more threads further comprises to allocate a second thread to compose a portion of the discovered resources into a composed node that is configured to function as a server.
- Example 5 includes the subject matter of any of Examples 1-4, and wherein to create the one or more threads further comprises to allocate a third thread to deploy a storage volume to be associated with the composed node.
- Example 6 includes the subject matter of any of Examples 1-5, and wherein the microservice resource controller circuitry is further to transmit a notification of completion to an entity that requested composition of the composed node, and wherein the notification of completion includes an identifier of the composed node.
- Example 7 includes the subject matter of any of Examples 1-6, and wherein to create the one or more threads further comprises to allocate a plurality of threads to compose a portion of the discovered resources into a group of composed nodes, wherein each composed node of the group of composed nodes is configured to function as a server.
- Example 8 includes the subject matter of any of Examples 1-7, and wherein the microservice resource controller circuitry is further to transmit a notification of completion to an entity that requested composition of the composed node, and wherein the notification of completion includes an identifier of each composed node of the group of composed nodes and a group identifier that identifies the group of composed nodes.
- Example 9 includes the subject matter of any of Examples 1-8, and wherein to determine that the service related task has been generated comprises to determine that an underlay network of the data center is to be orchestrated, wherein to create the one or more threads comprises to start a master thread to compose network resources, and wherein the master thread is to (i) allocate a child thread to configure one or more switch ports of a switch of the data center and (ii) allocate one or more threads to configure one or more host ports of a node of the data center.
- Example 10 includes the subject matter of any of Examples 1-9, and wherein the microservice resource controller circuitry is further to transmit a notification of completion to an entity that requested the underlay network to be orchestrated, and wherein the notification of completion includes a completion code and an identifier of the underlay network.
- Example 11 includes the subject matter of any of Examples 1-10, and wherein to determine that the service related task has been generated comprises to determine that the generated service related task indicates that at least one node is to be orchestrated.
- Example 12 includes the subject matter of any of Examples 1-11, and wherein the resources include compute resources, storage resources, and network resources.
- Example 13 includes a compute device for managing disaggregated resources in a data center, the compute device comprising a compute engine to (i) determine that a service related task has been generated and (ii) create one or more microservices to perform the determined service related task using at least one of a plurality of services managed by the compute engine; and microtask resource controller circuitry to generate one or more microtasks to compose at least one service based on the one or more microservices.
- Example 14 includes the subject matter of Example 13, and wherein to generate the one or more microtasks comprises to create one or more threads for each of the one or more microservices, and wherein each of the one or more threads is to execute a respective one of the one or more microtasks.
- Example 15 includes the subject matter of any of Examples 13 and 14, and wherein to determine that the service related task has been generated comprises to determine that the generated service related task indicates that at least one node is to be orchestrated, and wherein to create the one or more threads comprises to allocate a first thread to call a pod manager of the data center to discover resources of a hardware cluster of the data center, wherein the resources include compute resources, storage resources, and network resources.
- Example 16 includes the subject matter of any of Examples 13-15, and wherein to create the one or more threads further comprises to allocate a second thread to compose a portion of the discovered resources into a composed node that is configured to function as a server.
- Example 17 includes the subject matter of any of Examples 13-16, and wherein to create the one or more threads further comprises to allocate a third thread to deploy a storage volume to be associated with the composed node.
- Example 18 includes the subject matter of any of Examples 13-17, and wherein the compute engine is further to transmit a notification of completion to an entity that requested composition of the composed node, and wherein the notification of completion includes an identifier of the composed node.
- Example 19 includes the subject matter of any of Examples 13-18, and wherein to create the one or more threads further comprises to allocate a plurality of threads to compose a portion of the discovered resources into a group of composed nodes, wherein each composed node of the group of composed nodes is configured to function as a server.
- Example 20 includes the subject matter of any of Examples 13-19, and wherein the compute engine is further to transmit a notification of completion to an entity that requested composition of the composed node, and wherein the notification of completion includes an identifier of each composed node of the group of composed nodes and a group identifier that identifies the group of composed nodes.
- Example 21 includes the subject matter of any of Examples 13-20, and wherein to determine that the service related task has been generated comprises to determine that an underlay network of the data center is to be orchestrated, wherein to create the one or more threads comprises to start a master thread to compose network resources of the data center, and wherein the master thread is to (i) allocate a child thread to configure one or more switch ports of a switch of the data center and (ii) allocate one or more threads to configure one or more host ports of a node of the data center.
- Example 22 includes the subject matter of any of Examples 13-21, and wherein the compute engine is further to transmit a notification of completion to an entity that requested the underlay network to be orchestrated, and wherein the notification of completion includes a completion code and an identifier of the underlay network.
- Example 23 includes a method for managing disaggregated resources in a data center, the method comprising determining, by a compute device, that a service related task has been generated; creating, by the compute device, one or more microservices to perform the determined service related task using at least one of a plurality of services managed by the compute device; and generating, by the compute device, one or more microtasks to compose at least one service based on the one or more microservices.
- Example 24 includes the subject matter of Example 23, and wherein generating the one or more microtasks comprises creating one or more threads for each of the one or more microservices, and wherein each of the one or more threads is to execute a respective one of the one or more microtasks.
- Example 25 includes the subject matter of any of Examples 23 and 24, and wherein determining that the service related task has been generated comprises determining that the generated service related task indicates that at least one node is to be orchestrated, and wherein creating the one or more threads comprises allocating a first thread to call a pod manager of the data center to discover resources of a hardware cluster of the data center, wherein the resources include compute resources, storage resources, and network resources.
Description
- The present application claims the benefit of Indian Provisional Patent Application No. 201741030632, filed Aug. 30, 2017 and U.S. Provisional Patent Application No. 62/584,401, filed Nov. 10, 2017.
- Large data centers that execute workloads (e.g., services, applications, processes, sets of operations, etc.) on behalf of one or more clients (e.g., customers, tenants, etc.) typically contain racks of compute devices to execute the various workloads. Data centers are evolving towards a disaggregated architecture where storage media can be shared, so as to address issues of underutilization of capacity and/or throughput of storage devices in datacenters due to imbalanced requirements across applications and over time. Each compute device may have multiple components located in various positions on the compute device that perform different functions (e.g., memory to access data, compute components to execute operations, etc.) and each data center may include thousands to tens-of-thousands of such compute devices. As such, scaling the management control plane generally requires the inclusion of additional switching mechanisms, such as a spine switch in data centers which employ top-of-rack switches.
- Oftentimes, client workloads are executed in virtualized or containerized clouds (e.g., using Openstack). Accordingly, the composition of client networks, teardown of spawning racks, flow management, and routing across resources is required. Generally, such control plane management requires packet telemetry gathering and analysis at scale, in addition to management of the hardware switches with actions that influence the client virtual networks. For example, spawning a storage volume of disaggregated distributed disks, or disaggregated pooled field programmable gate arrays (FPGAs) for machine learning or deep learning deployments (e.g., using the Theano library, the Caffe deep learning framework, etc.) orchestrated as high-performance computing (HPC) workloads requires a scalable management solution that takes into account the disaggregated nature of the hardware resources of the data center.
- The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.
- FIG. 1 is a simplified diagram of at least one embodiment of a data center for executing workloads with disaggregated resources;
- FIG. 2 is a simplified diagram of at least one embodiment of a pod that may be included in the data center of FIG. 1 ;
- FIG. 3 is a perspective view of at least one embodiment of a rack that may be included in the pod of FIG. 2 ;
- FIG. 4 is a side elevation view of the rack of FIG. 3 ;
- FIG. 5 is a perspective view of the rack of FIG. 3 having a sled mounted therein;
- FIG. 6 is a simplified block diagram of at least one embodiment of a top side of the sled of FIG. 5 ;
- FIG. 7 is a simplified block diagram of at least one embodiment of a bottom side of the sled of FIG. 6 ;
- FIG. 8 is a simplified block diagram of at least one embodiment of a compute sled usable in the data center of FIG. 1 ;
- FIG. 9 is a top perspective view of at least one embodiment of the compute sled of FIG. 8 ;
- FIG. 10 is a simplified block diagram of at least one embodiment of an accelerator sled usable in the data center of FIG. 1 ;
- FIG. 11 is a top perspective view of at least one embodiment of the accelerator sled of FIG. 10 ;
- FIG. 12 is a simplified block diagram of at least one embodiment of a storage sled usable in the data center of FIG. 1 ;
- FIG. 13 is a top perspective view of at least one embodiment of the storage sled of FIG. 12 ;
- FIG. 14 is a simplified block diagram of at least one embodiment of a memory sled usable in the data center of FIG. 1 ;
- FIG. 15 is a simplified block diagram of a system that may be established within the data center of FIG. 1 to execute workloads with managed nodes composed of disaggregated resources;
- FIG. 16 is a simplified diagram of at least one embodiment of a system for managing disaggregated resources in a data center that includes a controller compute device;
- FIG. 17 is a simplified block diagram of at least one embodiment of the controller compute device of the system of FIG. 16 ;
- FIG. 18 is a simplified block diagram of at least one embodiment of a method for managing disaggregated resources in a data center that may be performed by the controller compute device of FIGS. 16 and 17 ;
- FIG. 19 is a simplified block diagram of at least one embodiment of a communication data flow for performing a hardware lifecycle management operation that may be performed by the controller compute device of FIGS. 16 and 17 ;
- FIG. 20 is a simplified block diagram of at least one embodiment of a communication data flow for scheduling and managing groups of nodes that may be performed by the controller compute device of FIGS. 16 and 17 ;
- FIG. 21 is a simplified block diagram of at least one embodiment of a communication data flow for managing an underlay network that may be performed by the controller compute device of FIGS. 16 and 17 ; and
- FIG. 22 is a simplified block diagram of at least one embodiment of a communication data flow for allocating a network slice that may be performed by the controller compute device of FIGS. 16 and 17 .
- While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
- References in the specification to “one embodiment,” “an embodiment,” “an illustrative embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of “at least one A, B, and C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of “at least one of A, B, or C” can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).
- The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or other media device).
- In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, may not be included or may be combined with other features.
- Referring now to FIG. 1 , a data center 100 (e.g., a facility used to house computer systems and associated components, such as telecommunications and storage systems) in which disaggregated resources may cooperatively execute one or more workloads (e.g., applications on behalf of customers) includes multiple pods 110 , 120 , 130 , 140 , each of which includes one or more rows of racks. Of course, although the data center 100 is shown with multiple pods, in some embodiments, the data center 100 may be embodied as a single pod. Pods, by one definition, are a group of racks. As described in more detail herein, each rack houses multiple sleds, each of which may be primarily equipped with a particular type of resource (e.g., memory devices, data storage devices, accelerator devices, general purpose processors), i.e., resources that can be logically coupled to form a composed node, which can act as, for example, a server. - In the illustrative embodiment, the sleds in each pod 110 , 120 , 130 , 140 are connected to multiple pod switches, which, in turn, connect with spine switches 150 that switch communications among pods (e.g., the pods 110 , 120 , 130 , 140 ) in the data center 100 . In some embodiments, the sleds may be connected with a fabric using Intel® Omni-Path technology. In other embodiments, the sleds may be connected with other fabrics, such as InfiniBand or Ethernet. As described in more detail herein, resources within sleds in the data center 100 may be allocated to a group (referred to herein as a "managed node") containing resources from one or more sleds to be collectively utilized in the execution of a workload. The workload can execute as if the resources belonging to the managed node were located on the same sled. The resources in a managed node may belong to sleds belonging to different racks, and even to different pods 110 , 120 , 130 , 140 .
data center 100, can be used in a wide variety of contexts, such as enterprise, government, cloud service provider, and communications service provider (e.g., Telco's), as well in a wide variety of sizes, from cloud service provider mega-data centers that consume over 100,000 sq. ft. to single- or multi-rack installations for use in base stations. - The disaggregation of resources to sleds comprised predominantly of a single type of resource (e.g., compute sleds comprising primarily compute resources, memory sleds containing primarily memory resources), and the selective allocation and deallocation of the disaggregated resources to form a managed node assigned to execute a workload improves the operation and resource usage of the
data center 100 relative to typical data centers comprised of hyperconverged servers containing compute, memory, storage and perhaps additional resources in a single chassis. For example, because sleds predominantly contain resources of a particular type, resources of a given type can be upgraded independently of other resources. Additionally, because different resources types (processors, storage, accelerators, etc.) typically have different refresh rates, greater resource utilization and reduced total cost of ownership may be achieved. For example, a data center operator can upgrade the processors throughout their facility by only swapping out the compute sleds. In such a case, accelerator and storage resources may not be contemporaneously upgraded and, rather, may be allowed to continue operating until those resources are scheduled for their own refresh. Resource utilization may also increase. For example, if managed nodes are composed (e.g., resources collectively combined to provide certain functionality) based on requirements of the workloads that will be running on them, resources within a node are more likely to be fully utilized. Such utilization may allow for more managed nodes to run in a data center with a given set of resources, or for a data center expected to run a given set of workloads, to be built using fewer resources. - Referring now to
FIG. 2 , the pod 110 , in the illustrative embodiment, includes a set of rows 200 , 210 , 220 , 230 of racks 240 . Each rack 240 may house multiple sleds (e.g., sixteen sleds) and provide power and data connections to the housed sleds, as described in more detail herein. In the illustrative embodiment, the racks in each row 200 , 210 , 220 , 230 are connected to multiple pod switches 250 , 260 . The pod switch 250 includes a set of ports 252 to which the sleds of the racks of the pod 110 are connected and another set of ports 254 that connect the pod 110 to the spine switches 150 to provide connectivity to other pods in the data center 100 . Similarly, the pod switch 260 includes a set of ports 262 to which the sleds of the racks of the pod 110 are connected and a set of ports 264 that connect the pod 110 to the spine switches 150 . As such, the use of the pair of switches 250 , 260 provides an amount of redundancy to the pod 110 . For example, if either of the switches 250 , 260 fails, the sleds in the pod 110 may still maintain data communication with the remainder of the data center 100 (e.g., sleds of other pods) through the other switch 250 , 260 . - It should be appreciated that each of the other pods 120 , 130 , 140 (as well as any additional pods of the data center 100 ) may be similarly structured as, and have components similar to, the pod 110 shown in and described in regard to FIG. 2 (e.g., each pod may have rows of racks housing multiple sleds as described above). Additionally, while two pod switches 250 , 260 are shown, it should be understood that in other embodiments, each pod 110 , 120 , 130 , 140 may be connected to a different number of pod switches, providing even more failover capacity. Of course, in other embodiments, pods may be arranged differently than the rows-of-racks configuration shown in FIGS. 1-2 . For example, a pod may be embodied as multiple sets of racks in which each set of racks is arranged radially, i.e., the racks are equidistant from a center switch. - Referring now to
FIGS. 3-5 , each illustrative rack 240 of the data center 100 includes two elongated support posts 302 , 304 , which are arranged vertically. For example, the elongated support posts 302 , 304 may extend upwardly from a floor of the data center 100 when deployed. The rack 240 also includes one or more horizontal pairs 310 of elongated support arms 312 (identified in FIG. 3 via a dashed ellipse) configured to support a sled of the data center 100 as discussed below. One elongated support arm 312 of the pair of elongated support arms 312 extends outwardly from the elongated support post 302 and the other elongated support arm 312 extends outwardly from the elongated support post 304 . - In the illustrative embodiments, each sled of the data center 100 is embodied as a chassis-less sled. That is, each sled has a chassis-less circuit board substrate on which physical resources (e.g., processors, memory, accelerators, storage, etc.) are mounted as discussed in more detail below. As such, the rack 240 is configured to receive the chassis-less sleds. For example, each pair 310 of elongated support arms 312 defines a sled slot 320 of the rack 240 , which is configured to receive a corresponding chassis-less sled. To do so, each illustrative elongated support arm 312 includes a circuit board guide 330 configured to receive the chassis-less circuit board substrate of the sled. Each circuit board guide 330 is secured to, or otherwise mounted to, a top side 332 of the corresponding elongated support arm 312 . For example, in the illustrative embodiment, each circuit board guide 330 is mounted at a distal end of the corresponding elongated support arm 312 relative to the corresponding elongated support post 302 , 304 . For clarity of the Figures, not every circuit board guide 330 may be referenced in each Figure. - Each
circuit board guide 330 includes an inner wall that defines a circuit board slot 380 configured to receive the chassis-less circuit board substrate of a sled 400 when the sled 400 is received in the corresponding sled slot 320 of the rack 240 . To do so, as shown in FIG. 4 , a user (or robot) aligns the chassis-less circuit board substrate of an illustrative chassis-less sled 400 to a sled slot 320 . The user, or robot, may then slide the chassis-less circuit board substrate forward into the sled slot 320 such that each side edge 414 of the chassis-less circuit board substrate is received in a corresponding circuit board slot 380 of the circuit board guides 330 of the pair 310 of elongated support arms 312 that define the corresponding sled slot 320 as shown in FIG. 4 . By having robotically accessible and robotically manipulable sleds comprising disaggregated resources, each type of resource can be upgraded independently of each other and at their own optimized refresh rate. Furthermore, the sleds are configured to blindly mate with power and data communication cables in each rack 240 , enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. As such, in some embodiments, the data center 100 may operate (e.g., execute workloads, undergo maintenance and/or upgrades, etc.) without human involvement on the data center floor. In other embodiments, a human may facilitate one or more maintenance or upgrade operations in the data center 100 . - It should be appreciated that each circuit board guide 330 is dual sided. That is, each circuit board guide 330 includes an inner wall that defines a circuit board slot 380 on each side of the circuit board guide 330 . In this way, each circuit board guide 330 can support a chassis-less circuit board substrate on either side. As such, a single additional elongated support post may be added to the rack 240 to turn the rack 240 into a two-rack solution that can hold twice as many sled slots 320 as shown in FIG. 3 . The illustrative rack 240 includes seven pairs 310 of elongated support arms 312 that define a corresponding seven sled slots 320 , each configured to receive and support a corresponding sled 400 as discussed above. Of course, in other embodiments, the rack 240 may include additional or fewer pairs 310 of elongated support arms 312 (i.e., additional or fewer sled slots 320 ). It should be appreciated that because the sled 400 is chassis-less, the sled 400 may have an overall height that is different than typical servers. As such, in some embodiments, the height of each sled slot 320 may be shorter than the height of a typical server (e.g., shorter than a single rack unit, "1 U"). That is, the vertical distance between each pair 310 of elongated support arms 312 may be less than a standard rack unit "1 U." Additionally, due to the relative decrease in height of the sled slots 320 , the overall height of the rack 240 in some embodiments may be shorter than the height of traditional rack enclosures. For example, in some embodiments, each of the elongated support posts 302 , 304 may have a length of six feet or less. Again, in other embodiments, the rack 240 may have different dimensions. For example, in some embodiments, the vertical distance between each pair 310 of elongated support arms 312 may be greater than a standard rack unit "1 U". In such embodiments, the increased vertical distance between the sleds allows for larger heat sinks to be attached to the physical resources and for larger fans to be used (e.g., in the fan array 370 described below) for cooling each sled, which in turn can allow the physical resources to operate at increased power levels. Further, it should be appreciated that the rack 240 does not include any walls, enclosures, or the like. Rather, the rack 240 is an enclosure-less rack that is open to the local environment. Of course, in some cases, an end plate may be attached to one of the elongated support posts 302 , 304 in those situations in which the rack 240 forms an end-of-row rack in the data center 100 . - In some embodiments, various interconnects may be routed upwardly or downwardly through the elongated support posts 302 , 304 . To facilitate such routing, each
elongated support post sled slot 320, power interconnects to provide power to eachsled slot 320, and/or other types of interconnects. - The
rack 240, in the illustrative embodiment, includes a support platform on which a corresponding optical data connector (not shown) is mounted. Each optical data connector is associated with a corresponding sled slot 320 and is configured to mate with an optical data connector of a corresponding sled 400 when the sled 400 is received in the corresponding sled slot 320. In some embodiments, optical connections between components (e.g., sleds, racks, and switches) in the data center 100 are made with a blind mate optical connection. For example, a door on each cable may prevent dust from contaminating the fiber inside the cable. In the process of connecting to a blind mate optical connector mechanism, the door is pushed open when the end of the cable approaches or enters the connector mechanism. Subsequently, the optical fiber inside the cable may enter a gel within the connector mechanism, and the optical fiber of one cable comes into contact with the optical fiber of another cable within the gel inside the connector mechanism. - The
illustrative rack 240 also includes a fan array 370 coupled to the cross-support arms of the rack 240. The fan array 370 includes one or more rows of cooling fans 372, which are aligned in a horizontal line between the elongated support posts 302, 304. In the illustrative embodiment, the fan array 370 includes a row of cooling fans 372 for each sled slot 320 of the rack 240. As discussed above, each sled 400 does not include any on-board cooling system in the illustrative embodiment and, as such, the fan array 370 provides cooling for each sled 400 received in the rack 240. Each rack 240, in the illustrative embodiment, also includes a power supply associated with each sled slot 320. Each power supply is secured to one of the elongated support arms 312 of the pair 310 of elongated support arms 312 that define the corresponding sled slot 320. For example, the rack 240 may include a power supply coupled or secured to each elongated support arm 312 extending from the elongated support post 302. Each power supply includes a power connector configured to mate with a power connector of the sled 400 when the sled 400 is received in the corresponding sled slot 320. In the illustrative embodiment, the sled 400 does not include any on-board power supply and, as such, the power supplies provided in the rack 240 supply power to corresponding sleds 400 when mounted to the rack 240. Each power supply is configured to satisfy the power requirements for its associated sled, which can vary from sled to sled. Additionally, the power supplies provided in the rack 240 can operate independently of each other. That is, within a single rack, a first power supply providing power to a compute sled can provide power levels that are different than power levels supplied by a second power supply providing power to an accelerator sled. The power supplies may be controllable at the sled level or rack level, and may be controlled locally by components on the associated sled or remotely, such as by another sled or an orchestrator. - Referring now to
FIG. 6, the sled 400, in the illustrative embodiment, is configured to be mounted in a corresponding rack 240 of the data center 100 as discussed above. In some embodiments, each sled 400 may be optimized or otherwise configured for performing particular tasks, such as compute tasks, acceleration tasks, data storage tasks, etc. For example, the sled 400 may be embodied as a compute sled 800 as discussed below in regard to FIGS. 8-9, an accelerator sled 1000 as discussed below in regard to FIGS. 10-11, a storage sled 1200 as discussed below in regard to FIGS. 12-13, or as a sled optimized or otherwise configured to perform other specialized tasks, such as a memory sled 1400, discussed below in regard to FIG. 14. - As discussed above, the
illustrative sled 400 includes a chassis-less circuit board substrate 602, which supports various physical resources (e.g., electrical components) mounted thereon. It should be appreciated that the circuit board substrate 602 is “chassis-less” in that the sled 400 does not include a housing or enclosure. Rather, the chassis-less circuit board substrate 602 is open to the local environment. The chassis-less circuit board substrate 602 may be formed from any material capable of supporting the various electrical components mounted thereon. For example, in an illustrative embodiment, the chassis-less circuit board substrate 602 is formed from an FR-4 glass-reinforced epoxy laminate material. Of course, other materials may be used to form the chassis-less circuit board substrate 602 in other embodiments. - As discussed in more detail below, the chassis-less
circuit board substrate 602 includes multiple features that improve the thermal cooling characteristics of the various electrical components mounted on the chassis-less circuit board substrate 602. As discussed, the chassis-less circuit board substrate 602 does not include a housing or enclosure, which may improve the airflow over the electrical components of the sled 400 by reducing those structures that may inhibit air flow. For example, because the chassis-less circuit board substrate 602 is not positioned in an individual housing or enclosure, there is no vertically-arranged backplane (e.g., a backplate of the chassis) attached to the chassis-less circuit board substrate 602, which could inhibit air flow across the electrical components. Additionally, the chassis-less circuit board substrate 602 has a geometric shape configured to reduce the length of the airflow path across the electrical components mounted to the chassis-less circuit board substrate 602. For example, the illustrative chassis-less circuit board substrate 602 has a width 604 that is greater than a depth 606 of the chassis-less circuit board substrate 602. In one particular embodiment, for example, the chassis-less circuit board substrate 602 has a width of about 21 inches and a depth of about 9 inches, compared to a typical server that has a width of about 17 inches and a depth of about 39 inches. As such, an airflow path 608 that extends from a front edge 610 of the chassis-less circuit board substrate 602 toward a rear edge 612 has a shorter distance relative to typical servers, which may improve the thermal cooling characteristics of the sled 400. Furthermore, although not illustrated in FIG. 6, the various physical resources mounted to the chassis-less circuit board substrate 602 are mounted in corresponding locations such that no two substantively heat-producing electrical components shadow each other, as discussed in more detail below. That is, no two electrical components that produce appreciable heat during operation (i.e., greater than a nominal amount of heat sufficient to adversely impact the cooling of another electrical component) are mounted to the chassis-less circuit board substrate 602 linearly in-line with each other along the direction of the airflow path 608 (i.e., along a direction extending from the front edge 610 toward the rear edge 612 of the chassis-less circuit board substrate 602). - As discussed above, the
illustrative sled 400 includes one or more physical resources 620 mounted to a top side 650 of the chassis-less circuit board substrate 602. Although two physical resources 620 are shown in FIG. 6, it should be appreciated that the sled 400 may include one, two, or more physical resources 620 in other embodiments. The physical resources 620 may be embodied as any type of processor, controller, or other compute circuit capable of performing various tasks such as compute functions and/or controlling the functions of the sled 400 depending on, for example, the type or intended functionality of the sled 400. For example, as discussed in more detail below, the physical resources 620 may be embodied as high-performance processors in embodiments in which the sled 400 is embodied as a compute sled, as accelerator co-processors or circuits in embodiments in which the sled 400 is embodied as an accelerator sled, as storage controllers in embodiments in which the sled 400 is embodied as a storage sled, or as a set of memory devices in embodiments in which the sled 400 is embodied as a memory sled. - The
sled 400 also includes one or more additional physical resources 630 mounted to the top side 650 of the chassis-less circuit board substrate 602. In the illustrative embodiment, the additional physical resources include a network interface controller (NIC) as discussed in more detail below. Of course, depending on the type and functionality of the sled 400, the physical resources 630 may include additional or other electrical components, circuits, and/or devices in other embodiments. - The
physical resources 620 are communicatively coupled to the physical resources 630 via an input/output (I/O) subsystem 622. The I/O subsystem 622 may be embodied as circuitry and/or components to facilitate input/output operations with the physical resources 620, the physical resources 630, and/or other components of the sled 400. For example, the I/O subsystem 622 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, waveguides, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In the illustrative embodiment, the I/O subsystem 622 is embodied as, or otherwise includes, a double data rate 4 (DDR4) data bus or a DDR5 data bus. - In some embodiments, the
sled 400 may also include a resource-to-resource interconnect 624. The resource-to-resource interconnect 624 may be embodied as any type of communication interconnect capable of facilitating resource-to-resource communications. In the illustrative embodiment, the resource-to-resource interconnect 624 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the resource-to-resource interconnect 624 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to resource-to-resource communications. - The
sled 400 also includes a power connector 640 configured to mate with a corresponding power connector of the rack 240 when the sled 400 is mounted in the corresponding rack 240. The sled 400 receives power from a power supply of the rack 240 via the power connector 640 to supply power to the various electrical components of the sled 400. That is, the sled 400 does not include any local power supply (i.e., an on-board power supply) to provide power to the electrical components of the sled 400. The exclusion of a local or on-board power supply facilitates the reduction in the overall footprint of the chassis-less circuit board substrate 602, which may improve the thermal cooling characteristics of the various electrical components mounted on the chassis-less circuit board substrate 602 as discussed above. In some embodiments, voltage regulators are placed on a bottom side 750 (see FIG. 7) of the chassis-less circuit board substrate 602 directly opposite the processors 820 (see FIG. 8), and power is routed from the voltage regulators to the processors 820 by vias extending through the circuit board substrate 602. Such a configuration provides an increased thermal budget, additional current and/or voltage, and better voltage control relative to typical printed circuit boards in which processor power is delivered from a voltage regulator, in part, by printed circuit traces. - In some embodiments, the
sled 400 may also include mounting features 642 configured to mate with a mounting arm, or other structure, of a robot to facilitate the placement of the sled 400 in a rack 240 by the robot. The mounting features 642 may be embodied as any type of physical structures that allow the robot to grasp the sled 400 without damaging the chassis-less circuit board substrate 602 or the electrical components mounted thereto. For example, in some embodiments, the mounting features 642 may be embodied as non-conductive pads attached to the chassis-less circuit board substrate 602. In other embodiments, the mounting features may be embodied as brackets, braces, or other similar structures attached to the chassis-less circuit board substrate 602. The particular number, shape, size, and/or make-up of the mounting features 642 may depend on the design of the robot configured to manage the sled 400. - Referring now to
FIG. 7, in addition to the physical resources 630 mounted on the top side 650 of the chassis-less circuit board substrate 602, the sled 400 also includes one or more memory devices 720 mounted to a bottom side 750 of the chassis-less circuit board substrate 602. That is, the chassis-less circuit board substrate 602 is embodied as a double-sided circuit board. The physical resources 620 are communicatively coupled to the memory devices 720 via the I/O subsystem 622. For example, the physical resources 620 and the memory devices 720 may be communicatively coupled by one or more vias extending through the chassis-less circuit board substrate 602. Each physical resource 620 may be communicatively coupled to a different set of one or more memory devices 720 in some embodiments. Alternatively, in other embodiments, each physical resource 620 may be communicatively coupled to each memory device 720. - The
memory devices 720 may be embodied as any type of memory device capable of storing data for the physical resources 620 during operation of the sled 400, such as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory. Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium. Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM). One particular type of DRAM that may be used in a memory module is synchronous dynamic random access memory (SDRAM). In particular embodiments, DRAM of a memory component may comply with a standard promulgated by JEDEC, such as JESD79F for DDR SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4. Such standards (and similar standards) may be referred to as DDR-based standards, and communication interfaces of the storage devices that implement such standards may be referred to as DDR-based interfaces. - In one embodiment, the memory device is a block addressable memory device, such as those based on NAND or NOR technologies. A memory device may also include next-generation nonvolatile devices, such as Intel 3D XPoint™ memory or other byte addressable write-in-place nonvolatile memory devices. In one embodiment, the memory device may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, resistive memory including the metal oxide base, the oxygen vacancy base and the conductive bridge Random Access Memory (CB-RAM), or spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory. The memory device may refer to the die itself and/or to a packaged memory product. In some embodiments, the memory device may comprise a transistor-less stackable cross point architecture in which memory cells sit at the intersection of word lines and bit lines and are individually addressable and in which bit storage is based on a change in bulk resistance.
- Referring now to
FIG. 8, in some embodiments, the sled 400 may be embodied as a compute sled 800. The compute sled 800 is optimized, or otherwise configured, to perform compute tasks. Of course, as discussed above, the compute sled 800 may rely on other sleds, such as accelerator sleds and/or storage sleds, to perform such compute tasks. The compute sled 800 includes various physical resources (e.g., electrical components) similar to the physical resources of the sled 400, which have been identified in FIG. 8 using the same reference numbers. The description of such components provided above in regard to FIGS. 6 and 7 applies to the corresponding components of the compute sled 800 and is not repeated herein for clarity of the description of the compute sled 800. - In the
illustrative compute sled 800, the physical resources 620 are embodied as processors 820. Although only two processors 820 are shown in FIG. 8, it should be appreciated that the compute sled 800 may include additional processors 820 in other embodiments. Illustratively, the processors 820 are embodied as high-performance processors 820 and may be configured to operate at a relatively high power rating. Although the processors 820 generate additional heat when operating at power ratings greater than typical processors (which operate at around 155-230 W), the enhanced thermal cooling characteristics of the chassis-less circuit board substrate 602 discussed above facilitate the higher power operation. For example, in the illustrative embodiment, the processors 820 are configured to operate at a power rating of at least 250 W. In some embodiments, the processors 820 may be configured to operate at a power rating of at least 350 W. - In some embodiments, the
compute sled 800 may also include a processor-to-processor interconnect 842. Similar to the resource-to-resource interconnect 624 of the sled 400 discussed above, the processor-to-processor interconnect 842 may be embodied as any type of communication interconnect capable of facilitating processor-to-processor communications. In the illustrative embodiment, the processor-to-processor interconnect 842 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the processor-to-processor interconnect 842 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to processor-to-processor communications. - The
compute sled 800 also includes a communication circuit 830. The illustrative communication circuit 830 includes a network interface controller (NIC) 832, which may also be referred to as a host fabric interface (HFI). The NIC 832 may be embodied as, or otherwise include, any type of integrated circuit, discrete circuits, controller chips, chipsets, add-in-boards, daughtercards, network interface cards, or other devices that may be used by the compute sled 800 to connect with another compute device (e.g., with other sleds 400). In some embodiments, the NIC 832 may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors. In some embodiments, the NIC 832 may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC 832. In such embodiments, the local processor of the NIC 832 may be capable of performing one or more of the functions of the processors 820. Additionally or alternatively, in such embodiments, the local memory of the NIC 832 may be integrated into one or more components of the compute sled at the board level, socket level, chip level, and/or other levels. - The
communication circuit 830 is communicatively coupled to an optical data connector 834. The optical data connector 834 is configured to mate with a corresponding optical data connector of the rack 240 when the compute sled 800 is mounted in the rack 240. Illustratively, the optical data connector 834 includes a plurality of optical fibers which lead from a mating surface of the optical data connector 834 to an optical transceiver 836. The optical transceiver 836 is configured to convert incoming optical signals from the rack-side optical data connector to electrical signals and to convert electrical signals to outgoing optical signals to the rack-side optical data connector. Although shown as forming part of the optical data connector 834 in the illustrative embodiment, the optical transceiver 836 may form a portion of the communication circuit 830 in other embodiments. - In some embodiments, the
compute sled 800 may also include an expansion connector 840. In such embodiments, the expansion connector 840 is configured to mate with a corresponding connector of an expansion chassis-less circuit board substrate to provide additional physical resources to the compute sled 800. The additional physical resources may be used, for example, by the processors 820 during operation of the compute sled 800. The expansion chassis-less circuit board substrate may be substantially similar to the chassis-less circuit board substrate 602 discussed above and may include various electrical components mounted thereto. The particular electrical components mounted to the expansion chassis-less circuit board substrate may depend on the intended functionality of the expansion chassis-less circuit board substrate. For example, the expansion chassis-less circuit board substrate may provide additional compute resources, memory resources, and/or storage resources. As such, the additional physical resources of the expansion chassis-less circuit board substrate may include, but are not limited to, processors, memory devices, storage devices, and/or accelerator circuits including, for example, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), security co-processors, graphics processing units (GPUs), machine learning circuits, or other specialized processors, controllers, devices, and/or circuits. - Referring now to
FIG. 9, an illustrative embodiment of the compute sled 800 is shown. As shown, the processors 820, communication circuit 830, and optical data connector 834 are mounted to the top side 650 of the chassis-less circuit board substrate 602. Any suitable attachment or mounting technology may be used to mount the physical resources of the compute sled 800 to the chassis-less circuit board substrate 602. For example, the various physical resources may be mounted in corresponding sockets (e.g., a processor socket), holders, or brackets. In some cases, some of the electrical components may be directly mounted to the chassis-less circuit board substrate 602 via soldering or similar techniques. - As discussed above, the
individual processors 820 and communication circuit 830 are mounted to the top side 650 of the chassis-less circuit board substrate 602 such that no two heat-producing electrical components shadow each other. In the illustrative embodiment, the processors 820 and communication circuit 830 are mounted in corresponding locations on the top side 650 of the chassis-less circuit board substrate 602 such that no two of those physical resources are linearly in-line with others along the direction of the airflow path 608. It should be appreciated that, although the optical data connector 834 is in-line with the communication circuit 830, the optical data connector 834 produces no or nominal heat during operation. - The
memory devices 720 of the compute sled 800 are mounted to the bottom side 750 of the chassis-less circuit board substrate 602 as discussed above in regard to the sled 400. Although mounted to the bottom side 750, the memory devices 720 are communicatively coupled to the processors 820 located on the top side 650 via the I/O subsystem 622. Because the chassis-less circuit board substrate 602 is embodied as a double-sided circuit board, the memory devices 720 and the processors 820 may be communicatively coupled by one or more vias, connectors, or other mechanisms extending through the chassis-less circuit board substrate 602. Of course, each processor 820 may be communicatively coupled to a different set of one or more memory devices 720 in some embodiments. Alternatively, in other embodiments, each processor 820 may be communicatively coupled to each memory device 720. In some embodiments, the memory devices 720 may be mounted to one or more memory mezzanines on the bottom side of the chassis-less circuit board substrate 602 and may interconnect with a corresponding processor 820 through a ball-grid array. - Each of the
processors 820 includes a heatsink 850 secured thereto. Due to the mounting of the memory devices 720 to the bottom side 750 of the chassis-less circuit board substrate 602 (as well as the vertical spacing of the sleds 400 in the corresponding rack 240), the top side 650 of the chassis-less circuit board substrate 602 includes additional “free” area or space that facilitates the use of heatsinks 850 having a larger size relative to traditional heatsinks used in typical servers. Additionally, due to the improved thermal cooling characteristics of the chassis-less circuit board substrate 602, none of the processor heatsinks 850 include cooling fans attached thereto. That is, each of the heatsinks 850 is embodied as a fan-less heatsink. In some embodiments, the heatsinks 850 mounted atop the processors 820 may overlap with the heatsink attached to the communication circuit 830 in the direction of the airflow path 608 due to their increased size, as illustratively suggested by FIG. 9. - Referring now to
FIG. 10, in some embodiments, the sled 400 may be embodied as an accelerator sled 1000. The accelerator sled 1000 is configured to perform specialized compute tasks, such as machine learning, encryption, hashing, or other computation-intensive tasks. In some embodiments, for example, a compute sled 800 may offload tasks to the accelerator sled 1000 during operation. The accelerator sled 1000 includes various components similar to components of the sled 400 and/or compute sled 800, which have been identified in FIG. 10 using the same reference numbers. The description of such components provided above in regard to FIGS. 6, 7, and 8 applies to the corresponding components of the accelerator sled 1000 and is not repeated herein for clarity of the description of the accelerator sled 1000. - In the
illustrative accelerator sled 1000, the physical resources 620 are embodied as accelerator circuits 1020. Although only two accelerator circuits 1020 are shown in FIG. 10, it should be appreciated that the accelerator sled 1000 may include additional accelerator circuits 1020 in other embodiments. For example, as shown in FIG. 11, the accelerator sled 1000 may include four accelerator circuits 1020 in some embodiments. The accelerator circuits 1020 may be embodied as any type of processor, co-processor, compute circuit, or other device capable of performing compute or processing operations. For example, the accelerator circuits 1020 may be embodied as field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), security co-processors, graphics processing units (GPUs), neuromorphic processor units, quantum computers, machine learning circuits, or other specialized processors, controllers, devices, and/or circuits. - In some embodiments, the
accelerator sled 1000 may also include an accelerator-to-accelerator interconnect 1042. Similar to the resource-to-resource interconnect 624 of the sled 400 discussed above, the accelerator-to-accelerator interconnect 1042 may be embodied as any type of communication interconnect capable of facilitating accelerator-to-accelerator communications. In the illustrative embodiment, the accelerator-to-accelerator interconnect 1042 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the accelerator-to-accelerator interconnect 1042 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to accelerator-to-accelerator communications. In some embodiments, the accelerator circuits 1020 may be daisy-chained, with a primary accelerator circuit 1020 connected to the NIC 832 and memory 720 through the I/O subsystem 622 and a secondary accelerator circuit 1020 connected to the NIC 832 and memory 720 through the primary accelerator circuit 1020. - Referring now to
FIG. 11, an illustrative embodiment of the accelerator sled 1000 is shown. As discussed above, the accelerator circuits 1020, communication circuit 830, and optical data connector 834 are mounted to the top side 650 of the chassis-less circuit board substrate 602. Again, the individual accelerator circuits 1020 and communication circuit 830 are mounted to the top side 650 of the chassis-less circuit board substrate 602 such that no two heat-producing electrical components shadow each other, as discussed above. The memory devices 720 of the accelerator sled 1000 are mounted to the bottom side 750 of the chassis-less circuit board substrate 602 as discussed above in regard to the sled 400. Although mounted to the bottom side 750, the memory devices 720 are communicatively coupled to the accelerator circuits 1020 located on the top side 650 via the I/O subsystem 622 (e.g., through vias). Further, each of the accelerator circuits 1020 may include a heatsink 1070 that is larger than a traditional heatsink used in a server. As discussed above with reference to the heatsinks 850, the heatsinks 1070 may be larger than traditional heatsinks because of the “free” area provided by the memory resources 720 being located on the bottom side 750 of the chassis-less circuit board substrate 602 rather than on the top side 650. - Referring now to
FIG. 12, in some embodiments, the sled 400 may be embodied as a storage sled 1200. The storage sled 1200 is configured to store data in a data storage 1250 local to the storage sled 1200. For example, during operation, a compute sled 800 or an accelerator sled 1000 may store and retrieve data from the data storage 1250 of the storage sled 1200. The storage sled 1200 includes various components similar to components of the sled 400 and/or the compute sled 800, which have been identified in FIG. 12 using the same reference numbers. The description of such components provided above in regard to FIGS. 6, 7, and 8 applies to the corresponding components of the storage sled 1200 and is not repeated herein for clarity of the description of the storage sled 1200. - In the
illustrative storage sled 1200, the physical resources 620 are embodied as storage controllers 1220. Although only two storage controllers 1220 are shown in FIG. 12, it should be appreciated that the storage sled 1200 may include additional storage controllers 1220 in other embodiments. The storage controllers 1220 may be embodied as any type of processor, controller, or control circuit capable of controlling the storage and retrieval of data in the data storage 1250 based on requests received via the communication circuit 830. In the illustrative embodiment, the storage controllers 1220 are embodied as relatively low-power processors or controllers. For example, in some embodiments, the storage controllers 1220 may be configured to operate at a power rating of about 75 watts. - In some embodiments, the
storage sled 1200 may also include a controller-to-controller interconnect 1242. Similar to the resource-to-resource interconnect 624 of the sled 400 discussed above, the controller-to-controller interconnect 1242 may be embodied as any type of communication interconnect capable of facilitating controller-to-controller communications. In the illustrative embodiment, the controller-to-controller interconnect 1242 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the controller-to-controller interconnect 1242 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to controller-to-controller communications. - Referring now to
FIG. 13, an illustrative embodiment of the storage sled 1200 is shown. In the illustrative embodiment, the data storage 1250 is embodied as, or otherwise includes, a storage cage 1252 configured to house one or more solid state drives (SSDs) 1254. To do so, the storage cage 1252 includes a number of mounting slots 1256, each of which is configured to receive a corresponding solid state drive 1254. Each of the mounting slots 1256 includes a number of drive guides 1258 that cooperate to define an access opening 1260 of the corresponding mounting slot 1256. The storage cage 1252 is secured to the chassis-less circuit board substrate 602 such that the access openings face away from (i.e., toward the front of) the chassis-less circuit board substrate 602. As such, solid state drives 1254 are accessible while the storage sled 1200 is mounted in a corresponding rack 240. For example, a solid state drive 1254 may be swapped out of a rack 240 (e.g., via a robot) while the storage sled 1200 remains mounted in the corresponding rack 240. - The
storage cage 1252 illustratively includes sixteen mounting slots 1256 and is capable of mounting and storing sixteen solid state drives 1254. Of course, the storage cage 1252 may be configured to store additional or fewer solid state drives 1254 in other embodiments. Additionally, in the illustrative embodiment, the solid state drives are mounted vertically in the storage cage 1252, but may be mounted in the storage cage 1252 in a different orientation in other embodiments. Each solid state drive 1254 may be embodied as any type of data storage device capable of storing long-term data. To do so, the solid state drives 1254 may include the volatile and non-volatile memory devices discussed above. - As shown in
FIG. 13, the storage controllers 1220, the communication circuit 830, and the optical data connector 834 are illustratively mounted to the top side 650 of the chassis-less circuit board substrate 602. Again, as discussed above, any suitable attachment or mounting technology may be used to mount the electrical components of the storage sled 1200 to the chassis-less circuit board substrate 602 including, for example, sockets (e.g., a processor socket), holders, brackets, soldered connections, and/or other mounting or securing techniques. - As discussed above, the
individual storage controllers 1220 and the communication circuit 830 are mounted to the top side 650 of the chassis-less circuit board substrate 602 such that no two heat-producing electrical components shadow each other. For example, the storage controllers 1220 and the communication circuit 830 are mounted in corresponding locations on the top side 650 of the chassis-less circuit board substrate 602 such that no two of those electrical components are linearly in-line with each other along the direction of the airflow path 608. - The
memory devices 720 of the storage sled 1200 are mounted to the bottom side 750 of the chassis-less circuit board substrate 602 as discussed above in regard to the sled 400. Although mounted to the bottom side 750, the memory devices 720 are communicatively coupled to the storage controllers 1220 located on the top side 650 via the I/O subsystem 622. Again, because the chassis-less circuit board substrate 602 is embodied as a double-sided circuit board, the memory devices 720 and the storage controllers 1220 may be communicatively coupled by one or more vias, connectors, or other mechanisms extending through the chassis-less circuit board substrate 602. Each of the storage controllers 1220 includes a heatsink 1270 secured thereto. As discussed above, due to the improved thermal cooling characteristics of the chassis-less circuit board substrate 602 of the storage sled 1200, none of the heatsinks 1270 include cooling fans attached thereto. That is, each of the heatsinks 1270 is embodied as a fan-less heatsink. - Referring now to
FIG. 14, in some embodiments, the sled 400 may be embodied as a memory sled 1400. The memory sled 1400 is optimized, or otherwise configured, to provide other sleds 400 (e.g., compute sleds 800, accelerator sleds 1000, etc.) with access to a pool of memory (e.g., in two or more memory sets 1430, 1432 of memory devices 720) local to the memory sled 1400. For example, during operation, a compute sled 800 or an accelerator sled 1000 may remotely write to and/or read from one or more of the memory sets 1430, 1432 of the memory sled 1400 using a logical address space that maps to physical addresses in the memory sets 1430, 1432. The memory sled 1400 includes various components similar to components of the sled 400 and/or the compute sled 800, which have been identified in FIG. 14 using the same reference numbers. The description of such components provided above in regard to FIGS. 6, 7, and 8 applies to the corresponding components of the memory sled 1400 and is not repeated herein for clarity of the description of the memory sled 1400. - In the
illustrative memory sled 1400, the physical resources 620 are embodied as memory controllers 1420. Although only two memory controllers 1420 are shown in FIG. 14, it should be appreciated that the memory sled 1400 may include additional memory controllers 1420 in other embodiments. The memory controllers 1420 may be embodied as any type of processor, controller, or control circuit capable of controlling the writing and reading of data into the memory sets 1430, 1432 based on requests received via the communication circuit 830. In the illustrative embodiment, each memory controller 1420 is connected to a corresponding memory set 1430, 1432 to write to and read from the memory devices 720 within the corresponding memory set 1430, 1432, and enforces any permissions (e.g., read, write, etc.) associated with the sled 400 that has sent a request to the memory sled 1400 to perform a memory access operation (e.g., read or write). - In some embodiments, the
memory sled 1400 may also include a controller-to-controller interconnect 1442. Similar to the resource-to-resource interconnect 624 of the sled 400 discussed above, the controller-to-controller interconnect 1442 may be embodied as any type of communication interconnect capable of facilitating controller-to-controller communications. In the illustrative embodiment, the controller-to-controller interconnect 1442 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the controller-to-controller interconnect 1442 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to controller-to-controller communications. As such, in some embodiments, a memory controller 1420 may access, through the controller-to-controller interconnect 1442, memory that is within the memory set 1432 associated with another memory controller 1420. In some embodiments, a scalable memory controller is made of multiple smaller memory controllers, referred to herein as “chiplets,” on a memory sled (e.g., the memory sled 1400). The chiplets may be interconnected (e.g., using EMIB (Embedded Multi-Die Interconnect Bridge)). The combined chiplet memory controller may scale up to a relatively large number of memory controllers and I/O ports (e.g., up to 16 memory channels). In some embodiments, the memory controllers 1420 may implement a memory interleave (e.g., one memory address is mapped to the memory set 1430, the next memory address is mapped to the memory set 1432, the third address is mapped to the memory set 1430, etc.). The interleaving may be managed within the memory controllers 1420, or from CPU sockets (e.g., of the compute sled 800) across network links to the memory sets 1430, 1432, and may improve the latency associated with performing memory access operations as compared to accessing contiguous memory addresses from the same memory device.
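- By way of a non-limiting illustration only (this sketch is not part of the disclosed embodiments), the memory interleave described above may be approximated in a few lines of Python. The constant CACHE_LINE, the function interleave_target, and the choice of a cache-line interleave granularity are all assumptions introduced here for clarity:

```python
# Hypothetical sketch of a two-way memory interleave: successive cache lines
# alternate between memory sets (e.g., the memory sets 1430 and 1432).
CACHE_LINE = 64  # assumed interleave granularity, in bytes

def interleave_target(address: int, num_sets: int = 2) -> tuple:
    """Map a flat pool address to (memory set index, offset within that set)."""
    line = address // CACHE_LINE       # index of the cache line holding the address
    set_index = line % num_sets        # even lines -> set 0, odd lines -> set 1
    local_line = line // num_sets      # position of that line within its set
    return set_index, local_line * CACHE_LINE + (address % CACHE_LINE)

# Four consecutive lines land on sets 0, 1, 0, 1.
for addr in range(0, 4 * CACHE_LINE, CACHE_LINE):
    print(addr, interleave_target(addr))
```

In this sketch, consecutive cache lines alternate between the two memory sets, so a stream of contiguous pool addresses is serviced by both memory controllers 1420 in parallel rather than queuing on a single memory device.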
- Further, in some embodiments, the memory sled 1400 may be connected to one or more other sleds 400 (e.g., in the same rack 240 or an adjacent rack 240) through a waveguide, using the waveguide connector 1480. In the illustrative embodiment, the waveguides are 64 millimeter waveguides that provide 16 Rx (i.e., receive) lanes and 16 Tx (i.e., transmit) lanes. Each lane, in the illustrative embodiment, is either 16 GHz or 32 GHz. In other embodiments, the frequencies may be different. Using a waveguide may provide high throughput access to the memory pool (e.g., the memory sets 1430, 1432) to another sled (e.g., a sled 400 in the same rack 240 or an adjacent rack 240 as the memory sled 1400) without adding to the load on the optical data connector 834. - Referring now to
FIG. 15, a system for executing one or more workloads (e.g., applications) may be implemented in accordance with the data center 100. In the illustrative embodiment, the system 1510 includes an orchestrator server 1520, which may be embodied as a managed node comprising a compute device (e.g., a processor 820 on a compute sled 800) executing management software (e.g., a cloud operating environment, such as OpenStack) that is communicatively coupled to multiple sleds 400 including a large number of compute sleds 1530 (e.g., each similar to the compute sled 800), memory sleds 1540 (e.g., each similar to the memory sled 1400), accelerator sleds 1550 (e.g., each similar to the accelerator sled 1000), and storage sleds 1560 (e.g., each similar to the storage sled 1200). One or more of the sleds 1530, 1540, 1550, 1560 may be grouped into a managed node 1570, such as by the orchestrator server 1520, to collectively perform a workload (e.g., an application 1532 executed in a virtual machine or in a container). The managed node 1570 may be embodied as an assembly of physical resources 620, such as processors 820, memory resources 720, accelerator circuits 1020, or data storage 1250, from the same or different sleds 400. Further, the managed node may be established, defined, or “spun up” by the orchestrator server 1520 at the time a workload is to be assigned to the managed node or at any other time, and may exist regardless of whether any workloads are presently assigned to the managed node. In the illustrative embodiment, the orchestrator server 1520 may selectively allocate and/or deallocate physical resources 620 from the sleds 400 and/or add or remove one or more sleds 400 from the managed node 1570 as a function of quality of service (QoS) targets (e.g., performance targets associated with a throughput, latency, instructions per second, etc.) associated with a service level agreement for the workload (e.g., the application 1532). In doing so, the orchestrator server 1520 may receive telemetry data indicative of performance conditions (e.g., throughput, latency, instructions per second, etc.) in each sled 400 of the managed node 1570 and compare the telemetry data to the quality of service targets to determine whether the quality of service targets are being satisfied. The orchestrator server 1520 may additionally determine whether one or more physical resources may be deallocated from the managed node 1570 while still satisfying the QoS targets, thereby freeing up those physical resources for use in another managed node (e.g., to execute a different workload). Alternatively, if the QoS targets are not presently satisfied, the orchestrator server 1520 may determine to dynamically allocate additional physical resources to assist in the execution of the workload (e.g., the application 1532) while the workload is executing. Similarly, the orchestrator server 1520 may determine to dynamically deallocate physical resources from a managed node if the orchestrator server 1520 determines that deallocating the physical resource would result in the QoS targets still being met.
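- As a purely illustrative sketch of the QoS-driven behavior just described (the class names, thresholds, and telemetry fields below are assumptions, not part of the disclosure), the allocate/deallocate decision may be expressed as a single reconciliation pass in Python:

```python
# Hypothetical sketch: grow a managed node on a QoS miss, shrink it on headroom.
from dataclasses import dataclass, field

@dataclass
class QoSTarget:
    max_latency_ms: float
    min_throughput_ops: float

@dataclass
class ManagedNode:
    resources: list = field(default_factory=list)

def reconcile(node: ManagedNode, telemetry: dict, target: QoSTarget, free_pool: list) -> None:
    """One pass: allocate a resource on a QoS miss, deallocate on sustained headroom."""
    missing = (telemetry["latency_ms"] > target.max_latency_ms
               or telemetry["throughput_ops"] < target.min_throughput_ops)
    if missing and free_pool:
        node.resources.append(free_pool.pop())   # QoS miss: pull from the pool
    elif not missing and len(node.resources) > 1 and telemetry["utilization"] < 0.30:
        free_pool.append(node.resources.pop())   # headroom: release to other nodes

node = ManagedNode(resources=["compute-sled-1"])
pool = ["accelerator-sled-7"]
reconcile(node, {"latency_ms": 12.0, "throughput_ops": 900.0, "utilization": 0.95},
          QoSTarget(max_latency_ms=10.0, min_throughput_ops=1000.0), pool)
print(node.resources)  # ['compute-sled-1', 'accelerator-sled-7']
```

Here a QoS miss pulls a free resource into the managed node, while sustained headroom releases one back to the pool for use by other managed nodes.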
- Additionally, in some embodiments, the orchestrator server 1520 may identify trends in the resource utilization of the workload (e.g., the application 1532), such as by identifying phases of execution (e.g., time periods in which different operations, each having different resource utilization characteristics, are performed) of the workload (e.g., the application 1532) and pre-emptively identifying available resources in the data center 100 and allocating them to the managed node 1570 (e.g., within a predefined time period of the associated phase beginning). In some embodiments, the orchestrator server 1520 may model performance based on various latencies and a distribution scheme to place workloads among compute sleds and other resources (e.g., accelerator sleds, memory sleds, storage sleds) in the data center 100. For example, the orchestrator server 1520 may utilize a model that accounts for the performance of resources on the sleds 400 (e.g., FPGA performance, memory access latency, etc.) and the performance (e.g., congestion, latency, bandwidth) of the path through the network to the resource (e.g., FPGA). As such, the orchestrator server 1520 may determine which resource(s) should be used with which workloads based on the total latency associated with each potential resource available in the data center 100 (e.g., the latency associated with the performance of the resource itself in addition to the latency associated with the path through the network between the compute sled executing the workload and the sled 400 on which the resource is located).
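- The total-latency selection just described admits a compact illustration (hypothetical Python; the candidate records and latency figures are invented for the example):

```python
# Hypothetical sketch: score each candidate resource by its own latency plus
# the latency of the network path from the requesting compute sled.
def total_latency(candidate: dict) -> float:
    return candidate["resource_latency_ms"] + candidate["path_latency_ms"]

candidates = [
    {"sled": "fpga-sled-3", "resource_latency_ms": 0.8, "path_latency_ms": 1.9},
    {"sled": "fpga-sled-9", "resource_latency_ms": 1.1, "path_latency_ms": 0.4},
]

best = min(candidates, key=total_latency)
print(best["sled"])  # fpga-sled-9
```

The second sled wins despite a slower FPGA because its shorter network path yields the lower total latency, mirroring the model described above.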
- In some embodiments, the orchestrator server 1520 may generate a map of heat generation in the data center 100 using telemetry data (e.g., temperatures, fan speeds, etc.) reported from the sleds 400 and allocate resources to managed nodes as a function of the map of heat generation and predicted heat generation associated with different workloads, to maintain a target temperature and heat distribution in the data center 100. Additionally or alternatively, in some embodiments, the orchestrator server 1520 may organize received telemetry data into a hierarchical model that is indicative of a relationship between the managed nodes (e.g., a spatial relationship, such as the physical locations of the resources of the managed nodes within the data center 100, and/or a functional relationship, such as groupings of the managed nodes by the customers the managed nodes provide services for, the types of functions typically performed by the managed nodes, or managed nodes that typically share or exchange workloads among each other, etc.). Based on differences in the physical locations and resources in the managed nodes, a given workload may exhibit different resource utilizations (e.g., cause a different internal temperature, use a different percentage of processor or memory capacity) across the resources of different managed nodes. The orchestrator server 1520 may determine the differences based on the telemetry data stored in the hierarchical model and factor the differences into a prediction of future resource utilization of a workload if the workload is reassigned from one managed node to another managed node, to accurately balance resource utilization in the data center 100.
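- For illustration only, heat-aware placement of the kind described above might be sketched as follows (the rack temperatures, the per-workload heat deltas, and the place function are assumptions introduced here, not the disclosed model):

```python
# Hypothetical sketch: combine reported rack temperatures with the predicted
# heat of a workload and prefer the coolest rack that stays under a target cap.
from typing import Optional

rack_temps_c = {"rack-1": 34.0, "rack-2": 27.5, "rack-3": 31.0}   # telemetry
predicted_delta_c = {"training-job": 6.0, "web-tier": 1.5}        # heat model

def place(workload: str, target_max_c: float = 38.0) -> Optional[str]:
    """Return the rack with the lowest projected temperature under the cap."""
    projected = {rack: temp + predicted_delta_c[workload]
                 for rack, temp in rack_temps_c.items()}
    feasible = {rack: temp for rack, temp in projected.items()
                if temp <= target_max_c}
    return min(feasible, key=feasible.get) if feasible else None

print(place("training-job"))  # -> rack-2, the coolest projected location
```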
- To reduce the computational load on the orchestrator server 1520 and the data transfer load on the network, in some embodiments, the orchestrator server 1520 may send self-test information to the sleds 400 to enable each sled 400 to locally (e.g., on the sled 400) determine whether telemetry data generated by the sled 400 satisfies one or more conditions (e.g., an available capacity that satisfies a predefined threshold, a temperature that satisfies a predefined threshold, etc.). Each sled 400 may then report back a simplified result (e.g., yes or no) to the orchestrator server 1520, which the orchestrator server 1520 may utilize in determining the allocation of resources to managed nodes.
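- A minimal sketch of this sled-local self-test (hypothetical Python; the condition names and telemetry fields are assumptions) makes the bandwidth saving concrete: the sled evaluates the conditions locally, and only a single yes/no result crosses the network:

```python
# Hypothetical self-test conditions distributed by the orchestrator.
SELF_TEST = {
    "max_temperature_c": 80.0,
    "min_available_capacity": 0.20,
}

def run_self_test(telemetry: dict, conditions: dict = SELF_TEST) -> bool:
    """Evaluate the conditions locally on the sled; return one pass/fail bit."""
    return (telemetry["temperature_c"] <= conditions["max_temperature_c"]
            and telemetry["available_capacity"] >= conditions["min_available_capacity"])

# The sled reports only this boolean, not its raw telemetry stream.
print(run_self_test({"temperature_c": 71.2, "available_capacity": 0.35}))  # True
```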
- Referring now to FIG. 16, a system 1600 for managing disaggregated resources in a data center (e.g., the data center 100 of FIG. 1) includes a cloud orchestrator server 1602, similar to the orchestrator server 1520 of FIG. 15, which is communicatively coupled to a controller compute device 1604. In use, as will be described in further detail below, the controller compute device 1604 is configured to function as a software defined infrastructure controller and resource manager for racks (e.g., the rack 240 in the data center 100 of FIG. 1) and pods (e.g., the pods in the data center 100 of FIG. 1). To do so, the controller compute device 1604 is configured to employ a system-level composable services framework and protocol to allow the controller compute device 1604 to function as an undercloud manager, including a task manager (e.g., functioning as a task scheduler and queue manager) with queues for asynchronous compute, network, and storage management.
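- For illustration, such an undercloud task manager with per-domain asynchronous queues might be sketched as follows (hypothetical Python using asyncio; the queue names, task shape, and worker structure are assumptions, not the disclosed protocol):

```python
# Hypothetical sketch: separate asynchronous queues for compute, network, and
# storage management requests, each drained by its own worker.
import asyncio

async def worker(name: str, queue: asyncio.Queue) -> None:
    """Drain one management queue, handling each request asynchronously."""
    while True:
        task = await queue.get()
        print(f"[{name}] handling {task}")   # e.g., compose, configure, release
        queue.task_done()

async def main() -> None:
    queues = {kind: asyncio.Queue() for kind in ("compute", "network", "storage")}
    workers = [asyncio.create_task(worker(k, q)) for k, q in queues.items()]
    await queues["compute"].put({"op": "compose", "node": "managed-node-1"})
    await queues["storage"].put({"op": "attach-volume", "node": "managed-node-1"})
    for q in queues.values():
        await q.join()                       # wait until all queued work is done
    for w in workers:
        w.cancel()                           # shut the idle workers down

asyncio.run(main())
```

Keeping the three domains on separate queues lets slow storage operations proceed without blocking compute or network requests, which is the point of the asynchronous design described above.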
- Additionally, as will also be described in further detail below, the controller compute device 1604 is configured to perform hardware lifecycle management operations (e.g., discovery, composition, configuration, release, etc.) on the various disaggregated resources, dynamically and at scale. To do so, the controller compute device 1604 is configured to use a protocol of communication that can discover hardware capabilities, leverage composable services to map requests received from the orchestrator server 1520, service hardware composability requests, and perform telemetry-based autonomous actions.
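- The lifecycle operations enumerated above (discovery, composition, configuration, release) can be pictured as a small state machine. The following Python sketch is illustrative only; the state names and the single-successor transition table are simplifying assumptions rather than the disclosed management protocol:

```python
# Hypothetical lifecycle state machine for one disaggregated resource.
from enum import Enum, auto

class State(Enum):
    DISCOVERED = auto()
    COMPOSED = auto()
    CONFIGURED = auto()
    RELEASED = auto()

VALID = {
    State.DISCOVERED: State.COMPOSED,     # composition request maps the resource
    State.COMPOSED: State.CONFIGURED,     # configuration prepares the node
    State.CONFIGURED: State.RELEASED,     # release returns the resource to the pool
    State.RELEASED: State.COMPOSED,       # released hardware may be re-composed
}

class Resource:
    def __init__(self, name: str) -> None:
        self.name, self.state = name, State.DISCOVERED

    def advance(self, new_state: State) -> None:
        if VALID[self.state] is not new_state:
            raise ValueError(f"{self.name}: cannot go {self.state} -> {new_state}")
        self.state = new_state

sled = Resource("accelerator-sled-4")
sled.advance(State.COMPOSED)
sled.advance(State.CONFIGURED)
print(sled.name, sled.state)  # accelerator-sled-4 State.CONFIGURED
```

A real resource manager would track many resources concurrently and allow failure and retry transitions; the point here is only that each disaggregated resource advances through well-defined lifecycle states.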
- The illustrative system 1600 additionally includes managed nodes 1622 communicatively coupled to the controller compute device 1604. The illustrative managed nodes 1622 include a first managed node designated as managed node (1) 1622 a and a second managed node designated as managed node (N) 1622 b, in which the managed node (N) 1622 b represents the “Nth” managed node 1622 and “N” is a positive integer. As described previously, one or more sleds (e.g., one or more of the sleds 1530, 1540, 1550, 1560 of FIG. 15) may be grouped into a managed node (e.g., by a pod manager service of the controller compute device 1604) to collectively perform a workload, such as an application. As such, the managed nodes 1622 may be embodied as an assembly of resources, such as compute resources, memory resources, storage resources, or other resources, from the same or different sleds or racks. Further, any of the managed nodes 1622 may be established, defined, or “spun up” by a respective pod manager service at the time a workload is to be assigned to a managed node 1622 or at any other time, and may exist regardless of whether any workloads are presently assigned to the managed node 1622. - The
controller compute device 1604 may be embodied as any type of computation or computing device capable of performing the functions described herein, including, without limitation, a computer, a server (e.g., stand-alone, rack-mounted, blade, etc.), a sled (e.g., a compute sled, an accelerator sled, a storage sled, a memory sled, etc.), an enhanced or smart NIC (e.g., a host fabric interface (HFI)), a network appliance (e.g., physical or virtual), a router, a switch (e.g., a disaggregated switch, a rack-mounted switch, a standalone switch, a fully managed switch, a partially managed switch, a full-duplex switch, and/or a half-duplex communication mode enabled switch), a web appliance, a distributed computing system, a processor-based system, and/or a multiprocessor system. As shown in FIG. 16, the illustrative controller compute device 1604 includes a compute engine 1606, an I/O subsystem 1612, one or more data storage devices 1614, communication circuitry 1616, and, in some embodiments, one or more peripheral devices 1620. It should be appreciated that the controller compute device 1604 may include other or additional components, such as those commonly found in a typical computing device (e.g., various input/output devices and/or other components), in other embodiments. Additionally, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component. - The
compute engine 1606 may be embodied as any type of device or collection of devices capable of performing the various compute functions described herein. In some embodiments, the compute engine 1606 may be embodied as a single device such as an integrated circuit, an embedded system, a field-programmable gate array (FPGA), a system-on-a-chip (SOC), an application specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein. Additionally, in some embodiments, the compute engine 1606 may include, or may otherwise be embodied as, one or more processors 1608 (i.e., one or more central processing units (CPUs)) and memory 1610. - The processor(s) 1608 may be embodied as any type of processor(s) capable of performing the functions described herein. For example, the processor(s) 1608 may be embodied as one or more single-core processors, multi-core processors, digital signal processors (DSPs), microcontrollers, or other processor(s) or processing/controlling circuit(s). In some embodiments, the processor(s) 1608 may be embodied as, include, or otherwise be coupled to an FPGA (e.g., reconfigurable circuitry), an ASIC, reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein.
- The
memory 1610 may be embodied as any type of volatile or non-volatile memory or data storage capable of performing the functions described herein. It should be appreciated that the memory 1610 may include main memory (i.e., a primary memory) and/or cache memory (i.e., memory that can be accessed more quickly than the main memory). Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium. Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM). One particular type of DRAM that may be used in a memory module is synchronous dynamic random access memory (SDRAM). In particular embodiments, DRAM of a memory component may comply with a standard promulgated by JEDEC, such as JESD79F for DDR SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4. Such standards (and similar standards) may be referred to as DDR-based standards, and communication interfaces of the storage devices that implement such standards may be referred to as DDR-based interfaces. - In one embodiment, the memory device is a block addressable memory device, such as those based on NAND or NOR technologies. A memory device may also include a three dimensional crosspoint memory device (e.g., Intel 3D XPoint™ memory), or other byte addressable write-in-place nonvolatile memory devices. In one embodiment, the memory device may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level Phase Change Memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, resistive memory including the metal oxide base, the oxygen vacancy base and the conductive bridge Random Access Memory (CB-RAM), or spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (Domain Wall) and SOT (Spin Orbit Transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory. The memory device may refer to the die itself and/or to a packaged memory product.
- In such embodiments in which the memory device includes a 3D crosspoint memory (e.g., Intel 3D XPoint™ memory), the
memory 1610 may comprise a transistor-less stackable cross point architecture in which memory cells sit at the intersection of word lines and bit lines and are individually addressable and in which bit storage is based on a change in bulk resistance. In some embodiments, all or a portion of the memory 1610 may be integrated into a processor 1608. In operation, the memory 1610 may store various software and data used during operation such as applications, data operated on by the applications, routing rules, libraries, and drivers. - The
compute engine 1606 is communicatively coupled to other components of the controller compute device 1604 via the I/O subsystem 1612, which may be embodied as circuitry and/or components to facilitate input/output operations with the processor 1608, the memory 1610, and other components of the controller compute device 1604. For example, the I/O subsystem 1612 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 1612 may form a portion of a SoC and be incorporated, along with one or more of the processor 1608, the memory 1610, and other components of the controller compute device 1604, on a single integrated circuit chip. - The one or more
data storage devices 1614 may be embodied as any type of storage device(s) configured for short-term or long-term storage of data, such as, for example, memory devices and circuits, memory cards, hard disk drives, solid-state drives, or other data storage devices. Each data storage device 1614 may include a system partition that stores data and firmware code for the data storage device 1614. Each data storage device 1614 may also include an operating system partition that stores data files and executables for an operating system. - The
communication circuitry 1616 may be embodied as any communication circuit, device, or collection thereof, capable of enabling communications between the controller compute device 1604 and other computing devices, such as the source compute device 102, as well as any network communication enabling devices, such as an access point, network switch/router, etc., to allow communication over the network 104. Accordingly, the communication circuitry 1616 may be configured to use any one or more communication technologies (e.g., wireless or wired communication technologies) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, LTE, 5G, etc.) to effect such communication. - It should be appreciated that, in some embodiments, the
communication circuitry 1616 may include specialized circuitry, hardware, or combination thereof to perform pipeline logic (e.g., hardware algorithms) for performing the functions described herein, including processing network packets (e.g., parse received network packets, determine a destination computing device for each received network packet, forward the network packets to a particular buffer queue of a respective host buffer of the controller compute device 1604, etc.), performing computational functions, etc. - In some embodiments, performance of one or more of the functions of
communication circuitry 1616 as described herein may be performed by specialized circuitry, hardware, or combination thereof of the communication circuitry 1616, which may be embodied as a SoC or otherwise form a portion of a SoC of the controller compute device 1604 (e.g., incorporated on a single integrated circuit chip along with a processor 1608, the memory 1610, and/or other components of the controller compute device 1604). Alternatively, in some embodiments, the specialized circuitry, hardware, or combination thereof may be embodied as one or more discrete processing units of the controller compute device 1604, each of which may be capable of performing one or more of the functions described herein. - The
illustrative communication circuitry 1616 includes a network interface controller (NIC) 1618, which may also be referred to as a host fabric interface (HFI) in some embodiments (e.g., high performance computing (HPC) environments). The NIC 1618 may be embodied as any type of firmware, hardware, software, or any combination thereof that facilitates communications access between the controller compute device 1604 and a network (e.g., the network 104). For example, the NIC 1618 may be embodied as one or more add-in-boards, daughtercards, network interface cards, controller chips, chipsets, or other devices that may be used by the controller compute device 1604 to connect with another compute device (e.g., the source compute device 102). - In some embodiments, the
NIC 1618 may be embodied as part of a SoC that includes one or more processors, or included on a multichip package that also contains one or more processors. Additionally or alternatively, in some embodiments, the NIC 1618 may include one or more processing cores (not shown) local to the NIC 1618. In such embodiments, the processing core(s) may be capable of performing one or more of the functions described herein. In some embodiments, the NIC 1618 may additionally include a local memory (not shown). In such embodiments, the local memory of the NIC 1618 may be integrated into one or more components of the controller compute device 1604 at the board level, socket level, chip level, and/or other levels. - The one or more
peripheral devices 1620 may include any type of device that is usable to input information into the controller compute device 1604 and/or receive information from the controller compute device 1604. The peripheral devices 1620 may be embodied as any auxiliary device usable to input information into the controller compute device 1604, such as a keyboard, a mouse, a microphone, a barcode reader, an image scanner, etc., or output information from the controller compute device 1604, such as a display, a speaker, graphics circuitry, a printer, a projector, etc. It should be appreciated that, in some embodiments, one or more of the peripheral devices 1620 may function as both an input device and an output device (e.g., a touchscreen display, a digitizer on top of a display screen, etc.). It should be further appreciated that the types of peripheral devices 1620 connected to the controller compute device 1604 may depend on, for example, the type and/or intended use of the controller compute device 1604. Additionally or alternatively, in some embodiments, the peripheral devices 1620 may include one or more ports, such as a universal serial bus (USB) port, for example, for connecting external peripheral devices to the controller compute device 1604. - The
cloud orchestrator server 1602 may have components similar to those described in reference to the illustrative controller compute device 1604. As such, the description of those like and/or similar components of the controller compute device 1604 is equally applicable to the description of components of the orchestrator server 1602 and is not repeated herein for clarity of the description. Further, it should be appreciated that the orchestrator server 1602 may include other components, sub-components, and devices commonly found in a computing device, which are not discussed above in reference to the controller compute device 1604 and not discussed herein for clarity of the description. While the controller compute device 1604 is illustratively shown as a computing device separate from the orchestrator server 1602, it should be appreciated that, in other embodiments, at least a portion of the functions described herein as being performed by the controller compute device 1604 may be performed by the orchestrator server 1602.
- Referring now to FIG. 17, in use, the controller compute device 1604 establishes an environment 1700 during operation. The illustrative environment 1700 includes a network traffic ingress/egress manager 1708, an application programming interface (API) manager 1710, a task manager 1714, a microservice resource controller 1716, and a microtask resource controller 1726. The various components of the environment 1700 may be embodied as hardware, firmware, software, or a combination thereof. As such, in some embodiments, one or more of the components of the environment 1700 may be embodied as circuitry or a collection of electrical devices (e.g., network traffic ingress/egress management circuitry 1708, API management circuitry 1710, task management circuitry 1714, microservice resource controller circuitry 1716, microtask resource controller circuitry 1726, etc.). - It should be appreciated that, in some embodiments, each of the one or more functions described herein as being performed by the
controller compute device 1604 may be performed, at least in part, by one or more components of the controller compute device 1604, such as the compute engine 1606, the I/O subsystem 1612, and/or other components of the controller compute device 1604. As described previously, at least a portion of the functions described herein as being performed by the controller compute device 1604 may be performed by the orchestrator server 1602 in other embodiments. Accordingly, in such embodiments, it should be appreciated that one or more of the network traffic ingress/egress management circuitry 1708, the API management circuitry 1710, the task management circuitry 1714, the microservice resource controller circuitry 1716, and the microtask resource controller circuitry 1726 may reside on the orchestrator server 1602. - Additionally, in some embodiments, one or more of the illustrative components may form a portion of another component and/or one or more of the illustrative components may be independent of one another. Further, in some embodiments, one or more of the components of the
environment 1700 may be embodied as virtualized hardware components or emulated architecture, which may be established and maintained by the compute engine 1606, the NIC 1618, and/or other software/hardware components of the controller compute device 1604. It should be appreciated that the controller compute device 1604 may include other components, sub-components, modules, sub-modules, logic, sub-logic, and/or devices commonly found in a computing device (e.g., device drivers, interfaces, etc.), which are not illustrated in FIG. 17 for clarity of the description. - In the
illustrative environment 1700, the controller compute device 1604 additionally includes pod manager data 1702, task data 1704, and compose service data 1706, each of which may be accessed by the various components and/or sub-components of the controller compute device 1604. Additionally, it should be appreciated that in some embodiments the data stored in, or otherwise represented by, each of the pod manager data 1702, the task data 1704, and the compose service data 1706 may not be mutually exclusive relative to each other. For example, in some implementations, data stored in the pod manager data 1702 may also be stored as a portion of the task data 1704 and/or the compose service data 1706, or in another alternative arrangement. As such, although the various data utilized by the controller compute device 1604 is described herein as particular discrete data, such data may be combined, aggregated, and/or otherwise form portions of a single or multiple data sets, including duplicative copies, in other embodiments. - The network traffic ingress/
egress manager 1708, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to receive inbound and route/transmit outbound network traffic. To do so, the illustrative network traffic ingress/egress manager 1708 is configured to facilitate inbound network communications (e.g., network traffic, network packets, network flows, etc.) to the controller compute device 1604. Accordingly, the network traffic ingress/egress manager 1708 is configured to manage (e.g., create, modify, delete, etc.) connections to physical and virtual network ports (i.e., virtual network interfaces) of the controller compute device 1604 (e.g., via the communication circuitry 1616), as well as the ingress buffers/queues associated therewith. - The
API manager 1710, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to manage the API service 1712 instance to perform the functions as described herein. To do so, the API manager 1710 may be configured to instantiate the API service 1712 based on one or more characteristics, such as supported protocols (e.g., Representational State Transfer (REST), Extensible Markup Language (XML), etc.), libraries, etc. The API service 1712 is configured to provide multiple points for inbound API calls, and to perform a translation thereof into corresponding message(s), as necessary, for internal consumption (e.g., by the task manager 1714). The API service 1712 is additionally configured to generate outbound calls (e.g., to the cloud orchestrator server 1602) based on messages received internally (e.g., from the task manager 1714). - The
task manager 1714, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to schedule tasks received at the controller compute device 1604, such as may be received from an orchestrator server (e.g., the cloud orchestrator server 1602 of FIG. 16) communicatively coupled to the controller compute device 1604. In some embodiments, workload processing requests may be transmitted between the task manager 1714 and a pooled system management engine (PSME). In such embodiments, certain processing tasks may be coordinated between the task manager 1714 and the applicable PSME (e.g., via a corresponding pod manager service) for fulfillment by one or more devices associated with the PSME. It should be understood that the term “PSME” is nomenclature used by Intel Corporation and is used herein merely for convenience. It should be further understood that the PSME may be embodied as any sled-level, rack-level, or tray-level management engine. - In an illustrative example, the
task manager 1714 is configured to receive an indication that a request for the initiation of a service managed by the controller compute device 1604 has been received by the controller compute device 1604. In some embodiments, the task manager 1714 may receive such initialization requests via the API service 1712. The task manager 1714 is further configured to create tasks and any messages associated therewith that are usable to identify information associated with the created task (e.g., for the execution thereof). In an illustrative embodiment, the task manager 1714 may be configured to create a task-related message (e.g., based on a task management application protocol) that includes a header, a destination service, a source service, a task identifier, a task timestamp, a state of the task, and a request body. It should be appreciated that such tasks may be performed synchronously or asynchronously, and such task-related information may be stored in the task data 1704. The task manager 1714 is further configured to post the created tasks to the appropriate task queue. Accordingly, the task manager 1714 is further configured to manage the queue of created tasks and the messages associated therewith for performing the tasks. In other words, the task manager 1714 is additionally configured to function as a task queue manager.
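- As a concrete illustration of the task-related message described above, the following is a minimal sketch of such a message and its posting to a task queue. The fields track the description above, but the class, function, and queue names are hypothetical, and the encoding is an assumption rather than anything the present disclosure prescribes.

```python
import queue
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical task-related message carrying the fields described above:
# a header, destination service, source service, task identifier, task
# timestamp, task state, and request body.
@dataclass
class TaskMessage:
    destination_service: str
    source_service: str
    request_body: dict
    header: str = "task-mgmt/1.0"
    task_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    state: str = "created"

# The task manager posts created tasks to the appropriate task queue and
# later hands them to a resource controller for execution.
task_queue: "queue.Queue[TaskMessage]" = queue.Queue()

def create_and_post_task(destination: str, body: dict) -> TaskMessage:
    msg = TaskMessage(destination_service=destination,
                      source_service="task-manager",
                      request_body=body)
    task_queue.put(msg)  # enqueue for pickup by the resource controller
    return msg

if __name__ == "__main__":
    task = create_and_post_task("microservice-resource-controller",
                                {"action": "compose-node",
                                 "resources": ["compute", "storage"]})
    print(task.task_id, task.state)
```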
- The microservice resource controller 1716, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to control the composition of disaggregated service resources (i.e., microservice resources) to compose a host (e.g., a node that can function as a server) to perform the requested services. It should be understood that a microservice architecture is a software development technique that structures an application as a collection of loosely coupled services. Furthermore, in a microservices architecture, services are fine-grained and the protocols are lightweight. It should be further appreciated that such services are often processes that communicate over a network to fulfill a goal using technology-agnostic protocols or inter-process communication mechanisms (e.g., shared memory). As such, services in a microservice architecture are independently deployable, easy to replace, organized around capabilities, small in size, messaging enabled, bounded by contexts, autonomously developed, and decentralized. - The
microservice resource controller 1716 is additionally configured to initialize any additional services associated with the operation of the composed nodes, such as a communication network, for example. To do so, the microservice resource controller 1716 is configured to manage disaggregated network resources, compute resources, storage resources, accelerator resources, etc., using an associated microservice (e.g., a network service, a storage service, a compute service, etc.). Accordingly, the microservice resource controller 1716 is configured to pick up (e.g., retrieve from the applicable task queue) and execute the tasks. It should be appreciated that each service controlled by the controller compute device 1604 is comprised of one or more microservices capable of providing one or more services thereof. - As such, the illustrative
microservice resource controller 1716 includes a network service 1718, a storage service 1720, a compute service 1722, and a telemetry service 1724. The network service 1718 is configured to use network-related resources to perform a particular task associated with a requested controller service. The storage service 1720 is configured to use storage resources to perform particular storage-related tasks associated with a requested controller service. The compute service 1722 is configured to use compute and/or accelerator resources to perform particular compute-related tasks associated with a requested controller service. The telemetry service 1724 is configured to collect/store telemetry data in accordance with a requested controller service. It should be appreciated that the microservice resource controller 1716 may include additional and/or alternative services in other embodiments. In some embodiments, information associated with the composed resources may be stored in the compose service data 1706.
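- The division of labor among these services can be illustrated with a small dispatch sketch. The service numerals follow the description above, but the task-type keys, handler behavior, and function signatures are assumptions for illustration only.

```python
# Hypothetical dispatch of retrieved tasks to the controller's services.
# The service names mirror the description above; the task-type keys and
# handler bodies are placeholders.
SERVICE_TABLE = {
    "network":   lambda t: f"network service 1718: handling {t['id']}",
    "storage":   lambda t: f"storage service 1720: handling {t['id']}",
    "compute":   lambda t: f"compute service 1722: handling {t['id']}",
    "telemetry": lambda t: f"telemetry service 1724: handling {t['id']}",
}

def dispatch(task: dict) -> str:
    # Route each task to the service that owns its resource class.
    try:
        handler = SERVICE_TABLE[task["type"]]
    except KeyError:
        raise ValueError(f"no service registered for {task['type']!r}")
    return handler(task)

if __name__ == "__main__":
    print(dispatch({"type": "compute", "id": "task-42"}))
```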
- The microtask resource controller 1726, which may be embodied as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to control the composition of disaggregated microservice resources (i.e., microtask resources) to perform the requested services. To do so, the illustrative microtask resource controller 1726 includes a database service 1728 configured to manage one or more databases, a timestamp service 1730 configured to apply timestamps, a compose service 1732 configured to compose services via the microtask resources, and a resource allocator 1734. The resource allocator 1734 is configured to allocate resources for each microtask, as may be requested by the other microtasks (e.g., the database service 1728, the timestamp service 1730, the compose service 1732, etc.) via a corresponding thread. It should be appreciated that the resource allocator 1734 is at the lowest level in the hierarchy of microtasks. - In other words, the compose
service 1732 is configured to manage the composable hardware dynamically as necessary to scale up or down. Accordingly, the compose service 1732 can call the resource allocator 1734 to compose (e.g., configure, group, etc.) various resources, such as by workload for a particular service. For example, the compose service 1732 may be configured to initiate a discovery operation, create zones, provision a network, compose a host, release a host, provision storage, provision a node, etc.
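- A minimal sketch of the compose service delegating to the resource allocator, under the assumption of hypothetical class names and interfaces, might look as follows; the compose_host method mirrors the compose-a-host operation listed above.

```python
class ResourceAllocator:
    """Lowest level of the microtask hierarchy: hands out raw resources."""

    def __init__(self, inventory: dict):
        self.inventory = dict(inventory)  # e.g. {"cpu": 16, "ssd": 8}

    def allocate(self, kind: str, count: int) -> list:
        # Reserve `count` resources of the requested kind, if available.
        if self.inventory.get(kind, 0) < count:
            raise RuntimeError(f"not enough {kind} resources")
        self.inventory[kind] -= count
        return [f"{kind}-{i}" for i in range(count)]

class ComposeService:
    """Scales composed hardware up or down by calling the allocator."""

    def __init__(self, allocator: ResourceAllocator):
        self.allocator = allocator

    def compose_host(self, workload: dict) -> dict:
        # Group the resources the workload needs into one composed host.
        return {kind: self.allocator.allocate(kind, count)
                for kind, count in workload.items()}

if __name__ == "__main__":
    service = ComposeService(ResourceAllocator({"cpu": 16, "ssd": 8}))
    print(service.compose_host({"cpu": 4, "ssd": 2}))
```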
- To do so, for example, a PSME may be configured to detect resources (e.g., via a discovery that may be initiated by the controller compute device 1604), such that information related thereto (e.g., processing power, configuration, specialized functionality, average utilization, or the like) can be retrieved and provided to the resource allocator 1734. In such embodiments, for example, each sled (e.g., one of the compute sleds 1530) equipped with a PSME may detect device resources (e.g., NICs, ports, memory, CPUs, etc.) within the data center (e.g., the system 1510), including discovering information about each detected device (e.g., processing power, configuration, specialized functionality, average utilization, and/or the like) that is usable to schedule one or more portions (e.g., tasks) of an application to be processed by device(s) available in the system 1510 suited to performing the respective task. In some embodiments, the resource data may be stored in the pod manager data 1702.
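- The discovery information described above can be pictured as a per-device record that a scheduler filters on. The record fields track the description; the class name and the selection policy below are illustrative assumptions.

```python
from dataclasses import dataclass

# Illustrative record for a device discovered via a sled-level management
# engine; the fields mirror the discovery information described above.
@dataclass
class DiscoveredDevice:
    device_id: str
    processing_power: float    # e.g. a normalized throughput score
    specialized_function: str  # e.g. "crypto", "ml-inference", "none"
    avg_utilization: float     # 0.0 .. 1.0

def best_device_for(task_function: str, devices: list) -> DiscoveredDevice:
    # Prefer devices specialized for the task, then pick the least utilized.
    specialized = [d for d in devices
                   if d.specialized_function == task_function]
    pool = specialized or devices
    return min(pool, key=lambda d: d.avg_utilization)

if __name__ == "__main__":
    fleet = [
        DiscoveredDevice("sled1-fpga0", 8.0, "ml-inference", 0.70),
        DiscoveredDevice("sled2-fpga1", 8.0, "ml-inference", 0.20),
        DiscoveredDevice("sled3-cpu0", 4.0, "none", 0.10),
    ]
    print(best_device_for("ml-inference", fleet).device_id)
```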
- Referring now to FIG. 18, in operation, the controller compute device 1604 may execute a method 1800 for managing disaggregated resources in a data center. As described previously, managing the disaggregated resources may include managing distributed pooled storage, distributed pooled compute, distributed pooled accelerators, etc. The method 1800 begins in block 1802, in which the controller compute device 1604 determines whether to initialize a service. It should be appreciated that initializing the service may include initializing a service to perform a particular function and/or composing one or more nodes to execute the initialized service. If so, the method 1800 advances to block 1804, in which the controller compute device 1604 stores information associated with the service to be initialized. In block 1806, the controller compute device 1604 creates a task based on the service to be initialized. In block 1808, the controller compute device 1604 inserts the created task into a corresponding task message queue. - In
block 1810, the controller compute device 1604 determines whether the created task is to be processed, such as may be determinable when the created task is at a head of the task message queue. In other words, the controller compute device 1604 determines whether to compose the requested service. If so, the method 1800 advances to block 1812, in which the controller compute device 1604 creates a microservice to perform the created task. It should be appreciated that the controller compute device 1604 may create the microservice to be hosted on more than one host device. Accordingly, under such conditions, the host devices may be configured to communicate between the pod manager services (e.g., using a main service over Advanced Message Queuing Protocol (AMQP)). - To create the microservice, in
block 1814, the controller compute device 1604 composes the microservice as a collection of services, which can be instantiated within their respective namespace by any service, for example the illustrative network service 1718, storage service 1720, compute service 1722, and/or telemetry service 1724 of FIG. 17, or any other service/microservice that may be associated with the microservice resource controller 1716 of FIG. 17 in other embodiments. In block 1816, the controller compute device 1604 creates each of the collection of services as a collection of microtasks (e.g., via the resource allocator 1734 of FIG. 17), such as, for example, the illustrative database service 1728, timestamp service 1730, and/or compose service 1732 of FIG. 17, or any other service/microtask that may be associated with the microtask resource controller 1726 of FIG. 17 in other embodiments.
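- The hierarchy established in blocks 1814 and 1816 (a microservice composed of services, each created as a collection of microtasks) can be sketched as a simple nested structure; the names and the flattening helper below are hypothetical.

```python
# Sketch of the composition hierarchy of blocks 1814 and 1816: a
# microservice is composed as a collection of services, and each service
# is created as a collection of microtasks. Names are illustrative.
MICROSERVICE = {
    "name": "compose-node",
    "services": {
        "network-service":   ["provision-underlay", "configure-host-ports"],
        "storage-service":   ["deploy-volume"],
        "compute-service":   ["discover-resources", "compose-host"],
        "telemetry-service": ["start-collection"],
    },
}

def expand_microtasks(microservice: dict) -> list:
    # Flatten the hierarchy into (service, microtask) pairs that the
    # resource allocator can assign to threads, one per microtask.
    return [(svc, microtask)
            for svc, microtasks in microservice["services"].items()
            for microtask in microtasks]

if __name__ == "__main__":
    for svc, microtask in expand_microtasks(MICROSERVICE):
        print(f"{svc}: {microtask}")
```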
- From block 1816, the method 1800 advances to blocks 1818 and 1822. In block 1818, the controller compute device 1604 creates one or more threads to perform any asynchronous task(s) associated with the created microservice, including any hardware management lifecycle operations, network management operations, network slice allocations, etc., as described herein (see, e.g., the communication flows 1900 of FIG. 19, 2000 of FIG. 20, 2100 of FIG. 21, and 2200 of FIG. 22). In block 1820, the controller compute device 1604 completes any asynchronous tasks associated with the created threads in the background. In block 1822, the controller compute device 1604 completes any synchronous task(s) associated with the created microservice.
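- A minimal sketch of blocks 1818 through 1822, assuming Python's standard thread pool and placeholder task bodies, is shown below; it illustrates one way to run the asynchronous tasks in the background while the synchronous tasks complete inline, not the disclosed implementation itself.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Sketch of blocks 1818-1822: asynchronous tasks run on background threads
# while synchronous tasks complete inline. Task bodies are placeholders.
def async_task(name: str) -> str:
    time.sleep(0.1)  # stand-in for, e.g., a network slice allocation
    return f"{name}: completed in background"

def sync_task(name: str) -> str:
    return f"{name}: completed inline"

def run_microservice_tasks(async_names: list, sync_names: list) -> list:
    with ThreadPoolExecutor() as pool:
        # Block 1818: create one thread per asynchronous task.
        futures = [pool.submit(async_task, n) for n in async_names]
        # Block 1822: synchronous tasks complete while the others run.
        inline = [sync_task(n) for n in sync_names]
        # Block 1820: asynchronous tasks finish in the background.
        background = [f.result() for f in futures]
    return inline + background

if __name__ == "__main__":
    for line in run_microservice_tasks(
            ["hw-lifecycle-mgmt", "network-mgmt"], ["update-task-state"]):
        print(line)
```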
- Referring now to
- Referring now to FIG. 19, an embodiment of an illustrative communication flow 1900 for performing hardware lifecycle management operations is shown that includes the cloud orchestrator server 1602 and the controller compute device 1604 of FIG. 16. The illustrative controller compute device 1604 includes the API service 1712, the task manager 1714, the microservice resource controller 1716, and the microtask resource controller 1726 of FIG. 17. The illustrative communication flow 1900 includes a number of data flows, some of which may be executed separately or together, depending on the embodiment. In data flow 1902, the cloud orchestrator server 1602 determines to orchestrate a VM to run an application that requires various compute resources. - To do so, in
data flow 1904, the cloud orchestrator server 1602 transmits a service orchestration request (e.g., via an applicable API call) to the API service 1712 that includes a list of resources (e.g., compute resources, storage resources, network resources, accelerator resources, etc.) that are usable to identify which resources are required to compose a node. As described previously, system resources (e.g., memory devices, data storage devices, accelerator devices, general purpose processors, etc.) can be logically coupled to form a composed node, which can act as, for example, a server. In data flow 1906, the API service 1712 forwards the received service orchestration request to the task manager 1714. It should be appreciated that the API service 1712 may generate a new message that effectively translates the received service orchestration request into a message interpretable by the task manager 1714 to perform the requested service orchestration. Upon receipt of the message, the task manager 1714 determines that a node is to be composed and storage allocated thereto. As such, in data flow 1908, the task manager 1714 generates and enqueues a task to orchestrate the requested node. - In
data flow 1910, the microservice resource controller 1716 allocates the task (e.g., via one or more services) to initiate composition of the requested node and, in data flow 1912, transmits a notification of the allocated task to the microtask resource controller 1726. In data flow 1914, the microtask resource controller 1726 (e.g., via the compose service 1732 of FIG. 17) allocates a thread (e.g., from a thread pool) to call a pod manager (e.g., via a pod manager service) to discover resources of the associated hardware cluster. Additionally, in data flow 1916, the microtask resource controller 1726 (e.g., via the resource allocator 1734 of FIG. 17) allocates a thread to call a compose API of the pod manager to compose a portion of the discovered resources. Further, in data flow 1918, the microtask resource controller 1726 (e.g., via the resource allocator 1734 of FIG. 17) allocates a thread to deploy a storage volume. In data flow 1920, the microtask resource controller 1726 transmits a notification of the composed node resources to the microservice resource controller 1716. In data flow 1922, the microservice resource controller 1716 transmits a notification of completion to the cloud orchestrator server 1602 (e.g., via the task manager 1714 and the API service 1712) that includes an identifier of the composed node.
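- The thread allocations of data flows 1914 through 1918 can be sketched as follows. The PodManager interface, its method names, and the returned identifiers are assumptions; the comments map each call back to the data flows above.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder pod manager interface; the actual pod manager API is not
# defined here, so these method names and return values are assumptions.
class PodManager:
    def discover(self) -> list:
        return ["cpu-0", "cpu-1", "ssd-0", "nic-0"]  # placeholder inventory

    def compose(self, resources: list) -> str:
        return "node-" + "+".join(resources)          # placeholder node id

    def deploy_volume(self, node_id: str) -> str:
        return f"vol-for-{node_id}"

def compose_node(pm: PodManager, wanted: set) -> dict:
    with ThreadPoolExecutor() as pool:
        # Flow 1914: a thread calls the pod manager to discover resources.
        discovered = pool.submit(pm.discover).result()
        chosen = [r for r in discovered if r.split("-")[0] in wanted]
        # Flow 1916: a thread calls the compose API on a resource subset.
        node_id = pool.submit(pm.compose, chosen).result()
        # Flow 1918: a thread deploys a storage volume for the node.
        volume = pool.submit(pm.deploy_volume, node_id).result()
    # Flows 1920/1922: the composed node identifier is reported upward.
    return {"node_id": node_id, "volume": volume}

if __name__ == "__main__":
    print(compose_node(PodManager(), {"cpu", "ssd"}))
```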
- Referring now to FIG. 20, an embodiment of an illustrative communication flow 2000 for scheduling and managing groups of nodes (e.g., the managed nodes 1622 of FIG. 16) is shown that includes the cloud orchestrator server 1602 and the controller compute device 1604 of FIG. 16. The illustrative controller compute device 1604 includes the API service 1712, the task manager 1714, the microservice resource controller 1716, and the microtask resource controller 1726 of FIG. 17. The illustrative communication flow 2000 includes a number of data flows, some of which may be executed separately or together, depending on the embodiment. In data flow 2002, the cloud orchestrator server 1602 determines a set of VMs or containers to deploy and, accordingly, that a group of nodes is to be composed and provisioned. - To do so, in
data flow 2004, the cloud orchestrator server 1602 transmits a service orchestration request (e.g., via an applicable API call) to the API service 1712 that includes a list of required node characteristics (e.g., compute resources, storage resources, network resources, accelerator resources, configuration settings, etc.). In data flow 2006, the API service 1712 forwards the received service orchestration request to the task manager 1714 with a system group notification. It should be appreciated that the API service 1712 may generate a new message that effectively translates the received service orchestration request into a message interpretable by the task manager 1714 to perform the requested service orchestration. Upon receipt of the message, the task manager 1714 determines that a group of nodes is to be composed and storage allocated thereto. As such, in data flow 2008, the task manager 1714 spawns and enqueues multiple tasks to orchestrate the requested nodes. - In
data flow 2010, the microservice resource controller 1716 allocates each task (e.g., via one or more services) to initiate composition of the requested nodes and, in data flow 2012, transmits a notification of the allocated tasks to the microtask resource controller 1726. In data flow 2014, the microtask resource controller 1726 (e.g., via the compose service 1732 of FIG. 17) allocates threads (e.g., from a thread pool) to call a pod manager to discover resources of the associated hardware cluster. Additionally, in data flow 2016, the microtask resource controller 1726 (e.g., via the resource allocator 1734 of FIG. 17) allocates threads to call a compose API of the pod manager to compose a portion of the discovered resources. Further, in data flow 2018, the microtask resource controller 1726 (e.g., via the resource allocator 1734 of FIG. 17) allocates threads to deploy storage volumes, as necessary. In data flow 2020, the microtask resource controller 1726 transmits a notification of the composed nodes to the microservice resource controller 1716. In data flow 2022, the microservice resource controller 1716 transmits a notification of completion to the cloud orchestrator server 1602 (e.g., via the task manager 1714 and the API service 1712) that includes an identifier of each of the composed nodes and an identifier of the group of composed nodes.
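- Compared with the single-node flow, the group flow mainly parallelizes the same steps and returns a group identifier alongside the per-node identifiers; a minimal sketch under the same assumptions follows, with compose_one standing in for the single-node composition sketched earlier.

```python
import uuid
from concurrent.futures import ThreadPoolExecutor

# Sketch of flow 2000: several single-node compositions run in parallel
# and the result carries per-node identifiers plus a group identifier.
def compose_one(spec: dict) -> str:
    return f"node-{spec['name']}"  # placeholder single-node composition

def compose_group(specs: list) -> dict:
    with ThreadPoolExecutor() as pool:
        node_ids = list(pool.map(compose_one, specs))  # flows 2010-2018
    return {"group_id": f"group-{uuid.uuid4().hex[:8]}",  # flow 2022
            "node_ids": node_ids}

if __name__ == "__main__":
    print(compose_group([{"name": "web"}, {"name": "db"}, {"name": "cache"}]))
```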
- Referring now to FIG. 21, an embodiment of an illustrative communication flow 2100 for managing an underlay network is shown that includes the cloud orchestrator server 1602 and the controller compute device 1604 of FIG. 16. The illustrative controller compute device 1604 includes the API service 1712, the task manager 1714, the microservice resource controller 1716, and the microtask resource controller 1726 of FIG. 17. The illustrative communication flow 2100 includes a number of data flows, some of which may be executed separately or together, depending on the embodiment. In data flow 2102, the cloud orchestrator server 1602 determines that an underlay network (e.g., a virtual local area network (VLAN), a Virtual Extensible LAN (VxLAN), etc.) is required with specific configurations for particular network traffic associated with a tenant (e.g., based on a service-level agreement (SLA)). - To do so, in
data flow 2104, the cloud orchestrator server 1602 transmits a service orchestration request (e.g., via an applicable API call) to the API service 1712 that includes a set of configuration settings of the underlay network. In data flow 2106, the API service 1712 forwards the request to the task manager 1714. As described previously, it should be appreciated that the API service 1712 may generate a new message that effectively translates the received service orchestration request into a message interpretable by the task manager 1714 to perform the requested service orchestration. In data flow 2108, the task manager 1714 spawns a task to compose network resources (e.g., via the network service 1718). Upon receipt of the network resource composition task, in data flow 2110, the microservice resource controller 1716 starts one or more threads (e.g., one or more master threads) to allocate the requested network resources. - In
data flow 2112, the microtask resource controller 1726 allocates a thread to configure one or more ports of a switch of the system (e.g., the switch 150 of FIG. 1 or one of the switches of FIG. 2). In data flow 2114, the microtask resource controller 1726 allocates another thread to configure one or more ports of one or more end-hosts (e.g., composed node(s)). In some embodiments, the configuration may be performed via a communicatively coupled pod manager (not shown), such as may be performed via one or more pod manager network API calls. It should be appreciated that, depending on the embodiment, the task manager 1714 may be configured to spawn additional tasks for the microservice resource controller 1716 to start additional threads to provision other network services (e.g., a VLAN on the host and switch). Accordingly, in such embodiments, the pod manager network API and a switch API may be called at that time. Upon completion, the microservice resource controller 1716 transmits a notification of completion to the cloud orchestrator server 1602 (e.g., via the task manager 1714 and the API service 1712) that includes a completion code and an identifier of the underlay network.
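- The master/child thread arrangement of data flows 2110 through 2114 can be sketched as follows; the configuration functions stand in for the pod manager network API and switch API calls, and all names are illustrative.

```python
import threading

# Sketch of flow 2100: a master thread allocates child threads that
# configure switch ports and end-host ports for the requested underlay
# network (e.g., a VLAN). The inner functions stand in for the pod
# manager network API and switch API calls.
def provision_underlay(vlan_id: int, switch_ports: list, hosts: list) -> list:
    results: list = []
    lock = threading.Lock()

    def configure_switch_ports():  # child thread, flow 2112
        with lock:
            results.append(f"switch: VLAN {vlan_id} on ports {switch_ports}")

    def configure_host_ports():    # child thread, flow 2114
        with lock:
            results.append(f"hosts: VLAN {vlan_id} on {hosts}")

    children = [threading.Thread(target=configure_switch_ports),
                threading.Thread(target=configure_host_ports)]
    for child in children:
        child.start()
    for child in children:
        child.join()
    return results  # reported upward in the completion notification

if __name__ == "__main__":
    for line in provision_underlay(100, [1, 2], ["node-web", "node-db"]):
        print(line)
```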
- Referring now to FIG. 22, an embodiment of an illustrative communication flow 2200 for allocating a network slice is shown that includes the cloud orchestrator server 1602 and the controller compute device 1604 of FIG. 16. The illustrative controller compute device 1604 includes the API service 1712, the task manager 1714, the microservice resource controller 1716, and the microtask resource controller 1726 of FIG. 17. The illustrative communication flow 2200 includes a number of data flows, some of which may be executed separately or together, depending on the embodiment. In data flow 2202, the cloud orchestrator server 1602 determines that a network slice is required with specific configurations for a telecommunications network. It should be understood that network slicing is a form of virtual network architecture that uses the same or similar principles as software defined networks (SDNs) and network function virtualization (NFV) architectures in fixed networks, and that allows multiple logical networks to run on top of a shared physical network infrastructure. - To do so, in
data flow 2204, the cloud orchestrator server 1602 transmits a service orchestration request (e.g., via an applicable API call) to the API service 1712 that includes a list of required resources for the network slice, such as a specific accelerator resource for performing certain functions of the network slice. In data flow 2206, the API service 1712 transmits a request to the task manager 1714 to attach resources to a composed node (see, e.g., the communication flow 1900 of FIG. 19). In data flow 2208, the API service 1712 transmits a request to the task manager 1714 to start a service and network to communicate with the attached resources. In data flow 2210, the task manager 1714 spawns the appropriate tasks to attach the resources and start the communication service and network. In data flow 2212, the microservice resource controller 1716 allocates tasks to initiate composition of the requested resources and services. - In
data flow 2214, the microservice resource controller 1716 transmits a notification of the allocated tasks. Accordingly, in data flow 2216, the microtask resource controller 1726 allocates a thread to call a pod manager (e.g., via the applicable API calls to that pod manager) to attach the required resources to a composed node. Additionally, in data flow 2218, the microtask resource controller 1726 allocates a thread to communicate with the switch (e.g., the switch 150 of FIG. 1 or one of the switches of FIG. 2) to create a network to the attached resources of the composed node. In data flow 2220, the microtask resource controller 1726 transmits a notification that the allocation threads have completed. In data flow 2222, the microservice resource controller 1716 transmits a notification of completion to the cloud orchestrator server 1602 (e.g., via the task manager 1714 and the API service 1712) that includes an identifier of each allocated resource.
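- A minimal sketch of data flows 2216 through 2222, assuming placeholder interfaces for the pod manager and switch calls, is shown below; the attach and network-creation steps run in order because the network is created to the already-attached resources.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of flows 2216-2222: one thread attaches the required resources
# (e.g., an accelerator) to a composed node via a pod manager, and another
# asks the switch to create a network to them. Both interfaces are
# placeholders, not APIs defined by the present disclosure.
def attach_resources(node_id: str, resources: list) -> list:
    return [f"{r}@{node_id}" for r in resources]             # flow 2216

def create_slice_network(node_id: str, attached: list) -> str:
    return f"slice-net:{node_id}:{len(attached)}-resources"  # flow 2218

def allocate_slice(node_id: str, resources: list) -> dict:
    with ThreadPoolExecutor(max_workers=2) as pool:
        attached = pool.submit(attach_resources, node_id, resources).result()
        network = pool.submit(create_slice_network, node_id, attached).result()
    # Flows 2220/2222: completion notification listing allocated resources.
    return {"attached": attached, "network": network}

if __name__ == "__main__":
    print(allocate_slice("node-7", ["fpga-crypto", "nic-25g"]))
```

- Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below.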
- Example 1 includes a compute device for managing disaggregated resources in a data center, the compute device comprising microservice resource controller circuitry to (i) determine that a service related task has been generated and (ii) create one or more microservices to perform the determined service related task using at least one of a plurality of services managed by the microservice resource controller circuitry; and microtask resource controller circuitry to generate one or more microtasks to compose at least one service based on the one or more microservices.
- Example 2 includes the subject matter of Example 1, and wherein to generate the one or more microtasks comprises to create one or more threads for each of the one or more microservices, and wherein each of the one or more threads is to execute a respective one of the one or more microtasks.
- Example 3 includes the subject matter of any of Examples 1 and 2, and wherein to create the one or more threads comprises to allocate a first thread to call a pod manager of the data center to discover resources of a hardware cluster of the data center.
- Example 4 includes the subject matter of any of Examples 1-3, and wherein to create the one or more threads further comprises to allocate a second thread to compose a portion of the discovered resources into a composed node that is configured to function as a server.
- Example 5 includes the subject matter of any of Examples 1-4, and wherein to create the one or more threads further comprises to allocate a third thread to deploy a storage volume to be associated with the composed node.
- Example 6 includes the subject matter of any of Examples 1-5, and wherein the microservice resource controller circuitry is further to transmit a notification of completion to an entity that requested composition of the composed node, and wherein the notification of completion includes an identifier of the composed node.
- Example 7 includes the subject matter of any of Examples 1-6, and wherein to create the one or more threads further comprises to allocate a plurality of threads to compose a portion of the discovered resources into a group of composed nodes, wherein each composed node of the group of composed nodes is configured to function as a server.
- Example 8 includes the subject matter of any of Examples 1-7, and wherein the microservice resource controller circuitry is further to transmit a notification of completion to an entity that requested composition of the composed node, and wherein the notification of completion includes an identifier of each composed node of the group of composed nodes and a group identifier that identifies the group of composed nodes.
- Example 9 includes the subject matter of any of Examples 1-8, and wherein to determine that the service related task has been generated comprises to determine that an underlay network of the data center is to be orchestrated, wherein to create the one or more threads comprises to start a master thread to compose network resources, and wherein the master thread is to (i) allocate a child thread to configure one or more switch ports of a switch of the data center and (ii) allocate one or more threads to configure one or more host ports of a node of the data center.
- Example 10 includes the subject matter of any of Examples 1-9, and wherein the microservice resource controller circuitry is further to transmit a notification of completion to an entity that requested the underlay network to be orchestrated, and wherein the notification of completion includes a completion code and an identifier of the underlay network.
- Example 11 includes the subject matter of any of Examples 1-10, and wherein to determine that the service related task has been generated comprises to determine that the generated service related task indicates that at least one node is to be orchestrated.
- Example 12 includes the subject matter of any of Examples 1-11, and wherein the resources include compute resources, storage resources, and network resources.
- Example 13 includes a compute device for managing disaggregated resources in a data center, the compute device comprising a compute engine to (i) determine that a service related task has been generated and (ii) create one or more microservices to perform the determined service related task using at least one of a plurality of services managed by the compute engine; and microtask resource controller circuitry to generate one or more microtasks to compose at least one service based on the one or more microservices.
- Example 14 includes the subject matter of Example 13, and wherein to generate the one or more microtasks comprises to create one or more threads for each of the one or more microservices, and wherein each of the one or more threads is to execute a respective one of the one or more microtasks.
- Example 15 includes the subject matter of any of Examples 13 and 14, and wherein to determine that the service related task has been generated comprises to determine that the generated service related task indicates that at least one node is to be orchestrated, and wherein to create the one or more threads comprises to allocate a first thread to call a pod manager of the data center to discover resources of a hardware cluster of the data center, wherein the resources include compute resources, storage resources, and network resources.
- Example 16 includes the subject matter of any of Examples 13-15, and wherein to create the one or more threads further comprises to allocate a second thread to compose a portion of the discovered resources into a composed node that is configured to function as a server.
- Example 17 includes the subject matter of any of Examples 13-16, and wherein to create the one or more threads further comprises to allocate a third thread to deploy a storage volume to be associated with the composed node.
- Example 18 includes the subject matter of any of Examples 13-17, and wherein the compute engine is further to transmit a notification of completion to an entity that requested composition of the composed node, and wherein the notification of completion includes an identifier of the composed node.
- Example 19 includes the subject matter of any of Examples 13-18, and wherein to create the one or more threads further comprises to allocate a plurality of threads to compose a portion of the discovered resources into a group of composed nodes, wherein each composed node of the group of composed nodes is configured to function as a server.
- Example 20 includes the subject matter of any of Examples 13-19, and wherein the compute engine is further to transmit a notification of completion to an entity that requested composition of the composed node, and wherein the notification of completion includes an identifier of each composed node of the group of composed nodes and a group identifier that identifies the group of composed nodes.
- Example 21 includes the subject matter of any of Examples 13-20, and wherein to determine that the service related task has been generated comprises to determine that an underlay network of the data center is to be orchestrated, wherein to create the one or more threads comprises to start a master thread to compose network resources of the data center, and wherein the master thread is to (i) allocate a child thread to configure one or more switch ports of a switch of the data center and (ii) allocate one or more threads to configure one or more host ports of a node of the data center.
- Example 22 includes the subject matter of any of Examples 13-21, and wherein the compute engine is further to transmit a notification of completion to an entity that requested the underlay network to be orchestrated, and wherein the notification of completion includes a completion code and an identifier of the underlay network.
- Example 23 includes a method for managing disaggregated resources in a data center, the method comprising determining, by a compute device, that a service related task has been generated; creating, by the compute device, one or more microservices to perform the determined service related task using at least one of a plurality of services managed by the compute device; and generating, by the compute device, one or more microtasks to compose at least one service based on the one or more microservices.
- Example 24 includes the subject matter of Example 23, and wherein generating the one or more microtasks comprises creating one or more threads for each of the one or more microservices, and wherein each of the one or more threads is to execute a respective one of the one or more microtasks.
- Example 25 includes the subject matter of any of Examples 23 and 24, and wherein determining that the service related task has been generated comprises determining that the generated service related task indicates that at least one node is to be orchestrated, and wherein creating the one or more threads comprises allocating a first thread to call a pod manager of the data center to discover resources of a hardware cluster of the data center, wherein the resources include compute resources, storage resources, and network resources.
Claims (26)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/642,523 US20200257566A1 (en) | 2017-08-30 | 2018-08-30 | Technologies for managing disaggregated resources in a data center |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IN201741030632 | 2017-08-30 | ||
IN201741030632 | 2017-08-30 | ||
US201762584401P | 2017-11-10 | 2017-11-10 | |
PCT/US2018/048917 WO2019046620A1 (en) | 2017-08-30 | 2018-08-30 | Technologies for managing disaggregated resources in a data center |
US16/642,523 US20200257566A1 (en) | 2017-08-30 | 2018-08-30 | Technologies for managing disaggregated resources in a data center |
Publications (1)
Publication Number | Publication Date |
---|---|
US20200257566A1 true US20200257566A1 (en) | 2020-08-13 |
Family
ID=65434219
Family Applications (24)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/850,325 Abandoned US20190068466A1 (en) | 2017-08-30 | 2017-12-21 | Technologies for auto-discovery of fault domains |
US15/858,305 Abandoned US20190068464A1 (en) | 2017-08-30 | 2017-12-29 | Technologies for machine learning schemes in dynamic switching between adaptive connections and connection optimization |
US15/858,549 Abandoned US20190065401A1 (en) | 2017-08-30 | 2017-12-29 | Technologies for providing efficient memory access on an accelerator sled |
US15/858,557 Abandoned US20190065083A1 (en) | 2017-08-30 | 2017-12-29 | Technologies for providing efficient access to pooled accelerator devices |
US15/858,316 Abandoned US20190065260A1 (en) | 2017-08-30 | 2017-12-29 | Technologies for kernel scale-out |
US15/858,288 Abandoned US20190068521A1 (en) | 2017-08-30 | 2017-12-29 | Technologies for automated network congestion management |
US15/858,286 Abandoned US20190068523A1 (en) | 2017-08-30 | 2017-12-29 | Technologies for allocating resources across data centers |
US15/858,748 Active 2039-08-11 US11614979B2 (en) | 2017-08-30 | 2017-12-29 | Technologies for configuration-free platform firmware |
US15/858,542 Active 2039-10-02 US11748172B2 (en) | 2017-08-30 | 2017-12-29 | Technologies for providing efficient pooling for a hyper converged infrastructure |
US15/859,394 Active 2040-04-27 US11467885B2 (en) | 2017-08-30 | 2017-12-30 | Technologies for managing a latency-efficient pipeline through a network interface controller |
US15/859,388 Abandoned US20190065231A1 (en) | 2017-08-30 | 2017-12-30 | Technologies for migrating virtual machines |
US15/859,363 Abandoned US20190068444A1 (en) | 2017-08-30 | 2017-12-30 | Technologies for providing efficient transfer of results from accelerator devices in a disaggregated architecture |
US15/859,366 Abandoned US20190065261A1 (en) | 2017-08-30 | 2017-12-30 | Technologies for in-processor workload phase detection |
US15/859,385 Abandoned US20190065281A1 (en) | 2017-08-30 | 2017-12-30 | Technologies for auto-migration in accelerated architectures |
US15/859,364 Active 2039-07-30 US11392425B2 (en) | 2017-08-30 | 2017-12-30 | Technologies for providing a split memory pool for full rack connectivity |
US15/859,368 Active 2040-02-21 US11422867B2 (en) | 2017-08-30 | 2017-12-30 | Technologies for composing a managed node based on telemetry data |
US15/916,394 Abandoned US20190065415A1 (en) | 2017-08-30 | 2018-03-09 | Technologies for local disaggregation of memory |
US15/933,855 Active 2039-05-07 US11030017B2 (en) | 2017-08-30 | 2018-03-23 | Technologies for efficiently booting sleds in a disaggregated architecture |
US15/942,108 Abandoned US20190067848A1 (en) | 2017-08-30 | 2018-03-30 | Memory mezzanine connectors |
US15/942,101 Active 2040-07-19 US11416309B2 (en) | 2017-08-30 | 2018-03-30 | Technologies for dynamic accelerator selection |
US16/023,803 Active 2038-07-17 US10888016B2 (en) | 2017-08-30 | 2018-06-29 | Technologies for automated servicing of sleds of a data center |
US16/022,962 Active 2038-12-31 US11055149B2 (en) | 2017-08-30 | 2018-06-29 | Technologies for providing workload-based sled position adjustment |
US16/642,520 Abandoned US20200192710A1 (en) | 2017-08-30 | 2018-08-30 | Technologies for enabling and metering the utilization of features on demand |
US16/642,523 Abandoned US20200257566A1 (en) | 2017-08-30 | 2018-08-30 | Technologies for managing disaggregated resources in a data center |
Family Applications Before (23)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/850,325 Abandoned US20190068466A1 (en) | 2017-08-30 | 2017-12-21 | Technologies for auto-discovery of fault domains |
US15/858,305 Abandoned US20190068464A1 (en) | 2017-08-30 | 2017-12-29 | Technologies for machine learning schemes in dynamic switching between adaptive connections and connection optimization |
US15/858,549 Abandoned US20190065401A1 (en) | 2017-08-30 | 2017-12-29 | Technologies for providing efficient memory access on an accelerator sled |
US15/858,557 Abandoned US20190065083A1 (en) | 2017-08-30 | 2017-12-29 | Technologies for providing efficient access to pooled accelerator devices |
US15/858,316 Abandoned US20190065260A1 (en) | 2017-08-30 | 2017-12-29 | Technologies for kernel scale-out |
US15/858,288 Abandoned US20190068521A1 (en) | 2017-08-30 | 2017-12-29 | Technologies for automated network congestion management |
US15/858,286 Abandoned US20190068523A1 (en) | 2017-08-30 | 2017-12-29 | Technologies for allocating resources across data centers |
US15/858,748 Active 2039-08-11 US11614979B2 (en) | 2017-08-30 | 2017-12-29 | Technologies for configuration-free platform firmware |
US15/858,542 Active 2039-10-02 US11748172B2 (en) | 2017-08-30 | 2017-12-29 | Technologies for providing efficient pooling for a hyper converged infrastructure |
US15/859,394 Active 2040-04-27 US11467885B2 (en) | 2017-08-30 | 2017-12-30 | Technologies for managing a latency-efficient pipeline through a network interface controller |
US15/859,388 Abandoned US20190065231A1 (en) | 2017-08-30 | 2017-12-30 | Technologies for migrating virtual machines |
US15/859,363 Abandoned US20190068444A1 (en) | 2017-08-30 | 2017-12-30 | Technologies for providing efficient transfer of results from accelerator devices in a disaggregated architecture |
US15/859,366 Abandoned US20190065261A1 (en) | 2017-08-30 | 2017-12-30 | Technologies for in-processor workload phase detection |
US15/859,385 Abandoned US20190065281A1 (en) | 2017-08-30 | 2017-12-30 | Technologies for auto-migration in accelerated architectures |
US15/859,364 Active 2039-07-30 US11392425B2 (en) | 2017-08-30 | 2017-12-30 | Technologies for providing a split memory pool for full rack connectivity |
US15/859,368 Active 2040-02-21 US11422867B2 (en) | 2017-08-30 | 2017-12-30 | Technologies for composing a managed node based on telemetry data |
US15/916,394 Abandoned US20190065415A1 (en) | 2017-08-30 | 2018-03-09 | Technologies for local disaggregation of memory |
US15/933,855 Active 2039-05-07 US11030017B2 (en) | 2017-08-30 | 2018-03-23 | Technologies for efficiently booting sleds in a disaggregated architecture |
US15/942,108 Abandoned US20190067848A1 (en) | 2017-08-30 | 2018-03-30 | Memory mezzanine connectors |
US15/942,101 Active 2040-07-19 US11416309B2 (en) | 2017-08-30 | 2018-03-30 | Technologies for dynamic accelerator selection |
US16/023,803 Active 2038-07-17 US10888016B2 (en) | 2017-08-30 | 2018-06-29 | Technologies for automated servicing of sleds of a data center |
US16/022,962 Active 2038-12-31 US11055149B2 (en) | 2017-08-30 | 2018-06-29 | Technologies for providing workload-based sled position adjustment |
US16/642,520 Abandoned US20200192710A1 (en) | 2017-08-30 | 2018-08-30 | Technologies for enabling and metering the utilization of features on demand |
Country Status (5)
Country | Link |
---|---|
US (24) | US20190068466A1 (en) |
EP (1) | EP3676708A4 (en) |
CN (8) | CN109426316A (en) |
DE (1) | DE112018004798T5 (en) |
WO (5) | WO2019045930A1 (en) |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11122132B2 (en) * | 2017-01-30 | 2021-09-14 | Centurylink Intellectual Property Llc | Application programming interface (API) to provide network metrics and network resource control to users |
US20210326299A1 (en) * | 2019-04-02 | 2021-10-21 | Intel Corporation | Edge component computing system having integrated faas call handling capability |
US20210389960A1 (en) * | 2020-06-11 | 2021-12-16 | Hewlett Packard Enterprise Development Lp | Remote resource configuration mechanism |
US11295135B2 (en) * | 2020-05-29 | 2022-04-05 | Corning Research & Development Corporation | Asset tracking of communication equipment via mixed reality based labeling |
US11315013B2 (en) * | 2018-04-23 | 2022-04-26 | EMC IP Holding Company LLC | Implementing parameter server in networking infrastructure for high-performance computing |
US11360789B2 (en) | 2020-07-06 | 2022-06-14 | International Business Machines Corporation | Configuration of hardware devices |
US20220188170A1 (en) * | 2020-12-15 | 2022-06-16 | Google Llc | Multi-Tenant Control Plane Management on Computing Platform |
US20220197681A1 (en) * | 2020-12-22 | 2022-06-23 | Reliance Jio Infocomm Usa, Inc. | Intelligent data plane acceleration by offloading to distributed smart network interfaces |
US11374808B2 (en) * | 2020-05-29 | 2022-06-28 | Corning Research & Development Corporation | Automated logging of patching operations via mixed reality based labeling |
US11405451B2 (en) * | 2020-09-30 | 2022-08-02 | Jpmorgan Chase Bank, N.A. | Data pipeline architecture |
US11416294B1 (en) * | 2019-04-17 | 2022-08-16 | Juniper Networks, Inc. | Task processing for management of data center resources |
US11470015B1 (en) * | 2021-03-22 | 2022-10-11 | Amazon Technologies, Inc. | Allocating workloads to heterogenous worker fleets |
US20230004786A1 (en) * | 2021-06-30 | 2023-01-05 | Micron Technology, Inc. | Artificial neural networks on a deep learning accelerator |
US20230093868A1 (en) * | 2021-09-22 | 2023-03-30 | Ridgeline, Inc. | Mechanism for real-time identity resolution in a distributed system |
US11630696B2 (en) | 2020-03-30 | 2023-04-18 | International Business Machines Corporation | Messaging for a hardware acceleration system |
US11636503B2 (en) * | 2020-02-26 | 2023-04-25 | At&T Intellectual Property I, L.P. | System and method for offering network slice as a service |
US20230239209A1 (en) * | 2022-01-21 | 2023-07-27 | International Business Machines Corporation | Optimizing container executions with network-attached hardware components of a composable disaggregated infrastructure |
US11736559B2 (en) | 2020-08-05 | 2023-08-22 | Avesha, Inc. | Providing a set of application slices within an application environment |
US11805073B2 (en) | 2021-05-03 | 2023-10-31 | Avesha, Inc. | Controlling placement of workloads of an application within an application environment |
Families Citing this family (124)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9948724B2 (en) * | 2015-09-10 | 2018-04-17 | International Business Machines Corporation | Handling multi-pipe connections |
US10091904B2 (en) * | 2016-07-22 | 2018-10-02 | Intel Corporation | Storage sled for data center |
US20180150256A1 (en) | 2016-11-29 | 2018-05-31 | Intel Corporation | Technologies for data deduplication in disaggregated architectures |
CN109891908A (en) * | 2016-11-29 | 2019-06-14 | 英特尔公司 | Technologies for millimeter wave rack interconnects
US10346315B2 (en) | 2017-05-26 | 2019-07-09 | Oracle International Corporation | Latchless, non-blocking dynamically resizable segmented hash index |
US10574580B2 (en) | 2017-07-04 | 2020-02-25 | Vmware, Inc. | Network resource management for hyper-converged infrastructures |
US11119835B2 (en) | 2017-08-30 | 2021-09-14 | Intel Corporation | Technologies for providing efficient reprovisioning in an accelerator device |
US11106427B2 (en) * | 2017-09-29 | 2021-08-31 | Intel Corporation | Memory filtering for disaggregate memory architectures |
US11650598B2 (en) * | 2017-12-30 | 2023-05-16 | Telescent Inc. | Automated physical network management system utilizing high resolution RFID, optical scans and mobile robotic actuator |
US10511690B1 (en) | 2018-02-20 | 2019-12-17 | Intuit, Inc. | Method and apparatus for predicting experience degradation events in microservice-based applications |
US20210056426A1 (en) * | 2018-03-26 | 2021-02-25 | Hewlett-Packard Development Company, L.P. | Generation of kernels based on physical states |
US10761726B2 (en) * | 2018-04-16 | 2020-09-01 | VMware, Inc. | Resource fairness control in distributed storage systems using congestion data
US10599553B2 (en) * | 2018-04-27 | 2020-03-24 | International Business Machines Corporation | Managing cloud-based hardware accelerators |
US11221886B2 (en) | 2018-05-17 | 2022-01-11 | International Business Machines Corporation | Optimizing dynamical resource allocations for cache-friendly workloads in disaggregated data centers |
US11330042B2 (en) * | 2018-05-17 | 2022-05-10 | International Business Machines Corporation | Optimizing dynamic resource allocations for storage-dependent workloads in disaggregated data centers |
US10893096B2 (en) | 2018-05-17 | 2021-01-12 | International Business Machines Corporation | Optimizing dynamical resource allocations using a data heat map in disaggregated data centers |
US10841367B2 (en) | 2018-05-17 | 2020-11-17 | International Business Machines Corporation | Optimizing dynamical resource allocations for cache-dependent workloads in disaggregated data centers |
US10601903B2 (en) | 2018-05-17 | 2020-03-24 | International Business Machines Corporation | Optimizing dynamical resource allocations based on locality of resources in disaggregated data centers |
US10977085B2 (en) | 2018-05-17 | 2021-04-13 | International Business Machines Corporation | Optimizing dynamical resource allocations in disaggregated data centers |
US10936374B2 (en) | 2018-05-17 | 2021-03-02 | International Business Machines Corporation | Optimizing dynamic resource allocations for memory-dependent workloads in disaggregated data centers |
US10795713B2 (en) | 2018-05-25 | 2020-10-06 | Vmware, Inc. | Live migration of a virtualized compute accelerator workload |
US10684887B2 (en) * | 2018-05-25 | 2020-06-16 | Vmware, Inc. | Live migration of a virtualized compute accelerator workload |
US11042406B2 (en) | 2018-06-05 | 2021-06-22 | Intel Corporation | Technologies for providing predictive thermal management |
US11431648B2 (en) | 2018-06-11 | 2022-08-30 | Intel Corporation | Technologies for providing adaptive utilization of different interconnects for workloads |
US20190384376A1 (en) * | 2018-06-18 | 2019-12-19 | American Megatrends, Inc. | Intelligent allocation of scalable rack resources |
US11388835B1 (en) * | 2018-06-27 | 2022-07-12 | Amazon Technologies, Inc. | Placement of custom servers |
US11436113B2 (en) * | 2018-06-28 | 2022-09-06 | Twitter, Inc. | Method and system for maintaining storage device failure tolerance in a composable infrastructure |
US11968548B1 (en) * | 2018-07-10 | 2024-04-23 | Cable Television Laboratories, Inc. | Systems and methods for reducing communication network performance degradation using in-band telemetry data |
US12034593B1 (en) | 2018-07-10 | 2024-07-09 | Cable Television Laboratories, Inc. | Systems and methods for advanced core network controls |
US11347678B2 (en) * | 2018-08-06 | 2022-05-31 | Oracle International Corporation | One-sided reliable remote direct memory operations |
US10977193B2 (en) | 2018-08-17 | 2021-04-13 | Oracle International Corporation | Remote direct memory operations (RDMOs) for transactional processing systems |
US11188348B2 (en) * | 2018-08-31 | 2021-11-30 | International Business Machines Corporation | Hybrid computing device selection analysis |
US11012423B2 (en) | 2018-09-25 | 2021-05-18 | International Business Machines Corporation | Maximizing resource utilization through efficient component communication in disaggregated datacenters |
US11163713B2 (en) | 2018-09-25 | 2021-11-02 | International Business Machines Corporation | Efficient component communication through protocol switching in disaggregated datacenters |
US11650849B2 (en) * | 2018-09-25 | 2023-05-16 | International Business Machines Corporation | Efficient component communication through accelerator switching in disaggregated datacenters |
US11182322B2 (en) | 2018-09-25 | 2021-11-23 | International Business Machines Corporation | Efficient component communication through resource rewiring in disaggregated datacenters |
US11138044B2 (en) * | 2018-09-26 | 2021-10-05 | Micron Technology, Inc. | Memory pooling between selected memory resources |
US10901893B2 (en) * | 2018-09-28 | 2021-01-26 | International Business Machines Corporation | Memory bandwidth management for performance-sensitive IaaS |
EP3861489A4 (en) * | 2018-10-03 | 2022-07-06 | Rigetti & Co, LLC | Parcelled quantum resources |
US10962389B2 (en) * | 2018-10-03 | 2021-03-30 | International Business Machines Corporation | Machine status detection |
US10768990B2 (en) * | 2018-11-01 | 2020-09-08 | International Business Machines Corporation | Protecting an application by autonomously limiting processing to a determined hardware capacity |
US11055186B2 (en) * | 2018-11-27 | 2021-07-06 | Red Hat, Inc. | Managing related devices for virtual machines using robust passthrough device enumeration |
US10831975B2 (en) | 2018-11-29 | 2020-11-10 | International Business Machines Corporation | Debug boundaries in a hardware accelerator |
US10901918B2 (en) * | 2018-11-29 | 2021-01-26 | International Business Machines Corporation | Constructing flexibly-secure systems in a disaggregated environment |
US11275622B2 (en) * | 2018-11-29 | 2022-03-15 | International Business Machines Corporation | Utilizing accelerators to accelerate data analytic workloads in disaggregated systems |
US11048318B2 (en) * | 2018-12-06 | 2021-06-29 | Intel Corporation | Reducing microprocessor power with minimal performance impact by dynamically adapting runtime operating configurations using machine learning |
US10970107B2 (en) * | 2018-12-21 | 2021-04-06 | Servicenow, Inc. | Discovery of hyper-converged infrastructure |
US10771344B2 (en) * | 2018-12-21 | 2020-09-08 | Servicenow, Inc. | Discovery of hyper-converged infrastructure devices |
US11269593B2 (en) * | 2019-01-23 | 2022-03-08 | Sap Se | Global number range generation |
US11271804B2 (en) * | 2019-01-25 | 2022-03-08 | Dell Products L.P. | Hyper-converged infrastructure component expansion/replacement system |
US11429440B2 (en) * | 2019-02-04 | 2022-08-30 | Hewlett Packard Enterprise Development Lp | Intelligent orchestration of disaggregated applications based on class of service |
US10817221B2 (en) * | 2019-02-12 | 2020-10-27 | International Business Machines Corporation | Storage device with mandatory atomic-only access |
US10949101B2 (en) * | 2019-02-25 | 2021-03-16 | Micron Technology, Inc. | Storage device operation orchestration |
US11443018B2 (en) * | 2019-03-12 | 2022-09-13 | Xilinx, Inc. | Locking execution of cores to licensed programmable devices in a data center |
US11294992B2 (en) * | 2019-03-12 | 2022-04-05 | Xilinx, Inc. | Locking execution of cores to licensed programmable devices in a data center |
US11531869B1 (en) * | 2019-03-28 | 2022-12-20 | Xilinx, Inc. | Neural-network pooling |
JP7176455B2 (en) * | 2019-03-28 | 2022-11-22 | オムロン株式会社 | Monitoring system, setting device and monitoring method |
US11243817B2 (en) * | 2019-03-29 | 2022-02-08 | Intel Corporation | Technologies for data migration between edge accelerators hosted on different edge locations |
US11089137B2 (en) * | 2019-04-02 | 2021-08-10 | International Business Machines Corporation | Dynamic data transmission |
WO2020206370A1 (en) | 2019-04-05 | 2020-10-08 | Cisco Technology, Inc. | Discovering trustworthy devices using attestation and mutual attestation |
US11263122B2 (en) * | 2019-04-09 | 2022-03-01 | Vmware, Inc. | Implementing fine grain data coherency of a shared memory region |
US11003479B2 (en) * | 2019-04-29 | 2021-05-11 | Intel Corporation | Device, system and method to communicate a kernel binary via a network |
CN110053650B (en) * | 2019-05-06 | 2022-06-07 | 湖南中车时代通信信号有限公司 | Automatic train operation system, automatic train operation system architecture and module management method of automatic train operation system |
CN110203600A (en) * | 2019-06-06 | 2019-09-06 | 北京卫星环境工程研究所 | Automatic storage and radio frequency identification system suitable for spacecraft materials
US11481117B2 (en) * | 2019-06-17 | 2022-10-25 | Hewlett Packard Enterprise Development Lp | Storage volume clustering based on workload fingerprints |
US10949362B2 (en) * | 2019-06-28 | 2021-03-16 | Intel Corporation | Technologies for facilitating remote memory requests in accelerator devices |
US20200409748A1 (en) * | 2019-06-28 | 2020-12-31 | Intel Corporation | Technologies for managing accelerator resources |
US10877817B1 (en) * | 2019-06-28 | 2020-12-29 | Intel Corporation | Technologies for providing inter-kernel application programming interfaces for an accelerated architecture |
EP4007963A1 (en) | 2019-08-02 | 2022-06-08 | JPMorgan Chase Bank, N.A. | Systems and methods for provisioning a new secondary identityiq instance to an existing identityiq instance |
US11082411B2 (en) * | 2019-08-06 | 2021-08-03 | Advanced New Technologies Co., Ltd. | RDMA-based data transmission method, network interface card, server and medium |
US10925166B1 (en) * | 2019-08-07 | 2021-02-16 | Quanta Computer Inc. | Protection fixture |
EP4019206A4 (en) * | 2019-08-22 | 2022-08-17 | NEC Corporation | Robot control system, robot control method, and recording medium |
US10999403B2 (en) | 2019-09-27 | 2021-05-04 | Red Hat, Inc. | Composable infrastructure provisioning and balancing |
CN110650609B (en) * | 2019-10-10 | 2020-12-01 | 珠海与非科技有限公司 | Distributed storage cloud server
WO2021072236A2 (en) * | 2019-10-10 | 2021-04-15 | Channel One Holdings Inc. | Methods and systems for time-bounding execution of computing workflows |
US11200046B2 (en) * | 2019-10-22 | 2021-12-14 | Lenovo Enterprise Solutions (Singapore) Pte. Ltd. | Managing composable compute system infrastructure with support for decoupled firmware updates |
US11080051B2 (en) * | 2019-10-29 | 2021-08-03 | Nvidia Corporation | Techniques for efficiently transferring data to a processor |
DE102020127704A1 (en) | 2019-10-29 | 2021-04-29 | Nvidia Corporation | Techniques for efficient transfer of data to a processor
CN112749121A (en) * | 2019-10-31 | 2021-05-04 | 中兴通讯股份有限公司 | Multi-chip interconnection system based on PCIE bus |
US11342004B2 (en) * | 2019-11-07 | 2022-05-24 | Quantum Corporation | System and method for rapid replacement of robotic media mover in automated media library |
US10747281B1 (en) * | 2019-11-19 | 2020-08-18 | International Business Machines Corporation | Mobile thermal balancing of data centers |
US11782810B2 (en) * | 2019-11-22 | 2023-10-10 | Dell Products, L.P. | Systems and methods for automated field replacement component configuration |
US11263105B2 (en) * | 2019-11-26 | 2022-03-01 | Lucid Software, Inc. | Visualization tool for components within a cloud infrastructure |
US11861219B2 (en) | 2019-12-12 | 2024-01-02 | Intel Corporation | Buffer to reduce write amplification of misaligned write operations |
US11789878B2 (en) | 2019-12-19 | 2023-10-17 | Intel Corporation | Adaptive fabric allocation for local and remote emerging memories based prediction schemes |
US11321259B2 (en) * | 2020-02-14 | 2022-05-03 | Sony Interactive Entertainment Inc. | Network architecture providing high speed storage access through a PCI express fabric between a compute node and a storage server |
US11122123B1 (en) | 2020-03-09 | 2021-09-14 | International Business Machines Corporation | Method for a network of storage devices |
US11121941B1 (en) | 2020-03-12 | 2021-09-14 | Cisco Technology, Inc. | Monitoring communications to identify performance degradation |
US20210304025A1 (en) * | 2020-03-24 | 2021-09-30 | Facebook, Inc. | Dynamic quality of service management for deep learning training communication |
US11115497B2 (en) * | 2020-03-25 | 2021-09-07 | Intel Corporation | Technologies for providing advanced resource management in a disaggregated environment |
US11509079B2 (en) * | 2020-04-06 | 2022-11-22 | Hewlett Packard Enterprise Development Lp | Blind mate connections with different sets of datums |
US12001826B2 (en) | 2020-04-24 | 2024-06-04 | Intel Corporation | Device firmware update techniques |
WO2021217578A1 (en) * | 2020-04-30 | 2021-11-04 | Intel Corporation | Compilation for function as service implementations distributed across server arrays |
US11177618B1 (en) * | 2020-05-14 | 2021-11-16 | Dell Products L.P. | Server blind-mate power and signal connector dock |
US11687629B2 (en) * | 2020-06-12 | 2023-06-27 | Baidu Usa Llc | Method for data protection in a data processing cluster with authentication |
CN111824668B (en) * | 2020-07-08 | 2022-07-19 | 北京极智嘉科技股份有限公司 | Robot and robot-based container storage and retrieval method |
US11681557B2 (en) * | 2020-07-31 | 2023-06-20 | International Business Machines Corporation | Systems and methods for managing resources in a hyperconverged infrastructure cluster |
US20220092481A1 (en) * | 2020-09-18 | 2022-03-24 | Dell Products L.P. | Integration optimization using machine learning algorithms |
US11570243B2 (en) | 2020-09-22 | 2023-01-31 | Commvault Systems, Inc. | Decommissioning, re-commissioning, and commissioning new metadata nodes in a working distributed data storage system |
US11314687B2 (en) * | 2020-09-24 | 2022-04-26 | Commvault Systems, Inc. | Container data mover for migrating data between distributed data storage systems integrated with application orchestrators |
US20210011787A1 (en) * | 2020-09-25 | 2021-01-14 | Francesc Guim Bernat | Technologies for scaling inter-kernel technologies for accelerator device kernels |
US11379402B2 (en) | 2020-10-20 | 2022-07-05 | Micron Technology, Inc. | Secondary device detection using a synchronous interface |
US20220129601A1 (en) * | 2020-10-26 | 2022-04-28 | Oracle International Corporation | Techniques for generating a configuration for electrically isolating fault domains in a data center |
US11803493B2 (en) * | 2020-11-30 | 2023-10-31 | Dell Products L.P. | Systems and methods for management controller co-processor host to variable subsystem proxy |
US20210092069A1 (en) * | 2020-12-10 | 2021-03-25 | Intel Corporation | Accelerating multi-node performance of machine learning workloads |
US11662934B2 (en) * | 2020-12-15 | 2023-05-30 | International Business Machines Corporation | Migration of a logical partition between mutually non-coherent host data processing systems |
US11994997B2 (en) * | 2020-12-23 | 2024-05-28 | Intel Corporation | Memory controller to manage quality of service enforcement and migration between local and pooled memory |
US11445028B2 (en) | 2020-12-30 | 2022-09-13 | Dell Products L.P. | System and method for providing secure console access with multiple smart NICs using NC-SI and SPDM
US11803216B2 (en) | 2021-02-03 | 2023-10-31 | Hewlett Packard Enterprise Development Lp | Contiguous plane infrastructure for computing systems |
US11785735B2 (en) * | 2021-02-19 | 2023-10-10 | CyberSecure IPS, LLC | Intelligent cable patching of racks to facilitate cable installation |
US11503743B2 (en) * | 2021-03-12 | 2022-11-15 | Baidu Usa Llc | High availability fluid connector for liquid cooling |
US20220321403A1 (en) * | 2021-04-02 | 2022-10-06 | Nokia Solutions And Networks Oy | Programmable network segmentation for multi-tenant fpgas in cloud infrastructures |
US20220342688A1 (en) * | 2021-04-26 | 2022-10-27 | Dell Products L.P. | Systems and methods for migration of virtual computing resources using smart network interface controller acceleration |
US11714775B2 (en) | 2021-05-10 | 2023-08-01 | Zenlayer Innovation LLC | Peripheral component interconnect (PCI) hosting device |
US12045643B1 (en) * | 2021-06-03 | 2024-07-23 | Amazon Technologies, Inc. | Power aware load placement for sub-lineups |
US20210328933A1 (en) * | 2021-06-25 | 2021-10-21 | Akhilesh Thyagaturu | Network flow-based hardware allocation |
US20220413987A1 (en) * | 2021-06-28 | 2022-12-29 | Dell Products L.P. | System and method for accelerator-centric workload placement |
IT202100017564A1 (en) * | 2021-07-02 | 2023-01-02 | Fastweb S.p.A. | Robotic apparatus for carrying out maintenance operations on an electronic component
EP4142442B1 (en) * | 2021-08-30 | 2024-04-17 | Ovh | Cooling assembly for a data center rack and method for assembling a rack system |
US20230115664A1 (en) * | 2021-10-08 | 2023-04-13 | Seagate Technology Llc | Resource management for disaggregated architectures |
US20230121562A1 (en) * | 2021-10-15 | 2023-04-20 | Dell Products, L.P. | Telemetry of artificial intelligence (ai) and/or machine learning (ml) workloads |
US12066964B1 (en) * | 2021-12-10 | 2024-08-20 | Amazon Technologies, Inc. | Highly available modular hardware acceleration device |
US11921582B2 (en) * | 2022-04-29 | 2024-03-05 | Microsoft Technology Licensing, Llc | Out of band method to change boot firmware configuration |
CN115052055B (en) * | 2022-08-17 | 2022-11-11 | 北京左江科技股份有限公司 | FPGA-based network packet checksum offloading method
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150186319A1 (en) * | 2013-12-26 | 2015-07-02 | Dirk F. Blevins | Computer architecture to provide flexibility and/or scalability |
Family Cites Families (191)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2704350B1 (en) * | 1993-04-22 | 1995-06-02 | Bull Sa | Physical structure of a mass memory subsystem. |
JP3320344B2 (en) * | 1997-09-19 | 2002-09-03 | 富士通株式会社 | Cartridge transfer robot for library device and library device |
US6158000A (en) * | 1998-09-18 | 2000-12-05 | Compaq Computer Corporation | Shared memory initialization method for system having multiple processor capability |
US6230265B1 (en) * | 1998-09-30 | 2001-05-08 | International Business Machines Corporation | Method and system for configuring resources in a data processing system utilizing system power control information |
US7287096B2 (en) * | 2001-05-19 | 2007-10-23 | Texas Instruments Incorporated | Method for robust, flexible reconfiguration of transceive parameters for communication systems |
US7536715B2 (en) * | 2001-05-25 | 2009-05-19 | Secure Computing Corporation | Distributed firewall system and method |
US6901580B2 (en) * | 2001-06-22 | 2005-05-31 | Intel Corporation | Configuration parameter sequencing and sequencer |
US7415723B2 (en) * | 2002-06-11 | 2008-08-19 | Pandya Ashish A | Distributed network security system and a hardware processor therefor |
US7408876B1 (en) * | 2002-07-02 | 2008-08-05 | Extreme Networks | Method and apparatus for providing quality of service across a switched backplane between egress queue managers |
US20040073834A1 (en) * | 2002-10-10 | 2004-04-15 | Kermaani Kaamel M. | System and method for expanding the management redundancy of computer systems |
US7386889B2 (en) * | 2002-11-18 | 2008-06-10 | Trusted Network Technologies, Inc. | System and method for intrusion prevention in a communications network |
US7031154B2 (en) * | 2003-04-30 | 2006-04-18 | Hewlett-Packard Development Company, L.P. | Louvered rack |
US7238104B1 (en) * | 2003-05-02 | 2007-07-03 | Foundry Networks, Inc. | System and method for venting air from a computer casing |
US7146511B2 (en) * | 2003-10-07 | 2006-12-05 | Hewlett-Packard Development Company, L.P. | Rack equipment application performance modification system and method |
US20050132084A1 (en) * | 2003-12-10 | 2005-06-16 | Heung-For Cheng | Method and apparatus for providing server local SMBIOS table through out-of-band communication |
US7809836B2 (en) | 2004-04-07 | 2010-10-05 | Intel Corporation | System and method for automating bios firmware image recovery using a non-host processor and platform policy to select a donor system |
US7552217B2 (en) | 2004-04-07 | 2009-06-23 | Intel Corporation | System and method for Automatic firmware image recovery for server management operational code |
US7421535B2 (en) * | 2004-05-10 | 2008-09-02 | International Business Machines Corporation | Method for demoting tracks from cache |
JP4335760B2 (en) * | 2004-07-08 | 2009-09-30 | 富士通株式会社 | Rack mount storage unit and rack mount disk array device |
US7685319B2 (en) * | 2004-09-28 | 2010-03-23 | Cray Canada Corporation | Low latency communication via memory windows |
US7454550B2 (en) * | 2005-01-05 | 2008-11-18 | Xtremedata, Inc. | Systems and methods for providing co-processors to computing systems |
US20110016214A1 (en) * | 2009-07-15 | 2011-01-20 | Cluster Resources, Inc. | System and method of brokering cloud computing resources |
US7634584B2 (en) * | 2005-04-27 | 2009-12-15 | Solarflare Communications, Inc. | Packet validation in virtual network interface architecture |
US9135074B2 (en) * | 2005-05-19 | 2015-09-15 | Hewlett-Packard Development Company, L.P. | Evaluating performance of workload manager based on QoS to representative workload and usage efficiency of shared resource for plurality of minCPU and maxCPU allocation values |
US8799980B2 (en) * | 2005-11-16 | 2014-08-05 | Juniper Networks, Inc. | Enforcement of network device configuration policies within a computing environment |
TW200720941A (en) * | 2005-11-18 | 2007-06-01 | Inventec Corp | Host computer memory configuration data remote access method and system |
US7493419B2 (en) * | 2005-12-13 | 2009-02-17 | International Business Machines Corporation | Input/output workload fingerprinting for input/output schedulers |
US8713551B2 (en) * | 2006-01-03 | 2014-04-29 | International Business Machines Corporation | Apparatus, system, and method for non-interruptively updating firmware on a redundant hardware controller |
US20070271560A1 (en) * | 2006-05-18 | 2007-11-22 | Microsoft Corporation | Deploying virtual machine to host based on workload characterizations |
US7472211B2 (en) * | 2006-07-28 | 2008-12-30 | International Business Machines Corporation | Blade server switch module using out-of-band signaling to detect the physical location of an active drive enclosure device |
US8098658B1 (en) * | 2006-08-01 | 2012-01-17 | Hewlett-Packard Development Company, L.P. | Power-based networking resource allocation
US8010565B2 (en) * | 2006-10-16 | 2011-08-30 | Dell Products L.P. | Enterprise rack management method, apparatus and media |
US8068351B2 (en) * | 2006-11-10 | 2011-11-29 | Oracle America, Inc. | Cable management system |
US20090089564A1 (en) * | 2006-12-06 | 2009-04-02 | Brickell Ernie F | Protecting a Branch Instruction from Side Channel Vulnerabilities |
US8112524B2 (en) * | 2007-01-15 | 2012-02-07 | International Business Machines Corporation | Recommending moving resources in a partitioned computer |
US7738900B1 (en) | 2007-02-15 | 2010-06-15 | Nextel Communications Inc. | Systems and methods of group distribution for latency sensitive applications |
US8140719B2 (en) * | 2007-06-21 | 2012-03-20 | Sea Micro, Inc. | Dis-aggregated and distributed data-center architecture using a direct interconnect fabric |
CN101431432A (en) * | 2007-11-06 | 2009-05-13 | 联想(北京)有限公司 | Blade server |
US8078865B2 (en) * | 2007-11-20 | 2011-12-13 | Dell Products L.P. | Systems and methods for configuring out-of-band bios settings |
US8214467B2 (en) * | 2007-12-14 | 2012-07-03 | International Business Machines Corporation | Migrating port-specific operating parameters during blade server failover |
EP2223211A4 (en) * | 2007-12-17 | 2011-08-17 | Nokia Corp | Accessory configuration and management |
US8645965B2 (en) * | 2007-12-31 | 2014-02-04 | Intel Corporation | Supporting metered clients with manycore through time-limited partitioning |
US8225159B1 (en) * | 2008-04-25 | 2012-07-17 | Netapp, Inc. | Method and system for implementing power savings features on storage devices within a storage subsystem |
US8166263B2 (en) * | 2008-07-03 | 2012-04-24 | Commvault Systems, Inc. | Continuous data protection over intermittent connections, such as continuous data backup for laptops or wireless devices |
US20100125695A1 (en) * | 2008-11-15 | 2010-05-20 | Nanostar Corporation | Non-volatile memory storage system |
US20100091458A1 (en) * | 2008-10-15 | 2010-04-15 | Mosier Jr David W | Electronics chassis with angled card cage |
US8954977B2 (en) * | 2008-12-09 | 2015-02-10 | Intel Corporation | Software-based thread remapping for power savings |
US8798045B1 (en) * | 2008-12-29 | 2014-08-05 | Juniper Networks, Inc. | Control plane architecture for switch fabrics |
US20100229175A1 (en) * | 2009-03-05 | 2010-09-09 | International Business Machines Corporation | Moving Resources In a Computing Environment Having Multiple Logically-Partitioned Computer Systems |
WO2010108165A1 (en) * | 2009-03-20 | 2010-09-23 | The Trustees Of Princeton University | Systems and methods for network acceleration and efficient indexing for caching file systems |
US8321870B2 (en) * | 2009-08-14 | 2012-11-27 | General Electric Company | Method and system for distributed computation having sub-task processing and sub-solution redistribution |
US20110055838A1 (en) * | 2009-08-28 | 2011-03-03 | Moyes William A | Optimized thread scheduling via hardware performance monitoring |
WO2011045863A1 (en) * | 2009-10-16 | 2011-04-21 | 富士通株式会社 | Electronic device and casing for electronic device |
CN101706802B (en) * | 2009-11-24 | 2013-06-05 | 成都市华为赛门铁克科技有限公司 | Method, device and server for writing, modifying and restoring data
US9129052B2 (en) * | 2009-12-03 | 2015-09-08 | International Business Machines Corporation | Metering resource usage in a cloud computing environment |
CN102135923A (en) * | 2010-01-21 | 2011-07-27 | 鸿富锦精密工业(深圳)有限公司 | Method for integrating operating system into BIOS (Basic Input/Output System) chip and method for starting operating system |
US8638553B1 (en) * | 2010-03-31 | 2014-01-28 | Amazon Technologies, Inc. | Rack system cooling with inclined computing devices |
US8601297B1 (en) * | 2010-06-18 | 2013-12-03 | Google Inc. | Systems and methods for energy proportional multiprocessor networks |
US8171142B2 (en) * | 2010-06-30 | 2012-05-01 | Vmware, Inc. | Data center inventory management using smart racks |
IT1401647B1 (en) * | 2010-07-09 | 2013-08-02 | Campatents B V | METHOD FOR MONITORING CHANGES OF CONFIGURATION OF A MONITORING DEVICE FOR AN AUTOMATIC MACHINE |
US8259450B2 (en) * | 2010-07-21 | 2012-09-04 | Birchbridge Incorporated | Mobile universal hardware platform |
US9428336B2 (en) * | 2010-07-28 | 2016-08-30 | Par Systems, Inc. | Robotic storage and retrieval systems |
US8824222B2 (en) * | 2010-08-13 | 2014-09-02 | Rambus Inc. | Fast-wake memory |
US8914805B2 (en) * | 2010-08-31 | 2014-12-16 | International Business Machines Corporation | Rescheduling workload in a hybrid computing environment |
US8489939B2 (en) * | 2010-10-25 | 2013-07-16 | At&T Intellectual Property I, L.P. | Dynamically allocating multitier applications based upon application requirements and performance and reliability of resources |
US9078251B2 (en) * | 2010-10-28 | 2015-07-07 | Lg Electronics Inc. | Method and apparatus for transceiving a data frame in a wireless LAN system |
US8838286B2 (en) * | 2010-11-04 | 2014-09-16 | Dell Products L.P. | Rack-level modular server and storage framework |
US8762668B2 (en) * | 2010-11-18 | 2014-06-24 | Hitachi, Ltd. | Multipath switching over multiple storage systems |
US9563479B2 (en) * | 2010-11-30 | 2017-02-07 | Red Hat, Inc. | Brokering optimized resource supply costs in host cloud-based network using predictive workloads |
CN102693181A (en) * | 2011-03-25 | 2012-09-26 | 鸿富锦精密工业(深圳)有限公司 | Firmware update-write system and method |
US9405550B2 (en) * | 2011-03-31 | 2016-08-02 | International Business Machines Corporation | Methods for the transmission of accelerator commands and corresponding command structure to remote hardware accelerator engines over an interconnect link |
US20120303322A1 (en) * | 2011-05-23 | 2012-11-29 | Rego Charles W | Incorporating memory and io cycle information into compute usage determinations |
US9515952B2 (en) * | 2011-07-01 | 2016-12-06 | Hewlett Packard Enterprise Development Lp | Method of and system for managing computing resources |
US9317336B2 (en) * | 2011-07-27 | 2016-04-19 | Alcatel Lucent | Method and apparatus for assignment of virtual resources within a cloud environment |
US8713257B2 (en) * | 2011-08-26 | 2014-04-29 | Lsi Corporation | Method and system for shared high speed cache in SAS switches |
US8755176B2 (en) * | 2011-10-12 | 2014-06-17 | Xyratex Technology Limited | Data storage system, an energy module and a method of providing back-up power to a data storage system |
US9237107B2 (en) * | 2011-11-15 | 2016-01-12 | New Jersey Institute Of Technology | Fair quantized congestion notification (FQCN) to mitigate transport control protocol (TCP) throughput collapse in data center networks |
WO2013077787A1 (en) * | 2011-11-23 | 2013-05-30 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and apparatus for distributed processing tasks |
DE102011119693A1 (en) * | 2011-11-29 | 2013-05-29 | Universität Heidelberg | System, computer-implemented method and computer program product for direct communication between hardware accelerators in a computer cluster |
US20130185729A1 (en) * | 2012-01-13 | 2013-07-18 | Rutgers, The State University Of New Jersey | Accelerating resource allocation in virtualized environments using workload classes and/or workload signatures |
US8732291B2 (en) | 2012-01-13 | 2014-05-20 | Accenture Global Services Limited | Performance interference model for managing consolidated workloads in QOS-aware clouds |
US9336061B2 (en) * | 2012-01-14 | 2016-05-10 | International Business Machines Corporation | Integrated metering of service usage for hybrid clouds |
US9367360B2 (en) * | 2012-01-30 | 2016-06-14 | Microsoft Technology Licensing, Llc | Deploying a hardware inventory as a cloud-computing stamp |
TWI462017B (en) * | 2012-02-24 | 2014-11-21 | Wistron Corp | Server deployment system and method for updating data |
GB2517097B (en) * | 2012-05-29 | 2020-05-27 | Intel Corp | Peer-to-peer interrupt signaling between devices coupled via interconnects |
CN102694863B (en) * | 2012-05-30 | 2015-08-26 | 电子科技大学 | Implementation method of a distributed storage system based on load adjustment and system fault tolerance
JP5983045B2 (en) * | 2012-05-30 | 2016-08-31 | 富士通株式会社 | Library device |
US8832268B1 (en) * | 2012-08-16 | 2014-09-09 | Amazon Technologies, Inc. | Notification and resolution of infrastructure issues |
CN106896762B (en) * | 2012-10-08 | 2020-07-10 | 费希尔-罗斯蒙特系统公司 | Configurable user display in a process control system |
US9202040B2 (en) | 2012-10-10 | 2015-12-01 | Globalfoundries Inc. | Chip authentication using multi-domain intrinsic identifiers |
US9047417B2 (en) * | 2012-10-29 | 2015-06-02 | Intel Corporation | NUMA aware network interface |
US20140185225A1 (en) * | 2012-12-28 | 2014-07-03 | Joel Wineland | Advanced Datacenter Designs |
US9367419B2 (en) | 2013-01-08 | 2016-06-14 | American Megatrends, Inc. | Implementation on baseboard management controller of single out-of-band communication access to multiple managed computer nodes |
TWI568335B (en) * | 2013-01-15 | 2017-01-21 | 英特爾股份有限公司 | A rack assembly structure |
US9201837B2 (en) * | 2013-03-13 | 2015-12-01 | Futurewei Technologies, Inc. | Disaggregated server architecture for data centers |
US9582010B2 (en) * | 2013-03-14 | 2017-02-28 | Rackspace Us, Inc. | System and method of rack management |
US9634958B2 (en) * | 2013-04-02 | 2017-04-25 | Amazon Technologies, Inc. | Burst capacity for user-defined pools |
US9104562B2 (en) * | 2013-04-05 | 2015-08-11 | International Business Machines Corporation | Enabling communication over cross-coupled links between independently managed compute and storage networks |
CN103281351B (en) * | 2013-04-19 | 2016-12-28 | 武汉方寸科技有限公司 | Cloud service platform for high-efficiency remote sensing data processing and analysis |
US20140317267A1 (en) * | 2013-04-22 | 2014-10-23 | Advanced Micro Devices, Inc. | High-Density Server Management Controller |
US20140337496A1 (en) * | 2013-05-13 | 2014-11-13 | Advanced Micro Devices, Inc. | Embedded Management Controller for High-Density Servers |
CN103294521B (en) * | 2013-05-30 | 2016-08-10 | 天津大学 | Method for reducing data center traffic load and energy consumption
US9436600B2 (en) * | 2013-06-11 | 2016-09-06 | Svic No. 28 New Technology Business Investment L.L.P. | Non-volatile memory storage for multi-channel memory system |
US20150033222A1 (en) | 2013-07-25 | 2015-01-29 | Cavium, Inc. | Network Interface Card with Virtual Switch and Traffic Flow Policy Enforcement |
US10069686B2 (en) * | 2013-09-05 | 2018-09-04 | Pismo Labs Technology Limited | Methods and systems for managing a device through a manual information input module |
US9306861B2 (en) * | 2013-09-26 | 2016-04-05 | Red Hat Israel, Ltd. | Automatic promiscuous forwarding for a bridge |
US9413713B2 (en) * | 2013-12-05 | 2016-08-09 | Cisco Technology, Inc. | Detection of a misconfigured duplicate IP address in a distributed data center network fabric |
US9705798B1 (en) * | 2014-01-07 | 2017-07-11 | Google Inc. | Systems and methods for routing data through data centers using an indirect generalized hypercube network |
US9444695B2 (en) * | 2014-01-30 | 2016-09-13 | Xerox Corporation | Methods and systems for scheduling a task |
KR101815148B1 (en) * | 2014-02-27 | 2018-01-04 | 인텔 코포레이션 | Techniques to allocate configurable computing resources |
EP3111592B1 (en) * | 2014-02-27 | 2021-04-28 | Intel Corporation | Workload optimization, scheduling, and placement for rack-scale architecture computing systems |
US9363926B1 (en) * | 2014-03-17 | 2016-06-07 | Amazon Technologies, Inc. | Modular mass storage system with staggered backplanes |
US9925492B2 (en) * | 2014-03-24 | 2018-03-27 | Mellanox Technologies, Ltd. | Remote transactional memory |
US10218645B2 (en) * | 2014-04-08 | 2019-02-26 | Mellanox Technologies, Ltd. | Low-latency processing in a network node |
US9503391B2 (en) * | 2014-04-11 | 2016-11-22 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and system for network function placement |
US9544233B2 (en) * | 2014-04-28 | 2017-01-10 | New Jersey Institute Of Technology | Congestion management for datacenter network |
US9081828B1 (en) * | 2014-04-30 | 2015-07-14 | Igneous Systems, Inc. | Network addressable storage controller with storage drive profile comparison |
TWI510933B (en) * | 2014-05-13 | 2015-12-01 | Acer Inc | Method for remotely accessing data and local apparatus using the method |
CN113157442A (en) * | 2014-05-22 | 2021-07-23 | 华为技术有限公司 | Node interconnection device, resource control node and server system |
US9477279B1 (en) * | 2014-06-02 | 2016-10-25 | Datadirect Networks, Inc. | Data storage system with active power management and method for monitoring and dynamical control of power sharing between devices in data storage system |
US9602351B2 (en) * | 2014-06-06 | 2017-03-21 | Microsoft Technology Licensing, Llc | Proactive handling of network faults |
US9684575B2 (en) * | 2014-06-23 | 2017-06-20 | Liqid Inc. | Failover handling in modular switched fabric for data storage systems |
US10382279B2 (en) * | 2014-06-30 | 2019-08-13 | Emc Corporation | Dynamically composed compute nodes comprising disaggregated components |
US10122605B2 (en) * | 2014-07-09 | 2018-11-06 | Cisco Technology, Inc. | Annotation of network activity through different phases of execution
US9892079B2 (en) * | 2014-07-25 | 2018-02-13 | Rajiv Ganth | Unified converged network, storage and compute system |
US9262144B1 (en) * | 2014-08-20 | 2016-02-16 | International Business Machines Corporation | Deploying virtual machine instances of a pattern to regions of a hierarchical tier using placement policies and constraints |
US9684531B2 (en) * | 2014-08-21 | 2017-06-20 | International Business Machines Corporation | Combining blade servers based on workload characteristics |
CN104168332A (en) * | 2014-09-01 | 2014-11-26 | 广东电网公司信息中心 | Load balancing and node state monitoring method in high-performance computing
US9858104B2 (en) * | 2014-09-24 | 2018-01-02 | Pluribus Networks, Inc. | Connecting fabrics via switch-to-switch tunneling transparent to network servers |
US10630767B1 (en) * | 2014-09-30 | 2020-04-21 | Amazon Technologies, Inc. | Hardware grouping based computing resource allocation |
US10061599B1 (en) * | 2014-10-16 | 2018-08-28 | American Megatrends, Inc. | Bus enumeration acceleration |
US9886306B2 (en) * | 2014-11-21 | 2018-02-06 | International Business Machines Corporation | Cross-platform scheduling with long-term fairness and platform-specific optimization |
US9098451B1 (en) * | 2014-11-21 | 2015-08-04 | Igneous Systems, Inc. | Shingled repair set for writing data |
WO2016090485A1 (en) * | 2014-12-09 | 2016-06-16 | Cirba Ip Inc. | System and method for routing computing workloads based on proximity |
US20160173600A1 (en) | 2014-12-15 | 2016-06-16 | Cisco Technology, Inc. | Programmable processing engine for a virtual interface controller |
US10057186B2 (en) * | 2015-01-09 | 2018-08-21 | International Business Machines Corporation | Service broker for computational offloading and improved resource utilization |
EP3046028B1 (en) * | 2015-01-15 | 2020-02-19 | Alcatel Lucent | Load-balancing and scaling of cloud resources by migrating a data session |
US10114692B2 (en) * | 2015-01-27 | 2018-10-30 | Quantum Corporation | High/low energy zone data storage |
US10234930B2 (en) * | 2015-02-13 | 2019-03-19 | Intel Corporation | Performing power management in a multicore processor |
JP2016167143A (en) * | 2015-03-09 | 2016-09-15 | 富士通株式会社 | Information processing system and control method of the same |
US9276900B1 (en) * | 2015-03-19 | 2016-03-01 | Igneous Systems, Inc. | Network bootstrapping for a distributed storage system |
US10848408B2 (en) * | 2015-03-26 | 2020-11-24 | Vmware, Inc. | Methods and apparatus to control computing resource utilization of monitoring agents |
US10606651B2 (en) * | 2015-04-17 | 2020-03-31 | Microsoft Technology Licensing, Llc | Free form expression accelerator with thread length-based thread assignment to clustered soft processor cores that share a functional circuit |
US10019388B2 (en) * | 2015-04-28 | 2018-07-10 | Liqid Inc. | Enhanced initialization for data storage assemblies |
US9910664B2 (en) * | 2015-05-04 | 2018-03-06 | American Megatrends, Inc. | System and method of online firmware update for baseboard management controller (BMC) devices |
US20160335209A1 (en) * | 2015-05-11 | 2016-11-17 | Quanta Computer Inc. | High-speed data transmission using pcie protocol |
US9696781B2 (en) * | 2015-05-28 | 2017-07-04 | Cisco Technology, Inc. | Automated power control for reducing power usage in communications networks |
US11203486B2 (en) * | 2015-06-02 | 2021-12-21 | Alert Innovation Inc. | Order fulfillment system |
US9792248B2 (en) * | 2015-06-02 | 2017-10-17 | Microsoft Technology Licensing, Llc | Fast read/write between networked computers via RDMA-based RPC requests |
US9606836B2 (en) * | 2015-06-09 | 2017-03-28 | Microsoft Technology Licensing, Llc | Independently networkable hardware accelerators for increased workflow optimization |
CN204887839U (en) * | 2015-07-23 | 2015-12-16 | 中兴通讯股份有限公司 | Single-board module-level water cooling system
US10055218B2 (en) * | 2015-08-11 | 2018-08-21 | Quanta Computer Inc. | System and method for adding and storing groups of firmware default settings |
US10348574B2 (en) * | 2015-08-17 | 2019-07-09 | Vmware, Inc. | Hardware management systems for disaggregated rack architectures in virtual server rack deployments |
US10736239B2 (en) * | 2015-09-22 | 2020-08-04 | Z-Impact, Inc. | High performance computing rack and storage system with forced cooling |
US10387209B2 (en) * | 2015-09-28 | 2019-08-20 | International Business Machines Corporation | Dynamic transparent provisioning of resources for application specific resources |
US10162793B1 (en) * | 2015-09-29 | 2018-12-25 | Amazon Technologies, Inc. | Storage adapter device for communicating with network storage |
US9888607B2 (en) * | 2015-09-30 | 2018-02-06 | Seagate Technology Llc | Self-biasing storage device sled |
US10216643B2 (en) * | 2015-11-23 | 2019-02-26 | International Business Machines Corporation | Optimizing page table manipulations |
US9811347B2 (en) * | 2015-12-14 | 2017-11-07 | Dell Products, L.P. | Managing dependencies for human interface infrastructure (HII) devices |
US10028401B2 (en) * | 2015-12-18 | 2018-07-17 | Microsoft Technology Licensing, Llc | Sidewall-accessible dense storage rack |
US20170180220A1 (en) * | 2015-12-18 | 2017-06-22 | Intel Corporation | Techniques to Generate Workload Performance Fingerprints for Cloud Infrastructure Elements |
US10374926B2 (en) * | 2016-01-28 | 2019-08-06 | Oracle International Corporation | System and method for monitoring logical network traffic flows using a ternary content addressable memory in a high performance computing environment |
US10452467B2 (en) | 2016-01-28 | 2019-10-22 | Intel Corporation | Automatic model-based computing environment performance monitoring |
EP3420450A1 (en) * | 2016-02-23 | 2019-01-02 | Telefonaktiebolaget LM Ericsson (publ) | Methods and modules relating to allocation of host machines |
US20170257970A1 (en) * | 2016-03-04 | 2017-09-07 | Radisys Corporation | Rack having uniform bays and an optical interconnect system for shelf-level, modular deployment of sleds enclosing information technology equipment |
US9811281B2 (en) * | 2016-04-07 | 2017-11-07 | International Business Machines Corporation | Multi-tenant memory service for memory pool architectures |
US10701141B2 (en) * | 2016-06-30 | 2020-06-30 | International Business Machines Corporation | Managing software licenses in a disaggregated environment |
US11706895B2 (en) * | 2016-07-19 | 2023-07-18 | Pure Storage, Inc. | Independent scaling of compute resources and storage resources in a storage system |
US10091904B2 (en) * | 2016-07-22 | 2018-10-02 | Intel Corporation | Storage sled for data center |
US10234833B2 (en) * | 2016-07-22 | 2019-03-19 | Intel Corporation | Technologies for predicting power usage of a data center |
US20180034908A1 (en) * | 2016-07-27 | 2018-02-01 | Alibaba Group Holding Limited | Disaggregated storage and computation system |
US10365852B2 (en) * | 2016-07-29 | 2019-07-30 | Vmware, Inc. | Resumable replica resynchronization |
US10193997B2 (en) | 2016-08-05 | 2019-01-29 | Dell Products L.P. | Encoded URI references in restful requests to facilitate proxy aggregation |
US10127107B2 (en) * | 2016-08-14 | 2018-11-13 | Nxp Usa, Inc. | Method for performing data transaction that selectively enables memory bank cuts and memory device therefor |
US10108560B1 (en) * | 2016-09-14 | 2018-10-23 | Evol1-Ip, Llc | Ethernet-leveraged hyper-converged infrastructure |
US10303458B2 (en) * | 2016-09-29 | 2019-05-28 | Hewlett Packard Enterprise Development Lp | Multi-platform installer |
US10776342B2 (en) * | 2016-11-18 | 2020-09-15 | Tuxera, Inc. | Systems and methods for recovering lost clusters from a mounted volume
US10726131B2 (en) * | 2016-11-21 | 2020-07-28 | Facebook, Inc. | Systems and methods for mitigation of permanent denial of service attacks |
US20180150256A1 (en) * | 2016-11-29 | 2018-05-31 | Intel Corporation | Technologies for data deduplication in disaggregated architectures |
CN109891908A (en) * | 2016-11-29 | 2019-06-14 | 英特尔公司 | Technologies for millimeter wave rack interconnects
US10503671B2 (en) * | 2016-12-29 | 2019-12-10 | Oath Inc. | Controlling access to a shared resource |
US10282549B2 (en) * | 2017-03-07 | 2019-05-07 | Hewlett Packard Enterprise Development Lp | Modifying service operating system of baseboard management controller |
US10967465B2 (en) * | 2017-03-08 | 2021-04-06 | Bwxt Nuclear Energy, Inc. | Apparatus and method for baffle bolt repair |
US20180288152A1 (en) * | 2017-04-01 | 2018-10-04 | Anjaneya R. Chagam Reddy | Storage dynamic accessibility mechanism method and apparatus |
US10331581B2 (en) * | 2017-04-10 | 2019-06-25 | Hewlett Packard Enterprise Development Lp | Virtual channel and resource assignment |
US10355939B2 (en) * | 2017-04-13 | 2019-07-16 | International Business Machines Corporation | Scalable data center network topology on distributed switch |
US10467052B2 (en) * | 2017-05-01 | 2019-11-05 | Red Hat, Inc. | Cluster topology aware container scheduling for efficient data transfer |
US10303615B2 (en) * | 2017-06-16 | 2019-05-28 | Hewlett Packard Enterprise Development Lp | Matching pointers across levels of a memory hierarchy |
US20190166032A1 (en) * | 2017-11-30 | 2019-05-30 | American Megatrends, Inc. | Utilization based dynamic provisioning of rack computing resources |
US10447273B1 (en) * | 2018-09-11 | 2019-10-15 | Advanced Micro Devices, Inc. | Dynamic virtualized field-programmable gate array resource control for performance and reliability |
US11201818B2 (en) * | 2019-04-04 | 2021-12-14 | Cisco Technology, Inc. | System and method of providing policy selection in a network |
2017
- 2017-12-21 US US15/850,325 patent/US20190068466A1/en not_active Abandoned
- 2017-12-29 US US15/858,305 patent/US20190068464A1/en not_active Abandoned
- 2017-12-29 US US15/858,549 patent/US20190065401A1/en not_active Abandoned
- 2017-12-29 US US15/858,557 patent/US20190065083A1/en not_active Abandoned
- 2017-12-29 US US15/858,316 patent/US20190065260A1/en not_active Abandoned
- 2017-12-29 US US15/858,288 patent/US20190068521A1/en not_active Abandoned
- 2017-12-29 US US15/858,286 patent/US20190068523A1/en not_active Abandoned
- 2017-12-29 US US15/858,748 patent/US11614979B2/en active Active
- 2017-12-29 US US15/858,542 patent/US11748172B2/en active Active
- 2017-12-30 US US15/859,394 patent/US11467885B2/en active Active
- 2017-12-30 US US15/859,388 patent/US20190065231A1/en not_active Abandoned
- 2017-12-30 US US15/859,363 patent/US20190068444A1/en not_active Abandoned
- 2017-12-30 US US15/859,366 patent/US20190065261A1/en not_active Abandoned
- 2017-12-30 US US15/859,385 patent/US20190065281A1/en not_active Abandoned
- 2017-12-30 US US15/859,364 patent/US11392425B2/en active Active
- 2017-12-30 US US15/859,368 patent/US11422867B2/en active Active
2018
- 2018-03-09 US US15/916,394 patent/US20190065415A1/en not_active Abandoned
- 2018-03-23 US US15/933,855 patent/US11030017B2/en active Active
- 2018-03-30 US US15/942,108 patent/US20190067848A1/en not_active Abandoned
- 2018-03-30 US US15/942,101 patent/US11416309B2/en active Active
- 2018-06-29 US US16/023,803 patent/US10888016B2/en active Active
- 2018-06-29 US US16/022,962 patent/US11055149B2/en active Active
- 2018-07-27 CN CN201810845565.8A patent/CN109426316A/en active Pending
- 2018-07-27 CN CN201810843475.5A patent/CN109428841B/en active Active
- 2018-07-30 WO PCT/US2018/044366 patent/WO2019045930A1/en unknown
- 2018-07-30 WO PCT/US2018/044363 patent/WO2019045928A1/en active Application Filing
- 2018-07-30 DE DE112018004798.9T patent/DE112018004798T5/en active Pending
- 2018-07-30 EP EP18852427.6A patent/EP3676708A4/en active Pending
- 2018-07-30 WO PCT/US2018/044365 patent/WO2019045929A1/en active Application Filing
- 2018-08-30 CN CN201811002563.9A patent/CN109428843A/en active Pending
- 2018-08-30 CN CN201811004916.9A patent/CN109426630A/en active Pending
- 2018-08-30 US US16/642,520 patent/US20200192710A1/en not_active Abandoned
- 2018-08-30 WO PCT/US2018/048917 patent/WO2019046620A1/en active Application Filing
- 2018-08-30 CN CN201811005041.4A patent/CN109426646B/en active Active
- 2018-08-30 WO PCT/US2018/048946 patent/WO2019046639A1/en active Application Filing
- 2018-08-30 CN CN201811001590.4A patent/CN109428889A/en active Pending
- 2018-08-30 CN CN201811004869.8A patent/CN109426633A/en active Pending
- 2018-08-30 CN CN201811004878.7A patent/CN109426568A/en active Pending
- 2018-08-30 US US16/642,523 patent/US20200257566A1/en not_active Abandoned
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150186319A1 (en) * | 2013-12-26 | 2015-07-02 | Dirk F. Blevins | Computer architecture to provide flexibility and/or scalability |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11297149B2 (en) | 2017-01-30 | 2022-04-05 | Centurylink Intellectual Property Llc | Application programming interface (API) to provide network metrics and network resource control to users |
US11122132B2 (en) * | 2017-01-30 | 2021-09-14 | Centurylink Intellectual Property Llc | Application programming interface (API) to provide network metrics and network resource control to users |
US11315013B2 (en) * | 2018-04-23 | 2022-04-26 | EMC IP Holding Company LLC | Implementing parameter server in networking infrastructure for high-performance computing |
US11650951B2 (en) * | 2019-04-02 | 2023-05-16 | Intel Corporation | Edge component computing system having integrated FaaS call handling capability |
US20210326299A1 (en) * | 2019-04-02 | 2021-10-21 | Intel Corporation | Edge component computing system having integrated faas call handling capability |
US11416294B1 (en) * | 2019-04-17 | 2022-08-16 | Juniper Networks, Inc. | Task processing for management of data center resources |
US11636503B2 (en) * | 2020-02-26 | 2023-04-25 | At&T Intellectual Property I, L.P. | System and method for offering network slice as a service |
US11961101B2 (en) | 2020-02-26 | 2024-04-16 | At&T Intellectual Property I, L.P. | System and method for offering network slice as a service |
US11630696B2 (en) | 2020-03-30 | 2023-04-18 | International Business Machines Corporation | Messaging for a hardware acceleration system |
US11295135B2 (en) * | 2020-05-29 | 2022-04-05 | Corning Research & Development Corporation | Asset tracking of communication equipment via mixed reality based labeling |
US11374808B2 (en) * | 2020-05-29 | 2022-06-28 | Corning Research & Development Corporation | Automated logging of patching operations via mixed reality based labeling |
US11947971B2 (en) * | 2020-06-11 | 2024-04-02 | Hewlett Packard Enterprise Development Lp | Remote resource configuration mechanism |
US20210389960A1 (en) * | 2020-06-11 | 2021-12-16 | Hewlett Packard Enterprise Development Lp | Remote resource configuration mechanism |
US11360789B2 (en) | 2020-07-06 | 2022-06-14 | International Business Machines Corporation | Configuration of hardware devices |
US11736559B2 (en) | 2020-08-05 | 2023-08-22 | Avesha, Inc. | Providing a set of application slices within an application environment |
US11405451B2 (en) * | 2020-09-30 | 2022-08-02 | Jpmorgan Chase Bank, N.A. | Data pipeline architecture |
US20220188170A1 (en) * | 2020-12-15 | 2022-06-16 | Google Llc | Multi-Tenant Control Plane Management on Computing Platform |
US11948014B2 (en) * | 2020-12-15 | 2024-04-02 | Google Llc | Multi-tenant control plane management on computing platform |
US20230251893A1 (en) * | 2020-12-22 | 2023-08-10 | Reliance Jio Infocomm Usa, Inc. | Intelligent data plane acceleration by offloading to distributed smart network interfaces |
US11645104B2 (en) * | 2020-12-22 | 2023-05-09 | Reliance Jio Infocomm Usa, Inc. | Intelligent data plane acceleration by offloading to distributed smart network interfaces |
US20220197681A1 (en) * | 2020-12-22 | 2022-06-23 | Reliance Jio Infocomm Usa, Inc. | Intelligent data plane acceleration by offloading to distributed smart network interfaces |
US11470015B1 (en) * | 2021-03-22 | 2022-10-11 | Amazon Technologies, Inc. | Allocating workloads to heterogenous worker fleets |
US11805073B2 (en) | 2021-05-03 | 2023-10-31 | Avesha, Inc. | Controlling placement of workloads of an application within an application environment |
US20230004786A1 (en) * | 2021-06-30 | 2023-01-05 | Micron Technology, Inc. | Artificial neural networks on a deep learning accelerator |
US20230093868A1 (en) * | 2021-09-22 | 2023-03-30 | Ridgeline, Inc. | Mechanism for real-time identity resolution in a distributed system |
US20230239209A1 (en) * | 2022-01-21 | 2023-07-27 | International Business Machines Corporation | Optimizing container executions with network-attached hardware components of a composable disaggregated infrastructure |
US11863385B2 (en) * | 2022-01-21 | 2024-01-02 | International Business Machines Corporation | Optimizing container executions with network-attached hardware components of a composable disaggregated infrastructure |
Similar Documents
Publication | Title |
---|---|
US20200257566A1 (en) | Technologies for managing disaggregated resources in a data center |
US11522682B2 (en) | Technologies for providing streamlined provisioning of accelerated functions in a disaggregated architecture |
US11467873B2 (en) | Technologies for RDMA queue pair QOS management |
US20190065290A1 (en) | Technologies for providing efficient reprovisioning in an accelerator device |
US11228539B2 (en) | Technologies for managing disaggregated accelerator networks based on remote direct memory access |
EP3731090A1 (en) | Technologies for providing resource health based node composition and management |
US10884968B2 (en) | Technologies for flexible protocol acceleration |
EP3731091A1 (en) | Technologies for providing an accelerator device discovery service |
US11038815B2 (en) | Technologies for managing burst bandwidth requirements |
US12073255B2 (en) | Technologies for providing latency-aware consensus management in a disaggregated architecture |
US10783100B2 (en) | Technologies for flexible I/O endpoint acceleration |
US10579547B2 (en) | Technologies for providing I/O channel abstraction for accelerator device kernels |
EP3757784A1 (en) | Technologies for managing accelerator resources |
US20210073161A1 (en) | Technologies for establishing communication channel between accelerator device kernels |
EP3757785B1 (en) | Technologies for facilitating remote memory requests in accelerator devices |
EP3731094A1 (en) | Technologies for providing inter-kernel flow control for accelerator device kernels |
WO2019165110A1 (en) | Technologies for achieving network quality of assurance with hardware acceleration |
US20190324802A1 (en) | Technologies for providing efficient message polling |
EP3731095A1 (en) | Technologies for providing inter-kernel communication abstraction to support scale-up and scale-out |
Legal Events
Code | Title | Description |
---|---|---|
STPP | Information on status: patent application and granting procedure in general | Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED |
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
AS | Assignment | Owner name: INTEL CORPORATION, CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GANGULI, MRITTIKA;NARAYAN, ANANTH;BHANDARU, MALINI;AND OTHERS;SIGNING DATES FROM 20180830 TO 20210920;REEL/FRAME:059638/0762 |
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |