EP1636696B1 - OS agnostic resource sharing across multiple computing platforms


Info

Publication number
EP1636696B1
Authority
EP
European Patent Office
Prior art keywords
resource
server
blade
oob
access
Prior art date
Legal status
Expired - Lifetime
Application number
EP04754766.6A
Other languages
German (de)
French (fr)
Other versions
EP1636696A2 (en)
Inventor
Vincent Zimmer
Michael Rothman
Current Assignee
Intel Corp
Original Assignee
Intel Corp
Priority date
Filing date
Publication date
Application filed by Intel Corp
Publication of EP1636696A2
Application granted
Publication of EP1636696B1
Anticipated expiration
Current status: Expired - Lifetime

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/4401 Bootstrapping
    • G06F9/4405 Initialisation of multiprocessor systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals

Definitions

  • the field of invention relates generally to clustered computing environments, such as blade server computing environments, and, more specifically but not exclusively relates to techniques for sharing resources hosted by individual platforms (nodes) to create global resources that may be shared across all nodes.
  • a company's IT (information technology) infrastructure is centered around computer servers that are linked together via various types of networks, such as private local area networks (LANs) and private and public wide area networks (WANs).
  • the servers are used to deploy various applications and to manage data storage and transactional processes.
  • these servers will include stand-alone servers and/or higher density rack-mounted servers, such as 4U, 2U and 1U servers.
  • a blade server employs a plurality of closely-spaced “server blades” (blades) disposed in a common chassis to deliver high-density computing functionality.
  • Each blade provides a complete computing platform, including one or more processors, memory, network connection, and disk storage integrated on a single system board.
  • other components, such as power supplies and fans, are shared among the blades in a given chassis and/or rack. This provides a significant reduction in capital equipment costs when compared to conventional rack-mounted servers.
  • a scalable compute cluster (SCC) is a group of two or more computer systems, also known as compute nodes, configured to work together to perform computational-intensive tasks.
  • the task can be completed much more quickly than if a single system performed the task.
  • the more nodes that are applied to a task, the quicker the task can be completed.
  • the number of nodes that can effectively be used to complete the task is dependent on the application used.
  • a typical SCC is built using Intel®-based servers running the Linux operating system and cluster infrastructure software. These servers are often referred to as commodity off the shelf (COTS) servers. They are connected through a network to form the cluster.
  • An SCC normally needs anywhere from tens to hundreds of servers to be effective at performing computational-intensive tasks. Fulfilling this need to group a large number of servers in one location to form a cluster is a perfect fit for a blade server.
  • the blade server chassis design and architecture provides the ability to place a massive amount of computer horsepower in a single location.
  • the built-in networking and switching capabilities of the blade server architecture enable individual blades to be added or removed, enabling optimal scaling for a given task. With such flexibility, blade server-based SCCs provide a cost-effective alternative to other infrastructure for performing computational tasks, such as supercomputers.
  • each blade in a blade server is enabled to provide full platform functionality, thus being able to operate independent of other blades in the server.
  • the resources available to each blade are likewise limited to its own resources. Thus, in many instances resources are inefficiently utilized. Under current architectures, there is no scheme that enables efficient server-wide resource sharing.
  • US 2002/0124134 discloses a data storage system cluster architecture that includes integrated cached disc arrays (ICDAs) and cluster interconnect such as a set of fibre channel links.
  • a switch network in each ICDA provides connections between the cluster interconnect and host interfaces, disk interfaces, and memory modules that may reside in the ICDA.
  • Figure 12 is a schematic diagram illustrating data flows between a pair of computing platforms to support sharing a video resource
  • Figure 13 is a schematic diagram illustrating data flows between a pair of computing platforms to support sharing user input resources
  • Embodiments of methods and computer components and systems for performing resource sharing across clustered platform environments are described herein.
  • numerous specific details are set forth to provide a thorough understanding of embodiments of the invention.
  • One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc.
  • well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
  • techniques are disclosed herein for sharing resources across clustered platform environments in a manner under which resources hosted by individual platforms are made accessible to other platform nodes.
  • the techniques employ firmware-based functionality that provides "behind the scenes" access mechanisms without requiring any OS complicity.
  • the resource sharing and access operations are completely transparent to operating systems running on the blades, and thus operating system independent.
  • the capabilities afforded by the novel techniques disclosed herein may be employed in existing and future distributed platform environments without requiring any changes to the operating systems targeted for the environments.
  • the resource-sharing mechanism is effectuated by several platforms that "expose" resources that are aggregated to form global resources.
  • Each platform employs a respective set of firmware that runs prior to the operating system load (pre-boot) and coincident with the operating system runtime.
  • runtime deployment is facilitated by a hidden execution mode known as the System Management Mode (SMM), which has the ability to receive and respond to periodic System Management Interrupts (SMI) to allow resource sharing and access information to be transparently passed to firmware SMM code configured to effectuate the mechanisms.
  • SMM resource management code conveys information and messaging to other nodes via an out-of-band (OOB) network or communication channel in an OS-transparent manner.
  • For illustrative purposes, several embodiments of the invention are disclosed below in the context of a blade server environment, such as that shown in Figures 1a-c and 2.
  • a rack-mounted chassis 100 is employed to provide power and communication functions for a plurality of blades 102, each of which occupies a corresponding slot. (It is noted that all slots in a chassis do not need to be occupied.)
  • one or more chassis 100 may be installed in a blade server rack 103, as shown in Figure 1c.
  • Each blade is coupled to an interface plane 104 (i.e ., a backplane or mid-plane) upon installation via one or more mating connectors.
  • the interface plane will include a plurality of respective mating connectors that provide power and communication signals to the blades.
  • many interface planes provide "hot-swapping" functionality - that is, blades can be added or removed ("hot-swapped") on the fly without taking the entire chassis down, through appropriate power and data signal buffering.
  • a typical mid-plane interface plane configuration is shown in Figures 1a and 1b.
  • the backside of interface plane 104 is coupled to one or more power supplies 106.
  • the power supplies are redundant and hot-swappable, being coupled to appropriate power planes and conditioning circuitry to enable continued operation in the event of a power supply failure.
  • an array of power supplies may be used to supply power to an entire rack of blades, wherein there is not a one-to-one power supply-to-chassis correspondence.
  • a plurality of cooling fans 108 are employed to draw air through the chassis to cool the server blades.
  • a network connect card may include a physical interface comprising a plurality of network port connections (e.g., RJ-45 ports), or may comprise a high-density connector designed to directly connect to a network device, such as a network switch, hub, or router.
  • Blade servers usually provide some type of management interface for managing operations of the individual blades. This may generally be facilitated by an out-of-band network or communication channel or channels. For example, one or more buses for facilitating a "private" or "management" network and appropriate switching may be built into the interface plane, or a private network may be implemented through closely-coupled network cabling and a network device.
  • the switching and other management functionality may be provided by a management card 112 that is coupled to the backside or frontside of the interface plane.
  • a management server may be employed to manage blade activities, wherein communications are handled via standard computer networking infrastructure, such as Ethernet.
  • each blade comprises a separate computing platform that is configured to perform server-type functions, i.e., is a "server on a card.”
  • each blade includes components common to conventional servers, including a main circuit board 201 providing internal wiring ( i.e ., buses) for coupling appropriate integrated circuits (ICs) and other components mounted to the board.
  • These components include one or more processors 202 coupled to system memory 204 (e.g., DDR RAM), cache memory 206 (e.g., SDRAM), and a firmware storage device 208 (e.g., flash memory).
  • a "public" NIC (network interface) chip 210 is provided for supporting conventional network communication functions, such as to support communication between blades and external network infrastructure.
  • Other illustrated components include status LEDs 212, an RJ-45 console port 214, and an interface plane connector 216.
  • Additional components include various passive components (e.g. , resistors, capacitors), power conditioning components, and peripheral device connectors.
  • each blade 200 will also provide on-board storage. This is typically facilitated via one or more built-in disk controllers and corresponding connectors to which one or more disk drives 218 are coupled.
  • typical disk controllers include Ultra ATA controllers, SCSI controllers, and the like.
  • the disk drives may be housed separate from the blades in the same or a separate rack, such as might be the case when a network-attached storage (NAS) appliance is employed to store large volumes of data.
  • an out-of-band communication channel comprises a communication means that supports communication between devices in an OS-transparent manner - that is, a means to enable inter-blade communication without requiring operating system complicity.
  • examples include a dedicated bus, such as a system management bus that implements the SMBUS standard (www.smbus.org); a dedicated private or management network, such as an Ethernet-based network using VLANs (802.1Q); or a serial communication scheme, e.g., employing the RS-485 serial communication standard.
  • interface plane 104 will include corresponding buses or built-in network traces to support the selected OOB scheme.
  • appropriate network cabling and networking devices may be deployed inside or external to chassis 100.
  • embodiments of the invention employ a firmware-based scheme for effectuating a resource sharing set-up and access mechanism to enable sharing of resources across blade server nodes.
  • resource management firmware code is loaded during initialization of each blade and made available for access during OS run-time.
  • resource information is collected, and global resource information is built. Based on the global resource information, appropriate global resource access is provided back to each blade. This information is handed off to the operating system upon its initialization, such that the global resource appears (from the OS standpoint) as a local resource.
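  • As a concrete illustration of the kind of data exchanged (not defined by the patent), the sketch below shows hypothetical C structures for a per-blade resource advertisement and the aggregated global resource descriptor handed back to each blade; all names and fields are illustrative assumptions.

```c
/* Hypothetical sketch only: the patent does not define a concrete format
 * for the per-node resource advertisement or the global descriptor. */
#include <stdint.h>

#define MAX_DEVICE_PATH 64

typedef struct {
    uint16_t blade_id;                      /* physical slot / node identifier        */
    uint8_t  resource_type;                 /* e.g. 0 = block storage, 1 = video      */
    char     device_path[MAX_DEVICE_PATH];  /* firmware device path to the resource   */
    uint64_t capacity_blocks;               /* configuration parameter: size          */
} shared_resource_info;

typedef struct {
    uint8_t  resource_type;
    uint64_t total_blocks;                  /* aggregate over all contributing nodes  */
    uint16_t contributor_count;             /* how many blades back this resource     */
    uint16_t access_api_id;                 /* which firmware API the OS driver uses  */
} global_resource_descriptor;
```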
  • during OS runtime operations, accesses to the shared resources are handled via interaction between the OS and/or OS drivers and corresponding firmware, in conjunction with resource access management that is facilitated via the OOB channel.
  • resource sharing is facilitated via an extensible firmware framework known as Extensible Firmware Interface (EFI) (specifications and examples of which may be found at http://developer.intel.com/technology/efi).
  • the EFI framework includes provisions for extending BIOS functionality beyond that provided by the BIOS code stored in a platform's BIOS device (e.g., flash memory).
  • EFI enables firmware, in the form of firmware modules and drivers, to be loaded from a variety of different resources, including primary and secondary flash devices, option ROMs, various persistent storage devices (e.g., hard disks, CD ROMs, etc.), and even over computer networks.
  • FIG. 3 shows an event sequence/architecture diagram used to illustrate operations performed by a platform under the framework in response to a cold boot (e.g., a power off/on reset).
  • the process is logically divided into several phases, including a pre-EFI Initialization Environment (PEI) phase, a Driver Execution Environment (DXE) phase, a Boot Device Selection (BDS) phase, a Transient System Load (TSL) phase, and an operating system runtime (RT) phase.
  • the PEI phase provides a standardized method of loading and invoking specific initial configuration routines for the processor (CPU), chipset, and motherboard.
  • the PEI phase is responsible for initializing enough of the system to provide a stable base for the follow on phases.
  • Initialization of the platform's core components, including the CPU, chipset, and main board (i.e., motherboard), is performed during the PEI phase.
  • This phase is also referred to as the "early initialization" phase.
  • Typical operations performed during this phase include the POST (power-on self test) operations, and discovery of platform resources.
  • the PEI phase discovers memory and prepares a resource map that is handed off to the DXE phase.
  • the state of the system at the end of the PEI phase is passed to the DXE phase through a list of position independent data structures called Hand Off Blocks (HOBs).
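  • The HOB handoff can be pictured with the short sketch below, which walks a HOB list using the generic header and end-of-list type value from the published EFI/PI specifications; it is a simplified illustration rather than production firmware code.

```c
/* Minimal sketch of walking the HOB list that PEI hands to the DXE core.
 * Header layout and end-of-list type follow the published specifications;
 * error handling is omitted. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint16_t HobType;      /* kind of HOB (memory, resource descriptor, ...) */
    uint16_t HobLength;    /* total length of this HOB in bytes              */
    uint32_t Reserved;
} EFI_HOB_GENERIC_HEADER;

#define EFI_HOB_TYPE_END_OF_HOB_LIST 0xFFFF

static void walk_hob_list(const void *hob_list)
{
    const EFI_HOB_GENERIC_HEADER *hob = hob_list;

    while (hob->HobType != EFI_HOB_TYPE_END_OF_HOB_LIST) {
        printf("HOB type 0x%04x, length %u bytes\n",
               (unsigned)hob->HobType, (unsigned)hob->HobLength);
        /* HOBs are packed back to back; advance by this HOB's length. */
        hob = (const EFI_HOB_GENERIC_HEADER *)((const uint8_t *)hob + hob->HobLength);
    }
}
```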
  • the DXE phase is the phase during which most of the system initialization is performed.
  • the DXE phase is facilitated by several components, including the DXE core 300, the DXE dispatcher 302, and a set of DXE drivers 304.
  • the DXE core 300 produces a set of Boot Services 306, Runtime Services 308, and DXE Services 310.
  • the DXE dispatcher 302 is responsible for discovering and executing DXE drivers 304 in the correct order.
  • the DXE drivers 304 are responsible for initializing the processor, chipset, and platform components as well as providing software abstractions for console and boot devices. These components work together to initialize the platform and provide the services required to boot an operating system.
  • the DXE and the Boot Device Selection phases work together to establish consoles and attempt the booting of operating systems.
  • the DXE phase is terminated when an operating system successfully begins its boot process (i.e ., the BDS phase starts). Only the runtime services and selected DXE services provided by the DXE core and selected services provided by runtime DXE drivers are allowed to persist into the OS runtime environment.
  • the result of DXE is the presentation of a fully formed EFI interface.
  • the DXE core is designed to be completely portable with no CPU, chipset, or platform dependencies. This is accomplished by designing in several features. First, the DXE core only depends upon the HOB list for its initial state. This means that the DXE core does not depend on any services from a previous phase, so all the prior phases can be unloaded once the HOB list is passed to the DXE core. Second, the DXE core does not contain any hard-coded addresses. This means the DXE core can be loaded anywhere in physical memory, and it can function correctly no matter where physical memory or firmware segments are located in the processor's physical address space. Third, the DXE core does not contain any CPU-specific, chipset-specific, or platform-specific information. Instead, the DXE core is abstracted from the system hardware through a set of architectural protocol interfaces. These architectural protocol interfaces are produced by DXE drivers 304, which are invoked by DXE Dispatcher 302.
  • the DXE core produces an EFI System Table 400 and its associated set of Boot Services 306 and Runtime Services 308, as shown in Figure 4 .
  • the DXE core also maintains a handle database 402.
  • the handle database comprises a list of one or more handles, wherein a handle is a list of one or more unique protocol GUIDs (Globally Unique Identifiers) that map to respective protocols 404.
  • a protocol is a software abstraction for a set of services. Some protocols abstract I/O devices, and other protocols abstract a common set of system services.
  • a protocol typically contains a set of APIs and some number of data fields. Every protocol is named by a GUID, and the DXE Core produces services that allow protocols to be registered in the handle database. As the DXE Dispatcher executes DXE drivers, additional protocols will be added to the handle database including the architectural protocols used to abstract the DXE Core from platform specific details.
  • the Boot Services comprise a set of services that are used during the DXE and BDS phases. Among others, these services include Memory Services, Protocol Handler Services, and Driver Support Services. Memory Services provide services to allocate and free memory pages, to allocate and free the memory pool on byte boundaries, and to retrieve a map of all the current physical memory usage in the platform. Protocol Handler Services provide services to add and remove handles from the handle database, and to add and remove protocols from the handles in the handle database; additional services allow any component to look up handles in the handle database, and to open and close protocols in the handle database. Driver Support Services provide services to connect and disconnect drivers to devices in the platform; these services are used by the BDS phase to either connect all drivers to all devices, or to connect only the minimum number of drivers to devices required to establish the consoles and boot an operating system (i.e., for supporting a fast boot mechanism).
  • Runtime Services are available both during pre-boot and OS runtime operations.
  • One of the Runtime Services that is leveraged by embodiments disclosed herein is the Variable Services.
  • the Variable Services provide services to lookup, add, and remove environmental variables from both volatile and non-volatile storage.
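  • As an illustration, the sketch below shows how a firmware component might use the Variable Services to persist a (hypothetical) global resource map across shutdowns; it assumes EDK-style headers and the standard runtime services table pointer gRT, and the variable name and GUID are invented for the example.

```c
/* Sketch: persisting a hypothetical global resource map via the EFI
 * Variable Services, which remain callable at OS runtime. */
#include <Uefi.h>
#include <Library/UefiRuntimeServicesTableLib.h>

STATIC EFI_GUID mResourceMapGuid =
  { 0x12345678, 0x1234, 0x1234, { 0, 1, 2, 3, 4, 5, 6, 7 } };  /* hypothetical */

EFI_STATUS
SaveResourceMap (VOID *Map, UINTN MapSize)
{
  /* NON_VOLATILE lets the mapping persist across platform shutdowns. */
  return gRT->SetVariable (L"GlobalResourceMap",                /* hypothetical name */
                           &mResourceMapGuid,
                           EFI_VARIABLE_NON_VOLATILE |
                           EFI_VARIABLE_BOOTSERVICE_ACCESS |
                           EFI_VARIABLE_RUNTIME_ACCESS,
                           MapSize,
                           Map);
}

EFI_STATUS
LoadResourceMap (VOID *Map, UINTN *MapSize)
{
  return gRT->GetVariable (L"GlobalResourceMap",
                           &mResourceMapGuid,
                           NULL,                                /* attributes unused */
                           MapSize,
                           Map);
}
```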
  • the DXE Services Table includes data corresponding to a first set of DXE services 406A that are available during pre-boot only, and a second set of DXE services 406B that are available during both pre-boot and OS runtime.
  • the pre-boot only services include Global Coherency Domain Services, which provide services to manage I/O resources, memory mapped I/O resources, and system memory resources in the platform. Also included are DXE Dispatcher Services, which provide services to manage DXE drivers that are being dispatched by the DXE dispatcher.
  • the services offered by each of Boot Services 306, Runtime Services 308, and DXE services 310 are accessed via respective sets of API's 312, 314, and 316.
  • the API's provide an abstracted interface that enables subsequently loaded components to leverage selected services provided by the DXE Core.
  • the DXE Dispatcher 302 is responsible for loading and invoking DXE drivers found in firmware volumes, which correspond to the logical storage units from which firmware is loaded under the EFI framework.
  • the DXE dispatcher searches for drivers in the firmware volumes described by the HOB List. As execution continues, other firmware volumes might be located. When they are, the dispatcher searches them for drivers as well.
  • There are two subclasses of DXE drivers.
  • the first subclass includes DXE drivers that execute very early in the DXE phase. The execution order of these DXE drivers depends on the presence and contents of an a priori file and the evaluation of dependency expressions.
  • These early DXE drivers will typically contain processor, chipset, and platform initialization code. These early drivers will also typically produce the architectural protocols that are required for the DXE core to produce its full complement of Boot Services and Runtime Services.
  • the second class of DXE drivers are those that comply with the EFI 1.10 Driver Model. These drivers do not perform any hardware initialization when they are executed by the DXE dispatcher. Instead, they register a Driver Binding Protocol interface in the handle database. The set of Driver Binding Protocols are used by the BDS phase to connect the drivers to the devices required to establish consoles and provide access to boot devices.
  • the DXE Drivers that comply with the EFI 1.10 Driver Model ultimately provide software abstractions for console devices and boot devices when they are explicitly asked to do so.
  • Any DXE driver may consume the Boot Services and Runtime Services to perform their functions.
  • the early DXE drivers need to be aware that not all of these services may be available when they execute because all of the architectural protocols might not have been registered yet.
  • DXE drivers must use dependency expressions to guarantee that the services and protocol interfaces they require are available before they are executed.
  • the DXE drivers that comply with the EFI 1.10 Driver Model do not need to be concerned with this possibility. These drivers simply register the Driver Binding Protocol in the handle database when they are executed. This operation can be performed without the use of any architectural protocols.
  • a DXE driver may "publish" an API by using the InstallConfigurationTable function. This published drivers are depicted by API's 318. Under EFI, publication of an API exposes the API for access by other firmware components. The API's provide interfaces for the Device, Bus, or Service to which the DXE driver corresponds during their respective lifetimes.
  • the BDS architectural protocol executes during the BDS phase.
  • the BDS architectural protocol locates and loads various applications that execute in the pre-boot services environment.
  • Such applications might represent a traditional OS boot loader, or extended services that might run instead of, or prior to loading the final OS.
  • extended pre-boot services might include setup configuration, extended diagnostics, flash update support, OEM value-adds, or the OS boot code.
  • a Boot Dispatcher 320 is used during the BDS phase to enable selection of a Boot target, e.g., an OS to be booted by the system.
  • a final OS Boot loader 322 is run to load the selected OS. Once the OS has been loaded, there is no further need for the Boot Services 306, nor for many of the services provided in connection with DXE drivers 304 via API's 318, as well as the pre-boot-only DXE Services 406A. Accordingly, the reduced sets of API's that may be accessed during OS runtime are depicted as API's 316A and 318A in Figure 3.
  • An OS-transparent out-of-band communication scheme is employed to allow various types of resources to be shared across server nodes.
  • the scheme is facilitated by firmware-based components (e.g., firmware, drivers, and API's).
  • the scheme may be effectuated across multiple computing platforms, including groups of blades, individual chassis, racks, or groups of racks.
  • firmware provided on each platform is loaded and executed to set up the OOB channel and appropriate resource access and data re-routing mechanisms.
  • Each blade then transmits information about its shared resources over the OOB channel to a global resource manager.
  • the global resource manager aggregates the data and configures a "virtual" global resource.
  • Global resource configuration data in the form of global resource descriptors is then sent back to the blades to apprise the blades of the configuration and access mechanism for the global resource.
  • Drivers are then configured to support access to the global resource.
  • the global resource descriptors are handed off to the operating system during OS load, wherein the virtual global resource appears as a local device to the operating system, and thus is employed as such during OS runtime operations without requiring any modification to the OS code.
  • the process begins by performing several initialization operations on each blade to set up the resource device drivers and the OOB communications framework.
  • the system performs pre-boot system initialization operations in the manner discussed above with reference to Figure 3 .
  • early initialization operations are performed in a block 502 by loading and executing firmware stored in each blade's boot firmware device (BFD).
  • the BFD comprises the firmware device that stores firmware for booting the system.
  • the BFD for server blade 200 comprises firmware device 208.
  • processor 202 executes reset stub code that jumps execution to the base address of a boot block of the BFD via a reset vector.
  • the boot block contains firmware instructions for performing early initialization, and is executed by processor 202 to initialize the CPU, chipset, and motherboard. (It is noted that during a warm boot (reset) early initialization is not performed, or is at least performed in a limited manner.) Execution of firmware instructions corresponding to an EFI core are executed next, leading to the DXE phase.
  • the Variable Services are setup in the manner discussed above with reference to Figures 3 and 4 .
  • DXE dispatcher 302 begins loading DXE drivers 304.
  • Each DXE driver corresponds to a system component, and provides an interface for directly accessing that component. Included in the DXE drivers is an OOB monitor driver that will be subsequently employed for facilitating OOB communications.
  • the OOB monitor driver is installed in a protected area in each blade.
  • an out-of-band communication channel or network that operates independent of network communications that are managed by the operating systems is employed to facilitate inter-blade communication in an OS-transparent manner.
  • in one embodiment, the protected area comprises SMRAM 600 (see Figure 6), and is hidden from the subsequently-loaded operating system.
  • SMM OOB communication code 602 stored in firmware is loaded into SMRAM 600, and a corresponding OOB communications SMM handler 604 for handling OOB communications is set up.
  • An SMM handler is a type of interrupt handler, and is invoked in response to a system management interrupt (SMI).
  • an SMI may be asserted via an SMI pin on the system's processor.
  • the processor stores its current context (i.e ., information pertaining to current operations, including its current execution mode, stack and register information, etc.), and switches its execution mode to its system management mode.
  • SMM handlers are then sequentially dispatched to determine if they are the appropriate handler for servicing the SMI event.
  • this handler When this handler is identified, it is allowed to execute to completion to service the SMI event. After the SMI event is serviced, an RSM (resume) instruction is issued to return the processor to its previous execution mode using the previously saved context data. The net result is that SMM operation is completely transparent to the operating system.
  • a shared resource is any blade component or device that is to be made accessible for shared access.
  • Such components and devices include, but are not limited to, fixed storage devices, removable media devices, input devices (e.g., keyboard, mouse), video devices, audio devices, volatile memory (i.e., system RAM), and non-volatile memory.
  • the logic proceeds to perform the loop operations defined within respective start and end loop blocks 508 and 509 for each sharable resource that is discovered. This includes operations in a block 510, wherein a device path to describe the shared resource is constructed and configuration parameters are collected.
  • the device path provides external users with a means for accessing the resource.
  • the configuration parameters are used to build global resources, as described below in further detail.
  • the device path and resource configuration information is transmitted or broadcast to a global resource manager 608 via an OOB communication channel 610 in a block 512.
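  • A hypothetical OOB message layout for this advertisement step is sketched below; the envelope fields, opcodes, and send routine are illustrative assumptions, since the patent only requires that the device path and configuration parameters reach the global resource manager in an OS-transparent way.

```c
/* Hypothetical sketch of an OOB advertisement message; nothing here is a
 * defined wire format from the patent. */
#include <stdint.h>
#include <string.h>

enum oob_opcode {
    OOB_ADVERTISE_RESOURCE = 1,   /* blade -> global resource manager */
    OOB_GLOBAL_DESCRIPTOR  = 2,   /* manager -> blades                */
    OOB_ACCESS_REQUEST     = 3,   /* blade -> resource target         */
};

typedef struct {
    uint8_t  opcode;
    uint16_t source_blade;
    uint16_t dest_blade;          /* 0xFFFF = broadcast                */
    uint16_t payload_len;
    uint8_t  payload[256];        /* device path + config parameters   */
} oob_message;

/* Transport stub: in the described system this would run inside the SMM
 * OOB communications handler over SMBUS, a private VLAN, or a serial link. */
extern void oob_send(const oob_message *msg);

void advertise_resource(uint16_t my_blade, const void *desc, uint16_t len)
{
    oob_message msg = { 0 };

    if (len > sizeof msg.payload)               /* keep the copy in bounds */
        len = (uint16_t)sizeof msg.payload;

    msg.opcode       = OOB_ADVERTISE_RESOURCE;
    msg.source_blade = my_blade;
    msg.dest_blade   = 0xFFFF;                  /* broadcast to all nodes  */
    msg.payload_len  = len;
    memcpy(msg.payload, desc, len);
    oob_send(&msg);
}
```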
  • the global resource manager may generally be hosted by an existing component, such as one of the blades or management card 112.
  • a plurality of local global resource managers are employed, wherein global resource management is handled through a collective process rather than employing a single manager.
  • if the component hosting the global resource manager is known in advance, a selective transmission to that component may be employed.
  • otherwise, a message is first broadcast over the OOB channel to identify the location of the host component.
  • OOB communications under the aforementioned SMM hidden execution mode are effectuated in the following manner.
  • an SMI is generated to cause the processor to switch into SMM, as shown occurring with BLADE 1 in Figure 6 .
  • This may be effectuated through one of two means - either an assertion of the processor's SMI pin (i.e., a hardware-based generation), or via issuance of an "SMI" instruction (i.e., a software-based generation).
  • an assertion of the SMI pin may be produced by placing an appropriate signal on a management bus or the like.
  • for example, when an SMBUS is deployed using I2C, one of the bus lines may be hardwired to the SMI pins of each blade's processor via that blade's connector.
  • the interface plane may provide a separate means for producing a similar result.
  • all SMI pins may be commonly tied to a single bus line, or the bus may be structured to enable independent SMI pin assertions for respective blades.
  • certain network interface chips, such as those made by Intel®, provide a second MAC address for use as a "back channel" in addition to a primary MAC address used for conventional network communications.
  • these NICs provide a built-in system management feature, wherein an incoming communication referencing the second MAC address causes the NIC to assert an SMI signal. This scheme enables an OOB channel to be deployed over the same cabling as the "public" network (not shown).
  • a firmware driver is employed to access the OOB channel.
  • an appropriate firmware driver will be provided to access the network or serial port. Since the configuration of the firmware driver will be known in advance (and thus independent of the operating system), the SMM handler may directly access the OOB channel via the firmware driver.
  • direct access may be available to the SMM handler without a corresponding firmware driver, although this latter option could also be employed.
  • the asserted processor switches to SMM execution mode and begins dispatch of its SMM handler(s) until the appropriate handler (e.g., communication handler 604) is dispatched to facilitate the OOB communication.
  • the OOB communications are performed when the blade processors are operating in SMM, whereby the communications are transparent to the operating systems running on those blades.
  • the shared device path and resource configuration information is received by global resource manager 608.
  • shared device path and resource configuration information for other blades is received by the global resource manager.
  • Individual resources may be combined to form a global resource.
  • for example, storage provided by individual storage devices (e.g., hard disks and system RAM) may be combined to form a global storage resource.
  • the resource configuration information might typically include storage capacity, such as number of storage blocks, partitioning information, and other information used for accessing the device.
  • a global resource access mechanism (e.g., an API) and a global resource descriptor 612 are then built.
  • the global resource descriptor contains information identifying how to access the resource, and describes the configuration of the resource (from a global and/or local perspective).
  • the global resource descriptor 612 is transmitted to active nodes in the rack via the OOB channel in a block 518. This transmission operation may be performed using node-to-node OOB communications, or via an OOB broadcast. In response to receiving the global resource descriptor, it is stored by the receiving node in a block 520, leading to processing the next resource.
  • the operations of blocks 510, 512, 514, 516, 518, and 520 are repeated in a similar manner for each resource that is discovered until all sharable resources are processed.
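  • The aggregation step can be illustrated with the sketch below, in which the global resource manager sums per-blade storage contributions into a single virtual-volume descriptor; the types and names are illustrative assumptions.

```c
/* Hypothetical sketch of global-resource aggregation for a storage volume. */
#include <stdint.h>

#define MAX_NODES 64

typedef struct {
    uint16_t blade_id;
    uint64_t blocks;               /* blocks contributed by this blade */
} contribution;

typedef struct {
    uint64_t total_blocks;         /* capacity the OS will see          */
    uint16_t node_count;
    contribution nodes[MAX_NODES]; /* doubles as the block-range map    */
} virtual_volume;

void aggregate(virtual_volume *vol, const contribution *in, uint16_t count)
{
    if (count > MAX_NODES)
        count = MAX_NODES;

    vol->total_blocks = 0;
    vol->node_count   = count;
    for (uint16_t i = 0; i < count; i++) {
        vol->nodes[i] = in[i];
        vol->total_blocks += in[i].blocks;  /* 16 blades x 10 blocks = 160 in Figure 9a */
    }
}
```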
  • access to shared resources is provided by corresponding firmware device drivers that are configured to access discovered shared resources via their global resource API's in a block 522. Further details of this access scheme when applied to specific resources are discussed below. As depicted by a continuation block 524, pre-boot platform initialization operations are then continued as described above to prepare for the OS load.
  • global resource descriptors corresponding to any shared resources that are discovered are handed off to the operating system. It is noted that the global resource descriptors that are handed off to the OS may or may not be identical to those built in block 516. Essentially, the global resource descriptors contain information to enable the operating system to configure access to the resource via its own device drivers. For example, in the case of a single shared storage volume, the OS receives information indicating that it has access to a "local" storage device (or optionally a networked storage device) having a storage capacity that spans the individual storage capacities of the individual storage devices that are shared. In the case of multiple shared storage volumes, respective storage capacity information will be handed off to the OS for each volume. The completion of the OS load leads to continued OS runtime operations, as depicted by a continuation block 528.
  • this abstracted access scheme is configured as a multi-layer architecture, as shown in Figures 8a and 8b .
  • blades BLADE 1 and BLADE 2 have respective copies of the architecture components, including OS device drivers 800-1 and 800-2, management/access drivers 802-1 and 802-2, resource device drivers 804-1 and 804-2, and OOB communication handlers 604-1 and 604-2.
  • A flowchart illustrating an exemplary process for accessing a shared resource in accordance with one embodiment is shown in Figure 7.
  • the process begins with an access request from a requestor, as depicted in a start block 700.
  • a typical requestor might be an application running on the operating system for the platform.
  • Executable code corresponding to such applications is generally stored in system memory 204, as depicted by runtime (RT) applications (APP) 806 and 808 in Figures 8a and 8b.
  • the access request corresponds to opening a previously stored file.
  • the runtime application will first make a request to the operating system (810) to access the file, providing a location for the file (e.g ., drive designation, path, and filename).
  • the drive designation is a drive letter previously allocated by the operating system for a virtual global storage resource comprising a plurality of disk drives 218, which include resource 1 of BLADE 1 and resource 2 on BLADE 2.
  • operating system 810 employs its OS device driver 800-1 to access the storage resource in a block 702.
  • under a conventional configuration, OS device driver 800-1 would interface directly with resource driver 804-1 to access resource 1.
  • under the resource-sharing scheme, however, management/access driver 802-1 is accessed instead.
  • interface information such as an API or the like is handed off to the OS during OS-load, whereby the OS is instructed to access management/access driver 802-1 whenever there is a request to access the corresponding resource ( e.g. , resource 1).
  • a mechanism is provided to identify a particular host via which the appropriate resource may be accessed. In one embodiment, this mechanism is facilitated via a global resource map.
  • local copies 812-1 and 812-2 of a common global resource map are stored on respective blades BLADE 1 and BLADE 2.
  • a shared global resource map 812a is hosted by global resource manager 608. The global resource map matches specific resources with the portions of the global resource hosted by those specific resources.
  • the management/access driver queries local global resource map 812 to determine the host of the resource underlying the particular access request.
  • This resource and/or its host is known as the "resource target"; in the illustrated example, the resource target comprises resource 2 hosted by BLADE 2.
  • OOB communication operations are then performed to pass the resource access request to the resource target.
  • in response to the forwarded request from the management/access driver on the requesting platform (e.g., 802-1), the processor on BLADE 1 switches its mode to SMM in a block 708 and dispatches its SMM handlers until OOB communication handler 604-1 is launched.
  • the OOB communication handler asserts an SMI signal on the resource target host (BLADE 2) to initiate OOB communication between the two blades.
  • the processor mode on BLADE 2 is switched to SMM in a block 710, launching its OOB communication handler.
  • Blades 1 and 2 are enabled to communicate via OOB channel 610, and the access request is received by OOB communications handler 604-2.
  • After the resource access request has been sent, in one embodiment an "RSM" instruction is issued to the processor on BLADE 1 to switch the processor's operating mode back to what it was before being switched to SMM.
  • in a block 712, the access request is then passed to management/access driver 802-2 via its API.
  • a query is then performed in a block 714 to verify that the platform receiving the access request is the actual host of the target resource. If it isn't the correct host, in one embodiment a message is passed back to the requester indicating so (not shown).
  • an appropriate global resource manager is apprised of the situation. In essence, this situation would occur if the local global resource maps contained different information ( i.e ., are no longer synchronized). In response, the global resource manager would issue a command to resynchronize the local global resource maps (all not shown).
  • the platform host's resource device driver (804-2) is then employed to access the resource (e.g. , resource 2) to service the access request.
  • the access returns the requested data file.
  • Data corresponding to the request is then returned to the requester via OOB channel 610 in a block 718.
  • an RSM instruction is issued to the processor on BLADE 2 to switch the processor's operating mode back to what it was before being switched to SMM.
  • the requester's processor may or may not be operating in SMM at this time.
  • the requester's (BLADE 1) processor was switched back out of SMM in a block 708.
  • a new SMI is asserted to activate the OOB communications handler in a block 722.
  • the OOB communication handler is already waiting to receive the returned data.
  • the returned data are received via OOB channel 610, and the data are passed to the requester's management/access driver (802-1) in a block 724.
  • this firmware driver passes the data back to OS device driver 800-1 in a block 726, leading to receipt of the data by the requester via the operating system in a block 728.
  • a similar resource access process is performed using a single global resource map in place of the local copies of the global resource map in the embodiment of Figure 8b .
  • many of the operations are the same as those discussed above with reference to Figure 8a, except that global resource manager 608 is employed as a proxy for accessing the resource, rather than using local global resource maps.
  • the resource access request is sent to global resource manager 608 via OOB channel 610 rather than directly to an identified resource target.
  • a lookup of global resource map 812a is performed to determine the resource target.
  • the data request is sent to the identified resource target, along with information identifying the requester.
  • the operations of blocks 712-728 are performed, with the exception of the optional operations of block 714.
  • a blade that hosts the global resource manager functions is identified through a nomination process, wherein each blade may include firmware for performing the management tasks.
  • the nomination scheme may be based on a physical assignment, such as a chassis slot, or may be based on an activation scheme, such as a first-in ordered scheme. For example, under a slot-based scheme, the blade having the lowest slot assignment for the group would be assigned the global resource manager tasks. If that blade was removed, the blade having the lowest slot assignment from among the remaining blades would be nominated to host the global resource manager. Under a first-in ordered scheme, each blade would be assigned an installation order identifier (e.g., number) based on the order the blades were inserted or activated.
  • the global management task would be assigned to the blade with the lowest number, that is, the first installed blade to begin with. Upon removal of that blade, the blade with the next lowest installation number would be nominated as the new global resource manager.
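  • A minimal sketch of the slot-based nomination rule is shown below; the structure names are assumptions made for illustration.

```c
/* Sketch: the active blade in the lowest-numbered chassis slot hosts the
 * global resource manager, and re-nomination runs when that blade is removed. */
#include <stdbool.h>

#define NUM_SLOTS 16

typedef struct {
    bool present;              /* blade installed and active in this slot */
} blade_status;

/* Returns the slot of the nominated manager, or -1 if no blade is present. */
int nominate_manager(const blade_status blades[NUM_SLOTS])
{
    for (int s = 0; s < NUM_SLOTS; s++) {
        if (blades[s].present) {
            return s;          /* lowest occupied slot wins */
        }
    }
    return -1;
}
```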
  • a redundancy scheme may be implemented wherein a second blade is nominated as a live back-up.
  • global resource mapping data may be stored in either system memory or as firmware variable data. If stored as firmware variable data, the mapping data will be able to persist across platform shutdowns.
  • the mapping data are stored in a portion of system memory that is hidden from the operating system. This hidden portion of system memory may include a portion of SMRAM or a portion of memory reserved by firmware during pre-boot operations.
  • Another way to persist global resource mapping data across shutdowns is to store the data on a persistent storage device, such as a disk drive. However, when employing a disk drive it is recommended that the mapping data are stored in a manner that is inaccessible to the platform operating system, such as in the host protected area (HPA) of the disk drive.
  • A more specific implementation of resource sharing is illustrated in Figures 9a-b and 10a-b.
  • the resources being shared comprise disk drives 218.
  • the storage resources provided by a plurality of disk drives 218 are aggregated to form a virtual storage volume "V:"
  • the storage resources for each of the disk drives are depicted as respective groups of I/O storage comprising 10 blocks.
  • each of Blades 1-16 is depicted as hosting a single disk drive 218; it will be understood that in actual implementations each blade may host 0-N disk drives (depending on its configuration), that the number of blocks for each disk drive may vary, and that the actual number of blocks will be several orders of magnitude higher than those depicted herein.
  • virtual storage volume V appears as a single storage device.
  • the shared storage resources may be configured as 1-N virtual storage volumes, with each volume spanning a respective set of storage devices.
  • virtual storage volume V spans 16 disk drives 218.
  • a global resource map comprising a lookup table 1000 is employed.
  • the lookup table maps respective ranges of I/O blocks to the blade on which the disk drive hosting the I/O blocks resides.
  • the map would contain further information identifying the specific storage device on each blade.
  • an addressing scheme would be employed rather than simply identifying a blade number; however, the illustrated blade number assignments are depicted for clarity and simplicity.
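  • For the simplified example of Figures 9a and 10a (10 blocks per blade, blades numbered 1-16), the lookup reduces to the small calculation sketched below; a real map would be table-driven and would also address a specific device on each blade.

```c
/* Sketch of mapping a virtual-volume block number to the hosting blade,
 * using the simplified per-blade capacity from the figures. */
#include <stdint.h>

#define BLOCKS_PER_BLADE 10u      /* per the simplified example in Figure 9a */

typedef struct {
    uint16_t blade;               /* which blade hosts the block        */
    uint64_t local_block;         /* block number on that blade's disk  */
} block_location;

block_location locate_block(uint64_t virtual_block)
{
    block_location loc;
    loc.blade       = (uint16_t)(virtual_block / BLOCKS_PER_BLADE) + 1;  /* blades 1-16 */
    loc.local_block = virtual_block % BLOCKS_PER_BLADE;
    return loc;
}
```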
  • Figures 9b and 10b illustrate a RAID embodiment 902 using mirroring and duplexing in accordance with the RAID (Redundant Array of Independent Disks)-1 standard.
  • respective sets of storage devices are paired, and data are mirrored by writing identical sets of data to each storage device in the pair.
  • the aggregated storage appears to the operating system as a virtual volume V:.
  • the number and type of storage devices are identical to those of embodiment 900, and thus the block I/O storage capacity of the virtual volume is cut in half to 80 blocks.
  • Global resource mappings are contained in a lookup table 1002 for determining what disk drives are to be accessed when the operating system makes a corresponding block I/O access request.
  • the disk drive pairs are divided into logical storage entities labeled A-H.
  • when a write access to a logical storage entity is performed, the data are written to each of the underlying storage devices. In contrast, during a read access, the data are (generally) retrieved from a single storage device. Depending on the complexity of the RAID-1 implementation, one of the pair may be assigned as the default read device, or both of the storage devices may facilitate this function, allowing for parallel reads (duplexing).
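  • The RAID-1 routing just described might look like the sketch below, with writes mirrored to both members of a pair and reads alternated between them; the pair table and remote I/O stubs are illustrative assumptions (in the described system such requests would travel over the OOB channel to the hosting blade's resource device driver).

```c
/* Sketch of RAID-1 write mirroring and simple read duplexing. */
#include <stdint.h>

typedef struct {
    uint16_t primary_blade;       /* e.g. Blade 1 for logical entity A */
    uint16_t mirror_blade;        /* e.g. Blade 2 for logical entity A */
} mirror_pair;

/* Transport stubs: requests reach the hosting blade over the OOB channel. */
extern int remote_write(uint16_t blade, uint64_t block, const void *buf);
extern int remote_read (uint16_t blade, uint64_t block, void *buf);

int raid1_write(const mirror_pair *p, uint64_t block, const void *buf)
{
    int rc1 = remote_write(p->primary_blade, block, buf);
    int rc2 = remote_write(p->mirror_blade,  block, buf);
    return (rc1 == 0 && rc2 == 0) ? 0 : -1;   /* both copies must succeed */
}

int raid1_read(const mirror_pair *p, uint64_t block, void *buf)
{
    /* Simple duplexing: alternate reads between the two devices. */
    static unsigned toggle;
    uint16_t blade = (toggle++ & 1) ? p->mirror_blade : p->primary_blade;
    return remote_read(blade, block, buf);
}
```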
  • a configuration may employ one or more disk drives 218 as "hot spares.”
  • the hot spare storage devices are not used during normal access operations, but rather sit in reserve to replace any device or blade that has failed.
  • in the event of such a failure, the data stored on the non-failed device in the pair can be used to rebuild the mirror on a replacement or hot-spare device.
  • the RAID-1 scheme may be deployed using either a single global resource manager, or via local management.
  • appropriate mapping information can be stored on each blade.
  • this information may be stored as firmware variable data, whereby it will persist through a platform reset or shutdown.
  • In addition to RAID-1, other RAID standard redundant storage schemes may be employed, including RAID-0, RAID-2, RAID-3, RAID-5, and RAID-10. Since each of these schemes involves some form of striping, the complexity of the global resource maps increases substantially. For this and other reasons, it will generally be easier to implement RAID-0, RAID-2, RAID-3, RAID-5, and RAID-10 via a central global resource manager rather than individual local managers.
  • Each blade may be considered to be a separate platform, such as a rack-mounted server or a stand-alone server, wherein resource sharing across a plurality of platforms may be effectuated via an OOB channel in a manner similar to that discussed above.
  • cabling and/or routing may be provided to support an OOB channel.
  • under a conventional rack configuration, a KVM (keyboard, video, and mouse) switch is employed to enable a single keyboard, video display, and mouse to be shared by all servers in the rack.
  • the KVM switch routes KVM signals from individual servers (via respective cables) to a single set of keyboard, video, and mouse I/O ports, whereby the KVM signals for a selected server may be accessed by turning a selection knob or otherwise selecting the input signal source.
  • the KVM switch may cost $1500 or more, in addition to costs for cabling and installation. KVM cabling also reduces ventilation and accessibility.
  • each of a plurality of rack-mounted servers 1100 is connected to the other servers via a switch 1102 and corresponding Ethernet cabling (depicted as a network cloud 1104).
  • Each server 1100 includes a mainboard 1106 having a plurality of components mounted thereon or coupled thereto, including a processor 1108, memory 1110, a firmware storage device 1112, and a NIC 1114.
  • a plurality of I/O ports are also coupled to the mainboard, including mouse and keyboard ports 1116 and 1118 and a video port 1120.
  • each server will also include a plurality of disk drives 1122.
  • a second MAC address assigned to the NIC 1114 for each server 1100 is employed to support an OOB channel 1124.
  • a keyboard 1126, video display 1128, and a mouse 1130 are coupled via respective cables to respective I/O ports 1118, 1120, and 1116 disposed on the back of a server 1100A.
  • Firmware on each of servers 1100 provides support for hosting a local global resource map 1132 that routes KVM signals to keyboard 1126, video display 1128, and mouse 1130 via server 1100A.
  • A protocol stack exemplifying how video signals (the most complicated of the KVM signals) are handled in accordance with one embodiment is shown in Figure 12.
  • video data used to produce corresponding video signals are rerouted from a server 1100N to server 1100A.
  • the software side of the protocol stack on server 1100N includes an operating system video driver 1200N, while the firmware components include a video router driver 1202N, a video device driver 1204N, and an OOB communications handler 604N.
  • the data flow is similar to that described above with reference to Figures 7 and 8a , and proceeds as follows.
  • the operating system running on a server 1100N receives a request to update the video display, typically in response to a user input to a runtime application.
  • the operating system employs its OS video driver 1200N to effectuate the change.
  • the OS video driver will generate video data based on a virtual video display maintained by the operating system, wherein a virtual-to-physical display mapping is performed. For example, the same text/graphic content displayed on monitors having different resolutions requires different video data particular to the resolutions.
  • the OS video driver then interfaces with video router driver 1202N to pass on the video data to what it thinks is the destination device, server 1100N's video chip 1206N.
  • from the operating system's perspective, video router driver 1202N appears to be the firmware video device driver for the server, i.e., video device driver 1204N.
  • video router driver 1202N looks up the video data destination server via a lookup of global resource map 1134N and asserts an SMI to initiate an OOB communication with server 1100A via respective OOB communication handlers 604N and 604A.
  • upon receiving the video data, it is written to video chip 1206A via video device driver 1204A. In a manner similar to that described above, this passing of video data may be directly from OOB communications handler 604A to video device driver 1204A, or it may be routed through video router driver 1202A. In response to receiving the video data, video chip 1206A updates its video output signal, which is received by video monitor 1128 via video port 1120. As an option, a verification lookup of global resource map 1134A may be performed to verify that server 1100A is the correct video data destination server.
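  • The routing decision made by the video router driver can be summarized by the sketch below: if the global resource map says the shared display is local, the data go to the local video chip; otherwise they are forwarded over the OOB channel to the owning server. All identifiers are illustrative assumptions.

```c
/* Sketch of the video router driver's local-versus-remote routing decision. */
#include <stdint.h>

extern uint16_t my_server_id;
extern uint16_t lookup_video_host(void);                        /* global resource map query */
extern void     local_video_write(const void *data, uint32_t len);
extern void     oob_forward_video(uint16_t host, const void *data, uint32_t len);

void route_video(const void *data, uint32_t len)
{
    uint16_t host = lookup_video_host();

    if (host == my_server_id) {
        local_video_write(data, len);           /* normal, non-shared path     */
    } else {
        oob_forward_video(host, data, len);     /* SMI + OOB channel, as above */
    }
}
```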
  • keyboard and mouse signals are handled in a similar manner.
  • operating systems typically maintain a virtual pointer map from which a virtual location of a pointing device can be cross-referenced to the virtual video display, thereby enabling the location of the cursor relative to the video display to be determined.
  • mouse information will traverse the reverse route of the video signals - that is, mouse input received via server 1100A will be passed via the OOB channel to a selected platform (e.g., server 1100N). This will require updating the global resource map 1134A on server 1100A to reflect the proper destination platform. Routing keyboard signals also will require a similar map update. A difference with keyboard signals is that they are bi-directional, so both input and output data rerouting is required.
  • An exemplary keyboard input signal processing protocol stack and flow diagram is shown in Figure 13.
  • the software side of the protocol stack on server 1100N includes an operating system keyboard driver 1300N, while the firmware components include a keyboard router driver 1302N, a keyboard device driver 1304N, and an OOB communications handler 604N. Similar components comprise the protocol stack of server 1100A.
  • a keyboard input signal is generated that is received by keyboard chip 1306A via keyboard port 1118A.
  • Keyboard chip 1306 then produces corresponding keyboard (KB) data that is received by keyboard device driver 1304A.
  • normally, keyboard device driver 1304A would interface with OS keyboard driver 1300A to pass the keyboard data to the operating system.
  • in this instance, however, the OS keyboard driver that is targeted to receive the keyboard data is running on server 1100N. Accordingly, keyboard data handled by keyboard device driver 1304A is passed to keyboard router driver 1302A to facilitate rerouting the keyboard data.
  • in response to receiving the keyboard data, keyboard router driver 1302A queries global resource map 1134A to determine the target server to which the keyboard data is to be rerouted (server 1100N in this example). The keyboard router driver then asserts an SMI to kick the processor running on server 1100A into SMM and passes the keyboard data along with server target identification data to OOB communications handler 604A. OOB communications handler 604A then interacts with OOB communication handler 604N to facilitate OOB communications between the two servers via OOB channel 1124, leading to the keyboard data being received by OOB communications handler 604N. In response to receiving the keyboard data, OOB communications handler 604N forwards the keyboard data to keyboard router driver 1302N.
  • the keyboard router driver may either directly pass the keyboard data to OS keyboard driver 1300N, or perform a routing verification lookup of global resource map 1134N to ensure that server 1100N is the proper server to receive the keyboard data prior to passing the data to OS keyboard driver 1300N.
  • the OS keyboard driver then processes the keyboard data and provides the processed data to a runtime application having the current focus.
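  • For illustration only, the following C sketch shows the receive-side handling suggested above: the keyboard router on the target server may verify against its own copy of the global resource map before handing the keyboard data to the OS keyboard driver. The structure layout and the helpers (this_blade_id, os_keyboard_deliver, report_stale_map) are assumptions made for the sketch, not definitions from this specification.

      /* Sketch of keyboard router driver 1302N receiving rerouted keyboard data. */
      #include <stdint.h>

      #define RES_KEYBOARD 2

      struct gr_entry {
          int resource_type;     /* e.g., RES_KEYBOARD                              */
          int target_blade;      /* blade whose OS should consume the input data   */
      };

      extern int  this_blade_id(void);
      extern void os_keyboard_deliver(const void *kb_data, uint32_t len);  /* to OS driver 1300N */
      extern void report_stale_map(void);   /* e.g., ask the global resource manager to resync   */

      void keyboard_router_receive(const struct gr_entry *map, int map_len,
                                   const void *kb_data, uint32_t len)
      {
          for (int i = 0; i < map_len; i++) {
              if (map[i].resource_type != RES_KEYBOARD)
                  continue;
              if (map[i].target_blade == this_blade_id())
                  os_keyboard_deliver(kb_data, len);   /* we are the proper destination        */
              else
                  report_stale_map();                  /* maps are out of sync; do not deliver */
              return;
          }
      }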
  • the firmware, which may typically comprise instructions and data for implementing the various operations described herein, will generally be stored on a non-volatile memory device, such as but not limited to a flash device, a ROM, or an EEPROM.
  • the instructions are machine readable, either directly by a real machine (i.e., machine code) or via interpretation by a virtual machine (e.g., interpreted byte-code).
  • embodiments of the invention may be used as or to support firmware executed upon some form of processing core (such as the CPU of a computer) or otherwise implemented or realized upon or within a machine-readable medium.
  • a machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a processor).
  • a machine-readable medium can include media such as a read only memory (ROM); a random access memory (RAM); magnetic disk storage media; optical storage media; and a flash memory device, etc.
  • a machine-readable medium can include propagated signals such as electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.).

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Stored Programmes (AREA)

Description

    FIELD OF THE INVENTION
  • The field of invention relates generally to clustered computing environments, such as blade server computing environments, and, more specifically but not exclusively relates to techniques for sharing resources hosted by individual platforms (nodes) to create global resources that may be shared across all nodes.
  • BACKGROUND INFORMATION
  • Information Technology (IT) managers and Chief Information Officers (CIOs) are under tremendous pressure to reduce capital and operating expenses without decreasing capacity. The pressure is driving IT management to provide computing resources that more efficiently utilize all infrastructure resources. To meet this objective, aspects of the following questions are often addressed: How to better manage server utilization; how to cope with smaller IT staff levels; how to better utilize floor space; and how to handle power issues.
  • Typically, a company's IT (information technology) infrastructure is centered around computer servers that are linked together via various types of networks, such as private local area networks (LANs) and private and public wide area networks (WANs). The servers are used to deploy various applications and to manage data storage and transactional processes. Generally, these servers will include stand-alone servers and/or higher density rack-mounted servers, such as 4U, 2U and 1U servers.
  • Recently, a new server configuration has been introduced that provides unprecedented server density and economic scalability. This server configuration is known as a "blade server." A blade server employs a plurality of closely-spaced "server blades" (blades) disposed in a common chassis to deliver high-density computing functionality. Each blade provides a complete computing platform, including one or more processors, memory, network connection, and disk storage integrated on a single system board. Meanwhile, other components, such as power supplies and fans, are shared among the blades in a given chassis and/or rack. This provides a significant reduction in capital equipment costs when compared to conventional rack-mounted servers.
  • Generally, blade servers are targeted towards two markets: high density server environments under which individual blades handle independent tasks, such as web hosting; and scaled computer cluster environments. A scalable compute cluster (SCC) is a group of two or more computer systems, also known as compute nodes, configured to work together to perform computationally intensive tasks. By configuring multiple nodes to work together to perform a computational task, the task can be completed much more quickly than if a single system performed the task. In theory, the more nodes that are applied to a task, the quicker the task can be completed. In reality, the number of nodes that can effectively be used to complete the task is dependent on the application used.
  • A typical SCC is built using Intel®-based servers running the Linux operating system and cluster infrastructure software. These servers are often referred to as commodity off the shelf (COTS) servers. They are connected through a network to form the cluster. An SCC normally needs anywhere from tens to hundreds of servers to be effective at performing computationally intensive tasks. Fulfilling this need to group a large number of servers in one location to form a cluster is a perfect fit for a blade server. The blade server chassis design and architecture provides the ability to place a massive amount of computer horsepower in a single location. Furthermore, the built-in networking and switching capabilities of the blade server architecture enable individual blades to be added or removed, enabling optimal scaling for a given task. With such flexibility, blade server-based SCCs provide a cost-effective alternative to other infrastructure for performing computational tasks, such as supercomputers.
  • As discussed above, each blade in a blade server is enabled to provide full platform functionality, thus being able to operate independent of other blades in the server. However, the resources available to each blade are likewise limited to its own resources. Thus, in many instances resources are inefficiently utilized. Under current architectures, there is no scheme that enables efficient server-wide resource sharing.
  • US 2002/0124134 (EMC Corporation) discloses a data storage system cluster architecture that includes integrated cached disc arrays (ICDAs) and cluster interconnect such as a set of fibre channel links. A switch network in each ICDA provides connections between the cluster interconnect and host interfaces, disk interfaces, and memory modules that may reside in the ICDA.
  • US 2002/0124114 (Bottom, David et al.) discloses a modular server architecture with Ethernet routed across a backplane utilising an integrated Ethernet switch module.
  • Aspects of the present invention are set out in the appended independent claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same becomes better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified:
    • Figure 1a is a frontal isometric view of an exemplary blade server chassis in which a plurality of server blades are installed;
    • Figure 1b is a rear isometric view of the blade server chassis of Figure 1a;
    • Figure 1c is an isometric frontal view of an exemplary blade server rack in which a plurality of rack-mounted blade server chassis corresponding to Figures 1a and 1b are installed;
    • Figure 2 shows details of the components of a typical server blade;
    • Figure 3 is a schematic block diagram illustrating various firmware and operating system components used to deploy power management in accordance with the ACPI standard;
    • Figure 4 is a flowchart illustrating operations and logic employed during blade initialization to configure a blade for implementing a power management scheme in accordance with one embodiment of the invention;
    • Figure 5 is a flowchart illustrating operations and logic employed during an initialization process to set up resource sharing in accordance with one embodiment of the invention;
    • Figure 6 is a schematic diagram illustrating various data flows that occur during the initialization process of Figure 5;
    • Figure 7 is a flowchart illustrating operations and logic employed in response to a resource access request received at a requesting computing platform to service the request in accordance with one embodiment of the invention, wherein the servicing resource is hosted by another computing platform;
    • Figures 8a and 8b are schematic diagrams illustrating data flows between a pair of computing platforms during a shared resource access, wherein the scheme illustrated in Figure 8a employs local global resource maps, and the scheme illustrated in Figure 8b employs a single global resource map hosted by a global resource manager;
    • Figure 9a is a schematic diagram illustrating a share storage resource configured as a virtual storage volume that aggregates the storage capacity of a plurality of disk drives;
    • Figure 9b is a schematic diagram illustrating a variance of the shared storage resource scheme of Figure 9a, wherein a RAID-1 implementation is employed during resource accesses;
    • Figure 10a is a schematic diagram illustrating further details of the virtual volume storage scheme of Figure 9a;
    • Figure 10b is a schematic diagram illustrating further details of the RAID-1 implementation of Figure 9b;
    • Figure 11 is a schematic diagram illustrating a shared keyboard, video, and mouse (KVM) access scheme;
  • Figure 12 is a schematic diagram illustrating data flows between a pair of computing platforms to support sharing a video resource; and
  • Figure 13 is a schematic diagram illustrating data flows between a pair of computing platforms to support sharing user input resources.
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • Embodiments of methods and computer components and systems for performing resource sharing across clustered platform environments, such as a blade server environment, are described herein. In the following description, numerous specific details are set forth to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
  • Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
  • In accordance with aspects of the invention, techniques are disclosed herein for sharing resources across clustered platform environments in a manner under which resources hosted by individual platforms are made accessible to other platform nodes. The techniques employ firmware-based functionality that provides "behind the scenes" access mechanisms without requiring any OS complicity. In fact, the resource sharing and access operations are completely transparent to operating systems running on the blades, and thus operating system independent. Thus, the capabilities afforded by the novel techniques disclosed herein may be employed in existing and future distributed platform environments without requiring any changes to the operating systems targeted for the environments.
  • In accordance with one aspect, the resource-sharing mechanism is effectuated by several platforms that "expose" resources that are aggregated to form global resources. Each platform employs a respective set of firmware that runs prior to the operating system load (pre-boot) and coincident with the operating system runtime. In one embodiment, runtime deployment is facilitated by a hidden execution mode known as the System Management Mode (SMM), which has the ability to receive and respond to periodic System Management Interrupts (SMI) to allow resource sharing and access information to be transparently passed to firmware SMM code configured to effectuate the mechanisms. The SMM resource management code conveys information and messaging to other nodes via an out-of-band (OOB) network or communication channel in an OS-transparent manner.
  • For illustrative purposes, several embodiments of the invention are disclosed below in the context of a blade server environment. As an overview, typical blade server components and systems for which resource sharing schemes in accordance with embodiments of the invention may be generally implemented are shown in Figures 1a-c and 2. Under a typical configuration, a rack-mounted chassis 100 is employed to provide power and communication functions for a plurality of blades 102, each of which occupies a corresponding slot. (It is noted that all slots in a chassis do not need to be occupied.) In turn, one or more chassis 100 may be installed in a blade server rack 103 shown in Figure 1c. Each blade is coupled to an interface plane 104 (i.e., a backplane or mid-plane) upon installation via one or more mating connectors. Typically, the interface plane will include a plurality of respective mating connectors that provide power and communication signals to the blades. Under current practices, many interface planes provide "hot-swapping" functionality - that is, blades can be added or removed ("hot-swapped") on the fly, without taking the entire chassis down through appropriate power and data signal buffering.
  • A typical mid-plane interface plane configuration is shown in Figures 1a and 1b. The backside of interface plane 104 is coupled to one or more power supplies 106. Oftentimes, the power supplies are redundant and hot-swappable, being coupled to appropriate power planes and conditioning circuitry to enable continued operation in the event of a power supply failure. In an optional configuration, an array of power supplies may be used to supply power to an entire rack of blades, wherein there is not a one-to-one power supply-to-chassis correspondence. A plurality of cooling fans 108 are employed to draw air through the chassis to cool the server blades.
  • An important feature required of all blade servers is the ability to communicate externally with other IT infrastructure. This is typically facilitated via one or more network connect cards 110, each of which is coupled to interface plane 104. Generally, a network connect card may include a physical interface comprising a plurality of network port connections (e.g., RJ-45 ports), or may comprise a high-density connector designed to directly connect to a network device, such as a network switch, hub, or router.
  • Blade servers usually provide some type of management interface for managing operations of the individual blades. This may generally be facilitated by an out-of-band network or communication channel or channels. For example, one or more buses for facilitating a "private" or "management" network and appropriate switching may be built into the interface plane, or a private network may be implemented through closely-coupled network cabling and networking equipment. Optionally, the switching and other management functionality may be provided by a management card 112 that is coupled to the backside or frontside of the interface plane. As yet another option, a management server may be employed to manage blade activities, wherein communications are handled via standard computer networking infrastructure, such as Ethernet.
  • With reference to Figure 2, further details of an exemplary blade 200 are shown. As discussed above, each blade comprises a separate computing platform that is configured to perform server-type functions, i.e., is a "server on a card." Accordingly, each blade includes components common to conventional servers, including a main circuit board 201 providing internal wiring (i.e., buses) for coupling appropriate integrated circuits (ICs) and other components mounted to the board. These components include one or more processors 202 coupled to system memory 204 (e.g., DDR RAM), cache memory 206 (e.g., SDRAM), and a firmware storage device 208 (e.g., flash memory). A "public" NIC (network interface) chip 210 is provided for supporting conventional network communication functions, such as to support communication between blades and external network infrastructure. Other illustrated components include status LEDs 212, an RJ-45 console port 214, and an interface plane connector 216. Additional components include various passive components (e.g., resistors, capacitors), power conditioning components, and peripheral device connectors.
  • Generally, each blade 200 will also provide on-board storage. This is typically facilitated via one or more built-in disk controllers and corresponding connectors to which one or more disk drives 218 are coupled. For example, typical disk controllers include Ultra ATA controllers, SCSI controllers, and the like. As an option, the disk drives may be housed separate from the blades in the same or a separate rack, such as might be the case when a network-attached storage (NAS) appliance is employed to store large volumes of data.
  • In accordance with aspects of the invention, facilities are provided for out-of-band communication between blades, and optionally, dedicated management components. As used herein, an out-of-band communication channel comprises a communication means that supports communication between devices in an OS-transparent manner - that is, a means to enable inter-blade communication without requiring operating system complicity. Generally, various approaches may be employed to provide the OOB channel. These include but are not limited to using a dedicated bus, such as a system management bus that implements the SMBUS standard (www.smbus.org), a dedicated private or management network, such as an Ethernet-based network using VLAN 802.1Q, or a serial communication scheme, e.g., employing the RS-485 serial communication standard. One or more appropriate IC's for supporting such communication functions are also mounted to main board 201, as depicted by an OOB channel chip 220. At the same time, interface plane 104 will include corresponding buses or built-in network traces to support the selected OOB scheme. Optionally, in the case of a wired network scheme (e.g., Ethernet), appropriate network cabling and networking devices may be deployed inside or external to chassis 100.
  • As discussed above, embodiments of the invention employ a firmware-based scheme for effectuating a resource sharing set-up and access mechanism to enable sharing of resources across blade server nodes. In particular, resource management firmware code is loaded during initialization of each blade and made available for access during OS run-time. Also during initialization, resource information is collected, and global resource information is built. Based on the global resource information, appropriate global resource access is provided back to each blade. This information is handed off to the operating system upon its initialization, such that the global resource appears (from the OS standpoint) as a local resource. During OS runtime operations, accesses to the shared resources are handled via interaction between OS and/or OS drivers and corresponding firmware in conjunction with resource access management that is facilitated via the OOB channel.
  • In one embodiment, resource sharing is facilitated via an extensible firmware framework known as Extensible Firmware Interface (EFI) (specifications and examples of which may be found at http://developer.intel.com/technology/efi). EFI is a public industry specification (current version 1.10 released January 7, 2003) that describes an abstract programmatic interface between platform firmware and shrink-wrap operating systems or other custom application environments. The EFI framework includes provisions for extending BIOS functionality beyond that provided by the BIOS code stored in a platform's BIOS device (e.g., flash memory). More particularly, EFI enables firmware, in the form of firmware modules and drivers, to be loaded from a variety of different resources, including primary and secondary flash devices, option ROMs, various persistent storage devices (e.g., hard disks, CD ROMs, etc.), and even over computer networks.
  • Figure 3 shows an event sequence/architecture diagram used to illustrate operations performed by a platform under the framework in response to a cold boot (e.g., a power off/on reset). The process is logically divided into several phases, including a pre-EFI Initialization Environment (PEI) phase, a Driver Execution Environment (DXE) phase, a Boot Device Selection (BDS) phase, a Transient System Load (TSL) phase, and an operating system runtime (RT) phase. The phases build upon one another to provide an appropriate run-time environment for the OS and platform.
  • The PEI phase provides a standardized method of loading and invoking specific initial configuration routines for the processor (CPU), chipset, and motherboard. The PEI phase is responsible for initializing enough of the system to provide a stable base for the follow-on phases. Initialization of the platform's core components, including the CPU, chipset and main board (i.e., motherboard) is performed during the PEI phase. This phase is also referred to as the "early initialization" phase. Typical operations performed during this phase include the POST (power-on self test) operations, and discovery of platform resources. In particular, the PEI phase discovers memory and prepares a resource map that is handed off to the DXE phase. The state of the system at the end of the PEI phase is passed to the DXE phase through a list of position independent data structures called Hand Off Blocks (HOBs).
  • The DXE phase is the phase during which most of the system initialization is performed. The DXE phase is facilitated by several components, including the DXE core 300, the DXE dispatcher 302, and a set of DXE drivers 304. The DXE core 300 produces a set of Boot Services 306, Runtime Services 308, and DXE Services 310. The DXE dispatcher 302 is responsible for discovering and executing DXE drivers 304 in the correct order. The DXE drivers 304 are responsible for initializing the processor, chipset, and platform components as well as providing software abstractions for console and boot devices. These components work together to initialize the platform and provide the services required to boot an operating system. The DXE and the Boot Device Selection phases work together to establish consoles and attempt the booting of operating systems. The DXE phase is terminated when an operating system successfully begins its boot process (i.e., the BDS phase starts). Only the runtime services and selected DXE services provided by the DXE core and selected services provided by runtime DXE drivers are allowed to persist into the OS runtime environment. The result of DXE is the presentation of a fully formed EFI interface.
  • The DXE core is designed to be completely portable with no CPU, chipset, or platform dependencies. This is accomplished by designing in several features. First, the DXE core only depends upon the HOB list for its initial state. This means that the DXE core does not depend on any services from a previous phase, so all the prior phases can be unloaded once the HOB list is passed to the DXE core. Second, the DXE core does not contain any hard coded addresses. This further means the DXE core can be loaded anywhere in physical memory, and it can function correctly no matter where physical memory or firmware segments are located in the processor's physical address space. Third, the DXE core does not contain any CPU-specific, chipset-specific, or platform-specific information. Instead, the DXE core is abstracted from the system hardware through a set of architectural protocol interfaces. These architectural protocol interfaces are produced by DXE drivers 304, which are invoked by DXE Dispatcher 302.
  • The DXE core produces an EFI System Table 400 and its associated set of Boot Services 306 and Runtime Services 308, as shown in Figure 4. The DXE core also maintains a handle database 402. The handle database comprises a list of one or more handles, wherein a handle is a list of one or more unique protocol GUIDs (Globally Unique Identifiers) that map to respective protocols 404. A protocol is a software abstraction for a set of services. Some protocols abstract I/O devices, and other protocols abstract a common set of system services. A protocol typically contains a set of APIs and some number of data fields. Every protocol is named by a GUID, and the DXE Core produces services that allow protocols to be registered in the handle database. As the DXE Dispatcher executes DXE drivers, additional protocols will be added to the handle database including the architectural protocols used to abstract the DXE Core from platform specific details.
  • The Boot Services comprise a set of services that are used during the DXE and BDS phases. Among others, these services include Memory Services, Protocol Handler Services, and Driver Support Services: Memory Services provide services to allocate and free memory pages and allocate and free the memory pool on byte boundaries. It also provides a service to retrieve a map of all the current physical memory usage in the platform. Protocol Handler Services provides services to add and remove handles from the handle database. It also provides services to add and remove protocols from the handles in the handle database. Additional services are available that allow any component to lookup handles in the handle database, and open and close protocols in the handle database. Driver Support Services provides services to connect and disconnect drivers to devices in the platform. These services are used by the BDS phase to either connect all drivers to all devices, or to connect only the minimum number of drivers to devices required to establish the consoles and boot an operating system (i.e., for supporting a fast boot mechanism).
  • In contrast to Boot Services, Runtime Services are available both during pre-boot and OS runtime operations. One of the Runtime Services that is leveraged by embodiments disclosed herein is the Variable Services. As described in further detail below, the Variable Services provide services to lookup, add, and remove environmental variables from both volatile and non-volatile storage.
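  • The Variable Services referenced above are exposed through the runtime GetVariable/SetVariable entry points. For illustration only, the C sketch below shows how firmware might persist a small piece of resource-sharing state as a non-volatile variable. The simplified type definitions and prototypes stand in for the real EFI headers, and the variable name and vendor GUID value are assumptions made for the sketch.

      /* Sketch of persisting resource-sharing state via the EFI Variable Services. */
      #include <stddef.h>
      #include <stdint.h>

      typedef uint64_t EFI_STATUS;
      typedef uint16_t CHAR16;
      typedef struct { uint32_t d1; uint16_t d2, d3; uint8_t d4[8]; } EFI_GUID;

      #define EFI_VARIABLE_NON_VOLATILE        0x00000001
      #define EFI_VARIABLE_BOOTSERVICE_ACCESS  0x00000002
      #define EFI_VARIABLE_RUNTIME_ACCESS      0x00000004

      /* Provided through the Runtime Services table (308); prototypes simplified. */
      extern EFI_STATUS SetVariable(CHAR16 *name, EFI_GUID *vendor,
                                    uint32_t attributes, size_t size, void *data);
      extern EFI_STATUS GetVariable(CHAR16 *name, EFI_GUID *vendor,
                                    uint32_t *attributes, size_t *size, void *data);

      /* Illustrative variable name and vendor GUID (not defined by the text). */
      static CHAR16   gVarName[]  = { 'B','l','a','d','e','S','h','a','r','e', 0 };
      static EFI_GUID gShareGuid  = { 0x12345678, 0x1234, 0x5678,
                                      { 0, 1, 2, 3, 4, 5, 6, 7 } };

      /* Save a blade's shared-resource descriptor so it can persist across resets. */
      EFI_STATUS save_share_state(void *descriptor, size_t len)
      {
          return SetVariable(gVarName, &gShareGuid,
                             EFI_VARIABLE_NON_VOLATILE |
                             EFI_VARIABLE_BOOTSERVICE_ACCESS |
                             EFI_VARIABLE_RUNTIME_ACCESS,
                             len, descriptor);
      }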
  • The DXE Services Table includes data corresponding to a first set of DXE services 406A that are available during pre-boot only, and a second set of DXE services 406B that are available during both pre-boot and OS runtime. The pre-boot only services include Global Coherency Domain Services, which provide services to manage I/O resources, memory mapped I/O resources, and system memory resources in the platform. Also included are DXE Dispatcher Services, which provide services to manage DXE drivers that are being dispatched by the DXE dispatcher.
  • The services offered by each of Boot Services 306, Runtime Services 308, and DXE services 310 are accessed via respective sets of API's 312, 314, and 316. The API's provide an abstracted interface that enables subsequently loaded components to leverage selected services provided by the DXE Core.
  • After DXE Core 300 is initialized, control is handed to DXE Dispatcher 302. The DXE Dispatcher is responsible for loading and invoking DXE drivers found in firmware volumes, which correspond to the logical storage units from which firmware is loaded under the EFI framework. The DXE dispatcher searches for drivers in the firmware volumes described by the HOB List. As execution continues, other firmware volumes might be located. When they are, the dispatcher searches them for drivers as well.
  • There are two subclasses of DXE drivers. The first subclass includes DXE drivers that execute very early in the DXE phase. The execution order of these DXE drivers depends on the presence and contents of an a priori file and the evaluation of dependency expressions. These early DXE drivers will typically contain processor, chipset, and platform initialization code. These early drivers will also typically produce the architectural protocols that are required for the DXE core to produce its full complement of Boot Services and Runtime Services.
  • The second class of DXE drivers are those that comply with the EFI 1.10 Driver Model. These drivers do not perform any hardware initialization when they are executed by the DXE dispatcher. Instead, they register a Driver Binding Protocol interface in the handle database. The set of Driver Binding Protocols are used by the BDS phase to connect the drivers to the devices required to establish consoles and provide access to boot devices. The DXE Drivers that comply with the EFI 1.10 Driver Model ultimately provide software abstractions for console devices and boot devices when they are explicitly asked to do so.
  • Any DXE driver may consume the Boot Services and Runtime Services to perform its functions. However, the early DXE drivers need to be aware that not all of these services may be available when they execute because all of the architectural protocols might not have been registered yet. DXE drivers must use dependency expressions to guarantee that the services and protocol interfaces they require are available before they are executed.
  • The DXE drivers that comply with the EFI 1.10 Driver Model do not need to be concerned with this possibility. These drivers simply register the Driver Binding Protocol in the handle database when they are executed. This operation can be performed without the use of any architectural protocols. In connection with registration of the Driver Binding Protocols, a DXE driver may "publish" an API by using the InstallConfigurationTable function. These published APIs are depicted as API's 318. Under EFI, publication of an API exposes the API for access by other firmware components. The API's provide interfaces for the Device, Bus, or Service to which the DXE driver corresponds during their respective lifetimes.
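  • For illustration only, the C sketch below shows one way a driver might publish a table of entry points through InstallConfigurationTable. The GUID value and the layout of the published function table are assumptions made for the sketch; only the idea of associating a GUID with a caller-supplied table comes from the text above.

      /* Sketch of a DXE driver publishing an access API via InstallConfigurationTable. */
      #include <stdint.h>

      typedef uint64_t EFI_STATUS;
      typedef struct { uint32_t d1; uint16_t d2, d3; uint8_t d4[8]; } EFI_GUID;

      /* Boot Services entry (306): associates a GUID with a caller-supplied table. */
      extern EFI_STATUS InstallConfigurationTable(EFI_GUID *guid, void *table);

      /* Hypothetical API a resource-sharing driver might expose. */
      typedef struct {
          EFI_STATUS (*open_resource)(uint32_t resource_id);
          EFI_STATUS (*read_blocks)(uint32_t resource_id, uint64_t lba,
                                    uint32_t count, void *buffer);
      } SHARED_RESOURCE_API;

      static EFI_GUID gSharedResApiGuid = { 0x0badbeef, 0x0001, 0x0002,
                                            { 8, 9, 10, 11, 12, 13, 14, 15 } };

      EFI_STATUS publish_shared_resource_api(SHARED_RESOURCE_API *api)
      {
          /* After this call, other firmware components can locate the table by
           * its GUID and invoke the driver's entry points. */
          return InstallConfigurationTable(&gSharedResApiGuid, api);
      }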
  • The BDS architectural protocol executes during the BDS phase. The BDS architectural protocol locates and loads various applications that execute in the pre-boot services environment. Such applications might represent a traditional OS boot loader, or extended services that might run instead of, or prior to loading the final OS. Such extended pre-boot services might include setup configuration, extended diagnostics, flash update support, OEM value-adds, or the OS boot code. A Boot Dispatcher 320 is used during the BDS phase to enable selection of a Boot target, e.g., an OS to be booted by the system.
  • During the TSL phase, a final OS Boot loader 322 is run to load the selected OS. Once the OS has been loaded, there is no further need for the Boot Services 306, and for many of the services provided in connection with DXE drivers 304 via API's 318, as well as DXE Services 406A. Accordingly, these reduced sets of API's that may be accessed during OS runtime are depicted as API's 316A and 318A in Figure 3.
  • An OS-transparent out-of-band communication scheme is employed to allow various types of resources to be shared across server nodes. At the same time, firmware-based components (e.g., firmware, drivers and API's) are employed to facilitate low-level access to the resources and rerouting of data over the OOB channel. The scheme may be effectuated across multiple computing platforms, including groups of blades, individual chassis, racks, or groups of racks. During system initialization, firmware provided on each platform is loaded and executed to set up the OOB channel and appropriate resource access and data re-routing mechanisms. Each blade then transmits information about its shared resources over the OOB to a global resource manager. The global resource manager aggregates the data and configures a "virtual" global resource. Global resource configuration data in the form of global resource descriptors is then sent back to the blades to apprise the blades of the configuration and access mechanism for the global resource. Drivers are then configured to support access to the global resource. Subsequently, the global resource descriptors are handed off to the operating system during OS load, wherein the virtual global resource appears as a local device to the operating system, and thus is employed as such during OS runtime operations without requiring any modification to the OS code. Flowchart operations and logic according to one embodiment of the process are shown in Figures 5 and 7, while corresponding operations and interactions between various components are schematically illustrated in Figures 6, 8a, and 8b.
  • With reference to Figure 5, the process begins by performing several initialization operations on each blade to set up the resource device drivers and the OOB communications framework. In response to a power on or reset event depicted in a start block 500, the system performs pre-boot system initialization operations in the manner discussed above with reference to Figure 3. First, early initialization, operations are performed in a block 502 by loading and executing firmware stored in each blade's boot firmware device (BFD). Under EFI, the BFD comprises the firmware device that stores firmware for booting the system; the BFD for server blade 200 comprises firmware device 208.
  • Continuing with block 502, processor 202 executes reset stub code that jumps execution to the base address of a boot block of the BFD via a reset vector. The boot block contains firmware instructions for performing early initialization, and is executed by processor 202 to initialize the CPU, chipset, and motherboard. (It is noted that during a warm boot (reset) early initialization is not performed, or is at least performed in a limited manner.) Execution of firmware instructions corresponding to an EFI core are executed next, leading to the DXE phase. During DXE core initialization, the Variable Services are setup in the manner discussed above with reference to Figures 3 and 4. After the DXE core is initialized, DXE dispatcher 302 begins loading DXE drivers 304. Each DXE driver corresponds to a system component, and provides an interface for directly accessing that component. Included in the DXE drivers is an OOB monitor driver that will be subsequently employed for facilitating OOB communications.
  • Next, in a block 504, the OOB monitor driver is installed in a protected area in each blade. As discussed above, an out-of-band communication channel or network that operates independent of network communications that are managed by the operating systems is employed to facilitate inter-blade communication in an OS-transparent manner.
  • During the foregoing system initialization operations of block 502, a portion of system memory 204 is setup to be employed for system management purposes. This portion of memory is referred to as SMRAM 600 (see Figure 6), and is hidden from the subsequently-loaded operating system.
  • In conjunction with the firmware load, SMM OOB communication code 602 stored in firmware is loaded into SMRAM 600, and a corresponding OOB communications SMM handler 604 for handling OOB communications is set up. An SMM handler is a type of interrupt handler, and is invoked in response to a system management interrupt (SMI). In turn, an SMI interrupt may be asserted via an SMI pin on the system's processor. In response to an SMI interrupt, the processor stores its current context (i.e., information pertaining to current operations, including its current execution mode, stack and register information, etc.), and switches its execution mode to its system management mode. SMM handlers are then sequentially dispatched to determine if they are the appropriate handler for servicing the SMI event. This determination is made very early in the SMM handler code, such that there is little latency in determining which handler is appropriate. When this handler is identified, it is allowed to execute to completion to service the SMI event. After the SMI event is serviced, an RSM (resume) instruction is issued to return the processor to its previous execution mode using the previously saved context data. The net result is that SMM operation is completely transparent to the operating system.
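  • For illustration only, the C sketch below outlines the dispatch pattern just described: each SMM handler first checks whether the SMI belongs to it, and only the OOB communications handler services OOB traffic. How an SMI source is actually identified is chipset-specific, so the event-check and I/O helpers here are placeholders rather than real register accesses.

      /* Illustrative OOB-communications SMM handler (604). */
      #include <stdbool.h>
      #include <stdint.h>

      #define NOT_MY_EVENT   0
      #define EVENT_HANDLED  1

      extern bool     oob_smi_pending(void);                  /* placeholder: is this our SMI?    */
      extern uint32_t oob_receive(void *buf, uint32_t max);   /* pull a message off the OOB link  */
      extern void     oob_dispatch_message(const void *msg, uint32_t len);

      /* Called in turn by the SMM dispatcher for every SMI until a handler claims
       * the event; the ownership test is done first to keep latency low. */
      int oob_comm_smm_handler(void)
      {
          uint8_t  msg[512];
          uint32_t len;

          if (!oob_smi_pending())
              return NOT_MY_EVENT;        /* let the next SMM handler examine the SMI */

          len = oob_receive(msg, sizeof(msg));
          if (len > 0)
              oob_dispatch_message(msg, len);   /* e.g., a rerouted resource access request */

          return EVENT_HANDLED;           /* dispatcher issues RSM once servicing completes */
      }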
  • Returning to the flowchart of Figure 5, a determination is made in a decision block 506 to whether one or more sharable resources hosted by the blade is/are discovered. Generally, a shared resource is any blade component or device that is to be made accessible for shared access. Such components and devices include but are not limited to fixed storage devices, removable media devices, input devices (e.g., keyboard, mouse), video devices, audio devices, volatile memory (i.e., system RAM), and non-volatile memory.
  • If the answer to decision block 506 is YES, the logic proceeds to perform the loop operations defined within respective start and end loop blocks 508 and 509 for each sharable resource that is discovered. This includes operations in a block 510, wherein a device path to describe the shared resource is constructed and configuration parameters are collected. The device path provides external users with a means for accessing the resource. The configuration parameters are used to build global resources, as described below in further detail.
  • After the operations of block 510 are performed, in the illustrated embodiment the device path and resource configuration information is transmitted or broadcast to a global resource manager 608 via an OOB communication channel 610 in a block 512. The global resource manager may generally be hosted by an existing component, such as one of the blades or management card 112. As described below, in one embodiment a plurality of local global resource managers are employed, wherein global resource management is handled through a collective process rather than employing a single manager. In cases in which the address of the component hosting the global resource manager is known a priori, a selective transmission to that component may be employed. In cases in which the address is not known, a message is first broadcast over the OOB channel to identify the location of the host component.
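  • For illustration only, the C sketch below shows the kind of per-resource record a blade might transmit in block 512. The text only states that a device path and configuration parameters are sent, so the field layout and the oob_broadcast helper are assumptions made for the sketch.

      /* Sketch of a per-resource advertisement sent to the global resource manager. */
      #include <stdint.h>
      #include <string.h>

      enum shared_resource_type { SR_DISK = 1, SR_RAM = 2, SR_VIDEO = 3, SR_INPUT = 4 };

      struct shared_resource_advert {
          uint16_t blade_slot;        /* which blade hosts the resource            */
          uint16_t resource_type;     /* one of shared_resource_type               */
          char     device_path[64];   /* textual device path built in block 510    */
          uint64_t capacity_blocks;   /* for storage: number of I/O blocks exposed */
          uint32_t block_size;        /* bytes per block                           */
      };

      extern void oob_broadcast(const void *msg, uint32_t len);   /* over OOB channel 610 */

      void advertise_resource(uint16_t slot, uint16_t type,
                              const char *path, uint64_t blocks, uint32_t blk_size)
      {
          struct shared_resource_advert adv = {0};

          adv.blade_slot      = slot;
          adv.resource_type   = type;
          adv.capacity_blocks = blocks;
          adv.block_size      = blk_size;
          strncpy(adv.device_path, path, sizeof(adv.device_path) - 1);

          oob_broadcast(&adv, sizeof(adv));   /* block 512: announce to the manager */
      }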
  • OOB communications under the aforementioned SMM hidden execution mode are effectuated in the following manner. First, it is necessary to switch the operating mode of the processors on the blades for which inter-blade communication is to be performed to SMM. Therefore, an SMI is generated to cause the processor to switch into SMM, as shown occurring with BLADE 1 in Figure 6. This may be effectuated through one of two means - either an assertion of the processor's SMI pin (i.e., a hardware-based generation), or via issuance of an "SMI" instruction (i.e., a software-based generation).
  • In one embodiment an assertion of the SMI pin may be produced by placing an appropriate signal on a management bus or the like. For example, when an SMBUS is deployed using I2C, one of the bus lines may be hardwired to the SMI pins of each blade's processor via that blade's connector. Optionally, the interface plane may provide a separate means for producing a similar result. Depending on the configuration, all SMI pins may be commonly tied to a single bus line, or the bus may be structured to enable independent SMI pin assertions for respective blades. As yet another option, certain network interface chips (NIC), such as those made by Intel®, provide a second MAC address for use as a "back channel" in addition to a primary MAC address used for conventional network communications. Furthermore, these NICs provide a built-in system management feature, wherein an incoming communication referencing the second MAC address causes the NIC to assert an SMI signal. This scheme enables an OOB channel to be deployed over the same cabling as the "public" network (not shown).
  • In one embodiment, a firmware driver is employed to access the OOB channel. For instance, when the OOB channel is implemented via a network or serial means, an appropriate firmware driver will be provided to access the network or serial port. Since the configuration of the firmware driver will be known in advance (and thus independent of the operating system), the SMM handler may directly access the OOB channel via the firmware driver. Optionally, in the case of a dedicated management bus, such as I2C, direct access may be available to the SMM handler without a corresponding firmware driver, although this latter option could also be employed.
  • In response to assertion of the SMI pin, the asserted processor switches to SMM execution mode and begins dispatch of its SMM handler(s) until the appropriate handler (e.g., communication handler 604) is dispatched to facilitate the OOB communication. Thus, in each of the OOB communication network/channel options, the OOB communications are performed when the blade processors are operating in SMM, whereby the communications are transparent to the operating systems running on those blades.
  • In accordance with a block 514, the shared device path and resource configuration information is received by global resource manager 608. In a similar manner, shared device path and resource configuration information for other blades is received by the global resource manager.
  • Individual resources may be combined to form a global resource. For example, storage provided by individual storage devices (e.g., hard disks and system RAM) may be aggregated to form one or more "virtual" storage volumes. This is accomplished, in part, by aggregating the resource configuration information in a block 516. In the case of hard disk resources, the resource configuration information might typically include storage capacity, such as number of storage blocks, partitioning information, and other information used for accessing the device. After the resource configuration information is aggregated, a global resource access mechanism (e.g., API) and global resource descriptor 612 are built. The global resource descriptor contains information identifying how to access the resource, and describes the configuration of the resource (from a global and/or local perspective).
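  • For illustration only, the C sketch below shows one way the aggregation of block 516 might be represented: each advertised disk becomes an extent of a virtual volume, with extents stacked back to back. The structure names and limits are assumptions made for the sketch; the text describes the idea, not a concrete format.

      /* Sketch of block 516: aggregating advertised storage into a virtual volume. */
      #include <stdint.h>

      #define MAX_EXTENTS 64

      struct volume_extent {
          uint64_t first_block;   /* first global block number served by this extent */
          uint64_t num_blocks;    /* size of the extent                              */
          uint16_t host_blade;    /* blade hosting the underlying disk drive 218     */
      };

      struct global_volume_desc {
          uint64_t total_blocks;
          uint32_t extent_count;
          struct volume_extent extents[MAX_EXTENTS];
      };

      /* Append one advertised disk to the virtual volume; returns -1 when full. */
      int add_extent(struct global_volume_desc *vol, uint16_t blade, uint64_t blocks)
      {
          if (vol->extent_count >= MAX_EXTENTS)
              return -1;

          struct volume_extent *e = &vol->extents[vol->extent_count++];
          e->first_block = vol->total_blocks;   /* extents are stacked back to back */
          e->num_blocks  = blocks;
          e->host_blade  = blade;
          vol->total_blocks += blocks;
          return 0;
      }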
  • After the operations of block 516 are completed, the global resource descriptor 612 is transmitted to active nodes in the rack via the OOB channel in a block 518. This transmission operation may be performed using node-to-node OOB communications, or via an OOB broadcast. In response to receiving the global resource descriptor, it is stored by the receiving node in a block 520, leading to processing the next resource. The operations of blocks 510, 512, 514, 516, 518, and 520 are repeated in a similar manner for each resource that is discovered until all sharable resources are processed.
  • In accordance with one embodiment, access to shared resources is provided by corresponding firmware device drivers that are configured to access discovered shared resources via their global resource API's in a block 522. Further details of this access scheme when applied to specific resources are discussed below. As depicted by a continuation block 524, pre-boot platform initialization operations are then continued as described above to prepare for the OS load.
  • During the OS load in a block 526, global resource descriptors corresponding to any shared resources that are discovered are handed off to the operating system. It is noted that the global resource descriptors that are handed off to the OS may or may not be identical to those built in block 516. Essentially, the global resource descriptors contain information to enable the operating system to configure access to the resource via its own device drivers. For example, in the case of a single shared storage volume, the OS receives information indicating that it has access to a "local" storage device (or optionally a networked storage device) having a storage capacity that spans the individual storage capacities of the individual storage devices that are shared. In the case of multiple shared storage volumes, respective storage capacity information will be handed off to the OS for each volume. The completion of the OS load leads to continued OS runtime operations, as depicted by a continuation block 528.
  • During OS runtime, global resources are accessed via a combination of the operating system and firmware components configured to provide "low-level" access to the shared resource. Under modern OS/Firmware architectures, the device access scheme is intentionally abstracted such that the operating system vendor is not required to write a device driver that is specific to each individual device. Rather, these more explicit access details are provided by corresponding firmware device drivers. One result of this architecture is that the operating system may not directly access a hardware device. This proves advantageous in many ways. Most notably, this means the operating system does not need to know the particular low-level access configuration of the device. Thus, "virtual" resources that aggregate the resources of individual devices may be "built," and corresponding access to such devices may be abstracted through appropriately-configured firmware drivers, whereby the OS thinks the virtual resource is a real local device.
  • In one embodiment, this abstracted access scheme is configured as a multi-layer architecture, as shown in Figures 8a and 8b. Each of blades BLADE 1 and BLADE 2 has respective copies of the architecture components, including OS device drivers 800-1 and 800-2, management/access drivers 802-1 and 802-2, resource device drivers 804-1 and 804-2, and OOB communication handlers 604-1 and 604-2.
  • A flowchart illustrating an exemplary process for accessing a shared resource in accordance with one embodiment is shown in Figure 7. The process begins with an access request from a requestor, as depicted in a start block 700. A typical requestor might be an application running on the operating system for the platform. Executable code corresponding to such applications is generally stored in system memory 204, as depicted by runtime (RT) applications (APP) 806 and 808 in Figures 8a and 8b. For instance, suppose runtime application 806 wishes to access a shared data storage resource. In this example, the access request corresponds to opening a previously stored file. The runtime application will first make a request to the operating system (810) to access the file, providing a location for the file (e.g., drive designation, path, and filename). Furthermore, the drive designation is a drive letter previously allocated by the operating system for a virtual global storage resource comprising a plurality of disk drives 218, which include resource 1 of BLADE 1 and resource 2 on BLADE 2.
  • In response to the request, operating system 810 employs its OS device driver 800-1 to access the storage resource in a block 702. Normally, OS device driver 800-1 would interface directly with resource driver 804-1 to access resource 1. However, management/access driver 802-1 is accessed instead. In order to effectuate this change, interface information such as an API or the like is handed off to the OS during OS-load, whereby the OS is instructed to access management/access driver 802-1 whenever there is a request to access the corresponding resource (e.g., resource 1).
  • In order to determine which shared resource is to service the request, a mechanism is provided to identify a particular host via which the appropriate resource may be accessed. In one embodiment, this mechanism is facilitated via a global resource map. In the embodiment of Figure 8a, local copies 812-1 and 812-2 of a common global resource map are stored on respective blades BLADE 1 and BLADE 2. In the embodiment of Figure 8b, a shared global resource map 812a is hosted by global resource manager 608. The global resource map matches specific resources with the portions of the global resource hosted by those specific resources.
  • Continuing with the flowchart of Figure 7, in a block 704 the management/access driver queries local global resource map 812 to determine the host of the resource underlying the particular access request. This resource and/or its host is known as the "resource target;" in the illustrated example the resource target comprises a resource 2 hosted by BLADE 2.
  • Once the resource target is identified, OOB communication operations are performed to pass the resource access request to the resource target. First, the management/access driver on the requesting platform (e.g., 802-1) asserts an SMI to activate that platform's local OOB communications handler 604-1. In response, the processor on BLADE 1 switches its mode to SMM in a block 708 and dispatches its SMM handlers until OOB communication handler 604-1 is launched. In response, the OOB communication handler asserts an SMI signal on the resource target host (BLADE 2) to initiate OOB communication between the two blades. In response to the SMI, the processor mode on BLADE 2 is switched to SMM in a block 710, launching its OOB communication handler. At this point, Blades 1 and 2 are enabled to communicate via OOB channel 610, and the access request is received by OOB communications handler 604-2. After the resource access request has been sent, in one embodiment an "RSM" instruction is issued to the processor on BLADE 1 to switch the processor's operating mode back to what it was before being switched to SMM.
  • In a block 712 the access request is then passed to management/access driver 802-2 via its API. In an optional embodiment, a query is then performed in a block 714 to verify that the platform receiving the access request is the actual host of the target resource. If it isn't the correct host, in one embodiment a message is passed back to the requester indicating so (not shown). In another embodiment, an appropriate global resource manager is apprised of the situation. In essence, this situation would occur if the local global resource maps contained different information (i.e., are no longer synchronized). In response, the global resource manager would issue a command to resynchronize the local global resource maps (all not shown).
  • Continuing with a block 716, the platform host's resource device driver (804-2) is then employed to access the resource (e.g., resource 2) to service the access request. Under the present example, the access returns the requested data file. Data corresponding to the request is then returned to the requester via OOB channel 610 in a block 718. At the completion of the communication, an RSM instruction is issued to the processor on BLADE 2 to switch the processor's operating mode back to what it was before being switched to SMM.
  • Depending on the particular implementation, the requester's processor may or may not be operating in SMM at this time. For example, in the embodiment discussed above, the requester's (BLADE 1) processor was switched back out of SMM in a block 708. In this case, a new SMI is asserted to activate the OOB communications handler in a block 722. If the SMM mode was not terminated after sending the access request (in accordance with an optional scheme), the OOB communication handler is already waiting to receive the returned data. In either case, the returned data are received via OOB channel 610, and the data are passed to the requester's management/access driver (802-1) in a block 724. In turn, this firmware driver passes the data back to OS device driver 800-1 in a block 726, leading to receipt of the data by the requester via the operating system in a block 728.
  • A similar resource access process is performed using a single global resource map in place of the local copies of the global resource map in the embodiment of Figure 8b. In short, many of the operations are the same as those discussed above with reference to Figure 8a, except that global resource manager 608 is employed as a proxy for accessing the resource, rather than using local global resource maps. Thus, the resource access request is sent to global resource manager 608 via OOB channel 610 rather than directly to an identified resource target. Upon receipt of the request, a lookup of global resource map 812a is performed to determine the resource target. Subsequently, the data request is sent to the identified resource target, along with information identifying the requester. Upon receiving the request, the operations of blocks 712-728 are performed, with the exception of optional operations 714.
  • Each of the foregoing schemes offers its own advantages. When local global resource maps are employed, there is no need for a proxy, and thus there is no need to change any software components operating on any of the blade server components. However, there should be a mechanism for facilitating global resource map synchronization, and the management overhead for each blade is increased. The primary advantage of employing a single global resource manager is that the synchronicity of the global resource map is ensured (since there is only one copy), and changes to the map can be made without any complicity required of the individual blades. Under most implementations, the main drawback will be providing a host for the global resource manager functions. Typically, the host may be a management component or one of the blades (e.g., a nominated or default-selected blade).
  • In one embodiment, a blade that hosts the global resource manager functions is identified through a nomination process, wherein each blade may include firmware for performing the management tasks. In general, the nomination scheme may be based on a physical assignment, such as a chassis slot, or may be based on an activation scheme, such as a first-in ordered scheme. For example, under a slot-based scheme, the blade having the lowest slot assignment for the group would be assigned the global resource manager tasks. If that blade was removed, the blade having the lowest slot assignment from among the remaining blades would be nominated to host the global resource manager. Under a first-in ordered scheme, each blade would be assigned an installation order identifier (e.g., number) based on the order the blades were inserted or activated. The global management task would be assigned to the blade with the lowest number, that is, the first-installed blade to begin with. Upon removal of that blade, the blade with the next lowest installation number would be nominated as the new global resource manager host. In order to ensure continued operations across a change in the global resource manager, a redundancy scheme may be implemented wherein a second blade is nominated as a live back-up.
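  • For illustration only, the C sketch below captures the slot-based nomination rule just described: the active blade occupying the lowest-numbered slot hosts the global resource manager, and the function is simply re-run whenever a blade is hot-swapped in or out. The data layout is an assumption made for the sketch.

      /* Sketch of slot-based nomination of the global resource manager host. */
      #include <stdint.h>

      struct blade_state { uint8_t slot; uint8_t active; };

      /* Returns the slot of the blade that should host the manager, or -1 if no
       * blade is active.  A live backup could be chosen the same way, skipping
       * the winning slot. */
      int nominate_manager(const struct blade_state *blades, int count)
      {
          int winner = -1;
          for (int i = 0; i < count; i++) {
              if (!blades[i].active)
                  continue;
              if (winner < 0 || blades[i].slot < winner)
                  winner = blades[i].slot;
          }
          return winner;
      }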
  • In general, global resource mapping data may be stored either in system memory or as firmware variable data. If stored as firmware variable data, the mapping data will be able to persist across platform shutdowns. In one embodiment, the mapping data are stored in a portion of system memory that is hidden from the operating system. This hidden portion of system memory may include a portion of SMRAM or a portion of memory reserved by firmware during pre-boot operations. Another way to persist global resource mapping data across shutdowns is to store the data on a persistent storage device, such as a disk drive. However, when employing a disk drive, it is recommended that the mapping data be stored in a manner that is inaccessible to the platform operating system, such as in the host protected area (HPA) of the disk drive. When global resource mapping data are stored in a central repository (i.e., as illustrated by the embodiment of Figure 8b), storage options similar to those presented above may be employed. In cases in which the global resource manager is hosted by a component other than one of the server blades (such as management card 112 or an external management server), disk storage may be safely implemented, since those hosts are not accessible by the operating systems running on the blades.
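As a rough illustration of persisting the mapping data, the following C sketch packs a small map into a tagged, checksummed blob that could stand in for a firmware variable or an OS-hidden memory region, and validates it on restore. The signature, layout, and checksum are assumptions made for the example only.

    /* Illustrative sketch only: persisting mapping data as a tagged blob,
     * standing in for a firmware variable or an OS-hidden memory region.
     * The signature, layout, and checksum are assumptions. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    struct map_blob {
        char     signature[4];              /* "GRM", used to locate blob */
        uint32_t entry_count;
        uint32_t checksum;                  /* additive checksum          */
        uint16_t blade_for_block_range[16]; /* block-range index -> blade */
    };

    static uint32_t sum16(const uint16_t *v, uint32_t n)
    {
        uint32_t s = 0;
        while (n--)
            s += *v++;
        return s;
    }

    int main(void)
    {
        static uint8_t hidden_region[256]; /* stand-in for SMRAM/variable */
        struct map_blob blob = { "GRM", 16, 0, {0} };

        for (uint16_t i = 0; i < 16; i++)
            blob.blade_for_block_range[i] = i + 1;  /* range i -> blade i+1 */
        blob.checksum = sum16(blob.blade_for_block_range, blob.entry_count);

        memcpy(hidden_region, &blob, sizeof blob);  /* "persist" the map   */

        /* Later (e.g., after a reset) the blob is validated before use. */
        struct map_blob restored;
        memcpy(&restored, hidden_region, sizeof restored);
        int ok = memcmp(restored.signature, "GRM", 4) == 0 &&
                 restored.checksum == sum16(restored.blade_for_block_range,
                                            restored.entry_count);
        printf("restored map %s\n", ok ? "valid" : "corrupt");
        return 0;
    }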
  • A more specific implementation of resource sharing is illustrated in Figures 9a-b and 10a-b. In these cases, the resources being shared comprise disk drives 218. In the embodiment 900 illustrated in Figures 9a and 10a, the storage resources provided by a plurality of disk drives 218 are aggregated to form a virtual storage volume "V:". For clarity, the storage resources for each of the disk drives are depicted as respective groups of I/O storage comprising 10 blocks. Furthermore, each of Blades 1-16 is depicted as hosting a single disk drive 218; it will be understood that in an actual implementation each blade may host 0-N disk drives (depending on its configuration), that the number of blocks for each disk drive may vary, and that the actual number of blocks will be several orders of magnitude higher than depicted herein.
  • From an operating system perspective, virtual storage volume V: appears as a single storage device. In general, the shared storage resources may be configured as 1-N virtual storage volumes, with each volume spanning a respective set of storage devices. In reality, virtual storage volume V: spans 16 disk drives 218. To effectuate this, a global resource map comprising a lookup table 1000 is employed. The lookup table maps respective ranges of I/O blocks to the blade on which the disk drive hosting those blocks resides, as sketched below. In cases where a single blade may host multiple disk drives, the map would contain further information identifying the specific storage device on each blade. In general, an addressing scheme would be employed rather than simply identifying a blade number; the illustrated blade number assignments are depicted for clarity and simplicity.
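The block-range-to-blade mapping of lookup table 1000 can be approximated with a few lines of C. The sketch below assumes the depicted configuration of 16 blades with 10 blocks each; a real map would use a fuller addressing scheme and far larger block counts.

    /* Sketch of the lookup illustrated by table 1000: map a virtual-volume
     * block number to the hosting blade, assuming 16 blades of 10 blocks
     * each as depicted (real maps would also identify the specific drive). */
    #include <stdio.h>

    #define BLOCKS_PER_BLADE 10
    #define NUM_BLADES       16

    /* Return the 1-based blade number hosting a virtual block, or -1. */
    static int blade_for_block(int block)
    {
        if (block < 0 || block >= BLOCKS_PER_BLADE * NUM_BLADES)
            return -1;
        return block / BLOCKS_PER_BLADE + 1;
    }

    int main(void)
    {
        int probes[] = { 0, 9, 10, 87, 159 };
        for (unsigned i = 0; i < sizeof probes / sizeof probes[0]; i++)
            printf("virtual block %3d -> blade %d\n",
                   probes[i], blade_for_block(probes[i]));
        return 0;
    }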
  • Figures 9b and 10b illustrate a RAID embodiment 902 using mirroring and duplexing in accordance with the RAID (Redundant Array of Independent Disks)-1 standard. Under RAID-1, respective sets of storage devices are paired, and data are mirrored by writing identical sets of data to each storage device in the pair. In a manner similar to that discussed above, the aggregated storage appears to the operating system as a virtual volume V:. In the illustrated embodiment, the number and type of storage devices are identical to those of embodiment 900, and thus the block I/O storage capacity of the virtual volume is cut in half to 80 blocks. Global resource mappings are contained in a lookup table 1002 for determining which disk drives are to be accessed when the operating system makes a corresponding block I/O access request. The disk drive pairs are divided into logical storage entities labeled A-H.
  • In accordance with RAID-1 principles, when a write access to a logical storage entity is performed, the data are written to each of the underlying storage devices. In contrast, during a read access, the data are (generally) retrieved from a single storage device. Depending on the complexity of the RAID-1 implementation, one of the pair may be assigned as the default read device, or both of the storage devices may facilitate this function, allowing for parallel reads (duplexing).
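The following C sketch illustrates this RAID-1 routing under assumed pair assignments: a write to a logical entity is issued to both blades in the mirrored pair, while reads alternate between the two to suggest duplexing. It is an illustration of the principle, not the patent's implementation.

    /* Hedged sketch of RAID-1 routing over mirrored blade pairs; the pair
     * assignments and the read-alternation policy are assumptions. */
    #include <stdio.h>

    #define BLOCKS_PER_ENTITY 10
    #define NUM_ENTITIES       8   /* logical entities A-H */

    /* Each logical entity maps to a mirrored pair of blades. */
    static const int pair[NUM_ENTITIES][2] = {
        { 1,  2}, { 3,  4}, { 5,  6}, { 7,  8},
        { 9, 10}, {11, 12}, {13, 14}, {15, 16},
    };

    static void raid1_write(int block)
    {
        int e = block / BLOCKS_PER_ENTITY;
        printf("write block %2d: entity %c -> blades %d and %d\n",
               block, 'A' + e, pair[e][0], pair[e][1]);
    }

    static void raid1_read(int block)
    {
        static int toggle;
        int e = block / BLOCKS_PER_ENTITY;
        printf("read  block %2d: entity %c -> blade %d\n",
               block, 'A' + e, pair[e][toggle]);
        toggle ^= 1;   /* alternate reads across the pair (duplexing) */
    }

    int main(void)
    {
        raid1_write(5);    /* entity A: mirrored to both blades in the pair */
        raid1_write(73);   /* entity H */
        raid1_read(5);
        raid1_read(5);     /* second read lands on the other blade          */
        return 0;
    }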
  • In addition to the illustrated configuration, a configuration may employ one or more disk drives 218 as "hot spares." In this instance, the hot spare storage devices are not used during normal access operations, but rather sit in reserve to replace any device or blade that fails. Under standard practices, when a hot spare replacement occurs, data stored on the non-failed device (in the pair) are written to the replacement device to return the storage system to full redundancy. This may be performed in an interactive fashion (e.g., allowing new data writes concurrently), or may be performed prior to permitting new writes.
  • Generally, the RAID-1 scheme may be deployed using either a single global resource manager or local management. For example, in cases in which "static" maps are employed (corresponding to static resource configurations), appropriate mapping information can be stored on each blade. In one embodiment, this information may be stored as firmware variable data, whereby it will persist through a platform reset or shutdown. For dynamic configuration environments, it is advisable to employ a central global resource manager, at least for determining updated resource mappings corresponding to configuration changes.
  • In addition to RAID-1, other RAID standard redundant storage schemes may be employed, including RAID-0, RAID-2, RAID-3, RAID-5, and RAID-10. Since each of these schemes involves some form of striping, the complexity of the global resource maps increases substantially. For this and other reasons, it will generally be easier to implement RAID-0, RAID-2, RAID-3, RAID-5, and RAID-10 via a central global resource manager rather than individual local managers.
  • It is noted that although the foregoing principles are discussed in the context of a blade server environment, this is not meant to be limiting. Each blade may be considered a separate platform, such as a rack-mounted server or a stand-alone server, wherein resource sharing across a plurality of platforms may be effectuated via an OOB channel in a manner similar to that discussed above. For example, in a rack-mounted server configuration, cabling and/or routing may be provided to support an OOB channel.
  • A particular implementation that is well-suited to rack-mounted servers and the like concerns sharing keyboard, video, and mouse I/O, commonly known as KVM. In a typical rack server installation, a KVM switch is employed to enable a single keyboard, video display, and mouse to be shared by all servers in the rack. The KVM switch routes KVM signals from individual servers (via respective cables) to single keyboard, video, and mouse I/O ports, whereby the KVM signals for a selected server may be accessed by turning a selection knob or otherwise selecting the input signal source. For high-density servers, the KVM switch may cost $1500 or more, in addition to costs for cabling and installation. KVM cabling also reduces ventilation and accessibility.
  • The foregoing problems are overcome by a shared KVM embodiment illustrated in Figures 11-13. In Figure 11, each of a plurality of rack-mounted servers 1100 is connected to the other servers via a switch 1102 and corresponding Ethernet cabling (depicted as a network cloud 1104). Each server 1100 includes a mainboard 1106 having a plurality of components mounted thereon or coupled thereto, including a processor 1108, memory 1110, a firmware storage device 1112, and a NIC 1114. A plurality of I/O ports are also coupled to the mainboard, including mouse and keyboard ports 1116 and 1118 and a video port 1120. Typically, each server will also include a plurality of disk drives 1122.
  • In accordance with the NIC-based back channel OOB scheme discussed above, a second MAC address assigned to the NIC 1114 of each server 1100 is employed to support an OOB channel 1124. A keyboard 1126, video display 1128, and a mouse 1130 are coupled via respective cables to respective I/O ports 1118, 1120, and 1116 disposed on the back of a server 1100A. Firmware on each of servers 1100 provides support for hosting a local global resource map 1132 that routes KVM signals to keyboard 1126, video display 1128, and mouse 1130 via server 1100A; a minimal sketch of such a per-server map follows.
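The C sketch below models the local map as a single record of which server currently hosts the physical console and decides whether a video update stays local or is forwarded over the OOB channel. Server numbers and names are assumptions for the example.

    /* Illustrative sketch (names assumed) of a per-server local KVM map:
     * each server records which platform owns the physical console, so its
     * router drivers know where to send KVM traffic over the OOB channel. */
    #include <stdio.h>

    struct kvm_map {
        int console_server;   /* server whose ports drive keyboard/video/mouse */
    };

    /* Decide whether a video signal generated on 'this_server' stays local
     * or must be forwarded over the OOB channel to the console host. */
    static void route_video(const struct kvm_map *map, int this_server)
    {
        if (this_server == map->console_server)
            printf("server %d: video delivered to local video chip\n",
                   this_server);
        else
            printf("server %d: video forwarded via OOB channel to server %d\n",
                   this_server, map->console_server);
    }

    int main(void)
    {
        /* All servers point KVM at server 1 (standing in for server 1100A). */
        struct kvm_map map = { .console_server = 1 };

        route_video(&map, 14);  /* remote server: reroute over the OOB channel */
        route_video(&map, 1);   /* console host: drive the attached monitor    */
        return 0;
    }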
  • A protocol stack exemplifying how video signals (the most complicated of the KVM signals) are handled in accordance with one embodiment is shown in Figure 12. In the example, video data used to produce corresponding video signals are rerouted from a server 1100N to server 1100A. The software side of the protocol stack on server 1100N includes an operating system video driver 1200N, while the firmware components include a video router driver 1202N, a video device driver 1204N and an OOB communications handler 604N. The data flow is similar to that described above with reference to Figures 7 and 8a, and proceeds as follows.
  • The operating system running on a server 1100N receives a request to update the video display, typically in response to a user input to a runtime application. The operating system employs its OS video driver 1200N to effectuate the change. Generally, the OS video driver will generate video data based on a virtual video display maintained by the operating system, wherein a virtual-to-physical display mapping is performed. For example, the same text/graphic content displayed on monitors having different resolutions requires different video data particular to those resolutions. The OS video driver then interfaces with video router driver 1202N to pass the video data on to what it believes is the destination device, server 1100N's video chip 1206N. As far as the operating system is concerned, video router driver 1202N is the firmware video device driver for the server, i.e., it appears to be video device driver 1204N. However, upon receiving the video data, video router driver 1202N looks up the video data destination server via a lookup of global resource map 1134N and asserts an SMI to initiate an OOB communication with server 1100A via respective OOB communication handlers 604N and 604A.
  • Upon receipt at server 1100A, the video data are written to video chip 1206A via video device driver 1204A. In a manner similar to that described above, this passing of video data may be directly from OOB communications handler 604A to video device driver 1204A, or it may be routed through video router driver 1202A. In response to receiving the video data, video chip 1206A updates its video output signal, which is received by video monitor 1128 via video port 1120. As an option, a verification lookup of global resource map 1134A may be performed to verify that server 1100A is the correct video data destination server. The overall rerouting path is sketched below.
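The end-to-end video rerouting can be summarized in the hedged C sketch below, which models the SMI/OOB hop as an ordinary function call: the sending server's router driver looks up the console host, forwards the data, and the receiving side optionally re-verifies the destination before writing to its video chip. All function names are illustrative stand-ins, not the patent's drivers.

    /* Hedged sketch of the Figure 12 flow; the SMI and OOB transfer are
     * modeled as a plain function call, and all names are stand-ins. */
    #include <stdio.h>

    static const int CONSOLE_SERVER = 1;   /* stand-in for server 1100A */

    static void video_chip_write(const char *data)
    {
        printf("video chip: displaying \"%s\"\n", data);
    }

    /* Receiving side: optional verification lookup, then local delivery. */
    static void oob_receive_video(int this_server, const char *data)
    {
        if (this_server != CONSOLE_SERVER) {        /* verification lookup */
            printf("server %d: not the console host, dropping frame\n",
                   this_server);
            return;
        }
        video_chip_write(data);
    }

    /* Sending side: the OS believes this is the local video device driver. */
    static void video_router_driver(int this_server, const char *data)
    {
        if (this_server == CONSOLE_SERVER) {
            video_chip_write(data);                 /* no rerouting needed */
            return;
        }
        printf("server %d: SMI -> OOB handler -> server %d\n",
               this_server, CONSOLE_SERVER);
        oob_receive_video(CONSOLE_SERVER, data);    /* models the OOB hop  */
    }

    int main(void)
    {
        video_router_driver(14, "window update from server 14");
        return 0;
    }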
  • Keyboard and mouse signals are handled in a similar manner. As with video, operating systems typically maintain a virtual pointer map from which a virtual location of a pointing device can be cross-referenced to the virtual video display, thereby enabling the location of the cursor relative to the video display to be determined. Generally, mouse information will traverse the reverse route of the video signals; that is, mouse input received via server 1100A will be passed via the OOB channel to a selected platform (e.g., server 1100N). This will require updating the global resource map 1134A on server 1100A to reflect the proper destination platform. Routing keyboard signals will also require a similar map update. A difference with keyboard signals is that they are bi-directional, so both input and output data rerouting is required.
  • An exemplary keyboard input signal processing protocol stack and flow diagram is shown in Figure 13. The software side of the protocol stack on server 1100N includes an operating system keyboard driver 1300N, while the firmware components include a keyboard router driver 1302N, a keyboard device driver 1304N, and an OOB communications handler 604N. Similar components comprise the protocol stack of server 1100A.
  • In response to a user input via keyboard 1126, a keyboard input signal is generated that is received by a keyboard chip 1306A via keyboard port 1118A. Keyboard chip 1306A then produces corresponding keyboard (KB) data that are received by keyboard device driver 1304A. At this point, the handling of the keyboard input is identical to that implemented on a single platform that does not employ resource sharing (e.g., a desktop computer). Normally, keyboard device driver 1304A would interface with OS keyboard driver 1300A to pass the keyboard data to the operating system. However, the OS keyboard driver that is targeted to receive the keyboard data is running on server 1100N. Accordingly, keyboard data handled by keyboard device driver 1304A are passed to keyboard router driver 1302A to facilitate rerouting the keyboard data.
  • In response to receiving the keyboard data, keyboard router driver 1302A queries global resource map 1134A to determine the target server to which the keyboard data are to be rerouted (server 1100N in this example). The keyboard router driver then asserts an SMI to kick the processor running on server 1100A into SMM and passes the keyboard data along with server target identification data to OOB communications handler 604A. OOB communications handler 604A then interacts with OOB communications handler 604N to facilitate OOB communications between the two servers via OOB channel 1124, leading to the keyboard data being received by OOB communications handler 604N. In response to receiving the keyboard data, OOB communications handler 604N forwards the keyboard data to keyboard router driver 1302N. At this point, the keyboard router driver may either directly pass the keyboard data to OS keyboard driver 1300N, or perform a routing verification lookup of global resource map 1134N to ensure that server 1100N is the proper server to receive the keyboard data prior to passing the data to OS keyboard driver 1300N. The OS keyboard driver then processes the keyboard data and provides the processed data to the runtime application having the current focus. A simplified sketch of this keyboard path follows.
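The keyboard path can be sketched in the same spirit as the video example. The C example below assumes a simple focus value on the console host: the keyboard router driver forwards each scancode over a (modeled) OOB channel to the server holding the focus, where an optional verification lookup precedes delivery to the OS keyboard driver. The focus value, scancode, and function names are assumptions.

    /* Illustrative sketch of the Figure 13 keyboard path; the OOB hop is
     * modeled as a function call and all names are hypothetical. */
    #include <stdio.h>

    static int focus_server = 14;   /* target of keyboard input (cf. map 1134A) */

    static void os_keyboard_driver(int server, char scancode)
    {
        printf("server %d OS: received scancode 0x%02x\n", server, scancode);
    }

    /* Receiving side of the OOB hop, with an optional verification lookup. */
    static void oob_receive_key(int this_server, char scancode)
    {
        if (this_server != focus_server) {
            printf("server %d: not the focus target, ignoring key\n",
                   this_server);
            return;
        }
        os_keyboard_driver(this_server, scancode);
    }

    /* Console host: the keyboard device driver hands data to the router
     * driver, which forwards it over the OOB channel instead of to the
     * local OS keyboard driver. */
    static void keyboard_router_driver(char scancode)
    {
        printf("console host: rerouting key 0x%02x to server %d via OOB\n",
               scancode, focus_server);
        oob_receive_key(focus_server, scancode);
    }

    int main(void)
    {
        keyboard_router_driver(0x1c);   /* e.g., an Enter-key scancode */
        return 0;
    }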
  • As discussed above, resource sharing is effectuated, at least in part, through firmware stored on each blade or platform. The firmware, which may typically comprise instructions and data for implementing the various operations described herein, will generally be stored on a non-volatile memory device, such as but not limited to a flash device, a ROM, or an EEPROM. The instructions are machine readable, either directly by a real machine (i.e., machine code) or via interpretation by a virtual machine (e.g., interpreted byte-code). Thus, embodiments of the invention may be used as or to support firmware executed upon some form of processing core (such as the CPU of a computer) or otherwise implemented or realized upon or within a machine-readable medium. A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a processor). For example, a machine-readable medium can include media such as read-only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; and the like. In addition, a machine-readable medium can include propagated signals such as electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.).
  • The above description of illustrated embodiments of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize.
  • These modifications can be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification and the claims. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.

Claims (9)

  1. A method for sharing resources across a plurality of computing platforms provided by a blade server, comprising:
    receiving at a first server blade (700) a resource access request to access a shared resource;
    determining a second server blade via which the shared resource may be accessed (704), wherein the first and second server blades each have resource management code and associated firmware so as to provide an extensible firmware interface for providing an out-of-band, OOB, communication channel;
    asserting a first system management interrupt, SMI, at a first processor included in said first server blade (706);
    switching an execution mode of said first processor to a system management mode, SMM, in response to the first SMI (708) and launching an OOB communication handler (604-1);
    initiating, in response to said first processor entering the SMM, the out-of-band communications channel between said first server blade and said second server blade, wherein initiating the OOB communications channel includes the OOB communication handler asserting a second SMI on a second processor included in said second server blade and the second processor switching an execution mode of said second processor to SMM in response to the second SMI (710), launching a second OOB communication handler of the second processor;
    communicating the access request via the OOB communication channel to be received by the second OOB communication handler;
    passing the access request to a management/access driver (802-2) via its application program interface, API; and
    accessing the resource using a resource device driver in the second server blade,
    whereby the resource access is performed in a manner that is transparent to operating systems running on the plurality of computing platforms.
  2. The method of claim 1, wherein the out-of-band communication channel (610) comprises one of a system management bus, an Ethernet-based network, or a serial communication link.
  3. The method of claim 2, wherein the shared resource comprises a storage device.
  4. The method of claim 3, wherein the resource access request comprises a storage device write request, and the method further comprises sending data corresponding to the storage device write request via the out-of-band communication channel (610).
  5. The method of claim 3, wherein the resource access request comprises a storage device read request, and the method further comprises:
    retrieving data corresponding to the read request from the shared resource; and
    sending the data that are retrieved back to the first server blade via the out-of-band communication channel (610).
  6. The method of any preceding claim, further comprising:
    maintaining global resource mapping data identifying which resources are accessible via which server blade; and
    employing the global resource mapping data to determine which server blade to use to access the shared resource.
  7. The method of claim 6, wherein a local copy of the global resource mapping data is maintained on each of the plurality of server blades.
  8. The method of claim 6, wherein the global resource mapping data is maintained by a central global resource manager.
  9. A blade server system, comprising:
    a chassis (100), including a plurality of slots in which respective server blades may be inserted;
    an interface plane (104) having a plurality of connectors for mating with respective connectors on inserted server blades and providing communication paths between the plurality of connectors to facilitate an out-of-band communication channel (610); and
    a plurality of server blades (102), each including a processor adapted to store firmware management code and associated firmware executable thereon to perform operations including providing an extensible firmware interface, EFI, which includes an out-of-band, OOB, communication channel, and whereby each server blade is adapted to receive a resource access request from an operating system running on a first server blade to access a shared resource hosted by at least one of the plurality of server blades;
    to determine a second server blade from among the plurality of server blades that may service the resource access request;
    to assert a first system management interrupt, SMI, at a first processor included in said first server blade (706);
    to switch an execution mode of said first processor to a system management mode, SMM, in response to the first SMI (708) and to launch an OOB communication handler (604-1);
    to initiate in response to said first processor entering the SMM said out-of-band communications channel between said first server blade and said second server blade, wherein initiating the OOB communications channel includes asserting a second SMI on a second processor included in said second server blade and switching an execution mode of said second processor in response to the second SMI (710), to launch a second OOB communication handler of the second processor;
    to communicate the access request via the OOB communication channel to be received by the second OOB communication handler;
    to pass the access request to a management/access driver (802-2) via its application program interface, API; and
    to access the resource using a resource driver in the second server blade,
    whereby the resource access is performed in a manner that is transparent to operating systems running on the plurality of computing platforms.
EP04754766.6A 2003-06-25 2004-06-09 Os agnostic resource sharing across multiple computing platforms Expired - Lifetime EP1636696B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/606,636 US20050015430A1 (en) 2003-06-25 2003-06-25 OS agnostic resource sharing across multiple computing platforms
PCT/US2004/018253 WO2005006186A2 (en) 2003-06-25 2004-06-09 Os agnostic resource sharing across multiple computing platforms

Publications (2)

Publication Number Publication Date
EP1636696A2 EP1636696A2 (en) 2006-03-22
EP1636696B1 true EP1636696B1 (en) 2013-07-24

Family

ID=34062276

Family Applications (1)

Application Number Title Priority Date Filing Date
EP04754766.6A Expired - Lifetime EP1636696B1 (en) 2003-06-25 2004-06-09 Os agnostic resource sharing across multiple computing platforms

Country Status (5)

Country Link
US (2) US20050015430A1 (en)
EP (1) EP1636696B1 (en)
JP (1) JP4242420B2 (en)
CN (1) CN101142553B (en)
WO (1) WO2005006186A2 (en)

Families Citing this family (131)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004098715A2 (en) * 2003-05-02 2004-11-18 Op-D-Op, Inc. Lightweight ventilated face shield frame
US7434231B2 (en) * 2003-06-27 2008-10-07 Intel Corporation Methods and apparatus to protect a protocol interface
US20050256942A1 (en) * 2004-03-24 2005-11-17 Mccardle William M Cluster management system and method
US20060053215A1 (en) * 2004-09-07 2006-03-09 Metamachinix, Inc. Systems and methods for providing users with access to computer resources
US7949798B2 (en) * 2004-12-30 2011-05-24 Intel Corporation Virtual IDE interface and protocol for use in IDE redirection communication
US8150973B2 (en) 2004-12-30 2012-04-03 Intel Corporation Virtual serial port and protocol for use in serial-over-LAN communication
US7631045B2 (en) * 2005-07-14 2009-12-08 Yahoo! Inc. Content router asynchronous exchange
US20070014307A1 (en) * 2005-07-14 2007-01-18 Yahoo! Inc. Content router forwarding
US20070014277A1 (en) * 2005-07-14 2007-01-18 Yahoo! Inc. Content router repository
US20070016636A1 (en) * 2005-07-14 2007-01-18 Yahoo! Inc. Methods and systems for data transfer and notification mechanisms
US7623515B2 (en) * 2005-07-14 2009-11-24 Yahoo! Inc. Content router notification
US20070038703A1 (en) * 2005-07-14 2007-02-15 Yahoo! Inc. Content router gateway
US7849199B2 (en) * 2005-07-14 2010-12-07 Yahoo ! Inc. Content router
US20070058657A1 (en) * 2005-08-22 2007-03-15 Graham Holt System for consolidating and securing access to all out-of-band interfaces in computer, telecommunication, and networking equipment, regardless of the interface type
US20070050765A1 (en) * 2005-08-30 2007-03-01 Geisinger Nile J Programming language abstractions for creating and controlling virtual computers, operating systems and networks
US20070074191A1 (en) * 2005-08-30 2007-03-29 Geisinger Nile J Software executables having virtual hardware, operating systems, and networks
US20070050770A1 (en) * 2005-08-30 2007-03-01 Geisinger Nile J Method and apparatus for uniformly integrating operating system resources
US20070067769A1 (en) * 2005-08-30 2007-03-22 Geisinger Nile J Method and apparatus for providing cross-platform hardware support for computer platforms
US20070074192A1 (en) * 2005-08-30 2007-03-29 Geisinger Nile J Computing platform having transparent access to resources of a host platform
US7356638B2 (en) 2005-10-12 2008-04-08 International Business Machines Corporation Using out-of-band signaling to provide communication between storage controllers in a computer storage system
US7873696B2 (en) * 2005-10-28 2011-01-18 Yahoo! Inc. Scalable software blade architecture
US7870288B2 (en) * 2005-10-28 2011-01-11 Yahoo! Inc. Sharing data in scalable software blade architecture
US7779157B2 (en) * 2005-10-28 2010-08-17 Yahoo! Inc. Recovering a blade in scalable software blade architecture
US8024290B2 (en) 2005-11-14 2011-09-20 Yahoo! Inc. Data synchronization and device handling
US8065680B2 (en) * 2005-11-15 2011-11-22 Yahoo! Inc. Data gateway for jobs management based on a persistent job table and a server table
US7986844B2 (en) * 2005-11-22 2011-07-26 Intel Corporation Optimized video compression using hashing function
TW200725475A (en) * 2005-12-29 2007-07-01 Inventec Corp Sharing method of display chip
US8527542B2 (en) * 2005-12-30 2013-09-03 Sap Ag Generating contextual support requests
US7930681B2 (en) * 2005-12-30 2011-04-19 Sap Ag Service and application management in information technology systems
US7979733B2 (en) 2005-12-30 2011-07-12 Sap Ag Health check monitoring process
US9367832B2 (en) * 2006-01-04 2016-06-14 Yahoo! Inc. Synchronizing image data among applications and devices
JP5082252B2 (en) * 2006-02-09 2012-11-28 株式会社日立製作所 Server information collection method
US7610481B2 (en) 2006-04-19 2009-10-27 Intel Corporation Method and apparatus to support independent systems in partitions of a processing system
JP2007293518A (en) * 2006-04-24 2007-11-08 Hitachi Ltd Computer system configuration method, computer, and system configuration program
US7818558B2 (en) * 2006-05-31 2010-10-19 Andy Miga Method and apparatus for EFI BIOS time-slicing at OS runtime
US8078637B1 (en) * 2006-07-28 2011-12-13 American Megatrends, Inc. Memory efficient peim-to-peim interface database
US20080034008A1 (en) * 2006-08-03 2008-02-07 Yahoo! Inc. User side database
US7685476B2 (en) * 2006-09-12 2010-03-23 International Business Machines Corporation Early notification of error via software interrupt and shared memory write
JP2008129869A (en) * 2006-11-21 2008-06-05 Nec Computertechno Ltd Server monitoring operation system
US20080270629A1 (en) * 2007-04-27 2008-10-30 Yahoo! Inc. Data snychronization and device handling using sequence numbers
US7853669B2 (en) 2007-05-04 2010-12-14 Microsoft Corporation Mesh-managing data across a distributed set of devices
US7721013B2 (en) * 2007-05-21 2010-05-18 Intel Corporation Communicating graphics data via an out of band channel
US8386614B2 (en) * 2007-05-25 2013-02-26 Microsoft Corporation Network connection manager
US7932479B2 (en) * 2007-05-31 2011-04-26 Abbott Cardiovascular Systems Inc. Method for laser cutting tubing using inert gas and a disposable mask
US7873846B2 (en) * 2007-07-31 2011-01-18 Intel Corporation Enabling a heterogeneous blade environment
US7716309B2 (en) * 2007-08-13 2010-05-11 International Business Machines Corporation Consistent data storage subsystem configuration replication
US20090125901A1 (en) * 2007-11-13 2009-05-14 Swanson Robert C Providing virtualization of a server management controller
CN101868784B (en) * 2007-11-22 2013-05-08 爱立信电话股份有限公司 Method and device for agile computing
TW200931259A (en) * 2008-01-10 2009-07-16 June On Co Ltd Computer adaptor device capable of automatically updating the device-mapping
US8838669B2 (en) * 2008-02-08 2014-09-16 Oracle International Corporation System and method for layered application server processing
US9298747B2 (en) * 2008-03-20 2016-03-29 Microsoft Technology Licensing, Llc Deployable, consistent, and extensible computing environment platform
US8572033B2 (en) * 2008-03-20 2013-10-29 Microsoft Corporation Computing environment configuration
US8484174B2 (en) * 2008-03-20 2013-07-09 Microsoft Corporation Computing environment representation
US9753712B2 (en) 2008-03-20 2017-09-05 Microsoft Technology Licensing, Llc Application management within deployable object hierarchy
US20090248737A1 (en) * 2008-03-27 2009-10-01 Microsoft Corporation Computing environment representation
US7886021B2 (en) * 2008-04-28 2011-02-08 Oracle America, Inc. System and method for programmatic management of distributed computing resources
US8352371B2 (en) * 2008-04-30 2013-01-08 General Instrument Corporation Limiting access to shared media content
US8555048B2 (en) * 2008-05-17 2013-10-08 Hewlett-Packard Development Company, L.P. Computer system for booting a system image by associating incomplete identifiers to complete identifiers via querying storage locations according to priority level where the querying is self adjusting
CN105117309B (en) * 2008-05-21 2019-03-29 艾利森电话股份有限公司 Resource pool in blade cluster switching center server
US9025592B2 (en) * 2008-05-21 2015-05-05 Telefonaktiebolaget L M Ericsson (Publ) Blade cluster switching center server and method for signaling
WO2009140979A1 (en) * 2008-05-21 2009-11-26 Telefonaktiebolaget L M Ericsson (Publ) Resource pooling in a blade cluster switching center server
EP2304580A4 (en) * 2008-06-20 2011-09-28 Hewlett Packard Development Co Low level initializer
WO2010008707A1 (en) * 2008-07-17 2010-01-21 Lsi Corporation Systems and methods for installing a bootable virtual storage appliance on a virtualized server platform
US8041794B2 (en) 2008-09-29 2011-10-18 Intel Corporation Platform discovery, asset inventory, configuration, and provisioning in a pre-boot environment using web services
US7904630B2 (en) * 2008-10-15 2011-03-08 Seagate Technology Llc Bus-connected device with platform-neutral layers
CN101783736B (en) * 2009-01-15 2016-09-07 华为终端有限公司 A kind of terminal accepts the method for multiserver administration, device and communication system
US20120158923A1 (en) * 2009-05-29 2012-06-21 Ansari Mohamed System and method for allocating resources of a server to a virtual machine
CN101594235B (en) * 2009-06-02 2011-07-20 浪潮电子信息产业股份有限公司 Method for managing blade server based on SMBUS
US8271704B2 (en) * 2009-06-16 2012-09-18 International Business Machines Corporation Status information saving among multiple computers
US8402186B2 (en) * 2009-06-30 2013-03-19 Intel Corporation Bi-directional handshake for advanced reliabilty availability and serviceability
US7970954B2 (en) * 2009-08-04 2011-06-28 Dell Products, Lp System and method of providing a user-friendly device path
US10185594B2 (en) * 2009-10-29 2019-01-22 International Business Machines Corporation System and method for resource identification
US8667110B2 (en) * 2009-12-22 2014-03-04 Intel Corporation Method and apparatus for providing a remotely managed expandable computer system
US8806231B2 (en) 2009-12-22 2014-08-12 Intel Corporation Operating system independent network event handling
US8667191B2 (en) * 2010-01-15 2014-03-04 Kingston Technology Corporation Managing and indentifying multiple memory storage devices
JP5636703B2 (en) * 2010-03-11 2014-12-10 沖電気工業株式会社 Blade server
US20110288932A1 (en) * 2010-05-21 2011-11-24 Inedible Software, LLC, a Wyoming Limited Liability Company Apparatuses, systems and methods for determining installed software applications on a computing device
US8370618B1 (en) 2010-06-16 2013-02-05 American Megatrends, Inc. Multiple platform support in computer system firmware
US8281043B2 (en) * 2010-07-14 2012-10-02 Intel Corporation Out-of-band access to storage devices through port-sharing hardware
US20120020349A1 (en) * 2010-07-21 2012-01-26 GraphStream Incorporated Architecture for a robust computing system
US8441792B2 (en) 2010-07-21 2013-05-14 Birchbridge Incorporated Universal conduction cooling platform
US8441793B2 (en) 2010-07-21 2013-05-14 Birchbridge Incorporated Universal rack backplane system
US8410364B2 (en) 2010-07-21 2013-04-02 Birchbridge Incorporated Universal rack cable management system
US8411440B2 (en) 2010-07-21 2013-04-02 Birchbridge Incorporated Cooled universal hardware platform
US8259450B2 (en) 2010-07-21 2012-09-04 Birchbridge Incorporated Mobile universal hardware platform
US8386618B2 (en) 2010-09-24 2013-02-26 Intel Corporation System and method for facilitating wireless communication during a pre-boot phase of a computing device
US8984109B2 (en) 2010-11-02 2015-03-17 International Business Machines Corporation Ensemble having one or more computing systems and a controller thereof
US8959220B2 (en) 2010-11-02 2015-02-17 International Business Machines Corporation Managing a workload of a plurality of virtual servers of a computing environment
US8966020B2 (en) 2010-11-02 2015-02-24 International Business Machines Corporation Integration of heterogeneous computing systems into a hybrid computing system
US9081613B2 (en) 2010-11-02 2015-07-14 International Business Machines Corporation Unified resource manager providing a single point of control
US9253016B2 (en) * 2010-11-02 2016-02-02 International Business Machines Corporation Management of a data network of a computing environment
US9195509B2 (en) 2011-01-05 2015-11-24 International Business Machines Corporation Identifying optimal platforms for workload placement in a networked computing environment
US8819708B2 (en) * 2011-01-10 2014-08-26 Dell Products, Lp System and method to abstract hardware routing via a correlatable identifier
US8868749B2 (en) 2011-01-18 2014-10-21 International Business Machines Corporation Workload placement on an optimal platform in a networked computing environment
US9858241B2 (en) 2013-11-05 2018-01-02 Oracle International Corporation System and method for supporting optimized buffer utilization for packet processing in a networking device
US8634415B2 (en) 2011-02-16 2014-01-21 Oracle International Corporation Method and system for routing network traffic for a blade server
WO2012141677A1 (en) 2011-04-11 2012-10-18 Hewlett-Packard Development Company, L.P. Performing a task in a system having different types of hardware resources
US10966339B1 (en) * 2011-06-28 2021-03-30 Amazon Technologies, Inc. Storage system with removable solid state storage devices mounted on carrier circuit boards
DE102011078630A1 (en) * 2011-07-05 2013-01-10 Robert Bosch Gmbh Method for setting up a system of technical units
CN102955509B (en) * 2011-08-31 2017-07-21 赛恩倍吉科技顾问(深圳)有限公司 Hard disk backboard and hard disk storage system
US9558092B2 (en) 2011-12-12 2017-01-31 Microsoft Technology Licensing, Llc Runtime-agnostic management of applications
CN102546782B (en) * 2011-12-28 2015-04-29 北京奇虎科技有限公司 Distribution system and data operation method thereof
JP5966466B2 (en) 2012-03-14 2016-08-10 富士通株式会社 Backup control method and information processing apparatus
CN103379104B (en) * 2012-04-23 2017-03-01 联想(北京)有限公司 A kind of teledata sharing method and device
US9292108B2 (en) 2012-06-28 2016-03-22 Dell Products Lp Systems and methods for remote mouse pointer management
US9712373B1 (en) 2012-07-30 2017-07-18 Rambus Inc. System and method for memory access in server communications
US10187452B2 (en) 2012-08-23 2019-01-22 TidalScale, Inc. Hierarchical dynamic scheduling
US20150370721A1 (en) * 2013-01-31 2015-12-24 Hewlett-Packard Development Company, L.P. Mapping mechanism for large shared address spaces
WO2014158161A1 (en) 2013-03-28 2014-10-02 Hewlett-Packard Development Company, L.P. Error coordination message for a blade device having a logical processor in another system firmware domain
US9747116B2 (en) 2013-03-28 2017-08-29 Hewlett Packard Enterprise Development Lp Identifying memory of a blade device for use by an operating system of a partition including the blade device
US9781015B2 (en) 2013-03-28 2017-10-03 Hewlett Packard Enterprise Development Lp Making memory of compute and expansion devices available for use by an operating system
US9203772B2 (en) 2013-04-03 2015-12-01 Hewlett-Packard Development Company, L.P. Managing multiple cartridges that are electrically coupled together
CN103353856A (en) * 2013-07-02 2013-10-16 华为技术有限公司 Hard disk and method for forwarding and obtaining hard disk data
US9489327B2 (en) 2013-11-05 2016-11-08 Oracle International Corporation System and method for supporting an efficient packet processing model in a network environment
WO2015084300A1 (en) 2013-12-02 2015-06-11 Hewlett-Packard Development Company, L.P. System wide manageability
US9195429B2 (en) * 2014-03-10 2015-11-24 Gazoo, Inc. Multi-user display system and method
KR101996896B1 (en) 2014-12-29 2019-07-05 삼성전자주식회사 Method for sharing resource using a virtual device driver and electronic device thereof
CN105808550B (en) * 2014-12-30 2019-02-15 迈普通信技术股份有限公司 A kind of method and device accessing file
US11360673B2 (en) 2016-02-29 2022-06-14 Red Hat, Inc. Removable data volume management
JP6705266B2 (en) * 2016-04-07 2020-06-03 オムロン株式会社 Control device, control method and program
US10601725B2 (en) * 2016-05-16 2020-03-24 International Business Machines Corporation SLA-based agile resource provisioning in disaggregated computing systems
US10034407B2 (en) * 2016-07-22 2018-07-24 Intel Corporation Storage sled for a data center
US10353736B2 (en) 2016-08-29 2019-07-16 TidalScale, Inc. Associating working sets and threads
US10609130B2 (en) 2017-04-28 2020-03-31 Microsoft Technology Licensing, Llc Cluster resource management in distributed computing systems
US11023135B2 (en) 2017-06-27 2021-06-01 TidalScale, Inc. Handling frequently accessed pages
US10817347B2 (en) 2017-08-31 2020-10-27 TidalScale, Inc. Entanglement of pages and guest threads
US10992593B2 (en) 2017-10-06 2021-04-27 Bank Of America Corporation Persistent integration platform for multi-channel resource transfers
US11552803B1 (en) 2018-09-19 2023-01-10 Amazon Technologies, Inc. Systems for provisioning devices
US11229135B2 (en) 2019-04-01 2022-01-18 Dell Products L.P. Multiple function chassis mid-channel
CN111402083B (en) * 2020-02-21 2021-09-21 浙江口碑网络技术有限公司 Resource information processing method and device, storage medium and terminal
RU209333U1 (en) * 2021-09-27 2022-03-15 Российская Федерация, от имени которой выступает Государственная корпорация по атомной энергии "Росатом" (Госкорпорация "Росатом") HIGH DENSITY COMPUTING NODE

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5696895A (en) * 1995-05-19 1997-12-09 Compaq Computer Corporation Fault tolerant multiple network servers
US5721842A (en) * 1995-08-25 1998-02-24 Apex Pc Solutions, Inc. Interconnection system for viewing and controlling remotely connected computers with on-screen video overlay for controlling of the interconnection switch
US6671756B1 (en) 1999-05-06 2003-12-30 Avocent Corporation KVM switch having a uniprocessor that accomodate multiple users and multiple computers
US7343441B1 (en) * 1999-12-08 2008-03-11 Microsoft Corporation Method and apparatus of remote computer management
AU2001259075A1 (en) * 2000-04-17 2001-10-30 Circadence Corporation System and method for web serving
WO2001097016A2 (en) 2000-06-13 2001-12-20 Intel Corporation Providing client accessible network-based storage
US6889340B1 (en) * 2000-10-13 2005-05-03 Phoenix Technologies Ltd. Use of extra firmware flash ROM space as a diagnostic drive
US6477618B2 (en) * 2000-12-28 2002-11-05 Emc Corporation Data storage system cluster architecture
US7339786B2 (en) * 2001-03-05 2008-03-04 Intel Corporation Modular server architecture with Ethernet routed across a backplane utilizing an integrated Ethernet switch module
US7374974B1 (en) * 2001-03-22 2008-05-20 T-Ram Semiconductor, Inc. Thyristor-based device with trench dielectric material
US7424551B2 (en) * 2001-03-29 2008-09-09 Avocent Corporation Passive video multiplexing method and apparatus priority to prior provisional application
US7073059B2 (en) * 2001-06-08 2006-07-04 Hewlett-Packard Development Company, L.P. Secure machine platform that interfaces to operating systems and customized control programs
US7225245B2 (en) * 2001-08-09 2007-05-29 Intel Corporation Remote diagnostics system
US7269630B2 (en) * 2001-10-17 2007-09-11 International Business Machines Corporation Automatically switching shared remote devices in a dense server environment thereby allowing the remote devices to function as a local device
US7003563B2 (en) * 2001-11-02 2006-02-21 Hewlett-Packard Development Company, L.P. Remote management system for multiple servers
GB2382419B (en) * 2001-11-22 2005-12-14 Hewlett Packard Co Apparatus and method for creating a trusted environment
US6968414B2 (en) * 2001-12-04 2005-11-22 International Business Machines Corporation Monitoring insertion/removal of server blades in a data processing system
US6901534B2 (en) * 2002-01-15 2005-05-31 Intel Corporation Configuration proxy service for the extended firmware interface environment
US6848034B2 (en) * 2002-04-04 2005-01-25 International Business Machines Corporation Dense server environment that shares an IDE drive
US7398293B2 (en) * 2002-04-17 2008-07-08 Dell Products L.P. System and method for using a shared bus for video communications
US7114180B1 (en) * 2002-07-16 2006-09-26 F5 Networks, Inc. Method and system for authenticating and authorizing requestors interacting with content servers
US7191347B2 (en) * 2002-12-31 2007-03-13 International Business Machines Corporation Non-disruptive power management indication method, system and apparatus for server
US20040181601A1 (en) * 2003-03-14 2004-09-16 Palsamy Sakthikumar Peripheral device sharing
US7440998B2 (en) * 2003-06-18 2008-10-21 Intel Corporation Provisioning for a modular server

Also Published As

Publication number Publication date
JP4242420B2 (en) 2009-03-25
US20050021847A1 (en) 2005-01-27
WO2005006186A3 (en) 2007-05-10
CN101142553B (en) 2012-05-30
US20050015430A1 (en) 2005-01-20
JP2007526527A (en) 2007-09-13
CN101142553A (en) 2008-03-12
US7730205B2 (en) 2010-06-01
WO2005006186A2 (en) 2005-01-20
EP1636696A2 (en) 2006-03-22

Similar Documents

Publication Publication Date Title
EP1636696B1 (en) Os agnostic resource sharing across multiple computing platforms
US7222339B2 (en) Method for distributed update of firmware across a clustered platform infrastructure
US7483974B2 (en) Virtual management controller to coordinate processing blade management in a blade server environment
US7624262B2 (en) Apparatus, system, and method for booting using an external disk through a virtual SCSI connection
US7051215B2 (en) Power management for clustered computing platforms
US7930371B2 (en) Deployment method and system
US9471234B2 (en) Systems and methods for mirroring virtual functions in a chassis configured to receive a plurality of modular information handling systems and a plurality of modular information handling resources
US7581229B2 (en) Systems and methods for supporting device access from multiple operating systems
US9092022B2 (en) Systems and methods for load balancing of modular information handling resources in a chassis
US20050080982A1 (en) Virtual host bus adapter and method
US8379541B2 (en) Information platform and configuration method of multiple information processing systems thereof
EP1756712A1 (en) System and method for managing virtual servers
US11055104B2 (en) Network-adapter configuration using option-ROM in multi-CPU devices
US20140149658A1 (en) Systems and methods for multipath input/output configuration
US7366867B2 (en) Computer system and storage area allocation method
Meier et al. IBM systems virtualization: Servers, storage, and software
US20140280663A1 (en) Apparatus and Methods for Providing Performance Data of Nodes in a High Performance Computing System
US11314455B2 (en) Mapping of RAID-CLI requests to vSAN commands by an out-of-band management platform using NLP
Shaw et al. Linux Installation and Configuration
Opsahl A Comparison of Management of Virtual Machines with z/VM and ESX Server

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20051121

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL HR LT LV MK

DAX Request for extension of the european patent (deleted)
PUAK Availability of information related to the publication of the international search report

Free format text: ORIGINAL CODE: 0009015

17Q First examination report despatched

Effective date: 20070726

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 602004042829

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: G06F0009400000

Ipc: G06F0009500000

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 9/50 20060101AFI20121212BHEP

Ipc: G06F 9/44 20060101ALI20121212BHEP

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 623784

Country of ref document: AT

Kind code of ref document: T

Effective date: 20130815

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602004042829

Country of ref document: DE

Effective date: 20130919

REG Reference to a national code

Ref country code: NL

Ref legal event code: T3

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 623784

Country of ref document: AT

Kind code of ref document: T

Effective date: 20130724

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130724

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130724

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130724

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131125

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130710

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130724

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20131025

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130724

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130724

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130724

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130724

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130724

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130724

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130724

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130724

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130724

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130724

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20140425

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20140604

Year of fee payment: 11

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602004042829

Country of ref document: DE

Effective date: 20140425

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20140603

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20140610

Year of fee payment: 11

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20140609

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130724

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20150227

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20140630

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20140609

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20140630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20140630

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 602004042829

Country of ref document: DE

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20150609

REG Reference to a national code

Ref country code: NL

Ref legal event code: MM

Effective date: 20150701

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20150609

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20160101

Ref country code: NL

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20150701

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130724

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20040609

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20130724