WO2016164013A1 - Pre-boot device configuration - Google Patents

Pre-boot device configuration

Info

Publication number
WO2016164013A1
Authority
WO
WIPO (PCT)
Prior art keywords
storage device
storage
data
shared
management
Application number
PCT/US2015/024975
Other languages
French (fr)
Inventor
Lee A. Preimesberger
Jorge Daniel Cisneros
Thomas A. Schwartz
Original Assignee
Hewlett Packard Enterprise Development LP
Application filed by Hewlett Packard Enterprise Development LP
Priority to PCT/US2015/024975 (WO2016164013A1)
Priority to TW105111047A (TW201638801A)
Publication of WO2016164013A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/104 Peer-to-peer [P2P] networks
    • H04L 67/1044 Group management mechanisms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0604 Improving or facilitating administration, e.g. storage management
    • G06F 3/0605 Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0629 Configuration or reconfiguration of storage systems
    • G06F 3/0632 Configuration or reconfiguration of storage systems by initialisation or re-initialisation of storage systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0683 Plurality of storage devices
    • G06F 3/0689 Disk arrays, e.g. RAID, JBOD
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/4401 Bootstrapping
    • G06F 9/4406 Loading of operating system
    • G06F 9/4408 Boot device selection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Mathematical Physics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Examples disclosed herein relate to pre-boot device configuration. An example system includes a storage device to store management group information obtained prior to booting from a shared storage pool. A management controller may communicate with an elected node of the shared storage pool, based on the management group information, to generate preboot data for receipt by the storage device. The management controller may also enable block level storage for a host operating system after configuration of the storage device. In an example, this configuration is based on both the preboot data received and the management group information.

Description

PRE-BOOT DEVICE CONFIGURATION
BACKGROUND
[0001] Network systems generally bind servers and other network units together into management groups. The binding can be performed by entering unit identification and access information into a management application for each system being added to the management group, for example, by username and password. The information is usually included on a sticker on each unit. This binding can be used to create a cluster of file systems on multiple servers to provide normal redundant array of inexpensive disks (RAID) services on volumes of multiple servers. Entrance to the cluster of file systems typically involves a configured and booted system or device submitting a credential to the cluster of file systems. This credential can establish the binding that allows movement of data between the devices and clusters.
DESCRIPTION OF THE DRAWINGS
[0002] Certain exemplary embodiments are described in the following detailed description and in reference to the drawings, in which:
[0003] Fig. 1 is a block diagram of an example computing device that can be added to a management group using panel buttons and configured from a shared storage;
[0004] Fig. 2 is a flowchart of an example method to configure a device from a shared storage pool prior to booting;
[0005] Fig. 3 is a diagram of an example system at a high level showing the interaction of a target system and a shared storage pool;
[0006] Fig. 4 is a block diagram of an example non-transitory, computer readable medium including instructions to direct a processor to configure a device from a shared storage pool; and
[0007] Fig. 5 is a block diagram of an example method for enabling block level storage for a host operating system after it is configured with information from a shared storage group.
DETAILED DESCRIPTION
[0008] The current techniques for binding network units into management groups, as described above, may take significant time or reduce system security during the binding process. Further, the current techniques may require significant expertise and access to both the server units and a network management system. Current techniques for configuring a system can require volume creation or distribution between nodes. When a volume is created, the information used to enable communication between nodes may be included in the redundant array of inexpensive disks (RAID) volume metadata. Other techniques for system configuration and binding may use text files and manually extracted keys sent from system to system using software with a synchronized login between devices.
[0009] Examples described herein provide techniques by which servers and other network units and devices can obtain copy-on-write system images or establish presence in RAID-type storage platforms prior to booting an operating system (OS). In some examples, a computing device may use its group binding to skip OS setup prior to configuration as a host device, server, RAID server, or other network component.
[0010] In an example of one pre-boot binding method, network units may be bound into management groups when an operator enters a numeric code into a network unit using panel buttons, often positioned on the front of the network unit. The code may be selected by the operator at the time of entry, and may be used as a signaling and identification tool by the network unit. Once the code has been entered, the operator can enter the same code on a second network unit. The network units can locate each other over the network, for example, using User Datagram Protocol (UDP) broadcasts containing the numeric code. Other network messages can be used in addition to, or instead of, the UDP broadcasts. If the entered codes match, the network units are bound together into a single management group.
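A minimal sketch of this pairing exchange, assuming a hypothetical JSON message format and UDP port (the description specifies neither):

```python
import json
import socket

PAIRING_PORT = 9460  # hypothetical port; the description does not name one


def broadcast_code(code: str) -> None:
    """Announce the operator-entered numeric code to the local subnet."""
    msg = json.dumps({"pairing_code": code, "action": "invite"}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(msg, ("255.255.255.255", PAIRING_PORT))


def wait_for_match(code: str, timeout: float = 30.0) -> str | None:
    """Listen for a peer broadcasting the same code; return its address."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PAIRING_PORT))
        sock.settimeout(timeout)
        try:
            data, (addr, _port) = sock.recvfrom(1024)
        except socket.timeout:
            return None
        peer = json.loads(data)
        return addr if peer.get("pairing_code") == code else None
```

If the codes match, the two units would then proceed to bind, for example over the management controller network described below.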
[0011] Fig. 1 is a block diagram of an example computing device 100 that can be added to a management group using panel buttons 102 and configured from a shared storage. In some examples, the computing device 100 is a desktop computer, a business server, a blade server, a storage area network (SAN) device, a network attached storage (NAS) device, or the like. The computing device 100 includes at least one processor 104. The processor can be a single core processor, a multicore processor, a processor cluster, or the like. The processor 104 is coupled to other units through a bus 106. The bus 106 can include PCIe interconnects, PCI-X, or any number of other suitable technologies.
[0012] The computing device 100 can be linked through the bus 106 to a system memory 108. The system memory 108 can include random access memory (RAM), including volatile memory such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), non-volatile memory such as resistive random-access memory (RRAM), and any other suitable memory types or combinations thereof. The computing device 100 can include a tangible, non-transitory, computer-readable storage medium, such as a storage device 110, for the long-term storage of operating programs and data, such as user files.
[0013] The processor 104 may be coupled through the bus 106 to an I/O interface 112. The I/O interface 112 may be coupled to any suitable type of I/O devices 114, including input devices such as a mouse, touch screen, or keyboard. The I/O devices 114 may also include output devices such as display monitors. The I/O interface 112 may couple the computing device 100 to the panel buttons 102. This coupling may include both the input functions and the output or status lighting functions.
[0014] The computing device 100 can also include a network interface controller (NIC) 116 for connecting the computing device 100 to a network 118. The network 118 may be, for example, an enterprise server network, a storage area network (SAN), a local area network (LAN), a wide-area network (WAN), or the Internet.
[0015] The processor 104 can also be coupled to a storage controller 120, which may be coupled to one or more storage devices 122, such as a storage disk, a solid state drive, an array of storage disks, or a network attached storage appliance, among others. The presence of the storage devices 122 may allow the computing device 100 to function as a storage area network (SAN) device on the network.
[0016] The computing device 100 also includes a management controller 124, which may be communicatively coupled to a management controller network 126. The management controller may be a proprietary baseboard management controller such as Hewlett-Packard's Integrated Lights-Out (iLO) interface, any baseboard management controller, or any other suitable unit to aid in the remote monitoring of a computing device. The management controller 124 may enable a system administrator to remotely monitor and control the computing device 100 through a dedicated, out-of-band management network, the management controller network 126. While in some examples the management controller network 126 may overlap with the network 118, it does not rely on other components of the computing device 100 to operate. In particular, no block storage needs to be accessed by an operating system (OS) in order for the management controller 124 to operate. The management controller 124 and management controller network 126 may provide an alternate channel for the pairing messages sent after a numeric code has been entered using the front panel buttons 102.
[0017] The storage device 110 may include local storage in a hard disk or other non-volatile storage elements. While system information may generally be stored on the storage device 110, in this computing device 100 the management group information is stored by the management controller 124. This information storage scheme allows a device to communicate and work with a management device through the management controller network 126 prior to configuration of the computing device 100 or allowing a local OS access to block level storage in the storage device 110.
[0018] The preboot configurer 128 located in the management controller 124 can implement the processes described above. Specifically, once a management controller has bound the computing device 100 to a management group, the preboot configurer 128 may communicate volume information to the management group. Through this communication, the management group may rebalance data for a redundant array of inexpensive disks (RAID) system using the newly bound computing device 100. This method of binding a computing device 100 enables each of these actions and processes to occur within the system unit prior to booting the computing device 100 or fully loading an operating system (OS) of the system unit. In some examples, a copy-on-write system image is copied to each node prior to booting to allow each unit to boot instantly without any prior setup or deployment of the system.
[0019] It is to be understood that the block diagram of Fig. 1 is not intended to indicate that the computing device 100 is to include all of the components shown in Fig. 1. Rather, the computing device 100 can include fewer or additional components not illustrated in Fig. 1. For example, the management controller 124 and management controller network 126 may not be present. Further, if the computing device 100 is a server or blade server, the storage controller 120 and external storage devices 122 may not be present. In one example, the computing device 100 may include additional processors, memory controller devices, network interfaces, etc.
[0020] Fig. 2 is a flowchart of an example method to configure a device from a shared storage pool prior to booting. In this example method, the device or computing system may already have received management group information from a management group while in an inactive, unconfigured mode.
[0021] The example method begins at block 200, where a device or computer system is powered on. The system may include a processor, a storage component, and a management controller or other drivers and firmware to implement this example method.
[0022] At block 202, a driver of a computing device 100 reads a peer list of peer nodes in a management group from the management controller storage. This driver can correspond to the preboot configurer 128 of Fig. 1. The peer list may be read by a virtual driver of the device or a storage service of the system, which may also read stored management group information assigned to the system on a system storage device such as the management controller 124 or the storage device 110. Other information may include credentials, keys, and array settings for the management group. This information may also contain volume parameters and basic information such as whether the volume is a pure data volume, as well as the type of volume to be created or distributed. For example, the information may clarify whether the volume to be created or distributed is to be a copy-on-write boot volume, a RAID storage volume, etc. A copy-on-write boot volume may include a system image that a computing device 100 can use immediately, rather than an OS or configuration that requires setup specific to the device.
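The stored record read at block 202 might be modeled as in the following sketch; the field names are illustrative assumptions, since the description lists only the kinds of information involved (peers, credentials, keys, array settings, and volume parameters):

```python
import json
from dataclasses import dataclass


@dataclass
class VolumeInfo:
    volume_type: str  # e.g. "raid-storage" or "copy-on-write-boot" (assumed labels)
    pure_data: bool   # True if the volume is a pure data volume


@dataclass
class ManagementGroupInfo:
    peers: list[str]                # peer nodes already bound to the group
    credential: str                 # credential for accessing the shared pool
    keys: dict[str, str]            # keys for the management group
    array_settings: dict[str, str]  # array settings for the management group
    volume: VolumeInfo              # volume parameters


def read_group_info(raw: bytes) -> ManagementGroupInfo:
    """Parse the record persisted by the management controller."""
    doc = json.loads(raw)
    return ManagementGroupInfo(
        peers=doc["peers"],
        credential=doc["credential"],
        keys=doc.get("keys", {}),
        array_settings=doc.get("array_settings", {}),
        volume=VolumeInfo(**doc["volume"]),
    )
```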
[0023] At block 204, communication can be made with nodes already existing in, or bound to, the management group. This communication establishes a connection between the system and each peer through the storage proxy. Connection to the storage proxy aids in the passing of data between the system and the management group. The communication may also be facilitated by stored credentials received earlier and matched to the management group based on an input from a user.
[0024] The communication between the system and the management group may include the receipt of network topology, indices, and block maps of data to access file system structures if they exist. Regardless of whether block maps, indices, or topologies are received, the system may signal to an elected management group node or nodes to generate data for the system based on the stored volume information in the system. The kind of data generated may vary based on the stored information in the system; two examples are shown in the steps below.
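The two branches described in the next steps could be dispatched on the stored volume type, as in this sketch; the type strings are assumed labels, not names taken from the description:

```python
def choose_preboot_action(volume_type: str) -> str:
    """Map the stored volume type to the data the elected node should generate."""
    actions = {
        "raid-storage": "rebalance",         # block 206: redistribute RAID-like data
        "copy-on-write-boot": "send-image",  # block 208: provide a configured image
    }
    return actions.get(volume_type, "none")
```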
[0025] At block 206, based on the volume settings and information, a driver of the device might signal to the management group to begin rebalancing data between nodes based on the number of systems in the management group. At this point, as the system is connected to the management group, RAID-like functions and data may be rebalanced onto the system as determined by the management group. If the volume information stored in the system indicates that the management group is a RAID-like storage system, the device or system may signal that once it joins this group, the data in this storage system may be rebalanced or redundantly copied to the new system.
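A sketch of such a rebalance signal, assuming a simple JSON-over-TCP control channel to an elected node (the description does not define a wire protocol):

```python
import json
import socket


def signal_rebalance(elected_node: str, port: int, credential: str,
                     member_count: int) -> dict:
    """Ask the elected node to rebalance RAID-like data onto this system."""
    request = {
        "action": "rebalance",
        "credential": credential,      # stored credential for the management group
        "member_count": member_count,  # rebalancing depends on the group size
    }
    with socket.create_connection((elected_node, port)) as conn:
        conn.sendall(json.dumps(request).encode() + b"\n")
        reply = conn.makefile().readline()
    return json.loads(reply)           # e.g. {"status": "rebalancing"} (assumed)
```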
[0026] At block 208, a copy-on-write process may begin claiming and verifying the local storage area for a device or system that has this volume information stored. As mentioned above, a copy-on-write indication in the stored information of the system would signal the management group to provide a configured system image to the system that could be used instead of a locally installed and configured copy of an OS.
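The claim-and-verify step at block 208 might look like the following sketch: reserve the local area, stream the image from the peers, and check it against a digest. The digest exchange is an assumption added for illustration:

```python
import hashlib
from collections.abc import Iterable
from pathlib import Path


def claim_and_verify(local_area: Path, image_chunks: Iterable[bytes],
                     expected_sha256: str) -> bool:
    """Write the group-provided copy-on-write image locally and verify it."""
    local_area.parent.mkdir(parents=True, exist_ok=True)  # claim the local area
    digest = hashlib.sha256()
    with local_area.open("wb") as out:
        for chunk in image_chunks:  # chunks streamed from the peer nodes
            out.write(chunk)
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256  # verify against the group digest
```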
[0027] At block 210, the system can run on an object storage cloud through the OS system image provided by the peer nodes of the management group. A local or remote copy of the OS may be used. At the completion of the step in block 210, the device OS begins and completes the booting process; however, all RAID system redistribution or OS system image sharing has already taken place before the booting process began. This example provides one mechanism of this pre-boot method of RAID file system rebalancing or OS system image sharing within a management group.
[0028] Fig. 3 is a diagram of an example system 300 at a high level showing the interaction of a target system and a shared storage pool. The size of a shared storage pool can be determined by the quantity of data it may store, not just the number of devices bound to the management group. A computing system 302 is shown and may correspond to the computing device 100 in Fig. 1 or the system discussed in Fig. 2. As shown, the computing system 302 may be bound to a management group 304. Although the management group 304 is shown having a plurality of nodes and devices, in some examples the management group may simply be a second computing device to which the computing system 302 is bound through a management controller network 126.
[0029] The computing system 302 is shown directly binding to a proxy node 306. While the proxy node 306 is shown here as a stand-alone system, in some examples the proxy node may be part of another system or storage space. In some examples, the connection between the computing system 302 and the proxy node 306 is not fixed; it is only shown bound to a single system here for simplicity.
[0030] The management group 304 may include not only a proxy node 306, but also a number of management group devices 308. Each management group device 308 (A-D) is shown here with a different piece of a puzzle to represent the fact that each only has a piece of data belonging to a larger volume of data 310. When the management controller 124 of the computing system 302 is powered on, it may request data from the management group. This request may involve communication with each of the management group devices 308 when the management group is a RAID-like file storage system that may rebalance or redistribute files to include the newly connected computing system 302.
[0031] In some examples, the computing system 302 instead may communicate with each of the management group devices 308 (A-D) to request an OS system image volume of data 310. While in some examples this system image may be located on a single management group device 308, in the pictured example a different piece of a complete OS system image is distributed among each of the management group devices 308 (A-D). Further, while the entire volume of data 310 is shown as being sent to the computing system 302, in some examples the computing system 302 may instead be granted access to an OS running on a device in the management group 304 rather than locally on the computing system 302.
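As a sketch of the pictured distribution, reassembling an image whose pieces live on different group devices reduces to an ordered fetch and concatenation; the fetch callable stands in for whatever transport the group actually uses:

```python
from collections.abc import Callable, Iterable


def assemble_image(pieces: Iterable[tuple[str, int]],
                   fetch: Callable[[str, int], bytes]) -> bytes:
    """Fetch each (device, piece_index) pair in order and join into one image."""
    ordered = sorted(pieces, key=lambda p: p[1])  # each piece carries its position
    return b"".join(fetch(device, index) for device, index in ordered)


# Example: four devices, 308(A)-308(D), each holding one quarter of the image.
# image = assemble_image([("308A", 0), ("308B", 1), ("308C", 2), ("308D", 3)],
#                        fetch=my_transport.get_piece)
```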
[0032] Fig. 4 is a block diagram of an example non-transitory, computer readable medium 400 including instructions to direct a processor 402 to configure a device from a shared storage pool. The computer readable medium includes instructions 404 to direct the processor to configure a device from a shared storage pool prior to granting an OS block level storage. Instructions 406 direct the processor 402 to search for other units broadcasting the same code. For example, these instructions may direct the processor 402 to form messages that include the numeric code and an invitation to connect, and to send the messages out over a network. The instructions 404 may direct the processor to store, in the computer readable medium 400, information received from the management (MGMT) group. Instructions may also direct the processor to elect a node of the management group, based on the MGMT group information, and to instruct the elected node or nodes based on the information received. This information may include a request to begin RAID rebalancing or the transmission or implementation of an OS system image on the computing device. Once these instructions are implemented, instructions may direct the processor to enable block level storage to an OS received from the MGMT group in response to instructions provided by the elected node. This allows the computing devices to form a new management group.
[0033] Fig. 5 is a block diagram of an example method for enabling block level storage for a host operating system after it is configured with information from a shared storage group.
[0034] At block 500, a computing system 302 stores management group information prior to booting. This information may have been obtained by a management controller 124 communicating over a management controller network 126 and obtaining volume information and credential information for the management group. In some examples, this pre-boot binding was implemented by a communication of a common personal identification number (PIN) entered at both a management group device and the computing system 302.
[0035] At block 502, the computing system 302 may elect a node of the management group to communicate with in order to facilitate data transfer and processing. This data may include volumes of data used in RAID file storage, transmission of an OS system file image, transmission of configuration data shared by the management group, or other similar management group data volumes. This data may be generated or compiled by the elected node of the management group for use by the computing system 302.
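The description does not give an election rule; one deterministic sketch is to probe peers in sorted order and elect the first that answers, so every joining system picks the same node:

```python
import socket


def elect_node(peers: list[str], port: int, timeout: float = 2.0) -> str | None:
    """Elect the lowest-addressed peer that answers a TCP connect probe."""
    for peer in sorted(peers):  # deterministic order: every system agrees
        try:
            with socket.create_connection((peer, port), timeout=timeout):
                return peer     # first reachable peer wins the election
        except OSError:
            continue            # unreachable peer: try the next one
    return None                 # no peer reachable
```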
[0036] At block 504, the computing system may enable block level storage for a host operating system after configuration of the storage device of the computing system 302. As discussed above, the configuration of the storage device, or of the computing system 302 itself, may be accomplished prior to booting a locally configured OS due to the communication enabled by a management controller already bound to a management group. Accordingly, the method step at block 504 waits until this configuration or transmission is complete so that the booting of the OS or operation of the RAID file system on the computing system 302 does not occur until indicated as complete. A benefit of this method is that a computing system 302, or several such systems, may automatically form a large pool of shared storage of much higher capacity from small, inexpensive volumes. These pools may be automatically formed, shrunk, or grown, and automatically configured through the direction of the management controller prior to booting an OS locally on the computing system 302.
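The wait at block 504 can be expressed as a poll-until-complete gate; both callables are hypothetical stand-ins for whatever completion signal and enable hook the management controller exposes:

```python
import time
from collections.abc import Callable


def enable_block_storage_when_ready(configuration_complete: Callable[[], bool],
                                    enable_block_storage: Callable[[], None],
                                    poll_interval: float = 1.0) -> None:
    """Hold the host OS off block level storage until configuration finishes."""
    while not configuration_complete():  # block 504: wait for transfer/configuration
        time.sleep(poll_interval)
    enable_block_storage()               # only now may the host OS boot and mount
```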
[0037] Through the methods discussed above, systems may boot from a base OS image installed into the management group or from a shared storage pool. The OS may be maintained in a similar way and not require local or device-specific provisioning. In some examples, the driver for the storage level provider is included in the kernel or boot image or can be loaded from the UEFI boot environment, and the system can boot directly to the production OS in seconds after power up.
[0038] While the present techniques may be susceptible to various modifications and alternative forms, the techniques discussed above have been shown only by way of example. It is to be understood that the technique is not intended to be limited to the particular examples disclosed herein. Indeed, the present techniques include all alternatives, modifications, and equivalents falling within the scope of the following claims.

Claims

CLAIMS
What is claimed is:
1. A system to configure itself from a shared storage pool prior to booting, comprising:
a storage device to store management group information obtained prior to booting from a shared storage pool; and
a management controller to communicate with an elected node of the shared storage pool, based on the management group information, to generate preboot data for receipt by the storage device;
wherein the management controller is to enable block level storage for a host operating system after configuration of the storage device, the configuration of the storage device based on both the preboot data received and the management group information.
2. The system of claim 1, wherein the management group information includes a credential for accessing the shared storage pool and volume information about data shared by the shared storage pool.
3. The system of claim 2, wherein the volume information includes a type of volume, an indication of whether the volume is copy-on-write, and an indication of whether the volume is pure data or another topography.
4. The system of claim 1, wherein the management controller loads indices and block maps of data from the shared storage pool.
5. The system of claim 1, wherein the management controller is a baseboard management controller, a virtual driver, or firmware located locally in the system.
6. The system of claim 1, wherein the management controller communicates with a plurality of elected nodes and draws data from each elected node.
7. The system of claim 1, wherein configuration of the storage device includes rebalancing data between the elected node and the storage device based on a size of the shared storage pool.
8. The system of claim 1, wherein configuration of the storage device includes a copy-on-write process to begin claiming and verifying local storage area in the storage device.
9. A method to configure a device from a shared storage pool prior to booting, comprising:
storing, in a storage device, management group information obtained prior to booting from a shared storage pool;
instructing an elected node to generate preboot data for receipt by the storage device based on communication between a management controller and the elected node of the shared storage pool; and
enabling block level storage for a host operating system after configuration of the storage device, the configuration of the storage device based on both the preboot data received and the management group information.
10. The method of claim 9, wherein the management group information includes a credential for accessing the shared storage pool and volume information about data shared by the shared storage pool.
11. The method of claim 10, wherein the volume information includes a type of volume, an indication of whether the volume is copy-on-write, and an indication of whether the volume is pure data or another topography.
12. The method of claim 9, comprising loading, with the management controller, indices and block maps of data from the shared storage pool.
13. The method of claim 9, wherein configuration of the storage device includes rebalancing data between the elected node and the storage device based on a size of the shared storage pool.
14. A non-transitory computer readable medium, comprising code to direct a processor to:
store, in a storage device, management group information obtained prior to booting from a shared storage pool;
instruct an elected node to generate preboot data for receipt by the storage device based on communication between a management controller and an elected node of the shared storage pool; and
enable block level storage for a host operating system after configuration of the storage device, the configuration of the storage device based on both the preboot data received and the management group information.
15. The non-transitory computer readable medium of claim 14, wherein configuration of the storage device includes a copy-on-write process to begin claiming and verifying local storage area in the storage device.

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/US2015/024975 WO2016164013A1 (en) 2015-04-08 2015-04-08 Pre-boot device configuration
TW105111047A TW201638801A (en) 2015-04-08 2016-04-08 Pre-boot device configuration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2015/024975 WO2016164013A1 (en) 2015-04-08 2015-04-08 Pre-boot device configuration

Publications (1)

Publication Number Publication Date
WO2016164013A1 2016-10-13

Family

ID=57072006

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/024975 WO2016164013A1 (en) 2015-04-08 2015-04-08 Pre-boot device configuration

Country Status (2)

Country Link
TW (1) TW201638801A (en)
WO (1) WO2016164013A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060020837A1 (en) * 2004-06-29 2006-01-26 Rothman Michael A Booting from a remote BIOS image
US20090172135A1 (en) * 2007-12-31 2009-07-02 Paul Carbin Pre-boot retrieval of an external boot file
US20090319806A1 (en) * 2008-06-23 2009-12-24 Ned Smith Extensible pre-boot authentication
US20090327683A1 (en) * 2008-06-30 2009-12-31 Mason Cabot System and method to accelerate access to network data using a networking unit accessible non-volatile storage
US20130254521A1 (en) * 2012-03-22 2013-09-26 International Business Machines Corporation Simulated Network Boot Environment for Bootstrap Redirection

Also Published As

Publication number Publication date
TW201638801A (en) 2016-11-01

Similar Documents

Publication Publication Date Title
CN106528194B (en) Network switch and method for updating device using network switch
US10127032B2 (en) System and method for unified firmware management
US9912535B2 (en) System and method of performing high availability configuration and validation of virtual desktop infrastructure (VDI)
US9003000B2 (en) System and method for operating system installation on a diskless computing platform
US8176486B2 (en) Maintaining a pool of free virtual machines on a server computer
TWI492064B (en) Cloud system and the boot up and deployment method for the cloud system
US20140129819A1 (en) Cloud cluster system and boot deployment method for the same
US20110225274A1 (en) Bios parameter virtualization via bios configuration profiles
US8909746B2 (en) System and method for operating system installation on a diskless computing platform
US20110264879A1 (en) Making Automated Use of Data Volume Copy Service Targets
US20130262700A1 (en) Information processing system and virtual address setting method
US11188407B1 (en) Obtaining computer crash analysis data
CN105721534A (en) System and method for network-based iscsi boot parameter deployment
CN105095103A (en) Storage device management method and device used for cloud environment
US10972350B2 (en) Asynchronous imaging of computing nodes
US20180157509A1 (en) System and method to expose remote virtual media partitions to virtual machines
US20160241432A1 (en) System and method for remote configuration of nodes
WO2016069011A1 (en) Management controller
US9189286B2 (en) System and method for accessing storage resources
US10332182B2 (en) Automatic application layer suggestion
CN114443148B (en) Method for centrally managing server starting disk and server
WO2016164013A1 (en) Pre-boot device configuration
JP6051798B2 (en) Firmware verification system, firmware verification method, and firmware verification program
Tosatto Citrix XenServer 6.0 Administration Essential Guide
US10652247B2 (en) System and method for user authorization in a virtual desktop access device using authentication and authorization subsystems of a virtual desktop environment

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 15888665

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 15888665

Country of ref document: EP

Kind code of ref document: A1