WO2017023310A1 - Selecting hardware combinations - Google Patents

Selecting hardware combinations

Info

Publication number
WO2017023310A1
Authority
WO
WIPO (PCT)
Prior art keywords
hardware
leverage
combinations
server
cloud
Prior art date
Application number
PCT/US2015/043786
Other languages
French (fr)
Inventor
Sahana Alva DEREBAIL
Original Assignee
Hewlett Packard Enterprise Development Lp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Enterprise Development LP
Priority to PCT/US2015/043786
Publication of WO2017023310A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 - Arrangements for software engineering
    • G06F 8/60 - Software deployment
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 - Partitioning or combining of resources
    • G06F 9/5072 - Grid computing

Abstract

Examples relate to selecting hardware combinations. In some examples, server parameters corresponding to hardware components in a cloud infrastructure are identified, and application requirements of a cloud application to be deployed using at least a portion of the cloud infrastructure are identified. A full set of potential hardware combinations that can support the cloud application may be determined, where the full set of potential hardware combinations may be determined based on the application requirements and the server parameters. A narrowed set of potential hardware combinations may be determined based on a hardware leverage guide. A selection of a hardware combination from the narrowed set of potential hardware combinations may be received, and the cloud application may be deployed using the selected hardware combination.

Description

SELECTING HARDWARE COMBINATIONS
BACKGROUND
[0001] Testing and managing hardware products often includes supporting a wide variety of hardware architecture and operating system combinations. Test planners can use a complex matrix of supported hardware to generate testing plans for potential combinations. For datacenter administrators, maintaining an entire supported server configuration includes handling procurement (i.e., the cost of resources and maintenance of hardware).
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] The following detailed description references the drawings, wherein:
[0003] FIG. 1 is a block diagram of an example computing device for selecting hardware combinations;
[0004] FIG. 2 is a block diagram of an example system including a computing device for selecting hardware combinations;
[0005] FIG. 3 is a flowchart of an example method for execution by a computing device for selecting hardware combinations; and
[0006] FIG. 4 is a flowchart of an example method for execution by a computing device for selecting a hardware combination to satisfy application requirements.
DETAILED DESCRIPTION
[0007] As described above, complex hardware systems can be manually configured according to complex matrices of supported hardware. Such processes are time consuming and error prone. Examples herein apply hardware leverage rules to reduce the combinations of hardware resources (e.g., servers, host bus adapters, switches, storage arrays, etc.) that are available in a cloud infrastructure. From a datacenter management point of view, the narrowed set of hardware combinations facilitates the selection of a hardware combination that satisfies requirements of a cloud application.
[0008] In some examples, server parameters corresponding to hardware components in a cloud infrastructure are identified, and application requirements of a cloud application to be deployed using at least a portion of the cloud infrastructure are identified. A full set of potential hardware combinations that can support the cloud application is determined, where the full set of potential hardware combinations is determined based on the application requirements and the server parameters. A narrowed set of potential hardware combinations is determined based on a hardware leverage guide. A selection of a hardware combination from the narrowed set of potential hardware combinations is received. At this stage, the cloud application is deployed using the selected hardware combination.
[0009] Referring now to the drawings, FIG. 1 is a block diagram of an example computing device 100 for selecting hardware combinations. The example computing device 100 may be a desktop computer, server, notebook computer, tablet, or other device suitable for analyzing hardware systems as described below. In the example of FIG. 1, computing device 100 includes processor 110, interface 115, and machine-readable storage medium 120.
[0010] Processor 110 may be one or more central processing units (CPUs), microprocessors, and/or other hardware devices suitable for retrieval and execution of instructions stored in machine-readable storage medium 120. Processor 110 may fetch, decode, and execute instructions 122, 124, 126, 128, 130, 132 to enable selecting hardware combinations, as described below. As an alternative or in addition to retrieving and executing instructions, processor 110 may include one or more electronic circuits including a number of electronic components for performing the functionality of one or more of instructions 122, 124, 126, 128, 130, 132.
[0011] Interface 115 may include a number of electronic components for communicating with computing devices. For example, interface 115 may include wireless interfaces such as wireless local area network (WLAN) interfaces and/or physical interfaces such as Ethernet interfaces, Universal Serial Bus (USB) interfaces, external Serial Advanced Technology Attachment (eSATA) interfaces, or any other physical connection interface suitable for communication with end devices. In operation, as detailed below, interface 115 may be used to send and receive data to and from other computing devices.
[0012] Machine-readable storage medium 120 may be any electronic, magnetic, optical, or other physical storage device that stores executable instructions. Thus, machine-readable storage medium 120 may be, for example, Random Access Memory (RAM), Content Addressable Memory (CAM), Ternary Content Addressable Memory (TCAM), an Electrically-Erasable Programmable Read-Only Memory (EEPROM), flash memory, a storage drive, an optical disc, and the like. As described in detail below, machine-readable storage medium 120 may be encoded with executable instructions for selecting hardware combinations.
[0013] Server parameter identifying instructions 122 may identify server parameters. The server parameters may correspond to components of a hardware combination. Examples of server parameters include operating system(s), server(s), host bus adapter(s) (HBA's), networking device(s), etc. The server parameters can be initially identified by querying a cloud infrastructure for available hardware. Each server parameter can have multiple values. For example, the operating system (OS) parameter can have a first OS value, a second OS value, and a third OS value, where each OS value may represent an OS available in, for example, the marketplace. Similarly, the server parameter can have different values for different server types (e.g., blade server, rack server, tower server, etc.), and the HBA parameter can have a single channel value and a dual channel value. A summary of values for the server parameters in this example is shown below in TABLE 1:
TABLE 1 : Example of Server Parameter Values
OS parameter - (1) first OS value; (2) second OS value; and (3) third OS value
Server parameter - (1) first server type value and (2) second server type value
HBA parameter - (1) single channel value and (2) dual channel value
[0014] Application requirements identifying instructions 124 may identify application requirements of a cloud application. For example, a service request may be received from a user, where the service request specifies requirements for a cloud application that the user would like to deploy in a cloud infrastructure. Examples of application requirements may include storage capacity, minimum response time, processing power, etc.
[0015] Full set determining instructions 126 may determine the full set of hardware combinations for the server parameters identified above. The full set can be determined based on the application requirements and the server parameters. Specifically, some server parameters can be restricted based on the application requirements (e.g., a minimum storage capacity requirement could restrict a server type parameter to particular servers). In an example with three OS values, two server type values, and two HBA values, there are twelve possible combinations for the set of hardware combinations as shown below in TABLE 2:
TABLE 2: Example of Full Set of Combinations
Combination 1 - first OS value, first server type value, and single channel value
Combination 2 - first OS value, second server type value, and single channel value
Combination 3 - second OS value, first server type value, and single channel value
Combination 4 - second OS value, second server type value, and single channel value
Combination 5 - third OS value, first server type value, and single channel value
Combination 6 - third OS value, second server type value, and single channel value
Combination 7 - first OS value, first server type value, and dual channel value
Combination 8 - first OS value, second server type value, and dual channel value
Combination 9 - second OS value, first server type value, and dual channel value
Combination 10 - second OS value, second server type value, and dual channel value
Combination 11 - third OS value, first server type value, and dual channel value
Combination 12 - third OS value, second server type value, and dual channel value
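As an illustration only (not part of the disclosure), the full set in TABLE 2 can be reproduced as the Cartesian product of the parameter values from TABLE 1. The Python below is a minimal sketch; the parameter names and value names are placeholders standing in for whatever a real inventory query would return.

from itertools import product

# Placeholder server parameter values mirroring TABLE 1; a real system would
# obtain these by querying the cloud infrastructure for available hardware.
server_parameters = {
    "os": ["os_1", "os_2", "os_3"],
    "server_type": ["server_type_1", "server_type_2"],
    "hba": ["single_channel", "dual_channel"],
}

def full_set(parameters):
    # Every possible hardware combination is one entry of the Cartesian product.
    names = list(parameters)
    return [dict(zip(names, values))
            for values in product(*(parameters[name] for name in names))]

combos = full_set(server_parameters)
print(len(combos))  # 3 * 2 * 2 = 12 combinations, matching TABLE 2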
[0016] Narrowed set determining instructions 128 may narrow the full set of hardware combinations based on a hardware leverage guide. The hardware leverage guide may include a number of hardware leverage rules that each specify a priority for specific kinds of hardware in the server parameter values. For example, a hardware leverage rule can specify that the two possible server types are in the same family. In another example, a hardware leverage rule can specify that dual channel HBA's have a higher priority than single channel HBA's. In this example, the full set of twelve combinations can be reduced to three (one for each OS value) because the hardware leverage guide specifies that the servers are equivalent and that there is a preference for a particular HBA. The application of the hardware leverage guide can result in a leveraged set of hardware combinations. The leveraged hardware combinations are considered leveraged because they account for the priorities specified in the hardware leverage guide.
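Continuing the sketch above, the two example rules in this paragraph (the two server types form one family, dual channel HBA's are preferred) can be encoded roughly as follows. The rule format is an assumption made for illustration, not a structure defined by the disclosure; combos comes from the previous sketch.

# Hypothetical encoding of a hardware leverage guide: a parameter either has a
# preferred value or a family whose members are treated as equivalent.
leverage_guide = {
    "server_type": {"family": ["server_type_1", "server_type_2"]},
    "hba": {"prefer": "dual_channel"},
}

def apply_leverage_guide(combinations, guide):
    kept, seen = [], set()
    for combo in combinations:
        c = dict(combo)
        dropped = False
        for param, rule in guide.items():
            if "prefer" in rule and c[param] != rule["prefer"]:
                dropped = True  # lower-priority value, drop this combination
                break
            if "family" in rule:
                c[param] = rule["family"][0]  # collapse family members to one representative
        if not dropped:
            key = tuple(sorted(c.items()))
            if key not in seen:
                seen.add(key)
                kept.append(c)
    return kept

print(len(apply_leverage_guide(combos, leverage_guide)))  # 3: one per OS value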
[0017] In some cases, an orthogonal array technique may also be applied to the set of hardware combinations before or after the hardware leverage guide. The orthogonal array technique can be used when a set of inputs (e.g., server parameters) results in numerous combinations that would be prohibitive to test exhaustively. A narrowed set resulting from the orthogonal array technique may be considered orthogonal because each combination may have different characteristics from the other combinations in the narrowed set (i.e., redundancy in the narrowed set is minimized).
[0018] The orthogonal array technique constructs a narrowed set of hardware combinations that include all pairings of each value of each set of parameters. In other words, the technique can maximize the hardware combination coverage while minimizing the number of hardware combinations. In this example, if applied before the hardware leverage guide, the twelve possible combinations could be reduced to six combinations by the orthogonal array technique (e.g., six combinations covering the three OS values and both server type values, with the single channel HBA and dual channel HBA each being combined with each server type value within those six combinations), as shown below in TABLE 3:
TABLE 3: Example Application of Orthogonal Array Technique
Orthogonal combination 1 - first OS value, first server type value, and single channel value
Orthogonal combination 2 - first OS value, second server type value, and dual channel value
Orthogonal combination 3 - second OS value, first server type value, and dual channel value
Orthogonal combination 4 - second OS value, second server type value, and single channel value
Orthogonal combination 5 - third OS value, first server type value, and single channel value
Orthogonal combination 6 - third OS value, second server type value, and dual channel value
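The disclosure does not specify how the orthogonal array is constructed, so the sketch below uses a greedy all-pairs covering as a stand-in: it approximates the stated goal (every pairing of values across any two parameters appears at least once) and, for the placeholder values of TABLE 1 reused from the earlier sketch, yields six combinations as in TABLE 3.

from itertools import combinations as parameter_pairs, product

def pairwise_reduce(parameters):
    # Greedily pick combinations until every value pair across any two
    # parameters is covered at least once (the coverage goal described above).
    names = list(parameters)
    all_combos = [dict(zip(names, vals))
                  for vals in product(*(parameters[n] for n in names))]
    uncovered = {((p1, v1), (p2, v2))
                 for p1, p2 in parameter_pairs(names, 2)
                 for v1 in parameters[p1] for v2 in parameters[p2]}
    chosen = []
    while uncovered:
        # Pick the combination covering the most still-uncovered pairs.
        best = max(all_combos, key=lambda c: sum(
            ((p1, c[p1]), (p2, c[p2])) in uncovered
            for p1, p2 in parameter_pairs(names, 2)))
        chosen.append(best)
        for p1, p2 in parameter_pairs(names, 2):
            uncovered.discard(((p1, best[p1]), (p2, best[p2])))
    return chosen

print(len(pairwise_reduce(server_parameters)))  # 6 combinations, as in TABLE 3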
[0019] In this example, the hardware leverage guide may then be applied to the narrowed set to generate a leveraged set of hardware combinations as shown below in TABLE 4:
TABLE 4: Example Application of Hardware Leverage Guide
Leveraged combination 1 - first OS value, second server type value, and dual channel value
Leveraged combination 2 - second OS value, second server type value, and single channel value
Leveraged combination 3 - third OS value, second server type value, and dual channel value
[0020] Combination selection receiving instructions 130 may receive a selection of a hardware combination from the narrowed set of hardware combinations. For example, the narrowed set can be displayed for an administrator to review the combinations and to select a hardware combination for deploying the cloud application. In another example, the combinations in the narrowed set can be displayed for the user that submitted the service request, where the user selects the hardware combination.
[0021] Cloud application deploying instructions 132 may deploy the cloud application using the selected hardware combination. For example, the selected hardware combination can be assigned to the user in the cloud infrastructure and then used to deploy the cloud application. In this manner, cloud resources can be assigned between users of the cloud infrastructure with improved efficiency.
[0022] FIG. 2 is a block diagram of an example system 200 including computing device 200 and server devices 250A, 250N for selecting hardware combinations. The components of computing device 200 may be similar to the corresponding components of computing device 100 described with respect to FIG. 1. Computing device 200 is in communication with server devices 250A, 250N via a network 245.
[0023] In the embodiment of FIG. 2, computing device 200 includes a number of modules 210-228, including interface module 210, configuration module 220, and hardware guide module 226. Each of the modules may include a series of instructions encoded on a machine-readable storage medium and executable by a processor of computing device 200. In addition or as an alternative, each module may include one or more hardware devices including electronic circuitry for implementing the functionality described below.
[0024] Interface module 210 may manage communications with the server devices 250A, 250N. Specifically, the interface module 210 may initiate connections with the server devices 250A, 250N and then send or receive data to/from the server devices 250A, 250N.
[0025] Configuration module 220 may generate hardware combinations for analysis. Specifically, configuration module 220 may generate leveraged hardware combinations as described above with respect to FIG. 1. Combination module 222 of configuration module 220 may determine server hardware combinations from a set of available hardware. Combination module 222 may access a cloud infrastructure to identify available hardware components, which can be used as values for server parameters. For example, the cloud infrastructure can include a hardware database that can be queried to identify the available hardware components. In another example, API module 260 of the server devices 250A, 250N in the cloud infrastructure may be used to discover the available hardware components as described below. Combination module 222 may then determine the possible combinations of the server parameter values to create a full set of hardware combinations.
[0026] Narrowing module 224 of configuration module 220 may reduce the full set of hardware combinations to a narrowed set of hardware combinations. For example, narrowing module 224 can apply an orthogonal array technique to reduce the full set to a narrowed set of hardware combinations. The orthogonal array technique reduces the full set by maximizing coverage of the hardware combinations in a select subset of the initial set. The term orthogonal indicates that factors can be evaluated independently of one another (i.e., the effect of one factor does not interfere with the estimation of the effect of another factor). The orthogonal array technique constructs a reasonably small set of combinations (i.e., narrowed set of hardware combinations) that include all pairings of each value of each of a set of parameters.
[0027] Narrowing module 224 may also apply a hardware leverage guide to the full or narrowed set of combinations to create a leveraged set of combinations. A hardware leverage guide may specify families (i.e., sets) of hardware. A hardware family can be available hardware of a specified type that shares common characteristics, and each family can be divided by a set of rules, where components in the same family can be leveraged (i.e., prioritized). For example, with respect to servers, a server family can be defined as servers (1) from the same server group; (2) using the same chipset (e.g., host bridge chipset); (3) using the same processor type or family; and (4) having the same host bus type. In this example, a hardware leverage rule can specify that all servers in the family have the same priority. In another example, with respect to HBA's, an HBA family can be defined as HBA's (1) from the same supplier; (2) with the same card type; (3) using the same application-specific integrated circuit (ASIC); (4) having the same host bus type; (5) having the same bus speed limits; and (6) having the same I/O rate. In this example, a hardware leverage rule can specify that dual channel HBA's are prioritized over single channel HBA's.
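The family criteria above lend themselves to grouping devices by a shared attribute key; the sketch below is illustrative only, and the attribute names and device records are hypothetical stand-ins for whatever a real inventory stores.

from collections import defaultdict

# Hypothetical server records carrying the four family criteria listed above.
servers = [
    {"model": "server_a", "group": "g1", "chipset": "c1", "cpu_family": "p1", "host_bus": "b1"},
    {"model": "server_b", "group": "g1", "chipset": "c1", "cpu_family": "p1", "host_bus": "b1"},
    {"model": "server_c", "group": "g2", "chipset": "c2", "cpu_family": "p2", "host_bus": "b1"},
]

def group_into_families(devices, keys):
    # Devices that agree on every listed attribute fall into the same family.
    families = defaultdict(list)
    for device in devices:
        families[tuple(device[k] for k in keys)].append(device["model"])
    return list(families.values())

print(group_into_families(servers, ("group", "chipset", "cpu_family", "host_bus")))
# [['server_a', 'server_b'], ['server_c']]; a leverage rule can then give all
# members of a family the same priority, as described above.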
[0028] When applying a hardware leverage guide, some hardware can have a higher priority than others in the same family. Hardware leverage rules can be applied to various hardware device types (e.g., servers, HBA's, storage arrays, networking devices, etc.). Narrowing module 224 uses the hardware leverage rules to reduce the full set or further reduce the narrowed set of combinations by enforcing the hardware priorities.
[0029] Hardware guide module 226 may allow an administrator of computing device 200 to manage hardware leverage guides. Each type of hardware (i.e., device type) can have a different procedure for creating a hardware family with corresponding hardware leverage rules. Hardware guide module 226 allows the administrator to create hardware leverage guides like those described in the examples above as well as similar guides for storage arrays, networking devices, etc. For example, the hardware guide module 226 can display a user interface for the administrator that allows the administrator to specify preferred values for server parameters, which are used to create hardware leverage rules in the hardware leverage guide.
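A minimal sketch of turning administrator-specified preferred values into leverage rules in the rule format assumed by the earlier sketches; the user interface itself is not modeled, and the function name is hypothetical.

def build_leverage_guide(preferred_values):
    # One "prefer" rule per server parameter the administrator constrains.
    return {param: {"prefer": value} for param, value in preferred_values.items()}

# e.g., an administrator prefers dual channel HBA's and one server type
guide = build_leverage_guide({"hba": "dual_channel", "server_type": "server_type_2"})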
[0030] Server devices 250A, 250N may be any server device (e.g., servers, HBA's, storage arrays, networking devices, etc.) accessible to computing device 200 over a network 245 that is suitable for executing the functionality described below. As detailed below, each server device 250A, 250N may include a series of modules 260-264 and components 270 for providing computing services. Server devices 250A, 250N can be any number of servers at various locations that are members of a cloud infrastructure. The cloud infrastructure provides users with universal access to the pool of hardware resources in the server devices 250A, 250N.
[0031] API module 260 may provide access to hardware data of server device A 250A. Infrastructure module 262 of API module 260 may manage hardware resources of server device A 250A. Specifically, infrastructure module 262 may provision, configure, and discover hardware resources. For example, infrastructure module 262 can provide access to a hardware profile (i.e., list of components 270 and associated attributes) of server device A 250A, which is used by combination module 222 to identify server parameters.
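Server parameter values could then be derived from such hardware profiles. The sketch below only illustrates collecting the distinct values seen per parameter; the profile fields are hypothetical, not the format of any real infrastructure API.

# Hypothetical hardware profiles exposed by the cloud infrastructure.
profiles = [
    {"server_type": "server_type_1", "hba": "single_channel", "os": ["os_1", "os_2"]},
    {"server_type": "server_type_2", "hba": "dual_channel", "os": ["os_1", "os_2", "os_3"]},
]

def parameters_from_profiles(profiles):
    # Collect every distinct value reported for each server parameter.
    params = {"os": set(), "server_type": set(), "hba": set()}
    for profile in profiles:
        params["os"].update(profile["os"])
        params["server_type"].add(profile["server_type"])
        params["hba"].add(profile["hba"])
    return {name: sorted(values) for name, values in params.items()}

print(parameters_from_profiles(profiles))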
[0032] Application module 264 of API module 260 may manage the deployment of cloud applications. For example, application module 264 can deploy various cloud services (e.g., database services, storage services, messaging services, etc.). Application module 264 can also provide access to performance statistics (e.g., average throughput, average load, average response time, error logs, etc.) related to the cloud services provided by server device A 250A.
[0033] FIG. 3 is a flowchart of an example method 300 for execution by a computing device 100 for selecting hardware combinations. Although execution of method 300 is described below with reference to computing device 100 of FIG. 1, other suitable devices for execution of method 300 may be used, such as computing device 200 of FIG. 2. Method 300 may be implemented in the form of executable instructions stored on a machine-readable storage medium, such as computer readable medium 120 of FIG. 1, and/or in the form of electronic circuitry.
[0034] Method 300 may start in block 305 and continue to block 310, where computing device 100 may identify server parameters. For example, the server parameters can be initially identified by querying a cloud infrastructure for available hardware. In block 315, computing device 100 identifies application requirements of a cloud application. For example, a service request may be received from a user, where the service request specifies requirements for a cloud application that the user would like to deploy in a cloud infrastructure.
[0035] In block 320, computing device 100 may determine the full set of hardware combinations for the server parameters identified above. The full set can be determined based on the application requirements and the server parameters. For example, some values for server parameters can be excluded based on the application requirements (e.g., a minimum storage capacity requirement could restrict a server type parameter to particular servers), and then the remaining values can be used to determine the full set.
[0036] In block 325, computing device 100 may narrow the full set of hardware combinations based on a hardware leverage guide. The application of the hardware leverage guide can result in a narrowed set of hardware combinations that account for hardware priorities specified in the hardware leverage guide. In block 330, computing device 100 may receive a selection of a hardware combination from the narrowed set of hardware combinations. For example, the narrowed set of combinations can be displayed for the user to make a selection.
[0037] In block 335, computing device 100 may deploy the cloud application using the selected hardware combination. Method 300 may then continue to block 340, where method 300 may stop.
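Blocks 310 through 335 can be strung together roughly as follows, reusing full_set and apply_leverage_guide from the sketches above; the requirement filters and the select/deploy callables are hypothetical stand-ins for the interactive and infrastructure-facing steps, not part of the disclosure.

def method_300(parameters, requirement_filters, leverage_guide, select_combination, deploy):
    # Block 320: drop parameter values the application requirements rule out.
    restricted = {name: [v for v in values
                         if requirement_filters.get(name, lambda _: True)(v)]
                  for name, values in parameters.items()}
    combos = full_set(restricted)                            # block 320: full set
    narrowed = apply_leverage_guide(combos, leverage_guide)  # block 325: leverage guide
    chosen = select_combination(narrowed)                    # block 330: selection
    deploy(chosen)                                           # block 335: deployment
    return chosen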
[0038] FIG. 4 is a flowchart of an example method 400 for execution by a computing device 100 for selecting a hardware combination to satisfy application requirements. Although execution of method 400 is described below with reference to computing device 100 of FIG. 1, other suitable devices for execution of method 400 may be used, such as computing device 200 of FIG. 2. Method 400 may be implemented in the form of executable instructions stored on a machine-readable storage medium, such as computer readable medium 120 of FIG. 1, and/or in the form of electronic circuitry.
[0039] Method 400 may start in block 402 and continue to block 404, where computing device 100 may search a cloud infrastructure for available hardware to satisfy application requirements. The application requirements may be specified in a request from a customer for a hardware combination. In some cases, the application requirements can be for deploying a cloud application that is provided on behalf of the customer. In block 406, computing device 100 may determine if a prebuilt hardware combination that satisfies the application requirements is available. If a prebuilt hardware combination is available, method 400 can proceed to block 420, where computing device 100 may deploy the cloud application on the prebuilt hardware combination.
[0040] If prebuilt hardware is not available, computing device 100 may determine in block 408 whether the cloud application is compatible with the available hardware that is not prebuilt. For example, a cloud application can be incompatible with the software that was used to create the cloud infrastructure. If the cloud application is not compatible, method 400 may proceed to block 422, where method 400 may stop. If the cloud application is compatible, computing device 100 may initially determine the full set of possible combinations based on the available hardware (i.e., server parameters) in block 412. In block 414, computing device 100 may apply a hardware leverage guide to the full set of hardware combinations. The hardware leverage guide may include a number of hardware leverage rules that each specify a priority for specific kinds of hardware in the server parameter values. The application of the hardware leverage guide results in a leveraged set of hardware combinations.
[0041 ] In block 416, computing device 100 may apply an orthogonal array technique to the leveraged set of hardware combinations. The orthogonal array technique constructs a narrowed set of hardware combinations that include all pairings of values for each set of parameters in the leveraged set. In this example, the hardware leverage guide is applied before the orthogonal array technique; however, in other examples, the orthogonal array technique could be applied before the hardware leverage guide.
[0042] In block 418, computing device 100 may receive a selection of a hardware combination from the narrowed set of hardware combinations. Performance statistics related to the hardware combinations (e.g., average throughput, average load, average response time, error logs, etc.) can be collected to assist in performing the selection of the hardware combination. For example, the hardware combinations with the relevant statistics can be displayed for review by an administrator. After a hardware combination is selected, computing device 100 may deploy the cloud application using the selected hardware combination in block 420. For example, the hardware combination may be assigned to the customer and used to provide the cloud application on behalf of the customer. Method 400 may then continue to block 422, where method 400 may stop.
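In the same spirit, the branching of FIG. 4 (prebuilt check, compatibility check, leverage guide before the orthogonal reduction) could be outlined as below. The infra object and its methods, the pairwise reduction over an existing set, and the select/deploy callables are all hypothetical stand-ins for the blocks described above; full_set and apply_leverage_guide come from the earlier sketches.

def method_400(infra, requirements, leverage_guide, reduce_pairwise, select_combination, deploy):
    prebuilt = infra.find_prebuilt(requirements)              # block 406
    if prebuilt is not None:
        return deploy(prebuilt)                               # block 420, prebuilt path
    if not infra.is_compatible(requirements):                 # block 408
        return None                                           # block 422: stop
    combos = full_set(infra.available_parameters())           # block 412
    leveraged = apply_leverage_guide(combos, leverage_guide)  # block 414
    narrowed = reduce_pairwise(leveraged)                     # block 416: orthogonal array
    chosen = select_combination(narrowed)                     # block 418
    return deploy(chosen)                                     # block 420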
[0043] The foregoing disclosure describes a number of examples for selecting hardware combinations. The examples disclosed herein facilitate analyzing hardware combinations by using a hardware leverage guide to determine a narrowed set of hardware combinations, which are used to select a hardware combination for deploying a cloud application.

Claims

CLAIMS
We claim:
1. A device comprising:
a processor to:
identify server parameters corresponding to hardware components in a cloud infrastructure;
identify application requirements of a cloud application to be deployed using at least a portion of the cloud infrastructure;
determine a full set of potential hardware combinations that can support the cloud application, the full set of potential hardware combinations being determined based on the application requirements and the server parameters;
determine a narrowed set of potential hardware combinations based on a hardware leverage guide;
receive a selection of a hardware combination from the narrowed set of potential hardware combinations; and
deploy the cloud application using the selected hardware combination.
2. The device of claim 1, wherein the narrowed set is determined based on an orthogonal array technique.
3. The device of claim 2, wherein the narrowed set includes all pairings of values for each of the server parameters.
4. The device of claim 1, wherein the hardware leverage guide comprises hardware leverage rules that each define a preferred value for one of the server parameters.
5. The device of claim 4, wherein each of the hardware leverage rules applies to a different device type.
6. The device of claim 4, wherein the processor is further to:
display a user interface for managing the hardware leverage guide; and
receive the preferred value for each of the hardware leverage rules in the hardware leverage guide from an administrator of the cloud infrastructure via the user interface.
7. A method for selecting hardware combinations, the method comprising:
identifying server parameters corresponding to hardware components in a cloud infrastructure;
receiving preferred values for the server parameters;
generating hardware leverage rules based on the preferred values of the server parameters;
identifying application requirements of a cloud application to be deployed using at least a portion of the cloud infrastructure;
determining a full set of potential hardware combinations that can support the cloud application, the full set of potential hardware combinations being determined based on the application requirements and the server parameters;
applying the hardware leverage rules to the full set to determine a narrowed set of potential hardware combinations;
receiving a selection of a hardware combination from the narrowed set of potential hardware combinations; and
deploying the cloud application using the selected hardware combination.
8. The method of claim 7, further comprising applying an orthogonal array technique to determine the narrowed set of potential hardware combinations.
9. The method of claim 8, wherein the narrowed set includes all pairings of values for each of the server parameters.
10. The method of claim 9, wherein the orthogonal array technique is applied to the full set prior to the hardware leverage guide.
11. The method of claim 9, wherein the hardware leverage guide is applied to the full set prior to the orthogonal array technique.
12. The method of claim 7, wherein each of the hardware leverage rules applies to a different device type.
13. A non-transitory machine-readable storage medium encoded with instructions executable by a processor, the machine-readable storage medium comprising instructions to:
identify application requirements of a cloud application to be deployed using at least a portion of a cloud infrastructure;
search the cloud infrastructure to identify server parameters that correspond to hardware components in the cloud infrastructure that satisfy the application requirements;
determine a full set of potential hardware combinations based on the server parameters;
apply a hardware leverage guide to the full set to determine a narrowed set of hardware combinations;
receive a selection of a hardware combination from the narrowed set of potential hardware combinations; and
deploy the cloud application using the selected hardware combination.
14. The non-transitory machine-readable storage medium of claim 13, wherein the hardware leverage guide comprises hardware leverage rules that each define a preferred value for one of the server parameters, and wherein the instructions are further to receive a preferred value for each of the hardware leverage rules from an administrator of the cloud infrastructure.
15. The non-transitory machine-readable storage medium of claim 13, wherein the instructions are further to apply an orthogonal array technique to determine the narrowed set of potential hardware combinations prior to applying the hardware leverage guide.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2015/043786 WO2017023310A1 (en) 2015-08-05 2015-08-05 Selecting hardware combinations

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2015/043786 WO2017023310A1 (en) 2015-08-05 2015-08-05 Selecting hardware combinations

Publications (1)

Publication Number Publication Date
WO2017023310A1 true WO2017023310A1 (en) 2017-02-09

Family

ID=57943393

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/043786 WO2017023310A1 (en) 2015-08-05 2015-08-05 Selecting hardware combinations

Country Status (1)

Country Link
WO (1) WO2017023310A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140040656A1 (en) * 2010-08-26 2014-02-06 Adobe Systems Incorporated System and method for managing cloud deployment configuration of an application
US20140040438A1 (en) * 2010-08-26 2014-02-06 Adobe Systems Incorporated Dynamic configuration of applications deployed in a cloud
KR20140056371A (en) * 2011-09-30 2014-05-09 알까뗄 루슨트 Hardware consumption architecture
US20130247136A1 (en) * 2012-03-14 2013-09-19 International Business Machines Corporation Automated Validation of Configuration and Compliance in Cloud Servers
WO2014189529A1 (en) * 2013-05-24 2014-11-27 Empire Technology Development, Llc Datacenter application packages with hardware accelerators

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112506820A (en) * 2020-12-03 2021-03-16 深圳微步信息股份有限公司 USB port hardware parameter analysis method, device, equipment and storage medium
US11323325B1 (en) * 2021-04-26 2022-05-03 At&T Intellectual Property I, L.P. System and method for remote configuration of scalable datacenter
US20220345362A1 (en) * 2021-04-26 2022-10-27 At&T Intellectual Property I, L.P. System and method for remote configuration of scalable datacenter
US11665060B2 (en) 2021-04-26 2023-05-30 At&T Intellectual Property I, L.P. System and method for remote configuration of scalable datacenter

Similar Documents

Publication Publication Date Title
US20230195346A1 (en) Technologies for coordinating disaggregated accelerator device resources
US9116775B2 (en) Relationship-based dynamic firmware management system
US10476951B2 (en) Top-of-rack switch replacement for hyper-converged infrastructure computing environments
US20060069761A1 (en) System and method for load balancing virtual machines in a computer network
US8185905B2 (en) Resource allocation in computing systems according to permissible flexibilities in the recommended resource requirements
US20100042821A1 (en) Methods and systems for providing manufacturing mode detection and functionality in a UEFI BIOS
US10459812B2 (en) Seamless method for booting from a degraded software raid volume on a UEFI system
US20180241643A1 (en) Placement of application services in converged infrastructure information handling systems
US10193969B2 (en) Parallel processing system, method, and storage medium
US10536329B2 (en) Assisted configuration of data center infrastructure
US9229762B2 (en) Host providing system and host providing method
US8745232B2 (en) System and method to dynamically allocate electronic mailboxes
US8995424B2 (en) Network infrastructure provisioning with automated channel assignment
WO2017023310A1 (en) Selecting hardware combinations
US11334436B2 (en) GPU-based advanced memory diagnostics over dynamic memory regions for faster and efficient diagnostics
US20070260606A1 (en) System and method for using a network file system mount from a remote management card
US9519527B1 (en) System and method for performing internal system interface-based communications in management controller
WO2017146618A1 (en) Methods and modules relating to allocation of host machines
US10129082B2 (en) System and method for determining a master remote access controller in an information handling system
US8225009B1 (en) Systems and methods for selectively discovering storage devices connected to host computing devices
US10353741B2 (en) Load distribution of workflow execution request among distributed servers
US10305740B2 (en) System and method for performing mass renaming of list of items at run-time with variable differentiation factor
US11323385B2 (en) Communication system and communication method
US10200242B2 (en) System and method to replicate server configurations across systems using sticky attributions
US20190146851A1 (en) Method, device, and non-transitory computer readable storage medium for creating virtual machine

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15900576; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 15900576; Country of ref document: EP; Kind code of ref document: A1)