WO2011103393A1 - Methods and apparatus related to management of unit-based virtual resources within a data center environment - Google Patents

Methods and apparatus related to management of unit-based virtual resources within a data center environment

Info

Publication number
WO2011103393A1
WO2011103393A1 (PCT/US2011/025393)
Authority
WO
WIPO (PCT)
Prior art keywords
data center
center units
user
units
resources
Prior art date
Application number
PCT/US2011/025393
Other languages
French (fr)
Inventor
Julian J. Box
Kevin D. Reid
Karl J. Simpson
Original Assignee
Virtustream, Inc.
Priority date
Filing date
Publication date
Application filed by Virtustream, Inc. filed Critical Virtustream, Inc.
Priority to EP20110745300 priority Critical patent/EP2539829A4/en
Priority to CN201180020260.XA priority patent/CN102971724B/en
Publication of WO2011103393A1 publication Critical patent/WO2011103393A1/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/50 Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L 41/5041 Network service management, e.g. ensuring proper service fulfilment according to agreements characterised by the time relationship between creation and deployment of a service
    • H04L 41/5051 Service on demand, e.g. definition and deployment of services in real time
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5077 Logical partitioning of resources; Management or configuration of virtualized resources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5072 Grid computing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/50 Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L 41/5003 Managing SLA; Interaction between SLA and QoS
    • H04L 41/5019 Ensuring fulfilment of SLA
    • H04L 41/5025 Ensuring fulfilment of SLA by proactively reacting to service quality change, e.g. by reconfiguration after service quality degradation or upgrade
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F 9/00
    • G06F 2209/50 Indexing scheme relating to G06F 9/50
    • G06F 2209/501 Performance criteria

Definitions

  • Embodiments described herein relate generally to virtual resources within a data center, and, in particular, to management of unit-based virtual resources within a data center environment.
  • a processor-readable medium can be configured to store code representing instructions to be executed by a processor.
  • the code can include code to receive a request to change a value representing a number of data center units included in a set of data center units assigned to a user.
  • Each of the data center units from the set of data center units can be associated with hardware resources managed based on a set of predefined hardware resource limit values.
  • the code can include code to determine, in response to the request, whether hardware resources of a data center unit that are mutually exclusive from the hardware resources of the set of data center units, and that are managed based on the set of predefined resource limit values, are available for assignment to the user when the request to change is an increase request.
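The increase-request check described above can be sketched as follows. All names here (`LIMIT_VALUES`, `handle_increase_request`, the pool structure) are illustrative assumptions for the sketch, not terms from the patent:

```python
# Hypothetical sketch: grant an increase request only if enough mutually
# exclusive units, managed under the same predefined limit values, are
# available in the unassigned pool.

# Hardware resource limit values shared by every data center unit
# (illustrative figures).
LIMIT_VALUES = {"cpu_mhz": 300, "memory_mb": 768, "disk_gb": 30, "net_mbps": 10}

def handle_increase_request(assigned_units, unassigned_pool, requested_total):
    """Return True and move units from the pool if the request can be met."""
    needed = requested_total - len(assigned_units)
    # Only units managed under the same set of limit values qualify.
    candidates = [u for u in unassigned_pool if u["limits"] == LIMIT_VALUES]
    if needed <= 0 or len(candidates) < needed:
        return False  # deny: no increase needed, or insufficient inventory
    for unit in candidates[:needed]:
        unassigned_pool.remove(unit)  # remove from the unassigned pool
        assigned_units.append(unit)   # assign to the requesting user
    return True
```

A denied request leaves both the assigned set and the pool unchanged, mirroring the patent's description of denying an increase when no mutually exclusive resources are available.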
  • FIG. 1 is a schematic diagram that illustrates a management module, a resource controller, and a data center, according to an embodiment.
  • FIG. 2 is a schematic diagram that illustrates a database that can be stored in a memory of a management module, according to an embodiment.
  • FIG. 3 is a schematic diagram that illustrates a database that includes information about the availability of data center resources, according to an embodiment.
  • FIG. 4 is a graph that illustrates values of a performance metric, according to an embodiment.
  • FIG. 5 is a schematic diagram that illustrates a resource controller, according to an embodiment.
  • FIG. 6 is a flowchart that illustrates a method for modifying a set of data center units based on a performance metric, according to an embodiment.
  • FIG. 7 is a flowchart that illustrates a method for modifying a number of data center units in response to a request, according to an embodiment.
  • FIG. 1 is a schematic diagram that illustrates a management module 130, a resource controller 170, and a data center 100, according to an embodiment.
  • the management module 130 is configured to send one or more instructions to the resource controller 170 (or a portion thereof) to trigger the resource controller 170 to manage one or more hardware resources of the data center units 180 within the data center 100.
  • the data center units 180 include data center unit DU1, data center unit DU2, and data center unit DU3.
  • the data center units 180 can be referred to as a set of data center units.
  • the hardware resources of a data center unit can also be referred to as processing resources of a data center unit.
  • the hardware resources of the data center units 180 are managed (e.g., allocated, provisioned, reserved) for use by the user 50 (e.g., for processing associated with the user 50). Said differently, the data center units 180 (or the hardware resources of the data center units 180) are assigned to the user 50. Because the data center units 180 are assigned to the user 50, the user 50 can use the hardware resources of data center units 180 to, for example, perform one or more functions specified by the user 50. For example, the hardware resources of data center units 180 can be used by the user 50, for example, to operate one or more virtual resources (e.g., virtual machines) (not shown) of the user 50.
  • the user 50 can be a customer, a client, a company, and/or so forth.
  • the user 50 can represent a computing element (e.g., a server, a personal computer, a personal digital assistant (PDA)) associated with, for example, a human user.
  • the data center units 180 can each be managed as a specified portion of resources (e.g., hardware resources, software resources) of the data center 100.
  • resources of the data center 100 can be divided into (e.g., partitioned into) data center units 180 that can be used, for example, to handle processing associated with one or more virtual resources (for users such as user 50).
  • the virtual resource(s) can be configured to, for example, emulate the functionality of a physical source device and/or its associated software.
  • the hardware resources (and the associated software resources to support the hardware resources) of one or more of the data center units 180 can be managed so that they perform at (or are capable of performing at), for example, predefined hardware resource limit values.
  • the hardware resources of one or more of the data center units 180 can be managed so that they perform at, for example, a specified level of network bandwidth (e.g., 10 megabits/second (Mb/s) of network bandwidth, a specified level of network bandwidth of more than 1 Mb/s of network bandwidth), a specified level of processing speed (e.g., a processor speed of 300 megahertz (MHz), a processor speed of 600 MHz, a specified processor speed of more than 200 MHz), a specified input/output (I/O) speed of a storage device (e.g., a disk I/O speed of 40 I/O operations per second (IOPS), a specified disk I/O speed of more than 10 IOPS), and/or a specified storage device bandwidth (e.g., a disk bandwidth of 10 Mb/s, a specified level of disk bandwidth of more than 10 Mb/s).
  • a specified portion of hardware resources can also be reserved as part of one or more of the data center unit(s) 180.
  • the data center unit(s) 180 can also have a specified level of a storage device (e.g., a disk size of 30 gigabytes (GB), a specified disk size of more than 1 GB) and/or a specified memory space (e.g., a memory storage capacity of 768 megabytes (MB), a specified memory storage capacity of more than 64 MB) allocated to the data center unit(s) 180.
  • the hardware resources (and accompanying software) of the data center 100 can be partitioned so that the data center units 180 are guaranteed, if necessary, to perform at, or have hardware resources at, the predefined hardware resource limit values.
  • the hardware resources of the data center units 180 can be managed so that they provide guaranteed levels of service that correspond with each (or every) predefined hardware resource limit value from a set of predefined hardware resource limit values.
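One plausible encoding of a set of predefined hardware resource limit values, using the example figures quoted in the text (10 Mb/s network bandwidth, 300 MHz processor speed, 40 IOPS, 30 GB disk, 768 MB memory). The class and field names are assumptions for this sketch, not terms from the patent:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ResourceLimits:
    """Illustrative set of predefined hardware resource limit values."""
    network_mbps: float = 10.0        # network bandwidth limit
    cpu_mhz: int = 300                # processing-speed limit
    disk_iops: int = 40               # storage I/O operations per second
    disk_bandwidth_mbps: float = 10.0 # storage device bandwidth
    disk_gb: int = 30                 # reserved disk size
    memory_mb: int = 768              # reserved memory space

# The hardware resources of every unit in a set can be managed against one
# shared ResourceLimits instance, so each unit offers the same guaranteed
# level of service.
STANDARD_UNIT_LIMITS = ResourceLimits()
```

Freezing the dataclass reflects that the limit values are predefined: units are compared and provisioned against a fixed set of values rather than per-unit tweaks.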
  • the hardware resources (or portions thereof) of a data center unit from the data center units 180 can be reserved so that they are available for processing associated with the user 50.
  • For example, a first hardware resource (or a portion thereof) (e.g., a memory component) and a second hardware resource (or a portion thereof) (e.g., a network card) of a data center unit can be reserved for processing associated with the user 50.
  • the hardware resource(s) (or portions thereof) that are associated with the data center units 180 may be idle (or substantially idle).
  • the hardware resource(s) of the data center units 180 will be idle (or substantially idle) so that they are guaranteed to be available for processing for the user 50 when they are needed.
  • a guaranteed level of service can also be referred to as a guaranteed level of functionality.
  • the set of predefined hardware resource limit values (which can be used to define the data center units 180) can be defined based on statistical data, associated with a predefined set of virtual resources, indicating that a particular combination of hardware resources can be used to operate a virtual resource.
  • a set of predefined hardware resource limit values can be defined based on empirical data.
  • a hardware resource limit value associated with a particular hardware type can first be selected. Additional hardware resource limit values associated with other hardware types can be defined based on empirical data related to desirable operation of the additional hardware resources when the particular hardware type is operating at the selected hardware resource limit value. Accordingly, the set of predefined hardware resource limit values can be defined based on the collective performance of the hardware resources using the selected hardware resource limit value as a starting point.
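The empirical procedure above can be sketched as follows: fix a limit for one anchor hardware type, then derive the remaining limits from samples observed while that type runs at the fixed limit. The function name, sample structure, and the choice of the observed peak as the derived limit are all assumptions for illustration:

```python
# Hedged sketch: derive a set of limit values from empirical samples,
# using one preselected hardware type as the starting point.
def derive_limits(anchor_type, anchor_value, samples):
    """samples: dicts of observed desirable usage per hardware type,
    collected while anchor_type operated at anchor_value."""
    limits = {anchor_type: anchor_value}
    other_types = {t for s in samples for t in s if t != anchor_type}
    for t in other_types:
        # Take the peak observed desirable usage as that type's limit value.
        limits[t] = max(s[t] for s in samples if t in s)
    return limits
```

The result is a complete set of limit values anchored to the initially selected value, which matches the text's description of building the set from one starting point.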
  • the data center units 180 can be defined by a set of predefined hardware resource limit values so that each data center unit can operate a particular type of virtual resource or set of virtual resources in a desirable fashion (e.g., within a particular set of performance specifications).
  • the hardware resources of the data center units 180 can be managed (e.g., allocated, reserved), at least in part, by the resource controller 170 (or a portion thereof) based on one or more predefined hardware resource limit values.
  • the resource controller 170 can be configured to manage a resource (e.g., a software resource, a hardware resource) of the data center 100, or a portion thereof, to one or more of the data center units 180 based on a predefined hardware resource limit value (e.g., a predefined hardware resource limit value from a set of predefined hardware resource limit values).
  • the predefined hardware resource limit values can be policed or enforced by the resource controller 170.
  • the resource controller 170 can be configured to manage processing resources of a processor of a host device (not shown) within the data center 100 so that a specified portion of the processing capacity of the processor (which can correspond with a hardware resource limit value) is reserved for the data center unit DU1.
  • the resource controller 170 (or a portion thereof) can be configured to interface with the resources of the data center 100 so that the hardware resources (from the resources of the data center 100) of data center units 180 can provide guaranteed levels of service that correspond with a set of predefined hardware resource limit values.
  • the resource controller 170 can include one or more specialized resource controllers that are each configured to manage resources associated with a particular type of resource (e.g., a memory type, a central processing unit). More details related to a resource controller and specialized resource controllers are described in connection with FIG. 5.
  • the hardware resources of one or more of the data center units 180 can be managed so that only certain predefined hardware resource limit values of the hardware resources of the data center unit(s) 180 are guaranteed.
  • the hardware resources of data center unit DU1 can be managed by the resource controller 170 (or a portion thereof) so that the hardware resources of data center unit DU1 can provide a guaranteed level of processing speed and have a guaranteed portion of disk space available, but can be managed so that the hardware resources of data center unit DU1 may provide a specified bandwidth speed in only certain situations. Accordingly, the bandwidth speed of the hardware resources of data center unit DU1 is not guaranteed. In such circumstances, the data center unit DU1 can be referred to as a partially guaranteed data center unit.
  • the hardware resources of data center units 180 can be managed so that the hardware resources of each of the data center units 180 are managed based on the same set of hardware resource limit values. Accordingly, hardware resources of each data center unit from the data center units 180 may be managed so that they provide the same (or substantially the same) guaranteed level of service.
  • the hardware resources of one or more of the data center units 180 can be based on different sets of predefined hardware resource limit values.
  • the hardware resources of data center unit DU1 can be based on a first set of predefined hardware resource limit values and the hardware resources of data center unit DU2 can be based on a second set of predefined hardware resource limit values different than the first set of predefined hardware resource limit values.
  • the hardware resources of data center unit DU1 can provide a different guaranteed level of service than the guaranteed level of service provided by hardware resources of data center unit DU2.
  • the resource controller 170 can be configured to manage the hardware resources of these different data center units based on the different sets of predefined hardware resource limit values.
  • one or more of the data center units 180 can include software resources.
  • software resources can be associated with (and can define) at least a portion of the data center unit(s) 180.
  • the hardware resources of data center unit DU1 can have a software resource licensed specifically for operation of and/or operation within the hardware resources of data center unit DU1.
  • the resource controller 170 (or a portion thereof) can be configured to manage the software resources of the data center 100 so that the software resources are allocated (e.g., assigned), as specified, to the hardware resources of each of the data center units 180.
  • Resource controllers configured to manage a portion of a data center unit that is hardware-based can be referred to as hardware resource controllers.
  • a data center unit that includes a specified allotment of memory can be defined by a hardware resource controller.
  • resource controllers configured to manage a portion of a data center unit that is software-based can be referred to as software resource controllers.
  • Software resources and hardware resources of a data center unit can be collectively referred to as processing resources. Accordingly, the processing resources of a data center unit can be managed by (e.g., collectively managed by) a resource controller.
  • the management module 130 can be in communication with (e.g., can be accessed via) a user interface (UI) 160.
  • the user interface 160 can be configured so that a user (e.g., a data center administrator, a network administrator, a customer, a source owner) can send signals (e.g., control signals, input signals, signals related to instructions) to the management module 130 and/or receive signals (e.g., output signals) from the management module 130.
  • the user interface 160 can be configured so that the user can trigger one or more functions to be performed (e.g., executed) at the management module 130 via the user interface 160 and/or receive an output signal from the management module 130 at, for example, a display (not shown) of the user interface 160.
  • a user can manage at least a portion of the database 124 via the user interface 160.
  • the user interface 160 can be a graphical user interface (GUI).
  • an integer number of data center units 180 (which can each have hardware resources managed based on the same set of predefined hardware resource limit values) are assigned to (e.g., reserved for use by) the user 50.
  • a request for a specified number or a change in a number of the data center units, such as the data center units 180 shown in FIG. 1, can be received at the management module 130.
  • the request can be defined in response to an input from the user 50.
  • the user can make a request for a specified number of data center units via the user interface 160.
  • a value representing the number of data center units can be stored in a database 124 within a memory 120 of the management module 130. Each data center unit represents hardware resources collectively managed as a single unit. In some embodiments, the value can be associated with an identifier representing the user 50.
  • An example of a database storing information related to data center units assigned to a user is shown in FIG. 2.
  • FIG. 2 is a schematic diagram that illustrates a database 200 that can be stored in a memory of a management module, according to an embodiment.
  • the database 200 can be stored in a memory such as the memory 120 of the management module 130 shown in FIG. 1.
  • data center units DC1 through DCN (shown in the column labeled data center units 230) are assigned to a user represented by the user identifier "A" (shown in the column labeled user identifier 210), and data center units DCR through DCR+M (shown in the column labeled data center units 230) are assigned to a user represented by the user identifier "B" (shown in the column labeled user identifier 210).
  • the number of data center units (column 220) assigned to user A is represented by the value N, and the number of data center units (column 220) assigned to user B is represented by the value M.
  • the values "N" and "M" can be integer numbers.
  • virtual resources AVR1 through AVRQ are associated with user A, and virtual resources BVR1 through BVRS are associated with user B.
  • the database 200 can also be defined to represent which of the data center resources of the data center units are operating each of the virtual resources.
  • the database 200 can be configured to store information representing that data center resources defining data center unit DC2 are operating virtual resources AVR4 through AVRQ.
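A minimal sketch of a structure like database 200: each row associates a user identifier (column 210) with the number of assigned data center units (column 220), the units themselves (column 230), and the virtual resources operating on them. The dictionary layout and identifiers are illustrative assumptions, not the patent's schema:

```python
# Illustrative in-memory analogue of database 200 (FIG. 2).
database_200 = {
    "A": {"unit_count": 3,                     # value N
          "units": ["DC1", "DC2", "DC3"],
          "virtual_resources": {"DC2": ["AVR4", "AVR5"]}},
    "B": {"unit_count": 2,                     # value M
          "units": ["DCR", "DCS"],
          "virtual_resources": {}},
}

def set_unit_count(db, user_id, new_count):
    """Dynamically update the value representing a user's number of units,
    as described for increase/decrease requests."""
    db[user_id]["unit_count"] = new_count
```

Storing the count as an explicit value, rather than deriving it from the unit list, mirrors the text's emphasis on a stored value that requests modify.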
  • the virtual resources 240 can be configured to emulate one or more functions of, for example, a legacy source device being migrated to the virtual resources 240. More details related to migration of a source to a data center to be emulated as one or more virtual resources are described in co-pending patent application bearing attorney docket no. VITU-002/00US 311331-2002, filed on same date, entitled "Methods and Apparatus Related to Migration of Customer Resources to Virtual Resources within a Data Center Environment," and co-pending patent application bearing attorney docket no. VITU-001/00US 311331-2001, filed on same date, entitled "Methods and Apparatus for Movement of Virtual Resources within a Data Center Environment."
  • the database 200 can be dynamically updated to represent changes in resources (e.g., software resources, hardware resources) such as a decrease or increase in a number of data center resources assigned to the one or more of the users.
  • values representing the number of data center units 220 assigned to a user represented by the user identifiers 210 can be dynamically modified.
  • the user 50 can request, via the user interface 160, an increase in a number of data center units (such as data center units 180) assigned to the user 50. Accordingly, a request to change a value representing the number of data center units assigned to the user 50 can be received at the management module 130. The value can be stored at the memory 120 of the management module 130. A request to increase a number of data center units can be referred to as an increase request. The request can be received at the management module 130 from the user interface 160.
  • the management module 130 can be configured to determine whether or not resources of the data center 100 are available to be assigned to the user 50 as resources of a data center unit.
  • the management module 130 can be configured to store inventory information representing resources available at the data center 100 in the memory 120, for example, in database 124.
  • the management module 130 can be configured to access the inventory information and determine, based on the inventory information, whether one or more data center units, or hardware resources of one or more data center units, (not shown) are available to be assigned to a user such as user 50.
  • inventory information representing an unassigned pool of resources and/or data center units (or hardware of data center units) can be stored in the memory 120.
  • An example of inventory information that can be stored in the database 124 of the memory 120 is shown in FIG. 3.
  • FIG. 3 is a schematic diagram that illustrates a database 300 that includes information about the availability of data center resources, according to an embodiment.
  • the data center units (or hardware resources of data center units) represented by identifiers U1 and U3 (shown in column 320) are not available for assignment to a user because, as indicated in column 310, these data center units are already assigned to a user.
  • the data center units represented by identifiers U2, U4, and U5 (shown in column 320) are available for assignment to a user because, as indicated in column 310, these data center units are not assigned to a user.
  • the data center units represented by identifiers U2, U4, and U5, because they are not assigned, can be referred to as a pool of unassigned data center resources, or as a pool of unassigned data center units.
  • a database can be configured to store inventory information related to individual hardware resources (e.g., processor, network interface cards, storage devices) that can be managed as a data center unit. Specifically, the availability or unavailability of the individual hardware resources (or portions thereof) can be stored in the database. Based on this inventory information about the hardware resources (or portions of the hardware resources), a management module (such as management module 130 shown in FIG. 1) can determine whether or not hardware resources may be available to define a data center unit that can be assigned to a user.
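The availability lookup against an inventory like database 300 can be sketched as below. The row structure pairs a unit identifier (column 320) with an assigned flag (column 310); the field names are assumptions:

```python
# Illustrative analogue of database 300 (FIG. 3): inventory rows pairing a
# unit identifier with its assignment status.
inventory = [
    {"unit": "U1", "assigned": True},
    {"unit": "U2", "assigned": False},
    {"unit": "U3", "assigned": True},
    {"unit": "U4", "assigned": False},
    {"unit": "U5", "assigned": False},
]

def unassigned_pool(inv):
    """Return identifiers of units available for assignment to a user."""
    return [row["unit"] for row in inv if not row["assigned"]]
```

A management module granting an increase request would consult this pool, then flip `assigned` to `True` for each granted unit, changing its status from available to unavailable as the text describes.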
  • the management module 130 can be configured to assign the available data center unit to user 50 so that the hardware resources of the data center unit can be used by, for example, a virtual resource associated with the user 50. In other words, if a sufficient number of data center units are available to satisfy the increase request, the management module 130 can grant the request and assign the data center unit(s) to the user 50.
  • the data center units that are assigned to the user 50 can be removed from, for example, a pool of unassigned resources (or data center units).
  • one or more data center units that are assigned to a user in response to an increase request can have a status associated with the data center unit(s) changed from an available status to an unavailable status.
  • the availability or unavailability of a data center unit can be determined based on inventory information stored in the memory 120.
  • the user 50 can request, via the user interface 160, a decrease in a number of data center units (such as data center units 180) assigned to the user 50. Accordingly, a request to change a value representing the number of data center units (which can represent hardware resources collectively managed as a data center unit) assigned to the user 50 can be received at the management module 130. A request to decrease a number of data center units can be referred to as a decrease request. The request can be received at the management module 130 from the user interface 160.
  • hardware resources of a data center unit removed from a set of data center units previously assigned to a user can be reassigned to another user.
  • the hardware resources managed as data center unit DU2 can be assigned to another user (not shown).
  • the reassignment can be represented in a database 124 stored in the memory 120.
  • the memory 120 can be, for example, a random-access memory (RAM) (e.g., a dynamic RAM, a static RAM), a flash memory, a removable memory, and/or so forth.
  • the database 124 can be implemented as, for example, a relational database, an indexed database, a table, and/or so forth. Although the memory 120 and the database 124 are shown as being local to the management module 130, in some embodiments, one or more portions of the database 124 can be stored in a remote memory that can be accessed by the management module 130.
  • the portions of the database 124 can be stored in a separate (e.g., a remote) storage device (e.g., storage facility) that can be accessed by the management module 130 via a network (e.g., a local area network (LAN), a wide area network (WAN)) (not shown).
  • the management module 130 can include a monitoring module 134.
  • the monitoring module 134 can be configured to trigger a change in a number of data center units (such as data center units 180) assigned to a user (such as user 50) based on one or more values representing performance associated with the hardware resources of data center units.
  • the values representing performance can be referred to as performance metric values.
  • the monitoring module 134 can be configured to trigger an increase or a decrease in a number of data center units assigned to user 50 in response to a threshold condition being satisfied based on one or more performance metric values.
  • the monitoring module 134 can be configured to remove data center unit DU2 from the set of data center units 180 assigned to user 50 in response to a performance metric value indicating that the data center units 180 collectively are being under-utilized.
  • the monitoring module 134 can be configured to add an additional data center unit (not shown) (or hardware resources of the additional data center unit) to the set of data center units 180 assigned to user 50 in response to a performance metric value indicating that the hardware resources of data center units 180 collectively are being over-utilized.
  • the over-utilization can be manifested in, for example, failure data.
  • the monitoring module 134 can be configured to replace one or more data center units (such as data center units 180) assigned to a user (such as user 50) based on a threshold condition being satisfied based on a performance metric value. In some embodiments, the monitoring module 134 can be configured to modify a number of data center units assigned to a user by modifying a value stored in a database (such as database 124) that represents a number of data center units.
  • FIG. 4 is a graph that illustrates values of a performance metric, according to an embodiment.
  • the values of the performance metric (shown on the y-axis) are plotted versus time (shown on the x-axis).
  • the values of the performance metric are above a lower limit value (shown as "LL") before time T1, and the values of the performance metric are below the lower limit value after time T1.
  • a monitoring module, such as monitoring module 134 shown in FIG. 1, can be configured to modify a number of data center units assigned to a user in response to the values of the performance metric falling below the lower limit value at time T1. In other words, the monitoring module can modify the number of data center units assigned to the user in response to the values of the performance metric satisfying a threshold condition associated with the lower limit value at time T1.
  • the monitoring module 134 can be configured to modify a number of data center units assigned to a user (such as user 50) based on various performance metric values, such as, for example, a capacity value, a value representing a failure rate, a utilization value, and/or so forth.
  • the performance metric values can be associated with a specified period of time.
  • the monitoring module 134 can be configured to receive the values (e.g., pushed values, pulled values) representing the performance metric from the data center 100 (or a portion thereof). For example, in some embodiments, the monitoring module 134 can be configured to receive one or more performance metric values produced by virtual resources operating within the hardware resources of data center units of the data center 100. In some embodiments, the performance metric values can be received periodically, randomly, in a preselected manner, and/or in response to a request from the monitoring module 134. In some embodiments, the monitoring module 134 can be configured to request and receive data from one or more resources (e.g., hardware resources, software resources, virtual resources) of the data center 100 that can be used to calculate a performance metric value.
  • the monitoring module 134 can be configured to send a notification to, for example, the user 50 via user interface 160, indicating that a number of data center units assigned to the user 50 should be modified.
  • the monitoring module 134 can be configured to modify a number of data center units (by modifying a value representing the number of data center units) assigned to the user 50 only when authorized to do so by the user.
  • the monitoring module 134 can be configured to solicit authorization from the user 50 via the user interface 160 for modification of the number of the data center units 180. When authorization is received from the user 50 via the user interface 160, the monitoring module 134 can be configured to modify the number of data center units 180 assigned to the user 50.
  • the management module 130 can be configured to identify a minimum number of data center units (such as data center units 180) to operate a virtual resource. For example, the management module 130 can be configured to identify (or calculate) a minimum number of data center units (based on one or more assessment parameter values) to operate a virtual resource within a data center environment. In some embodiments, the management module 130 can be configured to determine that a particular minimum number of data center units are used to operate a virtual resource emulating, for example, at least a portion of a particular physical device.
  • the number of discrete data center units selected to operate a virtual resource can be determined by the management module 130 based on, for example, an ability of the data center units to handle burst processing levels of the virtual resource(s) and/or an average processing level of the virtual resource(s).
  • the calculations related to numbers of data center units to operate a virtual resource can be performed by an assessment module portion (not shown) of the management module 130. More details related to an assessment module are described in connection with co-pending patent application bearing attorney docket no. VITU-002/00US 311331-2002, filed on same date, entitled, "Methods and Apparatus Related to Migration of Customer Resources to Virtual Resources within a Data Center Environment," which has been incorporated herein by reference in its entirety.
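The sizing calculation described above — selecting a number of discrete data center units based on the burst processing level and the average processing level of the virtual resource(s) — can be sketched as below. The function name, the use of MHz as the unit of processing capacity, and the sample figures are illustrative assumptions, not values from the disclosure.

```python
import math

# Hypothetical sizing sketch: pick enough discrete units to cover both the
# average processing level and the burst processing level.
def minimum_units(avg_load_mhz, burst_load_mhz, unit_capacity_mhz):
    """Smallest number of data center units whose combined capacity covers
    the larger of the average and burst processing levels."""
    needed = max(avg_load_mhz, burst_load_mhz)
    return max(1, math.ceil(needed / unit_capacity_mhz))
```

For example, a virtual resource with a 250 MHz average load but 900 MHz bursts, operated on hypothetical 300 MHz units, would be assigned three units rather than one.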
  • the monitoring module 134 can be configured to modify a number of data center units assigned to the user 50 based on a user preference of the user 50.
  • the user preference can identify the performance metric values to be used by the monitoring module 134 to modify a number of data center units (such as data center units 180 shown in FIG. 1) assigned to a user.
  • the user preference can identify one or more threshold conditions to be used by the monitoring module 134 to modify a number of data center units assigned to the user 50.
  • one or more user preferences can be stored in memory 120.
  • a user preference UA (shown in column 250) is associated with the user represented by user identifier A (shown in column 210), and a user preference UB (shown in column 250) is associated with the user represented by user identifier B (shown in column 210).
  • the user preference UA and UB can represent user preferences related to monitoring of data center units (which are shown in column 230).
  • the monitoring module 134 can be configured to access the user preferences 250 and can be configured to modify the number of data center units (shown in column 220) based on the user preferences 250.
  • one or more portions of the management module 130 can be (or can include) a hardware-based module (e.g., an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field programmable gate array (FPGA)) and/or a software-based module (e.g., a module of computer code, a set of processor-readable instructions that can be executed at a processor).
  • the management module 130 can include one or more memory portions (e.g., a random access memory (RAM) portion, a shift register, a cache) that can be used during operation of one or more functions of the management module 130.
  • the hardware resources and/or software resources of the data center 100 can include one or more levels of infrastructure.
  • the hardware resources of the data center 100 can include, for example, host devices (e.g., server devices), storage devices, access switches, aggregation devices, routers, interface components, cables, and/or so forth.
  • the data center 100 can be configured so that host devices (which can be configured to host virtual resources) and/or storage devices can be in communication with (e.g., coupled to) a layer of access switches that are in communication with (e.g., coupled to) a layer of aggregation devices.
  • the aggregation devices can function as gateway devices into a set of routers/switches that function as core switching elements of the data center 100.
  • the software resources of the data center 100 can include, for example, management modules, operating systems, hypervisors (e.g., VMware hypervisors), and/or so forth.
  • the data center 100 can be a cloud computing environment where hardware resources and/or software resources are shared by multiple virtual resources associated with one or more users (e.g., clients, customers).
  • the virtualized environment defined by the data center 100 can be referred to as a data center virtualized environment.
  • the software resources of the data center 100 can include, for example, management modules, operating systems, hypervisors, and/or so forth.
  • the hypervisors can be configured to facilitate virtualization of hardware resources of host devices.
  • the operating systems can be installed at routers, aggregation devices, core switching elements, and/or so forth.
  • the management module 130 can be a centralized management module configured to handle data center management for the entire data center 100, or can be a de-centralized management module configured to handle management of only a portion of the data center 100.
  • the management module 130 can be configured to perform various functions in addition to management of data center units such as data center units 180.
  • the management module 130 can be configured to handle disaster recovery, migration of virtual resources to a data center, and/or so forth. More details related to a management module configured to perform various operations related to a data center environment are set forth in co-pending patent application bearing attorney docket no. VITU-004/00US 311331-2004, filed on same date, entitled, "Methods and Apparatus for Data Center Management Independent of Hypervisor Platform," which is incorporated herein by reference in its entirety.
  • the data center 100 can be managed locally or can have consolidated management.
  • the entire data center 100, or a portion thereof, can be managed via a single management module (not shown).
  • the entire data center 100, or a portion thereof, can be managed via multiple management modules (not shown) that can be distributed through the data center 100 infrastructure.
  • some functionality of the data center 100 can be managed based on a consolidated management scheme, while other functionality of the data center 100 can be managed based on a distributed management scheme.
  • FIG. 5 is a schematic diagram that illustrates resource controllers 570 in communication with a data center 500, according to an embodiment.
  • the resource controllers 570 include a processor (e.g., a central processing unit (CPU)) controller 540, a memory controller 542, a network controller 544, a storage input/output operations per second (IOPS) controller 546, a storage controller 548, and a storage bandwidth controller 550.
  • the resource controllers 570 can include, for example, a VMware capacity planning tool, a VMware vSphere controller, a Converged Network Adapter controller, a Compellent SAN controller, and/or so forth.
  • Each of the resource controllers 570 shown in FIG. 5 is configured to manage resources associated with a particular type of hardware of the data center 500. As represented in FIG. 5, the resource controllers 570 can be configured to manage a portion of the data center unit 580. Accordingly, the resource controllers 570 collectively manage the hardware resources of data center unit 580. One or more of the resource controllers 570 shown in FIG. 5 can be included in the resource controller 170 shown in FIG. 1.
  • the processor controller 540 can be configured to manage the resources of one or more processors (not shown) of the data center 500 so that a certain portion of the computing cycles of the processor(s) are reserved for the data center unit 580.
  • the computing cycles can be reserved so that if the computing cycles are needed by a virtual resource of a user to whom the data center unit 580 is assigned, the computing cycles will be available for use by the virtual resource of the user.
  • computing cycles substantially equivalent to, for example, a 100 MHz processor, a 1.5 GHz processor, or so forth, can be reserved for the data center unit 580.
  • a hardware resource limit value specific to the processor controller 540 can be referred to as a processor limit value.
  • the memory controller 542 can be configured to manage the resources of one or more memory components (not shown) of the data center 500 so that a certain portion of the memory component(s) can be reserved for the data center unit 580.
  • a memory storage capacity of 1 MB, 10 MB, or so forth, of a memory component can be reserved for the data center unit 580.
  • a hardware resource limit value specific to the memory controller 542 can be referred to as a memory limit value.
  • the network controller 544 can be configured to manage the resources of one or more network components (e.g., network interface cards) (not shown) of the data center 500 so that a certain portion of the processing power of the network component(s) can be reserved for the data center unit 580.
  • a data transfer capacity of a network component can be time-division multiplexed so that a specified level of network bandwidth substantially equal to, for example, 5 Mb/s, 100 Mb/s, 1 Gb/s, or so forth, can be managed for the data center unit 580.
  • a hardware resource limit value specific to the network controller 544 can be referred to as a network limit value.
  • a storage IOPS controller 546 can be configured to manage the resources of one or more storage components (e.g., hard disk drive, server storage) (not shown) of the data center 500 so that a certain IO capacity of the storage component (e.g., more than 1 IOPS, 50 IOPS) can be managed for (e.g., reserved for) the data center unit 580.
  • a hardware resource limit value specific to the storage IOPS controller 546 can be referred to as an IOPS limit value.
  • a storage controller 548 can be configured to manage the resources of one or more storage components so that a certain portion of storage capacity of the storage component(s) (e.g., 50 GB, 100 GB, 10 Terabytes (TB)) can be reserved for the data center unit 580.
  • a hardware resource limit value specific to the storage controller 548 can be referred to as a storage limit value.
  • a storage bandwidth controller 550 can be configured to manage the bandwidth of one or more storage components so that a certain portion of the bandwidth (e.g., 10 Mb/s, 1 Gb/s) can be managed (e.g., reserved) for the data center unit 580.
  • a hardware resource limit value specific to the storage bandwidth controller 550 can be referred to as a storage bandwidth limit value.
  • the resource controllers 570 can be triggered to reserve a specified portion of hardware resources of the data center 500 for the data center unit 580 based on one or more hardware resource limit values.
  • the hardware resource limit values can be communicated to the resource controllers 570 in an instruction 60.
  • the instruction 60 can be defined in response to a request (e.g., an increase request, a decrease request) received at the management module 530 from the user interface (UI) 560.
  • the management module 530 can be executed within a processor 594 of a processing device 590.
  • the processing device 590 can also include a memory 592 (e.g., a storage device, a buffer, a RAM) configured to facilitate the functions of, for example, the management module 530.
  • the memory 592 can be used by the management module 530 during communication with the resource controllers 570.
  • the instruction 60 can be sent to each of the resource controllers.
  • the instruction 60 can include hardware resource limit values for each of the resource controllers within the resource controllers 570.
  • the instruction 60, which can include hardware resource limit values specific to each of the resource controllers within the resource controllers 570, can be defined at the management module 530 and sent to the memory controller 542.
  • the memory controller 542 can be configured to parse a hardware resource limit value specific to the memory controller 542 from the instruction 60.
  • the memory controller 542 can be configured to manage hardware resources of the data center 500 for use as the data center unit 580 based on the hardware resource limit value.
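The instruction-and-parse flow above can be sketched as follows. The dictionary field names, the class name, and the limit values (drawn from example figures elsewhere in the description, e.g., 768 MB of memory, 100 Mb/s of network bandwidth) are illustrative assumptions; the disclosure does not specify an instruction format.

```python
# Sketch of an instruction 60 carrying per-controller hardware resource limit
# values; field names and figures are assumptions for illustration.
instruction = {
    "processor_limit_mhz": 1500,
    "memory_limit_mb": 768,
    "network_limit_mbps": 100,
    "iops_limit": 50,
    "storage_limit_gb": 30,
    "storage_bandwidth_limit_mbps": 10,
}

class MemoryController:
    """Parses only the limit value it is responsible for, ignoring the rest."""
    def apply(self, instruction):
        limit = instruction["memory_limit_mb"]  # parse own field from instruction
        return f"reserving {limit} MB for the data center unit"
```

Each controller in such a scheme would read its own field from the shared instruction, which is why a single instruction can be sent to every resource controller.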
  • the management module 530 can be configured to define and send two or more different instructions to each of the resource controllers within the resource controllers 570.
  • the different instructions can be sent to the resource controllers 570 because some of the resource controllers can be configured to operate based on different platforms (e.g., hardware and/or software platforms, protocols) than other resource controllers from the resource controllers 570.
  • the management module 530 can be configured to send a first instruction (which includes a network limit value) based on a first application programming interface (API) to the network controller 544, and send a second instruction (which includes a storage limit value) based on a second API to the storage controller 548.
  • Although six different types of resource controllers are shown in the resource controllers 570, in some embodiments, a different combination of resource controllers can be used to manage the hardware resources of data center unit 580. For example, fewer than all of the resource controllers 570 shown in FIG. 5 can be used to manage the hardware resources of data center unit 580. In some embodiments, a different resource controller such as a bus speed resource controller can be used to manage a bus portion of the data center unit 580. In some embodiments, the instruction 60 (or set of instructions) can be configured to trigger all or a portion of the resource controllers 570 to manage the hardware resources of the data center unit 580 in a customized fashion.
  • one or more of the resource controllers 570 can be integrated into the management module 530. In some embodiments, one or more functions of the resource controllers 570 can be included as a function of the management module 530. Although not shown, in some embodiments, the resource controllers 570 can include a software resource controller.
  • FIG. 6 is a flowchart that illustrates a method for modifying a set of data center units based on a performance metric, according to an embodiment.
  • a value of a performance metric representing performance of a set of virtual resources associated with a user identifier is received, at 600.
  • the set of virtual resources can be associated with a user (based on a user identifier representing the user).
  • the performance metric can be, for example, related to a failure rate of the virtual resources.
  • the performance metric can be optionally specified in a user preference 630 associated with the user, or can be based on a default value.
  • a set of data center units assigned to operate the set of virtual resources is modified in response to the performance metric satisfying a threshold condition, at 610.
  • a number of data center units mapped to the set of virtual resources can be increased or decreased.
  • the threshold condition can be optionally defined within the user preference 630 associated with a user, or can be based on a default value.
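The FIG. 6 flow — evaluating a performance metric against a threshold condition taken from a user preference when one exists, falling back to a default value otherwise — can be sketched as below. The metric name (`failure_rate`), the default limit, and the preference structure are all illustrative assumptions.

```python
# Sketch of the FIG. 6 decision step: threshold parameters come from a user
# preference when present, otherwise from defaults (names/values assumed).
DEFAULTS = {"metric": "failure_rate", "limit": 0.05}

def should_modify(metric_values, preference=None):
    """Return True when the selected performance metric value satisfies the
    threshold condition, signaling that the set of data center units
    assigned to operate the virtual resources should be modified."""
    prefs = {**DEFAULTS, **(preference or {})}   # user preference overrides defaults
    value = metric_values[prefs["metric"]]
    return value > prefs["limit"]
```

A user preference such as `{"limit": 0.2}` would loosen the threshold for that user without affecting the default behavior for others.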
  • FIG. 7 is a flowchart that illustrates a method for modifying a number of data center units in response to a request, according to an embodiment.
  • Each number can represent hardware resources collectively managed as a data center unit.
  • a request for a change in a value representing a number of data center units included in a set of data center units assigned to a user is received, at 700.
  • the request can be triggered by the user and can be received at a management module from a user interface.
  • the number of data center units assigned to the user can be represented by a value in a database stored in a memory.
  • if additional data center units are not available, the request is denied, at 750.
  • the additional data center units may not be available because they are assigned to another user, or because resources (e.g., hardware resources, software resources) managed as a data center unit are not available.
  • the request may not be denied, but can instead be assigned a pending state (e.g., a held state) until resources (e.g., hardware resources, software resources) that can be assigned to the user become available.
  • the request can be pending until resources that can be managed as a data center unit (and can be assigned to the user) become available.
  • the request can be queued with other requests in a request queue.
  • a queue of requests can be handled in a first-in-first-out (FIFO) fashion, or in some other order.
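The pending-request handling described above can be sketched as a FIFO queue: an increase request that cannot be satisfied is held until resources that can be managed as a data center unit become available. The function names and data shapes are illustrative assumptions.

```python
from collections import deque

# Sketch of holding unsatisfiable increase requests in a FIFO request queue
# (names and structures are assumptions for illustration).
pending = deque()

def handle_increase(request, available_units):
    """Assign a free data center unit if one exists; otherwise hold the
    request in a pending state until resources become available."""
    if available_units:
        return available_units.pop(0)   # assign a unit from the free pool
    pending.append(request)             # no units free: queue the request
    return None

def on_units_freed(free_unit):
    """When a unit becomes available, serve the oldest pending request."""
    if pending:
        request = pending.popleft()     # FIFO: first-in, first-out
        return (request, free_unit)
    return None
```

As the description notes, a queue could also be serviced in some order other than FIFO (for example, by priority); only the holding behavior is sketched here.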
  • when the request is a decrease request, at least one data center unit can be included in a pool of unassigned data center units.
  • An assignment of the data center unit(s) can be modified so that they are no longer assigned to the user and instead included in a pool of unassigned data center units.
  • a representation of the data center unit(s) can be included in a pool of unassigned data center units so that, for example, a management module can identify these data center unit(s) as not being assigned to a user.
  • a management module for example, can be configured to reassign one or more of the data center units to another user rather than include the data center unit(s) in the pool of unassigned data center units.
  • Some embodiments described herein relate to a computer storage product with a computer-readable medium (also can be referred to as a processor-readable medium) having instructions or computer code thereon for performing various computer-implemented operations.
  • Examples of computer code include, but are not limited to, micro-code or microinstructions, machine instructions, such as produced by a compiler, code used to produce a web service, and files containing higher-level instructions that are executed by a computer using an interpreter.
  • embodiments may be implemented using, for example, a run-time environment and/or an application framework such as a Microsoft .NET framework and/or Java, C++, or other programming languages (e.g., object-oriented programming languages) and/or development tools.
  • Additional examples of computer code include, but are not limited to, control signals, encrypted code, and compressed code.

Abstract

In one embodiment, a processor-readable medium can be configured to store code representing instructions to be executed by a processor. The code can include code to receive a request to change a value representing a number of data center units included in a set of data center units assigned to a user. Each of the data center units from the set of data center units can be associated with hardware resources managed based on a set of predefined hardware resource limit values. The code can include code to determine, in response to the request, whether hardware resources of a data center unit mutually exclusive from hardware resources of the set of data center units and managed based on the set of predefined resource limit values is available for assignment to the user when the request to change is an increase request.

Description

METHODS AND APPARATUS RELATED TO MANAGEMENT OF UNIT-BASED VIRTUAL RESOURCES WITHIN A DATA CENTER ENVIRONMENT
Cross-Reference to Related Application
[1001] This application claims priority to and is a continuation of U.S. Patent Application Serial No. 12/709,962, entitled "Methods and Apparatus Related to Management of Unit- Based Virtual Resources Within a Data Center Environment," filed February 22, 2010, the disclosure of which is hereby incorporated by reference in its entirety.
Background
[1002] Embodiments described herein relate generally to virtual resources within a data center, and, in particular, to management of unit-based virtual resources within a data center environment.
[1003] Because data center environments (e.g., cloud computing data center
environments) are defined by a relatively large-scale infrastructure, management of the various components within the infrastructure can be complicated and may not be handled in a desirable fashion using known methods and apparatus. In particular, known methods and apparatus for managing resources of infrastructure to provide a specified level of service (e.g., a guaranteed level of service) to users with virtual resources operating within the data center environment may not be adequate for some applications. Many of these known methods and apparatus, for example, may be too specialized for some applications and/or may not integrate the diverse functionality of various systems that control and/or manage components within the data center environment to provide a specified level of service in a desirable fashion.
[1004] Thus, a need exists for methods and apparatus for management of unit-based virtual resources within a data center environment.
Summary
[1005] In one embodiment, a processor-readable medium can be configured to store code representing instructions to be executed by a processor. The code can include code to receive a request to change a value representing a number of data center units included in a set of data center units assigned to a user. Each of the data center units from the set of data center units can be associated with hardware resources managed based on a set of predefined hardware resource limit values. The code can include code to determine, in response to the request, whether hardware resources of a data center unit mutually exclusive from hardware resources of the set of data center units and managed based on the set of predefined resource limit values is available for assignment to the user when the request to change is an increase request.
Brief Description of the Drawings
[1006] FIG. 1 is a schematic diagram that illustrates a management module, a hardware controller, and a data center, according to an embodiment.
[1007] FIG. 2 is a schematic diagram that illustrates a database that can be stored in a memory of a management module, according to an embodiment.
[1008] FIG. 3 is a schematic diagram that illustrates a database that includes information about the availability of data center resources, according to an embodiment.
[1009] FIG. 4 is a graph that illustrates values of a performance metric, according to an embodiment.
[1010] FIG. 5 is a schematic diagram that illustrates resource controllers in communication with a data center, according to an embodiment.
[1011] FIG. 6 is a flowchart that illustrates a method for modifying a set of data center units based on a performance metric, according to an embodiment.
[1012] FIG. 7 is a flowchart that illustrates a method for modifying a number of data center units in response to a request, according to an embodiment.
Detailed Description
[1013] FIG. 1 is a schematic diagram that illustrates a management module 130, a resource controller 170, and a data center 100, according to an embodiment. The management module 130 is configured to send one or more instructions to the resource controller 170 (or a portion thereof) to trigger the resource controller 170 to manage one or more hardware resources of the data center units 180 within the data center 100. As shown in FIG. 1, the data center units 180 include data center unit DU1, data center unit DU2, and data center unit DU3. In some embodiments, the data center units 180 can be referred to as a set of data center units. In some embodiments, the hardware resources of a data center unit can also be referred to as processing resources of a data center unit.
[1014] As represented by the dashed lines from the data center units 180 to a user 50, the hardware resources of the data center units 180 are managed (e.g., allocated, provisioned, reserved) for use by the user 50 (e.g., for processing associated with the user 50). Said differently, the data center units 180 (or the data center units of the data center units 180) are assigned to the user 50. Because the data center units 180 are assigned to the user 50, the user 50 can use the hardware resources of data center units 180 to, for example, perform one or more functions specified by the user 50. For example, the hardware resources of data center units 180 can be used by the user 50, for example, to operate one or more virtual resources (e.g., virtual machines) (not shown) of the user 50. In some embodiments, the user 50 can be a customer, a client, a company, and/or so forth. In some embodiments, the user 50 can represent a computing element (e.g., a server, a personal computer, a personal digital assistant (PDA)) associated with, for example, a human user.
[1015] The data center units 180 can each be managed as a specified portion of resources (e.g., hardware resources, software resources) of the data center 100. In other words, resources of the data center 100 can be divided into (e.g., partitioned into) data center units 180 that can be used, for example, to handle processing associated with one or more virtual resources (for users such as user 50). In some embodiments, the virtual resource(s) can be configured to, for example, emulate the functionality of a physical source device and/or its associated software.
[1016] For example, in some embodiments, the hardware resources (and the associated software resources to support the hardware resources) of one or more of the data center units 180 can be managed so that they perform at (or are capable of performing at), for example, predefined hardware resource limit values. Specifically, the hardware resources of one or more of the data center units 180 can be managed so that they perform at, for example, a specified level of network bandwidth (e.g., 10 megabits/second (Mb/s) of network bandwidth, a specified level of network bandwidth of more than 1 Mb/s of network bandwidth), a specified level of processing speed (e.g., a processor speed of 300 megahertz (MHz), a processor speed of 600 MHz, a specified processor speed of more than 200 MHz), a specified input/output (I/O) speed of a storage device (e.g., a disk I/O speed of 40 I/O operations per second, a specified disk I/O speed of more than 10 IOPS), and/or a specified storage device bandwidth (e.g., a disk bandwidth of 10 Mb/s, a specified level of disk bandwidth of more than 10 Mb/s). A specified portion of hardware resources can also be reserved as part of one or more of the data center unit(s) 180.
For example, the data center unit(s) 180 can also have a specified amount of storage capacity (e.g., a disk size of 30 gigabytes (GB), a specified disk size of more than 1 GB) and/or a specified memory space (e.g., a memory storage capacity of 768 megabytes (MB), a specified memory storage capacity of more than 64 MB) allocated to the data center unit(s) 180.
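A data center unit defined by a set of predefined hardware resource limit values can be sketched as a simple record. The class and field names are illustrative assumptions; the default figures reuse example values from the description above (10 Mb/s network bandwidth, 300 MHz processing speed, 40 disk IOPS, 10 Mb/s disk bandwidth, 30 GB of disk, 768 MB of memory).

```python
from dataclasses import dataclass

# Sketch of a data center unit as a set of predefined hardware resource
# limit values (field names assumed; defaults taken from example figures
# in the description).
@dataclass(frozen=True)
class DataCenterUnit:
    network_mbps: int = 10
    cpu_mhz: int = 300
    disk_iops: int = 40
    disk_bandwidth_mbps: int = 10
    disk_gb: int = 30
    memory_mb: int = 768

def capacity(units):
    """Aggregate guaranteed capacity of a set of identical data center units."""
    return {
        "cpu_mhz": sum(u.cpu_mhz for u in units),
        "memory_mb": sum(u.memory_mb for u in units),
    }
```

Under this framing, assigning a user more units simply multiplies each guaranteed limit value, which is what makes the unit a convenient quantum of allocation.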
[1017] In some embodiments, the hardware resources (and accompanying software) of the data center 100 can be partitioned so that the data center units 180 are guaranteed, if necessary, to perform at, or have hardware resources at, the predefined hardware resource limit values. In other words, the hardware resources of the data center units 180 can be managed so that they provide guaranteed levels of service that correspond with each (or every) predefined hardware resource limit value from a set of predefined hardware resource limit values. Said another way, the hardware resources (or portions thereof) of a data center unit from the data center units 180 can be reserved so that they are available for processing associated with the user 50. For example, a first hardware resource (or a portion thereof) (e.g., a memory component) that defines a first portion of data center unit DU3 can provide a guaranteed level of service that corresponds with a first predefined hardware resource limit value from a set of predefined hardware resource limit values, and a second hardware resource (or a portion thereof) (e.g., a network card) that defines a second portion of data center unit DU3 can provide a guaranteed level of service that corresponds with a second predefined hardware resource limit value from the set of predefined hardware resource limit values.
[1018] In some embodiments, if one or more of the hardware resources of the data center units 180 are not performing functions for the user 50 (e.g., performing processing of virtual resources associated with the user 50), the hardware resource(s) (or portions thereof) that are associated with the data center units 180 may be idle (or substantially idle). The hardware resource(s) of the data center units 180 will be idle (or substantially idle) so that they are guaranteed to be available for processing for the user 50 when they are needed. In some embodiments, a guaranteed level of service can also be referred to as a guaranteed level of functionality.
[1019] In some embodiments, the set of predefined hardware resource limit values (which can be used to define the data center units 180) can be defined based on statistical data, associated with a predefined set of virtual resources, that indicates a particular combination of hardware resources can be used to operate a virtual resource. In some embodiments, for example, a set of predefined hardware resource limit values can be defined based on empirical data.
Specifically, a hardware resource limit value associated with a particular hardware type (e.g., a disk type) can first be selected. Additional hardware resource limit values associated with other hardware types can be defined based on empirical data related to desirable operation of the additional hardware resources when the particular hardware type is operating at the selected hardware resource limit value. Accordingly, the set of predefined hardware resource limit values can be defined based on the collective performance of the hardware resources using the selected hardware resource limit value as a starting point. In some embodiments, the data center units 180 can be defined by a set of predefined hardware resource limit values so that each data center unit can operate a particular type of virtual resource or set of virtual resources in a desirable fashion (within a particular set of performance specifications).
[1020] The hardware resources of the data center units 180 can be managed (e.g., allocated, reserved), at least in part, by the resource controller 170 (or a portion thereof) based on one or more predefined hardware resource limit values. For example, the resource controller 170 can be configured to allocate a resource (e.g., a software resource, a hardware resource) of the data center 100, or a portion thereof, to one or more of the data center units 180 based on a predefined hardware resource limit value (e.g., a predefined hardware resource limit value from a set of predefined hardware resource limit values). In other words, the predefined hardware resource limit values can be policed or enforced by the resource controller 170. For example, the resource controller 170 can be configured to manage processing resources of a processor of a host device (not shown) within the data center 100 so that a specified portion of the processing capacity of the processor (which can correspond with a hardware resource limit value) is reserved for the data center unit DU1. The resource controller 170 (or a portion thereof) can be configured to interface with the resources of the data center 100 so that the hardware resources (from the resources of the data center 100) of the data center units 180 can provide guaranteed levels of service that correspond with a set of predefined hardware resource limit values. In some embodiments, the resource controller 170 can include one or more specialized resource controllers that are each configured to manage resources associated with a particular type of resource (e.g., a memory type, a central processing unit). More details related to a resource controller and specialized resource controllers are described in connection with FIG. 5.
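As a concrete illustration of how a resource controller might police a predefined hardware resource limit value, the following sketch reserves a specified portion of a processor's capacity for a data center unit. The class name, method names, and capacity figures are illustrative assumptions, not taken from this disclosure.

```python
# Hypothetical sketch of a resource controller that enforces predefined
# hardware resource limit values; names and numbers are assumptions.

class ResourceController:
    """Reserves portions of one hardware resource type for data center units."""

    def __init__(self, resource_type, total_capacity):
        self.resource_type = resource_type
        self.total_capacity = total_capacity
        self.reservations = {}  # data center unit id -> reserved amount

    def reserve(self, unit_id, limit_value):
        """Reserve capacity up to a predefined limit value for one unit."""
        available = self.total_capacity - sum(self.reservations.values())
        if limit_value > available:
            # The guarantee cannot be honored, so the reservation is refused.
            raise ValueError(f"cannot guarantee {limit_value} {self.resource_type}")
        self.reservations[unit_id] = limit_value

    def release(self, unit_id):
        """Return a unit's reserved capacity to the shared pool."""
        self.reservations.pop(unit_id, None)


# Reserve 1500 MHz of an 8000 MHz processor for data center unit DU1.
cpu = ResourceController("MHz", total_capacity=8000)
cpu.reserve("DU1", 1500)
```

Because the reservation is refused when insufficient capacity remains, capacity that has been promised to one unit can never be silently consumed by another, which is the essence of the guaranteed level of service described above.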
[1021] In some embodiments, the hardware resources of one or more of the data center units 180 can be managed so that only certain predefined hardware resource limit values of the hardware resources of the data center unit(s) 180 are guaranteed. In some embodiments, for example, the hardware resources of data center unit DU1 can be managed by the resource controller 170 (or a portion thereof) so that the hardware resources of data center unit DU1 can provide a guaranteed level of processing speed and have a guaranteed portion of disk space available, but can be managed so that the hardware resources of data center unit DU1 may provide a specified bandwidth speed in only certain situations. Accordingly, the bandwidth speed of the hardware resources of data center unit DU1 is not guaranteed. In such circumstances, the data center unit DU1 can be referred to as a partially guaranteed data center unit.
[1022] In some embodiments, the hardware resources of the data center units 180 can be managed so that the hardware resources of each of the data center units 180 are managed based on the same set of hardware resource limit values. Accordingly, the hardware resources of each data center unit from the data center units 180 may be managed so that they provide the same (or substantially the same) guaranteed level of service.
[1023] In some embodiments, the hardware resources of one or more of the data center units 180 can be based on different sets of predefined hardware resource limit values. For example, the hardware resources of data center unit DU1 can be based on a first set of predefined hardware resource limit values and the hardware resources of data center unit DU2 can be based on a second set of predefined hardware resource limit values different than the first set of predefined hardware resource limit values. In such instances, the hardware resources of data center unit DU1 can provide a different guaranteed level of service than the guaranteed level of service provided by the hardware resources of data center unit DU2. The resource controller 170 can be configured to manage the hardware resources of these different data center units based on the different sets of predefined hardware resource limit values.
[1024] In some embodiments, one or more of the data center units 180 can include software resources. In other words, software resources can be associated with (and can define) at least a portion of the data center unit(s) 180. For example, the hardware resources of data center unit DU1 can have a software resource licensed specifically for operation of and/or operation within the hardware resources of data center unit DU1. In some
embodiments, the resource controller 170 (or a portion thereof) can be configured to manage the software resources of the data center 100 so that the software resources are allocated (e.g., assigned), as specified, to the hardware resources of each of the data center units 180.
[1025] Resource controllers configured to manage a portion of a data center unit that is hardware-based can be referred to as hardware resource controllers. For example, a data center unit that includes a specified allotment of memory can be defined by a hardware resource controller. Similarly, resource controllers configured to manage a portion of a data center unit that is software-based can be referred to as software resource controllers. Software resources and hardware resources of a data center unit can be collectively referred to as processing resources. Accordingly, the processing resources of a data center unit can be managed by (e.g., collectively managed by) a resource controller.
[1026] As shown in FIG. 1, the management module 130 can be in communication with (e.g., can be accessed via) a user interface (UI) 160. The user interface 160 can be configured so that a user (e.g., a data center administrator, a network administrator, a customer, a source owner) can send signals (e.g., control signals, input signals, signals related to instructions) to the management module 130 and/or receive signals (e.g., output signals) from the
management module 130. Specifically, the user interface 160 can be configured so that the user can trigger one or more functions to be performed (e.g., executed) at the management module 130 via the user interface 160 and/or receive an output signal from the management module 130 at, for example, a display (not shown) of the user interface 160. For example, in some embodiments, a user can manage at least a portion of the database 124 via the user interface 160. In some embodiments, the user interface 160 can be a graphical user interface (GUI).
[1027] As shown in FIG. 1, an integer number of data center units 180 (which can each have hardware resources managed based on the same set of predefined hardware resource limit values) are assigned to (e.g., reserved for use by) the user 50. A request for a specified number or a change in a number of the data center units, such as the data center units 180 shown in FIG. 1, can be received at the management module 130. In some embodiments, the request can be defined in response to an input from the user 50. In other words, the user can make a request for a specified number of data center units via the user interface 160.
[1028] A value representing the number of data center units can be stored in a database 124 within a memory 120 of the management module 130. Each number can represent the hardware resources collectively managed as a data center unit. In some embodiments, the value can be associated with an identifier representing the user 50. An example of a database storing information related to data center units assigned to a user is shown in FIG. 2.
[1029] FIG. 2 is a schematic diagram that illustrates a database 200 that can be stored in a memory of a management module, according to an embodiment. The database 200 can be stored in a memory such as the memory 120 of the management module 130 shown in FIG. 1. As shown in FIG. 2, data center units DC1 through DCN (shown in the column labeled data center units 230) are assigned to a user represented by the user identifier "A" (shown in the column labeled user identifier 210), and data center units DCR through DCR+M (shown in the column labeled data center units 230) are assigned to a user represented by user identifier "B" (shown in the column labeled user identifier 210). The number of data center units (column 220) assigned to user A is represented by the value N, and the number of data center units (column 220) assigned to user B is represented by the value M. In some embodiments, the values "N" and "M" can be integer numbers.
[1030] As shown in FIG. 2, virtual resources AVR1 through AVRQ (shown in the column labeled virtual resources 240) are associated with user A, and virtual resources BVR1 through BVRS (shown in the column labeled virtual resources 240) are associated with user B.
Although not shown in FIG. 2, the database 200 can also be defined to represent which of the data center resources of the data center units are operating each of the virtual resources. For example, although not shown, the database 200 can be configured to store information representing that data center resources defining data center unit DC2 are operating virtual resources AVR4 through AVRQ. In some embodiments, the virtual resources 240 can be configured to emulate one or more functions of, for example, a legacy source device being migrated to the virtual resources 240. More details related to migration of a source to a data center to be emulated as one or more virtual resources are described in connection with co-pending patent application bearing attorney docket no. VITU-002/00US 311331-2002, filed on same date, entitled, "Methods and Apparatus Related to Migration of Customer Resources to Virtual Resources within a Data Center Environment," and co-pending patent application bearing attorney docket no. VITU-001/00US 311331-2001, filed on same date, entitled, "Methods and Apparatus for Movement of Virtual Resources within a Data Center
Environment," which are both incorporated herein by reference in their entireties.
[1031] In some embodiments, the database 200 can be dynamically updated to represent changes in resources (e.g., software resources, hardware resources), such as a decrease or increase in a number of data center units assigned to one or more of the users.
Specifically, values representing the number of data center units 220 assigned to a user represented by the user identifiers 210 can be dynamically modified.
[1032] Referring back to FIG. 1, in some embodiments, the user 50 can request, via the user interface 160, an increase in a number of data center units (such as data center units 180) assigned to the user 50. Accordingly, a request to change a value representing the number of data center units assigned to the user 50 can be received at the management module 130. The value can be stored at the memory 120 of the management module 130. A request to increase a number of data center units can be referred to as an increase request. The request can be received at the management module 130 from the user interface 160.
[1033] In response to an increase request, the management module 130 can be configured to determine whether or not resources of the data center 100 are available to be assigned to the user 50 as resources of a data center unit. In some embodiments, the management module 130 can be configured to store inventory information representing resources available at the data center 100 in the memory 120, for example, in database 124. In such embodiments, the management module 130 can be configured to access the inventory information and determine, based on the inventory information, whether one or more data center units, or hardware resources of one or more data center units, (not shown) are available to be assigned to a user such as user 50. In some embodiments, inventory information representing an unassigned pool of resources and/or data center units (or hardware of data center units) can be stored in the memory 120. An example of inventory information that can be stored in the database 124 of the memory 120 is shown in FIG. 3.
[1034] FIG. 3 is a schematic diagram that illustrates a database 300 that includes information about the availability of data center resources, according to an embodiment. As shown in FIG. 3, the data center units (or hardware resources of data center units) represented by identifiers U1 and U3 (shown in column 320) are not available for assignment to a user because, as indicated in column 310, these data center units are already assigned to a user. The data center units represented by identifiers U2, U4, and U5 (shown in column 320) are available for assignment to a user because, as indicated in column 310, these data center units are not assigned to a user. In some embodiments, the data center units represented by identifiers U2, U4, and U5, because they are not assigned, can be referred to as a pool of unassigned data center resources, or as a pool of unassigned data center units.
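A minimal, hypothetical analogue of the inventory information in database 300: each row records an assignment status, and the rows that are not assigned form the pool of unassigned data center units. The dictionary layout is an assumption for illustration.

```python
# Illustrative sketch of the inventory information of database 300 (FIG. 3).
# Identifiers U1-U5 follow the figure; the structure is assumed.

inventory = {
    "U1": {"assigned": True},   # already assigned to a user (column 310)
    "U2": {"assigned": False},
    "U3": {"assigned": True},
    "U4": {"assigned": False},
    "U5": {"assigned": False},
}

def unassigned_pool(inv):
    """Return identifiers of the pool of unassigned data center units."""
    return [uid for uid, row in inv.items() if not row["assigned"]]
```

A management module consulting such inventory information can answer an increase request simply by inspecting whether the pool is large enough.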
[1035] Although not shown, in some embodiments, a database can be configured to store inventory information related to individual hardware resources (e.g., processor, network interface cards, storage devices) that can be managed as a data center unit. Specifically, the availability or unavailability of the individual hardware resources (or portions thereof) can be stored in the database. Based on this inventory information about the hardware resources (or portions of the hardware resources), a management module (such as management module 130 shown in FIG. 1) can determine whether or not hardware resources may be available to define a data center unit that can be assigned to a user.
[1036] Referring back to FIG. 1, if a data center unit (not shown) is available for assignment to user 50 (or if hardware resources of the data center 100 are available to be managed as a data center unit that can be assigned to user 50), the management module 130 can be configured to assign the available data center unit to user 50 so that the hardware resources of the data center unit can be used by, for example, a virtual resource associated with the user 50. In other words, if a sufficient number of data center units are available to satisfy the increase request, the management module 130 can grant the request and assign the data center unit(s) to the user 50. The data center units that are assigned to the user 50 can be removed from, for example, a pool of unassigned resources (or data center units). In some embodiments, one or more data center units that are assigned to a user in response to an increase request can have a status associated with the data center unit(s) changed from an available status to an unavailable status. In some embodiments, the availability or unavailability of a data center unit (or hardware resources that can be used to define a data center unit) can be determined based on inventory information stored in the memory 120.
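One way (among many) that a management module might grant an increase request against such inventory information: check the unassigned pool, and if it is large enough, assign the units to the user and change their status from available to unavailable. Function and variable names are assumptions for illustration.

```python
# Hypothetical handling of an increase request by a management module.

def grant_increase(inventory, user_id, assignments, requested):
    """Assign `requested` units from the unassigned pool to a user, or deny."""
    pool = [uid for uid, row in inventory.items() if not row["assigned"]]
    if len(pool) < requested:
        return False  # not enough unassigned units to satisfy the request
    for uid in pool[:requested]:
        inventory[uid]["assigned"] = True   # removed from the unassigned pool
        assignments.setdefault(user_id, []).append(uid)
    return True


# User A requests two additional data center units.
inventory = {"U2": {"assigned": False}, "U4": {"assigned": False},
             "U5": {"assigned": False}}
assignments = {}
granted = grant_increase(inventory, "A", assignments, 2)
```

Note that the status change and the assignment record are updated together, mirroring how the database 124 would reflect both the user's holdings and the remaining pool.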
[1037] In some embodiments, the user 50 can request, via the user interface 160, a decrease in a number of data center units (such as data center units 180) assigned to the user 50. Accordingly, a request to change a value representing the number of data center units (which can represent hardware resources collectively managed as a data center unit) assigned to the user 50 can be received at the management module 130. A request to decrease a number of data center units can be referred to as a decrease request. The request can be received at the management module 130 from the user interface 160.
[1038] In some embodiments, hardware resources of a data center unit removed from a set of data center units previously assigned to a user can be reassigned to another user. For example, if hardware resources managed as a data center unit DU2 are removed from the set of data center units 180 in response to a decrease request from the user 50, the hardware resources managed as data center unit DU2 can be assigned to another user (not shown). The reassignment can be represented in a database 124 stored in the memory 120. In some embodiments, the data center unit DU2 (e.g., the hardware resources of the data center unit DU2) can be returned to a pool of unassigned data center units.
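The decrease path can be sketched symmetrically: the removed units are returned to the unassigned pool, where their hardware resources become available for reassignment to another user. Names and the choice to remove the most recently assigned units are illustrative assumptions.

```python
# Hypothetical handling of a decrease request: removed units return to
# the pool of unassigned data center units.

def grant_decrease(inventory, user_id, assignments, count):
    """Return `count` of a user's data center units to the unassigned pool."""
    removed = assignments.get(user_id, [])[-count:]  # assumed removal order
    for uid in removed:
        inventory[uid]["assigned"] = False  # back in the unassigned pool
        assignments[user_id].remove(uid)
    return removed


inventory = {"U1": {"assigned": True}, "U3": {"assigned": True}}
assignments = {"A": ["U1", "U3"]}
returned = grant_decrease(inventory, "A", assignments, 1)
```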
[1039] In some embodiments, the memory 120 can be, for example, a random-access memory (RAM) (e.g., a dynamic RAM, a static RAM), a flash memory, a removable memory, and/or so forth. In some embodiments, the database 124 can be implemented as, for example, a relational database, an indexed database, a table, and/or so forth. Although the memory 120 and the database 124 are shown as being local to the management module 130, in some embodiments, one or more portions of the database 124 can be stored in a remote memory that can be accessed by the management module 130. For example, the portions of the database 124 can be stored in a separate (e.g., a remote) storage device (e.g., storage facility) that can be accessed by the management module 130 via a network (e.g., a local area network (LAN), a wide area network (WAN)) (not shown).

[1040] As shown in FIG. 1, the management module 130 can include a monitoring module 134. The monitoring module 134 can be configured to trigger a change in a number of data center units (such as data center units 180) assigned to a user (such as user 50) based on one or more values representing performance associated with the hardware resources of data center units. The values representing performance can be referred to as performance metric values. In some embodiments, the monitoring module 134 can be configured to trigger an increase or a decrease in a number of data center units assigned to user 50 in response to a threshold condition being satisfied based on one or more performance metric values.
[1041] For example, the monitoring module 134 can be configured to remove data center unit DU2 from the set of data center units 180 assigned to user 50 in response to a performance metric value indicating that the data center units 180 collectively are being under-utilized. In some embodiments, the monitoring module 134 can be configured to add an additional data center unit (not shown) (or hardware resources of the additional data center unit) to the set of data center units 180 assigned to user 50 in response to a performance metric value indicating that the hardware resources of data center units 180 collectively are being over-utilized. The over-utilization can be manifested in, for example, failure data. In some embodiments, the monitoring module 134 can be configured to replace one or more data center units (such as data center units 180) assigned to a user (such as user 50) based on a threshold condition being satisfied based on a performance metric value. In some embodiments, the monitoring module 134 can be configured to modify a number of data center units assigned to a user by modifying a value stored in a database (such as database 124) that represents a number of data center units.
[1042] FIG. 4 is a graph that illustrates values of a performance metric, according to an embodiment. As shown in FIG. 4, the values of the performance metric (shown on the y-axis) are plotted versus time (shown on the x-axis). The values of the performance metric are above a lower limit value (shown as "LL") before time T1, and the values of the performance metric are below the lower limit value after time T1. A monitoring module, such as monitoring module 134 shown in FIG. 1, can be configured to modify a number of data center units assigned to a user in response to the values of the performance metric falling below the lower limit value at time T1. In other words, the monitoring module can modify the number of data center units assigned to the user in response to the values of the performance metric satisfying a threshold condition associated with the lower limit value at time T1.
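The lower-limit threshold condition of FIG. 4 reduces to a comparison like the following; the concrete limit value, the choice of metric (a utilization fraction), and the one-unit adjustment are hypothetical, since the disclosure fixes none of them.

```python
# Sketch of the lower-limit ("LL") threshold condition from FIG. 4.
# LOWER_LIMIT and the one-unit decrement are assumed values.

LOWER_LIMIT = 0.25  # "LL" in FIG. 4, taken here as a utilization fraction

def units_after_check(metric_value, current_units, minimum=1):
    """Return the unit count after applying the lower-limit threshold condition."""
    if metric_value < LOWER_LIMIT and current_units > minimum:
        return current_units - 1  # under-utilized: remove one data center unit
    return current_units          # threshold condition not satisfied
```

A corresponding upper limit (for over-utilization) would trigger an increase in the same fashion.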
[1043] Referring back to FIG. 1, the monitoring module 134 can be configured to modify a number of data center units assigned to a user (such as user 50) based on various performance metric values, such as, for example, a capacity value, a value representing a failure rate, a utilization value, and/or so forth. In some embodiments, the performance metric values can be associated with a specified period of time.
[1044] In some embodiments, the monitoring module 134 can be configured to receive the values (e.g., pushed values, pulled values) representing the performance metric from the data center 100 (or a portion thereof). For example, in some embodiments, the monitoring module 134 can be configured to receive one or more performance metric values produced by virtual resources operating within the hardware resources of data center units of the data center 100. In some embodiments, the performance metric values can be received periodically, randomly, in a preselected manner, and/or in response to a request from the monitoring module 134. In some embodiments, the monitoring module 134 can be configured to request and receive data from one or more resources (e.g., hardware resources, software resources, virtual resources) of the data center 100 that can be used to calculate a performance metric value.
[1045] In some embodiments, the monitoring module 134 can be configured to send a notification to, for example, the user 50 via the user interface 160, indicating that a number of data center units assigned to the user 50 should be modified. In some embodiments, the monitoring module 134 can be configured to modify a number of data center units (by modifying a value representing the number of data center units) assigned to the user 50 only when authorized to do so by the user. In some embodiments, the monitoring module 134 can be configured to solicit authorization from the user 50 via the user interface 160 for modification of the number of the data center units 180. When authorization is received from the user 50 via the user interface 160, the monitoring module 134 can be configured to modify the number of data center units 180 assigned to the user 50.
[1046] In some embodiments, the management module 130 can be configured to identify a minimum number of data center units (such as data center units 180) to operate a virtual resource. For example, the management module 130 can be configured to identify (or calculate) a minimum number of data center units (based on one or more assessment parameter values) to operate a virtual resource within a data center environment. In some embodiments, the management module 130 can be configured to determine that a particular minimum number of data center units are used to operate a virtual resource emulating, for example, at least a portion of a particular physical device. In some embodiments, the number of discrete data center units selected to operate a virtual resource (or set of virtual resources) can be determined by the management module 130 based on, for example, an ability of the data center units to handle burst processing levels of the virtual resource(s) and/or an average processing level of the virtual resource(s). In some embodiments, the calculations related to numbers of data center units to operate a virtual resource can be performed by an assessment module portion (not shown) of the management module 130. More details related to an assessment module are described in connection with co-pending patent application bearing attorney docket no. VITU-002/00US 311331-2002, filed on same date, entitled, "Methods and Apparatus Related to Migration of Customer Resources to Virtual Resources within a Data Center Environment," which has been incorporated herein by reference in its entirety.
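One plausible way to compute a minimum number of data center units for a virtual resource from its burst and average processing levels, per the sizing considerations above: cover the larger of the two levels, rounded up to whole units. The exact formula is an assumption; an assessment module could equally weight the two levels differently.

```python
# Hypothetical sizing calculation: smallest integer number of data center
# units able to handle both the average and burst processing levels of a
# virtual resource. Capacity figures are illustrative.
import math

def minimum_units(avg_load, burst_load, unit_capacity):
    """Smallest integer number of data center units covering both load levels."""
    required = max(avg_load, burst_load)  # the burst level dominates sizing
    return max(1, math.ceil(required / unit_capacity))
```

For example, a virtual resource averaging 300 units of load with bursts to 1100, on data center units of capacity 500, needs three units.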
[1047] In some embodiments, the monitoring module 134 can be configured to modify a number of data center units assigned to the user 50 based on a user preference of the user 50. In some embodiments, the user preference can identify the performance metric values to be used by the monitoring module 134 to modify a number of data center units (such as data center units 180 shown in FIG. 1) assigned to a user. In some embodiments, the user preference can identify one or more threshold conditions to be used by the monitoring module 134 to modify a number of data center units assigned to the user 50. In some embodiments, one or more user preferences can be stored in the memory 120.
[1048] As shown in FIG. 2, a user preference UA (shown in column 250) is associated with the user represented by user identifier A (shown in column 210), and a user preference UB (shown in column 250) is associated with the user represented by user identifier B (shown in column 210). The user preferences UA and UB can represent user preferences related to monitoring of data center units (which are shown in column 230). In some embodiments, the monitoring module 134 can be configured to access the user preferences 250 and can be configured to modify the number of data center units (shown in column 220) based on the user preferences 250.

[1049] In some embodiments, one or more portions of the management module 130 can be (or can include) a hardware-based module (e.g., an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field programmable gate array (FPGA)) and/or a software-based module (e.g., a module of computer code, a set of processor-readable instructions that can be executed at a processor). Although not shown, in some embodiments, the management module 130 can include one or more memory portions (e.g., a random access memory (RAM) portion, a shift register, a cache) that can be used during operation of one or more functions of the management module 130. In some embodiments, one or more of the functions associated with the management module 130 can be included in different modules and/or combined into one or more modules.
[1050] Although not shown, in some embodiments, the hardware resources and/or software resources of the data center 100 can include one or more levels of infrastructure. For example, in some embodiments, the hardware resources of the data center 100 can include host devices (e.g., server devices), storage devices, access switches, aggregation devices, routers, interface components, cables, and/or so forth. For example, the data center 100 can be configured so that host devices (which can be configured to host virtual resources) and/or storage devices can be in communication with (e.g., coupled to) a layer of access switches that are in communication with (e.g., coupled to) a layer of aggregation devices. The aggregation devices can function as gateway devices into a set of routers/switches that function as core switching elements of the data center 100.
[1051] In some embodiments, the software resources of the data center 100 can include, for example, management modules, operating systems, hypervisors (e.g., VMware
hypervisor, Xen hypervisor, Hyper-V hypervisor), and/or so forth. In some embodiments, the data center 100 can be a cloud computing environment where hardware resources and/or software resources are shared by multiple virtual resources associated with one or more users (e.g., clients, customers). In some embodiments, the virtualized environment defined by the data center 100 can be referred to as a data center virtualized environment. In some embodiments, the software resources of the data center 100 can include, for example, management modules, operating systems, hypervisors, and/or so forth. The hypervisors can be configured to facilitate virtualization of hardware resources of host devices. The operating systems can be installed at access switches, aggregation devices, routers, core switching elements, and/or so forth.

[1052] In some embodiments, the management module 130 can be a centralized management module configured to handle data center management for the entire data center 100, or can be a de-centralized management module configured to handle management of only a portion of the data center 100. In some embodiments, the management module 130 can be configured to perform various functions in addition to management of data center units such as data center units 180. For example, the management module 130 can be configured to handle disaster recovery, migration of virtual resources to a data center, and/or so forth. More details related to a management module configured to perform various operations related to a data center environment are set forth in co-pending patent application bearing attorney docket no. VITU-004/00US 311331-2004, filed on same date, entitled, "Methods and Apparatus for Data Center Management Independent of Hypervisor Platform," which is incorporated herein by reference in its entirety.
[1053] In some embodiments, the data center 100 can be managed locally or can have consolidated management. For example, the entire data center 100, or a portion thereof, can be managed via a single management module (not shown). In some embodiments, the entire data center 100, or a portion thereof, can be managed via multiple management modules (not shown) that can be distributed through the data center 100 infrastructure. In some embodiments, some functionality of the data center 100 can be managed based on a consolidated management scheme, while other functionality of the data center 100 can be managed based on a distributed management scheme.
[1054] FIG. 5 is a schematic diagram that illustrates resource controllers 570 in communication with a data center 500, according to an embodiment. As shown in FIG. 5, the resource controllers 570 include a processor (e.g., a central processing unit (CPU)) controller 540, a memory controller 542, a network controller 544, a storage input/output operations per second (IOPS) controller 546, a storage controller 548, and a storage bandwidth controller 550. In some embodiments, the resource controllers 570 can include, for example, a VMware capacity planning tool, a VMware vSphere controller, a Converged Network Adapter controller, a Compellent SAN controller, and/or so forth.
[1055] Each of the resource controllers 570 shown in FIG. 5 is configured to manage resources associated with a particular type of hardware of a data center 500. As represented in FIG. 5, the resource controllers 570 can be configured to manage a portion of the data center unit 580. Accordingly, the resource controllers 570 collectively manage the hardware resources of data center unit 580. One or more of the resource controllers 570 shown in FIG. 5 can be included in the resource controller 170 shown in FIG. 1.
[1056] In some embodiments, the processor controller 540 can be configured to manage the resources of one or more processors (not shown) of the data center 500 so that a certain portion of the computing cycles of the processor(s) are reserved for the data center unit 580. In other words, the computing cycles can be reserved so that if the computing cycles are needed by a virtual resource of a user to whom the data center unit 580 is assigned, the computing cycles will be available for use by the virtual resource of the user. For example, in some embodiments, computing cycles substantially equivalent to, for example, a 100 MHz processor, a 1.5 GHz processor, or so forth, can be reserved for the data center unit 580. In some embodiments, a hardware resource limit value specific to the processor controller 540 can be referred to as a processor limit value.
[1057] In some embodiments, the memory controller 542 can be configured to manage the resources of one or more memory components (not shown) of the data center 500 so that a certain portion of the memory component(s) can be reserved for the data center unit 580. For example, in some embodiments, a memory storage capacity of 1 MB, 10 MB, or so forth, of a memory component can be reserved for the data center unit 580. In some embodiments, a hardware resource limit value specific to the memory controller 542 can be referred to as a memory limit value.
[1058] In some embodiments, the network controller 544 can be configured to manage the resources of one or more network components (e.g., network interface cards) (not shown) of the data center 500 so that a certain portion of processing power of the network component(s) can be managed (e.g., reserved) as part of the data center unit 580. For example, in some embodiments, a data transfer capacity of a network component can be time-division multiplexed so that a specified level of network bandwidth substantially equal to, for example, 5 Mb/s, 100 Mb/s, 1 Gb/s, or so forth, can be managed for the data center unit 580. In some embodiments, a hardware resource limit value specific to the network controller 544 can be referred to as a network limit value.
[1059] In some embodiments, a storage IOPS controller 546 can be configured to manage the resources of one or more storage components (e.g., hard disk drive, server storage) (not shown) of the data center 500 so that a certain IO capacity of the storage component (e.g., more than 1 IOPS, 50 IOPS) can be managed for (e.g., reserved for) the data center unit 580. In some embodiments, a hardware resource limit value specific to the storage IOPS controller 546 can be referred to as an IOPS limit value.
[1060] In some embodiments, a storage controller 548 can be configured to manage the resources of one or more storage components so that a certain portion of storage capacity of the storage component(s) (e.g., 50 GB, 100 GB, 10 Terabytes (TB)) can be reserved for the data center unit 580. In some embodiments, a hardware resource limit value specific to the storage controller 548 can be referred to as a storage limit value.
[1061] In some embodiments, a storage bandwidth controller 550 can be configured to manage the bandwidth of one or more storage components so that a certain portion of the bandwidth (e.g., 10 Mb/s, 1 Gb/s) can be managed (e.g., reserved) for the data center unit 580. In some embodiments, a hardware resource limit value specific to the storage bandwidth controller 550 can be referred to as a storage bandwidth limit value.
[1062] The resource controllers 570 can be triggered to reserve a specified portion of hardware resources of the data center 500 for the data center unit 580 based on one or more hardware resource limit values. The hardware resource limit values can be communicated to the resource controllers 570 in an instruction 60. In some embodiments, the instruction 60 can be defined in response to a request (e.g., an increase request, a decrease request) received at the management module 530 from the user interface (UI) 560. As shown in FIG. 5, the management module 530 can be executed within a processor 594 of a processing device 590. The processing device 590 can also include a memory 592 (e.g., a storage device, a buffer, a RAM) configured to facilitate the functions of, for example, the management module 530. For example, the memory 592 can be used by the management module 530 during communication with the resource controllers 570.
[1063] As shown in FIG. 5, the instruction 60 can be sent to each of the resource controllers. The instruction 60 can include hardware resource limit values for each of the resource controllers within the resource controllers 570. For example, the instruction 60 can be defined at the management module 530 and sent to the memory controller 542. The memory controller 542 can be configured to parse the hardware resource limit value specific to the memory controller 542 from the instruction 60, and to manage hardware resources of the data center 500 for use as the data center unit 580 based on that hardware resource limit value.
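The instruction-parsing flow of paragraph [1063] can be sketched as follows. This is a hypothetical illustration only: the field names, units, and the controller class are assumptions for the sketch, not part of the disclosed implementation.

```python
# Hypothetical sketch: a single instruction carries a hardware resource
# limit value for each resource controller, and each controller parses
# out only the value it is responsible for enforcing.

instruction_60 = {
    "processor_limit_mhz": 1500,   # processor controller 540
    "memory_limit_mb": 10,         # memory controller 542
    "network_limit_mbps": 100,     # network controller 544
    "iops_limit": 50,              # storage IOPS controller 546
    "storage_limit_gb": 100,       # storage controller 548
    "storage_bw_limit_mbps": 10,   # storage bandwidth controller 550
}

class MemoryController:
    """Parses its own limit from the shared instruction and applies it."""

    LIMIT_KEY = "memory_limit_mb"

    def __init__(self):
        self.reserved_mb = 0

    def handle_instruction(self, instruction):
        # Parse only the memory-specific limit; ignore the other values.
        limit = instruction[self.LIMIT_KEY]
        self.reserve(limit)

    def reserve(self, limit_mb):
        # Stand-in for reserving memory of the data center for the unit.
        self.reserved_mb = limit_mb

controller = MemoryController()
controller.handle_instruction(instruction_60)
print(controller.reserved_mb)  # 10
```

Each of the other controllers would, under the same assumption, parse its own key from the shared instruction in an analogous way.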
[1064] Although not shown, in some embodiments, the management module 530 can be configured to define and send two or more different instructions to the resource controllers within the resource controllers 570. Different instructions can be sent because some of the resource controllers can be configured to operate based on different platforms (e.g., hardware and/or software platforms, protocols) than other resource controllers from the resource controllers 570. For example, the management module 530 can be configured to send a first instruction (which includes a network limit value) based on a first application programming interface (API) to the network controller 544, and send a second instruction (which includes a storage limit value) based on a second API to the storage controller 548.
[1065] Although six different types of resource controllers are shown in the resource controllers 570, in some embodiments, a different combination of resource controllers can be used to manage the hardware resources of data center unit 580. For example, fewer than all of the resource controllers 570 shown in FIG. 5 can be used to manage the hardware resources of data center unit 580. In some embodiments, a different resource controller such as a bus speed resource controller can be used to manage a bus portion of the data center unit 580. In some embodiments, the instruction 60 (or set of instructions) can be configured to trigger all or a portion of the resource controllers 570 to manage the hardware resources of the data center unit 580 in a customized fashion.
[1066] Although shown as being separate from the management module 530 in FIG. 5, in some embodiments, one or more of the resource controllers 570 can be integrated into the management module 530. In some embodiments, one or more functions of the resource controllers 570 can be included as a function of the management module 530. Although not shown, in some embodiments, the resource controllers 570 can include a software resource controller.
[1067] FIG. 6 is a flowchart that illustrates a method for modifying a set of data center units based on a performance metric, according to an embodiment. As shown in FIG. 6, a value of a performance metric representing performance of a set of virtual resources associated with a user identifier is received, at 600. The set of virtual resources can be associated with a user (based on a user identifier representing the user). In some
embodiments, the performance metric can be, for example, related to a failure rate of the virtual resources. In some embodiments, the performance metric can be optionally specified in a user preference 630 associated with the user, or can be based on a default value.
[1068] A set of data center units assigned to operate the set of virtual resources is modified in response to the performance metric satisfying a threshold condition, at 610. In some embodiments, a number of data center units mapped to the set of virtual resources can be increased or decreased. In some embodiments, the threshold condition can be optionally defined within the user preference 630 associated with a user, or can be based on a default value.
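The method of FIG. 6 can be sketched as a simple decision function. This is an illustrative sketch only: the specific metric (failure rate), the threshold value, the step size, and the cap are assumed for the example and are not prescribed by the description, which leaves them to a user preference 630 or a default.

```python
# Hypothetical sketch of FIG. 6: grow the set of data center units assigned
# to a set of virtual resources when a performance metric (here, an assumed
# failure rate) satisfies a threshold condition; otherwise leave it unchanged.

def modify_units(num_units, failure_rate, threshold=0.05, step=1, max_units=10):
    """Return the new number of data center units for the virtual resources.

    `threshold`, `step`, and `max_units` stand in for values that the
    description says can come from a user preference or a default.
    """
    if failure_rate > threshold:          # 610: threshold condition satisfied
        return min(num_units + step, max_units)
    return num_units                      # condition not satisfied: no change

print(modify_units(4, failure_rate=0.08))  # 5 (threshold exceeded, one unit added)
print(modify_units(4, failure_rate=0.01))  # 4 (within threshold, unchanged)
```

A decrease on sustained under-utilization could be handled symmetrically by subtracting `step` when the metric falls below a second threshold.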
[1069] FIG. 7 is a flowchart that illustrates a method for modifying a number of data center units in response to a request, according to an embodiment. Each data center unit represents hardware resources collectively managed as a unit. As shown in FIG. 7, a request for a change in a value representing a number of data center units included in a set of data center units assigned to a user is received, at 700. The request can be triggered by the user and can be received at a management module from a user interface. In some embodiments, the number of data center units assigned to the user can be represented by a value in a database stored in a memory.
[1070] If the request is an increase request, at 710, the availability of an additional data center unit is determined, at 720. The availability can be determined based on information about a pool of unassigned data center units stored in a database. As shown in FIG. 7, the additional data center unit is assigned to the user, at 740, when the data center unit is available, at 730.
[1071] When an additional data center unit is not available, at 730, the request is denied, at 750. In some embodiments, an additional data center unit may not be available because it is assigned to another user, or because resources (e.g., hardware resources, software resources) managed as a data center unit are not available. In some embodiments, the request may not be denied, but can instead be assigned a pending state (e.g., a held state) until resources (e.g., hardware resources, software resources) that can be assigned to the user become available. In other words, the request can be pending until resources that can be managed as a data center unit (and can be assigned to the user) become available. In some embodiments, the request can be queued with other requests in a request queue. In some embodiments, a queue of requests can be handled in a first-in-first-out (FIFO) fashion, or in some other order.
[1072] If the request is a decrease request, at 710, at least one data center unit is included in a pool of unassigned data center units, at 750. An assignment of the data center unit(s) can be modified so that they are no longer assigned to the user and are instead included in a pool of unassigned data center units. In other words, a representation of the data center unit(s) is included in a pool of unassigned data center units so that, for example, a management module can identify these data center unit(s) as not being assigned to a user. Although not shown, in some embodiments, a management module, for example, can be configured to reassign one or more of the data center units to another user rather than include the data center unit(s) in the pool of unassigned data center units.
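The increase/decrease branches of FIG. 7 can be sketched together as follows. This is an assumed illustration: the class name, the integer pool representation, and the signed-change request encoding are inventions of the sketch, chosen to show the pool, the FIFO pending queue, and the return of units to the pool described above.

```python
# Hypothetical sketch of FIG. 7: an increase request draws from the pool of
# unassigned data center units (or is held in a FIFO pending queue when no
# unit is available), and a decrease request returns units to the pool.
from collections import deque

class ManagementModule:
    def __init__(self, pool_size):
        self.pool = pool_size            # unassigned data center units
        self.assignments = {}            # user identifier -> number of units
        self.pending = deque()           # FIFO queue of held increase requests

    def handle_request(self, user, change):
        count = self.assignments.get(user, 0)
        if change > 0:                   # increase request (710 -> 720)
            if self.pool >= change:      # unit(s) available (730 -> 740)
                self.pool -= change
                self.assignments[user] = count + change
                return "assigned"
            # Not available: hold the request rather than deny it outright.
            self.pending.append((user, change))
            return "pending"
        # Decrease request (710 -> 750): return unit(s) to the unassigned pool.
        released = min(count, -change)
        self.assignments[user] = count - released
        self.pool += released
        return "released"

mm = ManagementModule(pool_size=2)
print(mm.handle_request("user-1", +2))   # assigned
print(mm.handle_request("user-2", +1))   # pending (pool exhausted)
print(mm.handle_request("user-1", -1))   # released (one unit back in the pool)
```

On each release, a real implementation would revisit the pending queue to see whether a held request can now be satisfied; that step is omitted here for brevity.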
[1073] Some embodiments described herein relate to a computer storage product with a computer-readable medium (also can be referred to as a processor-readable medium) having instructions or computer code thereon for performing various computer-implemented operations. The media and computer code (also can be referred to as code) may be those designed and constructed for the specific purpose or purposes. Examples of computer-readable media include, but are not limited to: magnetic storage media such as hard disks, floppy disks, and magnetic tape; optical storage media such as Compact Disc/Digital Video Discs (CD/DVDs), Compact Disc-Read Only Memories (CD-ROMs), and holographic devices; magneto-optical storage media such as optical disks; carrier wave signal processing modules; and hardware devices that are specially configured to store and execute program code, such as Application-Specific Integrated Circuits (ASICs), Programmable Logic Devices (PLDs), and Read-Only Memory (ROM) and Random-Access Memory (RAM) devices.
[1074] Examples of computer code include, but are not limited to, micro-code or microinstructions, machine instructions, such as produced by a compiler, code used to produce a web service, and files containing higher-level instructions that are executed by a computer using an interpreter. For example, embodiments may be implemented using, for example, a run-time environment and/or an application framework such as a Microsoft .NET framework and/or Java, C++, or other programming languages (e.g., object-oriented programming languages) and/or development tools. Additional examples of computer code include, but are not limited to, control signals, encrypted code, and compressed code.
[1075] While various embodiments have been described above, it should be understood that they have been presented by way of example only, not limitation, and various changes in form and details may be made. Any portion of the apparatus and/or methods described herein may be combined in any combination, except mutually exclusive combinations. The embodiments described herein can include various combinations and/or sub-combinations of the functions, components and/or features of the different embodiments described. For example, multiple management modules can be configured to cooperatively handle assignment of data center units to one or more users.

Claims

What is claimed is:
1. A processor-readable medium storing code representing instructions to be executed by a processor, the code comprising code to:
receive a request to change a value representing a number of data center units included in a set of data center units assigned to a user, each of the data center units from the set of data center units associated with hardware resources managed based on a set of predefined resource limit values; and
determine, in response to the request, whether hardware resources of a data center unit mutually exclusive from hardware resources of the set of data center units and managed based on the set of predefined resource limit values is available for assignment to the user when the request to change is an increase request.
2. The processor-readable medium of claim 1, wherein at least a portion of the data center units from the set of data center units are configured to operate a set of virtual resources at a guaranteed service level.
3. The processor-readable medium of claim 1, further comprising code to:
remove a data center unit from the set of data center units when the request to change is a decrease request.
4. The processor-readable medium of claim 1, further comprising code to:
modify a distribution of a set of virtual resources operating within the set of data center units when the request to change is a decrease request such that the set of virtual resources operate within a first subset of the set of data center units before the code to modify is executed and operate within a second subset of the set of data center units different from the first subset of the set of data center units after the code to modify is executed.
5. The processor-readable medium of claim 1, further comprising code to:
identify a data center unit for removal from the set of data center units when the request to change is a decrease request; and
associate the data center unit with a pool of unassigned data center units.
6. The processor-readable medium of claim 1, wherein the user is a first user, the code further comprising code to:
reassign a data center unit from the set of data center units associated with the first user to a set of data center units associated with a second user when the request to change is a decrease request.
7. The processor-readable medium of claim 1, further comprising code to:
send a notification that the request to change has been denied when the additional data center unit is unavailable for assignment to the user.
8. The processor-readable medium of claim 1, wherein the set of predefined hardware resource limit values includes:
at least one of a processor speed limit value, a memory space limit value, or a network bandwidth limit value; and
at least one of a disk space limit value, a disk bandwidth limit value, or a disk input/output limit value.
9. An apparatus, comprising:
a memory configured to store information representing assignment of a set of data center units to a user identifier, each data center unit from the set of data center units associated with a set of processing resources managed based on a set of predefined resource limit values; and
a management module configured to modify a value representing a number of data center units included in the set of data center units in response to at least one of (1) a request associated with the user identifier, or (2) a threshold condition being satisfied based on a change in performance of a set of virtual resources operating within the set of data center units.
10. The apparatus of claim 9, wherein the management module is configured to send a plurality of instructions related to modification of the value representing the number of data center units to a plurality of resource controllers, each resource controller from the plurality of resource controllers is configured to implement a predefined resource limit value from the set of predefined resource limit values.
11. The apparatus of claim 9, wherein at least a portion of the data center units from the set of data center units are configured to operate the set of virtual resources associated with the user identifier at a guaranteed service level.
12. The apparatus of claim 9, wherein the memory is further configured to store a user preference, the management module is configured to modify the value representing the number of data center units based on the user preference.
13. The apparatus of claim 9, wherein the memory is further configured to store a user preference, the threshold condition being defined within the user preference.
14. The apparatus of claim 9, wherein the management module is configured to operate for a plurality of hypervisor platforms, the set of virtual resources being associated with a first hypervisor platform from the plurality of hypervisor platforms.
15. A processor-readable medium storing code representing instructions to be executed by a processor, the code comprising code to:
receive information representing that a set of data center units is assigned to operate a set of virtual resources, each data center unit from the set of data center units associated with a set of hardware resources managed based on a set of predefined hardware resource limit values;
receive an indicator representing that a utilization value of at least a portion of the set of virtual resources, when operating within the set of data center units, has satisfied a threshold condition; and
modify a value representing a number of data center units included in the set of data center units in response to the indicator.
16. The processor-readable medium of claim 15, wherein the threshold condition is satisfied when the utilization value is below a specified value for a specified period of time, the threshold condition is based on a user preference.
17. The processor-readable medium of claim 15, wherein a magnitude of the modification of the value representing the number of data center units is based on a magnitude of the utilization value relative to the threshold condition.
18. The processor-readable medium of claim 15, wherein the value representing the number of data center units represents a minimum number of data center units to operate the set of virtual resources at a guaranteed service level.
19. The processor-readable medium of claim 15, wherein the threshold condition is satisfied when a guaranteed service level is maintained.
20. The processor-readable medium of claim 15, further comprising code to:
receive authorization from a user to modify the value representing the number of data center units before modifying the value representing the number of data center units.
PCT/US2011/025393 2010-02-22 2011-02-18 Methods and apparatus related to management of unit-based virtual resources within a data center environment WO2011103393A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20110745300 EP2539829A4 (en) 2010-02-22 2011-02-18 Methods and apparatus related to management of unit-based virtual resources within a data center environment
CN201180020260.XA CN102971724B (en) 2010-02-22 2011-02-18 The method and apparatus relevant with the management based on modular virtual resource in data center environment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/709,962 2010-02-22
US12/709,962 US9122538B2 (en) 2010-02-22 2010-02-22 Methods and apparatus related to management of unit-based virtual resources within a data center environment

Publications (1)

Publication Number Publication Date
WO2011103393A1 true WO2011103393A1 (en) 2011-08-25

Family

ID=44477562

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2011/025393 WO2011103393A1 (en) 2010-02-22 2011-02-18 Methods and apparatus related to management of unit-based virtual resources within a data center environment

Country Status (4)

Country Link
US (3) US9122538B2 (en)
EP (1) EP2539829A4 (en)
CN (1) CN102971724B (en)
WO (1) WO2011103393A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102621970A (en) * 2012-04-16 2012-08-01 哈尔滨工业大学 Urban industrial gas safety intelligent monitoring system based on Internet of Things and urban industrial gas safety intelligent monitoring method
CN103399555A (en) * 2013-08-12 2013-11-20 山东兖矿国拓科技工程有限公司 Wireless intelligent monitoring system for combustible and toxic gas

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9729464B1 (en) 2010-06-23 2017-08-08 Brocade Communications Systems, Inc. Method and apparatus for provisioning of resources to support applications and their varying demands
US20160154673A1 (en) * 2014-07-23 2016-06-02 Sitting Man, Llc Methods, systems, and computer program products for providing a minimally complete operating environment
US9448847B2 (en) 2011-07-15 2016-09-20 Throughputer, Inc. Concurrent program execution optimization
EP2748705A4 (en) * 2011-08-25 2015-05-20 Virtustream Inc Systems and methods of host-aware resource management involving cluster-based resource pools
WO2014039046A1 (en) * 2012-09-06 2014-03-13 Empire Technology Development, Llc Cost reduction for servicing a client through excess network performance
GB2506195A (en) 2012-09-25 2014-03-26 Ibm Managing a virtual computer resource
US10244080B2 (en) * 2013-03-15 2019-03-26 VCE IP Holding Company LLC Accessing multiple converged IT infrastructures
CN103309745B (en) * 2013-04-19 2017-04-05 无锡成电科大科技发展有限公司 The method and device of the virtual resource configuration of cloud framework
US9912570B2 (en) 2013-10-25 2018-03-06 Brocade Communications Systems LLC Dynamic cloning of application infrastructures
FR3030977B1 (en) * 2014-12-19 2017-01-27 Sagemcom Broadband Sas METHOD OF ANNOUNCING SERVICES IN A COMMUNICATION NETWORK
FR3030966A1 (en) * 2014-12-23 2016-06-24 Orange SYSTEM FOR GENERATING A VIRTUALIZED NETWORK FUNCTION
US9684562B2 (en) * 2015-07-21 2017-06-20 International Business Machines Corporation Automatic serial starting of resource groups on failover based on the prediction of aggregate resource usage
DE102015214385A1 (en) * 2015-07-29 2017-02-02 Robert Bosch Gmbh Method and device for securing the application programming interface of a hypervisor
US10839420B2 (en) * 2015-07-31 2020-11-17 International Business Machines Corporation Constrained large-data markdown optimizations based upon markdown budget
CN105468358B (en) * 2015-11-17 2019-11-05 腾讯科技(深圳)有限公司 A kind of data processing method and device of moving game
JP6744985B2 (en) * 2016-08-27 2020-08-19 ニシラ, インコーポレイテッド Extend network control system to public cloud
CN110113176B (en) * 2018-02-01 2022-12-02 北京京东尚科信息技术有限公司 Information synchronization method and device for configuration server
US11216312B2 (en) 2018-08-03 2022-01-04 Virtustream Ip Holding Company Llc Management of unit-based virtual accelerator resources
US10747580B2 (en) * 2018-08-17 2020-08-18 Vmware, Inc. Function as a service (FaaS) execution distributor
US11030015B2 (en) * 2019-09-19 2021-06-08 International Business Machines Corporation Hardware and software resource optimization
CN114884900B (en) * 2022-06-09 2023-10-31 中国联合网络通信集团有限公司 Resource allocation method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1170662A2 (en) 2000-07-07 2002-01-09 Hitachi, Ltd. Apparatus and method for dynamically allocating computer resources based on service contract with user
US20030028642A1 (en) * 2001-08-03 2003-02-06 International Business Machines Corporation Managing server resources for hosted applications
US20060069594A1 (en) * 2004-07-01 2006-03-30 Yasushi Yamasaki Method and computer program product for resource planning
US20090199198A1 (en) * 2008-02-04 2009-08-06 Hiroshi Horii Multinode server system, load distribution method, resource management server, and program product

Family Cites Families (105)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8234650B1 (en) * 1999-08-23 2012-07-31 Oracle America, Inc. Approach for allocating resources to an apparatus
US7007276B1 (en) * 1999-09-28 2006-02-28 International Business Machines Corporation Method, system and program products for managing groups of partitions of a computing environment
US7748005B2 (en) * 2000-01-28 2010-06-29 Hewlett-Packard Development Company, L.P. System and method for allocating a plurality of resources between a plurality of computing domains
US7140020B2 (en) * 2000-01-28 2006-11-21 Hewlett-Packard Development Company, L.P. Dynamic management of virtual partition computer workloads through service level optimization
US7051098B2 (en) * 2000-05-25 2006-05-23 United States Of America As Represented By The Secretary Of The Navy System for monitoring and reporting performance of hosts and applications and selectively configuring applications in a resource managed system
JP2002202959A (en) * 2000-12-28 2002-07-19 Hitachi Ltd Virtual computer system for performing dynamic resource distribution
US20020184363A1 (en) 2001-04-20 2002-12-05 Steven Viavant Techniques for server-controlled measurement of client-side performance
US6889253B2 (en) 2001-04-30 2005-05-03 International Business Machines Corporation Cluster resource action in clustered computer system incorporation prepare operation
US7194616B2 (en) 2001-10-27 2007-03-20 International Business Machines Corporation Flexible temporary capacity upgrade/downgrade in a computer system without involvement of the operating system
US7565398B2 (en) 2002-06-27 2009-07-21 International Business Machines Corporation Procedure for dynamic reconfiguration of resources of logical partitions
US7783759B2 (en) * 2002-12-10 2010-08-24 International Business Machines Corporation Methods and apparatus for dynamic allocation of servers to a plurality of customers to maximize the revenue of a server farm
US20060294238A1 (en) * 2002-12-16 2006-12-28 Naik Vijay K Policy-based hierarchical management of shared resources in a grid environment
US7290260B2 (en) * 2003-02-20 2007-10-30 International Business Machines Corporation Dynamic processor redistribution between partitions in a computing system
US20040267897A1 (en) * 2003-06-24 2004-12-30 Sychron Inc. Distributed System Providing Scalable Methodology for Real-Time Control of Server Pools and Data Centers
US8776050B2 (en) 2003-08-20 2014-07-08 Oracle International Corporation Distributed virtual machine monitor for managing multiple virtual resources across multiple physical nodes
JP4066932B2 (en) * 2003-11-10 2008-03-26 株式会社日立製作所 Computer resource allocation method based on prediction
US7437730B2 (en) 2003-11-14 2008-10-14 International Business Machines Corporation System and method for providing a scalable on demand hosting system
US20050149940A1 (en) * 2003-12-31 2005-07-07 Sychron Inc. System Providing Methodology for Policy-Based Resource Allocation
US8346909B2 (en) 2004-01-22 2013-01-01 International Business Machines Corporation Method for supporting transaction and parallel application workloads across multiple domains based on service level agreements
US7664110B1 (en) 2004-02-07 2010-02-16 Habanero Holdings, Inc. Input/output controller for coupling the processor-memory complex to the fabric in fabric-backplane interprise servers
US7971204B2 (en) * 2004-03-13 2011-06-28 Adaptive Computing Enterprises, Inc. System and method of co-allocating a reservation spanning different compute resources types
US8336040B2 (en) 2004-04-15 2012-12-18 Raytheon Company System and method for topology-aware job scheduling and backfilling in an HPC environment
JP4639676B2 (en) * 2004-07-21 2011-02-23 株式会社日立製作所 Rental server system
GB0420057D0 (en) 2004-09-09 2004-10-13 Level 5 Networks Ltd Dynamic resource allocation
US7484242B2 (en) * 2004-09-16 2009-01-27 International Business Machines Corporation Enabling user control over automated provisioning environment
US7356770B1 (en) * 2004-11-08 2008-04-08 Cluster Resources, Inc. System and method of graphically managing and monitoring a compute environment
US8145872B2 (en) 2004-11-08 2012-03-27 International Business Machines Corporation Autonomic self-tuning of database management system in dynamic logical partitioning environment
US9753754B2 (en) 2004-12-22 2017-09-05 Microsoft Technology Licensing, Llc Enforcing deterministic execution of threads of guest operating systems running in a virtual machine hosted on a multiprocessor machine
US20060143617A1 (en) * 2004-12-29 2006-06-29 Knauerhase Robert C Method, apparatus and system for dynamic allocation of virtual platform resources
US7716743B2 (en) 2005-01-14 2010-05-11 Microsoft Corporation Privacy friendly malware quarantines
US7908605B1 (en) * 2005-01-28 2011-03-15 Hewlett-Packard Development Company, L.P. Hierarchal control system for controlling the allocation of computer resources
US8051170B2 (en) * 2005-02-10 2011-11-01 Cisco Technology, Inc. Distributed computing based on multiple nodes with determined capacity selectively joining resource groups having resource requirements
US8140371B2 (en) * 2005-02-18 2012-03-20 International Business Machines Corporation Providing computing service to users in a heterogeneous distributed computing environment
US7490353B2 (en) 2005-02-22 2009-02-10 Kidaro, Inc. Data transfer security
US7730486B2 (en) * 2005-02-28 2010-06-01 Hewlett-Packard Development Company, L.P. System and method for migrating virtual machines on cluster systems
US8108869B2 (en) * 2005-03-11 2012-01-31 Adaptive Computing Enterprises, Inc. System and method for enforcing future policies in a compute environment
US7698430B2 (en) * 2005-03-16 2010-04-13 Adaptive Computing Enterprises, Inc. On-demand compute environment
US9015324B2 (en) * 2005-03-16 2015-04-21 Adaptive Computing Enterprises, Inc. System and method of brokering cloud computing resources
US8468530B2 (en) * 2005-04-07 2013-06-18 International Business Machines Corporation Determining and describing available resources and capabilities to match jobs to endpoints
US8429630B2 (en) 2005-09-15 2013-04-23 Ca, Inc. Globally distributed utility computing cloud
US7643472B2 (en) * 2005-10-19 2010-01-05 At&T Intellectual Property I, Lp Methods and apparatus for authorizing and allocating outdial communication services
JP4546382B2 (en) 2005-10-26 2010-09-15 株式会社日立製作所 Device quarantine method and device quarantine system
US7941804B1 (en) * 2005-10-31 2011-05-10 Hewlett-Packard Development Company, L.P. Allocating resources among tiered partitions of different types
JP4377369B2 (en) * 2005-11-09 2009-12-02 株式会社日立製作所 Resource allocation arbitration device and resource allocation arbitration method
JP4129988B2 (en) * 2005-11-10 2008-08-06 インターナショナル・ビジネス・マシーンズ・コーポレーション How to provision resources
WO2007097451A1 (en) 2006-02-27 2007-08-30 Takeda Pharmaceutical Company Limited Pharmaceutical package
US20070266433A1 (en) 2006-03-03 2007-11-15 Hezi Moore System and Method for Securing Information in a Virtual Computing Environment
US8555274B1 (en) * 2006-03-31 2013-10-08 Vmware, Inc. Virtualized desktop allocation system using virtual infrastructure
US8261277B2 (en) * 2006-04-10 2012-09-04 General Electric Company System and method for dynamic allocation of resources in a computing grid
US9280662B2 (en) 2006-04-21 2016-03-08 Hewlett Packard Enterprise Development Lp Automatic isolation of misbehaving processes on a computer system
US20070271560A1 (en) 2006-05-18 2007-11-22 Microsoft Corporation Deploying virtual machine to host based on workload characterizations
US8136111B2 (en) * 2006-06-27 2012-03-13 International Business Machines Corporation Managing execution of mixed workloads in a simultaneous multi-threaded (SMT) enabled system
US8161475B2 (en) 2006-09-29 2012-04-17 Microsoft Corporation Automatic load and balancing for virtual machines to meet resource requirements
US8732699B1 (en) * 2006-10-27 2014-05-20 Hewlett-Packard Development Company, L.P. Migrating virtual machines between physical machines in a defined group
US8255915B1 (en) * 2006-10-31 2012-08-28 Hewlett-Packard Development Company, L.P. Workload management for computer system with container hierarchy and workload-group policies
US7673113B2 (en) 2006-12-29 2010-03-02 Intel Corporation Method for dynamic load balancing on partitioned systems
US8108855B2 (en) 2007-01-02 2012-01-31 International Business Machines Corporation Method and apparatus for deploying a set of virtual software resource templates to a set of nodes
US8468244B2 (en) 2007-01-05 2013-06-18 Digital Doors, Inc. Digital information infrastructure and method for security designated data and with granular data stores
US20080244222A1 (en) * 2007-03-30 2008-10-02 Intel Corporation Many-core processing using virtual processors
US8196138B2 (en) 2007-04-19 2012-06-05 International Business Machines Corporation Method and system for migrating virtual machines between hypervisors
US8291411B2 (en) * 2007-05-21 2012-10-16 International Business Machines Corporation Dynamic placement of virtual machines for managing violations of service level agreements (SLAs)
US20090024713A1 (en) 2007-07-18 2009-01-22 Metrosource Corp. Maintaining availability of a data center
US8819104B1 (en) * 2007-09-26 2014-08-26 Emc Corporation Communication with multiple storage processors using network infrastructure
US8230070B2 (en) * 2007-11-09 2012-07-24 Manjrasoft Pty. Ltd. System and method for grid and cloud computing
US8819675B2 (en) 2007-11-28 2014-08-26 Hitachi, Ltd. Virtual machine monitor and multiprocessor system
JP5229232B2 (en) * 2007-12-04 2013-07-03 富士通株式会社 Resource lending control device, resource lending method, and resource lending program
US8185894B1 (en) * 2008-01-10 2012-05-22 Hewlett-Packard Development Company, L.P. Training a virtual machine placement controller
US20090241192A1 (en) * 2008-03-21 2009-09-24 Thomas Andrew J Virtual machine configuration sharing between host and virtual machines and between virtual machines
US8473594B2 (en) * 2008-05-02 2013-06-25 Skytap Multitenant hosted virtual machine infrastructure
US8484355B1 (en) * 2008-05-20 2013-07-09 Verizon Patent And Licensing Inc. System and method for customer provisioning in a utility computing platform
US9501124B2 (en) 2008-05-22 2016-11-22 Microsoft Technology Licensing, Llc Virtual machine placement based on power calculations
US8924543B2 (en) * 2009-01-28 2014-12-30 Headwater Partners I Llc Service design center for device assisted services
US8180604B2 (en) * 2008-09-30 2012-05-15 Hewlett-Packard Development Company, L.P. Optimizing a prediction of resource usage of multiple applications in a virtual environment
US8627328B2 (en) * 2008-11-14 2014-01-07 Oracle International Corporation Operation control for deploying and managing software service in a virtual environment
US8549516B2 (en) * 2008-12-23 2013-10-01 Citrix Systems, Inc. Systems and methods for controlling, by a hypervisor, access to physical resources
US8190769B1 (en) * 2008-12-30 2012-05-29 Juniper Networks, Inc. Methods and apparatus for provisioning at a network device in response to a virtual resource migration notification
US8560677B2 (en) * 2009-02-13 2013-10-15 Schneider Electric It Corporation Data center control
US8793365B2 (en) * 2009-03-04 2014-07-29 International Business Machines Corporation Environmental and computing cost reduction with improved reliability in workload assignment to distributed computing nodes
US8321862B2 (en) 2009-03-20 2012-11-27 Oracle America, Inc. System for migrating a virtual machine and resource usage data to a chosen target host based on a migration policy
US8296419B1 (en) * 2009-03-31 2012-10-23 Amazon Technologies, Inc. Dynamically modifying a cluster of computing nodes used for distributed execution of a program
US9396042B2 (en) * 2009-04-17 2016-07-19 Citrix Systems, Inc. Methods and systems for evaluating historical metrics in selecting a physical host for execution of a virtual machine
US9501329B2 (en) * 2009-05-08 2016-11-22 Rackspace Us, Inc. Methods and systems for cloud computing management
US8135934B2 (en) * 2009-05-28 2012-03-13 International Business Machines Corporation Dynamically allocating limited system memory for DMA among multiple adapters
US8234236B2 (en) * 2009-06-01 2012-07-31 International Business Machines Corporation System and method for efficient allocation of resources in virtualized desktop environments
JP5400482B2 (en) * 2009-06-04 2014-01-29 株式会社日立製作所 Management computer, resource management method, resource management program, recording medium, and information processing system
US9852011B1 (en) * 2009-06-26 2017-12-26 Turbonomic, Inc. Managing resources in virtualization systems
US8359594B1 (en) * 2009-06-30 2013-01-22 Sychron Advanced Technologies, Inc. Automated rapid virtual machine provisioning system
US8930731B2 (en) * 2009-07-21 2015-01-06 Oracle International Corporation Reducing power consumption in data centers having nodes for hosting virtual machines
CN102043673B (en) 2009-10-21 2015-06-03 Sap欧洲公司 Calibration of resource allocation during parallel processing
JP5378946B2 (en) * 2009-10-26 2013-12-25 株式会社日立製作所 Server management apparatus and server management method
US8640139B2 (en) * 2009-10-29 2014-01-28 Nec Corporation System deployment determination system, system deployment determination method, and program
US9274851B2 (en) * 2009-11-25 2016-03-01 Brocade Communications Systems, Inc. Core-trunking across cores on physically separated processors allocated to a virtual machine based on configuration information including context information for virtual machines
US8490087B2 (en) * 2009-12-02 2013-07-16 International Business Machines Corporation System and method for transforming legacy desktop environments to a virtualized desktop model
US20120054624A1 (en) * 2010-08-27 2012-03-01 Owens Jr Kenneth Robert Systems and methods for a multi-tenant system providing virtual data centers in a cloud configuration
US8924982B2 (en) * 2010-01-12 2014-12-30 Amazon Technologies, Inc. Managing private use of program execution capacity
US8433802B2 (en) * 2010-01-26 2013-04-30 International Business Machines Corporation System and method for fair and economical resource partitioning using virtual hypervisor
US8473959B2 (en) 2010-02-22 2013-06-25 Virtustream, Inc. Methods and apparatus related to migration of customer resources to virtual resources within a data center environment
US9027017B2 (en) 2010-02-22 2015-05-05 Virtustream, Inc. Methods and apparatus for movement of virtual resources within a data center environment
JP5544967B2 (en) 2010-03-24 2014-07-09 富士通株式会社 Virtual machine management program and virtual machine management apparatus
WO2012057942A1 (en) 2010-10-27 2012-05-03 High Cloud Security, Inc. System and method for secure storage of virtual machines
US8667496B2 (en) 2011-01-04 2014-03-04 Host Dynamics Ltd. Methods and systems of managing resources allocated to guest virtual machines
EP2748705A4 (en) 2011-08-25 2015-05-20 Virtustream Inc Systems and methods of host-aware resource management involving cluster-based resource pools
JP5584186B2 (en) * 2011-10-28 2014-09-03 株式会社ソニー・コンピュータエンタテインメント Storage system and storage device
US20140289377A1 (en) * 2013-03-22 2014-09-25 Netapp Inc. Configuring network storage system over a network
US9686143B2 (en) * 2014-09-24 2017-06-20 Intel Corporation Mechanism for management controllers to learn the control plane hierarchy in a data center environment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1170662A2 (en) 2000-07-07 2002-01-09 Hitachi, Ltd. Apparatus and method for dynamically allocating computer resources based on service contract with user
US20030028642A1 (en) * 2001-08-03 2003-02-06 International Business Machines Corporation Managing server resources for hosted applications
US20060069594A1 (en) * 2004-07-01 2006-03-30 Yasushi Yamasaki Method and computer program product for resource planning
US20090199198A1 (en) * 2008-02-04 2009-08-06 Hiroshi Horii Multinode server system, load distribution method, resource management server, and program product

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2539829A4

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102621970A (en) * 2012-04-16 2012-08-01 哈尔滨工业大学 Urban industrial gas safety intelligent monitoring system based on Internet of Things and urban industrial gas safety intelligent monitoring method
CN103399555A (en) * 2013-08-12 2013-11-20 山东兖矿国拓科技工程有限公司 Wireless intelligent monitoring system for combustible and toxic gas

Also Published As

Publication number Publication date
CN102971724A (en) 2013-03-13
US9866450B2 (en) 2018-01-09
EP2539829A1 (en) 2013-01-02
CN102971724B (en) 2016-12-28
US9122538B2 (en) 2015-09-01
US20150333977A1 (en) 2015-11-19
US10659318B2 (en) 2020-05-19
US20110209147A1 (en) 2011-08-25
EP2539829A4 (en) 2015-04-29
US20180097709A1 (en) 2018-04-05

Similar Documents

Publication Publication Date Title
US10659318B2 (en) Methods and apparatus related to management of unit-based virtual resources within a data center environment
US11314551B2 (en) Resource allocation and scheduling for batch jobs
US10554740B2 (en) Dynamic allocation of a workload across a plurality of clouds
US9027017B2 (en) Methods and apparatus for movement of virtual resources within a data center environment
US10394477B2 (en) Method and system for memory allocation in a disaggregated memory architecture
US9183016B2 (en) Adaptive task scheduling of Hadoop in a virtualized environment
US9471258B2 (en) Performance isolation for storage clouds
US9665154B2 (en) Subsystem-level power management in a multi-node virtual machine environment
US10698785B2 (en) Task management based on an access workload
US11740921B2 (en) Coordinated container scheduling for improved resource allocation in virtual computing environment
US9800484B2 (en) Optimizing resource utilization in a networked computing environment
US10169101B2 (en) Software based collection of performance metrics for allocation adjustment of virtual resources
US10956228B2 (en) Task management using a virtual node
EP4260185A1 (en) System and method for performing workloads using composed systems
US8806121B2 (en) Intelligent storage provisioning within a clustered computing environment
US9898219B1 (en) Allocating a device to a container

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase
Ref document number: 201180020260.X
Country of ref document: CN
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 11745300
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
WWE Wipo information: entry into national phase
Ref document number: 2011745300
Country of ref document: EP