WO2015150977A1 - Virtual-machine placement based on information from multiple data centers
- Publication number
- WO2015150977A1 (PCT/IB2015/052178)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- workloads
- placement
- performance characteristics
- directives
- hosts
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/70—Admission control; Resource allocation
- H04L47/82—Miscellaneous aspects
- H04L47/829—Topology based
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/70—Admission control; Resource allocation
- H04L47/83—Admission control; Resource allocation based on usage prediction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/501—Performance criteria
Abstract
A method includes collecting performance characteristics of first workloads that run on first hosts in a first computer network (24A). One or more placement directives, for assigning workloads to hosts, are derived from the performance characteristics of the first workloads. Second workloads are assigned to second hosts in a second computing system (24B) that is separate from the first computing system, in accordance with the placement directives.
Description
VIRTUAL-MACHINE PLACEMENT BASED ON INFORMATION FROM MULTIPLE DATA CENTERS
FIELD OF THE INVENTION
The present invention relates generally to virtualized computing, and particularly to methods and systems for Virtual-Machine (VM) placement.
BACKGROUND OF THE INVENTION
Machine virtualization is commonly used in various computing environments, such as in data centers and cloud computing. Various virtualization solutions are known in the art. For example, VMware, Inc. (Palo Alto, California), offers virtualization software for environments such as data centers and cloud computing. Virtualized computing systems often run VM placement processes for selecting which physical host is to run a given VM.
SUMMARY OF THE INVENTION
An embodiment of the present invention that is described herein provides a method including collecting performance characteristics of first workloads that run on first hosts in a first computer network. One or more placement directives, for assigning workloads to hosts, are derived from the performance characteristics of the first workloads. Second workloads are assigned to second hosts in a second computing system that is separate from the first computing system, in accordance with the placement directives.
In some embodiments, the first and second computer networks include virtualized data centers, and the first and second workloads include Virtual Machines (VMs). In an embodiment, deriving the placement directives includes classifying the first workloads into classes depending on the performance characteristics, and specifying the placement directives in terms of the classes. In another embodiment, assigning the second workloads to the second hosts includes predicting a resource usage pattern of a second workload based on a placement directive derived from the first workloads, and assigning the second workload to a second host based on the predicted resource usage pattern.
In a disclosed embodiment, collection of the performance characteristics and application of the placement directives are performed by local placement units in the first and second computer systems, and derivation of the placement directives is performed by a global placement unit external to the first and second computer networks. In another embodiment, collecting the performance characteristics includes collecting temporal resource usage patterns of the first workloads.
In yet another embodiment, collecting the performance characteristics includes collecting communication interaction between two or more of the first workloads. In still another embodiment, the method includes estimating available resources of the first hosts, and deriving the placement directives includes specifying the placement directives based on the estimated available resources.
In an embodiment, collecting the performance characteristics includes gathering the performance characteristics over the first and second workloads in both the first computer network and the second computer network, and deriving the placement directives includes specifying the placement directives based on the performance characteristics gathered over the first and second computer networks. In some embodiments, the method may include assigning one or more physical resources of one or more of the second hosts in the second computing system to one or more of the second workloads, based on the collected performance characteristics.
There is additionally provided, in accordance with an embodiment of the present invention, a system including first and second local placement units, and a global placement unit. The first local placement unit is configured to collect performance characteristics of first workloads that run on first hosts in a first computer network. The global placement unit is configured to derive from the performance characteristics of the first workloads one or more placement directives for assigning workloads to hosts. The second local placement unit is configured to assign second workloads to second hosts in a second computing system that is separate from the first computing system, in accordance with the placement directives.
There is further provided, in accordance with an embodiment of the present invention, an apparatus including an interface and a processor. The interface is configured to communicate with first and second separate computer networks. The processor is configured to receive via the interface performance characteristics of first workloads that run on first hosts in the first computer network, to derive from the performance characteristics of the first workloads one or more placement directives for assigning workloads to hosts, and to send the directives via the interface to the second computing system, for use in assigning second workloads to second hosts in the second computing system.
The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a block diagram that schematically illustrates a VM placement system, in accordance with an embodiment of the present invention; and
Fig. 2 is a flow chart that schematically illustrates a method for VM placement, in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION OF EMBODIMENTS
OVERVIEW
Embodiments of the present invention that are described herein provide improved methods and systems for placement of workloads in computer networks. In the present context, the term "placement" means the assignment of workloads to physical hosts, including the decision of which workload is to run on which host. Placement of a given workload may be performed before or after the workload is provisioned and running. The latter process is often referred to as migration.
The embodiments described herein refer mainly to placement of Virtual Machines (VMs) in virtualized data centers. The disclosed techniques, however, can be used with various other types of workloads and in various other types of computer networks.
In the disclosed embodiments, multiple separate data centers are provisioned with respective software components referred to as local placement units. In addition, a global placement unit communicates with the various local placement units, e.g., as a cloud service. Each local placement unit collects performance characteristics of VMs running in its respective data center. The global placement unit accumulates the performance characteristics collected across the multiple data centers, and derives VM placement directives from the accumulated performance characteristics. The placement directives are sent back to the local placement units, which in turn apply them in their respective data centers.
For example, several classes of VMs may be defined, e.g., short-lived VMs, bursty VMs, or pairs of VMs that tend to communicate extensively with one another. The placement directives may specify how to classify a VM into one of the classes, and how to place VMs of that class. In this manner, the local placement units are able to predict the resource usage patterns of VMs, and to assign them to hosts accordingly.
When using the disclosed techniques, VM placement in one data center can be optimized using information collected in another data center. Such a technique is advantageous, for example, in small or new data centers that can benefit from information collected in larger or more mature data centers. Moreover, the disclosed techniques enable the global placement unit to specify, test and refine placement directives over a large number of VMs and hosts, beyond the scale of any individual data center. As such, the placement directives are typically more accurate and enable each individual data center to better utilize its available resources.
Moreover, when using the disclosed techniques, a local placement unit in a given data center may use the workload performance characteristics collected in another data center for assigning physical host resources (e.g., CPU, memory or network resources) to VMs in the local data center.
SYSTEM DESCRIPTION
Fig. 1 is a block diagram that schematically illustrates a VM placement system 20, in accordance with an embodiment of the present invention. System 20 operates across multiple data centers. The example of Fig. 1 shows only two data centers 24A and 24B, for the sake of clarity. Alternatively, however, system 20 may operate over any desired number of data centers. Data centers 24 are typically separate from one another, and may be operated by different parties.
Each data center comprises physical hosts 28 that are connected by a communication network 36. Each host runs one or more Virtual Machines (VMs) 32. The VMs consume physical resources of the hosts, e.g., memory, CPU and networking resources. Hosts 28 may comprise, for example, servers, workstations or any other suitable computing platforms. Network 36 may comprise, for example, an Ethernet or InfiniBand Local-Area Network (LAN).
In some embodiments, each data center 24 comprises a respective local placement unit 40, which carries out the various tasks relating to placement of VMs in that data center. In addition, system 20 comprises a global placement unit 52, which specifies VM placement directives based on information collected across the multiple data centers. The functions of local placement units 40 and global placement unit 52 are described in detail below.
Local placement units 40 communicate with global placement unit 52 over a Wide-Area Network 56, such as the Internet. Each local placement unit 40 comprises a network interface 44 for communicating with hosts 28 of its respective data center over network 36, and for communicating with global placement unit 52 over network 56. Each local placement unit further comprises a processor 48 that carries out the various processing tasks of the local placement unit. Global placement unit 52 comprises a network interface 60 for communicating with local placement units 40 over network 56, and a processor 64 that carries out the various processing tasks of the global placement unit.
The system configuration shown in Fig. 1 is an example configuration that is chosen purely for the sake of conceptual clarity. In alternative embodiments, any other suitable system configuration can be used. For example, although the embodiments described herein refer mainly to placement of VMs, the disclosed techniques can be used for placement of any other suitable type of workload, such as applications and/or operating-system processes or containers. Although the embodiments described herein refer mainly to virtualized data centers, the disclosed techniques can be used for placement of workloads in any other suitable type of computer systems.
The various elements of system 20, and in particular the elements of placement units 40 and/or 52, may be implemented using hardware/firmware, such as in one or more Application-Specific Integrated Circuits (ASICs) or Field-Programmable Gate Arrays (FPGAs). Alternatively, some system elements, e.g., processors 48 and/or 64, may be implemented in software or using a combination of hardware/firmware and software elements. In some embodiments, processors 48 and/or 64 comprise general-purpose processors, which are programmed in software to carry out the functions described herein. The software may be downloaded to the processors in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on non-transitory tangible media, such as magnetic, optical, or electronic memory.
PLACEMENT DIRECTIVES BASED ON PERFORMANCE CHARACTERISTICS COLLECTED OVER MULTIPLE DATA CENTERS
As part of the on-going operation of each data center 24, each local placement unit 40 makes placement decisions and assigns VMs 32 to hosts 28 accordingly. Placement decisions are based, for example, on the performance characteristics of the VMs and on the available physical resources (e.g., CPU, memory and networking resources) of the hosts. Placement decisions typically aim to predict the future resource consumption of VMs, and to assign VMs to hosts so as to best provide the required resources.
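To make this kind of decision concrete, the following is a minimal sketch of a host-selection step in Python. The `Host` and `PredictedDemand` types, the resource fields, and the 20% headroom margin are illustrative assumptions, not part of the disclosure.

```python
# Hedged sketch of a placement decision: match a VM's predicted resource
# demand against the available resources of candidate hosts.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_cpu: float       # available CPU cores
    free_mem_gb: float    # available memory, GB
    free_net_mbps: float  # available network bandwidth

@dataclass
class PredictedDemand:
    cpu: float
    mem_gb: float
    net_mbps: float

def choose_host(hosts, demand, margin=1.2):
    """Return a host that covers the predicted demand with some headroom,
    preferring the host left with the most slack; None if none fits."""
    feasible = [h for h in hosts
                if h.free_cpu >= demand.cpu * margin
                and h.free_mem_gb >= demand.mem_gb * margin
                and h.free_net_mbps >= demand.net_mbps * margin]
    if not feasible:
        return None
    return max(feasible,
               key=lambda h: (h.free_cpu - demand.cpu) +
                             (h.free_mem_gb - demand.mem_gb))
```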
Each local placement unit 40 typically assigns VMs 32 to hosts 28 by applying a set of placement directives. In some embodiments, the placement directives are specified by global placement unit 52, based on information collected across the multiple data centers 24.
Local placement units 40 typically collect various performance characteristics of VMs 32. Performance characteristics of a given VM may comprise, for example, the size of the image from which the VM was created, the profile of memory, CPU and networking resource usage over time, and the VM temporal usage pattern (e.g., start times, stop times, usage durations). Local placement units 40 report these performance characteristics to global placement unit 52, which uses them to specify placement directives.
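As a rough illustration, the per-VM record a local placement unit reports might look like the sketch below; all field names are assumptions chosen to mirror the characteristics listed above.

```python
# Hedged sketch of a per-VM performance-characteristics record.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class VMCharacteristics:
    vm_id: str
    image_size_gb: float  # size of the image the VM was created from
    cpu_profile: List[float] = field(default_factory=list)  # usage samples over time
    mem_profile: List[float] = field(default_factory=list)  # memory samples, GB
    net_profile: List[float] = field(default_factory=list)  # network samples, Mbps
    sessions: List[Tuple[float, float]] = field(default_factory=list)  # (start, stop) times
```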
In some embodiments, local placement units 40 classify VMs 32 into several classes, and the placement directives are also defined in terms of the classes. By classifying a given VM, local placement unit 40 is able to predict the expected resource usage pattern of the VM, and assign it to a host that will be able to provide the expected resources.
For example, the placement directives may specify how to identify and place a short-lived VM, e.g., a VM that starts, performs a number of computations in a short time duration, saves the results and stops. Assume, for example, that most short-lived VMs are created from an image of a certain size. A placement directive may thus specify that a VM having such an image size should be placed on a host that will be able to provide certain specified memory/CPU/networking resources in the next specified time duration.
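One plausible encoding of such a directive, building on the `VMCharacteristics` sketch above, pairs an identification predicate with a resource requirement over a time window. The 0.5 GB image size and the specific resource numbers are invented for illustration.

```python
# Hedged sketch of a short-lived-VM placement directive.
SHORT_LIVED_DIRECTIVE = {
    "class": "short-lived",
    # Identification: most short-lived VMs come from images near this size.
    "match": lambda vm: abs(vm.image_size_gb - 0.5) < 0.1,
    # Placement: the chosen host must provide these resources for the
    # VM's expected lifetime.
    "required": {"cpu": 2.0, "mem_gb": 1.0, "net_mbps": 50.0},
    "duration_s": 600,
}

def matching_directives(vm, directives):
    """Return the directives whose identification predicate matches the VM."""
    return [d for d in directives if d["match"](vm)]
```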
In practice, the behavior of short-lived VMs may be well defined and classified in one data center, e.g., because it is a large data center or because it has been in operation for a long time. Another data center, which may be new or small, may benefit from the placement directives derived from the VMs of the former data center.
As another example, some VMs may be classified as "bursty" VMs, i.e., VMs that consume little or no resources most of the time, except for short periods in which resource consumption spikes to a large value. If bursty VMs are common in one data center but rare in another, the information from the first data center can be used to specify how to identify and place a bursty VM. This placement directive can then be applied effectively in the second data center.
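A simple way to flag such behavior from a usage profile is to compare the peak sample to the typical sample; the 10x spike ratio below is an assumed threshold, not taken from the text.

```python
# Hedged sketch of a "bursty" classifier over a resource-usage profile.
import statistics

def is_bursty(samples, spike_ratio=10.0):
    """True if usage is near-idle most of the time but briefly spikes
    to a much larger value."""
    if len(samples) < 2:
        return False
    typical = statistics.median(samples)  # robust to the spike itself
    return max(samples) > spike_ratio * max(typical, 1e-9)
```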
As yet another example, based on analysis in a first data center, a pair of VMs having certain performance characteristics may be known to communicate extensively with one another. Using this information, global placement unit 52 may define a directive for placing such VMs on the same host. This directive may be applied by the local placement unit of a second data center, even though the second data center does not have sufficient statistics for deriving such a directive.
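Applying such an affinity directive could look like the sketch below, which reuses the `choose_host` helper from the earlier sketch; the `placements` map from VM ID to host is an assumed bookkeeping structure.

```python
# Hedged sketch of applying a co-location (affinity) directive.
def place_with_affinity(vm_id, peer_id, placements, hosts, demand):
    """Prefer the host already running the communicating peer, so the
    traffic between the two VMs stays host-local; otherwise fall back
    to the general placement policy."""
    peer_host = placements.get(peer_id)
    if peer_host is not None and peer_host.free_mem_gb >= demand.mem_gb:
        placements[vm_id] = peer_host
        return peer_host
    host = choose_host(hosts, demand)  # defined in the earlier sketch
    placements[vm_id] = host
    return host
```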
The placement directives described above are depicted purely by way of example. In alternative embodiments, system 20 may define and apply any other suitable placement directives based on any other suitable VM performance characteristics. In some embodiments, local placement units 40 also report the available resources of the various hosts to global placement unit 52. The global placement unit may consider the reported resources in deriving the placement directives.
Fig. 2 is a flow chart that schematically illustrates a method for VM placement, in accordance with an embodiment of the present invention. The method begins with local placement units 40 collecting VM performance characteristics, e.g., usage patterns, at a collection step 70. Each local placement unit collects the information over the VMs in its respective data center. Local placement units 40 forward the collected information to global placement unit 52, at a forwarding step 74.
Global placement unit 52 derives one or more VM placement directives from the information collected across the multiple data centers, at a directive derivation step 78. For example, as explained above, the directives may specify how to identify that a VM belongs to a given class, and how to place VMs of that class.
Global placement unit 52 distributes the placement directives to local placement units 40 in the various data centers, at a directive distribution step 82. In each data center, local placement unit 40 assigns VMs to hosts based on the directives, at a placement step 86. The process of Fig. 2 is typically on-going, i.e., repeated and updated over time.
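Putting the steps of Fig. 2 together, the repeating cycle can be summarized by the sketch below; the unit interfaces (`collect_characteristics`, `derive_directives`, and so on) are assumed names, not APIs from the disclosure.

```python
# Hedged sketch of the on-going collect/derive/distribute/place cycle.
def placement_cycle(local_units, global_unit):
    # Steps 70 and 74: each local unit collects VM performance
    # characteristics and forwards them to the global unit.
    reports = [unit.collect_characteristics() for unit in local_units]
    # Step 78: the global unit derives placement directives from the
    # information pooled across all data centers.
    directives = global_unit.derive_directives(reports)
    # Steps 82 and 86: directives are distributed back, and each local
    # unit assigns VMs to hosts accordingly.
    for unit in local_units:
        unit.apply_directives(directives)
        unit.place_pending_vms()
```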
Although the embodiments described herein mainly address placement of VMs or other workloads, the methods and systems described herein can also be used in other applications. For example, a local placement unit 40 in a given data center may use the workload performance characteristics collected in another data center for assigning physical host resources (e.g., CPU, memory or network resources) to workloads.
It will thus be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and sub-combinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art. Documents incorporated by reference in the present patent application are to be considered an integral part of the application except that to the extent any terms are defined in these incorporated documents in a manner that conflicts with the definitions made explicitly or implicitly in the present specification, only the definitions in the present specification should be considered.
Claims
1. A method, comprising:
collecting performance characteristics of first workloads that run on first hosts in a first computer network;
deriving from the performance characteristics of the first workloads one or more placement directives for assigning workloads to hosts; and
assigning second workloads to second hosts in a second computing system that is separate from the first computing system, in accordance with the placement directives.
2. The method according to claim 1, wherein the first and second computer networks comprise virtualized data centers, and wherein the first and second workloads comprise Virtual Machines (VMs).
3. The method according to claim 1, wherein deriving the placement directives comprises classifying the first workloads into classes depending on the performance characteristics, and specifying the placement directives in terms of the classes.
4. The method according to any of claims 1-3, wherein assigning the second workloads to the second hosts comprises predicting a resource usage pattern of a second workload based on a placement directive derived from the first workloads, and assigning the second workload to a second host based on the predicted resource usage pattern.
5. The method according to any of claims 1-3, wherein collection of the performance characteristics and application of the placement directives are performed by local placement units in the first and second computer systems, and wherein derivation of the placement directives is performed by a global placement unit external to the first and second computer networks.
6. The method according to any of claims 1-3, wherein collecting the performance characteristics comprises collecting temporal resource usage patterns of the first workloads.
7. The method according to any of claims 1-3, wherein collecting the performance characteristics comprises collecting communication interaction between two or more of the first workloads.
8. The method according to any of claims 1-3, and comprising estimating available resources of the first hosts, wherein deriving the placement directives comprises specifying the placement directives based on the estimated available resources.
9. The method according to any of claims 1-3, wherein collecting the performance characteristics comprises gathering the performance characteristics over the first and second workloads in both the first computer network and the second computer network, and wherein deriving the placement directives comprises specifying the placement directives based on the performance characteristics gathered over the first and second computer networks.
10. The method according to any of claims 1-3, further comprising assigning one or more physical resources of one or more of the second hosts in the second computing system to one or more of the second workloads, based on the collected performance characteristics.
11. A system, comprising:
a first local placement unit, which is configured to collect performance characteristics of first workloads that run on first hosts in a first computer network;
a global placement unit, which is configured to derive from the performance characteristics of the first workloads one or more placement directives for assigning workloads to hosts; and
a second local placement unit, which is configured to assign second workloads to second hosts in a second computing system that is separate from the first computing system, in accordance with the placement directives.
12. The system according to claim 11, wherein the first and second computer networks comprise virtualized data centers, and wherein the first and second workloads comprise Virtual Machines (VMs).
13. The system according to claim 11, wherein the global placement unit is configured to classify the first workloads into classes depending on the performance characteristics, and to derive the placement directives in terms of the classes.
14. The system according to any of claims 11-13, wherein the second local placement unit is configured to predict a resource usage pattern of a second workload based on a placement directive derived from the first workloads, and to assign the second workload to a second host based on the predicted resource usage pattern.
15. The system according to any of claims 11-13, wherein the first local placement unit is configured to collect temporal resource usage patterns of the first workloads.
16. The system according to any of claims 11-13, wherein the first local placement unit is configured to collect communication interaction between two or more of the first workloads.
17. The system according to any of claims 11-13, wherein the first local placement unit is configured to estimate available resources of the first hosts, and wherein the global placement unit is configured to specify the placement directives based on the estimated available resources.
18. The system according to any of claims 11-13, wherein the first and second local placement units are configured to gather the performance characteristics over the first and second workloads in both the first computer network and the second computer network, and wherein the global placement unit is configured to derive the placement directives based on the performance characteristics gathered over the first and second computer networks.
19. The system according to any of claims 11-13, wherein the second local placement unit is configured to assign one or more physical resources of one or more of the second hosts in the second computing system to one or more of the second workloads, based on the collected performance characteristics.
20. Apparatus, comprising:
an interface, which is configured to communicate with first and second separate computer networks; and
a processor, which is configured to receive via the interface performance characteristics of first workloads that run on first hosts in the first computer network, to derive from the performance characteristics of the first workloads one or more placement directives for assigning workloads to hosts, and to send the directives via the interface to the second computing system, for use in assigning second workloads to second hosts in the second computing system.
21. The apparatus according to claim 20, wherein the first and second computer networks comprise virtualized data centers, and wherein the first and second workloads comprise Virtual Machines (VMs).
22. The apparatus according to claim 20 or 21, wherein the processor is configured to classify the first workloads into classes depending on the performance characteristics, and to derive the placement directives in terms of the classes.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201580017023.6A CN106133715A (en) | 2014-04-03 | 2015-03-25 | Virtual machine based on the information from multiple data centers is placed |
EP15773379.1A EP3126996A4 (en) | 2014-04-03 | 2015-03-25 | Virtual-machine placement based on information from multiple data centers |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201461974479P | 2014-04-03 | 2014-04-03 | |
US61/974,479 | 2014-04-03 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015150977A1 (en) | 2015-10-08 |
Family
ID=54209828
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/IB2015/052178 WO2015150977A1 (en) | 2014-04-03 | 2015-03-25 | Virtual-machine placement based on information from multiple data centers |
Country Status (4)
Country | Link |
---|---|
US (1) | US20150286493A1 (en) |
EP (1) | EP3126996A4 (en) |
CN (1) | CN106133715A (en) |
WO (1) | WO2015150977A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9524328B2 (en) | 2014-12-28 | 2016-12-20 | Strato Scale Ltd. | Recovery synchronization in a distributed storage system |
US10505862B1 (en) * | 2015-02-18 | 2019-12-10 | Amazon Technologies, Inc. | Optimizing for infrastructure diversity constraints in resource placement |
US10649417B2 (en) | 2017-05-31 | 2020-05-12 | Microsoft Technology Licensing, Llc | Controlling tenant services based on tenant usage performance indicators |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8341626B1 (en) * | 2007-11-30 | 2012-12-25 | Hewlett-Packard Development Company, L. P. | Migration of a virtual machine in response to regional environment effects |
US8478878B2 (en) * | 2010-03-11 | 2013-07-02 | International Business Machines Corporation | Placement of virtual machines based on server cost and network cost |
US8806015B2 (en) * | 2011-05-04 | 2014-08-12 | International Business Machines Corporation | Workload-aware placement in private heterogeneous clouds |
US9317336B2 (en) * | 2011-07-27 | 2016-04-19 | Alcatel Lucent | Method and apparatus for assignment of virtual resources within a cloud environment |
US8910173B2 (en) * | 2011-11-18 | 2014-12-09 | Empire Technology Development Llc | Datacenter resource allocation |
- 2015-03-25 EP EP15773379.1A patent/EP3126996A4/en not_active Withdrawn
- 2015-03-25 CN CN201580017023.6A patent/CN106133715A/en active Pending
- 2015-03-25 WO PCT/IB2015/052178 patent/WO2015150977A1/en active Application Filing
- 2015-04-01 US US14/675,844 patent/US20150286493A1/en not_active Abandoned
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6571288B1 (en) * | 1999-04-26 | 2003-05-27 | Hewlett-Packard Company | Apparatus and method that empirically measures capacity of multiple servers and forwards relative weights to load balancer |
US20070250838A1 (en) * | 2006-04-24 | 2007-10-25 | Belady Christian L | Computer workload redistribution |
US20130086235A1 (en) * | 2011-09-30 | 2013-04-04 | James Michael Ferris | Systems and methods for generating cloud deployment targets based on predictive workload estimation |
Non-Patent Citations (1)
Title |
---|
See also references of EP3126996A4 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017172276A1 (en) * | 2016-04-01 | 2017-10-05 | Intel Corporation | Workload behavior modeling and prediction for data center adaptation |
Also Published As
Publication number | Publication date |
---|---|
CN106133715A (en) | 2016-11-16 |
EP3126996A4 (en) | 2017-12-27 |
US20150286493A1 (en) | 2015-10-08 |
EP3126996A1 (en) | 2017-02-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10797973B2 (en) | Server-client determination | |
US9959146B2 (en) | Computing resources workload scheduling | |
CN106776005B (en) | Resource management system and method for containerized application | |
CN110865867B (en) | Method, device and system for discovering application topological relation | |
WO2018177042A1 (en) | Method and device for realizing resource scheduling | |
US9952891B2 (en) | Anomalous usage of resources by a process in a software defined data center | |
EP2742426B1 (en) | Network-aware coordination of virtual machine migrations in enterprise data centers and clouds | |
US9471394B2 (en) | Feedback system for optimizing the allocation of resources in a data center | |
JP2017507572A5 (en) | ||
US10841173B2 (en) | System and method for determining resources utilization in a virtual network | |
US9836298B2 (en) | Deployment rule system | |
US20180024866A1 (en) | System, virtualization control apparatus, method for controlling a virtualization control apparatus, and program | |
US10938688B2 (en) | Network costs for hyper-converged infrastructures | |
WO2016045489A1 (en) | System and method for load estimation of virtual machines in a cloud environment and serving node | |
US20150286493A1 (en) | Virtual-machine placement based on information from multiple data centers | |
US10305974B2 (en) | Ranking system | |
Rupprecht et al. | SquirrelJoin: Network-aware distributed join processing with lazy partitioning | |
CN104580194A (en) | Virtual resource management method and device oriented to video applications | |
EP3111595A1 (en) | Technologies for cloud data center analytics | |
CN105045667B (en) | A kind of resource pool management method for virtual machine vCPU scheduling | |
US9367351B1 (en) | Profiling input/output behavioral characteristics in distributed infrastructure | |
EP2940600A1 (en) | Data scanning method and device | |
CN109800052B (en) | Anomaly detection and positioning method and device applied to distributed container cloud platform | |
Ajayi et al. | Multi-Class load balancing scheme for QoS and energy conservation in cloud computing | |
EP3126972A1 (en) | Register-type-aware scheduling of virtual central processing units |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15773379; Country of ref document: EP; Kind code of ref document: A1 |
REEP | Request for entry into the european phase | Ref document number: 2015773379; Country of ref document: EP |
WWE | Wipo information: entry into national phase | Ref document number: 2015773379; Country of ref document: EP |
NENP | Non-entry into the national phase | Ref country code: DE |