EP2304565B1 - Method and system for power management in a virtual machine environment without disrupting network connectivity - Google Patents


Info

Publication number
EP2304565B1
EP2304565B1 (application EP09774210.0A)
Authority
EP
European Patent Office
Prior art keywords
blade
migration
vnic
virtual
chassis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP09774210.0A
Other languages
German (de)
French (fr)
Other versions
EP2304565A1 (en)
Inventor
Sunay Tripathi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oracle America Inc
Original Assignee
Oracle America Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oracle America Inc filed Critical Oracle America Inc
Publication of EP2304565A1 publication Critical patent/EP2304565A1/en
Application granted granted Critical
Publication of EP2304565B1 publication Critical patent/EP2304565B1/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061: Partitioning or combining of resources
    • G06F 9/5077: Logical partitioning of resources; Management or configuration of virtualized resources
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4812: Task transfer initiation or dispatching by interrupt, e.g. masked
    • G06F 9/4818: Priority circuits therefor
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • a network is an arrangement of physical computer systems configured to communicate with each other.
  • the physical computer systems include virtual machines, which may also be configured to interact with the network (i.e ., communicate with other physical computers and/or virtual machines in the network).
  • a network may be broadly categorized as wired (using a tangible connection medium such as Ethernet cables) or wireless (using an intangible connection medium such as radio waves). Different connection methods may also be combined in a single network.
  • a wired network may be extended to allow devices to connect to the network wirelessly.
  • core network components such as routers, switches, and servers are generally connected using physical wires.
  • Ethernet is defined within the Institute of Electrical and Electronics Engineers (IEEE) 802.3 standards, which are supervised by the IEEE 802.3 Working Group.
  • WO 2005109195 discloses a system that includes a number of server computing devices and a management server computing device.
  • Each server computing device has a virtual host computer program running thereon to support one or more virtual machine computer programs.
  • Each virtual machine computer program is able to execute an instance of an operating system on which application computer programs are executable.
  • the management server computing device monitors the server computing devices, and causes the virtual machine computer programs supported by the virtual host computer program of a first server computing device to dynamically migrate to the virtual host computer program of a second server computing device, upon one or more conditions being satisfied.
  • the conditions may include the first server being predicted as failure prone, the first server consuming power less than a threshold, and the first server having resource utilization less than a threshold.
  • a method for power management comprises: gathering resource usage data for a first blade and a second blade on a blade chassis; migrating each virtual machine "VM" executing on the first blade to the second blade based on the resource usage data and a first migration policy, wherein the first migration policy defines when to condense the number of blades operating on the blade chassis; and powering down the first blade after each VM executing on the first blade is migrated from the first blade, wherein at least one VM executing on the first blade is connected to a VM on the second blade using a virtual wire, wherein the connectivity provided by the virtual wire is maintained during the migration of the at least one VM to the second blade, and wherein the virtual wire is implemented by a virtual switching table.
  • embodiments of the invention provide a method and system for migrating virtual machines located on one blade in a blade chassis to another blade in the blade chassis to perform power management.
  • the blade is powered down.
  • the total power consumption of the system may be reduced.
  • embodiments of the invention provide a mechanism for powering up blades when additional resources are required. Specifically, in one or more embodiments of the invention, the performance of each virtual machine is monitored to ensure that each virtual machine is executing according to the performance standards. If the execution of a virtual machine is not adhering to the performance standards because of a lack of resources, then a blade is powered up to provide additional resources and virtual machines are migrated to the powered-up blade.
  • the powering up and powering down of blades may be time based in accordance with one or more embodiments of the invention. Specifically, at a certain time, a blade may be selected to power down. Virtual machines may be migrated from the selected blade before powering down the blade. Conversely, the blade may be selected to be powered up. Specifically, the blade is powered up and virtual machines are migrated to the blade in accordance with one or more embodiments of the invention.
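The resource-based and time-based policies above can be sketched as a simple planning function. This is a minimal illustration, not the patent's method: the function name, the utilization model, and the thresholds are all assumptions introduced here.

```python
# Illustrative sketch of a blade power-management policy: drain and power
# down an under-utilized blade when its load fits elsewhere, and power up
# an offline blade when every running blade is near capacity.
# All names and thresholds are hypothetical; the patent defines no API.

def plan_power_actions(blades, low_util=0.2, high_util=0.8):
    """blades: dict of blade name -> utilization in [0.0, 1.0],
    or None if the blade is currently powered off."""
    actions = []
    powered = {b: u for b, u in blades.items() if u is not None}
    # Condense: if the least-loaded blade's VMs fit in the spare capacity
    # of the remaining blades, migrate them away and power the blade down.
    if len(powered) > 1:
        donor = min(powered, key=powered.get)
        spare = sum(high_util - u for b, u in powered.items() if b != donor)
        if powered[donor] < low_util and spare >= powered[donor]:
            actions.append(("migrate_all_vms_from", donor))
            actions.append(("power_down", donor))
    # Expand: if every powered blade exceeds the high-water mark,
    # bring an offline blade back online to host migrated VMs.
    if powered and all(u > high_util for u in powered.values()):
        off = [b for b, u in blades.items() if u is None]
        if off:
            actions.append(("power_up", off[0]))
    return actions
```

For example, `plan_power_actions({"blade_a": 0.1, "blade_b": 0.5, "blade_c": None})` would drain and power down `blade_a`, reducing the chassis's total power draw as described above.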
  • FIG. 1 shows a diagram of a blade chassis (100) in accordance with one or more embodiments of the invention.
  • the blade chassis (100) includes multiple blades (e.g., blade A (102), blade B (104)) communicatively coupled with a chassis interconnect (106).
  • the blade chassis (100) may be a Sun Blade 6048 Chassis by Sun Microsystems Inc., an IBM BladeCenter® chassis, an HP BladeSystem enclosure by Hewlett Packard Inc., or any other type of blade chassis.
  • the blades may be of any type(s) compatible with the blade chassis (100).
  • BladeCenter® is a registered trademark of International Business Machines, Inc. (IBM), headquartered in Armonk, New York.
  • the blades are configured to communicate with each other via the chassis interconnect (106).
  • the blade chassis (100) allows for communication between the blades without requiring traditional network wires (such as Ethernet cables) between the blades.
  • the chassis interconnect (106) may be a Peripheral Component Interface Express (PCI-E) backplane, and the blades may be configured to communicate with each other via PCI-E endpoints.
  • the blades are configured to share a physical network interface (110).
  • the physical network interface (110) includes one or more network ports (for example, Ethernet ports), and provides an interface between the blade chassis (100) and the network (i.e., interconnected computer systems external to the blade chassis (100)) to which the blade chassis (100) is connected.
  • the blade chassis (100) may be connected to multiple networks, for example using multiple network ports.
  • the physical network interface (110) is managed by a network express manager (108).
  • the network express manager (108) is configured to manage access by the blades to the physical network interface (110).
  • the network express manager (108) may also be configured to manage internal communications between the blades themselves, in a manner discussed in detail below.
  • the network express manager (108) may be any combination of hardware, software, and/or firmware including executable logic for managing network traffic.
  • Blade is a term of art referring to a computer system located within a blade chassis (for example, the blade chassis (100) of Figure 1 ).
  • Blades typically include fewer components than stand-alone computer systems or conventional servers. In one embodiment of the invention, fully featured stand-alone computer systems or conventional servers may also be used instead of or in combination with the blades.
  • blades in a blade chassis each include one or more processors and associated memory.
  • Blades may also include storage devices (for example, hard drives and/or optical drives) and numerous other elements and functionalities typical of today's computer systems (not shown), such as a keyboard, a mouse, and/or output means such as a monitor.
  • One or more of the aforementioned components may be shared by multiple blades located in the blade chassis. For example, multiple blades may share a single output device.
  • the blade (200) includes a host operating system (not shown) configured to execute one or more virtual machines (e.g., virtual machine C (202), virtual machine D (204)).
  • the virtual machines are distinct operating environments configured to inherit underlying functionality of the host operating system via an abstraction layer.
  • each virtual machine includes a separate instance of an operating system (e.g. , operating system instance C (206), operating system instance D (208)).
  • the Xen® virtualization project allows for multiple guest operating systems executing in a host operating system.
  • Xen® is a trademark overseen by the Xen Project Advisory Board.
  • the host operating system supports virtual execution environments (not shown).
  • an example of a virtual execution environment is a Solaris™ Container.
  • the Solaris™ Container may execute in the host operating system, which may be a Solaris™ operating system.
  • Solaris™ is a trademark of Sun Microsystems, Inc.
  • the host operating system may include both virtual machines and virtual execution environments.
  • virtual machines may include many different types of functionality, such as a switch, a router, a firewall, a load balancer, an application server, any other type of network-enabled service, or any combination thereof.
  • the virtual machines and virtual execution environments inherit network connectivity from the host operating system via VNICs (e.g., VNIC C (210), VNIC D (212)). To the virtual machines and the virtual execution environments, the VNICs appear as physical NICs.
  • the use of VNICs allows an arbitrary number of virtual machines and virtual execution environments to share the blade's (200) networking functionality.
  • each virtual machine or virtual execution environment may be associated with an arbitrary number of VNICs, thereby providing increased flexibility in the types of networking functionality available to the virtual machines and/or virtual execution environments.
  • a virtual machine may use one VNIC for incoming network traffic, and another VNIC for outgoing network traffic.
  • VNICs in accordance with one or more embodiments of the invention are described in detail in commonly owned U.S. Patent Application Serial No. 11/489,942 , entitled “Multiple Virtual Network Stack Instances using Virtual Network Interface Cards,” in the names of Nicolas G. Droux, Erik Nordmark, and Sunay Tripathi, US 2008-0019359 .
  • VNICs in accordance with one or more embodiments of the invention also are described in detail in commonly owned U.S. Patent Application Serial No. 11/480,000 , entitled “Method and System for Controlling Virtual Machine Bandwidth" in the names of Sunay Tripathi, Tim P. Marsland, and Nicolas G. Droux, US 2008-0002704 .
  • one of the blades in the blade chassis includes a control operating system executing in a virtual machine (also referred to as the control virtual machine).
  • the control operating system is configured to manage the creation and maintenance of the virtual wires and/or virtual network paths (discussed below).
  • the control operating system also includes functionality to migrate virtual machines between blades in the blade chassis (discussed below).
  • each blade's networking functionality includes access to a shared physical network interface and communication with other blades via the chassis interconnect.
  • Figure 3 shows a diagram of a network express manager (300) in accordance with one or more embodiments of the invention.
  • the network express manager (300) is configured to route network traffic traveling to and from VNICs located in the blades.
  • the network express manager (300) includes a virtual switching table (302), which includes a mapping of VNIC identifiers (304) to VNIC locations (306) in the chassis interconnect.
  • the VNIC identifiers (304) are Internet Protocol (IP) addresses, and the VNIC locations (306) are PCI-E endpoints associated with the blades ( e.g., if the chassis interconnect is a PCI-E backplane).
  • the VNIC identifiers (304) may be media access control (MAC) addresses. Alternatively, another routing scheme may be used.
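The virtual switching table described above is, at its core, a mapping from VNIC identifiers to VNIC locations. A minimal sketch, with illustrative class and method names not taken from the patent:

```python
# Sketch of a virtual switching table: VNIC identifiers (e.g. MAC or IP
# addresses) map to VNIC locations (e.g. PCI-E endpoints on the chassis
# interconnect). Names are assumptions for illustration only.

class VirtualSwitchingTable:
    def __init__(self):
        self._table = {}  # VNIC identifier -> VNIC location

    def populate(self, vnic_id, location):
        """Associate a VNIC identifier with its physical location."""
        self._table[vnic_id] = location

    def lookup(self, vnic_id):
        """Return the VNIC's location, or None if unknown
        (in which case a packet addressed to it would be dropped)."""
        return self._table.get(vnic_id)

    def update_location(self, vnic_id, new_location):
        """Called after a VM migration, so the virtual network
        topology is preserved across the move."""
        self._table[vnic_id] = new_location
```

The `update_location` call reflects the point made below: migrating a VM only requires rewriting the table entry, since other VNICs never reference physical locations directly.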
  • the network express manager (300) is configured to receive network traffic via the physical network interface and route the network traffic to the appropriate location (i.e., where the VNIC is located) using the virtual switching table (302).
  • the packet is stored in the appropriate receive buffer (308) or transmit buffer (310).
  • each VNIC listed in the virtual switching table (302) is associated with a receive buffer (308) and a transmit buffer (310).
  • the receive buffer (308) is configured to temporarily store packets destined for a given VNIC prior to the VNIC receiving (via a polling or interrupt mechanism) the packets.
  • the transmit buffer (310) is configured to temporarily store packets received from the VNIC prior to sending the packets toward their destinations.
  • the receive buffer (308) enables the VNICs to implement bandwidth control. More specifically, when the VNIC is implementing bandwidth control, packets remain in the receive buffer (308) until the VNIC (or an associated process) requests packets from the receive buffer (308). As such, if the rate at which packets are received is greater than the rate at which packets are requested by the VNIC (or an associated process), then packets may be dropped from the receive buffer once the receive buffer is full. Those skilled in the art will appreciate that the rate at which packets are dropped from the receive buffer is determined by the size of the receive buffer.
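The receive-buffer behavior just described can be sketched as a bounded queue: packets accumulate until the VNIC polls for them, and arrivals are dropped once the buffer is full. All names here are illustrative assumptions.

```python
# Sketch of a per-VNIC receive buffer implementing bandwidth control:
# packets queue until requested, and are dropped once capacity is reached.
# Class and method names are hypothetical, not from the patent.

from collections import deque

class ReceiveBuffer:
    def __init__(self, capacity):
        self._buf = deque()
        self._capacity = capacity
        self.dropped = 0

    def enqueue(self, packet):
        """Called on packet arrival (e.g. by the network express manager)."""
        if len(self._buf) >= self._capacity:
            self.dropped += 1   # buffer full: drop the packet
            return False
        self._buf.append(packet)
        return True

    def request(self, max_packets):
        """Called by the VNIC (via polling) to pull queued packets."""
        out = []
        while self._buf and len(out) < max_packets:
            out.append(self._buf.popleft())
        return out
```

A smaller capacity means packets are dropped sooner when the VNIC's request rate falls behind the arrival rate, which is exactly the bandwidth-limiting effect described above.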
  • the network express manager (300) may be configured to route network traffic between different VNICs located in the blade chassis.
  • using the virtual switching table (302) in this manner facilitates the creation of a virtual network path, which includes virtual wires (discussed below).
  • virtual machines located in different blades may be interconnected to form an arbitrary virtual network topology, where the VNICs associated with each virtual machine do not need to know the physical locations of other VNICs.
  • if a virtual machine is migrated to a different blade, the virtual network topology may be preserved by updating the virtual switching table (302) to reflect the corresponding VNIC's new physical location (for example, a different PCI-E endpoint).
  • network traffic from one VNIC may be destined for a VNIC located in the same blade, but associated with a different virtual machine.
  • a virtual switch may be used to route the network traffic between the VNICs independent of the blade chassis.
  • Virtual switches in accordance with one or more embodiments of the invention are discussed in detail in commonly owned U.S. Patent Application Serial No. 11/480,261 , entitled “Virtual Switch,” in the names of Nicolas G. Droux, Sunay Tripathi, and Erik Nordmark, US 2008-0002683 .
  • FIG. 4 shows a diagram of a virtual switch (400) in accordance with one or more embodiments of the invention.
  • the virtual switch (400) provides connectivity between VNIC X (406) associated with virtual machine X (402) and VNIC Y (408) associated with virtual machine Y (404).
  • the virtual switch (400) is managed by a host (410) within which virtual machine X (402) and virtual machine Y (404) are located.
  • the host (410) may be configured to identify network traffic targeted at a VNIC in the same blade, and route the traffic to the VNIC using the virtual switch (400).
  • the virtual switch (400) may reduce utilization of the blade chassis and the network express manager by avoiding unnecessary round-trip network traffic.
  • Figure 5 shows a flowchart of a method for creating a virtual network path in accordance with one or more embodiments of the invention.
  • one or more of the steps shown in Figure 5 may be omitted, repeated, and/or performed in a different order. Accordingly, embodiments of the invention should not be considered limited to the specific arrangement of steps shown in Figure 5 .
  • VNICs are instantiated for multiple virtual machines.
  • the virtual machines are located in blades, as discussed above. Further, the virtual machines may each be associated with one or more VNICs.
  • instantiating a VNIC involves loading a VNIC object in memory and registering the VNIC object with a host, i.e., an operating system that is hosting the virtual machine associated with the VNIC. Registering the VNIC object establishes an interface between the host's networking functionality and the abstraction layer provided by the VNIC. Thereafter, when the host receives network traffic addressed to the VNIC, the host forwards the network traffic to the VNIC.
  • a host i.e., an operating system that is hosting the virtual machine associated with the VNIC.
  • Registering the VNIC object establishes an interface between the host's networking functionality and the abstraction layer provided by the VNIC.
  • the host receives network traffic addressed to the VNIC, the host forwards the network traffic to the VNIC.
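The instantiation and registration steps above can be sketched as follows. The `VNIC`/`Host` classes and their methods are assumptions introduced for illustration; the patent describes the behavior, not an API.

```python
# Sketch of VNIC instantiation: load a VNIC object and register it with
# the host, after which the host forwards traffic addressed to the VNIC.
# All names are hypothetical.

class VNIC:
    def __init__(self, identifier):
        self.identifier = identifier
        self.received = []

    def deliver(self, packet):
        self.received.append(packet)

class Host:
    """Operating system hosting the virtual machine associated with a VNIC."""
    def __init__(self):
        self._vnics = {}

    def register_vnic(self, vnic):
        # Registration establishes the interface between the host's
        # networking functionality and the VNIC abstraction layer.
        self._vnics[vnic.identifier] = vnic

    def receive(self, dest_id, packet):
        # The host forwards network traffic addressed to a registered VNIC.
        vnic = self._vnics.get(dest_id)
        if vnic is not None:
            vnic.deliver(packet)
            return True
        return False
```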
  • Instantiation of VNICs in accordance with one or more embodiments of the invention
  • a single blade may include multiple virtual machines configured to communicate with each other.
  • a virtual switch is instantiated to facilitate communication between the virtual machines.
  • the virtual switch allows communication between VNICs independent of the chassis interconnect. Instantiation of virtual switches in accordance with one or more embodiments of the invention is discussed in detail in U.S. Patent Application 11/480,261 .
  • a virtual switching table is populated.
  • the virtual switching table may be located in a network express manager configured to manage network traffic flowing to and from the virtual machines. Populating the virtual switching table involves associating VNIC identifiers (for example, IP addresses) with VNIC locations (for example, PCI-E endpoints).
  • the virtual switching table is populated in response to a user command issued via a control operating system, i.e., an operating system that includes functionality to control the network express manager.
  • VNICs include settings for controlling the processing of network packets.
  • settings are assigned to the VNICs according to a networking policy.
  • Many different types of networking policies may be enforced using settings in the VNICs.
  • a setting may be used to provision a particular portion of a blade's available bandwidth to one or more VNICs.
  • a setting may be used to restrict use of a VNIC to a particular type of network traffic, such as Voice over IP (VoIP) or Transmission Control Protocol/IP (TCP/IP).
  • VNICs in a virtual network path may be capped at the same bandwidth limit, thereby allowing for consistent data flow across the virtual network path.
  • a network express manager is configured to transmit the desired settings to the VNICs.
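Assigning settings according to a networking policy, such as capping every VNIC in a virtual network path at the same bandwidth limit, can be sketched as below. The function and key names are illustrative assumptions.

```python
# Sketch of applying a networking policy: cap each VNIC in a virtual
# network path at the same bandwidth limit, so data flow is consistent
# across the whole path. Names are hypothetical, not from the patent.

def apply_path_bandwidth(vnic_settings, path_vnics, limit_mbps):
    """vnic_settings: dict of VNIC id -> settings dict (updated in place).
    path_vnics: VNIC ids making up one virtual network path."""
    for vnic_id in path_vnics:
        settings = vnic_settings.setdefault(vnic_id, {})
        settings["bandwidth_mbps"] = limit_mbps
    return vnic_settings
```

In the described system, a network express manager would then transmit these settings to the VNICs themselves.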
  • network traffic may be transmitted from a VNIC in one blade to a VNIC in another blade.
  • the connection between the two VNICs may be thought of as a "virtual wire," because the arrangement obviates the need for traditional network wires such as Ethernet cables.
  • a virtual wire functions similar to a physical wire in the sense that network traffic passing through one virtual wire is isolated from network traffic passing through another virtual wire, even though the network traffic may pass through the same blade ( i.e ., using the same virtual machine or different virtual machines located in the blade).
  • each virtual wire may be associated with a priority (discussed below in Figures 11A-11C ).
  • each virtual wire may be associated with a security setting, which defines packet security (e.g., encryption, etc.) for packets transmitted over the virtual wire.
  • the bandwidth, priority and security settings are defined on a per-wire basis. Further, the aforementioned settings are the same for VNICs on either end of the virtual wire.
  • a combination of two or more virtual wires may be thought of as a "virtual network path.”
  • the bandwidth, priority and security settings for all virtual wires in the virtual network path are the same. Further, the aforementioned settings are the same for VNICs on either end of the virtual wires, which make up the virtual network path.
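The invariant just stated, that every virtual wire in a virtual network path carries identical bandwidth, priority, and security settings, can be checked with a short predicate. The representation of a wire as a dict is an assumption for illustration.

```python
# Sketch of the consistency property described above: all virtual wires
# in a virtual network path share the same bandwidth, priority, and
# security settings. The wire representation is hypothetical.

def path_settings_consistent(wires):
    """wires: list of dicts with 'bandwidth', 'priority', 'security' keys."""
    if not wires:
        return True
    first = (wires[0]["bandwidth"], wires[0]["priority"], wires[0]["security"])
    return all((w["bandwidth"], w["priority"], w["security"]) == first
               for w in wires)
```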
  • network traffic may be transmitted over the virtual network path through, for example, a first virtual wire (Step 510) and then through a second virtual wire (Step 512).
  • a first virtual wire may be located between the physical network interface and a VNIC
  • a second virtual wire may be located between the VNIC and another VNIC.
  • at least Steps 502-508 are performed and/or managed by the control operating system.
  • Figures 6A-6C show an example of creating virtual network paths in accordance with one or more embodiments of the invention. Specifically, Figure 6A shows a diagram of an actual topology (600) in accordance with one or more embodiments of the invention, Figure 6B shows how network traffic may be routed through the actual topology (600), and Figure 6C shows a virtual network topology (640) created by routing network traffic as shown in Figure 6B .
  • Figures 6A-6C are provided as examples only, and should not be construed as limiting the scope of the invention.
  • the actual topology (600) includes multiple virtual machines. Specifically, the actual topology (600) includes a router (602), a firewall (604), application server M (606), and application server N (608), each executing in a separate virtual machine.
  • the virtual machines are located in blades communicatively coupled with a chassis interconnect (622), and include networking functionality provided by the blades via VNICs ( i.e., VNIC H (610), VNIC J (612), VNIC K (614), VNIC L (616), VNIC M (618), and VNIC N (620)).
  • As shown in Figure 6A , each virtual machine is communicatively coupled to all other virtual machines.
  • embodiments of the invention create virtual wires and/or virtual network paths to limit the connectivity of the virtual machines. For ease of illustration, the blades themselves are not shown in the diagram.
  • the router (602), the firewall (604), application server M (606), and application server N (608) are each located in separate blades.
  • a blade may include multiple virtual machines.
  • the router (602) and the firewall (604) may be located in a single blade.
  • each virtual machine may be associated with a different number of VNICs than the number of VNICs shown in Figure 6A .
  • a network express manager (624) is configured to manage network traffic flowing to and from the virtual machines. Further, the network express manager (624) is configured to manage access to a physical network interface (626) used to communicate with client O (628) and client P (630).
  • the virtual machines, VNICs, chassis interconnect (622), network express manager (624), and physical network interface (626) are all located within a blade chassis.
  • Client O (628) and client P (630) are located in one or more networks (not shown) to which the blade chassis is connected.
  • Figure 6B shows how network traffic may be routed through the actual topology (600) in accordance with one or more embodiments of the invention.
  • the routing is performed by the network express manager (624) using a virtual switching table (634).
  • FIG. 6B shows a virtual wire (632) located between application server M (606) and application server N (608).
  • application server M 606 transmits a network packet via VNIC M (618).
  • the network packet is addressed to VNIC N (620) associated with application server N (608).
  • the network express manager (624) receives the network packet via the chassis interconnect (622), inspects the network packet, and determines the target VNIC location using the virtual switching table (634). If the target VNIC location is not found in the virtual switching table (634), then the network packet may be dropped.
  • the target VNIC location is the blade in which VNIC N (620) is located.
  • the network express manager (624) routes the network packet to the target VNIC location, and application server N (608) receives the network packet via VNIC N (620), thereby completing the virtual wire (632).
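The routing step that completes the virtual wire can be sketched as a single table lookup: route the packet to the destination VNIC's location, or drop it when the destination is unknown. Names are illustrative only.

```python
# Sketch of the network express manager's routing decision: look up the
# destination VNIC in the virtual switching table and either forward the
# packet to that location or drop it. Names are hypothetical.

def route_packet(switching_table, packet):
    """switching_table: dict of VNIC id -> location (e.g. a blade).
    packet: (destination VNIC id, payload) tuple.
    Returns (location, payload), or None if the packet is dropped."""
    dest_vnic, payload = packet
    location = switching_table.get(dest_vnic)
    if location is None:
        return None          # destination not in the table: drop
    return (location, payload)
```

Running the same function with source and destination swapped models the wire's bidirectional use noted below.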
  • the virtual wire (632) may also be used to transmit network traffic in the opposite direction, i.e., from application server N (608) to application server M (606).
  • FIG. 6B shows virtual network path R (636), which flows from client O (628), through the router (602), through the firewall (604), and terminates at application server M (606).
  • the virtual network path R (636) includes the following virtual wires.
  • a virtual wire is located between the physical network interface (626) and VNIC H (610).
  • Another virtual wire is located between VNIC J (612) and VNIC K (614).
  • Yet another virtual wire is located between VNIC L (616) and VNIC M (618).
  • a virtual switch may be substituted for the virtual wire located between VNIC J (612) and VNIC K (614), thereby eliminating use of the chassis interconnect (622) from communications between the router (602) and the firewall (604).
  • FIG. 6B shows virtual network path S (638), which flows from client P (630), through the router (602), and terminates at application server N (608).
  • Virtual network path S (638) includes a virtual wire between the physical network interface (626) and VNIC H (610), and a virtual wire between VNIC J (612) and VNIC N (620).
  • the differences between virtual network path R (636) and virtual network path S (638) exemplify how multiple virtual network paths may be located in the same blade chassis.
  • VNIC settings are applied separately for each virtual network path. For example, different bandwidth limits may be used for virtual network path R (636) and virtual network path S (638).
  • the virtual network paths may be thought of as including many of the same features as traditional network paths (e.g., using Ethernet cables), even though traditional network wires are not used within the blade chassis.
  • traditional network wires may still be required outside the blade chassis, for example between the physical network interface (626) and client O (628) and/or client P (630).
  • FIG. 6C shows a diagram of the virtual network topology (640) that results from the use of the virtual network path R (636), virtual network path S (638), and virtual wire (632) shown in Figure 6B .
  • the virtual network topology (640) allows the various components of the network (i.e ., router (602), firewall (604), application server M (606), application server N (608), client O (628), and client P (630)) to interact in a manner similar to a traditional wired network.
  • communication between the components located within the blade chassis i.e., router (602), firewall (604), application server M (606), and application server N (608) is accomplished without the use of traditional network wires.
  • Embodiments of the invention allow for virtual network paths to be created using virtual wires, without the need for traditional network wires. Specifically, by placing virtual machines in blades coupled via a chassis interconnect, and routing network traffic using VNICs and a virtual switching table, the need for traditional network wires between the virtual machines is avoided. Thus, embodiments of the invention facilitate the creation and reconfiguration of virtual network topologies without the physical labor typically involved in creating a traditional wired network.
  • one or more virtual machines may be migrated from one blade to another blade in the blade chassis.
  • Migration may be necessitated by a number of factors. For example, a virtual machine may need to be migrated from one blade to another blade because the virtual machine requires additional resources, which are not available on the blade on which it is currently executing. Alternatively, a virtual machine may need to be migrated from one blade to another blade because the blade on which the virtual machine is currently executing is powering down, failing, and/or otherwise suspending operation. Alternatively, the migration may be triggered based on a migration policy. Migration policies are discussed below and in Figures 9A-11C .
  • At least the bandwidth constraint associated with the virtual machine is preserved across the migration, such that the bandwidth constraint is the same before and after the migration of the virtual machine.
  • the bandwidth associated with a given virtual machine is enforced by the VNIC associated with the virtual machine.
  • the host includes functionality to associate the VNIC with the virtual machine and set the bandwidth of the VNIC.
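As a sketch of this idea, the bandwidth constraint can be modeled as a property of the VNIC object itself, so that moving the VM (and its VNIC) between hosts leaves the constraint untouched. The `Vnic`/`Vm` classes and `migrate` helper below are illustrative assumptions, not the patent's implementation:

```python
from dataclasses import dataclass

# Illustrative sketch only: Vnic, Vm, and migrate are assumed names.

@dataclass
class Vnic:
    name: str
    bandwidth_gbps: float   # bandwidth constraint enforced by the VNIC

@dataclass
class Vm:
    name: str
    vnic: Vnic              # the host associates the VNIC with the VM

def migrate(vm: Vm, source: dict, target: dict) -> None:
    """Move a VM between hosts; the VNIC (and therefore its bandwidth
    constraint) travels with the VM, so the constraint is preserved."""
    del source[vm.name]
    target[vm.name] = vm

host_a: dict = {}
host_c: dict = {}
vm_a = Vm("vm-a", Vnic("vnic-b", bandwidth_gbps=5.0))
host_a[vm_a.name] = vm_a

before = vm_a.vnic.bandwidth_gbps
migrate(vm_a, host_a, host_c)
assert host_c["vm-a"].vnic.bandwidth_gbps == before   # same before and after
```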
  • Figures 7A-7B show flowcharts of a method for migrating a virtual machine in accordance with one or more embodiments of the invention.
  • one or more of the steps shown in Figures 7A-7B may be omitted, repeated, and/or performed in a different order. Accordingly, embodiments of the invention should not be considered limited to the specific arrangement of steps shown in Figures 7A-7B.
  • In Step 700, a virtual machine (VM) to migrate is identified.
  • the determination of whether to migrate a given VM may be based on any number of factors, some of which are discussed above.
  • In Step 702, migration criteria for the VM are obtained.
  • the migration criteria correspond to the bandwidth constraint of the VM (e.g., the minimum bandwidth and/or maximum bandwidth available to the VM), a hardware constraint (e.g., minimum amount of computing resources required by the VM), a software constraint (e.g., version of host operating system required by the VM), and/or any other constraint required by the VM.
  • the migration criteria may be obtained from the VM, the host on which the VM is executing, the control operating system, or any combination thereof.
  • In Step 704, the control operating system sends a request including the migration criteria to hosts executing on blades in the blade chassis.
  • the control operating system uses a multicast message to send the request.
  • In Step 706, the control operating system receives responses from the hosts.
  • the responses may include: (i) a response indicating that the host which sent the response is unable to satisfy the migration criteria or (ii) a response indicating that the host which sent the response is able to satisfy the migration criteria.
  • In Step 708, a determination is made, using the responses received in Step 706, about whether there are sufficient resources available to migrate the VM. If there are insufficient resources, the method proceeds to Figure 7B (described below). Alternatively, if there are sufficient resources, the method proceeds to Step 710. In Step 710, a target host is selected. The target host corresponds to the host to which the VM will be migrated. This selection is made by the control operating system based on the responses received in Step 706.
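Steps 704-710 can be sketched as a poll-and-select routine. The `Host` class, `can_satisfy`, and `select_target` names are assumptions, and a real control operating system would use a multicast message rather than direct method calls:

```python
# Hypothetical sketch of Steps 704-710: poll every host with the
# migration criteria, then select a target from the responses.

class Host:
    def __init__(self, name, free_cpus, free_bandwidth_gbps):
        self.name = name
        self.free_cpus = free_cpus
        self.free_bandwidth_gbps = free_bandwidth_gbps

    def can_satisfy(self, criteria):
        # Step 706: each host answers whether it meets the criteria.
        return (self.free_cpus >= criteria["cpus"]
                and self.free_bandwidth_gbps >= criteria["bandwidth_gbps"])

def select_target(hosts, criteria):
    # Steps 704/708/710: gather responses, then pick the first able host
    # (a real implementation might prefer the least-loaded one).
    responses = {h.name: h.can_satisfy(criteria) for h in hosts}
    for h in hosts:
        if responses[h.name]:
            return h
    return None  # insufficient resources: fall through to Figure 7B

hosts = [Host("host-b", free_cpus=1, free_bandwidth_gbps=2.0),
         Host("host-c", free_cpus=4, free_bandwidth_gbps=8.0)]
target = select_target(hosts, {"cpus": 2, "bandwidth_gbps": 5.0})
assert target.name == "host-c"
```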
  • In Step 712, execution of the VM is suspended.
  • suspending the VM may also include suspending execution of associated VNICs (discussed below).
  • In Step 714, state information required to migrate the VM is obtained.
  • the state information corresponds to information required to resume execution of the VM on the target host from the state of the VM prior to being suspended in Step 712.
  • In Step 716, the VNIC(s) to migrate with the VM are identified. Identifying the VNIC(s) corresponds to determining which VNIC(s) are associated with the VM. In one embodiment of the invention, a VNIC is associated with the VM if the VNIC is executing on the same host as the VM and the VM receives packets from and/or transmits packets to the VNIC. In Step 718, information required to migrate the VNIC(s) identified in Step 716 is obtained. In one embodiment of the invention, the information corresponds to information required to resume execution of the VNIC(s) on the target host from the state of the VNIC(s) prior to suspending the VM in Step 712.
  • In Step 720, the VM and VNIC(s) are migrated to the target host.
  • the VM and VNIC(s) are configured on the target host.
  • the VM and VNIC(s) are configured such that they operate in the same manner on the target host as they operated on the source host (i.e., the host from which they were migrated).
  • Configuring the VM and VNICs may also include configuring various portions of the target host.
  • the VM and VNIC(s) are configured using the information obtained in Steps 714 and 718.
  • Step 722 is initiated and monitored by the control operating system.
  • In Step 724, the virtual switching table is updated to reflect that the VNIC(s) identified in Step 716 are located on the target host.
  • In Step 726, the execution of the VM is resumed on the target host.
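The Figure 7A sequence (Steps 712-726) can be summarized in one illustrative procedure; all helper names and the dictionary-based host/VM representation are assumptions, not the patent's implementation:

```python
# Sketch of Steps 712-726 as a single ordered procedure.

def migrate_vm(vm, source_host, target_host, switching_table):
    vm["running"] = False                      # Step 712: suspend VM (+ VNICs)
    state = dict(vm)                           # Step 714: capture VM state
    vnics = vm["vnics"]                        # Step 716: identify VNIC(s)
    vnic_state = [dict(v) for v in vnics]      # Step 718: capture VNIC state
    source_host["vms"].remove(vm)              # Step 720: move to target host
    target_host["vms"].append(vm)
    # The captured state would be replayed here so that the VM and
    # VNIC(s) operate on the target host as they did on the source
    # host (Step 722).
    for v in vnic_state:
        switching_table[v["name"]] = target_host["name"]  # Step 724
    vm["running"] = True                       # Step 726: resume on target
    return state

table = {"vnic-b": "host-a"}
vm = {"name": "vm-a", "running": True, "vnics": [{"name": "vnic-b"}]}
src = {"name": "host-a", "vms": [vm]}
dst = {"name": "host-c", "vms": []}
migrate_vm(vm, src, dst, table)
assert table["vnic-b"] == "host-c" and vm in dst["vms"] and vm["running"]
```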
  • Turning to Figure 7B, in Step 726, the lowest priority active virtual wire operating in the blade chassis is obtained.
  • the control operating system maintains a data structure which includes the priorities of the various virtual wires operating in the blade chassis. Further, in one embodiment of the invention, only the control operating system includes functionality to set and change the priorities of the virtual wires.
  • In Step 728, the lowest priority active virtual wire is suspended.
  • suspending the lowest priority active virtual wire includes suspending operation of the VNICs on either end of the virtual wire.
  • the VMs associated with the VNICs may also be suspended. Further, suspending the VNICs and, optionally, the VMs, results in freeing bandwidth and computing resources on the respective blades on which the suspended VNICs and VMs were executed.
  • In Step 730, the control operating system sends a request including the migration criteria to hosts executing on blades in the blade chassis.
  • the control operating system uses a multicast message to send the request.
  • In Step 732, the control operating system receives responses from the hosts.
  • the responses may include: (i) a response indicating that the host which sent the response is unable to satisfy the migration criteria or (ii) a response indicating that the host which sent the response is able to satisfy the migration criteria.
  • In Step 734, a determination is made, using the responses received in Step 732, about whether there are sufficient resources available to migrate the VM. If there are insufficient resources, the method proceeds to Step 726. Alternatively, if there are sufficient resources, the method proceeds to Step 710 in Figure 7A.
  • the method described in Figures 7A and 7B may be used to migrate the VMs associated with the suspended virtual wires.
  • the order in which VMs are migrated to resume activity of suspended virtual wires is based on the priority of the suspended virtual wires.
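A minimal sketch of the Figure 7B loop follows: active virtual wires are suspended from lowest priority upward until some host can satisfy the migration criteria. The data layout and function name are assumptions; in the patent, only the control operating system sets and changes wire priorities:

```python
# Sketch of Figure 7B (Steps 726-734): suspend lowest-priority wires
# until a host has enough free bandwidth for the migration criteria.

def free_resources_by_suspending(wires, hosts, criteria):
    # wires: dicts with "priority" (lower number = lower priority),
    # "active", "host", and "bandwidth_gbps";
    # hosts: dicts with "name" and "free_bandwidth_gbps".
    hosts_by_name = {h["name"]: h for h in hosts}
    suspended = []
    for wire in sorted((w for w in wires if w["active"]),
                       key=lambda w: w["priority"]):   # lowest priority first
        if any(h["free_bandwidth_gbps"] >= criteria["bandwidth_gbps"]
               for h in hosts):
            break                                      # Step 734: resources found
        wire["active"] = False                         # Step 728: suspend wire
        # Suspending the wire's VNICs (and optionally its VMs) frees
        # bandwidth on the blade hosting them.
        hosts_by_name[wire["host"]]["free_bandwidth_gbps"] += wire["bandwidth_gbps"]
        suspended.append(wire)
    return suspended

hosts = [{"name": "host-c", "free_bandwidth_gbps": 6.0}]
wires = [{"priority": 1, "active": True, "host": "host-c", "bandwidth_gbps": 5.0},
         {"priority": 2, "active": True, "host": "host-c", "bandwidth_gbps": 3.0}]
out = free_resources_by_suspending(wires, hosts, {"bandwidth_gbps": 11.0})
assert [w["priority"] for w in out] == [1]   # only the lowest-priority wire
```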
  • Figures 8A-8B show an example of migrating a virtual machine in accordance with one or more embodiments of the invention.
  • Figures 8A-8B are provided as examples only, and should not be construed as limiting the scope of the invention.
  • Blade A includes Host A (806)
  • Blade B includes Host B (808)
  • Blade C includes Host C (810).
  • Host A (806) includes VNIC A (818) associated with the Control Operating System (OS) (812) and VNIC B (820) associated with Virtual Machine (VM) A (814). Further, Host B (808) includes VNIC C (822) associated with VM B (816). Host C (810) initially does not include any VMs.
  • VM A (814) communicates with VM B (816) using a virtual wire with a bandwidth limit of 5 gigabits per second (GBPS).
  • the virtual wire connects VNIC B (820) with VNIC C (822).
  • network traffic from VM A (814) is transmitted through VNIC B (820) to receive buffer C (830) where it remains until it is sent to or requested by VNIC C (822).
  • network traffic from VM B (816) is transmitted through VNIC C (822) to receive buffer B (828) where it remains until it is sent to or requested by VNIC B (820).
  • the virtual switching table (832) in the interface manager (826) implements the virtual wire.
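As an illustration of how a virtual switching table could implement the virtual wire, the sketch below maps each destination VNIC to its blade and receive buffer; the table structure and `send` helper are assumptions, not the patent's actual layout:

```python
# Sketch of a virtual switching table in the interface manager.

switching_table = {
    # destination VNIC -> (blade, receive buffer)
    "vnic-b": ("blade-a", []),   # receive buffer B
    "vnic-c": ("blade-b", []),   # receive buffer C
}

def send(src_vnic, dst_vnic, packet):
    # Traffic transmitted through the source VNIC is placed in the
    # destination VNIC's receive buffer, where it remains until it is
    # sent to or requested by the destination VNIC.
    _blade, rx_buffer = switching_table[dst_vnic]
    rx_buffer.append((src_vnic, packet))

send("vnic-b", "vnic-c", b"hello")   # VM A -> VM B over the virtual wire
assert switching_table["vnic-c"][1] == [("vnic-b", b"hello")]
```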
  • Assume that VM A (814) requires additional processing resources and that such resources are not available on Host A (806).
  • the control OS determines the migration criteria for VM A (814).
  • the migration criteria include a hardware constraint defining the processing resources required by VM A (814) as well as a bandwidth constraint (i.e., 5 GBPS).
  • the control OS then sends a request including the migration criteria to the hosts in the blade chassis (i.e., Host B (808) and Host C (810)).
  • Host B (808) responds that it does not have sufficient resources to satisfy the migration criteria.
  • Host C responds that it has sufficient resources to satisfy the migration criteria.
  • the control OS selects Host C (810) as the target host.
  • the control OS then initiates the migration of VM A (814) along with VNIC B (820) to Host C (810) in accordance with Steps 712-726.
  • The result of the migration is shown in Figure 8B.
  • VM A (814) and VNIC B (820) are located on Host C (810). Further, the virtual wire is preserved across the migration.
  • migration of virtual machines is based on one or more migration policies.
  • a migration policy defines when to coalesce or expand the number of blades executing in a blade chassis.
  • a migration policy may be used to perform power management.
  • the migration policy may be a coalesce migration policy, an expansion migration policy, or a time based migration policy.
  • a coalesce policy defines when to decrease the number of blades executing in the blade chassis. Decreasing the number of blades executing may cause a decrease in the total power usage by the blade chassis.
  • the coalesce policy is based on the percentage resource usage on the blade chassis. A percentage resource usage is the amount that one or more resources on the blade chassis are used as compared to the total resources available. For example, the resources considered may include processor usage, packet throughput, latency, input/output requests, and bandwidth availability.
  • the coalesce migration policy may be defined with different resource usage percentage ranges. In the lowest range (i.e., the percentage of resources used is low), the migration policy may specify that the virtual machines on the blade are migratable and, subsequently, the blade may be powered down. In the medium range, the migration policy may specify that the blade may accept virtual machines from another blade in order for the other blade to be powered down. In the high range, the migration policy may specify that the blade should not be powered down and the blade cannot accept additional virtual machines. For example, consider the scenario in which a migration policy defines the low range, medium range, and high range as below 40%, between 40% and 80%, and above 80%, respectively.
  • In this scenario, if blade X is operating in the low range, blade X may be powered down.
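The three-range coalesce policy in the example above can be expressed as a simple classifier; the 40% and 80% thresholds are the example's values, and the function and action names are assumptions:

```python
# Minimal sketch of the three-range coalesce migration policy.

def coalesce_action(percent_used):
    if percent_used < 40:
        return "migrate-away"   # VMs are migratable; blade may power down
    elif percent_used <= 80:
        return "accept"         # may accept VMs so another blade can power down
    else:
        return "hold"           # neither power down nor accept more VMs

assert coalesce_action(10) == "migrate-away"
assert coalesce_action(60) == "accept"
assert coalesce_action(95) == "hold"
```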
  • an expansion migration policy defines performance standards for the virtual machines executing on a blade.
  • the performance standard may be defined on a per-blade or per-virtual machine basis. For example, the performance standard may require that the throughput of packets for the virtual machine is at a certain level, the latency for processing instructions is below a specified level, and/or the available bandwidth for the blade is at a specified level.
  • the expansion migration policy may indicate that one or more virtual machines should be migrated from the current blade to a different blade. If all of the currently executing blades do not have resources available, then a blade which was previously powered down may be powered up. Thus, the virtual machines may be migrated to the newly powered up blade.
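The expansion check might look like the following sketch: a VM missing its performance standard is placed on a running blade with spare resources, and a previously powered-down blade is powered up only when no running blade has room. All field and function names are assumptions:

```python
# Sketch of the expansion migration policy decision.

def place_underperformer(vm, running_blades, powered_down_blades):
    if vm["throughput"] >= vm["required_throughput"]:
        return None                       # performance standard met
    for blade in running_blades:
        if blade["free_cpus"] >= vm["cpus"]:
            return blade                  # migrate to an existing blade
    if powered_down_blades:
        blade = powered_down_blades.pop()
        blade["powered"] = True           # power up a previously-down blade
        running_blades.append(blade)
        return blade
    return None

vm = {"throughput": 2.0, "required_throughput": 5.0, "cpus": 4}
running = [{"name": "blade-a", "free_cpus": 1, "powered": True}]
spare = [{"name": "blade-b", "free_cpus": 8, "powered": False}]
target = place_underperformer(vm, running, spare)
assert target["name"] == "blade-b" and target["powered"]
```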
  • the migration policy may be a time based migration policy.
  • a time based migration policy defines, for different times, the number of blades that should be powered up. For example, during the day (e.g., between 8:00 AM and 6:00 PM), the time based migration policy may specify that all blades should be powered up. During the evening and on weekends, the time based migration policy may specify that only half of the blades should be powered up. Thus, when the blades are expected to be operating at peak capacity, such as during the day, the migration policy specifies that all blades should be powered up. Conversely, when the blades are expected to be operating minimally, such as at night or on the weekends, the migration policy specifies that only a portion of the blades should be powered up in order to conserve energy.
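One possible (assumed) representation of such a time based migration policy is a schedule mapping time windows to the fraction of blades that should be powered up, mirroring the day/night example above:

```python
# Sketch of a time based migration policy; the schedule layout and
# function name are assumptions.
from datetime import time

POLICY = [
    # (start, end, fraction of blades powered up)
    (time(8, 0), time(18, 0), 1.0),   # daytime: all blades up
]
NIGHT_FRACTION = 0.5                  # evenings/weekends: half the blades

def blades_required(now, total_blades, weekday=True):
    if weekday:
        for start, end, fraction in POLICY:
            if start <= now < end:
                return int(total_blades * fraction)
    return int(total_blades * NIGHT_FRACTION)

assert blades_required(time(12, 0), 8) == 8                 # peak hours
assert blades_required(time(22, 0), 8) == 4                 # evening
assert blades_required(time(12, 0), 8, weekday=False) == 4  # weekend
```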
  • FIGS 9A-10 show flowcharts of migrating virtual machines for power management in one or more embodiments of the invention. While the various steps in these flowcharts are presented and described sequentially, one of ordinary skill will appreciate that some or all of the steps may be executed in different orders, may be combined or omitted, and some or all of the steps may be executed in parallel.
  • FIG. 9A shows a flowchart of a method for migrating virtual machines based on a coalesce migration policy.
  • resource usage data for a blade is obtained. Specifically, each blade monitors how much of each resource specified in the coalesce migration policy is in use by the blade. Monitoring resource usage may be performed using techniques known in the art. If a centralized system for determining when to migrate is used, then obtaining the resource usage data may be performed by the control OS. Alternatively, a peer-to-peer based system may be used to determine resource usage.
  • In step 902, resource usage data is compared with the migration policy.
  • the comparison is performed on a per-blade basis.
  • the blade being compared with the migration policy may be referred to as the current blade.
  • Comparing the current blade with the migration policy may be performed by comparing the amount of each resource used as defined in the resource usage data with the migration policy.
  • In step 904, a determination is made whether to migrate virtual machines from the current blade. Specifically, a determination is made whether the migration policy indicates that virtual machines should be migrated from the current blade.
  • One method for determining whether a target blade is available may be performed by gathering migration criteria for each virtual machine executing on the current blade. If a host on the current blade is performing step 902 and/or step 904, the host may send a request to hosts executing on each blade in the blade chassis. The request may include migration criteria for each virtual machine executing on the current blade. Hosts on the other blades may respond to the request with an indication of whether they can satisfy the migration criteria of one or more virtual machines. If the host on another blade can satisfy the migration criteria, then this host becomes the target host for one or more virtual machines on the current blade. In one or more embodiments of the invention, more than one target host may exist.
  • the hosts may use a shared data repository in the network express manager. Specifically, each host may read and write migration criteria, resource availability, and/or responses to requests to the shared data repository. Thus, by accessing the shared data repository, the current blade may determine whether a target blade is available.
  • the current blade may send a message to the control OS that the virtual machines on the current blade need to migrate or the control OS may be performing step 902 and/or step 904.
  • the control OS may manage the migration as discussed above and in Figures 7A and 7B .
  • the control OS may suspend virtual wires based on the priority.
  • the migration policy may indicate that virtual wires operating at a specified low level of priority may be suspended for the purposes of power management. In such a scenario, if sufficient resources are available after suspending one or more virtual wires, then the target host is considered available.
  • In step 910, the current blade is powered down. Specifically, the blade may be fully powered down or placed in standby mode.
  • If, in step 904, a determination is made not to migrate virtual machines from the current blade, then a determination is made whether to accept virtual machines on the current blade in step 912. Specifically, a determination is made whether the coalesce migration policy indicates that the current blade can accept virtual machines based on the resource usage data.
  • a migratable virtual machine is a virtual machine on a different blade, such as a blade being powered down, that can be migrated to the current blade. Determining whether migratable virtual machines exist may be based on, for example, whether any requests with migration criteria are sent to the current blade, whether the shared data repository indicates that at least one blade is powering down, or whether the control OS identifies a virtual machine to migrate to the current blade after comparing resource usage data on the different blades.
  • the blade to power down is selected in step 916.
  • the selected blade to power down corresponds to a blade that may be powered down in accordance with the migration policy and has the migratable virtual machines.
  • the migratable virtual machines are migrated from the selected blade to the current blade. Migrating the virtual machines may be performed as discussed above and in Steps 712-726 of Figure 7A.
  • the selected blade is powered down. Specifically, the blade may be fully powered down or placed in standby mode.
  • the current blade may continue executing without migrating virtual machines to the current blade.
  • FIG. 9B shows a flowchart of a method for migrating virtual machines based on an expansion migration policy.
  • In step 950, the performance of each virtual machine executing on the blade is monitored. Monitoring the performance of each virtual machine may be performed using techniques known in the art.
  • In step 952, the performance is compared with the migration policy to identify virtual machines not adhering to the performance standards of the migration policy. If any virtual machines are not adhering to the performance standards, then a determination may be made as to which resources (e.g., main memory, number of processors, guaranteed bandwidth, etc.) would be required for the identified virtual machines to comply with the performance standards.
  • the virtual machines to migrate are selected in step 954.
  • the selected virtual machines may be the identified virtual machines not adhering to the performance standards or one or more of the other virtual machines executing on the blade. For example, if the resources required by the identified virtual machines would become available once other virtual machines are migrated, then a determination may be made to migrate the other virtual machines. The determination may also be based on the allocation of resources. For example, if load balancing across the blades is desired, then the virtual machines selected may be those whose migration achieves load balancing.
  • In step 956, if a determination is made not to migrate the virtual machines to an existing blade, then a determination is made whether to migrate the virtual machines to a new blade in step 960.
  • The determination whether to migrate to a new blade may be based on whether a powered down blade exists and can be powered up, and whether the resources of the powered down blade are sufficient to satisfy the migration criteria for the selected virtual machines.
  • a blade is powered up in step 962. Powering up the blade may be performed using techniques known in the art.
  • the selected virtual machines are migrated to the powered up blade. Migrating the selected virtual machines may be performed as discussed above.
  • FIG 10 shows a flowchart of a method for migrating virtual machines based on a time based migration policy.
  • In step 1000, the time is monitored.
  • a determination is made whether to trigger a time-based migration. Triggering a time-based migration is based on whether the current time matches a time in the time based migration policy. If a determination is made not to trigger a time based migration, then monitoring the time is continued in step 1000.
  • In step 1004, a determination is made whether to expand the number of blades in use (i.e., powered up).
  • In step 1008, virtual machines to migrate to the new blade are selected.
  • the virtual machines are selected based on migration criteria for each virtual machine and/or to achieve load balancing across the blades in the blade chassis.
  • In step 1010, the virtual machines are migrated to the new blade. Migrating the virtual machines may be performed as discussed above.
  • If, in step 1004, a determination is made not to expand the number of blades in use, then the number of blades in use is coalesced according to the time based migration policy. Specifically, the migration policy indicates that the number of blades should be reduced. Accordingly, a blade to power down is selected in step 1012. Selecting the blade to power down may be based on load balancing, the current resource usage for each blade, and other such criteria. For example, the blade that is selected may be the blade that has the least number of virtual machines executing on it. In step 1014, virtual machines are migrated from the selected blade, and the blade is powered down in step 1016. Migrating virtual machines and powering down the blade may be performed as discussed above.
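The Figure 10 flow can be sketched as follows; the blade representation and helper names are assumptions, and the blade with the fewest VMs is chosen for power-down as in the example above:

```python
# Sketch of steps 1000-1016: on a policy trigger, either power up a
# blade or migrate VMs off the least-loaded blade and power it down.

def apply_time_policy(blades, required_up):
    up = [b for b in blades if b["powered"]]
    if required_up > len(up):                          # expand (steps 1006-1010)
        spare = next((b for b in blades if not b["powered"]), None)
        if spare is None:
            return ("unchanged", None)
        spare["powered"] = True
        return ("powered-up", spare["name"])
    if required_up < len(up):                          # coalesce (steps 1012-1016)
        victim = min(up, key=lambda b: len(b["vms"]))  # fewest VMs (step 1012)
        survivor = next(b for b in up if b is not victim)
        survivor["vms"].extend(victim["vms"])          # step 1014: migrate VMs
        victim["vms"] = []
        victim["powered"] = False                      # step 1016: power down
        return ("powered-down", victim["name"])
    return ("unchanged", None)

blades = [{"name": "blade-a", "powered": True, "vms": ["vm-1", "vm-2"]},
          {"name": "blade-b", "powered": True, "vms": ["vm-3"]}]
action, name = apply_time_policy(blades, required_up=1)
assert action == "powered-down" and name == "blade-b"
assert blades[0]["vms"] == ["vm-1", "vm-2", "vm-3"]
```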
  • Figures 11A-11C show an example of migrating a virtual machine in accordance with one or more embodiments of the invention.
  • Figures 11A-11C are provided as examples only, and should not be construed as limiting the scope of the invention.
  • Blade A (1100) includes Host A (1106)
  • Blade B (1102) includes Host B (1108)
  • Blade C (1104) includes Host C (1110).
  • Host A (1106) includes VNIC A (1122) associated with VM A (1112) and VNIC B (1124) associated with VM B (1114). Further, Host B (1108) includes VNIC C1 (1126) and VNIC C2 (1128) both associated with VM C (1116). Finally, Host C (1110) includes VNIC D (1130) associated with VM D (1118) and VNIC E (1132) associated with VM E (1120).
  • VM A (1112) communicates with VM E (1120) using virtual wire (VW) A (1138) with a bandwidth limit of 5 gigabits per second (GBPS).
  • VW A (1138) connects VNIC A (1122) with VNIC E (1132).
  • VM B (1114) communicates with VM C (1116) using VW B (1134) with a bandwidth limit of 3 GBPS.
  • VW B (1134) connects VNIC B (1124) with VNIC C1 (1126).
  • VM C (1116) communicates with VM D (1118) using VW C2 (1136) with a bandwidth limit of 8 GBPS.
  • VW C2 (1136) connects VNIC C2 (1128) with VNIC D (1130).
  • Each of the VWs is associated with a priority.
  • the priority of the VWs in Figure 11A from highest to lowest is: VW B (1134), VW C2 (1136), and VW A (1138).
  • the resource usage data is monitored for each blade (1100, 1102, 1104).
  • the resource usage data indicates that blade B (1102) is using only 10% of the resources. Accordingly, the coalesce migration policy indicates to power down blade B (1102). As such, VM C (1116) must be migrated to another host.
  • the control OS then sends a request including the migration criteria to the hosts in the blade chassis (i.e., Host A (1106) and Host C (1110)).
  • the migration criteria indicates the amount of resources required by VM C (1116).
  • Host C (1110) responds that it is using only 60% of its resources and, accordingly, has sufficient resources.
  • the control OS selects Host C (1110) as the target host.
  • the control OS then initiates the migration of VM C (1116) along with VNIC C1 (1126) and VNIC C2 (1128) to Host C (1110) in accordance with Steps 712-726.
  • Blade B (1102) is powered down as shown in Figure 11B.
  • Because VNIC C2 (1128) and VNIC D (1130) are now located on Host C (1110), virtual switch (VS) C2 (1137) is used instead of VW C2 (1136) to connect VNIC C2 (1128) and VNIC D (1130).
  • Turning to Figure 11C, consider the scenario in which the system shown in Figure 11A is subject to a time-based migration policy and the current time is 6:00pm.
  • the time-based migration policy indicates that only two blades should be executing at 6:00pm.
  • the control OS may select blade B (1102) to power down. As such, VM C (1116) must be migrated to another host.
  • the control OS then sends a request including the migration criteria to the hosts in the blade chassis (i.e., Host A (1106) and Host C (1110)).
  • the migration criteria indicate the amount of resources required by VM C (1116).
  • the control OS (not shown) determines the migration criteria for VM C (1116).
  • the migration criteria include a bandwidth constraint (i.e., 11 GBPS).
  • the control OS then sends a request including the migration criteria to the hosts in the blade chassis (i.e., Host A (1106) and Host C (1110)). Both Host A (1106) and Host C (1110) respond that they do not have sufficient resources to satisfy the migration criteria.
  • VW A (1138) is subsequently suspended.
  • Suspending VW A (1138) includes suspending VM A (1112), VNIC A (1122), VM E (1120), and VNIC E (1132)
  • the control OS then resends the request including the migration criteria to the hosts in the blade chassis (i.e., Host A (1106) and Host C (1110)).
  • Host A (1106) responds that it does not have sufficient resources to satisfy the migration criteria.
  • Host C (1110) responds that it has sufficient resources to satisfy the migration criteria.
  • the control OS selects Host C (1110) as the target host.
  • the control OS then initiates the migration of VM C (1116) along with VNIC C1 (1126) and VNIC C2 (1128) to Host C (1110) in accordance with Steps 712-726.
  • Blade B (1102) is powered down.
  • the result of the migration is shown in Figure 11C .
  • the VWs are preserved across the migration.
  • VNIC C2 (1128) and VNIC D (1130) are now located on Host C (1110)
  • virtual switch (VS) C2 (1137) is used instead of VW C2 (1136) to connect VNIC C2 (1128) and VNIC D (1130).
  • the invention may be extended for use with other computer systems, which are not blades.
  • the invention may be extended to any computer, which includes at least memory, a processor, and a mechanism to physically connect to and communicate over the chassis bus.
  • Examples of such computers include, but are not limited to, multiprocessor servers, network appliances, and light-weight computing devices (e.g., computers that only include memory, a processor, a mechanism to physically connect to and communicate over the chassis bus), and the necessary hardware to enable the aforementioned components to interact.
  • Software instructions to perform embodiments of the invention may be stored on a computer readable medium such as a compact disc (CD), a diskette, a tape, a file, or any other computer readable storage device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Power Sources (AREA)
  • Stored Programmes (AREA)
  • Remote Monitoring And Control Of Power-Distribution Networks (AREA)

Description

    BACKGROUND
  • Conventionally, in the computer-related arts, a network is an arrangement of physical computer systems configured to communicate with each other. In some cases, the physical computer systems include virtual machines, which may also be configured to interact with the network (i.e., communicate with other physical computers and/or virtual machines in the network). Many different types of networks exist, and a network may be classified based on various aspects of the network, such as scale, connection method, functional relationship of computer systems in the network, and/or network topology.
  • Regarding connection methods, a network may be broadly categorized as wired (using a tangible connection medium such as Ethernet cables) or wireless (using an intangible connection medium such as radio waves). Different connection methods may also be combined in a single network. For example, a wired network may be extended to allow devices to connect to the network wirelessly. However, core network components such as routers, switches, and servers are generally connected using physical wires. Ethernet is defined within the Institute of Electrical and Electronics Engineers (IEEE) 802.3 standards, which are supervised by the IEEE 802.3 Working Group.
  • To create a wired network, computer systems must be physically connected to each other. That is, the ends of physical wires (for example, Ethernet cables) must be physically connected to network interface cards in the computer systems forming the network. To reconfigure the network (for example, to replace a server or change the network topology), one or more of the physical wires must be disconnected from a computer system and connected to a different computer system.
  • WO 2005109195 (A2 ) discloses a system that includes a number of server computing devices and a management server computing device. Each server computing device has a virtual host computer program running thereon to support one or more virtual machine computer programs. Each virtual machine computer program is able to execute an instance of an operating system on which application computer programs are executable. The management server computing device monitors the server computing devices, and causes the virtual machine computer programs supported by the virtual host computer program of a first server computing device to dynamically migrate to the virtual host computer program of a second server computing device, upon one or more conditions being satisfied. The conditions may include the first server being predicted as failure prone, the first server consuming power less than a threshold, and the first server having resource utilization less than a threshold.
  • SUMMARY
  • The invention is defined in the claims.
  • In one embodiment a method for power management, comprises: gathering resource usage data for a first blade and a second blade on a blade chassis; migrating each virtual machine "VM" executing on the first blade to the second blade based on the resource usage data and a first migration policy, wherein the first migration policy defines when to condense the number of blades operating on the blade chassis; and powering down the first blade after each VM executing on the first blade is migrated from the first blade, wherein at least one VM executing on the first blade is connected to a VM on the second blade using a virtual wire, wherein the connectivity provided by the virtual wire is maintained during the migration of the at least one VM to the second blade, and wherein the virtual wire is implemented by a virtual switching table.
  • Other aspects of the invention will be apparent from the following description and the appended claims.
  • BRIEF DESCRIPTION OF DRAWINGS
    • Figure 1 shows a diagram of a blade chassis in accordance with one or more embodiments of the invention.
    • Figure 2 shows a diagram of a blade in accordance with one or more embodiments of the invention.
    • Figure 3 shows a diagram of a network express manager in accordance with one or more embodiments of the invention.
    • Figure 4 shows a diagram of a virtual machine in accordance with one or more embodiments of the invention.
    • Figure 5 shows a flowchart of a method for creating a virtual network path in accordance with one or more embodiments of the invention.
    • Figures 6A-6C show an example of creating virtual network paths in accordance with one or more embodiments of the invention.
    • Figures 7A-7B show a flowchart of a method for migrating a virtual machine in accordance with one or more embodiments of the invention.
    • Figures 8A-8B show an example of migrating a virtual machine in accordance with one or more embodiments of the invention.
    • Figures 9A-10 show flowcharts of migrating virtual machines for power management in one or more embodiments of the invention.
    • Figures 11A-11C show an example of migrating a virtual machine for power management in accordance with one or more embodiments of the invention.
    DETAILED DESCRIPTION
  • Specific embodiments of the invention will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency.
  • In the following detailed description of embodiments of the invention, numerous specific details are set forth in order to provide a more thorough understanding of the invention. However, it will be apparent to one of ordinary skill in the art that the invention may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.
  • In general, embodiments of the invention provide a method and system for migrating virtual machines located on one blade in a blade chassis to another blade in the blade chassis to perform power management. In one embodiment of the invention, after the migration of the virtual machines, the blade is powered down. Thus, the total power consumption of the system may be reduced.
  • Further, embodiments of the invention provide a mechanism for powering up blades when additional resources are required. Specifically, in one or more embodiments of the invention, the performance of each virtual machine is monitored to ensure that each virtual machine is executing according to performance standards. If the execution of a virtual machine is not adhering to performance standards because of a lack of resources, then a blade is powered up to provide additional resources and virtual machines are migrated to the powered-up blade.
  • Additionally or alternatively, the powering up and powering down of blades may be time based in accordance with one or more embodiments of the invention. Specifically, at a certain time, a blade may be selected to power down. Virtual machines may be migrated from the selected blade before powering down the blade. Conversely, the blade may be selected to be powered up. Specifically, the blade is powered up and virtual machines are migrated to the blade in accordance with one or more embodiments of the invention.
  • Figure 1 shows a diagram of a blade chassis (100) in accordance with one or more embodiments of the invention. The blade chassis (100) includes multiple blades (e.g., blade A (102), blade B (104)) communicatively coupled with a chassis interconnect (106). For example, the blade chassis (100) may be a Sun Blade 6048 Chassis by Sun Microsystems Inc., an IBM BladeCenter® chassis, an HP BladeSystem enclosure by Hewlett Packard Inc., or any other type of blade chassis. The blades may be of any type(s) compatible with the blade chassis (100). BladeCenter® is a registered trademark of International Business Machines, Inc. (IBM), headquartered in Armonk, New York.
  • In one or more embodiments of the invention, the blades are configured to communicate with each other via the chassis interconnect (106). Thus, the blade chassis (100) allows for communication between the blades without requiring traditional network wires (such as Ethernet cables) between the blades. For example, depending on the type of blade chassis (100), the chassis interconnect (106) may be a Peripheral Component Interface Express (PCI-E) backplane, and the blades may be configured to communicate with each other via PCI-E endpoints. Those skilled in the art will appreciate that other connection technologies may be used to connect the blades to the blade chassis.
  • Continuing with the discussion of Figure 1, to communicate with clients outside the blade chassis (100), the blades are configured to share a physical network interface (110). The physical network interface (110) includes one or more network ports (for example, Ethernet ports), and provides an interface between the blade chassis (100) and the network (i.e., interconnected computer systems external to the blade chassis (100)) to which the blade chassis (100) is connected. The blade chassis (100) may be connected to multiple networks, for example using multiple network ports.
  • In one or more embodiments, the physical network interface (110) is managed by a network express manager (108). Specifically, the network express manager (108) is configured to manage access by the blades to the physical network interface (110). The network express manager (108) may also be configured to manage internal communications between the blades themselves, in a manner discussed in detail below. The network express manager (108) may be any combination of hardware, software, and/or firmware including executable logic for managing network traffic.
  • Figure 2 shows a diagram of a blade (200) in accordance with one or more embodiments of the invention. "Blade" is a term of art referring to a computer system located within a blade chassis (for example, the blade chassis (100) of Figure 1). Blades typically include fewer components than stand-alone computer systems or conventional servers. In one embodiment of the invention, fully featured stand-alone computer systems or conventional servers may also be used instead of or in combination with the blades. Generally, blades in a blade chassis each include one or more processors and associated memory. Blades may also include storage devices (for example, hard drives and/or optical drives) and numerous other elements and functionalities typical of today's computer systems (not shown), such as a keyboard, a mouse, and/or output means such as a monitor. One or more of the aforementioned components may be shared by multiple blades located in the blade chassis. For example, multiple blades may share a single output device.
  • Continuing with discussion of Figure 2, the blade (200) includes a host operating system (not shown) configured to execute one or more virtual machines (e.g., virtual machine C (202), virtual machine D (204)). Broadly speaking, the virtual machines are distinct operating environments configured to inherit underlying functionality of the host operating system via an abstraction layer. In one or more embodiments of the invention, each virtual machine includes a separate instance of an operating system (e.g., operating system instance C (206), operating system instance D (208)). For example, the Xen® virtualization project allows for multiple guest operating systems executing in a host operating system. Xen® is a trademark overseen by the Xen Project Advisory Board. In one embodiment of the invention, the host operating system supports virtual execution environments (not shown). An example of a virtual execution environment is a Solaris™ Container. In such cases, the Solaris™ Container may execute in the host operating system, which may be a Solaris™ operating system. Solaris™ is a trademark of Sun Microsystems, Inc. In one embodiment of the invention, the host operating system may include both virtual machines and virtual execution environments.
  • Many different types of virtual machines and virtual execution environments exist. Further, the virtual machines may include many different types of functionality, such as a switch, a router, a firewall, a load balancer, an application server, any other type of network-enabled service, or any combination thereof.
  • In one or more embodiments of the invention, the virtual machines and virtual execution environments inherit network connectivity from the host operating system via VNICs (e.g., VNIC C (210), VNIC D (212)). To the virtual machines and the virtual execution environments, the VNICs appear as physical NICs. In one or more embodiments of the invention, the use of VNICs allows an arbitrary number of virtual machines and virtual execution environments to share the blade's (200) networking functionality. Further, in one or more embodiments of the invention, each virtual machine or virtual execution environment may be associated with an arbitrary number of VNICs, thereby providing increased flexibility in the types of networking functionality available to the virtual machines and/or virtual execution environments. For example, a virtual machine may use one VNIC for incoming network traffic, and another VNIC for outgoing network traffic. VNICs in accordance with one or more embodiments of the invention are described in detail in commonly owned U.S. Patent Application Serial No. 11/489,942 , entitled "Multiple Virtual Network Stack Instances using Virtual Network Interface Cards," in the names of Nicolas G. Droux, Erik Nordmark, and Sunay Tripathi, US 2008-0019359 .
  • VNICs in accordance with one or more embodiments of the invention also are described in detail in commonly owned U.S. Patent Application Serial No. 11/480,000 , entitled "Method and System for Controlling Virtual Machine Bandwidth" in the names of Sunay Tripathi, Tim P. Marsland, and Nicolas G. Droux, US 2008-0002704 .
  • In one embodiment of the invention, one of the blades in the blade chassis includes a control operating system executing in a virtual machine (also referred to as the control virtual machine). The control operating system is configured to manage the creation and maintenance of the virtual wires and/or virtual network paths (discussed below). In addition, the control operating system also includes functionality to migrate virtual machines between blades in the blade chassis (discussed below).
  • Continuing with the discussion of Figure 2, each blade's networking functionality (and, by extension, networking functionality inherited by the VNICs) includes access to a shared physical network interface and communication with other blades via the chassis interconnect. Figure 3 shows a diagram of a network express manager (300) in accordance with one or more embodiments of the invention. The network express manager (300) is configured to route network traffic traveling to and from VNICs located in the blades. Specifically, the network express manager (300) includes a virtual switching table (302), which includes a mapping of VNIC identifiers (304) to VNIC locations (306) in the chassis interconnect. In one or more embodiments, the VNIC identifiers (304) are Internet Protocol (IP) addresses, and the VNIC locations (306) are PCI-E endpoints associated with the blades (e.g., if the chassis interconnect is a PCI-E backplane). In another embodiment of the invention, the VNIC identifiers (304) may be media access control (MAC) addresses. Alternatively, another routing scheme may be used.
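As a minimal sketch (assuming a simple dictionary-based implementation; the class and method names here are illustrative, not from the patent), the virtual switching table's mapping of VNIC identifiers to VNIC locations might look like:

```python
# Minimal sketch of a virtual switching table: a mapping from VNIC
# identifiers (e.g., IP or MAC address strings) to VNIC locations
# (e.g., PCI-E endpoint labels). All names are illustrative assumptions.

class VirtualSwitchingTable:
    def __init__(self):
        self._table = {}  # VNIC identifier -> VNIC location

    def register(self, vnic_id, location):
        """Populate an entry, e.g., when a VNIC is instantiated."""
        self._table[vnic_id] = location

    def lookup(self, vnic_id):
        """Return the VNIC's location, or None if unknown."""
        return self._table.get(vnic_id)

    def migrate(self, vnic_id, new_location):
        """Update an entry after a VM/VNIC migrates; the virtual network
        topology is preserved because only this mapping changes."""
        if vnic_id in self._table:
            self._table[vnic_id] = new_location
```

A VNIC registered at one PCI-E endpoint can thus be relocated to another endpoint without any other VNIC needing to learn the new physical location.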
  • In one or more embodiments, the network express manager (300) is configured to receive network traffic via the physical network interface and route the network traffic to the appropriate location (i.e., where the VNIC is located) using the virtual switching table (302). In one embodiment of the invention, once a determination is made about where to route a given packet, the packet is stored in the appropriate receive buffer (308) or transmit buffer (310). In one embodiment of the invention, each VNIC listed in the virtual switching table (302) is associated with a receive buffer (308) and a transmit buffer (310). The receive buffer (308) is configured to temporarily store packets destined for a given VNIC prior to the VNIC receiving (via a polling or interrupt mechanism) the packets. Similarly, the transmit buffer (310) is configured to temporarily store packets received from the VNIC prior to sending the packet toward its destination.
  • In one embodiment of the invention, the receive buffer (308) enables the VNICs to implement bandwidth control. More specifically, when the VNIC is implementing bandwidth control, packets remain in the receive buffer (308) until the VNIC (or an associated process) requests packets from the receive buffer (308). As such, if the rate at which packets are received is greater than the rate at which packets are requested by the VNIC (or an associated process), then packets may be dropped from the receive buffer once the receive buffer is full. Those skilled in the art will appreciate that the rate at which packets are dropped from the receive buffer is determined by the size of the receive buffer.
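A hedged sketch of this drop-when-full behavior (the class name, capacity units, and poll interface are assumptions made for illustration):

```python
from collections import deque

# Sketch of a bounded receive buffer enforcing bandwidth control:
# arriving packets queue until the VNIC polls for them, and arrivals
# beyond the buffer's capacity are dropped. Names are assumptions.

class ReceiveBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self._packets = deque()
        self.dropped = 0

    def enqueue(self, packet):
        """Called on packet arrival (e.g., by the network express manager)."""
        if len(self._packets) >= self.capacity:
            self.dropped += 1  # buffer full: drop, per bandwidth control
            return False
        self._packets.append(packet)
        return True

    def poll(self, max_packets):
        """Called by the VNIC (or an associated process) to drain packets."""
        out = []
        while self._packets and len(out) < max_packets:
            out.append(self._packets.popleft())
        return out
```

Note how a smaller capacity causes drops to begin sooner for a given arrival/poll rate imbalance, matching the observation that the drop rate is governed by the buffer size.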
  • Continuing with the discussion of Figure 3, the network express manager (300) may be configured to route network traffic between different VNICs located in the blade chassis. In one or more embodiments of the invention, using the virtual switching table (302) in this manner facilitates the creation of a virtual network path, which includes virtual wires (discussed below). Thus, using the virtual switching table (302), virtual machines located in different blades may be interconnected to form an arbitrary virtual network topology, where the VNICs associated with each virtual machine do not need to know the physical locations of other VNICs. Further, if a virtual machine is migrated from one blade to another, the virtual network topology may be preserved by updating the virtual switching table (302) to reflect the corresponding VNIC's new physical location (for example, a different PCI-E endpoint).
  • In some cases, network traffic from one VNIC may be destined for a VNIC located in the same blade, but associated with a different virtual machine. In one or more embodiments of the invention, a virtual switch may be used to route the network traffic between the VNICs independent of the blade chassis. Virtual switches in accordance with one or more embodiments of the invention are discussed in detail in commonly owned U.S. Patent Application Serial No. 11/480,261 , entitled "Virtual Switch," in the names of Nicolas G. Droux, Sunay Tripathi, and Erik Nordmark, US 2008-0002683 .
  • For example, Figure 4 shows a diagram of a virtual switch (400) in accordance with one or more embodiments of the invention. The virtual switch (400) provides connectivity between VNIC X (406) associated with virtual machine X (402) and VNIC Y (408) associated with virtual machine Y (404). In one or more embodiments, the virtual switch (400) is managed by a host (410) within which virtual machine X (402) and virtual machine Y (404) are located. Specifically, the host (410) may be configured to identify network traffic targeted at a VNIC in the same blade, and route the traffic to the VNIC using the virtual switch (400). In one or more embodiments of the invention, the virtual switch (400) may reduce utilization of the blade chassis and the network express manager by avoiding unnecessary round-trip network traffic.
  • Figure 5 shows a flowchart of a method for creating a virtual network path in accordance with one or more embodiments of the invention. In one or more embodiments of the invention, one or more of the steps shown in Figure 5 may be omitted, repeated, and/or performed in a different order. Accordingly, embodiments of the invention should not be considered limited to the specific arrangement of steps shown in Figure 5.
  • In one or more embodiments of the invention, in Step 502, VNICs are instantiated for multiple virtual machines. The virtual machines are located in blades, as discussed above. Further, the virtual machines may each be associated with one or more VNICs. In one or more embodiments of the invention, instantiating a VNIC involves loading a VNIC object in memory and registering the VNIC object with a host, i.e., an operating system that is hosting the virtual machine associated with the VNIC. Registering the VNIC object establishes an interface between the host's networking functionality and the abstraction layer provided by the VNIC. Thereafter, when the host receives network traffic addressed to the VNIC, the host forwards the network traffic to the VNIC. Instantiation of VNICs in accordance with one or more embodiments of the invention is discussed in detail in U.S. Patent Application 11/489,942 .
  • As discussed above, a single blade may include multiple virtual machines configured to communicate with each other. In one or more embodiments of the invention, in Step 504, a virtual switch is instantiated to facilitate communication between the virtual machines. As noted above, the virtual switch allows communication between VNICs independent of the chassis interconnect. Instantiation of virtual switches in accordance with one or more embodiments of the invention is discussed in detail in U.S. Patent Application 11/480,261 .
  • In one or more embodiments of the invention, in Step 506, a virtual switching table is populated. As noted above, the virtual switching table may be located in a network express manager configured to manage network traffic flowing to and from the virtual machines. Populating the virtual switching table involves associating VNIC identifiers (for example, IP addresses) with VNIC locations (for example, PCI-E endpoints). In one or more embodiments of the invention, the virtual switching table is populated in response to a user command issued via a control operating system, i.e., an operating system that includes functionality to control the network express manager.
  • In one or more embodiments of the invention, VNICs include settings for controlling the processing of network packets. In one or more embodiments of the invention, in Step 508, settings are assigned to the VNICs according to a networking policy. Many different types of networking policies may be enforced using settings in the VNICs. For example, a setting may be used to provision a particular portion of a blade's available bandwidth to one or more VNICs. As another example, a setting may be used to restrict use of a VNIC to a particular type of network traffic, such as Voice over IP (VoIP) or Transmission Control Protocol/IP (TCP/IP). Further, settings for multiple VNICs in a virtual network path may be identical. For example, VNICs in a virtual network path may be capped at the same bandwidth limit, thereby allowing for consistent data flow across the virtual network path. In one or more embodiments of the invention, a network express manager is configured to transmit the desired settings to the VNICs.
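The policy described above, where every VNIC in a virtual network path receives identical settings, might be sketched as follows (the field names and units are illustrative assumptions):

```python
# Illustrative sketch of applying a networking policy to the VNICs in a
# virtual network path: every VNIC on the path receives the same
# bandwidth cap and allowed-traffic restriction, giving consistent data
# flow across the whole path. Field names and units are assumptions.

def apply_path_policy(vnics, bandwidth_mbps, allowed_traffic):
    """Assign identical settings to each VNIC (modeled as a dict) in the path.

    vnics           -- list of VNIC setting dicts along the path
    bandwidth_mbps  -- per-VNIC bandwidth cap
    allowed_traffic -- e.g., "VoIP" or "TCP/IP"
    """
    for vnic in vnics:
        vnic["bandwidth_mbps"] = bandwidth_mbps
        vnic["allowed_traffic"] = allowed_traffic
    return vnics
```

In this sketch a network express manager would call this once per virtual network path, so two different paths can carry different caps while each path stays internally uniform.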
  • In one or more embodiments of the invention, once the VNICs are instantiated and the virtual switching table is populated, network traffic may be transmitted from a VNIC in one blade to a VNIC in another blade. The connection between the two VNICs may be thought of as a "virtual wire," because the arrangement obviates the need for traditional network wires such as Ethernet cables. A virtual wire functions similarly to a physical wire in the sense that network traffic passing through one virtual wire is isolated from network traffic passing through another virtual wire, even though the network traffic may pass through the same blade (i.e., using the same virtual machine or different virtual machines located in the blade).
  • In one embodiment of the invention, each virtual wire may be associated with a priority (discussed below in Figures 11A-11C). In addition, each virtual wire may be associated with a security setting, which defines packet security (e.g., encryption, etc.) for packets transmitted over the virtual wire. In one embodiment of the invention, the bandwidth, priority and security settings are defined on a per-wire basis. Further, the aforementioned settings are the same for VNICs on either end of the virtual wire.
  • In one embodiment of the invention, a combination of two or more virtual wires may be thought of as a "virtual network path." In one embodiment of the invention, the bandwidth, priority and security settings for all virtual wires in the virtual network path are the same. Further, the aforementioned settings are the same for VNICs on either end of the virtual wires, which make up the virtual network path.
  • Continuing with the discussion of Figure 5, once the virtual wires and/or virtual network paths have been created and configured, network traffic may be transmitted over the virtual network path through, for example, a first virtual wire (Step 510) and then through a second virtual wire (Step 512). For example, when receiving network traffic from a client via the physical network interface, one virtual wire may be located between the physical network interface and a VNIC, and a second virtual wire may be located between the VNIC and another VNIC. In one embodiment of the invention, at least Steps 502-508 are performed and/or managed by the control operating system.
  • Figures 6A-6C show an example of creating virtual network paths in accordance with one or more embodiments of the invention. Specifically, Figure 6A shows a diagram of an actual topology (600) in accordance with one or more embodiments of the invention, Figure 6B shows how network traffic may be routed through the actual topology (600), and Figure 6C shows a virtual network topology (640) created by routing network traffic as shown in Figure 6B. Figures 6A-6C are provided as examples only, and should not be construed as limiting the scope of the invention.
  • Referring first to Figure 6A, the actual topology (600) includes multiple virtual machines. Specifically, the actual topology (600) includes a router (602), a firewall (604), application server M (606), and application server N (608), each executing in a separate virtual machine. The virtual machines are located in blades communicatively coupled with a chassis interconnect (622), and include networking functionality provided by the blades via VNICs (i.e., VNIC H (610), VNIC J (612), VNIC K (614), VNIC L (616), VNIC M (618), and VNIC N (620)). As shown in Figure 6A, each virtual machine is communicatively coupled to all other virtual machines. However, as discussed below, while there is full connectivity between the virtual machines, embodiments of the invention create virtual wires and/or virtual network paths to limit the connectivity of the virtual machines. For ease of illustration, the blades themselves are not shown in the diagram.
  • In one or more embodiments of the invention, the router (602), the firewall (604), application server M (606), and application server N (608) are each located in separate blades. Alternatively, as noted above, a blade may include multiple virtual machines. For example, the router (602) and the firewall (604) may be located in a single blade. Further, each virtual machine may be associated with a different number of VNICs than the number of VNICs shown in Figure 6A.
  • Continuing with discussion of Figure 6A, a network express manager (624) is configured to manage network traffic flowing to and from the virtual machines. Further, the network express manager (624) is configured to manage access to a physical network interface (626) used to communicate with client O (628) and client P (630).
  • In Figure 6A, the virtual machines, VNICs, chassis interconnect (622), network express manager (624), and physical network interface (626) are all located within the blade chassis. Client O (628) and client P (630) are located in one or more networks (not shown) to which the blade chassis is connected.
  • Figure 6B shows how network traffic may be routed through the actual topology (600) in accordance with one or more embodiments of the invention. In one or more embodiments of the invention, the routing is performed by the network express manager (624) using a virtual switching table (634).
  • As discussed above, network traffic routed to and from the VNICs may be thought of as flowing through a "virtual wire." For example, Figure 6B shows a virtual wire (632) located between application server M (606) and application server N (608). To use the virtual wire, application server M (606) transmits a network packet via VNIC M (618). The network packet is addressed to VNIC N (620) associated with application server N (608). The network express manager (624) receives the network packet via the chassis interconnect (622), inspects the network packet, and determines the target VNIC location using the virtual switching table (634). If the target VNIC location is not found in the virtual switching table (634), then the network packet may be dropped. In this example, the target VNIC location is the blade in which VNIC N (620) is located. The network express manager (624) routes the network packet to the target VNIC location, and application server N (608) receives the network packet via VNIC N (620), thereby completing the virtual wire (632). In one or more embodiments of the invention, the virtual wire (632) may also be used to transmit network traffic in the opposite direction, i.e., from application server N (608) to application server M (606).
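The routing decision just described, including dropping packets whose target VNIC is absent from the table, can be sketched as a simple lookup (the dict-based table and packet format are assumptions for illustration):

```python
# Sketch of the routing decision made by a network express manager:
# inspect a packet's destination VNIC identifier, consult the virtual
# switching table, and either forward to the target location or drop
# the packet when no entry exists. Names are illustrative assumptions.

def route_packet(switching_table, packet):
    """Return the target VNIC location, or None if the packet is dropped.

    switching_table -- dict mapping VNIC identifier -> VNIC location
    packet          -- dict with a 'dst' VNIC identifier
    """
    location = switching_table.get(packet["dst"])
    if location is None:
        return None  # target VNIC not in the table: drop the packet
    return location
```

In the example above, a packet from VNIC M addressed to VNIC N would resolve to the blade hosting VNIC N, completing the virtual wire; the same lookup works in the reverse direction.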
  • Further, as discussed above, multiple virtual wires may be combined to form a "virtual network path." For example, Figure 6B shows virtual network path R (636), which flows from client O (628), through the router (602), through the firewall (604), and terminates at application server M (606). Specifically, the virtual network path R (636) includes the following virtual wires. A virtual wire is located between the physical network interface (626) and VNIC H (610). Another virtual wire is located between VNIC J (612) and VNIC K (614). Yet another virtual wire is located between VNIC L (616) and VNIC M (618). If the router (602) and the firewall (604) are located in the same blade, then a virtual switch may be substituted for the virtual wire located between VNIC J (612) and VNIC K (614), thereby eliminating use of the chassis interconnect (622) from communications between the router (602) and the firewall (604).
  • Similarly, Figure 6B shows virtual network path S (638), which flows from client P (630), through the router (602), and terminates at application server N (608). Virtual network path S (638) includes a virtual wire between the physical network interface (626) and VNIC H (610), and a virtual wire between VNIC J (612) and VNIC N (620). The differences between virtual network path R (636) and virtual network path S (638) exemplify how multiple virtual network paths may be located in the same blade chassis.
  • In one or more embodiments of the invention, VNIC settings are applied separately for each virtual network path. For example, different bandwidth limits may be used for virtual network path R (636) and virtual network path S (638). Thus, the virtual network paths may be thought of as including many of the same features as traditional network paths (e.g., using Ethernet cables), even though traditional network wires are not used within the blade chassis. However, traditional network wires may still be required outside the blade chassis, for example between the physical network interface (626) and client O (628) and/or client P (630).
  • Figure 6C shows a diagram of the virtual network topology (640) that results from the use of the virtual network path R (636), virtual network path S (638), and virtual wire (632) shown in Figure 6B. The virtual network topology (640) allows the various components of the network (i.e., router (602), firewall (604), application server M (606), application server N (608), client O (628), and client P (630)) to interact in a manner similar to a traditional wired network. However, as discussed above, communication between the components located within the blade chassis (i.e., router (602), firewall (604), application server M (606), and application server N (608)) is accomplished without the use of traditional network wires.
  • Embodiments of the invention allow for virtual network paths to be created using virtual wires, without the need for traditional network wires. Specifically, by placing virtual machines in blades coupled via a chassis interconnect, and routing network traffic using VNICs and a virtual switching table, the need for traditional network wires between the virtual machines is avoided. Thus, embodiments of the invention facilitate the creation and reconfiguration of virtual network topologies without the physical labor typically involved in creating a traditional wired network.
  • In one embodiment of the invention, one or more virtual machines may be migrated from one blade to another blade in the blade chassis. Migration may be necessitated by a number of factors. For example, a virtual machine may need to be migrated from one blade to another blade because the virtual machine requires additional resources, which are not available on the blade on which it is currently executing. Alternatively, a virtual machine may need to be migrated from one blade to another blade because the blade on which the virtual machine is currently executing is powering down, failing, and/or otherwise suspending operation. Alternatively, the migration may be triggered based on a migration policy. Migration policies are discussed below and in Figures 9A-11C.
  • In one embodiment of the invention, at least the bandwidth constraint associated with the virtual machine is preserved across the migration, such that at least the bandwidth constraint associated with the virtual machine is the same before and after the migration of the virtual machine. Those skilled in the art will appreciate that the bandwidth associated with a given virtual machine is enforced by the VNIC associated with the virtual machine. As the VNIC is located in the host executing on the blade, the host includes functionality to associate the VNIC with the virtual machine and set the bandwidth of the VNIC.
  • Figures 7A-7B show flowcharts of a method for migrating a virtual machine in accordance with one or more embodiments of the invention. In one or more embodiments of the invention, one or more of the steps shown in Figures 7A-7B may be omitted, repeated, and/or performed in a different order. Accordingly, embodiments of the invention should not be considered limited to the specific arrangement of steps shown in Figures 7A-7B.
  • Referring to Figure 7A, in Step 700, a virtual machine (VM) to migrate is identified. The determination of whether to migrate a given VM may be based on any number of factors, some of which are discussed above. In Step 702, migration criteria for the VM are obtained. In one embodiment of the invention, the migration criteria correspond to the bandwidth constraint of the VM (e.g., the minimum bandwidth and/or maximum bandwidth available to the VM), a hardware constraint (e.g., minimum amount of computing resources required by the VM), a software constraint (e.g., version of host operating system required by VM), and/or any other constraint required by the VM. In one embodiment of the invention, the migration criteria may be obtained from the VM, the host on which the VM is executing, the control operating system, or any combination thereof.
  • In Step 704, the control operating system sends a request including the migration criteria to hosts executing on blades in the blade chassis. In one embodiment of the invention, the control operating system uses a multicast message to send the request. In Step 706, the control operating system receives responses from the hosts. The responses may include: (i) a response indicating that the host which sent the response is unable to satisfy the migration criteria or (ii) a response indicating that the host which sent the response is able to satisfy the migration criteria.
  • In Step 708, a determination is made, using the responses received in Step 706, about whether there are sufficient resources available to migrate the VM. If there are insufficient resources, the method proceeds to Figure 7B (described below). Alternatively, if there are sufficient resources, the method proceeds to Step 710. In Step 710, a target host is selected. The target host corresponds to a host to which the VM will be migrated. This selection is made by the control operating system based on the responses received in Step 706.
  • In Step 712, execution on the VM is suspended. In one embodiment of the invention, suspending the VM may also include suspending execution of associated VNICs (discussed below). In Step 714, state information required to migrate the VM is obtained. In one embodiment of the invention, the state information corresponds to information required to resume execution of the VM on the target host from the state of the VM prior to being suspended in Step 712.
  • In Step 716, the VNIC(s) to migrate with the VM are identified. Identifying the VNIC(s) corresponds to determining which VNIC(s) are associated with the VM. In one embodiment of the invention, a VNIC is associated with the VM if the VNIC is executing on the same host as the VM and the VM receives packets from and/or transmits packets to the VNIC. In Step 718, information required to migrate the VNIC(s) identified in Step 716 is obtained. In one embodiment of the invention, the information corresponds to information required to resume execution of the VNIC on the target host from the state of the VNIC prior to suspending the VM in Step 712.
  • In Step 720, the VM and VNIC(s) are migrated to the target host. In Step 722, the VM and VNIC(s) are configured on the target host. In one embodiment of the invention, the VM and VNIC(s) are configured such that they operate in the same manner on the target host as they operated on the source host (i.e., the host from which they were migrated). Configuring the VM and VNICs may also include configuring various portions of the target host. In one embodiment of the invention, the VM and VNIC(s) are configured using the information obtained in Steps 714 and 718. In one embodiment of the invention, Step 722 is initiated and monitored by the control operating system. In Step 724, the virtual switching table is updated to reflect that the VNIC(s) identified in Step 716 are on the target host. In Step 726, the execution of the VM is resumed on the target host.
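The suspend/migrate/resume sequence of Steps 712-726 may be sketched, for illustration only, as follows. The host and switching-table representations below are hypothetical simplifications of the state information and virtual switching table described above.

```python
# Hypothetical sketch of Steps 712-726: suspend the VM, carry its state and
# its VNIC(s) to the target host, update the virtual switching table, and
# resume execution.

def migrate(vm_name, vnic_names, source, target, switching_table):
    # Step 712: suspend execution of the VM (and its associated VNICs).
    source["vms"][vm_name]["state"] = "suspended"
    # Steps 714/718: the captured state travels with the VM and VNIC(s).
    vm = source["vms"].pop(vm_name)
    vnics = {n: source["vnics"].pop(n) for n in vnic_names}
    # Steps 720/722: place and configure the VM and VNIC(s) on the target.
    target["vms"][vm_name] = vm
    target["vnics"].update(vnics)
    # Step 724: update the virtual switching table to point at the target.
    for n in vnic_names:
        switching_table[n] = target["name"]
    # Step 726: resume execution of the VM on the target host.
    vm["state"] = "running"

source = {"name": "host_a", "vms": {"vm_a": {"state": "running"}}, "vnics": {"vnic_b": {}}}
target = {"name": "host_c", "vms": {}, "vnics": {}}
table = {"vnic_b": "host_a", "vnic_c": "host_b"}
migrate("vm_a", ["vnic_b"], source, target, table)
print(table["vnic_b"], target["vms"]["vm_a"]["state"])  # host_c running
```

Because only the switching-table entry changes, the virtual wire between VNIC B and VNIC C is preserved across the migration, which is the point of Step 724.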
  • Referring to Figure 7B, as described above, if there are insufficient resources, the method proceeds to Figure 7B. In Step 726, the lowest priority active virtual wire operating in the blade chassis is obtained. In one embodiment of the invention, the control operating system maintains a data structure which includes the priorities of the various virtual wires operating in the blade chassis. Further, in one embodiment of the invention, only the control operating system includes functionality to set and change the priorities of the virtual wires.
  • In Step 728, the lowest priority active virtual wire is suspended. In one embodiment of the invention, suspending the lowest priority active virtual wire includes suspending operation of the VNICs on either end of the virtual wire. In addition, the VMs associated with the VNICs may also be suspended. Further, suspending the VNICs and, optionally, the VMs, results in freeing bandwidth and computing resources on the respective blades on which the suspended VNICs and VMs were executed.
  • In Step 730, the control operating system sends a request including the migration criteria to hosts executing on blades in the blade chassis. In one embodiment of the invention, the control operating system uses a multicast message to send the request. In Step 732, the control operating system receives responses from the hosts. The responses may include: (i) a response indicating that the host which sent the response is unable to satisfy the migration criteria or (ii) a response indicating that the host which sent the response is able to satisfy the migration criteria.
  • In Step 734, a determination is made, using the responses received in Step 732, about whether there are sufficient resources available to migrate the VM. If there are insufficient resources, the method proceeds to Step 726. Alternatively, if there are sufficient resources, the method proceeds to Step 710 in Figure 7A.
  • In one embodiment of the invention, if one or more virtual wires are suspended per Step 728, then the method described in Figures 7A and 7B may be used to migrate the VMs associated with the suspended virtual wires. In one embodiment of the invention, the order in which VMs are migrated to resume activity of suspended virtual wires is based on the priority of the suspended virtual wires.
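The loop of Figure 7B (Steps 726-734) may be sketched, for illustration only, as follows. The priority encoding (a lower number meaning lower priority) and the bandwidth bookkeeping are assumptions of this sketch, not details of the claimed system.

```python
# Hypothetical sketch of Fig. 7B: repeatedly suspend the lowest-priority
# active virtual wire until the freed bandwidth satisfies the migration
# criteria, or no active wire remains.

def suspend_until_sufficient(wires, available_gbps, required_gbps):
    """wires: {name: {"priority": int, "bandwidth_gbps": int, "active": bool}}.
    Assumption: a lower priority number means a lower-priority wire.
    Returns the list of suspended wires, or None if migration cannot proceed."""
    suspended = []
    while available_gbps < required_gbps:
        active = [(w["priority"], name) for name, w in wires.items() if w["active"]]
        if not active:
            return None  # nothing left to suspend
        _, lowest = min(active)              # Step 726: lowest-priority active wire
        wires[lowest]["active"] = False      # Step 728: suspend VNICs on both ends
        available_gbps += wires[lowest]["bandwidth_gbps"]  # resources are freed
        suspended.append(lowest)
    return suspended

wires = {"vw_a": {"priority": 1, "bandwidth_gbps": 5, "active": True},
         "vw_b": {"priority": 3, "bandwidth_gbps": 3, "active": True}}
result = suspend_until_sufficient(wires, available_gbps=7, required_gbps=11)
print(result)  # ['vw_a']
```

As described above, the suspended wires would later be resumed in priority order by migrating their associated VMs with the same method.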
  • Figures 8A-8B show an example of migrating a virtual machine in accordance with one or more embodiments of the invention. Figures 8A-8B are provided as examples only, and should not be construed as limiting the scope of the invention.
  • Referring to Figure 8A, consider the scenario in which the system includes three blades (800, 802, 804) connected to the chassis interconnect (824) in a blade chassis (not shown). The system is initially configured such that Blade A (800) includes Host A (806), Blade B (802) includes Host B (808) and Blade C (804) includes Host C (810).
  • As shown in Figure 8A, Host A (806) includes VNIC A (818) associated with Control Operating System (OS) (812) and VNIC B (820) associated with Virtual Machine (VM) A (814). Further, Host B (808) includes VNIC C (822) associated with VM B (816). Host C (810) initially does not include any VMs.
  • As shown in Figure 8A, VM A (814) communicates with VM B (816) using a virtual wire with a bandwidth limit of 5 gigabits per second (GBPS). The virtual wire connects VNIC B (820) with VNIC C (822). As such, network traffic from VM A (814) is transmitted through VNIC B (820) to receive buffer C (830) where it remains until it is sent to or requested by VNIC C (822). Similarly, network traffic from VM B (816) is transmitted through VNIC C (822) to receive buffer B (828) where it remains until it is sent to or requested by VNIC B (820). The virtual switching table (832) in the interface manager (826) implements the virtual wire.
  • After the system has been configured as described above and shown in Figure 8A, a determination is made that VM A (814) requires additional processing resources and that such resources are not available on Host A (806). In accordance with Figures 7A and 7B, the control OS determines the migration criteria for VM A (814). The migration criteria include a hardware constraint defining the processing resources required by VM A (814) as well as a bandwidth constraint (i.e., 5 GBPS).
  • The control OS then sends a request including the migration criteria to the hosts in the blade chassis (i.e., Host B (808) and Host C (810)). Host B (808) responds that it does not have sufficient resources to satisfy the migration criteria. Host C responds that it has sufficient resources to satisfy the migration criteria. At this stage, the control OS selects Host C (810) as the target host. The control OS then initiates the migration of VM A (814) along with VNIC B (820) to Host C (810) in accordance with Steps 712-726.
  • The result of the migration is shown in Figure 8B. As shown in Figure 8B, after the migration, VM A (814) and VNIC B (820) are located on Host C (810). Further, the virtual wire is preserved across the migration.
  • In one or more embodiments of the invention, migration of virtual machines is based on one or more migration policies. A migration policy defines when to coalesce or expand the number of blades executing in a blade chassis. Specifically, a migration policy may be used to perform power management. The migration policy may be a coalesce migration policy, an expansion migration policy, or a time based migration policy.
  • In one or more embodiments of the invention, a coalesce policy defines when to decrease the number of blades executing in the blade chassis. Decreasing the number of blades executing may cause a decrease in the total power usage by the blade chassis. In one or more embodiments of the invention, the coalesce policy is based on the percentage resource usage on the blade chassis. A percentage resource usage is the amount that one or more resources on the blade chassis are used as compared to the total resources available. For example, the resources considered may include processor utilization, packet throughput, latency, input/output requests, and bandwidth availability.
  • The coalesce migration policy may be defined with different resource usage percentage ranges. In the lowest range (i.e., the percentage resource used is low) the migration policy may specify that the virtual machines on the blade are migratable and, subsequently, the blade may be powered down. In the medium range, the migration policy may specify that the blade may accept virtual machines from another blade in order for the other blade to be powered down. In the high range, the migration policy may specify that the blade should not be powered down and the blade cannot accept additional virtual machines. For example, consider the scenario in which a migration policy defines the low range, medium range, and high range as below 40%, between 40% and 80%, and above 80% respectively. If the percentage resource usage on blade X is 25% and on blade Y is 57%, then based on the migration policy, virtual machines on blade X may be migrated to blade Y. After the virtual machines are migrated off of blade X, blade X may be powered down.
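The three ranges of the coalesce migration policy may be sketched, for illustration only, as follows; the 40% and 80% thresholds are taken from the example above, and the action labels are hypothetical.

```python
# Hypothetical sketch of the coalesce migration policy ranges: map a blade's
# percentage resource usage to the action the policy specifies.

def coalesce_action(usage_pct, low=40, high=80):
    if usage_pct < low:
        return "migrate-and-power-down"   # low range: VMs are migratable
    elif usage_pct <= high:
        return "accept-vms"               # medium range: may absorb VMs
    else:
        return "no-change"                # high range: neither

print(coalesce_action(25))  # migrate-and-power-down (blade X in the example)
print(coalesce_action(57))  # accept-vms (blade Y in the example)
```

With these thresholds, blade X's VMs migrate to blade Y and blade X may then be powered down, as in the example.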
  • Another type of migration policy is an expansion migration policy. In one or more embodiments of the invention, an expansion migration policy defines performance standards for the virtual machines executing on a blade. The performance standard may be defined on a per-blade or per-virtual machine basis. For example, the performance standard may require that the throughput of packets for the virtual machine is at a certain level, the latency for processing instructions is below a specified level, and/or the available bandwidth for the blade is at a specified level. When execution of a virtual machine does not comply with the performance standards, the expansion migration policy may indicate that one or more virtual machines should be migrated from the current blade to a different blade. If none of the currently executing blades has resources available, then a blade which was previously powered down may be powered up. Thus, the virtual machines may be migrated to the newly powered up blade.
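A per-VM expansion-policy check may be sketched, for illustration only, as follows; the metric names and threshold values are hypothetical stand-ins for the performance standards described above.

```python
# Hypothetical sketch of an expansion migration policy check: flag the VMs
# whose measured performance falls short of their per-VM standards.

def below_standard(measured, standard):
    return (measured["throughput_pps"] < standard["min_throughput_pps"]
            or measured["latency_ms"] > standard["max_latency_ms"])

def vms_to_migrate(measurements, standards):
    return [vm for vm, m in measurements.items()
            if below_standard(m, standards[vm])]

standards = {"vm_a": {"min_throughput_pps": 1000, "max_latency_ms": 10}}
measurements = {"vm_a": {"throughput_pps": 400, "latency_ms": 12}}
print(vms_to_migrate(measurements, standards))  # ['vm_a']
```

The flagged VMs (or, as noted above, other VMs whose departure would free the needed resources) then become candidates for migration to an existing or newly powered-up blade.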
  • In one or more embodiments of the invention, the migration policy may be a time based migration policy. A time based migration policy defines, for different times, the number of blades that should be powered up. For example, during the day (e.g., between 8:00 AM and 6:00 PM), the time based migration policy may specify that all blades should be powered up. During the evening and on weekends, the time based migration policy may specify that only half of the blades should be powered up. Thus, when the blades are expected to be operating at peak capacity, such as during the day, the migration policy specifies that all blades should be powered up. Conversely, when the blades are expected to be operating minimally, such as at night or on the weekends, the migration policy specifies that only a portion of the blades should be powered up in order to conserve energy.
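A time-based migration policy may be sketched, for illustration only, as follows; the daytime window and the half-capacity schedule follow the example above, while the function and parameter names are hypothetical.

```python
# Hypothetical sketch of a time-based migration policy: map the current hour
# and day to the number of blades that should be powered up.

def blades_powered_up(total_blades, hour, is_weekend):
    if not is_weekend and 8 <= hour < 18:   # weekday daytime: peak capacity
        return total_blades                 # all blades powered up
    return total_blades // 2                # evenings/weekends: half powered up

print(blades_powered_up(8, hour=10, is_weekend=False))  # 8
print(blades_powered_up(8, hour=22, is_weekend=False))  # 4
```

A real deployment would read the schedule from the policy itself rather than hard-coding it; the hard-coded values here simply mirror the example.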
  • Figures 9A-10 show flowcharts of migrating virtual machines for power management in one or more embodiments of the invention. While the various steps in these flowcharts are presented and described sequentially, one of ordinary skill will appreciate that some or all of the steps may be executed in different orders, may be combined or omitted, and some or all of the steps may be executed in parallel.
  • Figure 9A shows a flowchart of a method for migrating virtual machines based on a coalesce migration policy. In step 900, resource usage data for a blade is obtained. Specifically, each blade monitors how much of each resource specified in the coalesce migration policy is in use by the blade. Monitoring resource usage may be performed using techniques known in the art. If a centralized system for determining when to migrate is used, obtaining the resource usage data may be performed by the control OS. Alternatively, a peer-to-peer based system may be used to determine resource usage.
  • In step 902, resource usage data is compared with the migration policy. In one or more embodiments of the invention, the comparison is performed on a per- blade basis. The blade being compared with the migration policy may be referred to as the current blade. Comparing the current blade with the migration policy may be performed by comparing the amount of each resource used as defined in the resource usage data with the migration policy.
  • In step 904, a determination is made whether to migrate virtual machines from the current blade. Specifically, a determination is made whether the migration policy indicates that virtual machines should be migrated from the current blade.
  • If a determination is made to migrate virtual machines from the current blade, then a determination is made whether a target blade is available in step 906. One method for determining whether a target blade is available may be performed by gathering migration criteria for each virtual machine executing on the current blade. If a host on the current blade is performing step 902 and/or step 904, the host may send a request to hosts executing on each blade in the blade chassis. The request may include migration criteria for each virtual machine executing on the current blade. Hosts on the other blades may respond to the request with an indication of whether they can satisfy the migration criteria of one or more virtual machines. If the host on another blade can satisfy the migration criteria, then this host becomes the target host for one or more virtual machines on the current blade. In one or more embodiments of the invention, more than one target host may exist.
  • Rather than sending a request to the hosts on the different blades, the hosts may use a shared data repository in the network express manager. Specifically, each host may read and write migration criteria, resource availability, and/or responses to requests to the shared data repository. Thus, by accessing the shared data repository, the current blade may determine whether a target blade is available.
  • Alternatively, the current blade may send a message to the control OS that the virtual machines on the current blade need to migrate or the control OS may be performing step 902 and/or step 904. In such a scenario, the control OS may manage the migration as discussed above and in Figures 7A and 7B. In one or more embodiments of the invention, when the migration is based on a coalesce migration policy, no virtual wires are suspended to perform the migration. Rather, in such a scenario, the determination is made that a target blade is unavailable. However, in alternative embodiments, the control OS may suspend virtual wires based on the priority. For example, the migration policy may indicate that virtual wires operating at a specified low level of priority may be suspended for the purposes of power management. In such a scenario, if sufficient resources are available after suspending one or more virtual wires, then the target host is considered available.
  • If a target blade is available, then virtual machines are migrated from the current blade to the target blade in the blade chassis in step 908. Migrating the virtual machines may be performed as discussed above and in step 712-726 of Figure 7A. In step 910, the current blade is powered down. Specifically, the blade may be powered down or placed in standby mode.
  • If, as an alternative in step 904, a determination is made not to migrate virtual machines from the current blade, then a determination is made whether to accept virtual machines on the current blade in Step 912. Specifically, a determination is made whether, based on the resource usage data, the coalesce migration policy indicates that the current blade can accept virtual machines.
  • If a determination is made to accept virtual machines in step 912, then a determination is made whether migratable virtual machines exist in the blade chassis in step 914. A migratable virtual machine is a virtual machine on a different blade, such as a blade being powered down, that can be migrated to the current blade. Determining whether migratable virtual machines exist may be based on, for example, whether any requests with migration criteria are sent to the current blade, whether the shared data repository indicates that at least one blade is powering down, or whether the control OS identifies a virtual machine to migrate to the current blade after comparing resource usage data on the different blades. The aforementioned examples are only a few of the techniques that may be used to identify whether migratable virtual machines exist. Other techniques may be used without departing from the scope of the invention.
  • If migratable virtual machines exist, then the blade to power down is selected in step 916. The selected blade to power down corresponds to a blade that may be powered down in accordance with the migration policy and has the migratable virtual machines. In step 918, the migratable virtual machines are migrated from the selected blade to the current blade. Migrating the virtual machines may be performed as discussed above and in step 712-726 of Figure 7A. In step 920, the selected blade is powered down. Specifically, the blade may be powered down or placed in standby mode. Returning to step 912 and step 914, if a determination is made not to accept virtual machines on the current blade or if no migratable virtual machines exist, then the current blade may continue executing without migrating virtual machines to the current blade.
  • Figure 9B shows a flowchart of a method for migrating virtual machines based on an expansion migration policy. In step 950, the performance of each virtual machine executing on the blade is monitored. Monitoring the performance of each virtual machine may be performed using techniques known in the art. In step 952, the performance is compared with the migration policy to identify virtual machines not adhering to the performance standards of the migration policy. If any virtual machines are not adhering to the performance standards, then a determination may be made as to which resources (e.g., main memory, number of processors, guaranteed bandwidth, etc.) would be required for the identified virtual machines to comply with the performance standards.
  • At this stage, a determination may be made to migrate one or more virtual machines from the current blade. The virtual machines to migrate are selected in step 954. The selected virtual machines may be the identified virtual machines not adhering to the performance standards or one or more of the other virtual machines that are executing on the blade. For example, if the resources required by the identified virtual machines would become available once other virtual machines are migrated, then a determination may be made to migrate the other virtual machines. The determination may be based on the allocation of resources. For example, if load balancing across the blades is desired, then the selected virtual machines may be those whose migration achieves load balancing.
  • In step 956, a determination is made whether to migrate the selected virtual machine(s) to an existing blade. Determining whether to migrate the selected virtual machines is based on whether an existing blade has sufficient resources available for the selected virtual machine. For example, as discussed above, determining whether a blade has sufficient resources may be performed by sending a request with migration criteria to the host of each blade. If a host responds that sufficient resources are available, then the selected virtual machines can be migrated to an existing blade. In step 958, the selected virtual machines are migrated to an existing blade. Migrating the selected virtual machines to an existing blade may be performed as discussed above.
  • Returning to step 956, if a determination is made not to migrate the virtual machines to an existing blade, then a determination is made whether to migrate the virtual machines to a new blade in step 960. The determination whether to migrate to a new blade may be based on whether a powered down blade exists and can be powered up, and whether the resources of the powered down blade are sufficient to satisfy the migration criteria for the selected virtual machines.
  • If a determination is made to migrate virtual machines to a new blade, then a blade is powered up in step 962. Powering up the blade may be performed using techniques known in the art. In step 966, the selected virtual machines are migrated to the powered up blade. Migrating the selected virtual machines may be performed as discussed above.
  • Figure 10 shows a flowchart of a method for migrating virtual machines based on a time based migration policy. In step 1000, the time is monitored. In step 1002, a determination is made whether to trigger a time-based migration. Triggering a time-based migration is based on whether the current time matches a time in the time based migration policy. If a determination is made not to trigger a time based migration, then monitoring the time is continued in step 1000.
  • Alternatively, if a determination is made to trigger a time based migration, then a determination is made whether to expand the number of blades in use (i.e., powered up) in step 1004. Specifically, a determination is made whether the time-based migration policy indicates that the number of blades should be expanded. If a determination is made to expand the number of blades in use, then a new blade is powered up in step 1006. Powering up a new blade may be performed using techniques known in the art.
  • In step 1008, virtual machines to migrate to the new blade are selected. In one or more embodiments of the invention, the virtual machines are selected based on migration criteria for each virtual machine and/or to achieve load balancing across the blades in the blade chassis. In step 1010, virtual machines are migrated to the new blade. Migrating the virtual machines may be performed as discussed above.
  • Alternatively, if in step 1004, a determination is made not to expand the number of blades in use, then the number of blades in use is coalesced according to the time based migration policy. Specifically, the migration policy indicates that the number of blades should be reduced. Accordingly, a blade to power down is selected in step 1012. Selecting the blade to power down may be based on load balancing, the current resource usage for each blade and other such criteria. For example, the blade that is selected may be the blade that has the least number of virtual machines executing on the blade. In step 1014, virtual machines are migrated from the selected blade and the blade is powered down in step 1016. Migrating virtual machines and powering down the blade may be performed as discussed above.
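The coalesce branch of Figure 10 (Steps 1012-1016) may be sketched, for illustration only, as follows. The fewest-VMs selection rule follows the example above; the round-robin redistribution of the migrated VMs is an assumption of this sketch.

```python
# Hypothetical sketch of Steps 1012-1016: select the powered-up blade
# running the fewest VMs, migrate its VMs to the surviving blades, and
# power it down.

def coalesce_one_blade(blades):
    """blades: {name: {"powered": bool, "vms": [...]}}.
    Returns the name of the blade that was powered down."""
    powered = [b for b, info in blades.items() if info["powered"]]
    victim = min(powered, key=lambda b: len(blades[b]["vms"]))  # Step 1012
    survivors = [b for b in powered if b != victim]
    for i, vm in enumerate(blades[victim]["vms"]):              # Step 1014
        blades[survivors[i % len(survivors)]]["vms"].append(vm)
    blades[victim]["vms"] = []
    blades[victim]["powered"] = False                           # Step 1016
    return victim

blades = {"blade_a": {"powered": True, "vms": ["vm_a", "vm_b"]},
          "blade_b": {"powered": True, "vms": ["vm_c"]},
          "blade_c": {"powered": True, "vms": ["vm_d", "vm_e"]}}
victim = coalesce_one_blade(blades)
print(victim)  # blade_b
```

In the actual embodiments the destination of each migrated VM is chosen via the migration criteria of Figures 7A-7B rather than round-robin.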
  • Figures 11A-11C show an example of migrating a virtual machine in accordance with one or more embodiments of the invention. Figures 11A-11C are provided as examples only, and should not be construed as limiting the scope of the invention.
  • Referring to Figure 11A, consider the scenario in which the system includes three blades (1100, 1102, 1104) connected to the chassis interconnect (not shown) in a blade chassis (not shown). The system is initially configured such that Blade A (1100) includes Host A (1106), Blade B (1102) includes Host B (1108) and Blade C (1104) includes Host C (1110).
  • As shown in Figure 11A, Host A (1106) includes VNIC A (1122) associated with VM A (1112) and VNIC B (1124) associated with VM B (1114). Further, Host B (1108) includes VNIC C1 (1126) and VNIC C2 (1128) both associated with VM C (1116). Finally, Host C (1110) includes VNIC D (1130) associated with VM D (1118) and VNIC E (1132) associated with VM E (1120).
  • As shown in Figure 11A, VM A (1112) communicates with VM E (1120) using virtual wire (VW) A (1138) with a bandwidth limit of 5 gigabits per second (GBPS). VW A (1138) connects VNIC A (1122) with VNIC E (1132). Further, VM B (1114) communicates with VM C (1116) using VW B (1134) with a bandwidth limit of 3 GBPS. VW B (1134) connects VNIC B (1124) with VNIC C1 (1126). Finally, VM C (1116) communicates with VM D (1118) using VW C2 (1136) with a bandwidth limit of 8 GBPS. VW C2 (1136) connects VNIC C2 (1128) with VNIC D (1130). Each of the VWs is associated with a priority. The priority of the VWs in Figure 11A from highest to lowest is: VW B (1134), VW C2 (1136), and VW A (1138).
  • After the system has been configured as described above and shown in Figure 11A, the resource usage data is monitored for each blade (1100, 1102, 1104). The resource usage data indicates that blade B (1102) is using only 10% of its resources. Accordingly, the coalesce migration policy indicates that blade B (1102) should be powered down. As such, VM C (1116) must be migrated to another host.
  • The control OS then sends a request including the migration criteria to the hosts in the blade chassis (i.e., Host A (1106) and Host C (1110)). The migration criteria indicate the amount of resources required by VM C (1116). Host C (1110) responds that it is using only 60% of its resources and, accordingly, has sufficient resources available. At this stage, the control OS selects Host C (1110) as the target host. The control OS then initiates the migration of VM C (1116) along with VNIC C1 (1126) and VNIC C2 (1128) to Host C (1110) in accordance with Steps 712-726. Once the migration is complete, Blade B (1102) is powered down as shown in Figure 11B. As shown in Figure 11B, the VWs are preserved across the migration. However, because VNIC C2 (1128) and VNIC D (1130) are now located on Host C (1110), virtual switch (VS) C2 (1137) instead of VW C2 (1136) is used to connect VNIC C2 (1128) and VNIC D (1130).
  • Referring to Figure 11C, consider the scenario in which the system shown in Figure 11A is subject to a time-based migration policy and that the current time is 6:00pm. The time-based migration policy indicates that only two blades should be executing at 6:00pm. The control OS may select blade B (1102) to power down. As such, VM C (1116) must be migrated to another host.
  • In accordance with Figures 7A and 7B, the control OS (not shown) determines the migration criteria for VM C (1116). The migration criteria indicate the amount of resources required by VM C (1116) and include a bandwidth constraint (i.e., 11 GBPS). The control OS then sends a request including the migration criteria to the hosts in the blade chassis (i.e., Host A (1106) and Host C (1110)). Both Host A (1106) and Host C (1110) respond that they do not have sufficient resources to satisfy the migration criteria. At this stage, the control OS, pursuant to Figure 7B, identifies the lowest priority active VW (i.e., VW A). VW A (1138) is subsequently suspended. Suspending VW A (1138) includes suspending VM A (1112), VNIC A (1122), VM E (1120), and VNIC E (1132).
  • The control OS then resends the request including the migration criteria to the hosts in the blade chassis (i.e., Host A (1106) and Host C (1110)). Host A (1106) responds that it does not have sufficient resources to satisfy the migration criteria. Host C (1110) responds that it has sufficient resources to satisfy the migration criteria. At this stage, the control OS selects Host C (1110) as the target host.
  • The control OS then initiates the migration of VM C (1116) along with VNIC C1 (1126) and VNIC C2 (1128) to Host C (1110) in accordance with Steps 712-726. Once the migration is complete, Blade B (1102) is powered down. The result of the migration is shown in Figure 11C. As shown in Figure 11C, the VWs are preserved across the migration. However, because VNIC C2 (1128) and VNIC D (1130) are now located on Host C (1110), virtual switch (VS) C2 (1137) instead of VW C2 (1136) is used to connect VNIC C2 (1128) and VNIC D (1130).
  • Those skilled in the art will appreciate that while the invention has been described with respect to using blades, the invention may be extended for use with other computer systems, which are not blades. Specifically, the invention may be extended to any computer, which includes at least memory, a processor, and a mechanism to physically connect to and communicate over the chassis bus. Examples of such computers include, but are not limited to, multiprocessor servers, network appliances, and light-weight computing devices (e.g., computers that only include memory, a processor, a mechanism to physically connect to and communicate over the chassis bus), and the necessary hardware to enable the aforementioned components to interact.
  • Further, those skilled in the art will appreciate that if one or more computers, which are not blades, are used to implement the invention, then an appropriate chassis may be used in place of the blade chassis.
  • Software instructions to perform embodiments of the invention may be stored on a computer readable medium such as a compact disc (CD), a diskette, a tape, a file, or any other computer readable storage device.
  • While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this disclosure, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as disclosed herein. Accordingly, the scope of the invention should be limited only by the attached claims.

Claims (14)

  1. A method for power management, comprising:
    gathering resource usage data for a first blade (102) and a second blade (104) on a blade chassis (100);
    migrating each virtual machine "VM" (202; 204) executing on the first blade to the second blade based on the resource usage data and a first migration policy, wherein the first migration policy defines when to condense the number of blades operating on the blade chassis;
    wherein at least one VM executing on the first blade is connected to a VM on the second blade using a virtual wire, wherein the connectivity provided by the virtual wire is maintained during the migration of the at least one VM to the second blade, and wherein the virtual wire is implemented by a virtual switching table;
    wherein the migrating comprises:
    obtaining migration criteria for a first VM executing on the first blade,
    suspending execution of the first VM on the first blade and obtaining information to migrate the first VM,
    identifying a first virtual network interface "VNIC" executing in the first blade that is associated with the first VM and obtaining information required to migrate the first VNIC,
    migrating the first VM and the first VNIC to the second blade using the information required to migrate the first VNIC,
    configuring the first VNIC and the first VM to satisfy the migration criteria,
    updating the virtual switching table in the blade chassis to reflect the migration of the first VM from the first blade to the second blade, and
    resuming execution of the first VM on the second blade; and
    powering down the first blade after each VM executing on the first blade is migrated from the first blade.
  2. The method of claim 1, wherein the first migration policy comprises a power management policy defining a minimum percentage of resource usage for the first blade, and wherein each VM executing on the first blade is migrated from the first blade when a current percentage of resource usage is below the minimum percentage of resource usage.
  3. The method of claim 1 or 2, further comprising:
    monitoring performance of each VM executing on the second blade;
    identifying the second blade as failing to comply with a performance standard in a second migration policy, wherein the second migration policy defines when to expand the number of blades operating on the blade chassis;
    selecting a set of virtual machines executing on the second blade to migrate to a third blade based on the second migration policy; and
    migrating the set of virtual machines to the third blade.
  4. The method of claim 3, further comprising:
    powering up the third blade after identifying the second blade as failing to comply with the performance standard.
  5. The method of claim 3, further comprising:
    sending a request comprising migration criteria to the third blade, wherein the migration criteria identifies minimum performance requirements for executing the set of virtual machines; and
    receiving a response to the request from the third blade, wherein the response indicates that the third blade can satisfy the migration criteria,
    wherein the third blade is powered up when the second blade is identified as failing to comply with the performance standard.
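The request/response handshake of claim 5 (send the migration criteria to the candidate blade, proceed only if the response indicates they can be satisfied) is simple enough to sketch. The field names (`free_bandwidth`, `min_bandwidth`, `powered`) are assumptions made for illustration:

```python
# Minimal sketch of the claim 5 handshake; names are invented, and the
# ordering of power-up versus the criteria check is one possible reading.

def negotiate_target(target, criteria):
    """Send migration criteria to a candidate blade; power it up only
    when its response indicates the criteria can be satisfied."""
    response = target["free_bandwidth"] >= criteria["min_bandwidth"]
    if response:                       # response indicates criteria are met
        target["powered"] = True       # power up the accepted target blade
    return response
```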
  6. The method of any preceding claim,
    wherein the migration criteria is a bandwidth constraint for the first VM.
  7. The method of any preceding claim, further comprising:
    triggering a first migration based on a first current time matching a first time specified in a migration policy, wherein the migration policy defines when to condense the number of blades operating on a blade chassis; and
    selecting a first blade to power down based on the triggering of the first migration;
    wherein migrating each VM executing on the first blade to a second blade is based on the triggering of the first migration.
  8. The method for power management of claim 1, wherein the blade chassis carries a plurality of blades, the method comprising:
    triggering a first migration based on a first current time matching a first time specified in the migration policy, wherein the migration policy defines when to condense the number of blades operating on a blade chassis;
    selecting the first blade from the plurality of blades to power down based on the triggering of the first migration;
    migrating each VM executing on the first blade to the second blade based on the triggering of the first migration; and
    powering down the first blade after each VM executing on the first blade is migrated from the first blade.
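Claims 7 and 8 trigger condensation on a wall-clock match against a time specified in the migration policy. A one-function sketch, with an assumed policy format (a set of `datetime.time` values under a `"condense_at"` key):

```python
# Illustrative time-based trigger for claims 7-8. The policy schema is an
# assumption; the patent only requires a current time matching a policy time.
from datetime import datetime, time

def should_condense(now, policy):
    """True when the current time (truncated to the minute) matches a
    condense time specified in the migration policy."""
    return now.replace(second=0, microsecond=0).time() in policy["condense_at"]
```

A scheduler would poll this once per minute and, on a match, select a blade to power down and begin migrating its VMs, as the claim recites.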
  9. The method of claim 7 or 8, further comprising:
    triggering a second migration based on a second current time matching a second time specified in the migration policy, wherein the migration policy further defines when to expand the number of blades operating on the blade chassis;
    selecting a third blade to power up based on the triggering of the second migration;
    powering up the third blade after triggering the second migration;
    selecting a set of virtual machines executing on the second blade to migrate to the third blade based on the second migration policy; and
    migrating the set of virtual machines to the third blade.
  10. The method of claim 8 or 9, wherein migrating comprises:
    obtaining migration criteria for a first VM executing on the first blade, wherein the migration criteria is a bandwidth constraint for the first VM;
    sending a request comprising the migration criteria to the second blade;
    receiving a response to the request from the second blade, wherein the response indicates that the second blade can satisfy the migration criteria;
    suspending execution of the first VM on the first blade and obtaining information to migrate the first VM;
    identifying a first VNIC executing in the first blade that is associated with the first VM and obtaining information required to migrate the first VNIC;
    migrating the first VM and the first VNIC to the second blade using the information required to migrate the first VNIC;
    configuring the first VNIC and the first VM to satisfy the migration criteria;
    updating a virtual switching table in the blade chassis to reflect the migration of the first VM from the first blade to the second blade; and
    resuming execution of the first VM on the second blade.
  11. The method of claim 10, wherein a second VNIC is located on a third blade in the chassis, wherein the second VNIC is associated with a second VM, wherein the first VNIC is connected to the second VNIC using a virtual wire, and wherein the virtual wire is implemented by the virtual switching table.
  12. The method of claim 7, 8 or 9, wherein migrating comprises:
    obtaining migration criteria for the first VM executing on the first blade,
    wherein the first VM is located on a first blade in a blade chassis,
    wherein the first VM is associated with a first VNIC,
    wherein the first VNIC is connected to a second VNIC on a third blade in the blade chassis using a first virtual wire having a first priority, and
    wherein the migration criteria is a bandwidth constraint for the first VM;
    sending a request comprising the migration criteria to the second blade in the blade chassis,
    wherein the third blade comprises a third VM associated with a third VNIC and a second VM associated with the second VNIC,
    wherein the second VNIC is connected to a fourth VNIC on the second blade in the blade chassis using a third virtual wire having a third priority,
    receiving a response to the request from the second blade, wherein the response indicates that the second blade cannot satisfy the migration criteria;
    suspending the third virtual wire, wherein the first priority is higher than the third priority and wherein suspending the third virtual wire comprises suspending the second VNIC and fourth VNIC;
    suspending execution of the first VM on the first blade and obtaining information to migrate the first VM;
    obtaining information required to migrate the first VNIC;
    migrating the first VM and the first VNIC to the second blade using the information required to migrate the first VM and the information required to migrate the first VNIC;
    configuring the first VNIC and the first VM to satisfy the migration criteria;
    updating a virtual switching table in the blade chassis to reflect the migration of the first VM from the first blade to the second blade; and
    resuming execution of the first VM on the second blade.
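Claim 12 covers the case where the target blade cannot initially satisfy the bandwidth criteria: a lower-priority virtual wire is suspended (both of its endpoint VNICs disabled) to free capacity for the higher-priority VM being migrated. A hedged sketch, with wire/VNIC structures invented for illustration:

```python
# Sketch of the claim 12 priority rule; data structures are assumptions.
# Lower-priority active wires are suspended until the needed bandwidth fits.

def free_bandwidth(wires, needed_priority, needed_bw, capacity_left):
    """Suspend virtual wires with priority below `needed_priority`, lowest
    first, until `needed_bw` fits. Returns (suspended wires, new capacity)."""
    suspended = []
    for wire in sorted(wires, key=lambda w: w["priority"]):   # lowest first
        if capacity_left >= needed_bw:
            break
        if wire["priority"] < needed_priority and wire["active"]:
            wire["active"] = False                 # suspend the virtual wire
            for vnic in wire["vnics"]:
                vnic["enabled"] = False            # suspend both endpoint VNICs
            capacity_left += wire["bandwidth"]
            suspended.append(wire)
    return suspended, capacity_left
```

In the claim's scenario the first virtual wire's priority is higher than the third's, so the third wire (second and fourth VNICs) is the one suspended before the migration proceeds.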
  13. A computer program product comprising computer readable program code which when executed in a computer system causes said computer system to implement the method of any one of the preceding claims.
  14. The computer program product of claim 13 comprising a computer readable medium comprising the computer readable program code.
EP09774210.0A 2008-06-30 2009-06-29 Method and system for power management in a virtual machine environment without disrupting network connectivity Active EP2304565B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/165,456 US8099615B2 (en) 2008-06-30 2008-06-30 Method and system for power management in a virtual machine environment without disrupting network connectivity
PCT/US2009/049003 WO2010002759A1 (en) 2008-06-30 2009-06-29 Method and system for power management in a virtual machine environment without disrupting network connectivity

Publications (2)

Publication Number Publication Date
EP2304565A1 EP2304565A1 (en) 2011-04-06
EP2304565B1 true EP2304565B1 (en) 2020-03-25

Family

ID=40973127

Family Applications (1)

Application Number Title Priority Date Filing Date
EP09774210.0A Active EP2304565B1 (en) 2008-06-30 2009-06-29 Method and system for power management in a virtual machine environment without disrupting network connectivity

Country Status (4)

Country Link
US (2) US8099615B2 (en)
EP (1) EP2304565B1 (en)
CN (1) CN102105865B (en)
WO (1) WO2010002759A1 (en)

Families Citing this family (95)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4488072B2 (en) * 2008-01-18 2010-06-23 日本電気株式会社 Server system and power reduction method for server system
JP2010097533A (en) * 2008-10-20 2010-04-30 Hitachi Ltd Application migration and power consumption optimization in partitioned computer system
US8214829B2 (en) 2009-01-15 2012-07-03 International Business Machines Corporation Techniques for placing applications in heterogeneous virtualized systems while minimizing power and migration cost
JP5476764B2 (en) * 2009-03-30 2014-04-23 富士通株式会社 Server apparatus, computer system, program, and virtual computer migration method
US9852011B1 (en) * 2009-06-26 2017-12-26 Turbonomic, Inc. Managing resources in virtualization systems
CN101937357B (en) * 2009-07-01 2013-11-06 华为技术有限公司 Virtual machine migration decision-making method, device and system
US8495629B2 (en) * 2009-09-24 2013-07-23 International Business Machines Corporation Virtual machine relocation system and associated methods
WO2011134716A1 (en) * 2010-04-26 2011-11-03 International Business Machines Corporation Managing a multiprocessing computer system
EP2521031A4 (en) * 2010-06-17 2014-01-08 Hitachi Ltd Computer system and upgrade method for same
CN101907917B (en) * 2010-07-21 2013-08-14 中国电信股份有限公司 Method and system for measuring energy consumption of virtual machine
US8505020B2 (en) * 2010-08-29 2013-08-06 Hewlett-Packard Development Company, L.P. Computer workload migration using processor pooling
WO2012045021A2 (en) * 2010-09-30 2012-04-05 Commvault Systems, Inc. Efficient data management improvements, such as docking limited-feature data management modules to a full-featured data management system
CN102445978B (en) * 2010-10-12 2016-02-17 深圳市金蝶中间件有限公司 A kind of method and apparatus of management data center
JP5691390B2 (en) * 2010-10-25 2015-04-01 サンケン電気株式会社 Power supply and program
US8484654B2 (en) * 2010-11-23 2013-07-09 International Business Machines Corporation Determining suitable network interface for partition deployment/re-deployment in a cloud environment
US9436515B2 (en) * 2010-12-29 2016-09-06 Sap Se Tenant virtualization controller for exporting tenant without shifting location of tenant data in a multi-tenancy environment
US9135037B1 (en) 2011-01-13 2015-09-15 Google Inc. Virtual network protocol
US8874888B1 (en) 2011-01-13 2014-10-28 Google Inc. Managed boot in a cloud system
US9990215B2 (en) * 2011-02-22 2018-06-05 Vmware, Inc. User interface for managing a virtualized computing environment
US9237087B1 (en) 2011-03-16 2016-01-12 Google Inc. Virtual machine name resolution
US8533796B1 (en) 2011-03-16 2013-09-10 Google Inc. Providing application programs with access to secured resources
GB2492172A (en) * 2011-06-25 2012-12-26 Riverbed Technology Inc Controlling the Operation of Server Computers by Load Balancing
US8667490B1 (en) * 2011-07-29 2014-03-04 Emc Corporation Active/active storage and virtual machine mobility over asynchronous distances
US9075979B1 (en) 2011-08-11 2015-07-07 Google Inc. Authentication based on proximity to mobile device
EP3605969B1 (en) 2011-08-17 2021-05-26 Nicira Inc. Distributed logical l3 routing
US8959367B2 (en) 2011-08-17 2015-02-17 International Business Machines Corporation Energy based resource allocation across virtualized machines and data centers
US8966198B1 (en) 2011-09-01 2015-02-24 Google Inc. Providing snapshots of virtual storage devices
TW201314433A (en) * 2011-09-28 2013-04-01 Inventec Corp Server system and power managing method data thereof
EP2748714B1 (en) 2011-11-15 2021-01-13 Nicira, Inc. Connection identifier assignment and source network address translation
US8958293B1 (en) 2011-12-06 2015-02-17 Google Inc. Transparent load-balancing for cloud computing services
US8800009B1 (en) 2011-12-30 2014-08-05 Google Inc. Virtual machine service access
US9047087B2 (en) * 2012-02-01 2015-06-02 Vmware, Inc. Power management and virtual machine migration between logical storage units based on quantity of shared physical storage units
US20130238773A1 (en) * 2012-03-08 2013-09-12 Raghu Kondapalli Synchronized and time aware l2 and l3 address learning
CN102662757A (en) * 2012-03-09 2012-09-12 浪潮通信信息系统有限公司 Resource demand pre-estimate method for cloud computing program smooth transition
US9015838B1 (en) 2012-05-30 2015-04-21 Google Inc. Defensive techniques to increase computer security
US8813240B1 (en) * 2012-05-30 2014-08-19 Google Inc. Defensive techniques to increase computer security
JP6044131B2 (en) * 2012-06-25 2016-12-14 富士通株式会社 Program, management server, and virtual machine migration control method
CN102769670A (en) * 2012-07-13 2012-11-07 中兴通讯股份有限公司 Method, device and system for migration of virtual machines
US9477710B2 (en) 2013-01-23 2016-10-25 Microsoft Technology Licensing, Llc Isolating resources and performance in a database management system
US9104455B2 (en) * 2013-02-19 2015-08-11 International Business Machines Corporation Virtual machine-to-image affinity on a physical server
JP2014186411A (en) * 2013-03-22 2014-10-02 Fujitsu Ltd Management device, information processing system, information processing method and program
US9602426B2 (en) 2013-06-21 2017-03-21 Microsoft Technology Licensing, Llc Dynamic allocation of resources while considering resource reservations
US10033693B2 (en) 2013-10-01 2018-07-24 Nicira, Inc. Distributed identity-based firewalls
US9374305B2 (en) * 2013-10-24 2016-06-21 Dell Products L.P. Packet transfer system
US9348654B2 (en) * 2013-11-19 2016-05-24 International Business Machines Corporation Management of virtual machine migration in an operating environment
US9215214B2 (en) 2014-02-20 2015-12-15 Nicira, Inc. Provisioning firewall rules on a firewall enforcing device
US9215210B2 (en) 2014-03-31 2015-12-15 Nicira, Inc. Migrating firewall connection state for a firewall service virtual machine
US9906494B2 (en) 2014-03-31 2018-02-27 Nicira, Inc. Configuring interactions with a firewall service virtual machine
US9503427B2 (en) 2014-03-31 2016-11-22 Nicira, Inc. Method and apparatus for integrating a service virtual machine
US9825913B2 (en) 2014-06-04 2017-11-21 Nicira, Inc. Use of stateless marking to speed up stateful firewall rule processing
US9729512B2 (en) 2014-06-04 2017-08-08 Nicira, Inc. Use of stateless marking to speed up stateful firewall rule processing
US9612765B2 (en) * 2014-11-19 2017-04-04 International Business Machines Corporation Context aware dynamic composition of migration plans to cloud
US9692727B2 (en) 2014-12-02 2017-06-27 Nicira, Inc. Context-aware distributed firewall
US9699060B2 (en) * 2014-12-17 2017-07-04 Vmware, Inc. Specializing virtual network device processing to avoid interrupt processing for high packet rate applications
US10320921B2 (en) 2014-12-17 2019-06-11 Vmware, Inc. Specializing virtual network device processing to bypass forwarding elements for high packet rate applications
US9965308B2 (en) * 2014-12-18 2018-05-08 Vmware, Inc. Automatic creation of affinity-type rules for resources in distributed computer systems
US9891940B2 (en) 2014-12-29 2018-02-13 Nicira, Inc. Introspection method and apparatus for network access filtering
CN106155812A (en) 2015-04-28 2016-11-23 阿里巴巴集团控股有限公司 Method, device, system and the electronic equipment of a kind of resource management to fictitious host computer
US10410155B2 (en) * 2015-05-01 2019-09-10 Microsoft Technology Licensing, Llc Automatic demand-driven resource scaling for relational database-as-a-service
CN105094944B (en) * 2015-06-10 2018-06-29 中国联合网络通信集团有限公司 A kind of virtual machine migration method and device
US10579403B2 (en) * 2015-06-29 2020-03-03 Vmware, Inc. Policy based provisioning of containers
US9755903B2 (en) 2015-06-30 2017-09-05 Nicira, Inc. Replicating firewall policy across multiple data centers
US10324746B2 (en) 2015-11-03 2019-06-18 Nicira, Inc. Extended context delivery for context-based authorization
EP3211531B1 (en) * 2016-02-25 2021-12-22 Huawei Technologies Co., Ltd. Virtual machine start method and apparatus
US10348685B2 (en) 2016-04-29 2019-07-09 Nicira, Inc. Priority allocation for distributed service rules
US10135727B2 (en) 2016-04-29 2018-11-20 Nicira, Inc. Address grouping for distributed service rules
US11425095B2 (en) 2016-05-01 2022-08-23 Nicira, Inc. Fast ordering of firewall sections and rules
US11171920B2 (en) 2016-05-01 2021-11-09 Nicira, Inc. Publication of firewall configuration
US11258761B2 (en) 2016-06-29 2022-02-22 Nicira, Inc. Self-service firewall configuration
US11088990B2 (en) 2016-06-29 2021-08-10 Nicira, Inc. Translation cache for firewall configuration
US10333983B2 (en) 2016-08-30 2019-06-25 Nicira, Inc. Policy definition and enforcement for a network virtualization platform
US10938837B2 (en) 2016-08-30 2021-03-02 Nicira, Inc. Isolated network stack to manage security for virtual machines
US10193862B2 (en) 2016-11-29 2019-01-29 Vmware, Inc. Security policy analysis based on detecting new network port connections
US10346191B2 (en) * 2016-12-02 2019-07-09 VMware, Inc. System and method for managing size of clusters in a computing environment
US10609160B2 (en) 2016-12-06 2020-03-31 Nicira, Inc. Performing context-rich attribute-based services on a host
US11032246B2 (en) 2016-12-22 2021-06-08 Nicira, Inc. Context based firewall services for data message flows for multiple concurrent users on one machine
US10802857B2 (en) 2016-12-22 2020-10-13 Nicira, Inc. Collecting and processing contextual attributes on a host
US10581960B2 (en) 2016-12-22 2020-03-03 Nicira, Inc. Performing context-rich attribute-based load balancing on a host
US10812451B2 (en) 2016-12-22 2020-10-20 Nicira, Inc. Performing appID based firewall services on a host
US10805332B2 (en) 2017-07-25 2020-10-13 Nicira, Inc. Context engine model
US10803173B2 (en) 2016-12-22 2020-10-13 Nicira, Inc. Performing context-rich attribute-based process control services on a host
US10942758B2 (en) * 2017-04-17 2021-03-09 Hewlett Packard Enterprise Development Lp Migrating virtual host bus adaptors between sets of host bus adaptors of a target device in order to reallocate bandwidth to enable virtual machine migration
US10778651B2 (en) 2017-11-15 2020-09-15 Nicira, Inc. Performing context-rich attribute-based encryption on a host
US10862773B2 (en) 2018-01-26 2020-12-08 Nicira, Inc. Performing services on data messages associated with endpoint machines
US10802893B2 (en) 2018-01-26 2020-10-13 Nicira, Inc. Performing process control services on endpoint machines
US10853126B2 (en) * 2018-07-26 2020-12-01 Vmware, Inc. Reprogramming network infrastructure in response to VM mobility
US10620987B2 (en) 2018-07-27 2020-04-14 At&T Intellectual Property I, L.P. Increasing blade utilization in a dynamic virtual environment
US11310202B2 (en) 2019-03-13 2022-04-19 Vmware, Inc. Sharing of firewall rules among multiple workloads in a hypervisor
US11539718B2 (en) 2020-01-10 2022-12-27 Vmware, Inc. Efficiently performing intrusion detection
JP7338481B2 (en) * 2020-01-14 2023-09-05 富士通株式会社 Setting change method and setting change program
US11108728B1 (en) 2020-07-24 2021-08-31 Vmware, Inc. Fast distribution of port identifiers for rule processing
US11829793B2 (en) 2020-09-28 2023-11-28 Vmware, Inc. Unified management of virtual machines and bare metal computers
US11995024B2 (en) 2021-12-22 2024-05-28 VMware LLC State sharing between smart NICs
US11928062B2 (en) 2022-06-21 2024-03-12 VMware LLC Accelerating data message classification with smart NICs
US11899594B2 (en) 2022-06-21 2024-02-13 VMware LLC Maintenance of data message classification cache on smart NIC

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1890438A1 (en) * 2003-08-05 2008-02-20 Scalent Systems, Inc. Method and apparatus for achieving dynamic capacity and high availability in multi-stage data networks using adaptive flow-based routing

Family Cites Families (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6070219A (en) 1996-10-09 2000-05-30 Intel Corporation Hierarchical interrupt structure for event notification on multi-virtual circuit network interface controller
US6714960B1 (en) 1996-11-20 2004-03-30 Silicon Graphics, Inc. Earnings-based time-share scheduling
DE19654846A1 (en) 1996-12-27 1998-07-09 Pact Inf Tech Gmbh Process for the independent dynamic reloading of data flow processors (DFPs) as well as modules with two- or multi-dimensional programmable cell structures (FPGAs, DPGAs, etc.)
US6041053A (en) 1997-09-18 2000-03-21 Microsoft Corporation Technique for efficiently classifying packets using a trie-indexed hierarchy forest that accommodates wildcards
US6131163A (en) 1998-02-17 2000-10-10 Cisco Technology, Inc. Network gateway mechanism having a protocol stack proxy
CA2236188C (en) 1998-04-28 2002-10-01 Thomas Alexander Firmware controlled transmit datapath for high-speed packet switches
US6157955A (en) 1998-06-15 2000-12-05 Intel Corporation Packet processing system including a policy engine having a classification unit
US6600721B2 (en) 1998-12-31 2003-07-29 Nortel Networks Limited End node pacing for QOS and bandwidth management
US6757731B1 (en) 1999-02-25 2004-06-29 Nortel Networks Limited Apparatus and method for interfacing multiple protocol stacks in a communication network
WO2001025920A1 (en) 1999-10-05 2001-04-12 Ejasent, Inc. Virtual resource id mapping
US7046665B1 (en) 1999-10-26 2006-05-16 Extreme Networks, Inc. Provisional IP-aware virtual paths over networks
US6831893B1 (en) 2000-04-03 2004-12-14 P-Cube, Ltd. Apparatus and method for wire-speed classification and pre-processing of data packets in a full duplex network
US6985937B1 (en) 2000-05-11 2006-01-10 Ensim Corporation Dynamically modifying the resources of a virtual server
KR20020017265A (en) 2000-08-29 2002-03-07 구자홍 Communication method for plural virtual lan consisted in identical ip subnet
US6944168B2 (en) 2001-05-04 2005-09-13 Slt Logic Llc System and method for providing transformation of multi-protocol packets in a data stream
US20030037154A1 (en) 2001-08-16 2003-02-20 Poggio Andrew A. Protocol processor
US7260102B2 (en) 2002-02-22 2007-08-21 Nortel Networks Limited Traffic switching using multi-dimensional packet classification
US7177311B1 (en) 2002-06-04 2007-02-13 Fortinet, Inc. System and method for routing traffic through a virtual router-based network switch
JP3789395B2 (en) 2002-06-07 2006-06-21 富士通株式会社 Packet processing device
US7313793B2 (en) * 2002-07-11 2007-12-25 Microsoft Corporation Method for forking or migrating a virtual machine
KR100481614B1 (en) 2002-11-19 2005-04-08 한국전자통신연구원 METHOD AND APPARATUS FOR PROTECTING LEGITIMATE TRAFFIC FROM DoS AND DDoS ATTACKS
US7590683B2 (en) * 2003-04-18 2009-09-15 Sap Ag Restarting processes in distributed applications on blade servers
US7356818B2 (en) 2003-06-24 2008-04-08 International Business Machines Corporation Virtual machine communicating to external device without going through other virtual machines by using a list of IP addresses managed only by a single virtual machine monitor
JP4053967B2 (en) 2003-11-20 2008-02-27 株式会社日立コミュニケーションテクノロジー VLAN server
KR100608904B1 (en) 2003-12-18 2006-08-04 한국전자통신연구원 System and method for providing quality of service in ip network
US7752635B2 (en) 2003-12-18 2010-07-06 Intel Corporation System and method for configuring a virtual network interface card
US8156490B2 (en) * 2004-05-08 2012-04-10 International Business Machines Corporation Dynamic migration of virtual machine computer programs upon satisfaction of conditions
US7257811B2 (en) * 2004-05-11 2007-08-14 International Business Machines Corporation System, method and program to migrate a virtual machine
US7543166B2 (en) * 2004-05-12 2009-06-02 Intel Corporation System for managing power states of a virtual machine based on global power management policy and power management command sent by the virtual machine
US7515589B2 (en) 2004-08-27 2009-04-07 International Business Machines Corporation Method and apparatus for providing network virtualization
US20060070066A1 (en) 2004-09-30 2006-03-30 Grobman Steven L Enabling platform network stack control in a virtualization platform
US20060174324A1 (en) 2005-01-28 2006-08-03 Zur Uri E Method and system for mitigating denial of service in a communication network
US7458066B2 (en) * 2005-02-28 2008-11-25 Hewlett-Packard Development Company, L.P. Computer system and method for transferring executables between partitions
US7613132B2 (en) 2006-06-30 2009-11-03 Sun Microsystems, Inc. Method and system for controlling virtual machine bandwidth
US7885257B2 (en) 2006-07-20 2011-02-08 Oracle America, Inc. Multiple virtual network stack instances using virtual network interface cards
US7962587B2 (en) * 2007-12-10 2011-06-14 Oracle America, Inc. Method and system for enforcing resource constraints for virtual machines across migration
US7984123B2 (en) * 2007-12-10 2011-07-19 Oracle America, Inc. Method and system for reconfiguring a virtual network path
US20090172125A1 (en) * 2007-12-28 2009-07-02 Mrigank Shekhar Method and system for migrating a computer environment across blade servers
US7941539B2 (en) * 2008-06-30 2011-05-10 Oracle America, Inc. Method and system for creating a virtual router in a blade chassis to maintain connectivity

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1890438A1 (en) * 2003-08-05 2008-02-20 Scalent Systems, Inc. Method and apparatus for achieving dynamic capacity and high availability in multi-stage data networks using adaptive flow-based routing

Also Published As

Publication number Publication date
CN102105865B (en) 2015-04-01
US20120089981A1 (en) 2012-04-12
US20090327781A1 (en) 2009-12-31
CN102105865A (en) 2011-06-22
WO2010002759A1 (en) 2010-01-07
EP2304565A1 (en) 2011-04-06
US8099615B2 (en) 2012-01-17
US8386825B2 (en) 2013-02-26
WO2010002759A8 (en) 2010-05-06

Similar Documents

Publication Publication Date Title
EP2304565B1 (en) Method and system for power management in a virtual machine environment without disrupting network connectivity
US7962587B2 (en) Method and system for enforcing resource constraints for virtual machines across migration
US7941539B2 (en) Method and system for creating a virtual router in a blade chassis to maintain connectivity
US7984123B2 (en) Method and system for reconfiguring a virtual network path
US8370530B2 (en) Method and system for controlling network traffic in a blade chassis
US8095661B2 (en) Method and system for scaling applications on a blade chassis
US7945647B2 (en) Method and system for creating a virtual network path
US7826359B2 (en) Method and system for load balancing using queued packet information
US8086739B2 (en) Method and system for monitoring virtual wires
EP2559206B1 (en) Method of identifying destination in a virtual environment
US8321862B2 (en) System for migrating a virtual machine and resource usage data to a chosen target host based on a migration policy
US8495208B2 (en) Migrating virtual machines among networked servers upon detection of degrading network link operation
US8924534B2 (en) Resource optimization and monitoring in virtualized infrastructure
US9348653B2 (en) Virtual machine management among networked servers
US20100287262A1 (en) Method and system for guaranteed end-to-end data flows in a local networking domain
JP5500270B2 (en) Profile processing program, data relay apparatus, and profile control method
US7970951B2 (en) Method and system for media-based data transfer
US7944923B2 (en) Method and system for classifying network traffic
CN112398676A (en) Vendor independent profile based modeling of service access endpoints in a multi-tenant environment
EP2656212B1 (en) Activate attribute for service profiles in unified computing system
WO2014000292A1 (en) Migration method, serving control gateway and system for virtual machine across data centres
US8886838B2 (en) Method and system for transferring packets to a guest operating system
KR101343595B1 (en) Method for forwarding path virtualization for router

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20110119

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA RS

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20180430

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20191024

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602009061542

Country of ref document: DE

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1249349

Country of ref document: AT

Kind code of ref document: T

Effective date: 20200415

Ref country code: IE

Ref legal event code: FG4D

Legal events (announced via postgrant information from national office to epo)

Event codes:
PG25 = Lapsed in a contracting state
PGFP = Annual fee paid to national office
PLBE = No opposition filed within time limit (ORIGINAL CODE: 0009261)
STAA = Information on the status of an ep patent application or granted ep patent
REG  = Reference to a national code
26N  = No opposition filed
P01  = Opt-out of the competence of the unified patent court (upc) registered

Free format texts:
A = LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
B = LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Code  Country  Event detail                                                  Effective date
PG25  NO       A                                                             20200625
PG25  FI       A                                                             20200325
PG25  GR       A                                                             20200626
PG25  BG       A                                                             20200625
PG25  LV       A                                                             20200325
PG25  SE       A                                                             20200325
PG25  HR       A                                                             20200325
REG   NL       Ref legal event code: MP                                      20200325
REG   LT       Ref legal event code: MG4D
PG25  NL       A                                                             20200325
PG25  IS       A                                                             20200725
PG25  SK       A                                                             20200325
PG25  EE       A                                                             20200325
PG25  LT       A                                                             20200325
PG25  PT       A                                                             20200818
PG25  RO       A                                                             20200325
PG25  CZ       A                                                             20200325
REG   AT       Ref legal event code: MK05 (Ref document number: 1249349,
               Country of ref document: AT, Kind code of ref document: T)    20200325
REG   DE       Ref legal event code: R097 (Ref document number:
               602009061542, Country of ref document: DE)
PG25  IT       A                                                             20200325
PG25  DK       A                                                             20200325
PG25  MC       A                                                             20200325
PG25  ES       A                                                             20200325
PG25  AT       A                                                             20200325
REG   CH       Ref legal event code: PL
PLBE  -        No opposition filed within time limit
STAA  -        STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT
PG25  PL       A                                                             20200325
26N   -        No opposition filed                                           20210112
PG25  LU       B                                                             20200629
REG   BE       Ref legal event code: MM                                      20200630
PG25  LI       B                                                             20200630
PG25  IE       B                                                             20200629
PG25  CH       B                                                             20200630
PG25  FR       B                                                             20200630
PG25  BE       B                                                             20200630
PG25  SI       A                                                             20200325
PG25  TR       A                                                             20200325
PG25  MT       A                                                             20200325
PG25  CY       A                                                             20200325
PG25  MK       A                                                             20200325
P01   -        Opt-out of the competence of the unified patent court
               (upc) registered                                              20230522
PGFP  GB       Payment date: 20240509, Year of fee payment: 16
PGFP  DE       Payment date: 20240507, Year of fee payment: 16