US9223616B2 - Virtual machine resource reduction for live migration optimization - Google Patents

Virtual machine resource reduction for live migration optimization

Info

Publication number
US9223616B2
US9223616B2 (application US13/036,732; also published as US201113036732A)
Authority
US
United States
Prior art keywords
rate
migration
modification
virtual cpu
speed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US13/036,732
Other versions
US20120221710A1 (en)
Inventor
Michael S. Tsirkin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Red Hat Israel Ltd
Original Assignee
Red Hat Israel Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Red Hat Israel Ltd filed Critical Red Hat Israel Ltd
Priority to US13/036,732
Assigned to Red Hat Israel, Ltd. Assignor: Michael S. Tsirkin
Publication of US20120221710A1
Application granted
Publication of US9223616B2
Status: Expired - Fee Related
Adjusted expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/485 Task life-cycle, e.g. stopping, restarting, resuming execution
    • G06F 9/4856 Task life-cycle, e.g. stopping, restarting, resuming execution, resumption being on a different machine, e.g. task migration, virtual machine migration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5077 Logical partitioning of resources; Management or configuration of virtualized resources

Definitions

  • the embodiments of the invention relate generally to virtualization systems and, more specifically, relate to virtual machine resource reduction for live migration optimization.
  • In computer science, a virtual machine (VM) is a portion of software that, when executed on appropriate hardware, creates an environment allowing the virtualization of an actual physical computer system. Each VM may function as a self-contained platform, running its own operating system (OS) and software applications (processes). Typically, a virtual machine monitor (VMM) manages allocation and virtualization of computer resources and performs context switching, as may be necessary, to cycle between various VMs.
  • a host machine (e.g., computer or server) is typically enabled to simultaneously run multiple VMs, where each VM may be used by a local or remote client.
  • the host machine allocates a certain amount of the host's resources to each of the VMs.
  • Each VM is then able to use the allocated resources to execute applications, including operating systems known as guest operating systems.
  • the VMM virtualizes the underlying hardware of the host machine or emulates hardware devices, making the use of the VM transparent to the guest operating system or the remote client that uses the VM.
  • a VM may need to be migrated from one host machine to another host machine for a variety of reasons.
  • This migration process may be a “live” migration process, referring to the fact that the VM stays running and operational (i.e., “live”) during most of the migration process.
  • during live migration, the entire state of a VM is transferred from one host machine to another host machine.
  • a critical piece of this transmission of state is the transfer of memory of the VM.
  • the entire memory of a VM can often be on the order of gigabytes, which can result in a lengthy live migration transfer process.
  • memory may become “dirty” during the transfer. This means that a particular page of the memory that was already transferred has been modified on the VM that is still residing on the source host. Typically, these “dirty” pages are marked so that those particular pages of memory can be transferred again during the live migration process.
  • a problem with live migration occurs when the rate of state change for a VM is faster than the rate of migration. For instance, with respect to memory, live migration will not complete as long as the rate of pages being dirtied is faster than the rate of page migration.
  • the solutions to this problem have involved either (1) continuing to transfer pages, and dirty pages, at the full speed of the VM in the hope that migration will eventually complete, or (2) stopping the VM to complete the migration. These solutions are implemented without regard to the number of CPUs on a migrating VM or any ability to limit the computing resources used by the VM.
  • FIG. 1 is a block diagram of an exemplary virtualization architecture in which embodiments of the present invention may operate;
  • FIG. 2 is a flow diagram illustrating a method for VM resource reduction for live migration optimization according to an embodiment of the invention;
  • FIG. 3 is a flow diagram illustrating another method for VM resource reduction for live migration optimization according to an embodiment of the invention;
  • FIG. 4 is a flow diagram illustrating a further method for VM resource reduction for live migration optimization according to an embodiment of the invention; and
  • FIG. 5 illustrates a block diagram of one embodiment of a computer system.
  • Embodiments of the invention provide a mechanism for virtual machine (VM) resource reduction for live migration optimization.
  • a method of embodiments of the invention includes monitoring a rate of state change of a virtual machine (VM) undergoing a live migration, determining that the rate of state change of the VM exceeds a rate of state transfer of the VM during the live migration process, and adjusting one or more resources of the VM to decrease the rate of state change of the VM to be less than the rate of state transfer of the VM.
  • the present invention also relates to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program may be stored in a machine readable storage medium, such as, but not limited to, any type of disk including optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
  • the present invention may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present invention.
  • a machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer).
  • a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium (e.g., read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc.), a machine (e.g., computer) readable transmission medium (non-propagating electrical, optical, or acoustical signals), etc.
  • Embodiments of the invention provide a mechanism for VM resource reduction for live migration optimization. Specifically, embodiments of the invention throttle computing resources on a VM so that the rate of state change of the VM is no larger than the rate of migration of the VM. For example, if during live migration a VM is generating more dirty pages than pages being transferred, then embodiments of the invention reduce the amount of resources (e.g., CPU, memory, networking, etc.) dedicated to the VM so that the number of pages transferred will be higher than the number being dirtied. In some cases, the computing resources are reduced by a constant factor. In other cases, the computing resources are reduced based on the ratio of pages dirtied to pages transferred. In some other cases, a pagefault event is reported to the hypervisor when the memory is modified. In this case, the computing resources may be delayed on each pagefault by an amount related to the time needed to migrate a page.
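As a rough illustration of the comparison at the heart of this mechanism, the monitoring decision can be sketched as follows. This is a minimal sketch, not code from the patent; the counter names and sampling interface are hypothetical stand-ins for statistics a migration agent would obtain from the hypervisor:

```python
def needs_throttling(pages_dirtied, pages_sent, interval_s=1.0):
    """Return True when the VM's rate of state change (pages dirtied per
    second) exceeds its rate of state transfer (pages sent per second)
    over the sampling interval."""
    dirty_rate = pages_dirtied / interval_s
    transfer_rate = pages_sent / interval_s
    return dirty_rate > transfer_rate
```

A migration agent would feed this from the hypervisor's dirty-page tracking on the origin host and the observed throughput of the migration channel, then apply one of the reduction strategies described below whenever it returns true.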
  • FIG. 1 illustrates an exemplary virtualization architecture 100 in which embodiments of the present invention may operate.
  • the virtualization architecture 100 may include one or more host machines 110 , 120 to run one or more virtual machines (VMs) 112 , 122 .
  • Each VM 112 , 122 runs a guest operating system (OS), which may differ from one VM to another.
  • the guest OS may include Microsoft Windows, Linux, Solaris, Mac OS, etc.
  • the host 110 , 120 may include a hypervisor 115 , 125 that emulates the underlying hardware platform for the VMs 112 , 122 .
  • the hypervisor 115 , 125 may also be known as a virtual machine monitor (VMM), a kernel-based hypervisor or a host operating system.
  • each VM 112 , 122 may be accessed by one or more of the clients over a network (not shown).
  • the network may be a private network (e.g., a local area network (LAN), wide area network (WAN), intranet, etc.) or a public network (e.g., the Internet).
  • the clients may be hosted directly by the host machine 110 , 120 as a local client.
  • the VM 112 , 122 provides a virtual desktop for the client.
  • the host 110 , 120 may be coupled to a host controller 105 (via a network or directly).
  • the host controller 105 may reside on a designated computer system (e.g., a server computer, a desktop computer, etc.) or be part of the host machine 110 , 120 or another machine.
  • the VMs 112 , 122 can be managed by the host controller 105 , which may add a VM, delete a VM, balance the load on the server cluster, provide directory service to the VMs 112 , 122 , and perform other management functions.
  • host controller 105 may include a controller migration agent 107 that is responsible for migration of a VM 122 between host machines 110 , 120 via network channel 130 .
  • each host machine 110 , 120 may include a host migration agent 117 , 127 to assist controller migration agent 107 in the migration process, or to handle the migration process directly themselves.
  • host machine 110 may be known as the origin host machine 110 , from which a VM 140 is migrating, and host machine 120 may be known as the destination host machine 120 , to which the VM 140 is migrating.
  • VM 140 on origin host machine 110 is live migrating to destination host machine 120 .
  • the state of VM 140 will be transferred between the two host machines 110 , 120 .
  • VM 140 remains operational on origin host machine 110 during this transfer of state.
  • Embodiments of the invention provide a solution to optimize the VM live migration process by reducing or eliminating VM 140 downtime incurred during the live migration process.
  • a solution is provided for reducing or throttling resources of a VM 140 in order to allow migration of VM state to complete at a rate greater than the rate that the state of the VM changes.
  • the state of the VM refers to the memory of the VM, and the rate of state change includes the rate of dirtying pages of the VM.
  • the host migration agent 107 , 117 may instead reduce one or more of the VM's 140 resources in such a way so that the VM 140 only utilizes a fraction of the resources allowed by an administrator of the VM 140 .
  • the VM 140 resources are reduced so that the amount of state change generated by the VM 140 will not exceed the rate that the VM's 140 state is migrated.
  • the resources of the VM 140 that may be reduced or throttled may include, but are not limited to, the CPU, network card, or a graphics card.
  • embodiments of the invention will be described in terms of adjusting the VM resource of the CPU. However, one skilled in the art will appreciate that other VM 140 resources may also be adjusted utilizing embodiments of the invention.
  • embodiments of the invention may also be applicable and useful to the process of fault tolerance in a virtualization system, such as providing redundancy for a VM 140 with replicated VMs in the same or different host machines 110 , 120 .
  • the host migration agent 107 , 117 may initially determine that the rate of VM 140 state change is exceeding the rate of migration of the VM state. The rate of change can be determined by querying the origin hypervisor 115 , while the rate of transfer can be determined by querying either the origin or the destination hypervisor 115 , 125 . Once this is determined, the host migration agent 107 , 117 may utilize various methods to determine the percentage of the VM 140 resource, for example the amount of CPU processing speed, that will be reduced.
  • the VM 140 resource is reduced by a constant factor.
  • the current CPU speed may be divided by a constant factor, such as 1, 2, and so on. If over time the host migration agent 107 , 117 determines that this CPU reduction still does not reduce the rate of VM state change to less than the rate of VM state migration, then the host migration agent may continue to divide the CPU speed by a constant factor until the VM state change rate is less than the VM state migration rate.
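The repeated constant-factor reduction amounts to a simple loop. The following sketch assumes a hypothetical model (`dirty_rate_fn`) of how the dirtying rate scales with the CPU share granted to the VM; in practice the agent would re-measure the rates after each reduction rather than evaluate a model:

```python
def reduce_by_constant_factor(cpu_share, dirty_rate_fn, transfer_rate, factor=2.0):
    """Keep dividing the VM's CPU share by a constant factor until the
    resulting dirtying rate falls below the migration transfer rate."""
    while dirty_rate_fn(cpu_share) >= transfer_rate:
        cpu_share /= factor
    return cpu_share

# Toy model: the dirtying rate is proportional to the CPU share.
share = reduce_by_constant_factor(1.0, lambda s: 1000 * s, transfer_rate=300)
```

With the toy model above, a full CPU share dirties 1000 pages/s against a 300 pages/s link, so two halvings (to a 0.25 share, 250 pages/s) are enough for transfer to outpace dirtying.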
  • the VM 140 resource may be adjusted based on the ratio of VM state migration (memory pages sent) to VM state change (memory pages dirtied).
  • the CPU speed may be adjusted by multiplying the current CPU speed by the quotient of the number of memory pages transferred divided by the number of memory pages dirtied. In some embodiments, this quotient may be further divided by a constant factor in order to give an additional buffer to the CPU speed adjustment.
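The ratio-based adjustment is a single multiplication; a sketch, where the buffer divisor is the optional constant factor mentioned above and the parameter names are illustrative:

```python
def ratio_adjusted_speed(cpu_speed, pages_sent, pages_dirtied, buffer_factor=2.0):
    """New CPU speed = current speed * (pages transferred / pages dirtied),
    further divided by a constant factor to leave extra headroom."""
    return cpu_speed * (pages_sent / pages_dirtied) / buffer_factor
```

For example, a 2000 MHz virtual CPU that dirtied 1200 pages while only 600 were sent would be cut to 2000 * (600/1200) / 2 = 500 MHz.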
  • the VM 140 resource may be adjusted by introducing a delay each time a pagefault occurs on the VM 140 .
  • when the VM 140 or one of its resources (for example, a network card) modifies the VM memory, the VM 140 exits to the hypervisor 115 , and the hypervisor 115 may intervene and slow the VM or the resource by introducing a small delay. This delay then compensates for the rate of state change exceeding the migration speed.
  • the delay may be equal to the amount of time it takes to copy the state change via migration.
  • the delay may be multiplied by a constant factor (e.g., 1, 2, etc.) to provide an added advantage to state transfer over state change.
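The per-pagefault delay can be computed from the time the migration link needs to re-send one page. The page size and link throughput below are illustrative assumptions, not figures from the patent:

```python
def pagefault_delay_s(page_size_bytes, link_bytes_per_s, factor=1.0):
    """Delay imposed on each write fault: roughly the time needed to
    migrate one dirtied page, optionally scaled by a constant factor
    to give state transfer an added advantage over state change."""
    return factor * page_size_bytes / link_bytes_per_s

# 4 KiB pages over a ~1 Gb/s (125 MB/s) migration link.
delay = pagefault_delay_s(4096, 125_000_000)
```

The hypervisor would stall the faulting vCPU (or device) for this duration before letting the write proceed, ensuring each new dirty page costs at least as much time as re-transferring it.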
  • Embodiments of the invention may utilize a combination of one or more of the above methods of VM resource adjustment calculations.
  • embodiments of the invention may further apply different adjustments to multiple resources of the VM 140 based on individual characteristics of each of the multiple resources. For instance, if the VM 140 has multiple virtual CPUs, different adjustments may be made to each virtual CPU of the VM based on individual performance of the virtual CPU. Each virtual CPU can then be adjusted to a different rate.
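Per-resource adjustment might look like the following, scaling each virtual CPU independently by its own dirtying behaviour. The dictionary shapes and the `min` clamp (so a well-behaved vCPU is never sped up past its current rate) are illustrative assumptions:

```python
def per_vcpu_speeds(current_speeds, pages_dirtied_by_vcpu, pages_sent):
    """Adjust each virtual CPU to its own rate: scale by the ratio of
    pages migrated to pages that particular vCPU dirtied, clamped so no
    vCPU exceeds its current speed."""
    return {
        vcpu: speed * min(1.0, pages_sent / max(1, pages_dirtied_by_vcpu[vcpu]))
        for vcpu, speed in current_speeds.items()
    }
```

A vCPU dirtying twice as fast as the link can send is halved, while a vCPU dirtying less than the send rate is left at its current speed.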
  • FIG. 2 is a flow diagram illustrating a method 200 for VM resource reduction for live migration optimization according to an embodiment of the invention.
  • Method 200 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (such as instructions run on a processing device), firmware, or a combination thereof.
  • method 200 is performed by host migration agent 107 , 117 of FIG. 1 .
  • Method 200 begins at block 210 where the rate of state change of a VM is monitored during the live migration of the VM. Then, at decision block 220 , it is determined whether the rate of state change of the VM exceeds the rate of state transfer for the live migration of the VM. If the rate of state change does not exceed the rate of state transfer, then processing returns to block 210 to continue monitoring the rate of state change during the live migration process.
  • a resource of the VM is reduced by a constant factor.
  • the current CPU speed of the VM may be divided by a constant factor, such as 1, 2, and so on.
  • the host migration agent instructs the hypervisor to reduce the VM resource by this constant factor.
  • At decision block 240 , it is determined whether the rate of state change still exceeds the rate of state transfer. If so, then method 200 returns to block 230 to continue to reduce the VM resource by a constant factor. For example, the CPU speed will again be divided by a constant factor until the VM state change rate is less than the VM state migration rate.
  • the method 200 continues to decision block 250 where it is determined whether live migration is complete. If the live migration is not complete at decision block 250 , then method 200 returns to block 210 to continue monitoring the rate of state change of the VM during the live migration process. If the live migration process is complete at decision block 250 , then method 200 ends.
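Method 200 as a whole can be simulated in a few lines. This is a toy model under stated assumptions: the dirtying rate is taken to be proportional to CPU speed, and migration progresses only when transfer outpaces dirtying; all parameters are illustrative:

```python
def migrate_with_throttling(cpu_speed, transfer_rate, pages_left,
                            dirty_per_speed=1.0, factor=2.0):
    """Simulate the method-200 loop: monitor the dirtying rate (block 210),
    compare it to the transfer rate (block 220), cut the CPU speed by a
    constant factor while dirtying wins (block 230/240), and stop once
    migration completes (block 250)."""
    while pages_left > 0:
        dirty_rate = cpu_speed * dirty_per_speed
        if dirty_rate >= transfer_rate:
            cpu_speed /= factor          # throttle: still dirtying too fast
        else:
            pages_left -= transfer_rate - dirty_rate  # net migration progress
    return cpu_speed
```

Starting at full speed with a 1000 pages/s dirtying rate against a 300 pages/s link, the loop halves the CPU twice and then drains the remaining pages at a net 50 pages/s.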
  • FIG. 3 is a flow diagram illustrating another method 300 for VM resource reduction for live migration optimization according to an embodiment of the invention.
  • Method 300 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (such as instructions run on a processing device), firmware, or a combination thereof.
  • method 300 is performed by host migration agent 107 , 117 of FIG. 1 .
  • Method 300 begins at block 310 where the rate of state change of a VM is monitored during the live migration of the VM. Then, at decision block 320 , it is determined whether the rate of state change of the VM exceeds the rate of state transfer for the live migration of the VM. If the rate of state change does not exceed the rate of state transfer, then processing returns to block 310 to continue monitoring the rate of state change during the live migration process.
  • a resource of the VM is adjusted based on the ratio of VM state migration to VM state change.
  • the CPU speed may be adjusted by multiplying the current CPU speed by the quotient of the number of memory pages transferred divided by the number of memory pages dirtied.
  • the resource of the VM is further adjusted by dividing the resource allocation by a constant factor. For instance, the altered CPU speed from block 330 is divided by a constant factor in order to give an additional buffer to the CPU speed adjustment.
  • At decision block 350 , method 300 determines whether the rate of state change still exceeds the rate of state transfer. If so, then method 300 returns to block 330 to continue to adjust the VM resource based on the ratio of VM state migration to VM state change. If the rate of state change no longer exceeds the rate of state transfer at decision block 350 , the method 300 continues to decision block 360 where it is determined whether live migration is complete. If the live migration is not complete at decision block 360 , then method 300 returns to block 310 to continue monitoring the rate of state change of the VM during the live migration process. If the live migration process is complete at decision block 360 , then method 300 ends.
  • FIG. 4 is a flow diagram illustrating a further method 400 for VM resource reduction for live migration optimization according to an embodiment of the invention.
  • Method 400 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (such as instructions run on a processing device), firmware, or a combination thereof.
  • method 400 is performed by host migration agent 107 , 117 of FIG. 1 .
  • Method 400 begins at block 410 where the rate of state change of a VM is monitored during the live migration of the VM. Then, at decision block 420 , it is determined whether the rate of state change of the VM exceeds the rate of state transfer for the live migration of the VM. If the rate of state change does not exceed the rate of state transfer, then processing returns to block 410 to continue monitoring the rate of state change during the live migration process.
  • a resource of the VM is adjusted by introducing a delay each time a pagefault occurs on the VM.
  • the VM exits to the hypervisor and the hypervisor may intervene and slow the CPU by introducing a small delay. This delay may then account for the greater speed of state change over migration speed.
  • the delay may be equal to the amount of time it takes to copy the state change via migration.
  • the delay is multiplied by a constant factor (e.g., 1, 2, etc.) to provide an added advantage to state transfer over state change.
  • At decision block 450 , method 400 determines whether the rate of state change still exceeds the rate of state transfer. If so, then method 400 returns to block 430 to continue to adjust the VM resource by introducing a delay each time a pagefault occurs on the VM. If the rate of state change no longer exceeds the rate of state transfer at decision block 450 , the method 400 continues to decision block 460 where it is determined whether live migration is complete. If the live migration is not complete at decision block 460 , then method 400 returns to block 410 to continue monitoring the rate of state change of the VM during the live migration process. If the live migration process is complete at decision block 460 , then method 400 ends.
  • FIG. 5 illustrates a diagrammatic representation of a machine in the exemplary form of a computer system 500 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed.
  • the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet.
  • the machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
  • the machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • the exemplary computer system 500 includes a processing device 502 , a main memory 504 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 506 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 518 , which communicate with each other via a bus 530 .
  • Processing device 502 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computer (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 502 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 502 is configured to execute the processing logic 526 for performing the operations and steps discussed herein.
  • the computer system 500 may further include a network interface device 508 .
  • the computer system 500 also may include a video display unit 510 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 512 (e.g., a keyboard), a cursor control device 514 (e.g., a mouse), and a signal generation device 516 (e.g., a speaker).
  • the data storage device 518 may include a machine-accessible storage medium 528 on which is stored one or more sets of instructions (e.g., software 522 ) embodying any one or more of the methodologies of functions described herein.
  • software 522 may store instructions to perform VM resource reduction for live migration optimization by host migration agent 107 , 117 described with respect to FIG. 1 .
  • the software 522 may also reside, completely or at least partially, within the main memory 504 and/or within the processing device 502 during execution thereof by the computer system 500 ; the main memory 504 and the processing device 502 also constituting machine-accessible storage media.
  • the software 522 may further be transmitted or received over a network 520 via the network interface device 508 .
  • the machine-readable storage medium 528 may also be used to store instructions to perform methods 200 , 300 , and 400 for VM resource reduction for live migration optimization described with respect to FIGS. 2 , 3 , and 4 , and/or a software library containing methods that call the above applications. While the machine-accessible storage medium 528 is shown in an exemplary embodiment to be a single medium, the term “machine-accessible storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
  • the term “machine-accessible storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present invention.
  • the term “machine-accessible storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A mechanism for virtual machine resource reduction for live migration optimization is disclosed. A method of the invention includes monitoring a rate of state change of a virtual machine (VM) undergoing a live migration, determining that the rate of state change of the VM exceeds a rate of state transfer of the VM during the live migration process, and adjusting one or more resources of the VM to decrease the rate of state change of the VM to be less than the rate of state transfer of the VM.

Description

TECHNICAL FIELD
The embodiments of the invention relate generally to virtualization systems and, more specifically, relate to virtual machine resource reduction for live migration optimization.
BACKGROUND
In computer science, a virtual machine (VM) is a portion of software that, when executed on appropriate hardware, creates an environment allowing the virtualization of an actual physical computer system. Each VM may function as a self-contained platform, running its own operating system (OS) and software applications (processes). Typically, a virtual machine monitor (VMM) manages allocation and virtualization of computer resources and performs context switching, as may be necessary, to cycle between various VMs.
A host machine (e.g., computer or server) is typically enabled to simultaneously run multiple VMs, where each VM may be used by a local or remote client. The host machine allocates a certain amount of the host's resources to each of the VMs. Each VM is then able to use the allocated resources to execute applications, including operating systems known as guest operating systems. The VMM virtualizes the underlying hardware of the host machine or emulates hardware devices, making the use of the VM transparent to the guest operating system or the remote client that uses the VM.
Oftentimes, a VM may need to be migrated from one host machine to another host machine for a variety of reasons. This migration process may be a “live” migration process, referring to the fact that the VM stays running and operational (i.e., “live”) during most of the migration process. During live migration, the entire state of a VM is transferred from one host machine to another host machine. A critical piece of this transmission of state is the transfer of memory of the VM. The entire memory of a VM can often be on the order of gigabytes, which can result in a lengthy live migration transfer process. In addition, because the VM is “live” during this transfer, memory may become “dirty” during the transfer. This means that a particular page of the memory that was already transferred has been modified on the VM that is still residing on the source host. Typically, these “dirty” pages are marked so that those particular pages of memory can be transferred again during the live migration process.
Currently, a problem with live migration occurs when the rate of state change for a VM is faster than the rate of migration. For instance, with respect to memory, live migration will not complete as long as pages are being dirtied faster than pages are being migrated. Existing solutions to this problem have involved either (1) continuing to transfer and dirty pages at the full speed of the VM in the hope that migration will eventually complete, or (2) stopping the VM to complete the migration. These solutions are implemented without regard to the number of CPUs on a migrating VM or any ability to limit the computing resources used by the VM.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the invention. The drawings, however, should not be taken to limit the invention to the specific embodiments, but are for explanation and understanding only.
FIG. 1 is a block diagram of an exemplary virtualization architecture in which embodiments of the present invention may operate;
FIG. 2 is a flow diagram illustrating a method for VM resource reduction for live migration optimization according to an embodiment of the invention;
FIG. 3 is a flow diagram illustrating another method for VM resource reduction for live migration optimization according to an embodiment of the invention;
FIG. 4 is a flow diagram illustrating a further method for VM resource reduction for live migration optimization according to an embodiment of the invention; and
FIG. 5 illustrates a block diagram of one embodiment of a computer system.
DETAILED DESCRIPTION
Embodiments of the invention provide a mechanism for virtual machine (VM) resource reduction for live migration optimization. A method of embodiments of the invention includes monitoring a rate of state change of a virtual machine (VM) undergoing a live migration, determining that the rate of state change of the VM exceeds a rate of state transfer of the VM during the live migration process, and adjusting one or more resources of the VM to decrease the rate of state change of the VM to be less than the rate of state transfer of the VM.
In the following description, numerous details are set forth. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.
Some portions of the detailed descriptions which follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “sending”, “receiving”, “attaching”, “forwarding”, “caching”, “monitoring”, “determining”, “adjusting”, or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a machine readable storage medium, such as, but not limited to, any type of disk including optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear as set forth in the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
The present invention may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present invention. A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium (e.g., read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc.), a machine (e.g., computer) readable transmission medium (non-propagating electrical, optical, or acoustical signals), etc.
Embodiments of the invention provide a mechanism for VM resource reduction for live migration optimization. Specifically, embodiments of the invention throttle computing resources on a VM in a manner that ensures the rate of state change of the VM is no larger than the rate of migration of the VM. For example, if during live migration a VM is generating more dirty pages than pages being transferred, then embodiments of the invention reduce the amount of resources (e.g., CPU, memory, networking, etc.) dedicated to the VM so that the number of pages transferred will be higher than the number of pages being dirtied. In some cases, the computing resources are reduced by a constant factor. In other cases, the computing resources are reduced based on the ratio of pages dirtied to pages transferred. In some other cases, a pagefault event is reported to the hypervisor when memory is modified; in this case, the computing resources may be delayed on each pagefault by an amount related to the time needed to migrate a page.
FIG. 1 illustrates an exemplary virtualization architecture 100 in which embodiments of the present invention may operate. The virtualization architecture 100 may include one or more host machines 110, 120 to run one or more virtual machines (VMs) 112, 122. Each VM 112, 122 runs a guest operating system (OS) that may be different from one another. The guest OS may include Microsoft Windows, Linux, Solaris, Mac OS, etc. The host 110, 120 may include a hypervisor 115, 125 that emulates the underlying hardware platform for the VMs 112, 122. The hypervisor 115, 125 may also be known as a virtual machine monitor (VMM), a kernel-based hypervisor or a host operating system.
In one embodiment, each VM 112, 122 may be accessed by one or more of the clients over a network (not shown). The network may be a private network (e.g., a local area network (LAN), wide area network (WAN), intranet, etc.) or a public network (e.g., the Internet). In some embodiments, the clients may be hosted directly by the host machine 110, 120 as a local client. In one scenario, the VM 112, 122 provides a virtual desktop for the client.
As illustrated, the host 110, 120 may be coupled to a host controller 105 (via a network or directly). In some embodiments, the host controller 105 may reside on a designated computer system (e.g., a server computer, a desktop computer, etc.) or be part of the host machine 110, 120 or another machine. The VMs 112, 122 can be managed by the host controller 105, which may add a VM, delete a VM, balance the load on the server cluster, provide directory service to the VMs 112, 122, and perform other management functions.
In one embodiment, host controller 105 may include a controller migration agent 107 that is responsible for migration of a VM 122 between host machines 110, 120 via network channel 130. In addition, each host machine 110, 120 may include a host migration agent 117, 127 to assist controller migration agent 107 in the migration process, or to handle the migration process directly themselves.
For purposes of the following explanation, host machine 110 may be known as the origin host machine 110 from which a VM 140 is migrating from, and host machine 120 may be known as the destination host machine 120 to which the VM 140 is migrating to. Assume VM 140 on origin host machine 110 is live migrating to destination host machine 120. In embodiments of the invention, when it is decided to initiate a live migration process for VM 140 between origin host machine 110 and destination host machine 120, the state of VM 140 will be transferred between the two host machines 110, 120. VM 140 remains operational on origin host machine 110 during this transfer of state.
Embodiments of the invention provide a solution to optimize the VM live migration process by reducing or eliminating VM 140 downtime incurred during the live migration process. In one embodiment, a solution is provided for reducing or throttling resources of a VM 140 in order to allow migration of the VM state to complete at a rate greater than the rate at which the state of the VM changes. In most cases, the state of the VM refers to the memory of the VM, and the rate of state change includes the rate of dirtying pages of the VM. In embodiments of the invention, instead of stopping the migrating VM 140 completely to finish the migration process, the host migration agent 107, 117 may instead reduce one or more of the VM's 140 resources so that the VM 140 utilizes only a fraction of the resources allowed by an administrator of the VM 140. The VM 140 resources are reduced so that the amount of state change generated by the VM 140 will not exceed the rate at which the VM's 140 state is migrated.
In one embodiment, the resources of the VM 140 that may be reduced or throttled may include, but are not limited to, the CPU, network card, or a graphics card. For purposes of the following explanation, embodiments of the invention will be described in terms of adjusting the VM resource of the CPU. However, one skilled in the art will appreciate that other VM 140 resources may also be adjusted utilizing embodiments of the invention. Furthermore, embodiments of the invention may also be applicable and useful to the process of fault tolerance in a virtualization system, such as providing redundancy for a VM 140 with replicated VMs in the same or different host machines 110, 120.
In embodiments of the invention, the host migration agent 107, 117 may initially determine that the rate of VM 140 state change is exceeding the rate of migration of the VM state. The rate of change can be determined by querying the origin hypervisor 115, while the rate of transfer can be determined by querying either the origin or the destination hypervisor 115, 125. Once this is determined, the host migration agent 107, 117 may utilize various methods to determine the percentage by which a VM 140 resource, for example the CPU processing speed, will be reduced.
In one embodiment, the VM 140 resource is reduced by a constant factor. For example, the current CPU speed may be divided by a constant factor, such as 1, 2, and so on. If over time the host migration agent 107, 117 determines that this CPU reduction still does not reduce the rate of VM state change to less than the rate of VM state migration, then the host migration agent may continue to divide the CPU speed by a constant factor until the VM state change rate is less than the VM state migration rate.
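As an illustrative sketch of this constant-factor embodiment (the function name, the linear-scaling assumption, and the concrete numbers are not from the patent):

```python
def throttle_by_constant(cpu_speed, change_rate, transfer_rate, factor=2):
    """Divide the VM's CPU speed by a constant factor, repeating until the
    rate of state change falls below the rate of state migration.

    Assumes the dirtying rate scales linearly with CPU speed, which is a
    simplification; real guests dirty memory unevenly.
    """
    while change_rate >= transfer_rate:
        cpu_speed /= factor
        change_rate /= factor  # linear-scaling assumption
    return cpu_speed
```

For example, a VM dirtying 400 pages per second against a migration link transferring 150 pages per second would be halved twice, leaving the VM at one quarter of its original CPU speed with a change rate of 100 pages per second.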
In another embodiment, the VM 140 resource may be adjusted based on the ratio of VM state migration (memory pages sent) to VM state change (memory pages dirtied). For example, the CPU speed may be adjusted by multiplying the current CPU speed by the quotient of the number of memory pages transferred divided by the number of memory pages dirtied. In some embodiments, this quotient may be further divided by a constant factor in order to give an additional buffer to the CPU speed adjustment.
In a further embodiment, the VM 140 resource may be adjusted by introducing a delay each time a pagefault occurs on the VM 140. When the VM 140 generates new state or new state is generated for the VM 140 by the resource (for example, a network card modifying the VM memory), the VM 140 exits to the hypervisor 115 and the hypervisor 115 may intervene and slow the VM or the resource by introducing a small delay. This delay may then account for the greater speed of state change over migration speed. The delay may be equal to the amount of time it takes to copy the state change via migration. In some embodiments, the delay may be multiplied by a constant factor (e.g., 1, 2, etc.) to provide an added advantage to state transfer over state change.
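As a sketch of the pagefault-delay embodiment (the handler name and the page-size and bandwidth figures are assumptions, not from the patent), the hypervisor could stall the faulting vCPU for roughly the time needed to migrate one page:

```python
import time

def on_write_pagefault(page_size_bytes, migration_bandwidth_bps, factor=2):
    """Compute and apply the per-fault delay: the time to migrate one page,
    multiplied by a constant factor so state transfer outpaces state change.
    """
    delay = (page_size_bytes / migration_bandwidth_bps) * factor
    time.sleep(delay)  # stall the vCPU (simulated here with a real sleep)
    return delay
```

For instance, a 4 KiB page over a 400 KiB/s migration channel yields a 10 ms stall per fault at factor 1, doubled to 20 ms at the default factor of 2.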
Embodiments of the invention may utilize a combination of one or more of the above methods of VM resource adjustment calculations. In addition, embodiments of the invention may further apply different adjustments to multiple resources of the VM 140 based on individual characteristics of each of the multiple resources. For instance, if the VM 140 has multiple virtual CPUs, different adjustments may be made to each virtual CPU of the VM based on individual performance of the virtual CPU. Each virtual CPU can then be adjusted to a different rate.
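Extending the sketches to multiple virtual CPUs (the equal-share budget policy and the linear-scaling assumption are illustrative choices, not prescribed by the patent), each vCPU could be slowed in proportion to its own dirtying rate:

```python
def throttle_per_vcpu(speeds, dirty_rates, transfer_rate):
    """Give each virtual CPU an equal share of the migration budget and slow
    only those vCPUs whose individual dirtying rate exceeds that share.

    Assumes each vCPU's dirtying rate scales linearly with its speed.
    """
    share = transfer_rate / len(speeds)
    return [speed * min(1.0, share / rate) if rate > 0 else speed
            for speed, rate in zip(speeds, dirty_rates)]
```

Here a vCPU dirtying well under its share keeps its full speed, while a heavy writer is cut back, so each virtual CPU ends up adjusted to a different rate, as described above.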
FIG. 2 is a flow diagram illustrating a method 200 for VM resource reduction for live migration optimization according to an embodiment of the invention. Method 200 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (such as instructions run on a processing device), firmware, or a combination thereof. In one embodiment, method 200 is performed by host migration agent 107, 117 of FIG. 1.
Method 200 begins at block 210 where the rate of state change of a VM is monitored during the live migration of the VM. Then, at decision block 220, it is determined whether the rate of state change of the VM exceeds the rate of state transfer for the live migration of the VM. If the rate of state change does not exceed the rate of state transfer, then processing returns to block 210 to continue monitoring the rate of state change during the live migration process.
If the rate of state change does exceed the rate of state transfer at decision block 220, then at block 230, a resource of the VM is reduced by a constant factor. For example, the current CPU speed of the VM may be divided by a constant factor, such as 1, 2, and so on. In one embodiment, the host migration agent instructs the hypervisor to reduce the VM resource by this constant factor. At decision block 240, it is determined whether the rate of state change still exceeds the rate of state transfer. If so, then method 200 returns to block 230 to continue to reduce the VM resource by a constant factor. For example, the CPU speed will again be divided by a constant factor until the VM state change rate is less than the VM state migration rate.
If the rate of state change no longer exceeds the rate of state transfer at decision block 240, the method 200 continues to decision block 250 where it is determined whether live migration is complete. If the live migration is not complete at decision block 250, then method 200 returns to block 210 to continue monitoring the rate of state change of the VM during the live migration process. If the live migration process is complete at decision block 250, then method 200 ends.
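The loop of blocks 210 through 250 can be sketched as a toy simulation (the pages-per-round units, the halving factor, and the final stop-and-copy convention are illustrative assumptions, not from the patent):

```python
def simulate_method_200(dirty_pages, transfer_rate, dirty_rate, factor=2):
    """Each round migrates `transfer_rate` pages while the running VM dirties
    `dirty_rate` pages; whenever dirtying keeps pace with transfer, the CPU
    (and, by the linear-scaling assumption, the dirty rate) is divided by a
    constant factor. Ends with one stop-and-copy round once the remaining
    dirty set fits in a single transfer round.

    Returns (total_rounds, final_dirty_rate).
    """
    rounds = 0
    while dirty_pages > transfer_rate:
        if dirty_rate >= transfer_rate:      # decision blocks 220/240
            dirty_rate /= factor             # block 230: reduce the resource
        dirty_pages = dirty_pages - transfer_rate + dirty_rate
        rounds += 1
    return rounds + 1, dirty_rate            # +1 for the final copy round
```

Starting at 100 dirty pages with a dirty rate twice the transfer rate, the simulation halves the dirty rate twice before the remaining pages drain and the migration converges.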
FIG. 3 is a flow diagram illustrating another method 300 for VM resource reduction for live migration optimization according to an embodiment of the invention. Method 300 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (such as instructions run on a processing device), firmware, or a combination thereof. In one embodiment, method 300 is performed by host migration agent 107, 117 of FIG. 1.
Method 300 begins at block 310 where the rate of state change of a VM is monitored during the live migration of the VM. Then, at decision block 320, it is determined whether the rate of state change of the VM exceeds the rate of state transfer for the live migration of the VM. If the rate of state change does not exceed the rate of state transfer, then processing returns to block 310 to continue monitoring the rate of state change during the live migration process.
If the rate of state change does exceed the rate of state transfer at decision block 320, then at block 330, a resource of the VM is adjusted based on the ratio of VM state migration to VM state change. For example, the CPU speed may be adjusted by multiplying the current CPU speed by the quotient of the number of memory pages transferred divided by the number of memory pages dirtied. At block 340, the resource of the VM is further adjusted by dividing the resource allocation by a constant factor. For instance, the altered CPU speed from block 330 is divided by a constant factor in order to give an additional buffer to the CPU speed adjustment.
At decision block 350, it is determined whether the rate of state change still exceeds the rate of state transfer. If so, then method 300 returns to block 330 to continue to adjust the VM resource based on the ratio of VM state migration to VM state change. If the rate of state change no longer exceeds the rate of state transfer at decision block 350, the method 300 continues to decision block 360 where it is determined whether live migration is complete. If the live migration is not complete at decision block 360, then method 300 returns to block 310 to continue monitoring the rate of state change of the VM during the live migration process. If the live migration process is complete at decision block 360, then method 300 ends.
FIG. 4 is a flow diagram illustrating a further method 400 for VM resource reduction for live migration optimization according to an embodiment of the invention. Method 400 may be performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (such as instructions run on a processing device), firmware, or a combination thereof. In one embodiment, method 400 is performed by host migration agent 107, 117 of FIG. 1.
Method 400 begins at block 410 where the rate of state change of a VM is monitored during the live migration of the VM. Then, at decision block 420, it is determined whether the rate of state change of the VM exceeds the rate of state transfer for the live migration of the VM. If the rate of state change does not exceed the rate of state transfer, then processing returns to block 410 to continue monitoring the rate of state change during the live migration process.
If the rate of state change does exceed the rate of state transfer at decision block 420, then at block 430, a resource of the VM is adjusted by introducing a delay each time a pagefault occurs on the VM. Each time the VM generates new state, the VM exits to the hypervisor and the hypervisor may intervene and slow the CPU by introducing a small delay. This delay may then account for the greater speed of state change over migration speed. The delay may be equal to the amount of time it takes to copy the state change via migration. At block 440, the delay is multiplied by a constant factor (e.g., 1, 2, etc.) to provide an added advantage to state transfer over state change.
At decision block 450, it is determined whether the rate of state change still exceeds the rate of state transfer. If so, then method 400 returns to block 430 to continue to adjust the VM resource by introducing a delay each time a pagefault occurs on the VM. If the rate of state change no longer exceeds the rate of state transfer at decision block 450, the method 400 continues to decision block 460 where it is determined whether live migration is complete. If the live migration is not complete at decision block 460, then method 400 returns to block 410 to continue monitoring the rate of state change of the VM during the live migration process. If the live migration process is complete at decision block 460, then method 400 ends.
FIG. 5 illustrates a diagrammatic representation of a machine in the exemplary form of a computer system 500 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
The exemplary computer system 500 includes a processing device 502, a main memory 504 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 506 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 518, which communicate with each other via a bus 530.
Processing device 502 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. Processing device 502 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 502 is configured to execute the processing logic 526 for performing the operations and steps discussed herein.
The computer system 500 may further include a network interface device 508. The computer system 500 also may include a video display unit 510 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 512 (e.g., a keyboard), a cursor control device 514 (e.g., a mouse), and a signal generation device 516 (e.g., a speaker).
The data storage device 518 may include a machine-accessible storage medium 528 on which is stored one or more sets of instructions (e.g., software 522) embodying any one or more of the methodologies or functions described herein. For example, software 522 may store instructions to perform VM resource reduction for live migration optimization by host migration agent 107, 117 described with respect to FIG. 1. The software 522 may also reside, completely or at least partially, within the main memory 504 and/or within the processing device 502 during execution thereof by the computer system 500, the main memory 504 and the processing device 502 also constituting machine-accessible storage media. The software 522 may further be transmitted or received over a network 520 via the network interface device 508.
The machine-readable storage medium 528 may also be used to store instructions to perform methods 200, 300, and 400 for VM resource reduction for live migration optimization described with respect to FIGS. 2, 3, and 4, and/or a software library containing methods that call the above applications. While the machine-accessible storage medium 528 is shown in an exemplary embodiment to be a single medium, the term “machine-accessible storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-accessible storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine that cause the machine to perform any one or more of the methodologies of the present invention. The term “machine-accessible storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.
Whereas many alterations and modifications of the present invention will no doubt become apparent to a person of ordinary skill in the art after having read the foregoing description, it is to be understood that any particular embodiment shown and described by way of illustration is in no way intended to be considered limiting. Therefore, references to details of various embodiments are not intended to limit the scope of the claims, which in themselves recite only those features regarded as the invention.

Claims (10)

What is claimed is:
1. A method, comprising:
monitoring, by a processing device of a host machine managing a virtual machine (VM), a rate of modification of memory pages of the VM during a live migration of the VM;
comparing, by the processing device of the host machine, the rate of modification to a rate of migration of the memory pages during the live migration; and
when the rate of modification exceeds the rate of migration, adjusting, by the host machine, one or more resources of the VM to decrease the rate of modification to be less than the rate of migration, wherein the adjusting comprises at least one of dividing the one or more resources by a first constant factor or multiplying a delay by a second constant factor, and wherein the adjusting is in view of the rate of migration and the rate of modification, and wherein the adjusting adjusts a speed of a first virtual CPU and a speed of a second virtual CPU separately.
2. The method of claim 1, wherein the one or more resources of the VM comprises memory.
3. The method of claim 1, wherein the speed of the first virtual CPU is adjusted in view of performance of the first virtual CPU, and wherein the speed of the second virtual CPU is adjusted in view of performance of the second virtual CPU.
4. The method of claim 1, wherein the delay is in view of a pagefault on the VM.
5. A system, comprising:
a memory to store a virtual machine (VM); and
a processing device, operatively coupled to the memory, the processing device to:
monitor a rate of modification of memory pages of the VM during a live migration of the VM,
compare the rate of modification to a rate of migration of the memory pages during the live migration, and
when the rate of modification exceeds the rate of migration, adjust one or more resources of the VM to decrease the rate of modification to be less than the rate of migration, wherein the adjusting comprises at least one of dividing the one or more resources by a first constant factor or multiplying a delay by a second constant factor, and wherein the adjusting is in view of the rate of migration and the rate of modification, and wherein the adjusting adjusts a speed of a first virtual CPU and a speed of a second virtual CPU separately.
6. The system of claim 5, wherein the speed of the first virtual CPU is adjusted in view of performance of the first virtual CPU, and wherein the speed of the second virtual CPU is adjusted in view of performance of the second virtual CPU.
7. The system of claim 5, wherein the delay is in view of a pagefault on the VM.
8. A non-transitory machine-readable storage medium including instructions that, when accessed by a processing device, cause the processing device to:
monitor, by the processing device of a host machine managing a virtual machine (VM), a rate of modification of memory pages of the VM during a live migration of the VM;
compare, by the processing device, the rate of modification to a rate of migration of the memory pages during the live migration; and
when the rate of modification exceeds the rate of migration, adjust one or more resources of the VM to decrease the rate of modification to be less than the rate of migration, wherein the adjusting comprises at least one of dividing the one or more resources by a first constant factor or multiplying a delay by a second constant factor, and wherein the adjusting is in view of the rate of migration and the rate of modification, and wherein the adjusting adjusts a speed of a first virtual CPU and a speed of a second virtual CPU separately.
9. The non-transitory machine-readable storage medium of claim 8, wherein the speed of the first virtual CPU is adjusted in view of performance of the first virtual CPU, and wherein the speed of the second virtual CPU is adjusted in view of performance of the second virtual CPU.
10. The non-transitory machine-readable storage medium of claim 8, wherein the delay is in view of a pagefault on the VM.
US13/036,732 2011-02-28 2011-02-28 Virtual machine resource reduction for live migration optimization Expired - Fee Related US9223616B2 (en)

Publications (2)

Publication Number Publication Date
US20120221710A1 2012-08-30
US9223616B2 2015-12-29


Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201250464A (en) * 2011-06-01 2012-12-16 Hon Hai Prec Ind Co Ltd System and method for monitoring virtual machines
US20130086298A1 (en) 2011-10-04 2013-04-04 International Business Machines Corporation Live Logical Partition Migration with Stateful Offload Connections Using Context Extraction and Insertion
US20130083690A1 (en) 2011-10-04 2013-04-04 International Business Machines Corporation Network Adapter Hardware State Migration Discovery in a Stateful Environment
JP2014191752A (en) * 2013-03-28 2014-10-06 Fujitsu Ltd Migration processing program, migration method, and cloud system
US9268588B2 (en) * 2013-05-20 2016-02-23 Citrix Systems, Inc. Optimizing virtual machine migration via identification and treatment of virtual memory swap file
US9081599B2 (en) 2013-05-28 2015-07-14 Red Hat Israel, Ltd. Adjusting transfer rate of virtual machine state in virtual machine migration
CN103455363B (en) * 2013-08-30 2017-04-19 华为技术有限公司 Command processing method, device and physical host of virtual machine
JP6372074B2 (en) * 2013-12-17 2018-08-15 富士通株式会社 Information processing system, control program, and control method
US9436751B1 (en) * 2013-12-18 2016-09-06 Google Inc. System and method for live migration of guest
US9361145B1 (en) 2014-06-27 2016-06-07 Amazon Technologies, Inc. Virtual machine state replication using DMA write records
JP6413517B2 (en) * 2014-09-04 2018-10-31 富士通株式会社 Management device, migration control program, information processing system
US9612765B2 (en) * 2014-11-19 2017-04-04 International Business Machines Corporation Context aware dynamic composition of migration plans to cloud
US10715460B2 (en) * 2015-03-09 2020-07-14 Amazon Technologies, Inc. Opportunistic resource migration to optimize resource placement
US10721181B1 (en) * 2015-03-10 2020-07-21 Amazon Technologies, Inc. Network locality-based throttling for automated resource migration
US9880870B1 (en) 2015-09-24 2018-01-30 Amazon Technologies, Inc. Live migration of virtual machines using packet duplication
US20180246751A1 (en) * 2015-09-25 2018-08-30 Intel Corporation Techniques to select virtual machines for migration
US10552199B2 (en) * 2018-02-26 2020-02-04 Nutanix, Inc. System and method for binary throttling for live migration of virtual machines
US10552209B2 (en) * 2018-03-15 2020-02-04 Nutanix, Inc. System and method for throttling for live migration of virtual machines
US10552200B2 (en) 2018-03-22 2020-02-04 Nutanix, Inc. System and method for dynamic throttling for live migration of virtual machines
US11121981B1 (en) 2018-06-29 2021-09-14 Amazon Technologies, Inc. Optimistically granting permission to host computing resources
JP7125601B2 (en) * 2018-07-23 2022-08-25 富士通株式会社 Live migration control program and live migration control method
US11188368B2 (en) 2018-10-31 2021-11-30 Nutanix, Inc. Asynchronous workload migration control
US11194620B2 (en) 2018-10-31 2021-12-07 Nutanix, Inc. Virtual machine migration task management
CN110515701B * 2019-08-28 2020-11-06 杭州数梦工场科技有限公司 Live migration method and device for virtual machine
US11550490B2 (en) * 2020-02-12 2023-01-10 Red Hat, Inc. Scalable storage cluster mirroring
CN112783605B * 2021-01-27 2024-02-23 深信服科技股份有限公司 Method, device, equipment and storage medium for live migration of a virtual machine
CN115827169B (en) * 2023-02-07 2023-06-23 天翼云科技有限公司 Virtual machine migration method and device, electronic equipment and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110145471A1 (en) * 2009-12-10 2011-06-16 Ibm Corporation Method for efficient guest operating system (os) migration over a network
US20110264788A1 (en) * 2010-04-23 2011-10-27 Glauber Costa Mechanism for Guaranteeing Deterministic Bounded Tunable Downtime for Live Migration of Virtual Machines Over Reliable Channels
US20120036251A1 (en) * 2010-08-09 2012-02-09 International Business Machines Corporation Method and system for end-to-end quality of service in virtualized desktop systems
US20120304176A1 (en) * 2010-01-29 2012-11-29 Nec Corporation Virtual machine handling system, virtual machine handling method, computer, and storage medium
US20140032735A1 (en) * 2008-06-17 2014-01-30 Abhinav Kapoor Adaptive rate of screen capture in screen sharing

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170147371A1 (en) * 2015-11-24 2017-05-25 Red Hat Israel, Ltd. Virtual machine migration using memory page hints
US10768959B2 (en) * 2015-11-24 2020-09-08 Red Hat Israel, Ltd. Virtual machine migration using memory page hints
US12039365B2 (en) 2021-03-30 2024-07-16 International Business Machines Corporation Program context migration

Also Published As

Publication number Publication date
US20120221710A1 (en) 2012-08-30

Similar Documents

Publication Publication Date Title
US9223616B2 (en) Virtual machine resource reduction for live migration optimization
US8244956B2 (en) Mechanism for automatic adjustment of virtual machine storage
US8150971B2 (en) Mechanism for migration of client-side virtual machine system resources
US8589921B2 (en) Method and system for target host optimization based on resource sharing in a load balancing host and virtual machine adjustable selection algorithm
US9934056B2 (en) Non-blocking unidirectional multi-queue virtual machine migration
US8756383B2 (en) Random cache line selection in virtualization systems
US9405642B2 (en) Providing virtual machine migration reliability using an intermediary storage device
US8826292B2 (en) Migrating virtual machines based on level of resource sharing and expected load per resource on candidate target host machines
US8356120B2 (en) Mechanism for memory state restoration of virtual machine (VM)-controlled peripherals at a destination host machine during migration of the VM
US8244957B2 (en) Mechanism for dynamic placement of virtual machines during live migration based on memory
US9348655B1 (en) Migrating a VM in response to an access attempt by the VM to a shared memory page that has been migrated
US8924965B2 (en) Memory state transfer of virtual machine-controlled peripherals during migrations of the virtual machine
US8631405B2 (en) Identification and placement of new virtual machines based on similarity of software configurations with hosted virtual machines
US8832683B2 (en) Using memory-related metrics of host machine for triggering load balancing that migrate virtual machine
US8327060B2 (en) Mechanism for live migration of virtual machines with memory optimizations
US8650563B2 (en) Identification and placement of new virtual machines to reduce memory consumption based on shared images with hosted virtual machines
US9280380B2 (en) Management of I/O requests in virtual machine migration
US8301859B2 (en) Automatically adjusting memory of a VM on a power client
US20100332658A1 (en) Selecting a host from a host cluster to run a virtual machine
US8291070B2 (en) Determining an operating status of a remote host upon communication failure
US8631253B2 (en) Manager and host-based integrated power saving policy in virtualization systems
US8914803B2 (en) Flow control-based virtual machine request queuing
US9256455B2 (en) Delivery of events from a virtual machine to host CPU using memory monitoring instructions
US20150128134A1 (en) Adjusting pause-loop exiting window values
US20220269521A1 (en) Memory page copying for virtual machine migration

Legal Events

Date Code Title Description
AS Assignment

Owner name: RED HAT ISRAEL, LTD., ISRAEL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TSIRKIN, MICHAEL S.;REEL/FRAME:025873/0312

Effective date: 20110228

ZAAA Notice of allowance and fees due

Free format text: ORIGINAL CODE: NOA

ZAAB Notice of allowance mailed

Free format text: ORIGINAL CODE: MN/=.

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20231229