US20160139962A1 - Migrating a VM in response to an access attempt by the VM to a shared memory page that has been migrated - Google Patents

Migrating a VM in response to an access attempt by the VM to a shared memory page that has been migrated

Info

Publication number
US20160139962A1
US20160139962A1 (application US14/546,330)
Authority
US
United States
Prior art keywords
host
virtual machine
source host
shared memory
hypervisor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US14/546,330
Other versions
US9348655B1 (en)
Inventor
Michael S. Tsirkin
David A. Gilbert
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Red Hat Israel Ltd
Original Assignee
Red Hat Israel Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Red Hat Israel Ltd filed Critical Red Hat Israel Ltd
Priority to US14/546,330
Assigned to RED HAT ISRAEL, LTD. Assignors: GILBERT, DAVID A.; TSIRKIN, MICHAEL S.
Publication of US20160139962A1
Priority to US15/162,277 (published as US10552230B2)
Application granted
Publication of US9348655B1
Legal status: Active
Adjusted expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • G06F9/5088Techniques for rebalancing the load in a distributed system involving task migration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5021Priority

Definitions

  • the present disclosure is generally related to computer systems, and more particularly, to group migration in virtualized computer systems.
  • a virtual machine is a portion of software that, when executed on appropriate hardware, creates an environment allowing the virtualization of an actual physical computer system (e.g., a server, a mainframe computer, etc.).
  • the actual physical computer system is typically referred to as a “host machine,” and the operating system (OS) of the host machine is typically referred to as the “host operating system.”
  • OS operating system
  • software on the host machine known as a “hypervisor” (or a “virtual machine monitor”) manages the execution of one or more virtual machines or “guests”, providing a variety of functions such as virtualizing and allocating resources, context switching among virtual machines, etc.
  • the operating system (OS) of the virtual machine is typically referred to as the “guest operating system.”
  • a running virtual machine or group of virtual machines can be moved from one host to another without disconnecting or terminating the virtual machine.
  • Memory, storage, and network connectivity of the virtual machines can be transferred from the source host machine to a destination host machine.
  • the process is referred to as “live migration” or “group migration.”
  • FIG. 1 depicts a high-level component diagram of an example computer system architecture, in accordance with one or more aspects of the present disclosure.
  • FIG. 2 depicts a flow diagram of a method for migrating shared memory for a group of virtual machines, in accordance with one or more aspects of the present disclosure.
  • FIG. 3 depicts a flow diagram of a method for migrating a page of shared memory requested by a destination host, in accordance with one or more aspects of the present disclosure.
  • FIG. 4 depicts a flow diagram of a method for requesting missing shared memory pages by a destination host, in accordance with one or more aspects of the present disclosure.
  • FIG. 5 depicts a block diagram of an illustrative computer system operating in accordance with examples of the invention.
  • Methods of group migration may include “pre-copy” and “post-copy” techniques.
  • Pre-copy techniques can involve sending a copy of the state of a virtual machine to the destination host while the virtual machine continues to execute on the source host. If some memory pages change during the process, they can be re-copied until there are very few changes remaining on the source, at which point the virtual machine can be stopped on the source and restarted on the destination.
  • Post-copy techniques can involve suspending the virtual machine on the source, copying a subset of the state of the virtual machine to the destination, and then resuming the virtual machine on the destination. If a post-copied virtual machine attempts to access a page of its own memory that has not been migrated, the attempted access can generate a fault and the requesting virtual machine can stop executing until the memory is pulled from the source host.
  • Employing these techniques can be effective with individual virtual machines because their migration can be treated independently from other virtual machines in a group.
  • employing traditional techniques for virtual machine migration can make managing continuing updates to the shared memory space untenable.
  • Migration for each of the virtual machines in a group may not complete at the same time, which can result in extended latency and downtime for virtual machines that should wait for the entire shared memory space to be migrated.
  • employing traditional techniques to shared memory can prevent a consistent view of the shared memory across the group of virtual machines during migration and may require stopping all of the virtual machines in the group that access the shared memory until migration of the shared memory has been completed.
  • a group of virtual machines that share a memory space on a source host can be migrated from the source host to a destination host.
  • the migration may be initiated by a virtualization management system, the hypervisor on the source host, or in any other similar manner.
  • the hypervisor on the source host may migrate the group of virtual machines using pre-copy techniques, post-copy techniques, or a combination of the two.
  • the hypervisor on the source host can determine that a virtual machine of the group of virtual machines on the source host has been migrated to a destination host.
  • the hypervisor can determine that the first virtual machine has been migrated to the destination host by determining that a portion of the state of the virtual machine has been migrated to the destination host.
  • the portion of the state of the virtual machine may comprise a predetermined state of various components of the virtual machine that are necessary for the virtual machine to begin execution on the destination host.
  • the portion of the virtual machine may comprise a device state, the state of CPU registers, the pages of memory that are currently being accessed by the virtual machine, or the like.
  • the state of the virtual machine may be migrated by copying the state from the source host to the destination host directly through the network, placing the state in a shared space for the destination host to retrieve, or in any other manner.
  • the hypervisor on the source host may then determine that the virtual machine that has been migrated shares a memory space on the source host with the other virtual machines in the group of virtual machines that should be migrated to the destination host.
  • the hypervisor on the source host may identify an area of memory as shared by the group of virtual machines by using a mapping table, a configuration file, an area within the memory page table of the host operating system, or in any other similar manner.
  • the hypervisor may store a unique identifier in a mapping table for the virtual machine that references the memory page addresses that are shared with other virtual machines in the group.
  • the hypervisor on the source host may begin migrating pages of the shared memory space to the destination host at the same time that the group of virtual machines are being migrated.
  • the hypervisor may send the contents of shared memory pages by copying the contents from the source host to the destination host directly through the network, placing the contents in a shared space for the destination host to retrieve, or in any other manner.
  • the hypervisor on the source host may wait to migrate the pages of the shared memory space until a request is received from the destination host.
  • the hypervisor may migrate the memory page to the destination host and designate that memory page as not present on the source host.
  • the hypervisor on the source host may maintain the status of migrated memory pages of the shared memory space to determine the status of the overall migration. For example, the hypervisor may store a status flag or a total number of migrated pages in a mapping table. Upon determining that the number of migrated memory pages of the shared memory space meets a predetermined threshold condition, the hypervisor may designate the migration of the shared memory space complete and notify the destination host accordingly.
  • the hypervisor on the source host may employ both methods of migrating the contents of the shared memory space. For example, the hypervisor on the source host may begin sending pages of the shared memory space to the destination host as resources are made available, but prioritize sending a particular page of shared memory to the destination host if a request is received for that particular page. Once the hypervisor on the source host migrates a page of shared memory to the destination host, the hypervisor on the source host may then designate that page of shared memory as not present on the source host. In one illustrative example, the hypervisor on the source host may modify the valid bit of the page table entry for the shared memory page within the memory page table of the host operating system. Alternatively, the hypervisor may save the state of the memory page within a separate mapping table in hypervisor accessible memory.
  • the hypervisor may begin monitoring the shared memory space for accesses of the other virtual machines in the group.
  • the hypervisor on the source host may receive a request to access a memory page of the shared memory space from one of the virtual machines that has not yet been migrated. If the hypervisor on the source host determines that the requested memory page of the shared memory space has not yet been migrated to the destination host, the hypervisor on the source host may allow the non-migrated virtual machine to access the shared memory page. Alternatively, the hypervisor may prevent any virtual machine still running on the source host from accessing any memory page from the shared memory space that is still on the source host.
  • the hypervisor on the source host determines that the requested memory page of the shared memory space has been migrated to the destination host (e.g., the requested page has been designated as not present on the source host)
  • the hypervisor may then stop execution of the virtual machine that issued the request on the source host and migrate that virtual machine to the destination host.
  • the hypervisor of the destination host may start executing that virtual machine on the destination host.
  • a virtual machine that was stopped on a source host because it attempted to access a page of shared memory that had already been migrated to the destination host should be migrated to, and then started on, the destination host seamlessly. Once migrated to the destination host, the virtual machine will then be able to access the page of shared memory on the destination host.
  • the hypervisor of the destination host may receive a request from the migrated virtual machine for a page of memory from the shared memory space on the destination host. Upon determining that the requested memory page of the shared memory space is missing on the destination host, the hypervisor of the destination host may pause the execution of the virtual machine that issued the request. In one illustrative example, the hypervisor may make this determination by referencing the valid bit of the page table entry for the shared memory page within the memory page table of the destination host operating system.
  • the hypervisor of the destination host may then retrieve the missing memory page from the source host.
  • the hypervisor of the destination host may send a request to the source host for the missing shared memory page.
  • the hypervisor on the destination host may designate the missing shared memory page on the destination host as having been requested. This may be accomplished by using a mapping table, a configuration file, an area within the memory page table of the destination host operating system, or in any other similar manner.
  • the hypervisor of the destination host may monitor the status of missing memory pages on a time interval and resend any request that has not been fulfilled by the source host if a predefined period of time has elapsed since the request was sent to the source host.
  • the hypervisor of the destination host may then receive the missing shared memory page from the source host and subsequently designate that memory page as present on the destination host.
  • aspects of the present disclosure are thus capable of reducing latency and downtime for migrated virtual machines that share memory across a group, while maintaining a consistent view of the shared memory across all virtual machines in the group during the migration process. More particularly, aspects of the present disclosure allow seamless migration of a group of virtual machines by migrating shared memory such that it is transparent to the virtual machines, thereby reducing latency, downtime, and resulting page faults.
  • FIG. 1 depicts a high-level component diagram of an illustrative example of a network architecture 100 , in accordance with one or more aspects of the present disclosure.
  • the network architecture 100 includes one or more source hosts 110 coupled to one or more destination hosts 120 over a network 101 .
  • the network 101 may be a private network (e.g., a local area network (LAN), wide area network (WAN), intranet, etc.) or a public network (e.g., the Internet).
  • the source hosts 110 and destination hosts 120 may also be coupled to a host controller 130 (via the same or a different network or directly).
  • Host controller 130 may be an independent machine such as a server computer, a desktop computer, etc. Alternatively, the host controller 130 may be part of the source host 110 or destination host 120 .
  • Source Host 110 may comprise server computers or any other computing devices capable of running one or more source virtual machines (VMs) 111 - 1 through 111 -N where N is a positive integer.
  • Each source VM 111 is a software implementation of a machine that executes programs as though it was a physical machine.
  • Each source VM 111 may run a guest operating system (OS) that may be different from one virtual machine to another.
  • the guest OS may include Microsoft Windows, Linux, Solaris, Mac OS, etc.
  • the source host 110 may comprise source shared memory 112 , a memory space that is shared among a group of source VMs 111 .
  • the source host 110 may additionally comprise a source hypervisor 113 that emulates the underlying hardware platform for the source VMs 111 .
  • the source hypervisor 113 may also be known as a virtual machine monitor (VMM) or a kernel-based hypervisor.
  • the source hypervisor 113 may comprise migration module 114 , memory page table 115 , and mapping table 116 .
  • Migration module 114 can manage the source-side tasks required for migration of a group of VMs (e.g., source VMs 111 ) that are running on source host 110 as well as the shared memory of the group (e.g., source shared memory 112 ) to a destination host 120 , as described in detail below with respect to FIGS. 2 and 3 .
  • the migration module 114 can initiate migration of a group of VMs 111 , monitor the status of the migration state of each VM during migration, migrate memory pages from source shared memory 112 to destination host 120 , and service requests received from destination host 120 for missing shared memory pages.
  • the migration module 114 may store information regarding page migration status for later use in memory page table 115 or mapping table 116 . For example, upon migrating a page of shared memory from source shared memory 112 , migration module 114 may modify the corresponding page table entry in memory page table 115 to designate the memory page as not present. Additionally, migration module 114 may store unique identifiers in a mapping table 116 that associate the group of VMs 111 to the page addresses of source shared memory 112 that are shared by the group.
  • Destination Host 120 may comprise server computers or any other computing devices capable of running one or more destination virtual machines (VMs) 121 - 1 through 121 -N where N is a positive integer.
  • Each destination VM 121 is a software implementation of a machine that executes programs as though it was a physical machine.
  • Each destination VM 121 may run a guest operating system (OS) that may be different from one virtual machine to another.
  • the guest OS may include Microsoft Windows, Linux, Solaris, Mac OS, etc.
  • the destination host 120 may comprise destination shared memory 122 , a memory space that is shared among a group of destination VMs 121 .
  • the destination host 120 may additionally comprise a destination hypervisor 123 that emulates the underlying hardware platform for the destination VMs 121 .
  • the destination hypervisor 123 may also be known as a virtual machine monitor (VMM) or a kernel-based hypervisor.
  • the destination hypervisor 123 may comprise migration module 124 , memory page table 125 , and mapping table 126 .
  • Migration module 124 can manage the destination-side tasks for migration of a group of VMs (e.g., destination VMs 121 ) with the shared memory of the group (e.g., destination shared memory 122 ), as described in detail below with respect to FIG. 4 .
  • the migration module 124 can complete the migration of a group of destination VMs 121 , start each destination VM 121 on destination host 120 , and send requests to source host 110 for memory pages missing from destination shared memory 122 .
  • the migration module 124 may store information regarding page migration status for later use in memory page table 125 or mapping table 126 . For example, upon receiving a missing memory page from source host 110 , migration module 124 may modify the corresponding page table entry in memory page table 125 to designate the memory page as present. Additionally, migration module 124 may store unique identifiers in a mapping table 126 that associate the group of destination VMs 121 to the page addresses of destination shared memory 122 that are shared by the group. Moreover, migration module 124 may use mapping table 126 to store the status of requests submitted to source host 110 for missing memory pages (e.g., to indicate that particular memory pages of shared memory have been requested from the source host, received successfully from the source host, are present in destination shared memory 122 , etc.).
  • a host controller 130 can manage the source VMs 111 and destination VMs 121 .
  • Host controller 130 may manage the allocation of resources from source host 110 to source VMs 111 and the allocation of resources from destination host 120 to destination VMs 121 .
  • host controller 130 may initiate the migration of a group of source VMs 111 with their associated source memory 112 to destination host 120 .
  • host controller 130 may run on a separate physical machine from source host 110 and destination host 120 .
  • host controller 130 may run locally on either source host 110 or destination host 120 .
  • the host controller 130 may include a virtualization manager 131 to perform the management operations described above.
  • FIG. 2 depicts a flow diagram of an example method 200 for migrating shared memory for a group of virtual machines.
  • the method may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computer system or a dedicated machine), or a combination of both.
  • method 200 may be performed by migration module 114 of source hypervisor 113 in FIG. 1 .
  • some or all of method 200 might be performed by another machine. It should be noted that blocks depicted in FIG. 2 could be performed simultaneously or in a different order than that depicted.
  • processing logic determines that a first virtual machine of a group of virtual machines has been migrated to a destination host.
  • processing logic can determine that the first virtual machine has been migrated to the destination host by determining that a portion of the state of the virtual machine has been migrated to the destination host.
  • the portion of the state of the virtual machine may comprise a predetermined state of various components of the virtual machine that are necessary for the virtual machine to begin execution on the destination host.
  • the portion of the virtual machine may comprise a device state, the state of CPU registers, the pages of memory that are currently being accessed by the virtual machine, or the like.
  • the state of the virtual machine may be migrated by copying the state from the source host to the destination host directly through the network, placing the state in a shared space for the destination host to retrieve, or in any other manner.
  • processing logic determines whether the first virtual machine shares a memory space with a second virtual machine of the group of virtual machines. If not, the method of FIG. 2 terminates. Otherwise, execution continues to block 203 .
  • processing logic may identify an area of memory as shared by a group of virtual machines by using a mapping table, a configuration file, an area within the memory page table of the host operating system, or in any other similar manner. For example, processing logic may store a unique identifier in a mapping table for the virtual machine that references the memory page addresses that are shared with other virtual machines in a group.
  • processing logic begins monitoring shared memory space accesses of the second virtual machine.
  • processing logic receives a request from the second virtual machine to access a memory page of the shared memory space.
  • processing logic determines whether the shared memory page requested by the second virtual machine has been migrated to the destination host (e.g., the memory page has been designated as not present on the source host). If not, the method of FIG. 2 terminates. Otherwise, execution continues to block 206 .
  • processing logic stops execution of the second virtual machine on the source host.
  • processing logic migrates the second virtual machine to the destination host.
  • Processing logic may migrate the second virtual machine using pre-copy techniques, post-copy techniques, or a combination of the two.
  • Processing logic may migrate the second virtual machine to the destination by copying the state of the second virtual machine from the source host to the destination host directly through the network, placing the state in a shared space for the destination host to retrieve, or in any other manner.
  • the method of FIG. 2 terminates.
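  • By way of illustration only, the source-side flow of method 200 described above can be sketched in a few lines of Python. The class and function names (SourceHypervisor, stop_vm, migrate_vm) and the dictionary-based mapping table are assumptions made for the sketch, not part of the disclosure; a real hypervisor would intercept the access through a page fault rather than an explicit call.

```python
# Simplified sketch of method 200 (source host side). All identifiers are
# illustrative assumptions; only the control flow mirrors the description.

class SourceHypervisor:
    def __init__(self, mapping_table, migrated_shared_pages):
        # mapping_table: vm_id -> set of shared memory page addresses
        self.mapping_table = mapping_table
        # shared pages already migrated (designated not present on the source)
        self.migrated_shared_pages = migrated_shared_pages

    def shares_memory(self, first_vm, second_vm):
        """Do the two VMs of the group share any memory page addresses?"""
        return bool(self.mapping_table.get(first_vm, set())
                    & self.mapping_table.get(second_vm, set()))

    def on_shared_page_access(self, second_vm, page_addr):
        """Called when a not-yet-migrated VM requests a shared memory page."""
        if page_addr not in self.migrated_shared_pages:
            return "access allowed"        # page is still on the source host
        self.stop_vm(second_vm)            # block 206: stop execution
        self.migrate_vm(second_vm)         # then migrate the VM itself
        return "vm migrated"

    def stop_vm(self, vm_id):
        print(f"stopping {vm_id} on the source host")

    def migrate_vm(self, vm_id):
        print(f"migrating {vm_id} to the destination host")


# Example: VM 111-2 touches a shared page that has already been migrated.
hyp = SourceHypervisor(
    mapping_table={"vm-111-1": {0x1000, 0x2000}, "vm-111-2": {0x1000, 0x2000}},
    migrated_shared_pages={0x1000},
)
print(hyp.on_shared_page_access("vm-111-2", 0x1000))   # -> "vm migrated"
```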
  • FIG. 3 depicts a flow diagram of an example method 300 for migrating a page of shared memory requested by a destination host.
  • the method may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computer system or a dedicated machine), or a combination of both.
  • method 300 may be performed by migration module 114 of source hypervisor 113 in FIG. 1 .
  • some or all of method 300 might be performed by another machine. It should be noted that blocks depicted in FIG. 3 could be performed simultaneously or in a different order than that depicted.
  • processing logic receives a request from a destination host for a memory page of shared memory space on a source host.
  • processing logic migrates the requested memory page to the destination host.
  • Processing logic may send the contents of shared memory pages by copying the contents from the source host to the destination host directly through the network, placing the contents in a shared space for the destination host to retrieve, or in any other manner.
  • processing logic designates the migrated memory page as not present on the source host. For example, processing logic may modify the valid bit of the page table entry for the shared memory page within the memory page table of the host operating system. Alternatively, processing logic may save the state of the memory page within a separate mapping table in hypervisor accessible memory.
  • processing logic updates a mapping table to maintain the status of migrated memory pages of the shared memory space. For example, processing logic may store a status flag or a total number of migrated pages in a mapping table.
  • processing logic determines if the migrated number of pages meets a predetermined threshold condition. If not, the method of FIG. 3 ends. Otherwise, execution proceeds to block 306 .
  • processing logic notifies the destination host that the shared memory migration has completed. After block 306 , the method of FIG. 3 terminates.
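  • A minimal sketch of this source-side flow appears below. The class name, callbacks, and the simple counter-based threshold policy are assumptions used only to illustrate the sequence of migrating a requested page, designating it not present, and notifying the destination when the threshold condition is met.

```python
# Simplified sketch of method 300 (source host servicing a destination
# request for a shared memory page). Names and the counter-based threshold
# policy are illustrative assumptions.

class SharedMemoryMigrator:
    def __init__(self, shared_pages, completion_threshold,
                 send_page, notify_destination_done):
        self.shared_pages = shared_pages               # page_addr -> contents
        self.present = dict.fromkeys(shared_pages, True)
        self.migrated_count = 0
        self.completion_threshold = completion_threshold
        self.send_page = send_page                     # transfer to destination
        self.notify_destination_done = notify_destination_done

    def handle_page_request(self, page_addr):
        # Migrate the requested page to the destination host.
        self.send_page(page_addr, self.shared_pages[page_addr])
        # Designate the page as not present on the source host.
        self.present[page_addr] = False
        # Maintain the status of migrated pages of the shared memory space.
        self.migrated_count += 1
        # When the threshold condition is met, notify the destination host
        # that the shared memory migration has completed (block 306).
        if self.migrated_count >= self.completion_threshold:
            self.notify_destination_done()
```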
  • FIG. 4 depicts a flow diagram of an example method 400 for requesting missing shared memory pages by a destination host.
  • the method may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computer system or a dedicated machine), or a combination of both.
  • method 400 may be performed by migration module 124 of destination hypervisor 123 in FIG. 1 .
  • some or all of method 400 might be performed by another machine. It should be noted that blocks depicted in FIG. 4 could be performed simultaneously or in a different order than that depicted.
  • processing logic starts a migrated virtual machine on the destination host.
  • the virtual machine is started once a portion of the state of the virtual machine from the source host has been migrated to a destination host.
  • the portion of the state of the virtual machine may comprise a predetermined state of various components of the virtual machine that are necessary for the virtual machine to begin execution on the destination host.
  • the portion of the virtual machine may comprise a device state, the state of CPU registers, the pages of memory that are currently being accessed by the virtual machine, or the like.
  • a virtual machine that was stopped on a source host because it attempted to access a page of shared memory that had already been migrated to the destination host will be migrated to, and then started on, the destination host seamlessly.
  • processing logic receives a request from a migrated virtual machine for a page of memory from a shared memory space on the destination host.
  • processing logic determines whether the requested page of shared memory is missing from the shared memory space on the destination host. In one illustrative example, processing logic may make this determination by referencing the valid bit of the page table entry for the shared memory page within the memory page table of the destination host operating system. If the requested page is not missing, the method of FIG. 4 ends. Otherwise, execution proceeds to block 404 .
  • processing logic sends a request to the source host for the missing memory page.
  • processing logic designates the requested missing memory page as requested. In some implementations, processing logic may accomplish this using a mapping table, a configuration file, an area within the memory page table of the destination host operating system, or in any other similar manner.
  • processing logic receives the missing shared memory page from the source host.
  • processing logic designates the missing memory page as present on the destination host. After block 407 , the method of FIG. 4 terminates.
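  • The destination-side flow of method 400 can be pictured with the sketch below. The status constants, callbacks, and class name are illustrative assumptions; in practice the logic would be driven by page faults taken when the migrated virtual machine touches a missing page.

```python
# Simplified sketch of method 400 (destination host side). Identifiers are
# illustrative assumptions; only the overall flow mirrors the description.

MISSING, REQUESTED, PRESENT = "missing", "requested", "present"

class DestinationMigrator:
    def __init__(self, page_status, request_from_source):
        self.page_status = page_status              # page_addr -> status
        self.request_from_source = request_from_source

    def on_shared_page_access(self, vm_id, page_addr):
        """A migrated VM requests a page of the shared memory space."""
        if self.page_status.get(page_addr) == PRESENT:
            return                                  # page already present here
        # Page is missing: pause the VM, request the page from the source
        # host, and designate the page as requested.
        self.pause_vm(vm_id)
        self.request_from_source(page_addr)
        self.page_status[page_addr] = REQUESTED

    def on_page_received(self, vm_id, page_addr, contents):
        """The source host delivered the missing shared memory page."""
        self.install_page(page_addr, contents)
        self.page_status[page_addr] = PRESENT       # designate as present
        self.resume_vm(vm_id)

    def pause_vm(self, vm_id):
        print(f"pausing {vm_id} on the destination host")

    def resume_vm(self, vm_id):
        print(f"resuming {vm_id} on the destination host")

    def install_page(self, page_addr, contents):
        pass                                        # map the page into guest memory
```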
  • FIG. 5 depicts an example computer system 500 which can perform any one or more of the methods described herein.
  • computer system 500 may correspond to network architecture 100 of FIG. 1 .
  • the computer system may be connected (e.g., networked) to other computer systems in a LAN, an intranet, an extranet, or the Internet.
  • the computer system may operate in the capacity of a server in a client-server network environment.
  • the computer system may be a personal computer (PC), a set-top box (STB), a server, a network router, switch or bridge, or any device capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that device.
  • PC personal computer
  • STB set-top box
  • server a server
  • network router switch or bridge
  • the exemplary computer system 500 includes a processing device 502 , a main memory 504 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM)), a static memory 506 (e.g., flash memory, static random access memory (SRAM)), and a data storage device 516 , which communicate with each other via a bus 508 .
  • main memory 504 e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM)
  • DRAM dynamic random access memory
  • SDRAM synchronous DRAM
  • static memory 506 e.g., flash memory, static random access memory (SRAM)
  • SRAM static random access memory
  • Processing device 502 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device 502 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets.
  • the processing device 502 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like.
  • the processing device 502 is configured to execute migration module 526 for performing the operations and steps discussed herein (e.g., corresponding to the methods of FIGS. 2-4 , etc.).
  • the computer system 500 may further include a network interface device 522 .
  • the computer system 500 also may include a video display unit 510 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 512 (e.g., a keyboard), a cursor control device 514 (e.g., a mouse), and a signal generation device 520 (e.g., a speaker).
  • a video display unit 510 e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)
  • an alphanumeric input device 512 e.g., a keyboard
  • a cursor control device 514 e.g., a mouse
  • a signal generation device 520 e.g., a speaker
  • the video display unit 510 , the alphanumeric input device 512 , and the cursor control device 514 may be combined into a single component or device (e.g., an LCD touch screen).
  • the data storage device 516 may include a computer-readable medium 524 on which is stored migration module 526 (e.g., corresponding to the methods of FIGS. 2-4 , etc.) embodying any one or more of the methodologies or functions described herein.
  • Migration module 526 may also reside, completely or at least partially, within the main memory 504 and/or within the processing device 502 during execution thereof by the computer system 500 , the main memory 504 and the processing device 502 also constituting computer-readable media.
  • Migration module 526 may further be transmitted or received over a network via the network interface device 522 .
  • While the computer-readable storage medium 524 is shown in the illustrative examples to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
  • the term “computer-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention.
  • the term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
  • the present invention also relates to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
  • the present invention may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present invention.
  • a machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer).
  • a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium (e.g., read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc.).
  • the words "example" or "exemplary" are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "example" or "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words "example" or "exemplary" is intended to present concepts in a concrete fashion.
  • the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A hypervisor of a source host receives a request to migrate a group of virtual machines from the source host to a destination host. The hypervisor of the source host determines that a first virtual machine being migrated to the destination host shares a memory space on the source host with a second virtual machine on the source host. Upon receiving a request from the second virtual machine on the source host to access a first memory page of the shared memory space on the source host that has been migrated to the destination host, the hypervisor of the source host initiates migration of the second virtual machine to the destination host.

Description

    TECHNICAL FIELD
  • The present disclosure is generally related to computer systems, and more particularly, to group migration in virtualized computer systems.
  • BACKGROUND
  • A virtual machine (VM) is a portion of software that, when executed on appropriate hardware, creates an environment allowing the virtualization of an actual physical computer system (e.g., a server, a mainframe computer, etc.). The actual physical computer system is typically referred to as a “host machine,” and the operating system (OS) of the host machine is typically referred to as the “host operating system.” Typically, software on the host machine known as a “hypervisor” (or a “virtual machine monitor”) manages the execution of one or more virtual machines or “guests”, providing a variety of functions such as virtualizing and allocating resources, context switching among virtual machines, etc. The operating system (OS) of the virtual machine is typically referred to as the “guest operating system.”
  • In multiple host environments, a running virtual machine or group of virtual machines can be moved from one host to another without disconnecting or terminating the virtual machine. Memory, storage, and network connectivity of the virtual machines can be transferred from the source host machine to a destination host machine. The process is referred to as “live migration” or “group migration.”
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present disclosure is illustrated by way of example, and not by way of limitation, and can be more fully understood with reference to the following detailed description when considered in connection with the figures in which:
  • FIG. 1 depicts a high-level component diagram of an example computer system architecture, in accordance with one or more aspects of the present disclosure.
  • FIG. 2 depicts a flow diagram of a method for migrating shared memory for a group of virtual machines, in accordance with one or more aspects of the present disclosure.
  • FIG. 3 depicts a flow diagram of a method for migrating a page of shared memory requested by a destination host, in accordance with one or more aspects of the present disclosure.
  • FIG. 4 depicts a flow diagram of a method for requesting missing shared memory pages by a destination host, in accordance with one or more aspects of the present disclosure.
  • FIG. 5 depicts a block diagram of an illustrative computer system operating in accordance with examples of the invention.
  • DETAILED DESCRIPTION
  • Described herein are methods and systems by which a memory space shared by group of virtual machines may be migrated from a source host to a destination host. Methods of group migration may include “pre-copy” and “post-copy” techniques. Pre-copy techniques can involve sending a copy of the state of a virtual machine to the destination host while the virtual machine continues to execute on the source host. If some memory pages change during the process, they can be re-copied until there are very few changes remaining on the source, at which point the virtual machine can be stopped on the source and restarted on the destination. Post-copy techniques can involve suspending the virtual machine on the source, copying a subset of the state of the virtual machine to the destination, and then resuming the virtual machine on the destination. If a post-copied virtual machine attempts to access a page of its own memory that has not been migrated, the attempted access can generate a fault and the requesting virtual machine can stop executing until the memory is pulled from the source host.
  • Employing these techniques can be effective with individual virtual machines because their migration can be treated independently from other virtual machines in a group. However, when a group of virtual machines share an area of memory that any virtual machine in the group may update at any time, employing traditional techniques for virtual machine migration can make managing continuing updates to the shared memory space untenable. Migration for each of the virtual machines in a group may not complete at the same time, which can result in extended latency and downtime for virtual machines that should wait for the entire shared memory space to be migrated. Particularly, employing traditional techniques to shared memory can prevent a consistent view of the shared memory across the group of virtual machines during migration and may require stopping all of the virtual machines in the group that access the shared memory until migration of the shared memory has been completed.
  • Aspects of the present disclosure address the above noted deficiency by employing modified post-copy techniques for pages of memory in a memory space shared among a group of virtual machines being migrated between hosts. In an illustrative example, a group of virtual machines that share a memory space on a source host can be migrated from the source host to a destination host. The migration may be initiated by a virtualization management system, the hypervisor on the source host, or in any other similar manner. The hypervisor on the source host may migrate the group of virtual machines using pre-copy techniques, post-copy techniques, or a combination of the two. The hypervisor on the source host can determine that a virtual machine of the group of virtual machines on the source host has been migrated to a destination host. In certain implementations, the hypervisor can determine that the first virtual machine has been migrated to the destination host by determining that a portion of the state of the virtual machine has been migrated to the destination host. The portion of the state of the virtual machine may comprise a predetermined state of various components of the virtual machine that are necessary for the virtual machine to begin execution on the destination host. For example, the portion of the virtual machine may comprise a device state, the state of CPU registers, the pages of memory that are currently being accessed by the virtual machine, or the like. The state of the virtual machine may be migrated by copying the state from the source host to the destination host directly through the network, placing the state in a shared space for the destination host to retrieve, or in any other manner.
  • The hypervisor on the source host may then determine that the virtual machine that has been migrated shares a memory space on the source host with the other virtual machines in the group of virtual machines that should be migrated to the destination host. In some implementations, the hypervisor on the source host may identify an area of memory as shared by the group of virtual machines by using a mapping table, a configuration file, an area within the memory page table of the host operating system, or in any other similar manner. For example, the hypervisor may store a unique identifier in a mapping table for the virtual machine that references the memory page addresses that are shared with other virtual machines in the group.
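  • As a purely illustrative example of such a mapping table (the layout and identifiers below are assumptions, not part of the disclosure), a unique group identifier could reference both the virtual machines in the group and the page addresses of the memory space they share:

```python
# Hypothetical layout of the mapping table described above. A unique group
# identifier references the VMs in the group and the guest page addresses
# of the memory space they share. Structure and names are illustrative only.

mapping_table = {
    "shared-group-1": {
        "vms": {"vm-111-1", "vm-111-2", "vm-111-3"},
        "shared_page_addrs": {0x7f000000, 0x7f001000, 0x7f002000},
    },
}

def vm_shares_memory(vm_id, group_id, table=mapping_table):
    """True if the VM belongs to the group that shares the memory space."""
    return vm_id in table[group_id]["vms"]

def is_shared_page(page_addr, group_id, table=mapping_table):
    """True if the page address lies inside the group's shared memory space."""
    return page_addr in table[group_id]["shared_page_addrs"]
```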
  • The hypervisor on the source host may begin migrating pages of the shared memory space to the destination host at the same time that the group of virtual machines are being migrated. The hypervisor may send the contents of shared memory pages by copying the contents from the source host to the destination host directly through the network, placing the contents in a shared space for the destination host to retrieve, or in any other manner. Alternatively, the hypervisor on the source host may wait to migrate the pages of the shared memory space until a request is received from the destination host. Upon receiving a request from the destination host for a memory page that has not yet been migrated to the destination host, the hypervisor may migrate the memory page to the destination host and designate that memory page as not present on the source host.
  • In some implementations, the hypervisor on the source host may maintain the status of migrated memory pages of the shared memory space to determine the status of the overall migration. For example, the hypervisor may store a status flag or a total number of migrated pages in a mapping table. Upon determining that the number of migrated memory pages of the shared memory space meets a predetermined threshold condition, the hypervisor may designate the migration of the shared memory space complete and notify the destination host accordingly.
  • In some implementations, the hypervisor on the source host may employ both methods of migrating the contents of the shared memory space. For example, the hypervisor on the source host may begin sending pages of the shared memory space to the destination host as resources are made available, but prioritize sending a particular page of shared memory to the destination host if a request is received for that particular page. Once the hypervisor on the source host migrates a page of shared memory to the destination host, the hypervisor on the source host may then designate that page of shared memory as not present on the source host. In one illustrative example, the hypervisor on the source host may modify the valid bit of the page table entry for the shared memory page within the memory page table of the host operating system. Alternatively, the hypervisor may save the state of the memory page within a separate mapping table in hypervisor accessible memory.
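  • The combined strategy can be pictured with the short sketch below; the queue-based scheduler and callback names are assumptions used only to show how an explicit request from the destination can jump ahead of the background transfer.

```python
# Sketch of combining background ("push") transfer of shared pages with
# prioritized handling of pages the destination explicitly requests.
# The scheduler and callbacks are illustrative assumptions.

from collections import deque

class SharedPageSender:
    def __init__(self, shared_page_addrs, send_page, mark_not_present):
        self.pending = deque(shared_page_addrs)   # pages still to be sent
        self.send_page = send_page                # transfer one page's contents
        self.mark_not_present = mark_not_present  # e.g., clear the PTE valid bit

    def on_destination_request(self, page_addr):
        """Prioritize a page the destination host asked for."""
        if page_addr in self.pending:
            self.pending.remove(page_addr)
            self._migrate(page_addr)

    def send_next(self):
        """Called whenever resources are available on the source host."""
        if self.pending:
            self._migrate(self.pending.popleft())

    def _migrate(self, page_addr):
        self.send_page(page_addr)
        self.mark_not_present(page_addr)          # page no longer on the source
```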
  • Upon determining that the first virtual machine shares the memory space on the source host with the other virtual machines in the group of virtual machines that should be migrated to the destination host, the hypervisor may begin monitoring the shared memory space for accesses of the other virtual machines in the group. In certain implementations, the hypervisor on the source host may receive a request to access a memory page of the shared memory space from one of the virtual machines that has not yet been migrated. If the hypervisor on the source host determines that the requested memory page of the shared memory space has not yet been migrated to the destination host, the hypervisor on the source host may allow the non-migrated virtual machine to access the shared memory page. Alternatively, the hypervisor may prevent any virtual machine still running on the source host from accessing any memory page from the shared memory space that is still on the source host.
  • If the hypervisor on the source host determines that the requested memory page of the shared memory space has been migrated to the destination host (e.g., the requested page has been designated as not present on the source host), the hypervisor may then stop execution of the virtual machine that issued the request on the source host and migrate that virtual machine to the destination host.
  • Once a portion of the state of any virtual machine of the group of virtual machines from the source host has been migrated to a destination host, the hypervisor of the destination host may start executing that virtual machine on the destination host. A virtual machine that was stopped on a source host because it attempted to access a page of shared memory that had already been migrated to the destination host should be migrated to, and then started on, the destination host seamlessly. Once migrated to the destination host, the virtual machine will then be able to access the page of shared memory on the destination host.
  • Subsequently, the hypervisor of the destination host may receive a request from the migrated virtual machine for a page of memory from the shared memory space on the destination host. Upon determining that the requested memory page of the shared memory space is missing on the destination host, the hypervisor of the destination host may pause the execution of the virtual machine that issued the request. In one illustrative example, the hypervisor may make this determination by referencing the valid bit of the page table entry for the shared memory page within the memory page table of the destination host operating system.
  • The hypervisor of the destination host may then retrieve the missing memory page from the source host. In some implementations, the hypervisor of the destination host may send a request to the source host for the missing shared memory page. Once the request is sent, the hypervisor on the destination host may designate the missing shared memory page on the destination host as having been requested. This may be accomplished by using a mapping table, a configuration file, an area within the memory page table of the destination host operating system, or in any other similar manner. The hypervisor of the destination host may monitor the status of missing memory pages on a time interval and resend any request that has not been fulfilled by the source host if a predefined period of time has elapsed since the request was sent to the source host. The hypervisor of the destination host may then receive the missing shared memory page from the source host and subsequently designate that memory page as present on the destination host.
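  • One way to picture the resend behavior is the sketch below; the polling loop, timing values, and function names are illustrative assumptions rather than the disclosed mechanism.

```python
# Sketch of the destination-side monitor that re-requests missing shared
# memory pages whose transfer has not completed within a predefined period.
# The polling interval and timeout are illustrative values.

import time

def monitor_outstanding_requests(outstanding, resend_request,
                                 timeout_s=1.0, poll_interval_s=0.1,
                                 should_stop=lambda: False):
    """outstanding: page_addr -> time the last request was sent."""
    while not should_stop():
        now = time.monotonic()
        for page_addr, sent_at in list(outstanding.items()):
            if now - sent_at > timeout_s:
                resend_request(page_addr)      # ask the source host again
                outstanding[page_addr] = now   # restart the clock for this page
        time.sleep(poll_interval_s)
```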
  • Aspects of the present disclosure are thus capable of reducing latency and downtime for migrated virtual machines that share memory across a group, while maintaining a consistent view of the shared memory across all virtual machines in the group during the migration process. More particularly, aspects of the present disclosure allow seamless migration of a group of virtual machines by migrating shared memory such that it is transparent to the virtual machines, thereby reducing latency, downtime, and resulting page faults.
  • FIG. 1 depicts a high-level component diagram of an illustrative example of a network architecture 100, in accordance with one or more aspects of the present disclosure. One skilled in the art will appreciate that other architectures for network architecture 100 are possible, and that the implementation of a network architecture utilizing examples of the invention are not necessarily limited to the specific architecture depicted by FIG. 1.
  • The network architecture 100 includes one or more source hosts 110 coupled to one or more destination hosts 120 over a network 101. The network 101 may be a private network (e.g., a local area network (LAN), wide area network (WAN), intranet, etc.) or a public network (e.g., the Internet). The source hosts 110 and destination hosts 120 may also be coupled to a host controller 130 (via the same or a different network or directly). Host controller 130 may be an independent machine such as a server computer, a desktop computer, etc. Alternatively, the host controller 130 may be part of the source host 110 or destination host 120.
  • Source Host 110 may comprise server computers or any other computing devices capable of running one or more source virtual machines (VMs) 111-1 through 111-N where N is a positive integer. Each source VM 111 is a software implementation of a machine that executes programs as though it was a physical machine. Each source VM 111 may run a guest operating system (OS) that may be different from one virtual machine to another. The guest OS may include Microsoft Windows, Linux, Solaris, Mac OS, etc. The source host 110 may comprise source shared memory 112, a memory space that is shared among a group of source VMs 111.
  • The source host 110 may additionally comprise a source hypervisor 113 that emulates the underlying hardware platform for the source VMs 111. The source hypervisor 113 may also be known as a virtual machine monitor (VMM) or a kernel-based hypervisor. The source hypervisor 113 may comprise migration module 114, memory page table 115, and mapping table 116. Migration module 114 can manage the source-side tasks required for migration of a group of VMs (e.g., source VMs 111) that are running on source host 110 as well as the shared memory of the group (e.g., source shared memory 112) to a destination host 120, as described in detail below with respect to FIGS. 2 and 3. The migration module 114 can initiate migration of a group of VMs 111, monitor the status of the migration state of each VM during migration, migrate memory pages from source shared memory 112 to destination host 120, and service requests received from destination host 120 for missing shared memory pages.
  • The migration module 114 may store information regarding page migration status for later use in memory page table 115 or mapping table 116. For example, upon migrating a page of shared memory from source shared memory 112, migration module 114 may modify the corresponding page table entry in memory page table 115 to designate the memory page as not present. Additionally, migration module 114 may store unique identifiers in a mapping table 116 that associate the group of VMs 111 to the page addresses of source shared memory 112 that are shared by the group.
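  • A minimal sketch of this source-side bookkeeping, assuming a plain dictionary in place of the hardware page table, is shown below; the class and method names are illustrative and do not describe any particular implementation of migration module 114.

```python
class SourceSharedMemoryMap:
    """Source-side view of which shared pages of a VM group remain on the source host.

    A plain dictionary stands in for memory page table 115 / mapping table 116;
    the structure and names are assumptions made for this example.
    """

    def __init__(self, group_id, shared_page_addrs):
        self.group_id = group_id
        # page address -> True while the page is still present on the source host
        self.page_present = {addr: True for addr in shared_page_addrs}

    def mark_migrated(self, page_addr):
        """Designate a migrated page as not present (analogous to clearing a valid bit)."""
        self.page_present[page_addr] = False

    def is_migrated(self, page_addr):
        """Return True if the page has already been migrated to the destination host."""
        return not self.page_present.get(page_addr, True)

    def remaining_pages(self):
        """Count the shared pages that have not yet been migrated."""
        return sum(1 for present in self.page_present.values() if present)
```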
  • Destination host 120 may comprise server computers or any other computing devices capable of running one or more destination virtual machines (VMs) 121-1 through 121-N, where N is a positive integer. Each destination VM 121 is a software implementation of a machine that executes programs as though it were a physical machine. Each destination VM 121 may run a guest operating system (OS) that may differ from one virtual machine to another. The guest OS may include Microsoft Windows, Linux, Solaris, Mac OS, etc. The destination host 120 may comprise destination shared memory 122, a memory space that is shared among a group of destination VMs 121.
  • The destination host 120 may additionally comprise a destination hypervisor 123 that emulates the underlying hardware platform for the destination VMs 121. The destination hypervisor 123 may also be known as a virtual machine monitor (VMM) or a kernel-based hypervisor. The destination hypervisor 123 may comprise migration module 124, memory page table 125, and mapping table 126. Migration module 124 can manage the destination-side tasks for migration of a group of VMs (e.g., destination VMs 121) with the shared memory of the group (e.g., destination shared memory 122), as described in detail below with respect to FIG. 4. The migration module 124 can complete the migration of a group of destination VMs 121, start each destination VM 121 on destination host 120, and send requests to source host 110 for memory pages missing from destination shared memory 122.
  • The migration module 124 may store information regarding page migration status for later use in memory page table 125 or mapping table 126. For example, upon receiving a missing memory page from source host 110, migration module 124 may modify the corresponding page table entry in memory page table 125 to designate the memory page as present. Additionally, migration module 124 may store unique identifiers in a mapping table 126 that associate the group of destination VMs 121 to the page addresses of destination shared memory 122 that are shared by the group. Moreover, migration module 124 may use mapping table 126 to store the status of requests submitted to source host 110 for missing memory pages (e.g., to indicate that particular memory pages of shared memory have been requested from the source host, received successfully from the source host, are present in destination shared memory 122, etc.).
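  • The per-page request status described above might be modeled as follows; the status values and class name are assumptions made for this example only.

```python
from enum import Enum, auto

class PageStatus(Enum):
    MISSING = auto()    # not yet present in destination shared memory 122
    REQUESTED = auto()  # request sent to the source host, transfer pending
    PRESENT = auto()    # received from the source host and installed

class DestinationMappingTable:
    """Destination-side per-page request status, keyed by shared page address."""

    def __init__(self, group_id, shared_page_addrs):
        self.group_id = group_id
        self.status = {addr: PageStatus.MISSING for addr in shared_page_addrs}

    def mark_requested(self, page_addr):
        self.status[page_addr] = PageStatus.REQUESTED

    def mark_present(self, page_addr):
        self.status[page_addr] = PageStatus.PRESENT
```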
  • A host controller 130 can manage the source VMs 111 and destination VMs 121. Host controller 130 may manage the allocation of resources from source host 110 to source VMs 111 and the allocation of resources from destination host 120 to destination VMs 121. In addition, host controller 130 may initiate the migration of a group of source VMs 111 with their associated source shared memory 112 to destination host 120. In some implementations, host controller 130 may run on a separate physical machine from source host 110 and destination host 120. Alternatively, host controller 130 may run locally on either source host 110 or destination host 120. The host controller 130 may include a virtualization manager 131 to perform the management operations described above.
  • FIG. 2 depicts a flow diagram of an example method 200 for migrating shared memory for a group of virtual machines. The method may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computer system or a dedicated machine), or a combination of both. In one illustrative example, method 200 may be performed by migration module 114 of source hypervisor 113 in FIG. 1. Alternatively, some or all of method 200 might be performed by another machine. It should be noted that blocks depicted in FIG. 2 could be performed simultaneously or in a different order than that depicted.
  • At block 201, processing logic determines that a first virtual machine of a group of virtual machines has been migrated to a destination host. In certain implementations, processing logic can determine that the first virtual machine has been migrated to the destination host by determining that a portion of the state of the virtual machine has been migrated to the destination host. The portion of the state of the virtual machine may comprise a predetermined state of various components of the virtual machine that are necessary for the virtual machine to begin execution on the destination host. For example, the portion of the state of the virtual machine may comprise a device state, the state of CPU registers, the pages of memory that are currently being accessed by the virtual machine, or the like. The state of the virtual machine may be migrated by copying the state from the source host to the destination host directly through the network, placing the state in a shared space for the destination host to retrieve, or in any other manner.
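  • As a rough illustration, and under the assumption that the minimal state set is known in advance, the readiness check described above might be expressed as follows; the component names are hypothetical.

```python
# Hypothetical minimal set of state components; the actual set is implementation-specific.
MINIMAL_STATE = frozenset({"device_state", "cpu_registers", "active_memory_pages"})

def portion_of_state_migrated(migrated_components):
    """Return True once the predetermined portion of the VM state has arrived."""
    return MINIMAL_STATE.issubset(migrated_components)

# Example: device and CPU register state have arrived, but the active pages have not.
assert not portion_of_state_migrated({"device_state", "cpu_registers"})
assert portion_of_state_migrated({"device_state", "cpu_registers", "active_memory_pages"})
```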
  • At block 202, processing logic determines whether the first virtual machine shares a memory space with a second virtual machine of the group of virtual machines. If not, the method of FIG. 2 terminates. Otherwise, execution continues to block 203. In some implementations, processing logic may identify an area of memory as shared by a group of virtual machines by using a mapping table, a configuration file, an area within the memory page table of the host operating system, or in any other similar manner. For example, processing logic may store a unique identifier in a mapping table for the virtual machine that references the memory page addresses that are shared with other virtual machines in a group.
  • At block 203, processing logic begins monitoring shared memory space accesses of the second virtual machine. At block 204, processing logic receives a request from the second virtual machine to access a memory page of the shared memory space. At block 205, processing logic determines whether the shared memory page requested by the second virtual machine has been migrated to the destination host (e.g., the memory page has been designated as not present on the source host). If not, the method of FIG. 2 terminates. Otherwise, execution continues to block 206.
  • At block 206, processing logic stops execution of the second virtual machine on the source host. At block 207, processing logic migrates the second virtual machine to the destination host. Processing logic may migrate the second virtual machine using pre-copy techniques, post-copy techniques, or a combination of the two. Processing logic may migrate the second virtual machine to the destination host by copying the state of the second virtual machine from the source host to the destination host directly through the network, placing the state in a shared space for the destination host to retrieve, or in any other manner. After block 207, the method of FIG. 2 terminates.
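  • Blocks 205 through 207 can be summarized by the following sketch; the stop_vm and migrate_vm callables stand in for hypervisor operations and are assumptions made for illustration.

```python
def handle_shared_page_access(vm_id, page_addr, migrated_pages, stop_vm, migrate_vm):
    """Source-side handling of blocks 205-207 for a monitored shared-memory access.

    migrated_pages is a set of shared page addresses already moved to the
    destination host; stop_vm and migrate_vm are assumed hypervisor hooks.
    """
    if page_addr in migrated_pages:      # block 205: page already on the destination
        stop_vm(vm_id)                   # block 206: stop the second virtual machine
        migrate_vm(vm_id)                # block 207: pre-copy, post-copy, or both
        return True                      # the VM is now being migrated
    return False                         # page is still local; the access proceeds
```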
  • FIG. 3 depicts a flow diagram of an example method 300 for migrating a page of shared memory requested by a destination host. The method may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computer system or a dedicated machine), or a combination of both. In one illustrative example, method 300 may be performed by migration module 114 of source hypervisor 113 in FIG. 1. Alternatively, some or all of method 300 might be performed by another machine. It should be noted that blocks depicted in FIG. 3 could be performed simultaneously or in a different order than that depicted.
  • At block 301, processing logic receives a request from a destination host for a memory page of shared memory space on a source host. At block 302, processing logic migrates the requested memory page to the destination host. Processing logic may send the contents of shared memory pages by copying the contents from the source host to the destination host directly through the network, placing the contents in a shared space for the destination host to retrieve, or in any other manner.
  • At block 303, processing logic designates the migrated memory page as not present on the source host. For example, processing logic may modify the valid bit of the page table entry for the shared memory page within the memory page table of the host operating system. Alternatively, processing logic may save the state of the memory page within a separate mapping table in hypervisor accessible memory.
  • At block 304, processing logic updates a mapping table to maintain the status of migrated memory pages of the shared memory space. For example, processing logic may store a status flag or a total number of migrated pages in a mapping table. At block 305, processing logic determines whether the number of migrated pages meets a predetermined threshold condition. If not, the method of FIG. 3 ends. Otherwise, execution proceeds to block 306.
  • At block 306, processing logic notifies the destination host that the shared memory migration has completed. After block 306, the method of FIG. 3 terminates.
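  • For illustration, method 300 might be condensed into a single request handler as sketched below; the send_page and notify_done hooks and the use of plain dictionaries are assumptions made for this example.

```python
def serve_page_request(page_addr, shared_memory, page_present, send_page,
                       notify_done, remaining_threshold=0):
    """Source-side handling of one destination request (blocks 301-306).

    shared_memory maps page addresses to contents; page_present maps page
    addresses to a presence flag. send_page and notify_done are assumed
    transport hooks, and the threshold condition is expressed here on the
    number of pages remaining on the source host.
    """
    send_page(page_addr, shared_memory[page_addr])          # block 302: migrate the page
    page_present[page_addr] = False                         # block 303: designate not present
    remaining = sum(1 for p in page_present.values() if p)  # block 304: update status
    if remaining <= remaining_threshold:                    # block 305: threshold met?
        notify_done()                                       # block 306: notify the destination
```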
  • FIG. 4 depicts a flow diagram of an example method 400 for requesting missing shared memory pages by a destination host. The method may be performed by processing logic that may comprise hardware (circuitry, dedicated logic, etc.), software (such as is run on a general purpose computer system or a dedicated machine), or a combination of both. In one illustrative example, method 400 may be performed by migration module 124 of destination hypervisor 123 in FIG. 1. Alternatively, some or all of method 400 might be performed by another machine. It should be noted that blocks depicted in FIG. 4 could be performed simultaneously or in a different order than that depicted.
  • At block 401, processing logic starts a migrated virtual machine on the destination host. In some implementations, the virtual machine is started once a portion of the state of the virtual machine from the source host has been migrated to a destination host. The portion of the state of the virtual machine may comprise a predetermined state of various components of the virtual machine that are necessary for the virtual machine to begin execution on the destination host. For example, the portion of the state of the virtual machine may comprise a device state, the state of CPU registers, the pages of memory that are currently being accessed by the virtual machine, or the like. A virtual machine that was stopped on the source host because it attempted to access a page of shared memory that had already been migrated to the destination host will thus be migrated to, and then started on, the destination host seamlessly.
  • At block 402, processing logic receives a request from a migrated virtual machine for a page of memory from a shared memory space on the destination host. At block 403, processing logic determines whether the requested page of shared memory is missing from the shared memory space on the destination host. In one illustrative example, processing logic may make this determination by referencing the valid bit of the page table entry for the shared memory page within the memory page table of the destination host operating system. If the requested page is not missing, the method of FIG. 4 ends. Otherwise, execution proceeds to block 404.
  • At block 404, processing logic sends a request to the source host for the missing memory page. At block 405, processing logic designates the requested missing memory page as requested. In some implementations, processing logic may accomplish this using a mapping table, a configuration file, an area within the memory page table of the destination host operating system, or in any other similar manner. At block 406, processing logic receives the missing shared memory page from the source host. At block 407, processing logic designates the missing memory page as present on the destination host. After block 407, the method of FIG. 4 terminates.
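  • A corresponding destination-side sketch of blocks 402 through 407 follows; the request_page and wait_for_page hooks are assumptions, and the string status values mirror the mapping-table states discussed above.

```python
def handle_missing_page_access(page_addr, status_table, request_page, wait_for_page):
    """Destination-side handling of blocks 402-407 for a shared-memory access.

    status_table maps page addresses to "missing", "requested", or "present";
    request_page and wait_for_page are assumed transport hooks.
    """
    if status_table.get(page_addr) == "present":
        return                                # block 403: page already on the destination
    request_page(page_addr)                   # block 404: ask the source host for the page
    status_table[page_addr] = "requested"     # block 405: designate as requested
    wait_for_page(page_addr)                  # block 406: receive the page from the source
    status_table[page_addr] = "present"       # block 407: designate as present
```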
  • FIG. 5 depicts an example computer system 500 which can perform any one or more of the methods described herein. In one example, computer system 500 may correspond to network architecture 100 of FIG. 1. The computer system may be connected (e.g., networked) to other computer systems in a LAN, an intranet, an extranet, or the Internet. The computer system may operate in the capacity of a server in a client-server network environment. The computer system may be a personal computer (PC), a set-top box (STB), a server, a network router, switch or bridge, or any device capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that device. Further, while only a single computer system is illustrated, the term “computer” shall also be taken to include any collection of computers that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods discussed herein.
  • The exemplary computer system 500 includes a processing device 502, a main memory 504 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM)), a static memory 506 (e.g., flash memory, static random access memory (SRAM)), and a data storage device 516, which communicate with each other via a bus 508.
  • Processing device 502 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device 502 may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device 502 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device 502 is configured to execute migration module 526 for performing the operations and steps discussed herein (e.g., corresponding to the methods of FIGS. 2-4, etc.).
  • The computer system 500 may further include a network interface device 522. The computer system 500 also may include a video display unit 510 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 512 (e.g., a keyboard), a cursor control device 514 (e.g., a mouse), and a signal generation device 520 (e.g., a speaker). In one illustrative example, the video display unit 510, the alphanumeric input device 512, and the cursor control device 514 may be combined into a single component or device (e.g., an LCD touch screen).
  • The data storage device 516 may include a computer-readable medium 524 on which is stored migration module 526 (e.g., corresponding to the methods of FIGS. 2-4, etc.) embodying any one or more of the methodologies or functions described herein. Migration module 526 may also reside, completely or at least partially, within the main memory 504 and/or within the processing device 502 during execution thereof by the computer system 500, the main memory 504 and the processing device 502 also constituting computer-readable media. Migration module 526 may further be transmitted or received over a network via the network interface device 522.
  • While the computer-readable storage medium 524 is shown in the illustrative examples to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present invention. The term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
  • Although the operations of the methods herein are shown and described in a particular order, the order of the operations of each method may be altered so that certain operations may be performed in an inverse order or so that certain operations may be performed, at least in part, concurrently with other operations. In certain implementations, instructions or sub-operations of distinct operations may be performed in an intermittent and/or alternating manner.
  • It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other implementations will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
  • In the above description, numerous details are set forth. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.
  • Some portions of the detailed descriptions above are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
  • It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “receiving,” “determining,” “identifying,” “stopping,” “migrating,” “designating,” “notifying,” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
  • The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
  • The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear as set forth in the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
  • The present invention may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present invention. A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium (e.g., read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc.).
  • The words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term “an embodiment” or “one embodiment” or “an implementation” or “one implementation” throughout is not intended to mean the same embodiment or implementation unless described as such. Furthermore, the terms “first,” “second,” “third,” “fourth,” etc. as used herein are meant as labels to distinguish among different elements and may not necessarily have an ordinal meaning according to their numerical designation.

Claims (20)

What is claimed is:
1. A method comprising:
determining, by a processing device executing a hypervisor on a source host, that a first virtual machine of a group of virtual machines on the source host has been migrated to a destination host;
upon determining that the first virtual machine shares a memory space on the source host with a second virtual machine of the group of virtual machines on the source host, monitoring, by the hypervisor of the source host, shared memory space accesses of the second virtual machine;
receiving, by the hypervisor of the source host, a request from the second virtual machine on the source host to access a first memory page of the shared memory space on the source host; and
upon determining that the first memory page of the shared memory space on the source host has been migrated to the destination host,
stopping, by the hypervisor of the source host, execution of the second virtual machine on the source host, and
migrating, by the hypervisor of the source host, the second virtual machine to the destination host.
2. The method of claim 1 further comprising:
receiving, by the hypervisor of the source host, a request from the destination host for a second memory page of the shared memory space on the source host;
migrating, by the hypervisor of the source host, the second memory page of the shared memory space on the source host to the destination host; and
designating, by the hypervisor of the source host, the second memory page of the shared memory space on the source host as not present.
3. The method of claim 2 further comprising:
upon determining that a number of memory pages of the shared memory space remaining on the source host meets a predetermined threshold condition, notifying, by the hypervisor of the source host, the destination host that shared memory transfer has completed.
4. The method of claim 1 wherein determining that the first virtual machine has been migrated to the destination host comprises determining that a portion of a state of the first virtual machine has been migrated to the destination host, the portion of the state of the first virtual machine comprising a device state, a CPU register state, and a RAM pages state.
5. The method of claim 1 wherein the hypervisor of the destination host is to:
start the first virtual machine on the destination host;
receive a request from the first virtual machine for a page of the shared memory space;
upon determining that the requested memory page of the shared memory space is missing from the destination host, pause the execution of the first virtual machine;
retrieve the missing shared memory page from the source host; and
resume execution of the first virtual machine.
6. The method of claim 5 wherein to retrieve the shared memory page, the hypervisor of the destination host is to:
send a request to the source host for the missing shared memory page;
designate the missing shared memory page on the destination host as having been requested;
receive the missing shared memory page from the source host; and
designate the missing shared memory page on the destination host as present.
7. The method of claim 5 wherein the hypervisor of the destination host is further to:
start the second virtual machine on the destination host.
8. A computing apparatus comprising:
a memory to store instructions; and
a processing device, coupled to the memory, to execute the instructions, wherein the processing device is to:
determine, by the processing device executing a hypervisor on a source host, that a first virtual machine on the source host being migrated to a destination host shares a memory space on the source host with a second virtual machine on the source host; and
upon receiving a request from the second virtual machine on the source host to access a first memory page of the shared memory space on the source host that has been migrated to the destination host, initiate, by the hypervisor of the source host, migration of the second virtual machine to the destination host.
9. The apparatus of claim 8 wherein the processing device is further to:
receive, by the hypervisor of the source host, a request from the destination host for a second memory page of the shared memory space on the source host;
migrate, by the hypervisor of the source host, the second memory page of the shared memory space on the source host to the destination host; and
designate, by the hypervisor of the source host, the second memory page of the shared memory space on the source host as not present.
10. The apparatus of claim 9 wherein the processing device is further to:
upon determining that a number of memory pages of the shared memory space remaining on the source host meets a predetermined threshold condition, notify, by the hypervisor of the source host, the destination host that shared memory transfer has completed.
11. The apparatus of claim 8 wherein the first virtual machine of the source host and the second virtual machine of the source host are part of a group of virtual machines being migrated to the destination host.
12. The apparatus of claim 8, wherein the hypervisor of the destination host is to:
start the first virtual machine on the destination host;
receive a request from the first virtual machine for a page of the shared memory space;
upon determining that the requested memory page of the shared memory space is missing from the destination host, pause the execution of the first virtual machine;
retrieve the missing shared memory page from the source host; and
resume execution of the first virtual machine.
13. The apparatus of claim 11, wherein to retrieve the shared memory page, the hypervisor of the destination host is to:
send a request to the source host for the missing shared memory page;
designate the missing shared memory page on the destination host as having been requested;
receive the missing shared memory page from the source host; and
designate the missing shared memory page on the destination host as present.
14. The apparatus of claim 8, wherein the hypervisor of the destination host is further to:
start the second virtual machine on the destination host.
15. A non-transitory computer readable storage medium, having instructions stored therein, which when executed by a processing device of a computer system, cause the processing device to perform operations comprising:
determining, by the processing device executing a hypervisor on a source host, that a first virtual machine of a group of virtual machines on the source host has been migrated to a destination host;
upon determining that the first virtual machine shares a memory space on the source host with a second virtual machine of the group of virtual machines on the source host, monitoring, by the hypervisor of the source host, shared memory space accesses of the second virtual machine;
receiving, by the hypervisor of the source host, a request from the second virtual machine on the source host to access a first memory page of the shared memory space on the source host; and
upon determining that the first memory page of the shared memory space on the source host has been migrated to the destination host,
stopping, by the hypervisor of the source host, execution of the second virtual machine on the source host, and
migrating, by the hypervisor of the source host, the second virtual machine to the destination host.
16. The non-transitory computer readable storage medium of claim 15, the operations further comprising:
receiving, by the hypervisor of the source host, a request from the destination host for a second memory page of the shared memory space on the source host;
migrating, by the hypervisor of the source host, the second memory page of the shared memory space on the source host to the destination host; and
designating, by the hypervisor of the source host, the second memory page of the shared memory space on the source host as not present.
17. The non-transitory computer readable storage medium of claim 16 wherein determining that the first virtual machine has been migrated to the destination host comprises determining that a portion of a state of the first virtual machine has been migrated to the destination host, the portion of the state of the first virtual machine comprising a device state, a CPU register state, and a RAM pages state.
18. The non-transitory computer readable storage medium of claim 15, wherein the hypervisor of the destination host is to:
start the first virtual machine on the destination host;
receive a request from the first virtual machine for a page of the shared memory space;
upon determining that the requested memory page of the shared memory space is missing from the destination host, pause the execution of the first virtual machine;
retrieve the missing shared memory page from the source host; and
resume execution of the first virtual machine.
19. The non-transitory computer readable storage medium of claim 18, wherein to retrieve the shared memory page, the hypervisor of the destination host is to:
send a request to the source host for the missing shared memory page;
designate the missing shared memory page on the destination host as having been requested;
receive the missing shared memory page from the source host; and
designate the missing shared memory page on the destination host as present.
20. The non-transitory computer readable storage medium of claim 18, wherein the hypervisor of the destination host is further to:
start the second virtual machine on the destination host.
US14/546,330 2014-11-18 2014-11-18 Migrating a VM in response to an access attempt by the VM to a shared memory page that has been migrated Active 2034-11-22 US9348655B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/546,330 US9348655B1 (en) 2014-11-18 2014-11-18 Migrating a VM in response to an access attempt by the VM to a shared memory page that has been migrated
US15/162,277 US10552230B2 (en) 2014-11-18 2016-05-23 Post-copy migration of a group of virtual machines that share memory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/546,330 US9348655B1 (en) 2014-11-18 2014-11-18 Migrating a VM in response to an access attempt by the VM to a shared memory page that has been migrated

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/162,277 Continuation US10552230B2 (en) 2014-11-18 2016-05-23 Post-copy migration of a group of virtual machines that share memory

Publications (2)

Publication Number Publication Date
US20160139962A1 true US20160139962A1 (en) 2016-05-19
US9348655B1 US9348655B1 (en) 2016-05-24

Family

ID=55961766

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/546,330 Active 2034-11-22 US9348655B1 (en) 2014-11-18 2014-11-18 Migrating a VM in response to an access attempt by the VM to a shared memory page that has been migrated
US15/162,277 Active 2034-11-22 US10552230B2 (en) 2014-11-18 2016-05-23 Post-copy migration of a group of virtual machines that share memory

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/162,277 Active 2034-11-22 US10552230B2 (en) 2014-11-18 2016-05-23 Post-copy migration of a group of virtual machines that share memory

Country Status (1)

Country Link
US (2) US9348655B1 (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160366014A1 (en) * 2015-06-09 2016-12-15 Kt Corporation Method and apparatus for network function virtualization
US20170090964A1 (en) * 2015-09-28 2017-03-30 Red Hat Israel, Ltd. Post-copy virtual machine migration with assigned devices
US20180107509A1 (en) * 2015-07-31 2018-04-19 Adrian Shaw Migration of computer systems
US20180246751A1 (en) * 2015-09-25 2018-08-30 Intel Corporation Techniques to select virtual machines for migration
US20180329737A1 (en) * 2015-12-18 2018-11-15 Intel Corporation Virtual machine batch live migration
US20190243573A1 (en) * 2018-02-06 2019-08-08 Nutanix, Inc. System and method for migrating virtual machines with storage while in use
US10439960B1 (en) * 2016-11-15 2019-10-08 Ampere Computing Llc Memory page request for optimizing memory page latency associated with network nodes
US10509567B2 (en) * 2018-02-06 2019-12-17 Nutanix, Inc. System and method for migrating storage while in use
US10509584B2 (en) * 2018-02-06 2019-12-17 Nutanix, Inc. System and method for using high performance storage with tunable durability
US10817333B2 (en) 2018-06-26 2020-10-27 Nutanix, Inc. Managing memory in devices that host virtual machines and have shared memory
US11074099B2 (en) 2018-02-06 2021-07-27 Nutanix, Inc. System and method for storage during virtual machine migration
US11360807B2 (en) * 2019-09-13 2022-06-14 Oracle International Corporation Cloning a computing environment through node reconfiguration and with node modification
US11409619B2 (en) 2020-04-29 2022-08-09 The Research Foundation For The State University Of New York Recovering a virtual machine after failure of post-copy live migration
US11474848B2 (en) * 2019-10-24 2022-10-18 Red Hat, Inc. Fail-safe post copy migration of virtual machines

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015172803A1 (en) * 2014-05-12 2015-11-19 Nokia Solutions And Networks Management International Gmbh Controlling of communication network comprising virtualized network functions
US9792138B2 (en) * 2015-02-18 2017-10-17 Red Hat Israel, Ltd. Virtual machine migration to hyper visors with virtual function capability
US20170003997A1 (en) * 2015-07-01 2017-01-05 Dell Products, Lp Compute Cluster Load Balancing Based on Memory Page Contents
US10768959B2 (en) * 2015-11-24 2020-09-08 Red Hat Israel, Ltd. Virtual machine migration using memory page hints
US10572289B2 (en) 2017-08-28 2020-02-25 Red Hat Israel, Ltd. Guest-initiated announcement of virtual machine migration
US11194620B2 (en) * 2018-10-31 2021-12-07 Nutanix, Inc. Virtual machine migration task management
US11188368B2 (en) 2018-10-31 2021-11-30 Nutanix, Inc. Asynchronous workload migration control
US11321112B2 (en) 2019-04-22 2022-05-03 International Business Machines Corporation Discovery and recreation of communication endpoints in virtual machine migration
US11055010B2 (en) 2019-09-05 2021-07-06 Microsoft Technology Licensing, Llc Data partition migration via metadata transfer and access attribute change
CN113051024B (en) * 2019-12-26 2022-08-09 阿里巴巴集团控股有限公司 Virtual machine live migration method and device, electronic equipment and storage medium
US11182092B1 (en) * 2020-07-14 2021-11-23 Red Hat, Inc. PRI overhead reduction for virtual machine migration

Family Cites Families (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5617554A (en) * 1992-02-10 1997-04-01 Intel Corporation Physical address size selection and page size selection in an address translator
US5761734A (en) * 1996-08-13 1998-06-02 International Business Machines Corporation Token-based serialisation of instructions in a multiprocessor system
US7155462B1 (en) * 2002-02-01 2006-12-26 Microsoft Corporation Method and apparatus enabling migration of clients to a specific version of a server-hosted application, where multiple software versions of the server-hosted application are installed on a network
US7484208B1 (en) * 2002-12-12 2009-01-27 Michael Nelson Virtual machine migration
US7203944B1 (en) 2003-07-09 2007-04-10 Veritas Operating Corporation Migrating virtual machines among computer systems to balance load caused by virtual machines
US7257811B2 (en) * 2004-05-11 2007-08-14 International Business Machines Corporation System, method and program to migrate a virtual machine
US20050273571A1 (en) * 2004-06-02 2005-12-08 Lyon Thomas L Distributed virtual multiprocessor
US20070204266A1 (en) 2006-02-28 2007-08-30 International Business Machines Corporation Systems and methods for dynamically managing virtual machines
US9081669B2 (en) * 2006-04-27 2015-07-14 Avalanche Technology, Inc. Hybrid non-volatile memory device
US8903888B1 (en) * 2006-10-27 2014-12-02 Hewlett-Packard Development Company, L.P. Retrieving data of a virtual machine based on demand to migrate the virtual machine between physical machines
US7673113B2 (en) 2006-12-29 2010-03-02 Intel Corporation Method for dynamic load balancing on partitioned systems
US7925850B1 (en) * 2007-02-16 2011-04-12 Vmware, Inc. Page signature disambiguation for increasing the efficiency of virtual machine migration in shared-page virtualized computer systems
EP1962192A1 (en) 2007-02-21 2008-08-27 Deutsche Telekom AG Method and system for the transparent migration of virtual machine storage
US8019962B2 (en) * 2007-04-16 2011-09-13 International Business Machines Corporation System and method for tracking the memory state of a migrating logical partition
US20090276774A1 (en) * 2008-05-01 2009-11-05 Junji Kinoshita Access control for virtual machines in an information system
JP5157717B2 (en) 2008-07-28 2013-03-06 富士通株式会社 Virtual machine system with virtual battery and program for virtual machine system with virtual battery
US8769206B2 (en) * 2009-01-20 2014-07-01 Oracle International Corporation Methods and systems for implementing transcendent page caching
US8490181B2 (en) * 2009-04-22 2013-07-16 International Business Machines Corporation Deterministic serialization of access to shared resource in a multi-processor system for code instructions accessing resources in a non-deterministic order
WO2010122709A1 (en) 2009-04-23 2010-10-28 日本電気株式会社 Rejuvenation processing device, rejuvenation processing system, computer program, and data processing method
US8239609B2 (en) 2009-10-23 2012-08-07 Sap Ag Leveraging memory similarity during live migrations
US8832683B2 (en) 2009-11-30 2014-09-09 Red Hat Israel, Ltd. Using memory-related metrics of host machine for triggering load balancing that migrate virtual machine
US8327060B2 (en) 2009-11-30 2012-12-04 Red Hat Israel, Ltd. Mechanism for live migration of virtual machines with memory optimizations
US8589921B2 (en) 2009-11-30 2013-11-19 Red Hat Israel, Ltd. Method and system for target host optimization based on resource sharing in a load balancing host and virtual machine adjustable selection algorithm
US8244957B2 (en) 2010-02-26 2012-08-14 Red Hat Israel, Ltd. Mechanism for dynamic placement of virtual machines during live migration based on memory
US8826292B2 (en) 2010-08-06 2014-09-02 Red Hat Israel, Ltd. Migrating virtual machines based on level of resource sharing and expected load per resource on candidate target host machines
US8851440B2 (en) * 2010-09-29 2014-10-07 Velcro Industries B.V. Releasable hanging system
US9229516B2 (en) 2010-10-21 2016-01-05 At&T Intellectual Property I, L.P. Methods, devices, and computer program products for maintaining network presence while conserving power consumption
JP5541117B2 (en) 2010-11-26 2014-07-09 富士通株式会社 Virtual machine migration system, virtual machine migration program, and virtual machine migration method
US8819678B2 (en) 2010-12-15 2014-08-26 Red Hat Israel, Ltd. Live migration of a guest from a source hypervisor to a target hypervisor
US8745234B2 (en) 2010-12-23 2014-06-03 Industrial Technology Research Institute Method and manager physical machine for virtual machine consolidation
US8356120B2 (en) 2011-01-07 2013-01-15 Red Hat Israel, Ltd. Mechanism for memory state restoration of virtual machine (VM)-controlled peripherals at a destination host machine during migration of the VM
US8533713B2 (en) 2011-03-29 2013-09-10 Intel Corporation Efficent migration of virtual functions to enable high availability and resource rebalance
JP5370946B2 (en) 2011-04-15 2013-12-18 株式会社日立製作所 Resource management method and computer system
US9355119B2 (en) * 2011-09-22 2016-05-31 Netapp, Inc. Allocation of absent data within filesystems
US8756601B2 (en) * 2011-09-23 2014-06-17 Qualcomm Incorporated Memory coherency acceleration via virtual machine migration
US8694644B2 (en) 2011-09-29 2014-04-08 Nec Laboratories America, Inc. Network-aware coordination of virtual machine migrations in enterprise data centers and clouds
US20140019964A1 (en) 2012-07-13 2014-01-16 Douglas M. Neuse System and method for automated assignment of virtual machines and physical machines to hosts using interval analysis
EP2687982A1 (en) 2012-07-16 2014-01-22 NTT DoCoMo, Inc. Hierarchical system for managing a plurality of virtual machines, method and computer program
US9372726B2 (en) 2013-01-09 2016-06-21 The Research Foundation For The State University Of New York Gang migration of virtual machines using cluster-wide deduplication
US9170950B2 (en) 2013-01-16 2015-10-27 International Business Machines Corporation Method, apparatus and computer programs providing cluster-wide page management
US9268583B2 (en) 2013-02-25 2016-02-23 Red Hat Israel, Ltd. Migration of virtual machines with shared memory
US9563452B2 (en) * 2013-06-28 2017-02-07 Sap Se Cloud-enabled, distributed and high-availability system with virtual machine checkpointing
US9317326B2 (en) * 2013-11-27 2016-04-19 Vmware, Inc. Consistent migration of a group of virtual machines using source and destination group messaging
US9336039B2 (en) * 2014-06-26 2016-05-10 Vmware, Inc. Determining status of migrating virtual machines
US9432205B2 (en) * 2014-11-04 2016-08-30 Telefonaktiebolaget L M Ericsson (Publ) Explicit block encoding of multicast group membership information with bit index explicit replication (BIER)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10713071B2 (en) * 2015-06-09 2020-07-14 Kt Corporation Method and apparatus for network function virtualization
US20160366014A1 (en) * 2015-06-09 2016-12-15 Kt Corporation Method and apparatus for network function virtualization
US20180107509A1 (en) * 2015-07-31 2018-04-19 Adrian Shaw Migration of computer systems
US20180246751A1 (en) * 2015-09-25 2018-08-30 Intel Corporation Techniques to select virtual machines for migration
US20170090964A1 (en) * 2015-09-28 2017-03-30 Red Hat Israel, Ltd. Post-copy virtual machine migration with assigned devices
US10430221B2 (en) * 2015-09-28 2019-10-01 Red Hat Israel, Ltd. Post-copy virtual machine migration with assigned devices
US20180329737A1 (en) * 2015-12-18 2018-11-15 Intel Corporation Virtual machine batch live migration
US11074092B2 (en) * 2015-12-18 2021-07-27 Intel Corporation Virtual machine batch live migration
US10439960B1 (en) * 2016-11-15 2019-10-08 Ampere Computing Llc Memory page request for optimizing memory page latency associated with network nodes
US10509567B2 (en) * 2018-02-06 2019-12-17 Nutanix, Inc. System and method for migrating storage while in use
US10540112B2 (en) * 2018-02-06 2020-01-21 Nutanix, Inc. System and method for migrating virtual machines with storage while in use
US10509584B2 (en) * 2018-02-06 2019-12-17 Nutanix, Inc. System and method for using high performance storage with tunable durability
US20190243573A1 (en) * 2018-02-06 2019-08-08 Nutanix, Inc. System and method for migrating virtual machines with storage while in use
US11074099B2 (en) 2018-02-06 2021-07-27 Nutanix, Inc. System and method for storage during virtual machine migration
US10817333B2 (en) 2018-06-26 2020-10-27 Nutanix, Inc. Managing memory in devices that host virtual machines and have shared memory
US11360807B2 (en) * 2019-09-13 2022-06-14 Oracle International Corporation Cloning a computing environment through node reconfiguration and with node modification
US11474848B2 (en) * 2019-10-24 2022-10-18 Red Hat, Inc. Fail-safe post copy migration of virtual machines
US11409619B2 (en) 2020-04-29 2022-08-09 The Research Foundation For The State University Of New York Recovering a virtual machine after failure of post-copy live migration
US11983079B2 (en) 2020-04-29 2024-05-14 The Research Foundation For The State University Of New York Recovering a virtual machine after failure of post-copy live migration

Also Published As

Publication number Publication date
US10552230B2 (en) 2020-02-04
US20160266940A1 (en) 2016-09-15
US9348655B1 (en) 2016-05-24

Similar Documents

Publication Publication Date Title
US10552230B2 (en) Post-copy migration of a group of virtual machines that share memory
US11494213B2 (en) Virtual machine memory migration by storage
US10817333B2 (en) Managing memory in devices that host virtual machines and have shared memory
US9405642B2 (en) Providing virtual machine migration reliability using an intermediary storage device
US10877793B2 (en) Extending the base address register by modifying the number of read-only bits associated with a device to be presented to a guest operating system
US9740519B2 (en) Cross hypervisor migration of virtual machines with VM functions
US20130227559A1 (en) Management of i/o reqeusts in virtual machine migration
US9569223B2 (en) Mixed shared/non-shared memory transport for virtual machines
US9489228B2 (en) Delivery of events from a virtual machine to a thread executable by multiple host CPUs using memory monitoring instructions
US11809888B2 (en) Virtual machine memory migration facilitated by persistent memory devices
US11474848B2 (en) Fail-safe post copy migration of virtual machines
US9639388B2 (en) Deferred assignment of devices in virtual machine migration
US10394586B2 (en) Using capability indicators to indicate support for guest driven surprise removal of virtual PCI devices
US9256455B2 (en) Delivery of events from a virtual machine to host CPU using memory monitoring instructions
US12001869B2 (en) Memory over-commit support for live migration of virtual machines
US10768959B2 (en) Virtual machine migration using memory page hints
US10503659B2 (en) Post-copy VM migration speedup using free page hinting
US9575788B2 (en) Hypervisor handling of processor hotplug requests
US11093275B2 (en) Partial surprise removal of a device for virtual machine migration
US9684529B2 (en) Firmware and metadata migration across hypervisors based on supported capabilities
US20230043180A1 (en) Fail-safe post copy migration of containerized applications
US11614973B2 (en) Assigning devices to virtual machines in view of power state information

Legal Events

Date Code Title Description
AS Assignment

Owner name: RED HAT ISRAEL, LTD., ISRAEL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TSIRKIN, MICHAEL S.;GILBERT, DAVID A.;SIGNING DATES FROM 20141114 TO 20141117;REEL/FRAME:034199/0650

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8