WO2012009843A1 - Live migration of a virtual machine with continual monitoring and sending of memory writes


Info

Publication number
WO2012009843A1
Authority
WO
WIPO (PCT)
Prior art keywords
host computer
memory
modified data
virtual machine
continually
Prior art date
Application number
PCT/CN2010/075240
Other languages
English (en)
Inventor
Hai Jin
Song Wu
Xiaodong Pan
Original Assignee
Empire Technology Development Llc
Priority date
Filing date
Publication date
Application filed by Empire Technology Development Llc filed Critical Empire Technology Development Llc
Priority to PCT/CN2010/075240
Publication of WO2012009843A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/4557Distribution of virtual machine instances; Migration and load balancing

Definitions

  • a method may include, while the virtual machine is operating on the source host computer, the source host computer continually monitoring and recording writes by the virtual machine to memory of the source host computer, continually sending modified data in the memory of the source host computer to the destination host computer, and continually determining whether modified data sent to the destination host computer substantially matches the modified data in the memory of the source host computer.
  • the method may further include the source host computer stopping the virtual machine from operating on the source host computer, when the source host computer determines that the modified data sent to the destination host computer substantially matches the modified data in the memory of the source host computer.
  • the method may include the source host computer sending one last amount of modified data in the memory of the source host computer to the destination host computer, after stopping the virtual machine from operating on the source host computer.
  • the continual monitoring and recording of writes by the virtual machine to memory of the source host computer may include the source host computer initializing one or more bitmaps to track modification of one or more corresponding memory pages of the memory of the source host computer, initializing a first table to track modification rates of the memory pages of the source host computer, and/or initializing a second table to track modified data of the memory pages of the source host computer.
  • the continual monitoring and recording of writes by the virtual machine to memory of the source host computer may include the source host computer updating one or more bitmaps to track modifications of one or more corresponding memory pages of the memory of the source host computer, updating the first table to track modification rates of the memory pages of the source host computer, and/or updating the second table to track modified data of the memory pages of the source host computer.
  • the continual sending of modified data in the memory of the source host computer to the destination host computer may include the source host computer determining whether a modification rate of a memory page of the memory of the source host computer has exceeded a boundary value. Additionally, the sending operation may include the source host computer sending an entire memory page to the destination host computer, when the source host computer determines that the modification rate of the memory page of the source host computer has exceeded the boundary value, and/or sending modified data of a memory page to the destination host computer, when the source host computer determines that the modification rate of the memory page of the source host computer has failed to exceed the boundary value. Further, the sending of modified data of a memory page of the source host computer to the destination host computer may include sending a package formed with the modified data of the memory page of the source host computer, and the modification bitmap of the memory page.
  • a computer may be configured to be a source host computer for hosting a virtual machine.
  • the source host computer may be configured to include a virtual machine live migration service equipped to live migrate the virtual machine from the source host computer to a destination host computer.
  • the source host computer may include memory, one or more processors, and a virtual machine monitor.
  • the virtual machine monitor may be configured to be operated by the one or more processors to perform a number of tasks while the virtual machine is in operation on the source host computer.
  • the tasks may include a first task to continually monitor and record writes by the virtual machine to the memory, a second task to continually send modified data in the memory to the destination host computer, and a third task to continually determine whether modified data sent to the destination host computer substantially matches modified data in the memory.
  • the virtual machine monitor may be further configured to stop the virtual machine after a determination that the modified data sent to the destination host computer substantially matches the modified data in the memory of the source host computer.
  • an article of manufacture with a non-transitory, tangible computer-readable storage medium may be provided with a number of instructions configured to cause an apparatus, in response to execution of the instructions by the apparatus, to perform a number of operations associated with live migrating a virtual machine hosted by the apparatus to another apparatus.
  • the operations may be performed while the virtual machine is in operation on the apparatus.
  • the operations may include continually monitoring and recording writes by the virtual machine to memory of the apparatus, continually sending modified data in the memory of the apparatus to the other apparatus, and continually determining whether modified data sent to the other apparatus substantially matches modified data in the memory of the apparatus.
  • the operations may further include stopping the virtual machine after the apparatus determines that the modified data sent to the other apparatus substantially matches the modified data in the memory of the apparatus.
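The pre-copy and shut-down-and-copy behavior summarized in the items above can be illustrated with a small numeric simulation. This is a sketch under assumptions that are not in the disclosure: data is measured in abstract units, the bandwidth and write rate are constant per round, "substantially matches" is modeled as the remainder fitting in one final transfer, and all names are hypothetical.

```python
# Toy model of the claimed live-migration flow. "dirty" is the amount of
# modified-but-unsent data on the source host computer; each pre-copy
# round sends up to `bandwidth` units while the still-running virtual
# machine dirties `write_rate` more units.

def live_migrate(dirty, write_rate, bandwidth, rounds_limit=100):
    rounds = 0
    # Pre-copy phase: monitor/record writes, send modified data, and
    # test whether the sent data substantially matches the modified data.
    while dirty > bandwidth and rounds < rounds_limit:
        dirty -= bandwidth   # send modified data
        dirty += write_rate  # VM keeps writing while it runs
        rounds += 1
    # Shut-down-and-copy phase: stop the VM, then send the final
    # installment of modified data.
    final_installment = dirty
    return rounds, final_installment

rounds, final = live_migrate(dirty=1000, write_rate=100, bandwidth=300)
```

When the bandwidth exceeds the write rate, the unsent remainder shrinks every round, so the virtual machine only needs to be stopped for the short final transfer.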
  • FIG. 1 illustrates an overview of a computing environment having a number of host computers hosting virtual machines, and equipped with a virtual machine live migration service that includes a continual monitor and send function for memory writes,
  • FIG. 2 illustrates a method of virtual machine live migration, employing the virtual machine live migration service of FIG. 1,
  • FIG. 3 illustrates the monitor and record operation of FIG. 2 in further detail,
  • FIG. 4 illustrates the send operation of FIG. 2 in further detail,
  • FIG. 5 illustrates a modification rate table, a write data table, and a memory page bitmap, suitable for use with the monitor and record, and send operations of FIGS. 3 and 4,
  • FIG. 6 illustrates an example computing device suitable for use as a host computer of FIG. 1, and
  • FIG. 7 illustrates an example program product including an article of manufacture with instructions configured to enable an apparatus to perform the method of FIG. 2, all arranged in accordance with at least some embodiments of the present disclosure.
  • a computing platform refers to a computer or a similar electronic computing device, such as a cellular telephone, that manipulates and/or transforms data represented as physical quantities, including electronic and/or magnetic quantities, within the computing platform's processors, memories, registers, etc.
  • FIG. 1 illustrates an overview of a computing environment having a number of host computers hosting virtual machines, and equipped with a virtual machine live migration service that includes a continual monitor and send function for memory writes, in accordance with various embodiments of the present disclosure.
  • computing environment 100 may include a number of host computers, e.g., host computer A 102 and host computer B 122.
  • Host computer A 102 may include memory 104 and processors 106
  • host computer B 122 may include memory 124 and processors 126.
  • Host computer A 102 may host a number of virtual machines 110
  • host computer B 122 may host a number of virtual machines 130.
  • host computer A 102 may include virtual machine monitor (VMM) 112
  • host computer B 122 may include virtual machine monitor (VMM) 132.
  • VMM 112 may include a virtual machine (VM) Live Migration Service 114 with Continual Monitor and Send Function for memory writes, while VMM 132 may include a virtual machine (VM) Live Migration Service 134 with Continual Monitor and Send Function for memory writes.
  • Host computers A and B 102 and 122 may be coupled to each other via network 150. Host computers A and B 102 and 122 may also have shared access to various mass storage devices 140.
  • VM Live Migration Services 114 and 134 may be configured to enable host computers A and B 102 and 122 to practice embodiments of the method of VM live migration of the present disclosure, to be described more fully below. Except for VM Live Migration Services 114 and 134, the other elements of FIG. 1 (host computers A and B 102 and 122, memory 104 and 124, processors 106 and 126, virtual machines 110 and 130, the basic functions of VMM 112 and 132, network 150, and mass storage devices 140) are all intended to represent a broad range of these elements.
  • Referring now to FIG. 2, wherein a method of virtual machine live migration, employing the virtual machine live migration service of FIG. 1, in accordance with embodiments of the present disclosure, is illustrated.
  • the host computer 102 and/or host computer 122 can be configured to perform the various operations, functions or actions described below via a respective one of VMM 112 and VMM 132.
  • An example method 200 may include one or more functions, operations, or actions, as illustrated by one or more of blocks 202, 204, 206, 208, and/or 210. It should be appreciated that in some implementations one or more of the illustrated blocks may be eliminated, combined, or separated into additional blocks, or performed in a different order, without departing from the spirit of the present disclosure.
  • VM live migration method 200 may include two phases, a pre-copy phase 212 and a shut down and copy phase 214.
  • pre-copy phase 212 and shut down and copy phase 214 of VM live migration method 200 will be described in terms of an example live migration of a VM 110 from host computer A 102 to host computer B 122.
  • host computer A 102 may be referred to as the source host computer
  • host computer B 122 may be referred to as the destination host computer.
  • VM Live Migration Services 114 of source host computer 102 may begin the VM live migration method 200 at block 202 (Monitor and Record Writes to Memory). At block 202, VM Live Migration Services 114 may monitor and record writes by one or more of virtual machines 110 to memory 104. From block 202, the method may continue at block 204 (Send Modified Data). At block 204, VM Live Migration Services 114 may send modified data from source host computer 102 to destination host computer 122.
  • the sending of data between the source host computer and the destination host computer may be constrained by the available bandwidth of the inter-coupling network.
  • Initially, the amount of modified data having been sent to and stored on the destination host computer might be less than the amount of modified data on the source host computer.
  • As the network bandwidth eases, especially relative to the volume of writes to memory, the amount of modified data having been sent to and stored on the destination host computer might come to substantially match the amount of modified data on the source host computer.
  • method 200 may proceed to block 206 (Modified Data Sent substantially matches Data Modified).
  • VM Live Migration Services 114 may determine whether the amount of modified data sent from the source host computer to the destination host computer substantially matches the amount of data modified on the source host computer.
  • the amount of modified data sent from the source host computer to the destination host computer is considered to substantially match the amount of data modified on the source host computer when the amount remaining to be sent to the destination host computer can be sent within a desired small time interval, in view of the network bandwidth available. The reason shall be apparent from the description to follow.
  • If VM Live Migration Services 114 determines that the amount of modified data sent from the source host computer to the destination host computer does not substantially match the amount of data modified on the source host computer, method 200 may return to block 202, and then proceed onto blocks 204 and 206 as earlier described.
  • Thus, method 200 may continually monitor and record writes by the virtual machine to the memory of the source host computer at block 202, continually send modified data from the source host computer to the destination host computer at block 204, and continually determine, at block 206, whether the amount of modified data sent from the source host computer to the destination host computer substantially matches the amount of data modified on the source host computer, until VM Live Migration Services 114 determines that the amounts do substantially match.
  • When VM Live Migration Services 114 determines that the amount of modified data sent from the source host computer to the destination host computer substantially matches the amount of data modified on the source host computer, method 200 ends pre-copy phase 212 and enters shut down and copy phase 214.
  • method 200 may proceed to block 208 (Stop Virtual Machine).
  • At block 208, VM Live Migration Services 114 causes the virtual machine being migrated to stop operating on the source host computer.
  • method 200 may proceed to block 210 (Send Final Modified Data).
  • At block 210, VM Live Migration Services 114 causes the final installment of the modified data to be sent from the source host computer to the destination host computer.
  • Because method 200 enters the shut down and copy phase 214 only after VM Live Migration Services 114 determines that the amount of modified data sent from the source host computer to the destination host computer substantially matches the amount of data modified on the source host computer, subject to the design point selected for substantial matching and the available network bandwidth, the virtual machine being live migrated has to be stopped for only a relatively short time.
  • Accordingly, the amount of time the virtual machine being live migrated has to be stopped may be shorter than with prior art live migration methods.
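The "substantial match" test described above amounts to checking that the unsent remainder can be transferred within a small downtime budget at the available bandwidth. The function and parameter names below are hypothetical, chosen for illustration:

```python
# Hypothetical illustration of the substantial-match criterion: the
# modified data sent is treated as substantially matching when the
# remainder can be shipped within a desired small time interval.

def substantially_matches(remaining_bytes, bandwidth_bytes_per_s,
                          max_downtime_s=0.1):
    # Estimated time to send the remainder during shut down and copy.
    return remaining_bytes / bandwidth_bytes_per_s <= max_downtime_s
```

For example, 10 MB remaining on a 1 GB/s link takes about 0.01 s, well inside a 100 ms budget, while 500 MB remaining would not qualify.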
  • FIG. 3 illustrates the monitor and record operation of FIG. 2 in further details, in accordance with various embodiments of the present disclosure.
  • the host computer 102 and/or host computer 122 can be configured to perform the various operations, functions or actions described below via a respective one of VMM 112 and VMM 132.
  • An example method 202 may include one or more functions, operations, or actions, as illustrated by one or more of blocks 302, 304, 306, 308, and/or 310. It should be appreciated that in some implementations one or more of the illustrated blocks may be eliminated, combined, or separated into additional blocks, or performed in a different order, without departing from the spirit of the present disclosure.
  • VM Live Migration Services 114 starts the monitor and record writes to memory operation 202 at block 302 (Initialize Data Structures).
  • VM Live Migration Services 114 may initialize various data structures to be employed for the monitor and record writes to memory operation 202.
  • VM Live Migration Services 114 may initialize a modification rate table and/or a write data table, to be described more fully below.
  • operation 202 may proceed to block 304 (Monitor Memory Writes).
  • VM Live Migration Services 114 may continue to monitor for writes to memory by the virtual machine to be live migrated. On detection of a write to memory by the virtual machine to be live migrated, operation 202 may proceed from block 304 to block 306 (Bitmap Initialized?).
  • VM Live Migration Services 114 may determine whether a memory page bitmap corresponding to the memory page being written into has been previously initialized and in use. If the result of the determination is negative, that is, a memory page bitmap has not been initialized for the memory page being written into, operation 202 may proceed from block 306 to block 308 (Initialize a Bitmap).
  • At block 308, VM Live Migration Services 114 may initialize a memory page bitmap for the memory page being written into, and proceed to block 310 (Record Memory Write). On the other hand, if the result of the determination at block 306 is affirmative, that is, a memory page bitmap has already been initialized for the memory page being written into, operation 202 may proceed directly from block 306 to block 310 (Record Memory Write). At block 310, VM Live Migration Services 114 may record the write to memory by the virtual machine being live migrated. For the embodiments where a modification rate table, a write data table, and memory page bitmaps are employed, VM Live Migration Services 114 may update the modification rate table, the write data table, and the memory page bitmap corresponding to the memory page being written into accordingly.
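The monitor-and-record operation of blocks 302-310 can be sketched with the three data structures named above, with page bitmaps initialized lazily on the first write to each page. The toy page size and all names are assumptions made for illustration:

```python
# Hypothetical sketch of blocks 302-310: the modification rate table and
# write data table are initialized up front (block 302), a page bitmap
# is initialized on the first write to its page (blocks 306/308), and
# each detected write is recorded in all three structures (block 310).

PAGE_SIZE = 16  # toy page size; a real page would be e.g. 4096 bytes

mod_rate_table = {}    # page id -> number of writes observed
write_data_table = []  # rows of (page id, offset, data)
bitmaps = {}           # page id -> per-byte modified flags

def record_write(page_id, offset, data):
    if page_id not in bitmaps:                 # block 306: bitmap in use?
        bitmaps[page_id] = [0] * PAGE_SIZE     # block 308: initialize it
    # Block 310: record the write in all three data structures.
    mod_rate_table[page_id] = mod_rate_table.get(page_id, 0) + 1
    write_data_table.append((page_id, offset, data))
    for i in range(len(data)):
        bitmaps[page_id][offset + i] = 1

record_write(7, 2, b"ab")
record_write(7, 5, b"c")
```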
  • FIG. 4 illustrates the send operation of FIG. 2 in further details, in accordance with various embodiments of the present disclosure.
  • the host computer 102 and/or host computer 122 can be configured to perform the various operations, functions or actions described below via a respective one of VMM 112 and VMM 132.
  • An example method 204 may include one or more functions, operations, or actions, as illustrated by one or more of blocks 402, 404, and/or 406. It should be appreciated that in some implementations one or more of the illustrated blocks may be eliminated, combined, or separated into additional blocks, or performed in a different order, without departing from the spirit of the present disclosure.
  • send operation 204 may start at block 402 (Memory writes to a Page greater than a boundary value).
  • At block 402, Live Migration Services 114 may determine whether the number of writes into memory for the particular memory page has exceeded a boundary value.
  • the boundary value may be configurable.
  • If the number of writes has not exceeded the boundary value, send operation 204 may proceed from block 402 to block 404 (Send only changed data).
  • At block 404, Live Migration Services 114 may send only the changed data of the memory page from the source host computer to the destination host computer.
  • Live Migration Services 114 may reference the modification rate table to determine whether the number of writes into memory for the particular memory page has exceeded the boundary value.
  • the sending of only changed data may include forming a package that includes the changed data (using the write data table) and a modification bitmap of the memory page (showing the locations of the changed data), and sending the formed package.
  • If the number of writes has exceeded the boundary value, send operation 204 may proceed from block 402 to block 406 (Send entire Memory Page).
  • At block 406, Live Migration Services 114 may send the entire memory page from the source host computer to the destination host computer.
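The boundary-value decision of blocks 402-406 might be sketched as follows. The boundary value, the names, and the tuple-based "package" format are illustrative assumptions, not the patent's wire format:

```python
# Hypothetical sketch of the block 402-406 decision: a page whose
# modification rate exceeds a configurable boundary value is sent whole
# (block 406); otherwise only the changed bytes are sent, packaged with
# the page's modification bitmap that locates them (block 404).

BOUNDARY = 3  # configurable boundary value (assumed)

def build_transfer(page_id, page_bytes, mod_rate, bitmap):
    if mod_rate > BOUNDARY:
        # Heavily written page: cheaper to ship the entire page.
        return ("full_page", page_id, page_bytes)
    # Lightly written page: ship only modified bytes plus the bitmap.
    changed = bytes(b for b, m in zip(page_bytes, bitmap) if m)
    return ("delta", page_id, changed, bitmap)
```

On the destination, the bitmap tells the receiver where in the page to patch the delta bytes back in.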
  • FIG. 5 illustrates a modification rate table, a write data table, and a memory page modification bitmap, suitable for use with the monitor and record, and send operations of FIGS. 3 and 4, in accordance with various embodiments of the disclosure.
  • the various features illustrated in FIG. 5 may be utilized by host computer 102 and/or host computer 122 to facilitate the various methods described herein. It should be appreciated that in some implementations a different arrangement of the same or similar information may be suitable.
  • modification rate table 502 may include two columns, a page id column 504 for storing identifiers of memory pages, and a column for storing the modification rates (e.g., counts of writes) of the memory pages.
  • write data table 512 may include three (3) columns, a page id column 514 for storing identifiers of the memory pages, an offset column 516 for storing an offset to a specific location within a memory page for the write data, and a data column 518 for storing the write data themselves.
  • memory page modification bitmap 522 may include an m x n array, where m and n are integers corresponding to the size of a memory page of the source host computer. Each array slot 524 may be employed to store an indicator indicating whether the corresponding memory location has been modified.
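As a concrete illustration of the FIG. 5 structures, the sketch below builds toy instances of the modification rate table, the write data table, and an m x n page bitmap, assuming a 4 x 4 page; all values and page ids are hypothetical:

```python
# Toy instances of the FIG. 5 structures, assuming a 4 x 4 (m x n) page.

modification_rate_table = {  # page id (504) -> modification rate
    0x2A: 5,
    0x3B: 1,
}
write_data_table = [         # page id (514), offset (516), data (518)
    (0x3B, 6, b"\x42"),
]
page_bitmap = [[0] * 4 for _ in range(4)]  # m x n modified-flag slots (524)

# An offset within the page maps to a (row, column) slot in the bitmap.
row, col = divmod(6, 4)      # offset 6 falls in row 1, column 2
page_bitmap[row][col] = 1
```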
  • FIG. 6 is a block diagram illustrating an example computing device 600, suitable for use as a host computer of FIG. 1, in accordance with the present disclosure.
  • computing device 600 typically includes one or more processors 610 and system memory 620.
  • a memory bus 630 may be used for communicating between the processor 610 and the system memory 620.
  • processor 610 may be of any type including but not limited to a microprocessor (µP), a microcontroller (µC), a digital signal processor (DSP), or any combination thereof.
  • Processor 610 may include one or more levels of caching, such as a level one cache 611 and a level two cache 612, a processor core 613, and registers 614.
  • An example processor core 613 may include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP Core), or any combination thereof.
  • An example memory controller 615 may also be used with the processor 610, or in some implementations the memory controller 615 may be an internal part of the processor 610.
  • system memory 620 may be of any type including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.) or any combination thereof.
  • System memory 620 may include a number of hosted virtual machines 622, and a virtual machine monitor 623.
  • Virtual machine monitor 623 may include a virtual machine live migration service 624 that includes a continual monitor and send function, as described earlier.
  • Computing device 600 may have additional features or functionality, and additional interfaces to facilitate communications between the basic configuration 601 and any required devices and interfaces.
  • a bus/interface controller 640 may be used to facilitate communications between the basic configuration 601 and one or more data storage devices 650 via a storage interface bus 641.
  • the data storage devices 650 may be removable storage devices 651, non-removable storage devices 652, or a combination thereof. Examples of removable storage and non-removable storage devices include magnetic disk devices such as flexible disk drives and hard-disk drives (HDD), optical disk drives such as compact disk (CD) drives or digital versatile disk (DVD) drives, solid state drives (SSD), and tape drives to name a few.
  • Example computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data.
  • System memory 620, removable storage 651 and non-removable storage 652 are all examples of computer storage media.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information and which may be accessed by computing device 600. Any such computer storage media may be part of computing device 600.
  • Computing device 600 may also include an interface bus 642 for facilitating communication from various interface devices (e.g., output interfaces, peripheral interfaces, and communication interfaces) to the basic configuration 601 via the bus/interface controller 640.
  • Example output devices 660 include a graphics processing unit 661 and an audio processing unit 662, which may be configured to communicate to various external devices such as a display or speakers via one or more A/V ports 663.
  • Example peripheral interfaces 670 include a serial interface controller 671 or a parallel interface controller 672, which may be configured to communicate with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device, etc.) or other peripheral devices (e.g., printer, scanner, etc.) via one or more I/O ports 673.
  • An example communication device 680 includes a network controller 681, which may be arranged to facilitate communications with one or more other computing devices 690 over a network communication link via one or more communication ports 682.
  • the network communication link may be one example of communication media.
  • Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery media.
  • modulated data signal may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR) and other wireless media.
  • computer readable media as used herein may include both storage media and communication media.
  • Computing device 600 may be implemented as a portion of a small-form factor portable (or mobile) electronic device such as a cell phone, a personal data assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset device, an application specific device, or a hybrid device that includes any of the above functions.
  • Computing device 600 may also be implemented as a personal computer including both laptop computer and non-laptop computer configurations.
  • FIG. 7 illustrates a block diagram of an example article of manufacture having a computer program product 700 configured to enable an apparatus to practice the method of FIG. 2, in accordance with various embodiments of the present disclosure.
  • the computer program product 700 may comprise a non-transitory computer-readable storage medium 702 and a plurality of programming instructions 704 stored in the computer-readable storage medium 702.
  • programming instructions 704 may be configured to enable an apparatus, in response to execution by the apparatus, to perform the operations of the method of FIG. 2 earlier described.
  • Computer-readable storage medium 702 may take a variety of forms including, but not limited to, non-volatile and persistent memory, such as, but not limited to, compact disc read-only memory (CDROM) and flash memory.
  • references in the specification to "an implementation,” “one implementation,” “some implementations,” or “other implementations” may mean that a particular feature, structure, or characteristic described in connection with one or more implementations may be included in at least some implementations, but not necessarily in all implementations.
  • the various appearances of "an implementation,” “one implementation,” or “some implementations” in the preceding description are not necessarily all referring to the same implementations.
  • when terms or phrases such as “coupled” or “responsive” or “in response to” or “in communication with,” etc. are used herein or in the claims that follow, these terms should be interpreted broadly.
  • the phrase “coupled to” may refer to being communicatively, electrically and/or operatively coupled as appropriate for the context in which the phrase is used.
  • If speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware.
  • Insofar as block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples may be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof.
  • Examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
  • a typical data processing system generally includes one or more of a system unit housing, a video display device, a memory such as volatile and nonvolatile memory, processors such as microprocessors and digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices, such as a touch pad or screen, and/or control systems including feedback loops and control motors (e.g., feedback for sensing position and/or velocity; control motors for moving and/or adjusting components and/or quantities).
  • a typical data processing system may be implemented utilizing any suitable commercially available components, such as those typically found in data computing/communication and/or network computing/communication systems.
  • any two components herein combined to achieve a particular functionality may be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components.
  • any two components so associated may also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality.
  • examples of operably couplable components include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

Embodiments related to methods, apparatus, and/or articles of manufacture for live migration of a virtual machine from a first (source) host computer to a second (destination) host computer are described. One method may include, while the virtual machine is running on the source host computer, continually monitoring and recording the virtual machine's writes to memory, continually sending modified data in the memory to the destination host computer, and continually determining whether the modified data sent to the destination host computer substantially corresponds to the modified data in the memory. The method may also include stopping operation of the virtual machine when the modified data sent to the destination host computer substantially corresponds to the modified data in the memory, and sending a final amount of modified data in the memory to the destination host computer. Other embodiments may be described or claimed.
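The abstract describes an iterative pre-copy scheme: dirty pages are streamed while the virtual machine runs, and the machine is only stopped for a short final copy once source and destination substantially correspond. The following is a minimal, self-contained sketch of that loop; page tracking, the transport, and the VM itself are simulated in memory, and all names (`live_migrate`, `write_trace`, `dirty_threshold`) are illustrative, not taken from the patent.

```python
def live_migrate(source_pages, write_trace, dirty_threshold=1):
    """Iteratively copy pages until source and destination converge.

    source_pages:    dict of page number -> contents on the source host
    write_trace:     per-round dicts of writes the running VM performs
                     while the previous round is being transferred
    dirty_threshold: remaining dirty-page count at which the data already
                     sent "substantially corresponds" to source memory
    """
    destination = {}
    dirty = set(source_pages)              # round 0: every page is dirty
    rounds = iter(write_trace)

    while True:
        for page in dirty:                 # continually send modified data
            destination[page] = source_pages[page]
        writes = next(rounds, {})          # writes recorded during this round
        source_pages.update(writes)
        dirty = set(writes)
        if len(dirty) <= dirty_threshold:  # convergence check
            break                          # -> stop the virtual machine

    # Stop-and-copy: send the final amount of modified data.
    destination.update({p: source_pages[p] for p in dirty})
    return destination
```

For example, migrating four pages while the simulated VM keeps writing (`live_migrate({0: "a", 1: "b", 2: "c", 3: "d"}, [{1: "b2", 2: "c2"}, {2: "c3"}])`) sends all four pages in round 0, re-sends the two pages dirtied meanwhile, then stops and copies the single remaining dirty page, leaving the destination identical to the source's final memory image.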
PCT/CN2010/075240 2010-07-19 2010-07-19 Live migration of a virtual machine with continual monitoring and sending of memory writes WO2012009843A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2010/075240 WO2012009843A1 (fr) 2010-07-19 2010-07-19 Live migration of a virtual machine with continual monitoring and sending of memory writes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2010/075240 WO2012009843A1 (fr) 2010-07-19 2010-07-19 Live migration of a virtual machine with continual monitoring and sending of memory writes

Publications (1)

Publication Number Publication Date
WO2012009843A1 true WO2012009843A1 (fr) 2012-01-26

Family

ID=45496436

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2010/075240 WO2012009843A1 (fr) 2010-07-19 2010-07-19 Live migration of a virtual machine with continual monitoring and sending of memory writes

Country Status (1)

Country Link
WO (1) WO2012009843A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102014110804A1 (de) 2013-08-08 2015-02-12 International Business Machines Corporation Live migration of a virtual machine using a peripheral function
WO2019066689A1 (fr) * 2017-09-27 2019-04-04 Telefonaktiebolaget Lm Ericsson (Publ) Method and reallocation component for managing reallocation of information from source to target memory sled

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101101562A (zh) * 2007-07-10 2008-01-09 Peking University Online migration method for the external storage of a virtual machine
CN101464812A (zh) * 2009-01-06 2009-06-24 Beihang University Virtual machine migration method
WO2010029123A1 (fr) * 2008-09-15 2010-03-18 International Business Machines Corporation Method for providing real-time migration of a virtual machine within a service environment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101101562A (zh) * 2007-07-10 2008-01-09 Peking University Online migration method for the external storage of a virtual machine
WO2010029123A1 (fr) * 2008-09-15 2010-03-18 International Business Machines Corporation Method for providing real-time migration of a virtual machine within a service environment
CN101464812A (zh) * 2009-01-06 2009-06-24 Beihang University Virtual machine migration method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CLARK, CHRISTOPHER ET AL.: "Live Migration of Virtual Machines", NSDI '05: 2nd Symposium on Networked Systems Design & Implementation, May 2005 (2005-05-01), pages 273 - 286 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102014110804A1 (de) 2013-08-08 2015-02-12 International Business Machines Corporation Live migration of a virtual machine using a peripheral function
WO2019066689A1 (fr) * 2017-09-27 2019-04-04 Telefonaktiebolaget Lm Ericsson (Publ) Method and reallocation component for managing reallocation of information from source to target memory sled
US11216203B2 (en) 2017-09-27 2022-01-04 Telefonaktiebolaget Lm Ericsson (Publ) Method and reallocation component for managing reallocation of information from source to target memory sled

Similar Documents

Publication Publication Date Title
US9760281B2 (en) Sequential write stream management
US9886352B2 (en) De-duplicated virtual machine image transfer
US9619478B1 (en) Method and system for compressing logs
US20200050385A1 (en) Virtualizing Isolation Areas of Solid-State Storage Media
KR101738074B1 (ko) Memory device and computer system including the same
WO2016209564A1 (fr) System and arrangement for efficient compression of data of a solid state drive
US9792062B2 (en) Acceleration of memory access
KR101266580B1 (ko) Indexed register access for a memory device
US9483318B2 (en) Distributed procedure execution in multi-core processors
US8885570B2 (en) Schemes for providing private wireless network
US10432230B2 (en) Error detection or correction of a portion of a codeword in a memory device
KR101659922B1 (ko) Bad block compensation for solid state storage devices
JP2022522595A (ja) Host-based flash memory maintenance techniques
US20160124744A1 (en) Sub-packaging of a packaged application including selection of user-interface elements
WO2012009843A1 (fr) Live migration of a virtual machine with continual monitoring and sending of memory writes
US9891863B2 (en) Handling shingled magnetic recording (SMR) drives in a tiered storage system
US9852000B2 (en) Consolidating operations associated with a plurality of host devices
US9588882B2 (en) Non-volatile memory sector rotation
KR101772547B1 (ko) Reducing power consumption in a computing device
US10942672B2 (en) Data transfer method and apparatus for differential data granularities
US9578131B2 (en) Virtual machine migration based on communication from nodes
US20160021187A1 (en) Virtual shared storage device
WO2015094349A1 (fr) Storing data in degraded solid state memory
US9286230B1 (en) Sparse volume management cache mechanism system and method
US9424181B2 (en) Address mapping for solid state devices

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10854873

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10854873

Country of ref document: EP

Kind code of ref document: A1