WO2017019086A1 - Virtual machine data preservation - Google Patents

Virtual machine data preservation

Info

Publication number
WO2017019086A1
Authority
WO
WIPO (PCT)
Prior art keywords
raw data
virtual machine
processor
power
power fail
Prior art date
Application number
PCT/US2015/042891
Other languages
English (en)
Inventor
Daniel Stein
Siamak Nazari
Original Assignee
Hewlett Packard Enterprise Development Lp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Enterprise Development Lp filed Critical Hewlett Packard Enterprise Development Lp
Priority to PCT/US2015/042891 priority Critical patent/WO2017019086A1/fr
Publication of WO2017019086A1 publication Critical patent/WO2017019086A1/fr

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14 Error detection or correction of the data by redundancy in operation
    • G06F11/1402 Saving, restoring, recovering or retrying
    • G06F11/1415 Saving, restoring, recovering or retrying at system level
    • G06F11/1438 Restarting or rejuvenating
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14 Error detection or correction of the data by redundancy in operation
    • G06F11/1479 Generic software techniques for error detection or fault masking
    • G06F11/1482 Generic software techniques for error detection or fault masking by means of middleware or OS functionality
    • G06F11/1484 Generic software techniques for error detection or fault masking by means of middleware or OS functionality involving virtual machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14 Error detection or correction of the data by redundancy in operation
    • G06F11/1402 Saving, restoring, recovering or retrying
    • G06F11/1415 Saving, restoring, recovering or retrying at system level
    • G06F11/1441 Resetting or repowering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45579 I/O management, e.g. providing access to device drivers or storage
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • G06F2009/45583 Memory management, e.g. access or allocation

Definitions

  • a computing device can host emulations or virtualizations of other computing systems on its hardware. Virtualizing another computing system on a first computing device allows the use of virtualized machine software or architecture while avoiding implementation of the machine software or architecture on a second computing device. The virtualized machine uses the hardware components of the computing device upon which it is implemented.
  • FIG. 1 is an example of a computing system for saving a virtual machine during a power fail event
  • FIG. 2 is an example of a conceptual layout of a virtual machine across hardware components of a host computing device
  • FIG. 3 is a block diagram of an example method for saving and restoring a virtual machine state
  • FIG. 4 is a block diagram of an example method for saving a virtual machine during a power fail event
  • FIG. 5 is a diagram of an example non-transitory, computer-readable medium that holds code that, when executed by a processor, saves a virtual machine state in persistent storage.
  • a virtual machine kernel can be a virtualization infrastructure for use on an operating system that turns the operating system into a hypervisor, presenting a guest operating system with a virtual operating platform and managing the execution of the guest operating systems.
  • a virtual machine can be an emulation of a computer system. Virtual machines can operate based on the computer architecture and functions of a real or hypothetical computer, and the implementations of a virtual machine may involve specialized hardware, software, or a combination of both.
  • Present virtual machines have a mechanism for stopping, saving, and restarting the VM from a saved image file. This process does not account for the time-constrained nature of a computing device experiencing a loss of power.
  • Methods and techniques described herein can apply to VMs and can also be generalized to anything that needs to survive a power fail event.
  • the hardware of a machine can maintain power for a limited time, and during this time the state of a machine can be preserved.
  • the amount of time for preserving the state of the machine can be bounded by the amount of physical state that needs to be preserved.
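The bound on preservation time can be made concrete with simple arithmetic. The following is an illustrative sketch (the function name and numbers are hypothetical, not from the disclosure) computing a worst-case save time from the amount of physical state and the sustained write bandwidth to persistent storage:

```python
def worst_case_save_seconds(state_bytes: int, write_bytes_per_second: int) -> float:
    # The time to preserve the machine state is bounded by the amount
    # of physical state that must stream to persistent storage.
    return state_bytes / write_bytes_per_second

# Illustration: 32 GiB of volatile state at a sustained 2 GiB/s write
# rate needs at most 16 seconds of backup power.
bound = worst_case_save_seconds(32 * 2**30, 2 * 2**30)
print(bound)  # 16.0
```

Such a bound is what allows a backup power element (battery or capacitors) to be sized for a known worst case rather than for an open-ended shutdown.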
  • a transactional model includes preserving a log of unacknowledged requests.
  • the present disclosure preserves metadata for restoring the application before power is lost and any backup power supply that allows the machine to function is exhausted.
  • the metadata preserved can be found in anonymous memory, such as the swap space, and also in physical memory such as random access memory (RAM).
  • a filter is used to preserve a subset of the metadata that can be limited to exclude any metadata not related to restart. Use of this filter can further bound the time it takes to save metadata as it can limit the metadata to be saved to the physical limits of memory and anonymous memory or swap space.
  • the metadata to be saved can be the raw data for reconstructing the saved state for use in restarting the application from a power fail event checkpoint.
  • the data can be raw in that it is unformatted; accordingly, during a power fail event, no extra time is spent formatting it while it is preserved and copied.
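As a sketch of this distinction (function names and the sample bytes are hypothetical): a raw save writes bytes as-is on the power-fail path, while the knowledge of how to interpret them lives in the restore logic rather than in the saved data.

```python
import io

def save_raw(region: bytes, out) -> None:
    # Power-fail path: dump the bytes unformatted; no serialization
    # work is spent while running on backup power.
    out.write(region)

def restore_raw(saved: bytes) -> bytes:
    # Restore path: the layout of the raw data is known to the
    # power-fail-restore logic, so no format metadata was needed.
    return saved

buf = io.BytesIO()
save_raw(b"\x7fVMSTATE\x00", buf)
assert restore_raw(buf.getvalue()) == b"\x7fVMSTATE\x00"
```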
  • the metadata that was preserved can be reconstituted when the machine reboots. Reconstitution can consist of a machine checkpoint and the machine restore combined.
  • the format of how the data is to be restored can be embedded in the logic of power fail restore. This format can be optimized for VMs or generalized to work with any application.
  • Methods disclosed herein ensure the process of saving a virtual machine state can be accomplished in a bounded amount of time. Bounding the amount of time for saving a virtual machine state can improve the resilience of a device, which runs virtual machines, to losses of power.
  • the recovery of a virtual machine state can include first storing a backup of a virtual machine and its linked files without regard for how or where the virtual machine state is stored.
  • the present disclosure includes copying virtual machine state data from volatile memory. In an example, the data on all volatile memory devices is saved. By limiting the amount and location of the saved data to 'all volatile memory,' the present disclosure bounds the time of saving, because each host device has a finite, known amount of volatile memory installed.
  • swap space used by the virtual machine and located on a physical storage is preserved even during a power fail event.
  • a swap space, also known as a swap file or page file, can be designated space on a hard disk used as a virtual memory extension of a computer's real memory, such as random access memory (RAM). A swap file allows the operating system to use hard disk space as an extension of system memory; data stored in the swap space can be swapped into memory and the processor as needed for processing or quicker access.
  • a power fail event can be a point in time at which a device loses power; the power loss can be ongoing while the power fail event persists. A power fail event begins when power is first lost, and a power fail event can signal for a data preserving process to begin.
  • This preserved swap space can also be limited in size and can be used for restoration of the virtual machine upon power restore. Like the hardware memory constraints, swap space is similarly bounded by partitions in hardware such as a hard drive. In an example, the amount of data to save or preserve is kept manageable by limiting the data to be saved to include the data in use by the virtual machine. This limited data can include the virtual machine kernel, data stored in the caches of the central processing unit, swap space of a storage, or user data in a memory device, such as memory implementing random access memory (RAM).
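The limiting of saved data described above can be sketched as a filter (the region records and the `owner` field are hypothetical): only regions attributed to the virtual machine, such as kernel data, CPU cache contents, swap, and user RAM, are selected for saving, which bounds the volume of data to copy.

```python
def filter_vm_regions(regions, vm_id):
    # Keep only data in use by the given virtual machine; everything
    # not needed for restart is excluded from the save.
    return [r for r in regions if r["owner"] == vm_id]

regions = [
    {"owner": "vm0",  "kind": "kernel",    "bytes": 64},
    {"owner": "host", "kind": "pagecache", "bytes": 512},
    {"owner": "vm0",  "kind": "cpu_cache", "bytes": 8},
    {"owner": "vm0",  "kind": "swap",      "bytes": 128},
]
to_save = filter_vm_regions(regions, "vm0")
assert sum(r["bytes"] for r in to_save) == 200
```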
  • FIG. 1 is an example of a computing system for saving a virtual machine during a power fail event.
  • the computing device 100 may be, for example, a laptop computer, desktop computer, ultrabook, tablet computer, mobile device, or server, among others.
  • the computing device 100 may include a central processing unit (CPU) 102 that is configured to execute stored instructions, as well as a memory device 104 that stores instructions that are executable by the CPU 102.
  • the CPU 102 may be coupled to the memory device 104 by a bus 106.
  • the CPU 102 can be a single core processor, a multi-core processor, a computing cluster, or any number of other configurations.
  • the computing device 100 may include more than one CPU 102.
  • the CPU 102 can also connect through a storage array interface 106 to external storage arrays 110 by the bus 106.
  • the storage array 110 can be an external system or array of systems that is hosting its own guest virtual machines or interacting with the virtual machines of the computing device 100.
  • the computing device 100 can also locally include a storage device 112.
  • the storage device 112 is a non-volatile storage device such as a hard drive, an optical drive, a thumb drive, an array of drives, or any combinations thereof.
  • the memory device 104 can include random access memory (RAM), read only memory (ROM), flash memory, or any other suitable memory systems.
  • the memory device 104 may include dynamic random access memory (DRAM).
  • the CPU 102 may be linked through the bus 106 to a display interface configured to connect the computing device 100 to a display device.
  • the display device may include a display screen that is a built-in component of the computing device 100.
  • the display device may also include a computer monitor, television, or projector, among others, that is externally connected to the computing device 100 or built into the computing device 100, for example, in a laptop or tablet.
  • the CPU 102 may also be connected through the bus 106 to an input/output (I/O) device interface configured to connect the computing device 100 to one or more I/O devices.
  • the I/O devices may include, for example, a keyboard and a pointing device, wherein the pointing device may include a touchpad or a touchscreen, among others.
  • the I/O devices may be built-in components of the computing device 100 or may be devices that are externally connected to the computing device 100.
  • the computing device 100 may also include a network interface controller (NIC) that may be configured to connect the computing device 100 through the bus 106 to the network.
  • the computing device 100 may also be connected to a network.
  • the network may be a wide area network (WAN), local area network (LAN), or the Internet, among others.
  • the computing device 100 and its components may be powered by a power supply unit (PSU) 114.
  • the CPU 102 may be coupled to the PSU 114 through the bus 106, which may communicate control signals or status signals between the CPU 102 and the PSU 114.
  • the PSU 114 is further coupled to a power source 116.
  • the power source 116 can be a supply external to the computing device 100, can be internal in the case of a battery, or can be both in the case of an external power source with a power supply backup that continues supplying power and sends a power fail event in the case of a power fail.
  • the CPU 102 can control the functioning of the backup over a bus 106.
  • the computing device 100 also includes a virtual machine state recoverer (VMSR) 118, which may be stored in the storage device 112.
  • the VMSR 118 may instruct the processor to copy volatile data of a virtual machine in a memory device 104 or stored in the memory stores of the CPU 102.
  • the VMSR 118 can also instruct the preservation of data used by a virtual machine in the swap space of a storage device 112.
  • the VMSR 118 can aid in the recovery of a virtual machine state upon return of power to a computing device 100.
  • the VMSR 118 can direct a processor to load and restore a virtual machine state by indicating the location in a persistent storage of the saved virtual machine state data.
  • Persistent storage is non-volatile such that data stored there is preserved even without power being supplied to the persistent storage.
  • persistent storage can allow the storage of a state of an application or process through serialization of the data to a storable format, and then saving this data to a file for future retrieval.
  • the virtual machine state is stored in a local version of persistent storage such as a storage device 112.
  • the virtual machine state can also be stored remotely at a storage array 110, for example.
  • the block diagram of Fig. 1 is not intended to indicate that the computing device 100 is to include all of the components shown in Fig. 1. Further, the computing device 100 may include any number of additional components not shown in Fig. 1, depending on the details of the specific implementation.
  • FIG. 2 is an example of a conceptual layout 200 of a virtual machine across hardware components of a host computing device.
  • the host computing device can be the computing device 100 described in Fig. 1 and can host a guest virtual machine, or several machines such as the virtual machine 202 shown.
  • the virtual machine 202 can be implemented across one or several memory, processing, and storage devices as shown. Indeed, more than one virtual machine 202 can be hosted on a computing device 100, and further, a virtual machine 202 can be hosted across several different computing devices, making use of each device's resources. For simplicity, one computing device 100 and one virtual machine 202 are shown.
  • As the virtual machine 202 is emulated on the hardware of the computing device 100, it can take up either a portion of those resources or may completely control the functioning of these components. As an illustration of the potential partial resource usage of a virtual machine 202 on a computing device 100, a part of the computing device 100 resources is shown inside the virtual machine 202. The present drawings do not represent the relative amount of control of the virtual machine 202; they provide an example of the relationship of the indicated components with the virtual machine 202 and the computing device 100.
  • the computing device 100 can include a CPU 102, a memory device 104, and a storage device 112. These items are as described above. A virtual machine hosted on the computing device 100 can make use of these components.
  • the CPU caches 204 are potential locations for storage and use by a virtual machine 202.
  • the CPU caches can include the L1, L2, and L3 caches, which are volatile and decreasingly fast.
  • the CPU caches can also host data used in the virtual machine 202.
  • Upon receipt of a power fail event, the CPU 102 can be instructed to copy data in its own or other CPU caches 204 to persistent storage, and thereby save a part of a virtual machine 202 state.
  • the virtual machine 202 state data can be stored in the CPU caches 204 alone, in which case no other copying or preserving of data is undertaken.
  • the storage device 112 can include a swap space 206 used by the virtual machine 202.
  • the swap space can be a portion of a hard disk drive, or other persistent storage as used to describe the storage device 112 above.
  • the virtual machine 202 or a computing device 100 can transfer data that is not immediately active to the swap space 206 for easier and quicker access compared to other, more remote storage areas and devices.
  • active swap space 206 data can be copied back into the memory device 104 or CPU caches 204.
  • The swap space 206, while persistent, can be slower than the memory device 104 or the CPU caches 204, but it can increase the total available system memory of a virtual machine 202 or a computing device 100. Accordingly, data stored in the swap space 206 and used by the virtual machine 202 during a power fail event can be preserved. In an example, the swap space 206 is already persistent, so no copying of the data is undertaken so long as neither the virtual machine 202 nor the computing device 100 copies over or erases the data held there.
  • FIG. 3 is a block diagram of an example method for saving and restoring a virtual machine state. This example can be executed in a computing device, for example the computing device 100 of Fig. 1.
  • the method 300 begins at block 302 when a power fail event occurs.
  • the power fail event can occur when a long-term power source for the computing device 100 is removed, damaged, or rendered non-functional.
  • a backup or temporary power source can take over powering the computing device until the original power source is fixed, or returns.
  • a power fail event is sent in response to the loss of power.
  • the power fail event can be sent to the components of the computing device that have volatile memory, as well as to the components of the computing device that have a VMSR 118.
  • the VMSR 118 may perform the method shown and the steps involved in saving and restarting the virtual machine.
  • the processor can be instructed to copy data from the memory, specifically data that was used by the virtual machine, to persistent storage. Similarly, the processor also copies data in the processor to persistent storage.
  • This data in the processor can include data in any memory bank of the CPU 102, including the L1, L2, and L3 caches.
  • the processor can be instructed to preserve data in the swap space.
  • the processor can indicate an area of the swap space used for virtual machine data at the time of the power fail event, and further prevent overwriting or movement of this data. Any other method of preserving this swap space data of a virtual machine can also be used, including an indication that, upon reload, this swap space data will be present in the swap space for the reloading machine.
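One way to realize the prevention of overwriting described above is to record the preserved ranges and reject writes that intersect them until the virtual machine is reloaded. This is a toy model; the class and its API are hypothetical, not from the disclosure.

```python
class SwapSpace:
    def __init__(self, size: int):
        self.data = bytearray(size)
        self.pinned = []  # (start, end) ranges preserved for restore

    def preserve(self, start: int, end: int) -> None:
        # Mark the range holding virtual machine data at the time of
        # the power fail event so it cannot be overwritten or moved.
        self.pinned.append((start, end))

    def write(self, offset: int, payload: bytes) -> None:
        for start, end in self.pinned:
            if offset < end and offset + len(payload) > start:
                raise PermissionError("range preserved for power-fail restore")
        self.data[offset:offset + len(payload)] = payload

swap = SwapSpace(1024)
swap.preserve(0, 256)        # VM swap data lives here at power fail
swap.write(512, b"scratch")  # writes outside the preserved range succeed
```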
  • a power restore can occur.
  • While a computing device 100 can be receiving power from a power element such as a backup battery or a secondary power source, a power restore in block 312 indicates a return of the primary power source. If a power restore occurs, a power restore event can be sent to components of the computing device 100, including the VMSR 118.
  • a processor can respond to a power restore event by analyzing data in persistent storage to determine if the virtual machine state can be recovered and restarted. If the processor determines the virtual machine state can be recovered, process flow continues at block 316. If the virtual machine state cannot be recovered, process flow proceeds to block 318.
  • the recovery of the virtual machine state can proceed. This can include a reloading of data to volatile memory from the persistent storage and reloading a virtual machine state. This can include reloading of data into the CPU 102, CPU caches 204, and other data in the memory device 104. Restoring the virtual machine state can also include ensuring the swap space data from the time of the power fail event is in place as it was at the time of the power fail event.
  • a processor identifies that the stored data cannot be used to recover the virtual machine state, e.g., a non-recoverable machine state. The identification of the non-recoverability of the machine state can be used in analysis.
  • Any data that was copied from the memory and swap space can be searched for missing components, misdirecting pointers, inconsistent logic, or any other cause of the non-recoverability of the machine state.
  • this analysis focuses on why the data that was recovered could not be used to recover the virtual machine state. This data can be used to improve the recovery process in the future, and can also be used to indicate pieces of the virtual machine state that could be used to perform a partial recovery.
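The decision made across blocks 314 through 318 can be sketched as follows. This is a minimal model; the completeness check stands in for whatever consistency analysis an implementation actually performs.

```python
def on_power_restore(persistent: dict):
    # Block 314: analyze the saved data to decide recoverability.
    required = {"memory", "caches", "swap"}
    missing = sorted(required - persistent.keys())
    if not missing:
        # Block 316: reload volatile data and restore the VM state.
        return ("restore", None)
    # Block 318: non-recoverable; report what was missing for analysis.
    return ("analyze", missing)

assert on_power_restore({"memory": b"", "caches": b"", "swap": b""}) == ("restore", None)
assert on_power_restore({"memory": b""}) == ("analyze", ["caches", "swap"])
```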
  • FIG. 4 is a block diagram 400 of an example method for saving a virtual machine during a power fail event.
  • This example method can be implemented in a computing device, such as the computing device 100 of FIG. 1.
  • the example method begins at block 402.
  • a power element powers a system in response to a power fail event.
  • the system the power element is powering can be the computing device 100 shown in FIG. 1, or any other suitable system.
  • the powering of the system by the power element may have begun prior to the power fail event and can continue during a power fail in response to the power fail event. In this way, there is no gap in the power supply to a system experiencing a power fail.
  • the power element can be activated upon a power fail event, and the system remains powered during this period through additional backup power sources such as capacitors, backup batteries, or any other suitable backup power source.
  • the power element of block 402 can be temporary or time-limited in how long it will provide power to a system before the power element itself runs out of power and the system becomes completely unpowered.
  • a processor can copy data located in the processor and a memory to a persistent storage.
  • the processor in block 404 can be the CPU 102 of FIG. 1.
  • the memory can be the memory device 104 of FIG. 1 and the persistent storage can be the storage device 112 of FIG. 1.
  • Other embodiments can be included, where data in a volatile memory can be moved or copied to a nonvolatile storage device or powered memory not affected by the power fail.
  • the data copied from memory and from CPUs can include the state of threads and CPUs. In an example, each active thread can be saved as if it was preempted. In this way, any thread within a guest virtual machine can be restarted and restored.
  • the data located in a swap space of storage is preserved. This preservation can be ensured by a processor such as the CPU 102 of FIG. 1 .
  • the device where the swap space is located can be immediately unpowered upon the receipt of a power fail event, so that data stored in the persistent storage, including data in the swap space, is preserved.
  • a determination can be made, by a VMSR 118 in an example, as to whether a persistent storage containing swap space can be powered down immediately upon receipt of a power fail event, or whether that persistent storage is to be the location for storing copied data from the processor and memory.
  • a new user level application can extract the guest virtual machine state from the data copied from the memory, CPUs, and swap space, and restart the guest virtual machine.
  • This extraction and reassembly of active threads occurs after the system has power restored and the operating system (OS) has booted and determined that there is a power fail recovery to complete. Postponing the assembly of this data, the threads, and the extraction of the state until after the power restore allows quicker saving of data during the power fail, when the time before data loss due to loss of power is more limited.
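A sketch of this split between a fast raw save and deferred extraction (the thread-context tuples and function names are hypothetical): during the power fail only raw contexts are dumped, and parsing them into restartable thread records waits until after the reboot.

```python
def fast_save(thread_contexts, store: dict) -> None:
    # Power-fail window: dump raw thread contexts as if each thread
    # had just been preempted. No parsing or reformatting here.
    store["raw_threads"] = list(thread_contexts)

def extract_after_reboot(store: dict):
    # After power restore and OS boot: a user-level recovery
    # application rebuilds restartable thread records from the dump.
    return [{"tid": tid, "pc": pc, "regs": regs}
            for tid, pc, regs in store["raw_threads"]]

store = {}
fast_save([(1, 0x401000, (0, 7)), (2, 0x401010, (3, 4))], store)
threads = extract_after_reboot(store)
assert [t["tid"] for t in threads] == [1, 2]
```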
  • Fig. 5 is a diagram of an example non-transitory, computer-readable medium that holds code that, when executed by a processor, saves a virtual machine state in persistent storage.
  • the computer-readable medium 500 can be accessed by a processor 502 over a system bus 504.
  • the code may direct the processor 502 to perform the steps of the current method as described with respect to FIGS. 3 and 4.
  • the processor 502 corresponds to the CPU 102 of FIG. 1.
  • the system bus 504 linking the processor 502 with the computer-readable medium 500 can correspond to the bus 106 of FIG. 1 in function and implementation.
  • the computer-readable medium 500 can include a power element module 506.
  • the computer-readable medium 500 can include RAM, such as DRAM or SRAM.
  • the RAM may be referred to as a logic engine with program instructions used to store a register table that includes a list of authorized commands for the flash memory device.
  • the power element module can control the function of a power element, which can correspond to the power source 116 and power supply unit 114 of FIG. 1.
  • the power element can be implemented in any way that allows power to be supplied to a system even if a primary power source no longer functions. This can include acting as a backup power supply that activates or takes over upon a power fail event.
  • the power element module manages deployment of the power element to ensure a constant supply of power until the system completely runs out of power or a primary power source is restored to full functionality.
  • the data copier module 508 can direct a processor to copy data located in memory components, particularly data in volatile memory, to a persistent storage.
  • the data copier module 508 can direct a processor 502 to copy data stored in the processor's caches to persistent storage.
  • the persistent storage can be the computer-readable medium 500 upon which the power element module 506 is located.
  • a portion of the data in memory is copied, and the data copier module 508 can choose to copy data in memory that is being used by a virtual machine at the time of a power fail event. In this way, the amount of data to be copied can be reduced to just the data needed for recovering the virtual machine upon power restore, and the copy time is reduced to increase the odds that the copy completes prior to the complete power fail of a system.
  • the power element controlled by the power element module ensures the host computer has a set period of power to allow the other modules to save the virtual machine state.
  • a host computer that supports recovery during a power fail event can also provide the persistent storage that a data copier module 508 can use to save data to be recovered after the power is restored.
  • this saved data can be used during a host computer power restore to restore the saved state of the virtual machine.
  • the data copied by the data copier module 508 can include a guest virtual machine including a kernel state and host level state that is maintained by host applications.
  • the infrastructure supporting the guest virtual machine can be saved to persistent storage, in addition to the data in the volatile memory, before power is lost on the host computer.
  • the processor 502 can be a multiprocessor system, in which case CPUs stop executing instructions from the virtual machine and their contexts are saved to non-volatile memory by the data copier module 508.
  • a subset of the memory and CPU stored data can be copied, to include the active set of kernel and user physical memory, that is, the memory in use by the virtual machine. This data can also be compressed before storage in nonvolatile memory.
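A sketch of saving only the active subset with compression (the page layout is hypothetical, and `zlib` stands in for whatever compressor an implementation might use):

```python
import zlib

def save_active_set(pages, persistent: dict) -> int:
    # Copy only pages in use by the virtual machine, compressed to
    # shorten the write to non-volatile memory.
    active = b"".join(data for in_use, data in pages if in_use)
    persistent["image"] = zlib.compress(active)
    return len(persistent["image"])

def load_active_set(persistent: dict) -> bytes:
    # Decompression happens on the restore path, after power returns.
    return zlib.decompress(persistent["image"])

pages = [(True, b"A" * 64), (False, b"B" * 64), (True, b"C" * 64)]
store = {}
save_active_set(pages, store)
assert load_active_set(store) == b"A" * 64 + b"C" * 64
```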
  • the data preserver module 510 can direct a processor to preserve data located in the swap space of a storage device.
  • the swap space can be located in a persistent storage corresponding to the storage device 112 of FIG. 1.
  • the data used by a virtual machine at the time of the power fail event are preserved by the data preserver module 510.
  • the swap space may not be in use by data for a virtual machine, in which case the swap space can be used in the copying of data from memory or for other processes occurring after a power fail event while a power element acts as a backup power source.
  • the data copied by the data copier module 508 and the data preserved by the data preserver module 510 can be parsed together by the processor 502 to extract the saved guest virtual machine image which can then be resumed.
  • the automatic saving of data and swap space can be triggered by host computer faults that result in a reboot in addition to the automatic saving of data triggered by power fails.
  • the host computer can determine whether the fault is recoverable in a process that corresponds to block 314 in FIG. 3.
  • The block diagram of FIG. 5 is not intended to indicate that the computer-readable medium 500 is to include all of the components or modules shown in FIG. 5. Further, any number of additional components may be included within the computer-readable medium 500, depending on the details of the specific implementation.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Power Sources (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)

Abstract

Example implementations of the invention relate to preserving virtual machine data during a power fail event. For example, a system can include a processor, a memory, and a storage unit comprising a swap space used by the virtual machine. The system can include a power element to deliver power to the system during a power fail event. In response to the power fail event, the processor can copy data located in the processor and the memory to a persistent storage unit and also preserve the data located in the swap space.
PCT/US2015/042891 2015-07-30 2015-07-30 Virtual machine data preservation WO2017019086A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2015/042891 WO2017019086A1 (fr) 2015-07-30 2015-07-30 Virtual machine data preservation

Publications (1)

Publication Number Publication Date
WO2017019086A1 true WO2017019086A1 (fr) 2017-02-02

Family

ID=57886842

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/042891 WO2017019086A1 (fr) 2015-07-30 2015-07-30 Conservation de données de machine virtuelle

Country Status (1)

Country Link
WO (1) WO2017019086A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060136765A1 (en) * 2004-12-03 2006-06-22 Poisner David L Prevention of data loss due to power failure
US20100161976A1 (en) * 2008-12-23 2010-06-24 International Business Machines Corporation System and method for handling cross-platform system call with shared page cache in hybrid system
US20120151118A1 (en) * 2010-12-13 2012-06-14 Fusion-Io, Inc. Apparatus, system, and method for auto-commit memory
US20130145085A1 (en) * 2008-06-18 2013-06-06 Super Talent Technology Corp. Virtual Memory Device (VMD) Application/Driver with Dual-Level Interception for Data-Type Splitting, Meta-Page Grouping, and Diversion of Temp Files to Ramdisks for Enhanced Flash Endurance
US20150058533A1 (en) * 2013-08-20 2015-02-26 Lsi Corporation Data storage controller and method for exposing information stored in a data storage controller to a host system

Similar Documents

Publication Publication Date Title
US9940064B2 (en) Live migration of virtual disks
US8875160B2 (en) Dynamic application migration
US9792187B2 (en) Facilitating test failover using a thin provisioned virtual machine created from a snapshot
EP2876556B1 (fr) Redémarrage rapide d'applications à l'aide d'une mémoire partagée
US8464257B2 (en) Method and system for reducing power loss to backup IO start time of a storage device in a storage virtualization environment
DK3008600T3 (en) Backup of a virtual machine from a storage snapshot
US9323550B2 (en) Mechanism for providing virtual machines for use by multiple users
Kourai et al. Fast software rejuvenation of virtual machine monitors
US6795966B1 (en) Mechanism for restoring, porting, replicating and checkpointing computer systems using state extraction
RU2568280C2 (ru) Fast computer startup
US9727274B2 (en) Cloning live virtual machines
US8578144B2 (en) Partial hibernation restore for boot time reduction
US10387261B2 (en) System and method to capture stored data following system crash
US8621461B1 (en) Virtual machine based operating system simulation using host ram-based emulation of persistent mass storage device
KR101696490B1 (ko) Apparatus and method for partial reboot recovery
KR101673299B1 (ko) Operating system recovery method and apparatus, and terminal device
US11467920B2 (en) Methods and systems to index file data of virtual machine (VM) image
US10496492B2 (en) Virtual machine backup with efficient checkpoint handling based on a consistent state of the virtual machine of history data and a backup type of a current consistent state of the virtual machine
US20160154664A1 (en) Information processing system and method of controlling same
US11301338B2 (en) Recovery on virtual machines with existing snapshots
US20190129800A1 (en) Application High Availability via Crash-Consistent Asynchronous Replication of Persistent Data
CN114741233A (zh) Fast boot method
TWI546661B (zh) Techniques for restoring a system using state information
US8468388B2 (en) Restoring programs after operating system failure
CN106775846B (zh) Method and device for online migration of a physical server

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15899876

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15899876

Country of ref document: EP

Kind code of ref document: A1