EP3584705A1 - System and method for backup in a virtualized environment

System and method for backup in a virtualized environment

Info

Publication number
EP3584705A1
EP3584705A1 (Application EP19169657.4A)
Authority
EP
European Patent Office
Prior art keywords
backup
production
virtual machines
workflows
remote backup
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP19169657.4A
Other languages
German (de)
English (en)
French (fr)
Inventor
Shelesh Chopra
Hareej Hebbur
Sunil Yadav
Manish Sharma
Sudha Hebsur
Soumen Acharya
Aaditya Bansal
Suman Tokuri
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
EMC Corp
Original Assignee
EMC IP Holding Co LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by EMC IP Holding Co LLC
Publication of EP3584705A1

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/1464 Management of the backup or restore process for networked environments
    • G06F 11/1448 Management of the data involved in backup or backup restore
    • G06F 11/1451 Management of the data involved in backup or backup restore by selection of backup contents
    • G06F 11/1458 Management of the backup or restore process
    • G06F 11/1461 Backup scheduling policy
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45579 I/O management, e.g. providing access to device drivers or storage
    • G06F 2009/45587 Isolation or security of virtual machine instances
    • G06F 2009/45595 Network integration; Enabling network access in virtual machine instances
    • G06F 2201/815 Virtual (indexing scheme relating to error detection, error correction, and monitoring)

Definitions

  • Computing devices generate, use, and store data.
  • the data may be, for example, images, documents, webpages, or meta-data associated with the data.
  • the data may be stored on a persistent storage. Stored data may be deleted from the persistent storage.
  • the data stored on a computing device may be backed up by storing a copy of the data on a second computing device.
  • the second computing device may be geographically separated from the computing device.
  • a remote backup agent that provides data storage services to virtual machines in accordance with one or more embodiments of the invention includes a persistent storage and a processor.
  • the persistent storage stores workflows for the virtual machines.
  • the processor performs a first remote backup of the virtual machines based on the workflows using production agents hosted by production hosts that also host the virtual machines; obtains a workflow update; updates the workflows based on the workflow update to obtain updated workflows; and performs a second remote backup of the virtual machines based on the updated workflows using the production hosts without modifying the production agents.
  • a method of providing data storage services to virtual machines in accordance with one or more embodiments of the invention includes performing, by a remote backup agent, a first remote backup of the virtual machines based on workflows using production agents hosted by production hosts that also host the virtual machines; obtaining, by the remote backup agent, a workflow update; updating, by the remote backup agent, the workflows based on the workflow update to obtain updated workflows; and performing, by the remote backup agent, a second remote backup of the virtual machines based on the updated workflows using the production hosts without modifying the production agents.
  • a non-transitory computer readable medium in accordance with one or more embodiments of the invention includes computer readable program code, which when executed by a computer processor enables the computer processor to perform a method for providing data storage services to virtual machines, the method includes performing, by a remote backup agent, a first remote backup of the virtual machines based on workflows using production agents hosted by production hosts that also host the virtual machines; obtaining, by the remote backup agent, a workflow update; updating, by the remote backup agent, the workflows based on the workflow update to obtain updated workflows; and performing, by the remote backup agent, a second remote backup of the virtual machines based on the updated workflows using the production hosts without modifying the production agents.
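  • To make the above summary concrete, the following Python sketch illustrates one way such a remote backup agent could be organized. It is illustrative only and not part of the disclosure; the class, the hosted_vm_ids and production_agent attributes, and the generate_backup and store calls are assumed names, not terms used by the patent.

        # Hypothetical sketch only; all names are illustrative assumptions.
        class RemoteBackupAgent:
            def __init__(self, workflows, backup_storage):
                self.workflows = workflows              # persistent storage: VM id -> workflow
                self.backup_storage = backup_storage

            def perform_remote_backup(self, production_hosts):
                # drive each production agent's predefined backup function;
                # the production agents themselves are never modified
                for host in production_hosts:
                    for vm_id in host.hosted_vm_ids:
                        if vm_id not in self.workflows:
                            continue
                        package = host.production_agent.generate_backup(vm_id)
                        self.backup_storage.store(vm_id, package)

            def apply_workflow_update(self, workflow_update):
                # updating the centrally stored workflows changes how later
                # backups are performed, again without touching production agents
                for vm_id, changes in workflow_update.items():
                    self.workflows[vm_id].update(changes)

  • Under this reading, a first backup, a workflow update, and a second backup are simply two calls to perform_remote_backup separated by a call to apply_workflow_update.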
  • in various embodiments of the invention, any component described with regard to a figure may be equivalent to one or more like-named components described with regard to any other figure.
  • descriptions of these components will not be repeated with regard to each figure.
  • each and every embodiment of the components of each figure is incorporated by reference and assumed to be optionally present within every other figure having one or more like-named components.
  • any description of the components of a figure is to be interpreted as an optional embodiment, which may be implemented in addition to, in conjunction with, or in place of the embodiments described with regard to a corresponding like-named component in any other figure.
  • embodiments of the invention relate to systems, devices, and methods for backing up and performing restorations of virtual machines. More specifically, the systems, devices, and methods may improve the consistency of generation of backups of virtual machines which, consequently, improves the likelihood that a virtual machine will be able to be restored in the future.
  • the system provides a centralized mechanism for controlling the generation of backups of virtual machines without modifying production hosts.
  • embodiments of the invention may provide remote backup agents that orchestrate backup generation using a common set of policies.
  • by using a common set of policies, backups of virtual machines are generated consistently, which improves the likelihood that the backups necessary to restore a virtual machine will be available when a restoration is performed.
  • embodiments of the invention may improve data security in a network environment by improving the likelihood that data redundancy mechanisms successfully generate copies of data structures.
  • embodiments of the invention may improve the field of networked computing devices that utilize the distributed nature of the computing environment to store data. Embodiments of the invention may address additional problems without departing from the invention.
  • FIG. 1 shows an example system in accordance with one or more embodiments of the invention.
  • the system may include production hosts (130) that host virtual machines exposed to clients (140).
  • the system may further include remote backup agents (110) that provide services to the production hosts.
  • the services may include data storage in backup storages (120) and restorations of virtual machines using non-production hosts (100).
  • Each component of the system of FIG. 1 may be operably connected via any combination of wired and/or wireless connections. Each component of the system is discussed below.
  • the clients (140) may be computing devices.
  • the computing devices may be, for example, mobile phones, tablet computers, laptop computers, desktop computers, servers, or cloud resources.
  • the computing devices may include one or more processors, memory (e.g., random access memory), and persistent storage (e.g., disk drives, solid state drives, etc.).
  • the persistent storage may store computer instructions, e.g., computer code, that when executed by the processor(s) of the computing device cause the computing device to perform the functions described in this application.
  • the clients (140) may be other types of computing devices without departing from the invention. For additional details regarding computing devices, See FIG. 9 .
  • the clients (140) may interact with virtual machines (not shown) hosted by the production hosts (130).
  • the virtual machines may host databases, email servers, or any other type of application.
  • the clients (140) may utilize services provided by the aforementioned applications or other applications.
  • the clients (140) may directly operate the virtual machines, e.g., send commands to the virtual machines in a virtualized environment. In such a scenario, the clients (140) may operate as terminals for accessing the virtual machines.
  • the production hosts (130) are computing devices.
  • the computing devices may be, for example, mobile phones, tablet computers, laptop computers, desktop computers, servers, distributed computing systems, or a cloud resource.
  • a cloud resource may be one or more computing devices that cooperatively host cloud based applications.
  • the computing devices may include one or more processors, memory (e.g., random access memory), and persistent storage (e.g., disk drives, solid state drives, etc.).
  • the persistent storage may store computer instructions, e.g., computer code, that when executed by the processor(s) of the computing device cause the computing device to perform the functions described in this application.
  • the production hosts (130) may be other types of computing devices without departing from the invention. For additional details regarding computing devices, See FIG. 9 .
  • the production hosts (130) are distributed computing devices.
  • a distributed computing device refers to functionality provided by a logical device that utilizes the computing resources of one or more separate and/or distinct computing devices.
  • the production hosts (130) may be distributed devices that include components distributed across a number of separate and/or distinct computing devices. In such a scenario, the functionality of the production hosts (130) may be performed by multiple different computing devices without departing from the invention.
  • the production hosts (130) host virtual machines.
  • the production hosts (130) may host any number of virtual machines without departing from the invention.
  • the production hosts (130) may also host agents, or other executing components, for orchestrating the operation of the hosted virtual machines. For additional details regarding the production hosts (130), See FIG. 2 .
  • the non-production hosts (100) are computing devices.
  • the computing devices may be, for example, mobile phones, tablet computers, laptop computers, desktop computers, servers, distributed computing systems, or a cloud resource.
  • the computing devices may include one or more processors, memory (e.g., random access memory), and persistent storage (e.g., disk drives, solid state drives, etc.).
  • the persistent storage may store computer instructions, e.g., computer code, that when executed by the processor(s) of the computing device cause the computing device to perform the functions described in this application.
  • the non-production hosts (100) may be other types of computing devices without departing from the invention. For additional details regarding computing devices, See FIG. 9 .
  • the non-production hosts (100) are distributed computing devices.
  • a distributed computing device refers to functionality provided by a logical device that utilizes the computing resources of one or more separate and/or distinct computing devices.
  • the non-production hosts (100) may be distributed devices that include components distributed across a number of separate and/or distinct computing devices. In such a scenario, the functionality of the non-production hosts (100) may be performed by multiple different computing devices without departing from the invention.
  • the non-production hosts (100) host virtual machines, or other components, that are concealed from the clients and/or other entities.
  • a concealed virtual machine may not be visible to other devices.
  • the non-production hosts (100) may host any number of concealed virtual machines without departing from the invention.
  • the non-production hosts (100) may also host agents, or other executing components, for orchestrating the operation of the hosted virtual machines.
  • virtual machines are restored on a non-production host (100) and are transferred to a production host (130) after being restored.
  • the non-production hosts (100) have high computing resource availability.
  • the non-production hosts (100) may have high computing resource availability because the virtual machines hosted by the non-production hosts (100) are concealed. In other words, clients (140) or other entities do not interact with the virtual machines hosted by the non-production hosts (100) because the hosted virtual machines are concealed.
  • the remote backup agents (110) are computing devices.
  • the computing devices may be, for example, mobile phones, tablet computers, laptop computers, desktop computers, servers, distributed computing systems, or a cloud resource.
  • the computing devices may include one or more processors, memory (e.g., random access memory), and persistent storage (e.g., disk drives, solid state drives, etc.).
  • the persistent storage may store computer instructions, e.g., computer code, that when executed by the processor(s) of the computing device cause the computing device to perform the functions described in this application.
  • remote backup agents (110) may be other types of computing devices without departing from the invention. For additional details regarding computing devices, See FIG. 9 .
  • the remote backup agents (110) are distributed computing devices.
  • a distributed computing device refers to functionality provided by a logical device that utilizes the computing resources of one or more separate and/or distinct computing devices.
  • the remote backup agents (110) may be distributed devices that include components distributed across a number of separate and/or distinct computing devices. In such a scenario, the functionality of the remote backup agents (110) may be performed by multiple different computing devices without departing from the invention.
  • the remote backup agents (110) provide services to virtual machines.
  • the services may include storing virtual machine data, generating backups of the virtual machines or portions thereof, and/or performing restorations of virtual machines.
  • the remote backup agents (110) may perform the methods illustrated in FIGs. 6-8B .
  • the remote backup agents (110) may use data structures shown in FIGs. 5A-5C when performing the aforementioned methods. For additional details regarding the remote backup agents (110), See FIG. 3 .
  • the backup storages (120) are computing devices.
  • the computing devices may be, for example, mobile phones, tablet computers, laptop computers, desktop computers, servers, distributed computing systems, or a cloud resource.
  • the computing devices may include one or more processors, memory (e.g., random access memory), and persistent storage (e.g., disk drives, solid state drives, etc.).
  • the persistent storage may include hard disk drives, solid state drives, tape drives, and/or other storage devices.
  • the persistent storage may store computer instructions, e.g., computer code, that when executed by the processor(s) of the computing device cause the computing device to perform the functions described in this application.
  • the backup storages (120) may be other types of computing devices without departing from the invention. For additional details regarding computing devices, See FIG. 9 .
  • the backup storages (120) are distributed computing devices.
  • a distributed computing device refers to functionality provided by a logical device that utilizes the computing resources of one or more separate and/or distinct computing devices.
  • the backup storages (120) may be distributed devices that include components distributed across a number of separate and/or distinct computing devices. In such a scenario, the functionality of the backup storages (120) may be performed by multiple different computing devices without departing from the invention.
  • the backup storages (120) store data from the production hosts (130).
  • the data may be, for example, images of virtual machines executing on the production hosts (130), application data from virtual machines, or any other type of data.
  • the data stored in the backup storages (120) may enable virtual machines executing on the production hosts (130) to be restored.
  • the data stored in the backup storages (120) may reflect a past state of the virtual machines or other applications executing on the production hosts (130).
  • the backup storages (120) may store additional or different data without departing from the invention.
  • different backup storages have different performance characteristics. For example, some backup storages may be high performance in that data may be stored to or retrieved from them quickly. In contrast, some backup storages may be low performance in that data may be stored to or retrieved from them slowly. It may be less costly to store data in low performance backup storages than in high performance backup storages.
  • multiple backup storages are used to store multiple copies of the same data. For example, in some embodiments of the invention a high degree of redundancy may be requested. In such a scenario, multiple copies of data may be stored in multiple backup storages to improve the likelihood that the stored data will be retrievable in the future.
  • some of the backup storages (120) are deduplicated storages.
  • a deduplicated storage attempts to increase the quantity of data that it can store by only storing copies of unique data.
  • the data may first be checked to determine whether it is duplicative of data already stored in the backup storage. Only the unique portions of the data may be stored in the backup storage. Storing and accessing data in a deduplicated storage may be significantly more costly, in terms of computing resources, than storing data in a non-deduplicated storage.
  • FIG. 2 shows a diagram of an example production host (200) in accordance with one or more embodiments of the invention.
  • the example production host (200) hosts virtual machines (210).
  • the example production host (200) may host any number of virtual machines (210A, 210N) without departing from the invention.
  • the virtual machines (210) execute using computing resources of the example production host (200).
  • each of the virtual machines (210) may be allocated a portion of the processing resources, memory resources, and/or storage resources of the example production host (200).
  • an image of each of the virtual machines (210) at points in time in the past may be stored.
  • a differencing disk that stores each of the changes made from the image of each of the virtual machines (210) may be stored.
  • the aforementioned images and differencing disks may be stored locally or in a backup storage.
  • generating a backup of a virtual machine includes storing a copy of the image of the virtual machine and any differencing disks in a backup storage.
  • the differencing disks may be merged with a virtual machine image to obtain a representation of the virtual machine at the point in time following the periods of time reflected by each of the differencing disks.
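  • As a rough illustration of the merge described above, the following Python sketch models an image and its differencing disks as dictionaries mapping block offsets to block data; real disk formats are more involved, and the function name and representation are assumptions.

        # Simplified model: base_image and each differencing disk map block
        # offsets to block contents; later differencing disks supersede earlier data.
        def merge_image(base_image, differencing_disks):
            """Return the virtual machine state at the time of the last differencing disk."""
            merged = dict(base_image)
            for disk in differencing_disks:      # apply in chronological order
                for offset, block in disk.items():
                    merged[offset] = block
            return merged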
  • the example production host (200) may include a hypervisor (220) that manages the execution of the virtual machines (210).
  • the hypervisor (220) may instantiate and/or terminate any of the virtual machines (210).
  • the hypervisor (220) is a hardware device including circuitry.
  • the hypervisor (220) may be, for example, a digital signal processor, a field programmable gate array, or an application specific integrated circuit.
  • the hypervisor (220) may be other types of hardware devices without departing from the invention.
  • the hypervisor (220) is implemented as computing code stored on a persistent storage that when executed by a processor performs the functionality of the hypervisor (220).
  • the processor may be a hardware processor including circuitry such as, for example, a central processing unit or a microcontroller.
  • the processor may be other types of hardware devices for processing digital information without departing from the invention.
  • the example production host (200) may include a production agent (230) that manages the storage of virtual machine data in a backup storage.
  • the production agent (230) may issue commands to the hypervisor (220) to control the operation of a virtual machine when attempting to store virtual machine data.
  • the production agent (230) may initiate the process of generating a backup package, i.e., data that reflects a state of an entity and enables the entity to be restored to the state, for a virtual machine, an application, or other entity executing on the example production host (200).
  • the production agent (230) may initiate a process of restoring a virtual machine, application, or other entity, or of migrating a restored virtual machine, application, or other entity.
  • the production agent (230) is a hardened entity, i.e., not modifiable by an entity that is remote to a production host on which the production agent (230) is executing.
  • the production agent (230) may have a set, finite number of predefined functions that may be invoked by a remote entity.
  • the production agent (230) is not configurable by modifying settings or associated configuration files.
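  • The hardened behavior described above can be sketched as an agent that exposes only a fixed dispatch table to remote callers; the command names and return values below are assumptions for illustration, not the patented interface.

        # Illustrative sketch of a hardened production agent.
        class ProductionAgent:
            # the set of remotely invocable functions is fixed; there are no
            # remotely modifiable settings or configuration files
            _PREDEFINED = {"generate_backup", "restore_virtual_machine"}

            def generate_backup(self, vm_id):
                # package the VM image and differencing disks for a backup storage
                return {"vm": vm_id, "type": "backup_package"}

            def restore_virtual_machine(self, vm_id, backup_package):
                # apply a backup package to return the VM to a prior state
                return {"vm": vm_id, "restored_from": backup_package}

            def handle_remote_command(self, command, **kwargs):
                if command not in self._PREDEFINED:
                    raise PermissionError("unsupported command: " + command)
                return getattr(self, command)(**kwargs)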
  • the production agent (230) is a hardware device including circuitry.
  • the production agent (230) may be, for example, a digital signal processor, a field programmable gate array, or an application specific integrated circuit.
  • the production agent (230) may be other types of hardware devices without departing from the invention.
  • the production agent (230) is implemented as computing code stored on a persistent storage that when executed by a processor performs the functionality of the production agent (230).
  • the processor may be a hardware processor including circuitry such as, for example, a central processing unit or a microcontroller.
  • the processor may be other types of hardware devices for processing digital information without departing from the invention.
  • FIG. 3 shows a diagram of an example remote backup agent (300) in accordance with one or more embodiments of the invention.
  • the example remote backup agent (300) manages the process of storing data in backup storages and restoring virtual machines, applications, or other entities using data stored in the backup storages.
  • the example remote backup agent (300) may include a backup and recovery manager (310) and a persistent storage (320) storing data structures used by the backup and recovery manager (310).
  • the backup and recovery manager (310) provides backup and restoration services to virtual machines.
  • the backup and recovery manager (310) may obtain data from the virtual machines and store it in the backup storages.
  • the backup and recovery manager (310) may obtain data from a backup storage and perform a restoration of a virtual machine, application, or another entity. In one or more embodiments of the invention, performing a restoration returns an entity to a previous state.
  • the backup and recovery manager (310) may perform all, or a portion thereof, of the methods illustrated in FIGs. 6-8B .
  • the backup and recovery manager (310) may use the data structures in the persistent storage (320).
  • the backup and recovery manager (310) is a hardware device including circuitry.
  • the backup and recovery manager (310) may be, for example, a digital signal processor, a field programmable gate array, or an application specific integrated circuit.
  • the backup and recovery manager (310) may be other types of hardware devices without departing from the invention.
  • the backup and recovery manager (310) is implemented as computing code stored on a persistent storage that when executed by a processor performs the functionality of the backup and recovery manager (310).
  • the processor may be a hardware processor including circuitry such as, for example, a central processing unit or a microcontroller.
  • the processor may be other types of hardware devices for processing digital information without departing from the invention.
  • the persistent storage (320) is a storage device that stores data structures.
  • the persistent storage (320) may be a physical or virtual device.
  • the persistent storage (320) may include hard disk drives, solid state drives, tape drives, and other components to provide data storage functionality.
  • the persistent storage (320) may be a virtual device that utilizes the physical computing resources of other components to provide data storage functionality.
  • the persistent storage (320) stores a topology map (320A), a resource map (320B), a bad host map (320C), and backup/restoration policies (320D).
  • the persistent storage (320) may store additional data structures without departing from the invention.
  • the topology map (320A) may be a representation of the physical and virtual topology of the entities of FIG. 1 .
  • the topology map may include the hardware and/or software profile of each computing device and/or the connectivity of each computing device.
  • the topology map (320A) may be updated by the example remote backup agent (300).
  • the resource map (320B) may specify the computing resources of the production hosts, the non-production hosts, the remote backup agents, and the backup storages of the system of FIG. 1 .
  • the resource map (320B) may specify the available processing resources, memory resources, storage resources, and communication resources of each of the aforementioned entities.
  • the resource map (320B) may be updated by the example remote backup agent (300).
  • the bad host map (320C) may specify the production hosts that are in a partial error state. For example, over time components of the production hosts may fail.
  • the component may be either hardware or software, e.g., a hypervisor, a backup agent, etc.
  • the bad host map (320C) may specify identifiers of each production host that is in a partial error state due to a partial failure of a hardware or software component of the respective production host.
  • the bad host map (320C) may be updated by the example remote backup agent (300).
  • the example remote backup agent (300) may update the bad host map (320C) when the backup and recovery manager (310) is unable to perform a backup or a recovery due to a partial error state of a production host.
  • the backup/restoration policies (320D) may specify the backup and/or restoration workflows for virtual machines hosted by components of the system of FIG. 1 .
  • the backup/restoration policies (320D) may specify the frequency, storage location, restoration location, and other aspects of performing backups or restorations.
  • the backup/restoration policies (320D) may be specified on a granular level, e.g., a workflow for each virtual machine, or on a macro level, e.g., a workflow for multiple virtual machines.
  • while the data structures of the persistent storage (320) are illustrated as separate data structures, the aforementioned data structures may be combined with each other and/or with other data without departing from the invention. Additionally, while the aforementioned data structures are illustrated as being stored on the example remote backup agent (300), the data structures may be stored on persistent storage of other devices without departing from the invention. For example, multiple remote backup agents may use a single instance of any of the aforementioned data structures stored on one of the remote backup agents or another entity.
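  • One hedged way to picture the topology map, resource map, bad host map, and backup/restoration policies is as simple records; the field names below are assumptions chosen to mirror the description, not the actual data layout.

        # Field names are illustrative assumptions.
        from dataclasses import dataclass, field

        @dataclass
        class TopologyEntry:
            host_id: str
            functionality: str          # e.g. "production host", "backup storage"
            connectivity: dict          # peer host_id -> bandwidth

        @dataclass
        class ResourceEntry:
            host_id: str
            total_capacity: dict        # e.g. {"cpu": 64, "memory_gb": 512}
            available_capacity: dict    # portion not currently in use

        @dataclass
        class BackupRestorationPolicy:
            vm_ids: list                # granular (one VM) or macro (many VMs)
            frequency_hours: int
            backup_storage_location: str
            restoration_location: str

        @dataclass
        class RemoteBackupAgentState:
            topology_map: dict = field(default_factory=dict)   # host_id -> TopologyEntry
            resource_map: dict = field(default_factory=dict)   # host_id -> ResourceEntry
            bad_host_map: dict = field(default_factory=dict)   # host_id -> error status
            policies: list = field(default_factory=list)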
  • FIG. 4 shows a diagram of an example backup storage (400) in accordance with one or more embodiments of the invention.
  • the example backup storage (400) stores data from remote backup agents or other entities.
  • a remote backup agent may send data to the example backup storage (400) for storage.
  • an example backup storage (400) may store data obtained from a production host.
  • the remote backup agent may orchestrate the process, i.e., instruct the production host to store the data in the example backup storage (400).
  • the example backup storage (400) provides previously stored data to remote backup agents or other entities.
  • a remote backup agent may initiate a restoration of a virtual machine.
  • the remote backup agent may send an instruction to the example backup storage (400) or the computing device where the restoration of the virtual machines will be performed to provide or obtain, respectively, data in the example backup storage (400).
  • the obtained data may be used to perform the restoration.
  • the example backup storage (400) may include a storage manager (410) and a persistent storage (420) storing data structures used by the storage manager (410).
  • the storage manager (410) manages the storage of data in and the retrieval of data from the persistent storage (420).
  • the data stored in the persistent storage (420) may be deduplicated before storage.
  • the storage manager (410) may compare to-be-stored data to already stored data and only store unique portions of the to-be-stored data.
  • a unique portion may be a portion of the to-be-stored data that is not duplicative of data already stored in the persistent storage (420). For example, after storing a first draft of a text document in the persistent storage (420), minor changes may be made to the first draft.
  • when the edited document is stored, the storage manager (410) may store only the portions of the document that changed relative to the first draft. Thereby, more data may be stored in the persistent storage (420) when compared to storing data in the persistent storage (420) without performing deduplication of the data.
  • deduplication uses significant computing resources, including processing cycles, memory cycles, and/or storage input-output.
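  • The deduplication behavior described above can be sketched with content fingerprinting; the fixed-size chunking, SHA-256 fingerprints, and method names below are assumptions rather than the mechanism claimed by the patent.

        # Hedged sketch of a deduplicated storage; data is assumed to be bytes.
        import hashlib

        class DeduplicatedStorage:
            def __init__(self, chunk_size=4096):
                self.chunk_size = chunk_size
                self.chunks = {}    # fingerprint -> unique chunk data
                self.recipes = {}   # object name -> ordered list of fingerprints

            def store(self, name, data):
                recipe = []
                for i in range(0, len(data), self.chunk_size):
                    chunk = data[i:i + self.chunk_size]
                    fingerprint = hashlib.sha256(chunk).hexdigest()
                    # only chunks not already present consume new space
                    self.chunks.setdefault(fingerprint, chunk)
                    recipe.append(fingerprint)
                self.recipes[name] = recipe

            def retrieve(self, name):
                # regenerate the original object from its stored unique chunks
                return b"".join(self.chunks[fp] for fp in self.recipes[name])

  • Storing a second, lightly edited copy of a document under this scheme would add only the chunks that changed, which is the space saving described above, at the cost of the hashing and lookup work noted.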
  • the storage manager (410) is a hardware device including circuitry.
  • the storage manager (410) may be, for example, a digital signal processor, a field programmable gate array, or an application specific integrated circuit.
  • the storage manager (410) may be other types of hardware devices without departing from the invention.
  • the storage manager (410) is implemented as computing code stored on a persistent storage that when executed by a processor performs the functionality of the storage manager (410).
  • the processor may be a hardware processor including circuitry such as, for example, a central processing unit or a microcontroller.
  • the processor may be other types of hardware devices for processing digital information without departing from the invention.
  • the persistent storage (420) is a storage device that stores data structures.
  • the persistent storage (420) may be a physical or virtual device.
  • the persistent storage (420) may include hard disk drives, solid state drives, tape drives, and other components to provide data storage functionality.
  • the persistent storage (420) may be a virtual device that utilizes the physical computing resources of other components to provide data storage functionality.
  • the persistent storage (420) stores a deduplicated data storage (420A).
  • the deduplicated data storage (420A) may be a data structure that includes data necessary to regenerate previously stored data structures. To regenerate a previously stored data structure, multiple pieces of different unique data stored in the deduplicated data storage (420A) may be combined.
  • a deduplicated storage may only store copies of unique data.
  • each copy of a unique data may represent a portion of multiple data structures that were previously stored in the deduplicated data storage (420A).
  • a copy of a unique piece of data stored in the deduplicated data storage (420A) may be used to regenerate multiple pieces of previously stored data.
  • the deduplicated data storage (420A) may store unique pieces of data in any format without departing from the invention. Additionally, while the persistent storage (420) is illustrated as only including the deduplicated data storage (420A), the persistent storage (420) may include other data without departing from the invention.
  • FIGs. 5A-5C show data structures that may be used by the components of the system of FIG. 1 .
  • FIG. 5A shows a diagram of an example topology map (500) in accordance with one or more embodiments of the invention.
  • the example topology map (500) may specify functionality of the production hosts, the non-production hosts, the remote backup agents, and/or the backup storages.
  • the functionality may include the computing resources such as, for example, the computing cycles, memory cycles, storage bandwidth, and/or communication bandwidth.
  • the functionality may include a function to be performed, i.e., a function of a distributed system.
  • the example topology map (500) may also specify the connectivity of each of the aforementioned components.
  • the connectivity map may specify the bandwidth between each of the components.
  • the example topology map (500) includes a number of entries (501, 505).
  • Each entry may include a host identifier (e.g., 501A) that specifies an identifier of a component of FIG. 1 .
  • Each entry may also include a functionality (e.g., 501B), i.e., a description, associated with the component of the system of FIG. 1 identified by the host ID (501A).
  • FIG. 5B shows a diagram of an example resource map (510) in accordance with one or more embodiments of the invention.
  • the example resource map (510) may specify the computing resources of each component, or a portion of the components, of the system of FIG. 1 .
  • the example resource map (510) includes a number of entries (511, 515). Each entry may include a host identifier (e.g., 511A) that specifies an identifier of a component of FIG. 1. Each entry may also include a total computing resource capacity (e.g., 511B) that specifies a total quantity of computing resources available to the component of the system of FIG. 1 identified by the host ID (511A). Each entry may also include an available computing resource capacity (e.g., 511C) that specifies the available quantity of computing resources of the component of the system of FIG. 1 identified by the host ID (511A). In other words, the available computing resource capacity (511C) may specify the computing resources that are not currently in use while the total computing resource capacity (511B) may specify the aggregate of the computing resources that are both in use and not in use.
  • FIG. 5C shows a diagram of an example bad host map (520) in accordance with one or more embodiments of the invention.
  • the example bad host map (520) may specify hosts of the system of FIG. 1 that are in a state that prevents virtual machines from being restored using the hosts.
  • a host may have a hardware error or a software error that prevents the host from performing a restoration of a virtual machine.
  • the example bad host map (520) may specify each of the aforementioned hosts that are in such a state.
  • the example bad host map (520) includes a number of entries (521, 525). Each entry may include a host identifier (e.g., 521A) that specifies an identifier of a component of FIG. 1. Each entry may also include a status (e.g., 521B) that specifies whether the host specified by the host ID (521A) is in a state that prevents it from performing backups or restorations.
  • while the data structures of FIGs. 5A-5C are shown as lists of entries, the data structures may be stored in other formats, may be divided into multiple data structures, and/or portions of the data structures may be distributed across multiple computing devices without departing from the invention.
  • components of the system of FIG. 1 may perform methods of generating backups and performing restorations of virtual machines, in addition to other functions.
  • FIGs. 6-8B show methods in accordance with one or more embodiments of the invention that may be performed by components of the system of FIG. 1 .
  • FIG. 6 shows a flowchart of a method in accordance with one or more embodiments of the invention.
  • the method depicted in FIG. 6 may be used to perform remote backups of virtual machines based on workflows in accordance with one or more embodiments of the invention.
  • the method shown in FIG. 6 may be performed by, for example, remote backup agents (110, FIG. 1 ).
  • Other components of the system illustrated in FIG. 1 may perform the method of FIG. 6 without departing from the invention.
  • In Step 600, a first remote backup of virtual machines is performed based on workflows using production agents hosted by production hosts that also host the virtual machines.
  • performing the first remote backup of the virtual machines includes initiating a backup by a remote backup agent, generating a backup package that reflects the changes to the virtual machines since the last time a backup of the virtual machines was generated, and storing the generated backup package in a backup storage.
  • the remote backup agent may initiate the backup by sending a message to a production agent present on a host that hosts a portion of the virtual machines.
  • the backup package may be multiple packages each of which including data from a single virtual machine. Each of the multiple packages may be transmitted separately, or in aggregate, to the backup storage. Different packages of the multiple packages may be transmitted to different backup storages. Copies of any number of the multiple packages may be transmitted to and stored in any number of backup storages.
  • the first remote backup may be performed, in part, by sending a command from a remote backup agent to a production agent.
  • the command may instruct the production agent to perform one of a number of predefined functions.
  • the functions may be to generate a backup of a virtual machine.
  • the first remote backup may be performed by identifying a portion of the virtual machines based on the workflows; identifying a first portion of the production hosts that each host a virtual machine of the portion of the virtual machines; sending a backup initiation request to each production host of the portion of the production hosts; obtaining first backup data from each production host of the first portion of the production hosts after sending the backup initiation request; and storing the first obtained backup data in backup storage.
  • the workflows may be specified by backup/restoration policies.
  • In Step 602, a workflow update is obtained.
  • the workflow update specifies a change to a process of performing a backup or a process of performing a restoration of a virtual machine.
  • the workflow update may specify, for example, a change to a frequency of generation of a backup, a change to a location to where a virtual machine is to be restored, or a change to a storage location of the backup.
  • the workflow update may specify other portions of the process of generating a backup or performing a restoration without departing from the invention.
  • In Step 604, the workflows are updated based on the workflow update to obtain updated workflows.
  • the workflows are updated by modifying a backup/restoration policy (e.g., 320D) that specifies the actions taken to perform a backup or restoration of a virtual machine.
  • the backup/restoration policy may be modified to conform to the workflow specified by the workflow update.
  • multiple backup/restoration policies are updated based on the workflow update.
  • a workflow update may be used to modify multiple backup/restoration policies.
  • multiple policies that determine the workflows for multiple virtual machines may be updated similarly.
  • embodiments of the invention may ensure that the workflows for any number of virtual machines can be made consistent, i.e., the same workflow.
  • prior methods of performing a workflow update may require the separate update of multiple entities across a range of both production and non-production hosts.
  • embodiments of the invention may provide consistent workflows for performing backups or restorations of virtual machines.
  • In Step 606, a second remote backup of the virtual machines is performed based on the updated workflows using the production hosts without modifying the production agents.
  • the updated workflows specify a workflow that is different from a workflow specified by the workflows before the update.
  • the difference may be, for example, a frequency at which the backup is performed, a storage location of the generated backup, a redundancy of the backup, an entity that performs the backup, or any other aspect of the workflow.
  • a restoration of a virtual machine of the virtual machines is performed.
  • the restoration may be performed using the updated workflows.
  • the restoration may be performed based on a workflow that is different from the workflows before they were updated.
  • the method may end following Step 606.
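  • The packaging aspect noted above for the first remote backup (one package per virtual machine, with copies optionally spread across several backup storages) might be sketched as follows; the function, the copies parameter, and the round-robin placement are assumptions, not the claimed policy.

        # Hypothetical sketch of distributing per-VM backup packages.
        def store_backup_packages(packages, backup_storages, copies=2):
            """packages: dict of vm_id -> backup data; backup_storages: list of storages."""
            for index, (vm_id, data) in enumerate(packages.items()):
                # spread each package's copies across distinct storages to improve
                # the likelihood that the data remains retrievable
                for copy_number in range(min(copies, len(backup_storages))):
                    storage = backup_storages[(index + copy_number) % len(backup_storages)]
                    storage.store(vm_id, data)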
  • FIG. 7 shows a flowchart of a method in accordance with one or more embodiments of the invention.
  • the method depicted in FIG. 7 may be used to perform a restoration in accordance with one or more embodiments of the invention.
  • the method shown in FIG. 7 may be performed by, for example, remote backup agents (110, FIG. 1 ).
  • Other components of the system illustrated in FIG. 1 may perform the method of FIG. 7 without departing from the invention.
  • In Step 700, a request to restore a virtual machine hosted by a production host is obtained.
  • the request is obtained from a production host.
  • a production agent may identify that a virtual machine is in the process of failing and send a request to perform a restoration of the virtual machine.
  • Other entities may monitor the virtual machines and initiate restorations without departing from the invention.
  • In Step 702, a high computing resource availability host that does not host the virtual machine is identified in response to the request.
  • the high computing resource availability host is a non-production host.
  • the high computing resource availability host is a production host that has sufficient resources to perform a restoration.
  • a production host may have sufficient resources if it has a predetermined quantity of available computing resources.
  • the predetermined quantity may be the same quantity that the virtual machine that is to be restored is either currently using or was using before the virtual machine that is to be restored failed.
  • In Step 704, while the virtual machine is operating, a restoration of the virtual machine is performed on the identified high computing resource availability host.
  • performing the restoration includes transferring an image of the virtual machine to the identified high computing resource availability host, transferring a difference disk of the virtual machine to the identified high computing resource availability host, and performing a merge of the virtual machine image and the difference disk.
  • performing a merge includes modifying the virtual machine image to reflect the changes included in the difference disk.
  • the merged image of the virtual machine may reflect a state of the virtual machine at the time the difference disk was generated, i.e., when changes to the virtual machine were last stored in the difference disk.
  • In Step 706, the restored virtual machine is migrated.
  • the restored virtual machine may be migrated to a production host.
  • the production host may be the host that hosts the existing copy of the virtual machine, or a different production host.
  • the restored virtual machine may be migrated by transferring the merged image of the virtual machine to the production host.
  • In Step 708, the virtual machine is concealed.
  • the virtual machine may be concealed by suspending or terminating the execution of the virtual machine, i.e., the existing virtual machine.
  • In Step 710, the restored virtual machine is exposed.
  • the restored virtual machine is exposed by initiating execution of the restored virtual machine.
  • the clients that were interacting with the concealed virtual machine may be redirected to the restored virtual machine.
  • the client interactions with the now concealed virtual machine may be directed to the restored virtual machine.
  • Configurations or other settings from the concealed virtual machine may be transferred to the restored virtual machine to prepare the restored virtual machine to interact with the clients that were interacting with the concealed virtual machine.
  • the method may end following Step 710.
  • a virtual machine is concealed by terminating the virtual machine.
  • the high computing resource availability host does not host any exposed virtual machines.
  • restoration of a virtual machine includes transferring a backup of the virtual machine from a backup storage to the high computing resource availability host.
  • a backup of the virtual machine consists of data associated with a first period of time in the past.
  • the backup may include a virtual machine image associated with a predetermined period of time and one or more difference disks associated with other finite periods of time.
  • performing a restoration of a virtual machine further includes transferring a partial backup of the virtual machine to the high computing resource availability host.
  • the partial backup may be data from a differencing disk.
  • a partial backup may reflect differential data.
  • each of the partial backups is generated after an image of the virtual machine is generated.
  • performing a restoration of a virtual machine includes merging a full backup and a partial backup on a high computing resource availability host to obtain an up to date backup.
  • the restoration may further include instantiating a restored virtual machine using the up to date backup. Instantiating may cause the restored virtual machine to begin execution.
  • the restored virtual machine may be instantiated by a production agent on a production host to which the restored virtual machine has been migrated.
  • exposing a restored virtual machine includes sending an execution initiation message to a production agent of the second host.
  • the execution initiation message may specify that the restored virtual machine is to be placed in an executing state.
  • migrating the restored virtual machine to a host includes sending a data transfer message to a high computing resource availability host on which the virtual machine was restored.
  • the data transfer message may specify that the restored virtual machine is to be transferred to the second host.
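  • Read together, Steps 700-710 and the variations above suggest a restore-and-swap flow along the following lines; the request fields, host methods, and the merge_image helper (see the merge sketch earlier) are all assumed names used only for illustration.

        # Hypothetical sketch of the FIG. 7 restoration flow.
        def restore_and_swap(request, hosts, backup_storage):
            failing_vm = request.vm_id
            production_host = request.production_host

            # Step 702: pick a high computing resource availability host that does
            # not host the VM and has at least the resources the failing VM was using
            needed = production_host.resources_used_by(failing_vm)
            candidates = [h for h in hosts
                          if h is not production_host and h.available_resources() >= needed]
            if not candidates:
                raise RuntimeError("no high computing resource availability host found")
            target = candidates[0]

            # Step 704: restore on the target while the existing VM keeps operating
            image = backup_storage.fetch_image(failing_vm)
            diffs = backup_storage.fetch_differencing_disks(failing_vm)
            target.receive_restored_vm(failing_vm, merge_image(image, diffs))

            # Step 706: migrate the restored VM to a production host
            target.migrate_vm(failing_vm, production_host)

            # Steps 708-710: conceal the existing copy, then expose the restored copy
            production_host.suspend(failing_vm)
            production_host.start_restored(failing_vm)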
  • FIG. 8A shows a flowchart of a method in accordance with one or more embodiments of the invention.
  • the method depicted in FIG. 8A may be used to service a support request in accordance with one or more embodiments of the invention.
  • the method shown in FIG. 8A may be performed by, for example, remote backup agents (110, FIG. 1 ).
  • Other components of the system illustrated in FIG. 1 may perform the method of FIG. 8A without departing from the invention.
  • In Step 800, a support request for a virtual machine is obtained.
  • the support request specifies an identity of the virtual machine.
  • the support request is a request to generate a backup of the virtual machine.
  • the support request is a request to perform a restoration of the virtual machine.
  • In Step 802, a capacity analysis is performed.
  • the capacity analysis determines a capacity that a backup storage and/or a remote backup agent have available.
  • the capacity may be, for example, the total number of concurrently performed support sessions associated with performing backups or restorations.
  • the capacity analysis is performed via the method illustrated in FIG. 8B .
  • the capacity analysis may be performed via other methods without departing from the invention.
  • In Step 804, it is determined whether there is available capacity.
  • the presence of available capacity is determined based on the capacity analysis.
  • the capacity analysis may specify whether additional sessions for performing a backup or restoration may be performed without degrading a quality of backup or restoration generation service.
  • If sufficient capacity is available, the method may proceed to Step 806. If sufficient capacity is not available, the method may proceed to Step 808.
  • In Step 806, a session associated with the support request is initiated.
  • the session is a backup generation session.
  • the backup generation session may generate a backup of the virtual machine of Step 800, or portion thereof.
  • the method may end following Step 806.
  • Returning to Step 804, the method may proceed to Step 808 following Step 804 when sufficient capacity is not available.
  • In Step 808, a future capacity is predicted.
  • the future capacity is the capacity for performing a backup or restoration in the future.
  • the future capacity may be specified at a granular level, e.g., the number of additional concurrent sessions that may be performed for predetermined time periods in the future.
  • the future capacity is predicted by analyzing backup/restoration policies to determine a number of concurrent backup and/or restorations that will be performed in each of the predetermined time periods in the future, identifying a quantity of available computing resources that will be available during each of the predetermined time periods in the future, and predicting the future capacity based on the number of concurrent backup and/or restorations as well as the available computing resources during each of the predetermined periods of time.
  • the predicted future capacity may specify an available capacity at a granular level over a future period of time, e.g., every 15 minutes for the next 24 hours.
  • Step 810 it is determined whether future capacity is available.
  • the determination of whether future capacity is available is made by comparing the capacity required for the support request, e.g., a number of concurrent sessions for any number of backups or restorations, to the predicted future capacity. If the required capacity exceeds the predicted future capacity, at any point in time in the future, the future capacity may be determined as being available.
  • Step 814 If sufficient capacity is available, the method may proceed to Step 814. If sufficient capacity is not available, the method may proceed to Step 812.
  • In Step 812, the support request is denied.
  • the method may end following Step 812.
  • In Step 814, a session associated with the support request is scheduled.
  • the session is scheduled for a future period of time in which sufficient capacity is available.
  • the predicted future capacity of Step 808 may be used to determine a period of time in the future in which to schedule a session associated with the support request.
  • the scheduled session is a backup session for the virtual machine (e.g., Step 800) when the support request is a request for a backup generation.
  • the scheduled session is a restoration session for the virtual machine (e.g., Step 800) when the support request is a request for a restoration.
  • the method may end following Step 814.
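The following is a minimal, non-authoritative Python sketch of the FIG. 8A triage described above, included only to make the control flow concrete. All names (SupportRequest, CapacityWindow, predict_future_capacity, handle_support_request, MAX_CONCURRENT_SESSIONS) are hypothetical and do not appear in the application; the 15-minute/24-hour granularity simply mirrors the example given for Step 808.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional, Tuple

MAX_CONCURRENT_SESSIONS = 8  # assumed ceiling on concurrent backup/restoration sessions


@dataclass
class SupportRequest:
    vm_id: str                 # identity of the virtual machine (Step 800)
    kind: str                  # "backup" or "restoration"
    sessions_needed: int = 1   # concurrent sessions required to service the request


@dataclass
class CapacityWindow:
    start: datetime
    free_sessions: int         # additional concurrent sessions supportable in this window


def predict_future_capacity(policies: List[Tuple[datetime, datetime, int]],
                            window_minutes: int = 15,
                            horizon_hours: int = 24) -> List[CapacityWindow]:
    """Predict per-window future capacity (Step 808).

    `policies` lists already-planned work as (start, end, concurrent_sessions)
    tuples derived from the backup/restoration policies.
    """
    now = datetime.now()
    windows = []
    for i in range((horizon_hours * 60) // window_minutes):
        start = now + timedelta(minutes=i * window_minutes)
        end = start + timedelta(minutes=window_minutes)
        # Sessions the policies will already be running during this window.
        planned = sum(s for (p_start, p_end, s) in policies
                      if p_start < end and p_end > start)
        windows.append(CapacityWindow(start=start,
                                      free_sessions=max(0, MAX_CONCURRENT_SESSIONS - planned)))
    return windows


def handle_support_request(request: SupportRequest,
                           current_free_sessions: int,
                           predicted_windows: List[CapacityWindow]) -> Optional[datetime]:
    """Triage a support request as in Steps 800-814.

    Returns the session start time: now if capacity exists, a future window
    start if the session is scheduled, or None if the request is denied.
    """
    # Steps 802/804: capacity analysis of the backup storage / remote backup agent.
    if current_free_sessions >= request.sessions_needed:
        return datetime.now()           # Step 806: initiate the session immediately.

    # Steps 808/810: look for the earliest future window with enough capacity.
    for window in sorted(predicted_windows, key=lambda w: w.start):
        if window.free_sessions >= request.sessions_needed:
            return window.start         # Step 814: schedule the session.

    return None                         # Step 812: deny the support request.
```

For example, calling handle_support_request(request, current_free_sessions=0, predicted_windows=predict_future_capacity(policies)) would return the start of the earliest 15-minute window within the next 24 hours that can absorb the request, or None if no such window exists.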
  • FIG. 8B shows a flowchart of a method in accordance with one or more embodiments of the invention.
  • the method depicted in FIG. 8B may be used to service a support request in accordance with one or more embodiments of the invention.
  • the method shown in FIG. 8B may be performed by, for example, remote backup agents (110, FIG. 1 ).
  • Other components of the system illustrated in FIG. 1 may perform the method of FIG. 8B without departing from the invention.
  • In Step 820, a service request time specified by a support request is identified.
  • the service request time is specified by the support request.
  • In Step 822, the identified service request time is compared to the computing resources of the backup storage (see the sketch after the FIG. 8B steps below).
  • the comparison is based on a bandwidth between the backup storage and a production host that hosts the virtual machine associated with the support request.
  • the available bandwidth of the backup storage at the identified service request time may be compared to an estimated bandwidth required to complete the support request. Based on the comparison, a number of supportable concurrent sessions at the service request time may be identified.
  • the comparison is based on an availability of computation cycles of the backup storage at the identified service request time.
  • the available computation cycles may be compared to an estimated number of computation cycles required to complete the support request. Based on the comparison, a number of supportable concurrent sessions at the service request time may be identified.
  • the comparison is based on an availability of memory of the backup storage at the identified service request time.
  • the available memory may be compared to an estimated quantity of memory required to complete the support request. Based on the comparison, a number of supportable concurrent sessions at the service request time may be identified.
  • the comparison is based on an availability of input-output cycles of the backup storage's physical storage at the identified service request time.
  • the available input-output cycles may be compared to an estimated number of input-output cycles required to complete the support request. Based on the comparison, a number of supportable concurrent sessions at the service request time may be identified.
  • the method may end following Step 822.
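As a companion to the FIG. 8B comparison above, here is a hedged Python sketch of how the number of supportable concurrent sessions might be derived from the four resource dimensions discussed (bandwidth, computation cycles, memory, and input-output cycles). ResourceSnapshot and the per-session estimates are illustrative assumptions, not part of the application.

```python
from dataclasses import dataclass


@dataclass
class ResourceSnapshot:
    bandwidth_mbps: float   # production host <-> backup storage bandwidth
    cpu_cycles: float       # available computation cycles
    memory_gb: float        # available memory
    io_ops: float           # available storage input-output cycles


def supportable_concurrent_sessions(available: ResourceSnapshot,
                                    per_session: ResourceSnapshot) -> int:
    """Estimate the concurrent sessions supportable at the identified service
    request time (Step 822).

    Each resource dimension yields its own bound; the binding constraint is the
    minimum across dimensions.
    """
    bounds = [
        available.bandwidth_mbps / per_session.bandwidth_mbps,
        available.cpu_cycles / per_session.cpu_cycles,
        available.memory_gb / per_session.memory_gb,
        available.io_ops / per_session.io_ops,
    ]
    return max(0, int(min(bounds)))
```

Taking the minimum across dimensions reflects that any single exhausted resource, such as bandwidth to the production host, caps the number of sessions the backup storage can serve at the identified service request time.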
  • FIG. 9 shows a diagram of a computing device in accordance with one or more embodiments of the invention.
  • the computing device (900) may include one or more computer processors (902), non-persistent storage (904) (e.g., volatile memory, such as random access memory (RAM), cache memory), persistent storage (906) (e.g., a hard disk, an optical drive such as a compact disk (CD) drive or digital versatile disk (DVD) drive, a flash memory, etc.), a communication interface (912) (e.g., Bluetooth interface, infrared interface, network interface, optical interface, etc.), input devices (910), output devices (908), and numerous other elements (not shown) and functionalities.
  • the computer processor(s) (902) may be an integrated circuit for processing instructions.
  • the computer processor(s) may be one or more cores or micro-cores of a processor.
  • the computing device (900) may also include one or more input devices (910), such as a touchscreen, keyboard, mouse, microphone, touchpad, electronic pen, or any other type of input device.
  • the communication interface (912) may include an integrated circuit for connecting the computing device (900) to a network (not shown) (e.g., a local area network (LAN), a wide area network (WAN) such as the Internet, mobile network, or any other type of network) and/or to another device, such as another computing device.
  • the computing device (900) may include one or more output devices (908), such as a screen (e.g., a liquid crystal display (LCD), a plasma display, touchscreen, cathode ray tube (CRT) monitor, projector, or other display device), a printer, external storage, or any other output device.
  • One or more of the output devices may be the same or different from the input device(s).
  • the input and output device(s) may be locally or remotely connected to the computer processor(s) (902), non-persistent storage (904), and persistent storage (906).
  • One or more embodiments of the invention may be implemented using instructions executed by one or more processors of the data management device. Further, such instructions may correspond to computer readable instructions that are stored on one or more non-transitory computer readable mediums.
  • One or more embodiments of the invention may address the problem of generating consistent backups of virtual machines in a distributed environment.
  • local backup agents on production hosts must be individually configured to generate backups. That is, the workflows for generating backups must be specified on a granular level. Due to the distributed nature of a virtualized environment, ensuring consistency between the workflows places a large cognitive burden on system administrators.
  • Embodiments of the invention may decrease a cognitive burden on a system administrator by enabling backup workflows to be managed centrally in a distributed environment. More specifically, one or more embodiments of the invention may provide a method by which backups of virtual machines may be generated without modifying production agents of production hosts that host the virtual machines.
  • embodiments of the invention may provide a system that enables centralized control over a distributed environment without modification of the components of the distributed environment.
  • One or more embodiments of the invention may enable one or more of the following: i) improved consistency of generated backups, ii) decreased cognitive loads on users such as system administrators, and iii) improved likelihood of being able to restore a virtual machine because of consistently generated backups of the virtual machine.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
EP19169657.4A 2018-04-27 2019-04-16 System and method for backup in a virtualized environment Pending EP3584705A1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/964,099 US10572349B2 (en) 2018-04-27 2018-04-27 System and method for backup in a virtualized environment

Publications (1)

Publication Number Publication Date
EP3584705A1 2019-12-25

Family

ID=66217840

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19169657.4A Pending EP3584705A1 (en) 2018-04-27 2019-04-16 System and method for backup in a virtualized environment

Country Status (3)

Country Link
US (1) US10572349B2 (zh)
EP (1) EP3584705A1 (zh)
CN (1) CN110413369B (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10754739B2 (en) * 2018-10-26 2020-08-25 EMC IP Holding Company LLC System and method for predictive backup in a distributed environment
US11775393B2 (en) 2021-06-11 2023-10-03 EMC IP Holding Company LLC Method and system for mapping data protection services to data cluster components
US11740807B2 (en) 2021-10-05 2023-08-29 EMC IP Holding Company LLC Method and system for mapping data protection policies to data clusters
US20240241803A1 (en) * 2023-01-18 2024-07-18 Dell Products L.P. System and method for logical device migration based on a downtime prediction model

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001006368A1 (en) * 1999-07-15 2001-01-25 Commvault Systems, Inc. Modular backup and retrieval system
US20120331248A1 (en) * 2011-06-23 2012-12-27 Hitachi, Ltd. Storage management system and storage management method

Family Cites Families (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8417678B2 (en) 2002-07-30 2013-04-09 Storediq, Inc. System, method and apparatus for enterprise policy management
US7174433B2 (en) 2003-04-03 2007-02-06 Commvault Systems, Inc. System and method for dynamically sharing media in a computer network
US7500053B1 (en) 2004-11-05 2009-03-03 Commvvault Systems, Inc. Method and system for grouping storage system components
US9342364B2 (en) * 2008-04-09 2016-05-17 International Business Machines Corporation Workflow managed composite applications
US9191437B2 (en) 2009-12-09 2015-11-17 International Business Machines Corporation Optimizing data storage among a plurality of data storage repositories
US8918439B2 (en) 2010-06-17 2014-12-23 International Business Machines Corporation Data lifecycle management within a cloud computing environment
AU2011274249B2 (en) 2010-07-02 2016-05-12 Metacdn Pty Ltd Systems and methods for storing digital content
US9037712B2 (en) 2010-09-08 2015-05-19 Citrix Systems, Inc. Systems and methods for self-loading balancing access gateways
WO2012147123A1 (en) 2011-04-26 2012-11-01 Hitachi, Ltd. Storage apparatus and control method therefor
US8554918B1 (en) 2011-06-08 2013-10-08 Emc Corporation Data migration with load balancing and optimization
US10089148B1 (en) 2011-06-30 2018-10-02 EMC IP Holding Company LLC Method and apparatus for policy-based replication
US9465697B2 (en) * 2011-09-21 2016-10-11 Netapp, Inc. Provision of backup functionalities in cloud computing systems
WO2013093994A1 (ja) 2011-12-19 2013-06-27 富士通株式会社 ストレージシステム、データリバランシングプログラム及びデータリバランシング方法
US9740435B2 (en) 2012-02-27 2017-08-22 Fujifilm North America Corporation Methods for managing content stored in cloud-based storages
US8832234B1 (en) 2012-03-29 2014-09-09 Amazon Technologies, Inc. Distributed data storage controller
US9513823B2 (en) 2012-04-30 2016-12-06 Hewlett Packard Enterprise Development Lp Data migration
US9021203B2 (en) 2012-05-07 2015-04-28 International Business Machines Corporation Enhancing tiering storage performance
US9098525B1 (en) 2012-06-14 2015-08-04 Emc Corporation Concurrent access to data on shared storage through multiple access points
WO2014052333A1 (en) * 2012-09-28 2014-04-03 Emc Corporation System and method for full virtual machine backup using storage system functionality
US9460099B2 (en) 2012-11-13 2016-10-04 Amazon Technologies, Inc. Dynamic selection of storage tiers
US9953075B1 (en) 2012-12-27 2018-04-24 EMC IP Holding Company LLC Data classification system for hybrid clouds
US9451013B1 (en) 2013-01-02 2016-09-20 Amazon Technologies, Inc. Providing instance availability information
US20140281301A1 (en) 2013-03-15 2014-09-18 Silicon Graphics International Corp. Elastic hierarchical data storage backend
US9292226B2 (en) 2013-06-24 2016-03-22 Steven Andrew Moyer Adaptive data management using volume types
US9280678B2 (en) 2013-12-02 2016-03-08 Fortinet, Inc. Secure cloud storage distribution and aggregation
US10061628B2 (en) 2014-03-13 2018-08-28 Open Text Sa Ulc System and method for data access and replication in a distributed environment utilizing data derived from data access within the distributed environment
US10282100B2 (en) 2014-08-19 2019-05-07 Samsung Electronics Co., Ltd. Data management scheme in virtualized hyperscale environments
US9521089B2 (en) 2014-08-30 2016-12-13 International Business Machines Corporation Multi-layer QoS management in a distributed computing environment
US9977704B1 (en) 2014-09-26 2018-05-22 EMC IP Holding Company LLC Automated backup and replication of virtual machine data centers
US10078556B2 (en) 2015-08-31 2018-09-18 Paypal, Inc. Data replication between databases with heterogenious data platforms
US10659532B2 (en) 2015-09-26 2020-05-19 Intel Corporation Technologies for reducing latency variation of stored data object requests
US20170168729A1 (en) 2015-12-11 2017-06-15 Netapp, Inc. Methods and systems for managing resources of a networked storage environment
US10180912B1 (en) 2015-12-17 2019-01-15 Amazon Technologies, Inc. Techniques and systems for data segregation in redundancy coded data storage systems
US10282104B2 (en) 2016-06-01 2019-05-07 International Business Machines Corporation Dynamic optimization of raid read operations
US10678579B2 (en) 2017-03-17 2020-06-09 Vmware, Inc. Policy based cross-cloud migration

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001006368A1 (en) * 1999-07-15 2001-01-25 Commvault Systems, Inc. Modular backup and retrieval system
US20120331248A1 (en) * 2011-06-23 2012-12-27 Hitachi, Ltd. Storage management system and storage management method

Also Published As

Publication number Publication date
US10572349B2 (en) 2020-02-25
CN110413369A (zh) 2019-11-05
US20190332496A1 (en) 2019-10-31
CN110413369B (zh) 2023-11-07

Similar Documents

Publication Publication Date Title
EP3686739B1 (en) Method and system for enabling agentless backup and restore operations on a container orchestration platform
EP3584705A1 (en) System and method for backup in a virtualized environment
US10503428B2 (en) System and method for concurrent multipoint backup
US10698719B2 (en) System and method for virtual machine restoration
US10572350B1 (en) System and method for improved application consistency in a distributed environment
US10732886B2 (en) Application distributed optimal backup model
US20200150950A1 (en) Upgrade managers for differential upgrade of distributed computing systems
EP3731099A1 (en) System and method for accelerating application service restoration
US10860431B2 (en) System and method for fault tolerant backup generation in a virtual environment
US20200019469A1 (en) System and method for orchestrated backup in a virtualized environment
US11288133B2 (en) System and method for resilient data protection
US11775393B2 (en) Method and system for mapping data protection services to data cluster components
EP3647953B1 (en) System and method for data backup in mixed disk environment
CN110955558B (zh) 用于向高可用性应用程序提供备份服务的系统和方法
US20200151063A1 (en) System and method for data-less backups modification during checkpoint merging
US10776036B2 (en) System and method for efficient restore
US11409613B1 (en) System and method for raw disk backup and recovery
EP3591531B1 (en) Instant restore and instant access of hyper-v vms and applications running inside vms using data domain boostfs
US10776223B1 (en) System and method for accelerated point in time restoration
US20230214269A1 (en) Method and system for performing computational offloads for composed information handling systems
US20240248802A1 (en) System and method for data protection
US20200241965A1 (en) Fast storage disaster solution
US20210011814A1 (en) System and method for restorations of virtual machines in virtual systems

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20200625

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20211027