US9436560B2 - Increasing disaster resiliency by having a pod backed up to other peer pods in a site or beyond - Google Patents
- Publication number: US9436560B2 (application US14/243,405; US201414243405A)
- Authority
- US
- United States
- Prior art keywords
- sites
- site
- objective
- backup
- recovery
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related, expires
Classifications
- Classifications (all under G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06F—ELECTRIC DIGITAL DATA PROCESSING):
- G06F11/1446—Point-in-time backing up or restoration of persistent data (under G06F11/00—Error detection; Error correction; Monitoring › G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance › G06F11/14—Error detection or correction of the data by redundancy in operation › G06F11/1402—Saving, restoring, recovering or retrying), with the leaf classifications:
- G06F11/1448—Management of the data involved in backup or backup restore › G06F11/1451—Management of the data involved in backup or backup restore by selection of backup contents
- G06F11/1456—Hardware arrangements for backup
- G06F11/1458—Management of the backup or restore process
- G06F11/1458 › G06F11/1461—Backup scheduling policy
- G06F11/1458 › G06F11/1464—Management of the backup or restore process for networked environments
- G06F11/1458 › G06F11/1469—Backup restoration techniques
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring, with:
- G06F2201/805—Real-time
- G06F2201/815—Virtual
Definitions
- the present application relates generally to computers, and computer applications, and more particularly to increasing disaster resiliency of computer systems.
- the VMs may not be restorable from their backups, since those backups would be lost along with the PoD that stores them.
- backup of existing VMs running in a cloud is typically achieved by using the storage subsystem within the cloud PoD that hosts the VMs. If the cloud PoD (including its storage) suffers a disaster, then the VMs cannot be restored.
- a method of increasing disaster resiliency in computer systems may comprise executing an optimization algorithm that solves simultaneously for at least a first objective to increase a spread of a backup of virtual machines from a given site onto other sites in proportion to an amount of available space for backup at each site, a second objective to increase a number of backups at one or more of the other sites with low probability of system crash while reducing backups at one or more of the other sites with higher probability of system crash, and a third objective to minimize a violation of recovery time objectives of the virtual machines during recovery.
- the method may also comprise determining one or more backup sites and one or more recovery sites in an event the given site crashes based on a solution of the optimization algorithm.
- a system for increasing disaster resiliency in computer systems may comprise an optimization model that solves simultaneously for at least a first objective to increase a spread of a backup of virtual machines from a given site onto other sites in proportion to an amount of available space for backup at each site, a second objective to increase a number of backups at one or more of the other sites with low probability of system crash while reducing backups at one or more of the other sites with higher probability of system crash, and a third objective to minimize a violation of recovery time objectives of the virtual machines during recovery.
- a processor may be operable to execute the optimization model to determine one or more backup sites and one or more recovery sites in an event the given site crashes based on a solution of the optimization model.
- a computer readable storage medium storing a program of instructions executable by a machine to perform one or more methods described herein also may be provided.
- FIG. 1 is a diagram illustrating a storage manager of a PoD configured to use a storage manager (SM) of a peer PoD to keep a backup in one embodiment of the present disclosure.
- FIG. 2 is a diagram illustrating a disaster scenario in one embodiment of the present disclosure.
- FIG. 3 is a diagram illustrating spreading of backups on Peer PoDs in one embodiment of the present disclosure.
- FIG. 4 illustrates a scenario for modulating the spread to take care of RTOs of VMs when a disaster strikes a PoD in one embodiment of the present disclosure.
- FIG. 5 is a flow diagram illustrating a method of the present disclosure in one embodiment.
- FIG. 6 illustrates a schematic of an example computer or processing system that may implement a backup/recovery system in one embodiment of the present disclosure.
- a method and a system may be provided that back up the VMs of a given PoD onto other PoDs or clouds, such that, optimal distribution of backups of a PoD across multiple other PoDs may be achieved, e.g., subject to a set of constraints such as subject to storage capacity, compute capacity, regulatory, and hazard (probability of failure) constraints.
- the placement of backups also considers the Recovery Time Objectives (RTOs) of the individual VMs along with the network bandwidth between the PoD on which the backup lies and the Recovery PoD.
- an RTO is the time that it could take to get a system back up and running after a failure.
- a method and a system of the present disclosure consider optimally constructing the schedule of VMs backup on other PoDs to maximize resiliency from disasters affecting more than one PoD.
- a method and a system of the present disclosure consider constraints on the disaster proneness of the individual PoD as well as the network bandwidth between the two PoDs in deciding the schedule.
- a method and a system may provide for an approach to compute an optimal recovery strategy once a PoD faces disaster.
- An optimal backup strategy may be determined that is recovery sensitive, as well as providing an optimal schedule for recovery in the face of disaster of a PoD.
- a method and a system of the present disclosure may spread backups of VMs in a given PoD onto the storage infrastructure of other PoDs in such a way that the probability of reconstruction of lost VMs is maximized in the face of disasters.
- a method and a system of the present disclosure in one embodiment may spread backups of VMs running in a given PoD onto the storage infrastructure of other PoDs with consideration for: (1) minimizing the risk-exposure to any other PoD when one of the PoD faces a disaster; and (2) minimizing penalty to be paid for missing RTO during recovery after a disaster.
- a Point of Delivery is a hosting environment where virtual machines (VMs) belonging to applications run.
- a PoD can play the following multiple roles simultaneously: It provides infrastructure (e.g., compute, storage, and network) for running virtual machines as well as tools to manage the infrastructure;
- a PoD can also act as a recovery PoD, i.e., in the event a PoD goes down then another PoD can provide the infrastructure to run the applications which were running on the disaster-struck PoD.
- the recovery process may entail transfer of the backup or mirror of the failed VM to the recovery PoD, set-up of the vLAN, and other components to bring up and run the VM.
- FIG. 1 is a diagram illustrating a storage manager of a PoD configured to use a storage manager (SM) of a peer PoD to keep a backup in one embodiment of the present disclosure.
- FIG. 1 shows multiple PoDs (e.g., 102 , 104 , 106 , 108 , 110 ).
- Each PoD may include at least one storage manager (e.g., 116 , 118 , 120 , 122 , 124 ).
- Storage manager at PoD 1 ( 102 ) may back up its disk 1 ( 112 ) on PoD x ( 104 ) and its disk 2 ( 114 ) on PoD y ( 106 ).
- Disk 1 ( 112 ) may contain VM 1 components; Disk 2 ( 114 ) may contain VM 2 components.
- PoD x ( 104 ) serves as a backup PoD for VM 1 of PoD 1 ;
- PoD y ( 106 ) serves as a backup PoD for VM 2 of PoD 1 .
- an SM may store a backup locally as well as with a Peer PoD.
- PoD 1 ( 102 ) also may have backups of at least one of VM 1 and VM 2 .
- PoD 1 ( 102 ) may store a backup only with Peer PoDs and not locally.
- FIG. 2 is a diagram illustrating a disaster scenario in one embodiment of the present disclosure.
- PoD 1 ( 202 ) is the disaster-struck PoD (e.g., also shown in FIG. 1 at 102 ).
- a backup may be restored on a recovery PoD.
- a method of the present disclosure determines which of the remaining PoDs would be the best to serve as a restore or recovery PoD.
- the PoD on which to restore the backup of the disaster struck PoD ( 102 ) may be one of the peer PoDs (e.g., 204 , 206 , 208 , 210 , 212 ).
- the recovery PoD 210 may be chosen based on an optimization model described further below.
- FIG. 3 is a diagram illustrating spreading of backups on Peer PoDs in one embodiment of the present disclosure.
- PoD 302 may spread its backups over multiple Peer PoDs to mitigate further loss of data in the event the PoD serving as a backup PoD (e.g., 304 ) also goes down.
- PoD 302 may back up its disks across several PoDs (e.g., 304 , 306 , 308 ).
- the Peer PoD determined to be a recovery PoD ( 310 ) would receive the backed-up disks from the PoDs that are still running (e.g., 306 , 308 ).
- Spreading reduces risk exposure against multiple simultaneous disasters. A PoD is exposed to greater risk if all its backups are on another PoD and that PoD faces disaster. Spreading the backups is also useful because, in the face of disaster, the transfer of backups back onto the recovery PoD may take place over different networks.
- a processing capacity of a PoD for backup may depend on a storage manager (SM).
- a PoD site that has one storage manager is shown. It is noted, however, that the methodology of the present application also applies to a site with more than one storage manager.
- the capacity of the PoD for backup is equivalent to the capacity of the SM instance within the site, which, for example, can handle 750 VM clients.
- An average size of a VM to be backed up in this example may be 170 gigabytes (GB).
- 80% utilization and 5% change rate per VM per day implies 5.1 terabytes (TB) of generated data per day for the SM instance.
- For a 12-hour backup window, the data rate is 118 megabytes (MB) per second. This is the capacity of an SM instance, or the maximum backup flow handled by the SM instance.
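The sizing arithmetic above can be verified with a short script; the figures are the example's (750 clients, 170 GB average VM, 80% utilization, 5% daily change, 12-hour window), and decimal units (1 GB = 1000 MB) are assumed, which is what makes the numbers come out to 5.1 TB/day and 118 MB/s:

```python
# Capacity sizing for one SM instance, using the example figures above.
vm_clients = 750            # VM clients one SM instance can handle
avg_vm_size_gb = 170        # average size of a VM to be backed up, in GB
utilization = 0.80          # fraction of each VM disk actually used
daily_change_rate = 0.05    # fraction of data changed per VM per day
backup_window_h = 12        # backup window in hours

# Data generated per day across all clients of the SM instance.
generated_gb_per_day = vm_clients * avg_vm_size_gb * utilization * daily_change_rate
print(generated_gb_per_day / 1000, "TB/day")  # 5.1 TB/day

# Sustained rate needed to drain that data within the backup window
# (decimal units: 1 GB = 1000 MB).
rate_mb_per_s = generated_gb_per_day * 1000 / (backup_window_h * 3600)
print(round(rate_mb_per_s), "MB/s")  # ~118 MB/s
```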
- All sites that host VMs can also host a backup of a VM running on a remote site, and the sites are indexed by i ⁇ 1, 2, . . . , N ⁇ .
- C_i : processing capacity of PoD site i, i.e., the maximum rate of flow allowed for a given SM instance at a site.
- n_i : number of VM instances hosted/running at site i.
- s_ij : rate of “backup” flow for VM j hosted/running at site i.
- p_i : the probability that site i will suffer a disaster or fails.
- a_i : available space at site i for keeping backups from other PoD sites.
- ds_ij : storage size of VM j hosted/running at site i.
- x_kij : 1 if VM j hosted at site i is backed up at site k; otherwise 0.
- a processing capacity constraint is represented as:
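The constraint itself did not survive extraction; a plausible reconstruction from the notation above (my reading, not the patent's verbatim equation) is that the total backup flow directed at site k must not exceed its SM capacity:

```latex
\sum_{i \neq k} \sum_{j=1}^{n_i} x_{kij}\, s_{ij} \;\leq\; C_k \qquad \forall\, k \in \{1,\dots,N\}
```

An analogous storage-capacity constraint would bound the backed-up bytes at site k by its available space, \(\sum_{i \neq k} \sum_{j} x_{kij}\, ds_{ij} \leq a_k\).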
- data privacy constraints may be also considered into an optimization formulation.
- a data privacy constraint may dictate that a data disk in one PoD cannot be backed up onto another PoD because of government policies or customer policies restricting data to be stored outside a region.
- Each VM is backed up on at least one PoD different from where it is running, which may be represented as
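The formula is missing from the extracted text; given the indicator variables x_kij, a natural reconstruction (hedged, not the patent's verbatim equation) is:

```latex
\sum_{k \neq i} x_{kij} \;\geq\; 1 \qquad \forall\, i,\; \forall\, j \in \{1,\dots,n_i\}
```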
- an objective is to increase the spread of the backup of VMs from a given site i onto other sites in proportion to the amount of available space for backup at each PoD.
- the number in the ( . . . ) is defined as the imbalance of the number of backups kept at each site.
- the outermost summation considers each PoD i where VMs run and allocate one or more backups of each VM onto other PoDs keeping in view the available space for backup at each PoD.
- the imbalance definition as below could also suffice:
- the above objective function may be normalized by the square of (n_1 + n_2 + … + n_N).
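The objective's equation did not survive extraction; a sketch consistent with the description, offered as a reconstruction rather than the patent's exact formula, takes the parenthesized quantity to be the imbalance between the backups placed at site k and site k's space-proportional share:

```latex
\min \;\; \frac{1}{\bigl(\sum_i n_i\bigr)^2} \sum_{i=1}^{N} \sum_{k \neq i}
\left( \sum_{j=1}^{n_i} x_{kij} \;-\; n_i \cdot \frac{a_k}{\sum_{l \neq i} a_l} \right)^{\!2}
```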
- spread may depend on the probability of disaster (or failure) at a site.
- a hosting site i has probability of disaster or failure, denoted by p i .
- This probability may depend on several factors, e.g., which may include the characteristics of the area of a site and region of the site. For example, conditions such as floods, tornadoes, hurricanes, snow storms, pandemics, closeness of the airport associated with the area, and/or characteristics such as a region tendency for terrorist attacks, financial failures, train derailments with toxic materials, political situation, and other, may be considered for the probability of disaster occurring at the site.
- a method of the present disclosure in one embodiment may (weighted) add to the objective function introduced above the following:
- the function f(.) could simply be p i or such that as the argument increases the f-value increases as well (monotonicity).
- the above term forces the optimization problem to try to increase the number of backups at sites with low p_i while reducing backups at PoD sites with higher p_i .
- the above term may be normalized by n_1 + n_2 + … + n_N.
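The added term is missing from the extraction; a plausible form matching the description (a monotone function f of the failure probability, weighted by the number of backups placed at each site; a hedged reconstruction, not the patent's verbatim term):

```latex
\sum_{k=1}^{N} f(p_k) \sum_{i \neq k} \sum_{j=1}^{n_i} x_{kij}
```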
- a recovery site's bandwidth with the backup sites determines the speed at which a VM backup from the backup sites can be transferred to the recovery site.
- Different data transfer approaches may include a full disk transfer over the network; transferring only the delta, e.g., assuming that a base image already exists at the recovery site (all other approaches can be subsumed by this one); physically transferring data disks to the recovery site (also called: sneaker net); and physically transferring tapes to the recovery site.
- a choice of the above approach affects the spread of the backups.
- the virtual machines could be brought up in the PoD in which they are backed up and the disks subsequently transferred to the designated recovery PoD as well.
- a plurality of policies can be adopted for redistributing the lost backups.
- the lost backups are delegated to the recovery site itself; the lost backups are redistributed amongst the available sites.
- the optimization problem of the present disclosure in one embodiment may be rerun with all other variables fixed except for the backups to be redistributed.
- the data change rate of the backups may be as before.
- Recovery Time Objectives may be considered with respect to applications and not individual VMs.
- a matrix { a_cij } exists such that a_cij is 1 if VM j, hosted on site i, belongs to application c, and 0 otherwise. Assume that c ranges from 1 to M.
- a VM can belong to multiple applications.
- the following simplifying assumptions may also be made with respect to the transfer of a disk or delta over the network from a site to another site.
- When the backup data is being transferred from a backup PoD site to a recovery site, the backup data for different VMs may be transferred in sequence, and the available network bandwidth is wholly dedicated to one VM's data at a time rather than split across VMs. Transfer of data is work-conserving, i.e., during the entire transfer of the data of all VMs whose backups are hosted on a given PoD site, no time is wasted during the transfer.
- For example, let job 1 require 10 units of work and job 2 require 10 units, with a server processing at a rate of 2 units per second and both jobs having an RTO of 5 seconds. If processed in parallel, both jobs finish at 10 seconds and miss their RTO by 5 seconds. If job 1 is executed first, it meets its RTO, while job 2 still misses its RTO by 5 seconds.
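The two-job example can be checked numerically; the helper names are mine, and the parallel case assumes equal-sized jobs sharing the server rate evenly (processor sharing), which matches the example:

```python
def finish_times_sequential(sizes, rate):
    """Finish time of each job when jobs run one after another at the full rate."""
    t, finishes = 0.0, []
    for size in sizes:
        t += size / rate
        finishes.append(t)
    return finishes

def finish_times_parallel_equal(sizes, rate):
    """Finish times when the server rate is split evenly across jobs.
    Valid as written only for equal-sized jobs (all finish together)."""
    share = rate / len(sizes)
    return [size / share for size in sizes]

def rto_misses(finishes, rto):
    """Seconds by which each job misses the common RTO (0 if met)."""
    return [max(0.0, f - rto) for f in finishes]
```

With sizes [10, 10], rate 2, and RTO 5: sequential finishes are [5.0, 10.0] (misses [0, 5]); parallel finishes are [10.0, 10.0] (misses [5, 5]), reproducing the example.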
- FIG. 4 illustrates a scenario for modulating the spread of backups to take care of RTOs of VMs when a disaster strikes a PoD in one embodiment of the present disclosure.
- Backup PoD site k ( 402 ) has a backup of VM 404 that was hosted on the disaster struck PoD site i.
- the backup of VM 404 may be a delta with respect to a base image, wherein c ij represents the size of the delta.
- Recovery PoD r ( 406 ) may already have a base image 408 of the VM. The recovery time depends on at least the network bandwidth (b rkij ) 410 .
- let RTO_c denote the RTO of application c.
- RTO_c represents the maximum tolerable time for recovering the data and bringing the application back online.
- let c_ij denote the actual delta or difference between the base image and the backup that was taken for VM j at site i, before the disaster struck site i. It is assumed that the base images are populated at the recovery PoD so that only this delta of size c_ij need be transferred.
- time taken for a delta depends on its size and connection bandwidth, and is given by:
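The formula itself is missing from the extraction; from the definitions of c_ij and b_rkij it is presumably the delta size divided by the bandwidth between backup site k and recovery site r:

```latex
t_{rkij} \;=\; \frac{c_{ij}}{b_{rkij}}
```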
- Base images may be constructed and used, for example, as follows. For example, all the images are grouped into one or more groups. Each VM disk within a group is considered as a file on the operating system and divided into “chunks” of a given or possibly variable size. A base image then can be constructed by concatenating at the i-th position that chunk that occurs most frequently across all the images in the group at that position. The resulting image is then called the base image for that group. The above method can be executed on the PoD where the VMs are running. The base images can be distributed to PoDs which could act as a recovery PoD for the given PoD.
- a corresponding manifest (typically a very small file as compared to the size of the base image) may be constructed that describes the base image in terms of the hash values of the chunks in the base image.
- the manifest may be sent from the recovery PoD to all the backup PoDs which are hosting the backups to be transferred to the Recovery PoD.
- the backup PoDs use the manifest to determine which chunks are already in the base image and therefore need not be sent, and thus send only those chunks that are not present in the base image.
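The base-image and manifest mechanism above can be sketched in a few lines of Python; the chunk size, the use of SHA-256, and the position-wise chunk comparison are illustrative assumptions, and the function names are mine, not the patent's:

```python
import hashlib
from collections import Counter

CHUNK = 4  # chunk size in bytes; illustrative only (real systems use far larger chunks)

def chunks(blob: bytes):
    """Split a disk image (treated as one file) into fixed-size chunks."""
    return [blob[i:i + CHUNK] for i in range(0, len(blob), CHUNK)]

def build_base_image(images):
    """Concatenate, at each position, the chunk that occurs most frequently
    across all images in the group at that position."""
    n = max(len(chunks(img)) for img in images)
    base = b""
    for pos in range(n):
        at_pos = [chunks(img)[pos] for img in images if pos < len(chunks(img))]
        base += Counter(at_pos).most_common(1)[0][0]
    return base

def manifest(base: bytes):
    """Hash of each chunk of the base image -- the small file shipped
    from the recovery PoD to the backup PoDs."""
    return [hashlib.sha256(c).hexdigest() for c in chunks(base)]

def delta_chunks(backup: bytes, base_manifest):
    """(position, chunk) pairs of the backup that differ from the base image
    and therefore must be sent to the recovery PoD."""
    out = []
    for pos, c in enumerate(chunks(backup)):
        h = hashlib.sha256(c).hexdigest()
        if pos >= len(base_manifest) or base_manifest[pos] != h:
            out.append((pos, c))
    return out
```

For a group like [b"AAAABBBBCCCC", b"AAAAXXXXCCCC", b"AAAABBBBZZZZ"], the base image is b"AAAABBBBCCCC", and recovering the second image requires sending only the single differing chunk.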
- an optimization model of the present disclosure minimizes the violation of RTOs of VMs during restore after disaster.
- Notation: x_rnkij is 1 if VM j is hosted on site i, with its backup done on site k, and it is the n-th transfer to the recovery site r from site k after a disaster occurs for site i; otherwise it is 0.
- RTO_c^i denotes RTO_c when site i is disaster struck.
- the objective function of the present disclosure may be enhanced with the following term which sums up the expected penalty for the RTO violations for all the applications for a given site i facing disaster, and then finds the maximum of such sums across all sites,
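The term itself did not survive extraction; a plausible reconstruction from the description, with T_c^(i) standing for the completion time of application c's recovery when site i fails (as determined by the transfer-order variables x_rnkij and the per-delta transfer times), and the factor p_i reflecting "expected" penalty:

```latex
\max_{i} \; p_i \sum_{c=1}^{M} \Phi_c\!\left( \max\!\left( 0,\; T_c^{(i)} - \mathrm{RTO}_c \right) \right)
```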
- ⁇ c (.) is the normalized penalty function for RTO violation of c. Note, instead of “max”, it may be possible to take an “average” or “median” for the outermost “max” term.
- the above objective function (1) finds the best available PoD to act as a recovery PoD in the event a given PoD crashes; and (2) includes the cost of missing an RTO.
- VM j running on the fallen PoD i is transferred at most once from any of the backup PoDs where it may be backed up.
- a further constraint ensures that all VMs that belong to the disaster-struck site i are transferred to the recovery PoD r. Note that there are a total of n_i VMs on PoD i.
- FIG. 5 is a flow diagram illustrating a method of the present disclosure in one embodiment.
- an optimization algorithm may be constructed. As described above, the optimization algorithm may simultaneously solve for or integrate at least a first objective to increase a spread of a backup of virtual machines from a given site onto other sites in proportion to an amount of available space for backup at each site, a second objective to increase a number of backups at one or more of the other sites with low probability of system crash while reducing backups at one or more of the other sites with higher probability of system crash, and a third objective to minimize a violation of recovery time objectives of the virtual machines during recovery.
- the objective function is run on a processor, e.g., to determine one or more backup sites and one or more recovery sites in an event the given site crashes based on a solution of the optimization algorithm.
- the given site and the other sites comprise points of delivery that comprise hosting environments where the virtual machines belonging to one or more applications run.
- the optimization algorithm is solved subject to a processing capacity constraint associated with at least the other sites, storage capacity constraint associated with at least the other sites, and data privacy constraints associated with at least the virtual machines to be backed up.
- a schedule of backups for the virtual machines on one or more of the other sites may be constructed based on a solution of the optimization algorithm.
- the optimization problem may be solved using a host of techniques such as simulated annealing, branch and bound, etc.
- the output of the solution provides which VM's backup will be hosted on which site.
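As an illustration of the "host of techniques", here is a toy simulated-annealing solver for a simplified version of the placement problem: one backup per VM, hand-picked weights, and cost terms that mirror the spread, disaster-proneness, and capacity considerations above. It is a sketch, not the patent's model, and every name and weight in it is an assumption:

```python
import math
import random

def anneal_backup_placement(sizes, avail, p_fail, home, steps=20000, seed=0):
    """Choose one backup site per VM (never the VM's home site) by simulated
    annealing over a cost combining: (a) imbalance vs. space-proportional
    spread, (b) placement on failure-prone sites, (c) a soft capacity
    penalty.  Weights and cost shape are illustrative only."""
    rng = random.Random(seed)
    n_vms, sites = len(sizes), list(range(len(avail)))
    total_avail = float(sum(avail))

    def cost(assign):
        load = [0.0] * len(sites)
        count = [0] * len(sites)
        for vm, k in enumerate(assign):
            load[k] += sizes[vm]
            count[k] += 1
        c = 0.0
        for k in sites:
            target = n_vms * avail[k] / total_avail     # space-proportional share
            c += (count[k] - target) ** 2               # spread / imbalance term
            c += 10.0 * p_fail[k] * count[k]            # disaster-proneness term
            c += 100.0 * max(0.0, load[k] - avail[k])   # capacity penalty
        return c

    assign = [rng.choice([s for s in sites if s != home[vm]]) for vm in range(n_vms)]
    cur = cost(assign)
    best, best_cost = list(assign), cur
    for step in range(steps):
        temp = max(1e-3, 1.0 - step / steps)            # linear cooling schedule
        vm = rng.randrange(n_vms)
        old = assign[vm]
        assign[vm] = rng.choice([s for s in sites if s != home[vm]])
        new = cost(assign)
        # Accept improvements always; accept worsenings with Boltzmann probability.
        if new <= cur or rng.random() < math.exp((cur - new) / temp):
            cur = new
            if new < best_cost:
                best, best_cost = list(assign), new
        else:
            assign[vm] = old                            # revert the move
    return best, best_cost
```

The returned list says, per VM, which site hosts its backup, mirroring the statement above that the solution's output provides which VM's backup will be hosted on which site.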
- frequency of backup is decided based on the RPO (recovery point objective).
- the RPO may also decide the replication schedule and the amount of a PoD's backup processing capacity that is utilized by a VM. For instance, the closer the RPO is to 0, the faster the replication rate, and hence the more processing capacity of the PoD where the backup is situated is utilized.
- the rate of backup flow s ij is predetermined based on the RPO for VM j on Site i.
- the schedule to be constructed determines which recovery PoD is to be used and, thereafter, which backup of a VM j on site i is to be transferred to the recovery PoD, and in which order, given that different applications c have different RTO_c .
- the first task of finding out which should be the recovery PoD is through the solution of the following for each potential recovery PoD:
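The selection criterion did not survive extraction; a plausible reconstruction, with T_c^(i,r) denoting the recovery completion time of application c on candidate recovery PoD r when site i fails (hedged, not the patent's verbatim formula):

```latex
r^{*} \;=\; \arg\min_{r \neq i} \; \min_{x}\; \sum_{c=1}^{M} \Phi_c\!\left( \max\!\left( 0,\; T_c^{(i,r)} - \mathrm{RTO}_c \right) \right)
```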
- the above finds the PoD r* that minimizes the penalty to be paid in recovering the lost applications on Site i.
- the solution to the above problem also yields the instantiation of x_r*nkij for i and r* fixed, for n in { 1, …, n_i }, for k in { 1 … N } (but r not equal to i), and for j being the index over all the VMs on site i.
- a schedule of recovery for the virtual machines on one or more of the other sites may be constructed based on a solution of the optimization algorithm.
- a graphical tool may incorporate the above-described methodology for interacting with a user, e.g., presenting selected sites as backup and recovery sites according to the optimization performed, e.g., automatically by a computing processor.
- FIG. 6 illustrates a schematic of an example computer or processing system that may implement a backup/recovery system in one embodiment of the present disclosure.
- the computer system is only one example of a suitable processing system and is not intended to suggest any limitation as to the scope of use or functionality of embodiments of the methodology described herein.
- the processing system shown may be operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with the processing system shown in FIG. 6 may include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like.
- the computer system may be described in the general context of computer system executable instructions, such as program modules, being executed by a computer system.
- program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types.
- the computer system may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network.
- program modules may be located in both local and remote computer system storage media including memory storage devices.
- the components of computer system may include, but are not limited to, one or more processors or processing units 12 , a system memory 16 , and a bus 14 that couples various system components including system memory 16 to processor 12 .
- the processor 12 may include an optimization module 10 that performs the methods described herein.
- the module 10 may be programmed into the integrated circuits of the processor 12 , or loaded from memory 16 , storage device 18 , or network 24 or combinations thereof.
- Bus 14 may represent one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
- bus architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus.
- Computer system may include a variety of computer system readable media. Such media may be any available media that is accessible by computer system, and it may include both volatile and non-volatile media, removable and non-removable media.
- System memory 16 can include computer system readable media in the form of volatile memory, such as random access memory (RAM) and/or cache memory or others. Computer system may further include other removable/non-removable, volatile/non-volatile computer system storage media.
- storage system 18 can be provided for reading from and writing to a non-removable, non-volatile magnetic media (e.g., a “hard drive”).
- a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”).
- an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media.
- each can be connected to bus 14 by one or more data media interfaces.
- Computer system may also communicate with one or more external devices 26 such as a keyboard, a pointing device, a display 28 , etc.; one or more devices that enable a user to interact with computer system; and/or any devices (e.g., network card, modem, etc.) that enable computer system to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces 20 .
- computer system can communicate with one or more networks 24 such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter 22 .
- network adapter 22 communicates with the other components of computer system via bus 14 .
- It should be understood that although not shown, other hardware and/or software components could be used in conjunction with the computer system. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems.
- the present invention may be a system, a method, and/or a computer program product.
- the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
- the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
- the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- a non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
- a computer readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
- Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
- the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
- a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
- Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
- the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
- These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
- each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the block may occur out of the order noted in the figures.
- two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
wherein the first term is the sum of the rates of the backup flows into the SM of site k, which must be less than Ck, the capacity of site k.
wherein the first term is the sum of the disk sizes of the VMs being backed up at site k, which must be less than Ak, the storage capacity of site k available for keeping backups from other sites.
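The two capacity constraints above can be sketched as a simple feasibility check. The data layout (a map from each backup site k to the (rate, disk size) pairs of the VMs backed up onto it) is an illustrative assumption; only the bounds Ck and Ak come from the text.

```python
def backup_feasible(flows, C, A):
    """Check, for every backup site k, the two constraints described above:
    (1) the sum of backup-flow rates into the SM of site k is less than
        its processing capacity C[k], and
    (2) the sum of disk sizes of VMs backed up at site k is less than
        A[k], the storage reserved for backups from other sites.
    `flows` maps site k -> list of (rate, disk_size) tuples."""
    for k, vms in flows.items():
        total_rate = sum(rate for rate, _ in vms)
        total_disk = sum(disk for _, disk in vms)
        if total_rate >= C[k] or total_disk >= A[k]:
            return False
    return True
```

A placement that exhausts either the flow-rate budget or the reserved backup storage of any site is rejected.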
This setup allows a VM running on one PoD to be backed up onto potentially multiple peer PoDs, further reducing the risk exposure to disasters.
represent “imbalance” in number of hosted backups of site i on sites r and s.
where ψc(.) is the normalized penalty function for the RTO violation of c. Note that instead of “max”, an “average” or “median” may be taken for the outermost “max” term.
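The note above, that the outermost “max” could be replaced by an “average” or “median”, can be illustrated with a small aggregation helper; the function name and interface are hypothetical, not the patent's notation.

```python
from statistics import mean, median

def aggregate_rto_penalty(penalties, how="max"):
    """Combine the normalized per-application RTO-violation penalties
    psi_c. "max" penalizes the worst-affected application; "average"
    and "median" are the alternatives the text mentions."""
    if how == "max":
        return max(penalties)
    if how == "average":
        return mean(penalties)
    if how == "median":
        return median(penalties)
    raise ValueError(f"unknown aggregation: {how}")
```

Using "max" makes the objective minimize the worst-case RTO violation, while "average" trades worst-case protection for a better typical case.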
represents the time for complete transfer of the delta corresponding to VM j when it is the n-th transfer from site k to the recovery site r, given that site i where it was hosted is disaster struck.
represents the RTO violation for application c due to VM j hosted on disaster-struck site i and backed-up on site k and transferred to recovery site r.
represents the RTO violation for application c when site i is disaster-struck and some of its hosted VMs are backed-up on site k which have to be transferred to recovery site r.
represents the RTO violation for application c when site i is disaster-struck (this value could be negative) and when the recovery PoD is r.
If VM j's delta is the n-th transfer, then there is some other VM whose delta is transferred at the (n−1)-th position; at most one backup may occupy the n-th position.
Links the variables x_{kij} with x_{kij}^{rn}.
a≠b, a≠i, b≠i, p≠q. These constraints ensure that only one of the target recovery PoDs is chosen for a given fallen PoD. There are, for every i, a total of
VM j running on the fallen PoD i is transferred only at most once from any of the backup PoDs where it may be backed-up.
Processing capacity constraints for each pair (r, i), r≠i.
Storage capacity constraints for each pair (r, i), r≠i.
all VMs s and t that belong to the disaster-struck site i are transferred to the recovery PoD r. Note that there are a total of n_i VMs on PoD i.
subject to (s.t.) the above-specified constraints. w1, w2, and w3 represent the weights associated with each term.
subject to the constraints specified above.
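The overall formulation, minimizing a weighted sum of penalty terms subject to the constraints above, can be sketched as a toy brute-force search over candidate recovery choices. The `terms` and `feasible` callables here are placeholders for the patent's penalty terms and constraint set, not its actual integer program.

```python
def solve(candidates, terms, feasible, w=(1.0, 1.0, 1.0)):
    """Pick, from `candidates` (e.g., candidate recovery PoDs for a
    fallen PoD), the one minimizing w1*t1 + w2*t2 + w3*t3, where
    terms(c) returns the three objective terms for candidate c and
    feasible(c) encodes the capacity/assignment constraints."""
    best, best_val = None, float("inf")
    for c in candidates:
        if not feasible(c):
            continue  # candidate violates a constraint; skip it
        t1, t2, t3 = terms(c)
        val = w[0] * t1 + w[1] * t2 + w[2] * t3
        if val < best_val:
            best, best_val = c, val
    return best, best_val
```

In practice the patent's formulation is a constrained optimization over many coupled binary variables; an ILP solver, rather than enumeration, would be used, but the weighted-sum objective combines the terms the same way.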
Claims (18)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/243,405 US9436560B2 (en) | 2014-04-02 | 2014-04-02 | Increasing disaster resiliency by having a pod backed up to other peer pods in a site or beyond |
US15/236,542 US10229008B2 (en) | 2014-04-02 | 2016-08-15 | Increasing disaster resiliency by having a PoD backed up to other peer PoDs in a site or beyond |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/243,405 US9436560B2 (en) | 2014-04-02 | 2014-04-02 | Increasing disaster resiliency by having a pod backed up to other peer pods in a site or beyond |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/236,542 Continuation US10229008B2 (en) | 2014-04-02 | 2016-08-15 | Increasing disaster resiliency by having a PoD backed up to other peer PoDs in a site or beyond |
Publications (2)
Publication Number | Publication Date |
---|---|
US20150286539A1 (en) | 2015-10-08
US9436560B2 (en) | 2016-09-06
Family
ID=54209846
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/243,405 Expired - Fee Related US9436560B2 (en) | 2014-04-02 | 2014-04-02 | Increasing disaster resiliency by having a pod backed up to other peer pods in a site or beyond |
US15/236,542 Expired - Fee Related US10229008B2 (en) | 2014-04-02 | 2016-08-15 | Increasing disaster resiliency by having a PoD backed up to other peer PoDs in a site or beyond |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/236,542 Expired - Fee Related US10229008B2 (en) | 2014-04-02 | 2016-08-15 | Increasing disaster resiliency by having a PoD backed up to other peer PoDs in a site or beyond |
Country Status (1)
Country | Link |
---|---|
US (2) | US9436560B2 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107193687A (en) * | 2017-04-18 | 2017-09-22 | 北京潘达互娱科技有限公司 | Database backup method and controlling equipment |
CN107085939B (en) * | 2017-05-17 | 2019-12-03 | 同济大学 | A kind of highway VMS layout optimization method divided based on road network grade |
CN107329412B (en) * | 2017-06-29 | 2019-06-07 | 广州杰赛科技股份有限公司 | The method and device of target area cooperation detection |
CN109656742B (en) * | 2018-12-28 | 2022-05-10 | 咪咕文化科技有限公司 | Node exception handling method and device and storage medium |
US10977132B2 (en) | 2019-03-08 | 2021-04-13 | International Business Machines Corporation | Selective placement and adaptive backups for point-in-time database recovery |
CN112667153B (en) * | 2020-12-22 | 2024-08-02 | 军事科学院系统工程研究院网络信息研究所 | Multi-station disaster recovery backup method based on distributed raid slice |
CN113448762B (en) * | 2021-06-29 | 2022-12-27 | 东莞市小精灵教育软件有限公司 | Crash processing method and system, intelligent device and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050108565A1 (en) * | 2003-11-14 | 2005-05-19 | International Business Machines Corporation | System, apparatus, and method for automatic copy function selection |
US20060095696A1 (en) * | 2004-11-01 | 2006-05-04 | Hitachi, Ltd. | Quality of service for remote copy operations in storage systems |
US20080154979A1 (en) * | 2006-12-21 | 2008-06-26 | International Business Machines Corporation | Apparatus, system, and method for creating a backup schedule in a san environment based on a recovery plan |
US20090327601A1 (en) * | 2008-06-30 | 2009-12-31 | Shachar Fienblit | Asynchronous data mirroring with look-ahead synchronization record |
US7644249B2 (en) * | 2003-09-19 | 2010-01-05 | Hewlett-Packard Development Company, L.P. | Method of designing storage system |
US7885938B1 (en) * | 2008-02-27 | 2011-02-08 | Symantec Corporation | Techniques for granular recovery of data from local and remote storage |
US20130054536A1 (en) * | 2011-08-27 | 2013-02-28 | Accenture Global Services Limited | Backup of data across network of devices |
US20140006350A1 (en) * | 2012-06-27 | 2014-01-02 | International Business Machines Corporation | Method for selecting storage cloud for storage of entity files from plurality of storage clouds, and computer and computer program therefor |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8769049B2 (en) * | 2009-04-24 | 2014-07-01 | Microsoft Corporation | Intelligent tiers of backup data |
CA2794339C (en) * | 2010-03-26 | 2017-02-21 | Carbonite, Inc. | Transfer of user data between logical data sites |
WO2014002094A2 (en) * | 2012-06-25 | 2014-01-03 | Storone Ltd. | System and method for datacenters disaster recovery |
- 2014-04-02 US US14/243,405 patent/US9436560B2/en not_active Expired - Fee Related
- 2016-08-15 US US15/236,542 patent/US10229008B2/en not_active Expired - Fee Related
Non-Patent Citations (3)
Title |
---|
Dines, R., "Cloud-Based Disaster Recovery: Demystified", http://blogs.forrester.com/rachel-dines/12-03-22-cloud-based-disaster-recovery-demystified posted on Mar. 22, 2012, pp. 1-3. |
Gsoedl, J., "Blueprint for cloud-based disaster recovery", http://searchstorage.techtarget.com/magazineContent/Blueprint-for-cloud-based-disaster-recovery, first published May 2011, pp. 1-45. |
Wood, T., et al., "PipeCloud: Using Causality to Overcome Speed-of-Light Delays in Cloud-Based Disaster Recovery", SOCC'11, Oct. 27-28, 2011, Cascais, Portugal, pp. 1-13. |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150363276A1 (en) * | 2014-06-16 | 2015-12-17 | International Business Machines Corporation | Multi-site disaster recovery mechanism for distributed cloud orchestration software |
US9582379B2 (en) * | 2014-06-16 | 2017-02-28 | International Business Machines Corporation | Multi-site disaster recovery mechanism for distributed cloud orchestration software |
Also Published As
Publication number | Publication date |
---|---|
US20160350189A1 (en) | 2016-12-01 |
US20150286539A1 (en) | 2015-10-08 |
US10229008B2 (en) | 2019-03-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10229008B2 (en) | Increasing disaster resiliency by having a PoD backed up to other peer PoDs in a site or beyond | |
US11132264B2 (en) | Point-in-time copy restore | |
US10884884B2 (en) | Reversal of the direction of replication in a remote copy environment by tracking changes associated with a plurality of point in time copies | |
US10169173B2 (en) | Preserving management services with distributed metadata through the disaster recovery life cycle | |
US10474694B2 (en) | Zero-data loss recovery for active-active sites configurations | |
US20160170837A1 (en) | Use of replicated copies to improve database backup performance | |
US10831665B2 (en) | Preservation of modified cache data in local non-volatile storage following a failover | |
US9632724B1 (en) | Point-in-time copy with chain cloning | |
US10901863B2 (en) | Unified data layer backup system | |
US9760449B2 (en) | Restoring a point-in-time copy | |
US11829609B2 (en) | Data loss recovery in a secondary storage controller from a primary storage controller | |
US9760450B2 (en) | Restoring a clone point-in-time copy | |
US10976941B2 (en) | Validation of storage volumes that are in a peer to peer remote copy relationship | |
US20170102998A1 (en) | Data protection and recovery system | |
US20240126657A1 (en) | Opportunistic backups through time-limited airgap | |
DuBois | Best practices in business continuity and disaster recovery | |
US11853585B2 (en) | Performing a point-in-time snapshot copy operation within a data consistency application | |
Saleh | Cloud Computing Failures, Recovery Approaches and Management Tools | |
US20180275897A1 (en) | Preservation of a golden copy that stores consistent data during a recovery process in an asynchronous copy environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GUPTA, MANISH;HARPER, RICHARD E.;SIGNING DATES FROM 20140228 TO 20140331;REEL/FRAME:032584/0417 |
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
FP | Expired due to failure to pay maintenance fee |
Effective date: 20200906 |