US20120084445A1 - Automatic replication and migration of live virtual machines - Google Patents

Automatic replication and migration of live virtual machines

Info

Publication number
US20120084445A1
Authority
US
United States
Prior art keywords
virtual machine
primary
backend computing
computing device
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/959,091
Inventor
Scott L. Brock
Sumit Kumar Bose
Ronald Leaton Skeoch
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US12/959,091 priority Critical patent/US20120084445A1/en
Application filed by Individual filed Critical Individual
Assigned to DEUTSCH BANK NATIONAL TRUST COMPANY; GLOBAL TRANSACTION BANKING reassignment DEUTSCH BANK NATIONAL TRUST COMPANY; GLOBAL TRANSACTION BANKING SECURITY AGREEMENT Assignors: UNISYS CORPORATION
Assigned to GENERAL ELECTRIC CAPITAL CORPORATION, AS AGENT reassignment GENERAL ELECTRIC CAPITAL CORPORATION, AS AGENT SECURITY AGREEMENT Assignors: UNISYS CORPORATION
Priority to EP11831543.1A priority patent/EP2625605A4/en
Priority to PCT/US2011/054975 priority patent/WO2012048037A2/en
Priority to CA2813561A priority patent/CA2813561A1/en
Priority to AU2011312036A priority patent/AU2011312036B2/en
Publication of US20120084445A1 publication Critical patent/US20120084445A1/en
Assigned to UNISYS CORPORATION reassignment UNISYS CORPORATION RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: DEUTSCHE BANK TRUST COMPANY
Assigned to UNISYS CORPORATION reassignment UNISYS CORPORATION RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: DEUTSCHE BANK TRUST COMPANY AMERICAS, AS COLLATERAL TRUSTEE
Assigned to UNISYS CORPORATION reassignment UNISYS CORPORATION RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: WELLS FARGO BANK, NATIONAL ASSOCIATION (SUCCESSOR TO GENERAL ELECTRIC CAPITAL CORPORATION)
Abandoned legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources
    • G06F9/5077 Logical partitioning of resources; Management or configuration of virtualized resources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14 Error detection or correction of the data by redundancy in operation
    • G06F11/1479 Generic software techniques for error detection or fault masking
    • G06F11/1482 Generic software techniques for error detection or fault masking by means of middleware or OS functionality
    • G06F11/1484 Generic software techniques for error detection or fault masking by means of middleware or OS functionality involving virtual machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16 Error detection or correction of the data by redundancy in hardware
    • G06F11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2097 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements maintaining the standby controller/processing unit updated
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the instant disclosure relates generally to a system and method for automatically replicating and migrating live virtual machines across wide area networks.
  • a virtual machine is a software platform capable of replicating a computing device with full operating system (OS) and applications functions.
  • the VM is generally installed on a target machine that functions as the host by contributing physical resources like memory and processing capabilities.
  • a remote device uses client VM software to connect to the target machine and view the VM operating on it.
  • a virtual machine provides a remote computing device user with a complete software based computing platform separate from the remote computing device on which the software runs.
  • the level of separation between the VM software and the hardware on which it runs establishes the type of virtual machine with the primary types being a system virtual machine and an application virtual machine.
  • a system virtual machine type allows a remote user of the VM to access some of the physical hardware devices on which the VM executes.
  • the application VM functions as a stand-alone application platform over which other software applications are implemented.
  • the purpose of the application VM is to enable different operating systems with different file structures to function within an existing native operating system.
  • the virtual machine data, operations, and functions are assigned to a virtual machine image file in the native memory of a target machine.
  • Remote devices having client VM software installed within the device access the virtual machine image remotely.
  • the image file renders in the client VM software on the remote device as an OS with its overlying applications and data displayed for the user of the remote machine. Any changes made to the application, data, or OS are saved to the virtual machine image on the target machine.
  • the VM can be scheduled for execution at geographically disparate cloud locations. However, storing a virtual machine image across networks from one location to another is complicated by the size of the data and the number of users connected to the virtual machine image.
  • One conventional VM method enabled a shared repository of the virtual machine image to be accessible by both the current or primary target machine and a secondary target machine for backup. This required both the primary target machine and the secondary target machine to be on the same sub-net (or within the same local network) for effective results without significant lag. Further, it is difficult to identify remote sites to store replicas of the virtual machine image during a ‘live’ or in-use session. Network latency and the long-term and short-term costs of candidate remote sites are some of the issues associated with choosing remote sites for replicating virtual machine image data.
  • a primary remote site is automatically chosen for storing a primary VM image file, and one or more secondary remote sites are automatically chosen for storing secondary replicas of the primary VM image file.
  • the applicable changes instituted in the virtual machine image by a client computer are sent to update the replica virtual machine image at each of the remote sites.
  • a replica of the virtual machine image can be activated as the new primary replica, while designating the old primary replica as a secondary replica.
  • a primary VM file can be copied to a new site, where the new site does not have an updated replica available.
  • a computer-implemented method of automatically replicating and migrating live virtual machines comprising: comparing, in a primary backend computing device, a plurality of first virtual machine image components from a first virtual machine image and a plurality of second virtual machine image components from updates applied to the first virtual machine image, to identify new virtual machine image components; updating, in each of a plurality of secondary backend computing devices, a replica of the first virtual machine image with the new virtual machine image components; calculating, in the primary backend computing device, a plurality of operating parameter values for each of the plurality of secondary backend computing devices and the primary backend computing device; comparing, in the primary backend computing device, the operating parameter values for each of the plurality of secondary backend computing devices and the primary backend computing device, wherein an operating range within limits of the operating parameter values is defined for each operating parameter by a computer-coded business rule; selecting at least one secondary backend computing device from the plurality of backend computing devices, where the operating parameter values of the selected secondary backend computing device are
  • a computer-implemented system of automatically replicating and migrating live virtual machines comprising: comparing, in a primary backend computing device, a plurality of first virtual machine image components from a first virtual machine image and a plurality of second virtual machine image components from updates applied to the first virtual machine image, to identify new virtual machine image components; updating, in each of a plurality of secondary backend computing devices, a replica of the first virtual machine image with the new virtual machine image components; calculating, in the primary backend computing device, a plurality of operating parameter values for each of the plurality of secondary backend computing devices and the primary backend computing device; comparing, in the primary backend computing device, the operating parameter values for each of the plurality of secondary backend computing devices and the primary backend computing device, wherein an operating range within limits of the operating parameter values is defined for each operating parameter by a computer-coded business rule; selecting at least one secondary backend computing device from the plurality of backend computing devices, where the operating parameter values of the selected secondary backend computing device are
  • FIG. 1 illustrates a system for replicating VM images for multiple secondary VM storage cloud sites according to an exemplary embodiment.
  • FIG. 2 illustrates a system and method for scheduling and provisioning VM images across multiple secondary cloud sites based on the operating parameters of the secondary cloud sites according to an exemplary embodiment.
  • FIG. 3 illustrates a method of de-duplication and scheduling updates on replica VM images according to an exemplary embodiment.
  • FIG. 4 illustrates a system of checking for VM image data updates according to an exemplary embodiment.
  • FIG. 5 illustrates a system and method of live migration of a VM image according to an exemplary embodiment.
  • FIG. 6 illustrates a system and method of hiber-waking VM images according to an exemplary embodiment.
  • FIG. 7 illustrates the write flow of VM image data across various software modules or sub-modules before hiber-waking according to an exemplary embodiment.
  • FIG. 8 illustrates the write flow of VM image data across various software modules or sub-modules after hiber-waking according to an exemplary embodiment.
  • FIG. 9 a illustrates a method of updating of VM images from disaster management backup sites before hiber-waking according to an exemplary embodiment.
  • FIG. 9 b illustrates a method of updating of VM images from disaster management backup sites after hiber-waking according to an exemplary embodiment.
  • Virtual machines are widely used in cloud computing applications, where the actual physical machine running the VM software is located in a different location from the client. While virtual machine image files store much of the status information along with data related to the current application and OS in use, other image files can be used exclusively for data storage.
  • the term ‘image files’ is used interchangeably with the term ‘images’ in this disclosure, both describing a file comprising virtual machine data.
  • Database image files can be accessed by the VM image file for the database information pertaining to a “live” VM. As such, if the VM image has multiple data image files, then the database image file, the virtual machine image file, and any other related image files should be replicated.
  • Remote storage of live VMs across high latency, low bandwidth wide area networks (WAN) results in lags and hardware trailing issues that are visible to a client computer accessing the VM.
  • the process of replicating a live VM involves storing the entire state of the VM from a primary remote machine to multiple secondary storage machines. Multiple storage machines are updated with new data from the end-user of the live VM without any data loss or continuity disruptions to the end-user client computer.
  • the replication method and systems described herein are motivated by various factors, including the price of storage devices, redundancy of the virtual machine image data, and limited network bandwidth at the secondary VM locations.
  • operating parameters and their values are analyzed by a VM management software application comprising software-based sub-modules for managing replica placement.
  • Software modules and sub-modules are software code that runs independently or within a larger software program; the terms are used interchangeably in this disclosure.
  • Exemplary operating parameters include the average access costs; perceived latency for the VMs hosted at different cloud sites; available network bandwidth; heat generated; number of access users allowed; cost of resources; memory capacity (e.g., random access memory, read only memory, and read and write memory); and network congestion among the different sites.
  • the commonality of different VM images is compared, where the different VMs are stored in different physical machines within the same cloud site (intra-site), or different physical machines in different cloud sites (inter-site).
  • an existing VM image at one destination site is compared with the VM image to be replicated to find similarities, thereby enabling the VM management software to determine if the destination site is suited for the VM image replica.
  • Comparison methods can be automated using software to compare virtual machine image metadata of the existing VM against the VM to be replicated. Further, size variations, transmission speeds, and costs of maintaining and operating physical machines at a destination site are analyzed for the existing VM at the destination site, prior to selection and replica placement.
  • the replication of virtual machine image files and their associated image files (e.g., data image file, etc.) across multiple secondary VM sites is implemented by a VM management software application resident on a networked backend computing device.
  • the VM software application monitors a VM image file resident at a primary site, and being used by a VM end-user on an end-user client computing device.
  • when the end-user makes any updates within the VM environment on the client computing device, the changes generate new data in the VM image file at the primary location.
  • the VM management software application uses this new data to update the replica VM images at each secondary site.
  • the replication methods described herein incorporate exemplary processes for efficient replication and propagation of updates, including write-coalescing and data compression methods.
  • a de-duplication or removal of duplicate information among multiple replicas is implemented between the secondary replicas at each secondary site. This process reduces the cost of storing multiple replicas at different secondary VM sites.
  • the de-duplication method described herein implements either a variable-sized chunking technique, also called the content-based redundancy (CBR) elimination technique, which uses sliding-window hashes in the form of Rabin fingerprints, or a fixed-size chunking technique to find and eliminate redundant data. It is further appreciated that propagation of updates to a primary VM image file and de-duplication can be effected in a single software module.
  • when update propagation and de-duplication are combined, CBR based on Rabin fingerprints and/or fixed-size chunking is first implemented to de-duplicate the replicated secondary image files and create hash indices to verify updates to a primary VM image file, while write-coalescing and compression methods are used to propagate updates from the primary VM image file to the secondary replica image files.
  • update propagation can utilize the CBR and/or hash indices produced as a result of the de-duplication process to identify the need for propagation of primary VM image file updates prior to applying write-coalescing and compression methods for actual propagation.
  • de-duplication ratios derived from the CBR methods are used to determine the state of secondary replicas (e.g., amount of redundancy in image data).
  • the state of the secondary replicas enables the VM management software application to replicate non-redundant chunks of data by comparing hash indices of stale and updated VM image files during initial replica placement.
  • the non-redundant data chunks may represent the updates to the primary VM image file, where the updates are generated by an end-user, and where the updates are replicated using the write-coalescing and compression methods to conserve network bandwidth and enable faster transfer of updated portions of VM image data to remote sites.
  • De-duplication ratio is a measure of the size of the common content between two different VM image files.
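  • The following is a minimal, illustrative sketch (not the patented implementation) of how a chunk-level hash index and a de-duplication ratio can be computed. It assumes fixed-size chunking for simplicity, whereas the disclosure equally allows variable-size (Rabin fingerprint) chunking; the chunk size, file paths, and function names are hypothetical.

```python
# Illustrative sketch only: fixed-size chunking with a SHA-1 hash index and a
# de-duplication ratio, approximating the CBR idea described above. The patent
# also allows variable-size (Rabin fingerprint) chunking, which is not shown.
# The chunk size, file paths, and function names are hypothetical.
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks; a real system may use variable-size (Rabin) chunks


def chunk_hashes(path):
    """Return the ordered list of chunk hashes (the hash index) for an image file."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            hashes.append(hashlib.sha1(chunk).hexdigest())
    return hashes


def dedup_ratio(hashes_a, hashes_b):
    """Approximate share of common content between two images, as a percentage of the larger image."""
    common = len(set(hashes_a) & set(hashes_b)) * CHUNK_SIZE
    larger = max(len(hashes_a), len(hashes_b)) * CHUNK_SIZE
    return 100.0 * common / larger if larger else 0.0
```

  • In this sketch, the list produced by chunk_hashes() plays the role of the hash index against which later updates can be verified, and dedup_ratio() approximates the percentage-of-common-content measure described above.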
  • a combined or separate implementation of the update propagation and/or de-duplication method can be initiated at any time, and between any set time period, according to pre-defined schedule times.
  • the granularity of the scheduling can be in the order of several hours.
  • Multiple scheduling periods can be arranged for automated replication of the changed data blocks to the secondary sites.
  • a replication placement manager module analyzes data collected from previous replications in different cloud sites by a statistics collector module. The data is used to determine a new location for a new replica placement. Further, during scheduling of primary and secondary VM images and site locations, a single replication (replica placement) horizon period comprising multiple replication scheduling periods is defined.
  • An exemplary replication horizon period is about a month in granularity, while exemplary scheduling periods are hours, days, or even minutes. Each period can also comprise one primary VM image and multiple secondary VM images on which de-duplication and replication functions are implemented. Each replication horizon period signals the end of the replication schedule and the start of a de-duplication cycle to remove redundancy among the replicas used in the first cycle, before the next replication horizon begins. It is appreciated that propagation of the incremental changes from a primary replica to a secondary replica is used interchangeably with replica placement in the standard literature, and in this disclosure. Update propagation happens in intervals of a few seconds. A person skilled in the art will recognize from the context being described that replication is used for either placement of whole VM images at different sites (i.e. replica placement) or propagation of the incremental updates from the primary VM image to the secondary VM images (i.e. update propagation).
  • the methods and systems described here enable migration of live VM images across wide area networks.
  • applications encapsulated within the VMs should see no disruptions in their network connections, including transmission control protocol (TCP) connections, even across multiple networks (outside the sub-net of a local network).
  • the methods described herein allow VM management software to manage the migration of large virtual machine image files across networks without consuming excessive network bandwidth or network time.
  • the exemplary methods described herein implement hash comparisons to find data differences between replicas at remote cloud sites, after which incremental updates are applied to the replicas in remote cloud sites with only the updated data sections, and not the entire image file.
  • any data file that is being accessed by a VM image file should be seamlessly linked to the migrated VM image.
  • the data file might be a database file of all application data belonging to a client user of the VM image, where the data file is stored in a storage area network (SAN). It is further appreciated that data image files associated with VM image files are also stored as multiple replicas in multiple locations, each replica capable of being de-duplicated and replicated at scheduled times.
  • a VM replication and VM scheduling process is implemented by the VM management software application in a backend computing device.
  • the VM replication process identifies remote cloud sites and intra-cloud physical VMs to store replicas of the VM image from a primary VM site.
  • These secondary VM cloud sites are identified by reviewing the operating parameters and values of each cloud site and of similar VM images, i.e., whether an existing VM image is of similar size, capabilities, and functions as the intended primary VM image.
  • Operating parameters include the long-term average cost of computation of every VM at each of the candidate cloud sites during different time periods and the end-user latency requirements associated with the intended primary VM. Sites that meet the end-user latency requirements are classified as eligible for replica placement.
  • the methods and systems described herein include prioritization processes within a scheduling software module of the VM management software application.
  • one of the secondary sites is assigned as a high priority backup for disaster management.
  • This enables the prioritization processes to follow a priority schedule for initial replication of new data from VM image files to a selected secondary replica VM image at a selected disaster recovery secondary site, and subsequently to other secondary sites of lower priority. If the disaster management secondary site encounters a problem and fails, then the VM management software application can assign responsibility to a different secondary site by promoting an existing secondary site to the high-priority disaster management role, as sketched below.
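  • As a hedged illustration only (not the disclosed implementation), the priority schedule and disaster-recovery failover described above could be organized as follows; the site record layout, the priority field, and the send_updates() callback are assumptions made for this sketch.

```python
# Hedged illustration, not the disclosed implementation: propagate new data to the
# high-priority disaster-recovery (DR) secondary site first, then to the remaining
# lower-priority secondary sites; if the DR site fails, promote another secondary.
# The site record layout and the send_updates() callback are assumptions.
def propagate_updates(new_blocks, secondary_sites, send_updates):
    """secondary_sites: list of dicts like {'name': 'site-D', 'priority': 0, 'is_dr': True};
    a lower priority value means the site is replicated to earlier."""
    for site in sorted(secondary_sites, key=lambda s: s["priority"]):
        send_updates(site["name"], new_blocks)


def fail_over_dr(secondary_sites, failed_site_name):
    """Re-assign the DR role to the next-highest-priority healthy secondary site."""
    healthy = [s for s in secondary_sites if s["name"] != failed_site_name]
    healthy.sort(key=lambda s: s["priority"])
    for site in secondary_sites:
        site["is_dr"] = False
    if healthy:
        healthy[0]["is_dr"] = True
        healthy[0]["priority"] = 0
    return healthy
```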
  • migration of a live VM image is implemented by the VM software application using computer coded instructions to move a VM image to a remote cloud site that does not have a replica of the VM image.
  • Another exemplary implementation using the VM software application incorporates a hiber-waking method. In this method, a replica VM image at a destination cloud site is transitioned to an active state from a previously passive (hibernated) state, and becomes the primary VM image, while the previously active VM image at the source cloud site is re-designated as a replica VM image.
  • One requirement in the hiber-waking method is for the active VM image at a source site to be hibernated prior to designation as a replica VM image, while the replica VM image at a destination site is awakened from its hibernation state, and is re-designated as the active VM image.
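  • A minimal sketch of the hiber-waking swap follows; the hypervisor object and its hibernate(), awaken(), and relink_data_files() primitives are hypothetical and are not defined in the disclosure.

```python
# Minimal sketch of the hiber-waking swap described above. The hypervisor object and
# its hibernate()/awaken()/relink_data_files() primitives are hypothetical and are not
# defined in the disclosure.
def hiber_wake(source_site, dest_site, vm_image_id, hypervisor):
    """Swap roles: hibernate the active image at the source, awaken the replica at the destination."""
    # 1. Hibernate the currently active primary image at the source site.
    hypervisor.hibernate(source_site, vm_image_id)
    # 2. Awaken the up-to-date replica at the destination site; it becomes the primary.
    hypervisor.awaken(dest_site, vm_image_id)
    # 3. Re-link any associated data image files (e.g., on a SAN) to the new primary.
    hypervisor.relink_data_files(dest_site, vm_image_id)
    # 4. The hibernated source copy is now a secondary replica that receives future updates.
    return {"primary": dest_site, "secondary": source_site}
```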
  • An Enterprise Cloud Manager (ECM) software module can be deployed on a centralized backend computing device as the VM management software application to monitor the interactions of multiple VMs from the secondary sites.
  • a statistics collector (SC) sub-module of the ECM software module collects statistics and mines for data from site managers (SM) software modules located at each secondary site. The SC module then presents this data to a replica placement manager sub-module (RPM) within the ECM.
  • the SM is responsible for VM placement and scheduling at the local site.
  • the site manager also monitors optimal conditions defined to meet performance objectives which are pre-defined for each site.
  • the determination of a site objective can be set through the ECM based on such factors as the hardware, network, and software of the remote secondary site.
  • an exemplary objective set by one site manager for the optimization of VM storage at its site is minimization of the site's overall energy consumption.
  • Each VM also comprises an application manager (AM), which interacts with the SM at each secondary site.
  • the AM monitors the application behavior and ensures that the VMs are allocated sufficient computing resources so that the service level objectives (SLO) as defined by a service level agreement (SLA) are not violated.
  • An SLA can be defined between a company, which wishes to deploy cloud computing capabilities for its business, and the cloud computing service providers. Further, much of the monitoring by the AM software module can be implemented automatically. This is enabled by converting the SLA into computer-coded business rules that can be implemented by a software module to monitor current usage and trigger alarms if pre-defined limits are violated, as sketched below.
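  • A hedged sketch of such computer-coded business rules is shown below; the metric names, limits, and rule structure are illustrative assumptions, not terms taken from any particular SLA.

```python
# Hedged sketch: an SLA clause converted into computer-coded business rules that are
# checked against current usage and raise alarms when pre-defined limits are violated.
# The metric names, limits, and rule layout are illustrative assumptions.
SLA_RULES = [
    {"metric": "latency_ms", "limit": 150, "comparison": "max"},
    {"metric": "cost_per_hour", "limit": 2.50, "comparison": "max"},
    {"metric": "available_bandwidth_mbps", "limit": 100, "comparison": "min"},
]


def check_sla(current_metrics, rules=SLA_RULES):
    """Return the list of rules violated by the current measurements."""
    violations = []
    for rule in rules:
        value = current_metrics.get(rule["metric"])
        if value is None:
            continue
        if rule["comparison"] == "max" and value > rule["limit"]:
            violations.append(rule)
        elif rule["comparison"] == "min" and value < rule["limit"]:
            violations.append(rule)
    return violations


# Example: a latency of 180 ms exceeds the 150 ms limit and would trigger an alarm.
alarms = check_sla({"latency_ms": 180, "cost_per_hour": 1.90})
```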
  • the RPM sub-module determines the number of replicas and their site locations after considering the de-duplication ratios and long-term average cost provided by the statistics collector (SC) sub-module.
  • De-duplication is a process of removing redundant data from multiple replicas using mathematical methods. De-duplication can be implemented on multiple replicas at pre-designated time schedules to reduce the amount of data stored between secondary VM sites.
  • a de-duplication method comprises reduction of each replica using the mathematical model in equation (1).
  • F is the average file size
  • C_t indicates the client network bandwidth at time t
  • I signifies the initialization time
  • λ_t denotes the network arrival rate at time t
  • S_{jt} denotes the server network bandwidth of site j at time t
  • B indicates the buffer size
  • represents the dynamic server rate
  • Y represents the static server time.
  • Y_{kj} has a value of 1 if the chunk k is stored at site j; otherwise Y_{kj} is 0.
  • Z_{ijt} is 1 if the replica of the VM image i at site j is the primary copy at time t; otherwise Z_{ijt} is 0.
  • Equation (2) is further subject to the conditions of equation (3), equation (4), equation (5), and equation (6) below.
  • N_i^{min} is the minimum number of replicas of VM image i
  • N_i^{max} is the maximum number of replicas of VM image i
  • l_i^{max} is the maximum acceptable latency for VM image i
  • X_{ij} is 0.
  • solving equation (1) is computationally expensive even for a moderate cardinality of the set K.
  • a greedy heuristic approach is used to resolve equation (1) for determining the sites for replica placement. Assuming a set D_{ii′} as the result of de-duplicating a pair of VM images i and i′, a high value of d_{ii′}, where d_{ii′} ∈ D_{ii′}, indicates that the VM images i and i′ share a significant proportion of their content. Further, d_{ii′} is expressed as a percentage, and is calculated as a ratio of the size of the common content to the total size of the content of i or i′, whichever is maximum.
  • the objective of this calculation is to create an algorithm to detect sites with complementary cost structures (C_{jt}).
  • the cost of maintaining only one replica at site j′ is equivalent to the cost of operating VM image i during time t at site j′, as against the cost of maintaining two replicas at sites j and j′.
  • the cost of maintaining two replicas includes the additional storage requirement as a consequence of having a replica at site j.
  • an exemplary algorithm can calculate latency issues and the profitability of reserving multiple replicas at different sites or, instead, maintaining fewer replicas at the expense of higher operating costs.
  • This algorithm is software coded into the RPM sub-module and is implemented at various times using data collected by the SC sub-module.
  • the scheduler sub-module interacts with the RPM to trigger the RPM into ensuring that the current costs and latencies are within acceptable limits as defined in an SLA, or following the profitability objectives of the cloud service provider.
  • the SLA business rules can be computer-coded into data comparison algorithms to ensure that the SLA or profitability requirements are maintained at all times.
  • This comparison can be implemented in the enterprise cloud manager (ECM), with full accessibility to the cloud service provider (target VM machine administrators) and limited accessibility to the cloud client (client VM machines).
  • An exemplary algorithm for calculating latency and costs for the RPM utilizes two phases—a distribution phase, and a consolidation phase.
  • In the distribution phase, a set of sites (J_1 ⊆ J) is identified that adheres to the latency requirements for VM i_1, and another set of sites (J_2 ⊆ J) is identified that adheres to the latency requirements of VM i_2.
  • the members that are common to the sets J_1 and J_2 fulfill the latency requirements for both virtual machines i_1 and i_2. If there are common members (if J_1 ∩ J_2 is not null), then the replicas of the VMs i_1 and i_2 are placed at sites j ∈ J_1 ∩ J_2, and the algorithm proceeds to the next iteration.
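  • A minimal sketch of the distribution phase, under the assumption that per-site latency tables and maximum acceptable latencies are available as plain dictionaries (hypothetical inputs):

```python
# Minimal sketch of the distribution phase: keep only the sites whose perceived
# latency meets each VM's maximum acceptable latency, then prefer sites common to
# both VMs. The latency tables and limits are hypothetical inputs.
def feasible_sites(latency_by_site, max_latency):
    """J_i: the set of sites that satisfy the latency requirement of one VM."""
    return {site for site, latency in latency_by_site.items() if latency <= max_latency}


def distribute(latency_vm1, max_l1, latency_vm2, max_l2):
    j1 = feasible_sites(latency_vm1, max_l1)
    j2 = feasible_sites(latency_vm2, max_l2)
    common = j1 & j2  # J_1 ∩ J_2
    # If the intersection is non-empty, replicas of both VMs can share those sites;
    # otherwise each VM falls back to its own feasible set.
    return {"shared": common, "vm1": j1, "vm2": j2}
```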
  • In the consolidation phase, the distribution phase results are consolidated to reduce the number of replicas generated.
  • the savings in storage space as a consequence of de-duplicating the set I_j are calculated.
  • the contribution made by each replica is calculated as a ratio of the space savings (Sav_j) generated when the replica is part of the set I_j, as disclosed in equation (7), and the space savings (Sav_{r_ij}) calculated when the replica is left out of the set I_j, as illustrated in equation (8).
  • Sav_j = Σ_{i ∈ I_j} Σ_{k ∈ K_i} size_k − Σ_{k ∈ ∪_{i ∈ I_j} K_i} size_k (7)
  • Sav_{r_ij} = Σ_{i′ ∈ I_j, i′ ≠ i} Σ_{k ∈ K_{i′}} size_k − Σ_{k ∈ ∪_{i′ ∈ I_j, i′ ≠ i} K_{i′}} size_k (8)
  • the exemplary algorithm detects the sites where cost structures (C_{jt}) vary in a similar manner to latency structures. For the VM images with multiple replicas at sites with cost structures enforced, the algorithm calculates whether it is profitable to maintain multiple replicas at these sites. In one example, this is implemented by monitoring any decrease in storage cost if the replica of the VM image i is deleted from the site. An absence of a cost benefit will leave the image on the site, but the image is deleted if there is a cost benefit. In an exemplary method of implementing cost benefit measures, the ratio of the marginal decrease in cost due to de-duplication when a VM image i is retained at a site to the marginal decrease in cost due to de-duplication when the VM image is deleted from the site is measured. The VM image with the lowest ratio is considered for deletion, subject to the fulfillment of the other constraints disclosed herein, such as the cost of maintaining the VM image at each secondary site.
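  • The consolidation-phase bookkeeping can be illustrated with the following hedged sketch, which uses total chunk sizes as a rough proxy for the marginal cost decrease discussed above; the chunk-map layout and helper names are assumptions, and the sketch follows the reconstruction of equations (7) and (8) rather than the patent's exact formulation.

```python
# Hedged sketch of the consolidation-phase bookkeeping. Space savings are computed as
# the total chunk sizes summed per image minus the size of the de-duplicated (union)
# chunk set, following the reconstruction of equations (7) and (8) above, and are used
# here as a rough proxy for the marginal cost decrease. The chunk-map layout and
# helper names are assumptions.
def space_savings(chunk_maps):
    """chunk_maps: dict of image_id -> {chunk_id: size} for the images in I_j at one site."""
    total = sum(sum(chunks.values()) for chunks in chunk_maps.values())
    union = {}
    for chunks in chunk_maps.values():
        union.update(chunks)
    return total - sum(union.values())  # Sav_j


def retention_ratio(chunk_maps, image_id):
    """Ratio of savings with the image retained to savings with it left out;
    the image with the lowest ratio is the deletion candidate."""
    sav_with = space_savings(chunk_maps)
    without = {i: c for i, c in chunk_maps.items() if i != image_id}
    sav_without = space_savings(without)
    return sav_with / sav_without if sav_without else float("inf")
```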
  • FIG. 1 illustrates an exemplary embodiment of a method and system for replication of VM images across wide area networks (WAN).
  • An enterprise cloud manager (ECM) 105 software module functions as a VM management software application on a backend computing device to monitor and manage the primary and replica images on remote cloud sites.
  • the ECM can be accessed via a browser or a stand-alone web-enabled software application. While complete control of the sub-module elements of the ECM is extended to the cloud computing service provider hosting the entire method and system 100 , partial control is designated to client computing devices.
  • a parent software application is implemented to control the ECM, where the parent application implements business rules defined in a service level agreement (SLA) between the cloud computing service provider and the client computing devices.
  • the ECM 105 comprises a scheduler 110 software sub-module, a replica placement manager 115 sub-module and a statistics collector and data miner 125 sub-module. Each of these sub-modules is connected to a database server 120 , where the database server can be a remote server with networking capability.
  • Each cloud site location 135 is a geographically disparate location with multiple backend computing devices 145 , where each device is managed by an application manager 140 .
  • the remote sites 135 are managed by site managers 130 , which are connected through a data network to a central ECM software at the location of the cloud computing service provider.
  • the site manager (SM) 130 is a monitoring and enforcement tool that is responsible for VM placement and implementing scheduling decisions sent from the ECM.
  • the SM 130 monitors operating parameters and their values, such as the network bandwidth, CPU (processing) availability, memory capacity, and power usage, among other computing metrics, and transfers these metrics to the statistics collector (SC) 125 .
  • the SM 130 also provides the SC 125 with site specific inputs, for example, the per unit storage costs, and the per unit cost of computation at different time intervals.
  • the SM 130 also incorporates a de-duplication module for identifying duplicate data blocks for the VM images stored in a centralized shared repository within the cloud site.
  • the replication function within the SM module implements write-coalescing and compression methods over the hash indices maintained by the de-duplication module to transmit non-redundant (new de-duplicated) data of the primary VM image to the secondary VM image replica files.
  • This non-redundant data can then be transmitted to another secondary replica site B 135 chosen earlier by the RPM 115 .
  • the hash index of de-duplication information is presented to the RPM 115 via the SC 125 by the SM module; the RPM determines a replica site, while the hiber-waking and replica provisioning manager 150 via the replication function 720 performs the propagation of the non-redundant data updates to the secondary replicas using compression and write coalescing methods of the replication function (or sub-module).
  • a storage area network (SAN) is an example of a centralized shared repository.
  • the meta-data associated with the data blocks, for example, a hash value of the data contents and the number of VM images, is also communicated to the SC module. Additionally, the percentage of similarity between VM images is calculated from the data blocks within the SM. This statistic is also transferred to the SC, where all the data is collated over several scheduling cycles into long-term averages to calculate the operations costs and access costs.
  • the RPM (replica placement manager) 115 periodically communicates with the SC, and uses the statistics collated to resolve any replica placement issues.
  • a virtual machine image and associated data image files of a VM are created for a client and stored in a primary cloud site, in a primary VM device.
  • the access to these files is determined in an SLA and further information on the users is maintained in a user directory.
  • the SLA also defines the extent of support and backup provided to the client.
  • the number of secondary cloud sites and secondary VM devices, as well as the locations and costs, are determined, and the VM image and associated image files are replicated to these secondary VM devices.
  • the RPM 115 and scheduler 110 communicate with other modules within the ECM to transmit solutions to any issues developed when the SC data is reviewed.
  • a hiber-waking, migration and replica provisioning manager module 150 analyzes the solution from the scheduler 110 , and along with input from the RPM 115 , implements a VM image at a different site, by either hiber-waking 155 or live migration depending on the state of the current live VM image.
  • the primary VM image is replicated at a secondary site, where the secondary site did not have a copy of the primary VM image file to begin with.
  • an up-to-date secondary VM image is activated as primary (awakened), while the primary VM image is hibernated (deactivated or re-designated) as secondary VM image.
  • a solution of live migration or hiber-waking is provided if there is a determination from the SC that the current cloud site or the physical VM hosting device is deemed to have issues, for example, high costs or latencies that were previously non-existent.
  • the information on the location of replicas is maintained in a centralized database, e.g., database 120 , and is available to the hiber-waking, migration and replica provisioning manager module 150 , and the ECM 105 .
  • Sub-modules 150 and 105 make the list of replicas and information regarding location of the replicas available for review by a system administrator for reference or manual control of the replicas.
  • the scheduler 110 can be either time-based or event-based or both.
  • In the case that the scheduler is event-based, the scheduler module generates VM scheduling decisions based on event notifications from the SM 130 , by way of the scheduler 110 . As an example, an SM may indicate that operations costs are increasing, and the SC 125 provides this information to the scheduler 110 . The scheduler 110 , in turn, notifies the hiber-waking manager 150 that a VM image can be moved or activated at a different location and removed or deactivated at the current location.
  • the hiber-waking, migration and replica provisioning manager module 150 performs a hiber-waking or a live migration process to move a primary VM site to a secondary VM site on the same or a different cloud site.
  • the live migration implementation involves copying the current VM image to a secondary VM site, where the secondary VM site does not have an existing and up-to-date replica of the current VM image.
  • the current VM image at the secondary VM site is then activated as the primary VM image.
  • the hiber-waking implementation activates (or wakes) a hibernated replica at the secondary (or destination) VM site, while de-activating (or hibernating) the previously active primary VM image at the primary (or source) VM site.
  • the type of scheduling where the RPM acts on an event is referred to herein as reactive scheduling.
  • the provisioning of VM images across VM sites can be implemented within a cloud site, from one physical VM computing device to another. Such intra-cloud creation and deletion of VM images is implemented if the computing capability of one VM computing device reaches a pre-determined threshold.
  • the newly designated replica VM image (previously the primary VM image) will be in hibernation and will be updated with new data from the primary VM image.
  • the replica VM image does not perform any live services. As such, the operational costs to retain the replica at this previously live cloud site are at a minimum.
  • the scheduler 110 can also implement scheduling based on time-sensitive strategies, where the module 110 proactively seeks and selects cloud sites and VM devices within cloud sites for replica VM image placement.
  • the RPM can be invoked with a granularity period in the order of months.
  • FIG. 2 illustrates the provisioning of VM images on different cloud sites at different time intervals depending on the operating parameters over a time period.
  • Sites 1 , 2 , 3 and 4 ( 205 , 210 , 215 and 220 ) are remote cloud sites in different locations.
  • a primary VM image, VM- 1 can be stored in site 1 205 .
  • the replica VM images for VM- 1 are stored in site 3 215 and site 4 220 .
  • the sites are chosen by their availability in terms of the operating costs per the schedule. Further, the latency in network access to the different sites may indicate that a primary VM image would be better served from a different site.
  • the operating costs at time t 1 and t 2 are low for site 1 , and therefore, site 1 can be implemented ahead of the other sites.
  • the scheduler indicates to the hiber-waking, migration and replica provisioning manager module 150 to select a new site, deactivate or delete the old replica VM image, and transfer control of the primary VM image to the new or activated replica VM image.
  • the percentage of similarity of the VM images is used to update the replica VM image without having to copy the entire VM image over.
  • Table 1 illustrates the percentage of similarity between a pair of VM images from FIG. 2 .
  • the percentage of similarity is at 70 for VM- 2 , which implies that the de-duplication will remove the duplicate data blocks from the primary VM image on site- 1 205 , while the replication module updates sites 2 and 3 ( 215 and 220 ) with the non-redundant data blocks.
  • the algorithms discussed above to check for redundancy and to de-duplicate data blocks will be implemented at this stage.
  • Table 2 lists the perceived end-user latencies when the VMs operate from different sites as illustrated in FIG. 2 .
  • This information table illustrates one of the operating parameters (latency) and its associated values, which are used to choose a secondary VM cloud site.
  • the operating parameter values of the third VM image data are utilized to find a site for the intended VM image data replica.
  • the similarities of the intended VM image data and the third unrelated VM image data can extend to comparison of the third VM image data latencies, size of the VM image data, the network bandwidth, power consumption, number of users allowed, among other parameters.
  • the latency rules are followed according to the combination set by Table 2, and illustrated in FIG. 2 , where the following combinations are never implemented because of high latency: VM 4 at Site 1 , VM 1 at Site 2 , VM 3 at Sites 3 and 4 .
  • VM 1 at Site 1 , Site 3 and Site 4 are eligible combinations having reasonable latency values approved by a client in, for instance an SLA.
  • eligible sites for VM 2 , VM 3 and VM 4 can be determined.
  • the virtual machines VM 1 , VM 2 and VM 3 can have a replica each at site S 1 .
  • VM 3 will be operational during all the scheduling periods, t 1 to t 4 , as illustrated in FIG. 2 at site 2 210 .
  • VM 3 can be implemented at S 1 ; the new instance of VM 3 at S 1 205 is scheduled for execution only during time-slot t 4 , as illustrated in element 230 of site 1 205 . As a result, the instance of VM 3 at S 2 is scheduled for execution during periods, t 1 to t 3 .
  • If the cost of additional storage due to VM 3 at S 1 is more than the operating cost of VM 3 at S 2 during scheduling period t 4 , it suffices to have only one replica of VM 3 (at S 2 ).
  • VMs—VM 2 , VM 3 and VM 4 are candidates for S 2 .
  • The scheduler 110 in FIG. 1 now draws a schedule for executing the VMs (this involves choosing one of the replicas of a VM as the primary copy) at the four sites in a manner that either balances the load equitably across the different cloud sites or optimizes a cost function.
  • When the scheduler determines that load balancing across clouds is a priority (e.g., a step-wise cost function as shown in FIG. 2 ), the scheduler will schedule execution of VM 3 at S 2 and of VM 4 at S 3 during time slots t 2 , t 3 , t 4 and schedule execution of VM 4 at S 4 during time-slot t 1 . Similarly, the scheduler schedules execution of VM 2 at S 4 and VM 4 at S 1 during time-slot t 4 . However, if the objective is to minimize the number of inter-cloud migrations (e.g., due to reasons related to performance), then the scheduler schedules execution of VM 1 at S 4 during time-slot t 4 .
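  • An illustrative simplification of this scheduling step (not the patented scheduler): for each time slot, a primary replica is chosen either from the least-loaded eligible site or, when migrations are to be minimized, from the site used in the previous slot. The input structures are hypothetical.

```python
# Illustrative simplification of the scheduling step, not the patented scheduler:
# for each time slot, choose which replica acts as the primary copy, either from the
# least-loaded eligible site or, when migrations are to be minimized, from the site
# used in the previous slot. The input structures are hypothetical.
def schedule_primaries(eligible_sites, time_slots, minimize_migrations=False):
    """eligible_sites: dict of vm -> set of sites holding an eligible replica."""
    schedule = {}   # (vm, slot) -> chosen primary site
    load = {}       # (site, slot) -> number of primaries scheduled there
    last_site = {}  # vm -> site chosen in the previous slot
    for slot in time_slots:
        for vm, sites in eligible_sites.items():
            if minimize_migrations and last_site.get(vm) in sites:
                choice = last_site[vm]  # stay put to avoid an inter-cloud migration
            else:
                choice = min(sites, key=lambda s: load.get((s, slot), 0))  # least-loaded site
            schedule[(vm, slot)] = choice
            load[(choice, slot)] = load.get((choice, slot), 0) + 1
            last_site[vm] = choice
    return schedule
```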
  • FIG. 3 illustrates the scheduling process implemented according to an exemplary embodiment.
  • the scheduling period 355 indicates the periods when data is collected by the SC 320 .
  • the replication horizon 360 for replica placement occurs every few scheduling cycles and indicates when the replica is updated, and when the de-duplication of the replicas is initiated.
  • the replication horizon 360 is also called the replica placement horizon.
  • the scheduling periods are encapsulated by a replication horizon over pre-determined time periods. If the replication and de-duplication processes are combined into a single SM module for the purposes of incremental update propagation, then the replication (used here to describe incremental update propagation) is initiated based on the schedule by analyzing the hash index of each SM at the primary site and the replica sites.
  • the SM module is then capable of de-duplicating and propagating the new data chunks to the various secondary replicas in the secondary sites.
  • the SM module only tracks the hash index via the de-duplication module 305 , while the de-duplicated data is sent to the replication function 720 of FIG. 7 for propagation to the secondary replicas.
  • the replication module 315 can be limited to decisions on the placement of the replicas, and is, therefore, the same as the replica placement manager 115 in FIG. 1 , but is different from the replication function 720 in FIG. 7 , which serves to propagate the incremental updates from the primary replica to the secondary replicas using different write coalescing and compression techniques.
  • a scheduling module 325 logs the update to the old replica and implements a new schedule 340 for selection of a new primary VM image.
  • the timer or event module 330 might, alternatively, trigger a schedule event to be implemented via the scheduling module 325 .
  • Scheduling periods occur over intervals in the order of hours, while replication (for placement of VM image replicas) horizons occur in the order of days or even months. Further, propagation of the incremental updates from one primary replica to a secondary replica happens in intervals of a few seconds. A person skilled in the art will be able to recognize from the context being described that ‘replication’ is used to either describe placement of whole VM images at different sites or propagation of the incremental updates from the primary VM image to the secondary VM images.
  • an event driven or time driven replication is initiated by the replication module 315 for replica placement.
  • the statistics collected in the SC module 320 during different scheduling periods (or previous replication horizons) are used to determine initial replica placement for the next replication horizon 360 at a subsequent time 365 in a pre-determined schedule.
  • the combination of replication and scheduling in conjunction with content-based hashing for duplicate identification and storage mirroring is used to minimize the network latencies while placing an initial VM image replica in a remote VM cloud site.
  • the granularity in the case of initial replica placement is of the order of weeks or months. However, the granularity for update propagation will be of the order of minutes, seconds, or even sub-seconds.
  • FIG. 4 illustrates a system and method according to an exemplary embodiment of updating VM images efficiently in a WAN by implementing a hash comparison between active index 410 and stale index 415 at the primary VM backend computing device.
  • the primary image copy 460 is broken down into constituent data blocks, which are stored as hash objects for comparison.
  • Hash comparison using Rabin fingerprints is implemented between the active and stale indices to identify any update to the image data files at the primary cloud site 405 .
  • the indices are compared and asynchronous updates are applied to chosen replicas at different secondary cloud sites 440 , 430 and 420 using only the new data blocks.
  • each secondary cloud site is updated to indicate when the replica VM image was updated and with index data on the new data block.
  • one of the secondary cloud sites can be used as a permanent backup site from which the primary VM image cannot be activated, but can be used for retrieval of the primary VM image.
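  • A hedged sketch of the active-versus-stale index comparison and asynchronous propagation of only the new blocks follows; the index layout and the read_block()/send_block() helpers are assumptions made for illustration.

```python
# Hedged sketch of the active-versus-stale index comparison: compare the hash index of
# the current (active) primary image against the last-propagated (stale) index and ship
# only the new or changed blocks to each secondary site. The index layout and the
# read_block()/send_block() helpers are assumptions made for illustration.
def find_new_blocks(active_index, stale_index):
    """Indices map block offset -> content hash; return offsets whose hash is new or changed."""
    return [offset for offset, digest in active_index.items()
            if stale_index.get(offset) != digest]


def propagate(active_index, stale_index, read_block, secondary_sites, send_block):
    changed = find_new_blocks(active_index, stale_index)
    for offset in changed:
        data = read_block(offset)
        for site in secondary_sites:
            send_block(site, offset, data)  # asynchronous in practice
    # The stale index becomes the new baseline for the next comparison cycle.
    stale_index.update({offset: active_index[offset] for offset in changed})
    return changed
```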
  • FIG. 5 illustrates the live migration of VM images from site-A 505 , to site-B 510 after a shutdown has been effected in VM image 1 525 .
  • the other VMs 515 in cloud site-A 505 have been disregarded due to overall costs at site-A rather than the capacity of the physical hosting device at site-A.
  • the live migration method implemented by the ECM through its sub-modules initiates a copy of the VM image file and associated data files to site-B 510 . However, if the data file accessed by the VM image is in a SAN and is already shared seamlessly across multiple sites, then there is no need to move additional files.
  • FIG. 6 illustrates the hiber-waking process 600 , where the primary VM image 635 is deactivated (or hibernated 640 ) at site-A 605 and an updated replica VM image in a secondary site-B is activated (or awakened from hibernation 640 ).
  • the data files associated with the previously active primary VM image at site 605 are now linked to the new primary VM image (previously a replica VM image) at site 610 .
  • the previously active primary VM image at 605 is hibernated and designated as a new replica VM image.
  • the new replica VM image will be updated using asynchronous updates made to the new primary VM image at 610 .
  • Storage area 630 in either site represents primary storage without large processing capabilities.
  • FIG. 7 illustrates the interaction between various software modules of the ECM, along with the data flow across the various modules.
  • Site managers (SM) 705 and 795 maintain reference tables for identifying the location of a shared storage (SAN) 780 and 745 in the case that a SAN is used to store the contents of a chunk of the VM image file for the particular site.
  • the contents of a local storage 780 and 745 are indexed by SMs 705 and 795 for identifying new chunks of the image file, where the new chunks are identified by the de-duplication function 775 of the SM module on the primary site.
  • the primary local host 710 provides the physical resources for an exemplary VM image file accessed from the site illustrated in FIG. 7 . From the embodiments for combining de-duplication and replication, it is appreciated that such a combined module can be controlled by the SM or the HWPM, where appropriate.
  • Replication function 720 within the HWPM 715 module performs the replication of new/updated image file data chunks between the primary host 710 and remote secondary or shared devices 790 through networks 725 and 735 .
  • a primary host-based splitter software is used to duplicate write data for storage in multiple storage devices for the multiple VM image sites.
  • the application write flow from the local host 710 for the primary VM image file is sent to the local SAN 780 , and then to function 775 for de-duplication.
  • the de-duplicated data is sent for writing into the local storage 765 .
  • the SAN operating system controls the operations of data input and output between the SM software modules.
  • the splitter in the local SAN 780 assigns the de-duplicated data to the secondary sites via the replication function 720 .
  • the use of a splitter-based approach doubles the number of writes that are initiated by the host HWPM 715 .
  • the splitter may also allow parallel writes to multiple VM sites. This process requires a small processor load, and negligible incremental input/output load on the network, as long as the host adapter on the primary site is not saturated.
  • the software driver for the splitter is resident in the local SAN operating system 785 driver stack, under the file system and volume manager of the OS, and interacts with the port, miniport and the multi-pathing drivers of the host device as illustrated in FIG. 7 .
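  • As an illustration only, a host-based splitter can be sketched as a small class that duplicates every write into a local storage path and a replication queue; the class and method names are hypothetical, and the real driver-stack splitter described above operates below the file system rather than in application code.

```python
# Illustration only: a host-based write splitter that duplicates every application write
# into a local storage path and a replication queue consumed by the replication function.
# The class and method names are hypothetical; the driver-stack splitter described above
# operates below the file system rather than in application code.
class WriteSplitter:
    def __init__(self, local_storage, replication_queue):
        self.local_storage = local_storage          # e.g., the local SAN volume
        self.replication_queue = replication_queue  # consumed by the replication function

    def write(self, offset, data):
        # The local write follows the normal I/O path (file system, volume manager, SAN).
        self.local_storage.write(offset, data)
        # The duplicate write is queued for de-duplication and remote propagation,
        # doubling the writes initiated on the host as noted above.
        self.replication_queue.append((offset, data))
```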
  • the service manager component polls and negotiates between site managers during the migration process to indicate the control of the primary image data and the status of the de-duplication efforts implemented by the primary site 705 .
  • FIG. 7 also demonstrates the replication of VM image data from the primary local host 710 prior to the movement from a local host and to a remote host for the purposes of migrating the primary VM image between physical devices in the cloud sites.
  • Replication functions can also reside within the SAN switch of the SAN OS for the replication of SAN image data to remote devices. This migration process requires specialized switching hardware components for routing de-duplicated writes to the various storage devices 765 , 745 , and duplicate writes to the de-duplication functions and storage device 780 .
  • FIG. 8 illustrates the system of this disclosure, where the HWPM 815 has completed re-designation of the primary VM image to the remote host 890 , and designated the previously primary host 810 as a secondary site. Thereafter, de-duplication of SAN data can be implemented using the local de-duplication function 850 , controlled by site SM 895 . De-duplicated write flow from the remote storage 855 of the new primary site comprising host 890 is directed to the replication function for transmission to the old primary site (now new, optional remote site) comprising host 810 via remote storage 880 and 865 .
  • FIG. 9 a and FIG. 9 b illustrate the incremental updates applied to the secondary sites, site-B and site-C, using the de-duplicated data from primary site-A 900 and 950 .
  • Site-D can be used as a permanent backup (for disaster recovery (DR) purposes) to any of the secondary sites or the primary site.
  • Site-D will have high priority scheduling during the replication (for update propagation) for any new data from the primary site-A. Should site-D encounter a failure, priority can be assigned to one of the other secondary sites (B or C) to undertake disaster management responsibilities.
  • FIG. 9 a further illustrates an exemplary embodiment of a VM image file that is stored at Site-D for DR purposes, where site-D is set at high priority for replication from primary site-A.
  • the DR image file can also be accessed by the secondary sites, B and C, as well.
  • the incremental updates are propagated from the primary site for the VM image file and from the DR file host site (e.g., site-D in FIG. 9 b ) for the backup files in case of a failure at the primary site during the re-designation process.

Abstract

Systems and methods are disclosed herein to automatically replicate and migrate live virtual machine (VM) image files from a primary VM computing device to secondary VM computing devices. The operating parameters (e.g., cost of operation, power consumption, etc.) of a number of secondary VM computing devices are analyzed. Replicas of the primary VM image are stored in the secondary VM devices whose operating parameters meet the limiting parameters defined in an SLA. The primary VM image is indexed by its constituent data blocks in an active index, which is compared against a stale index of data blocks. A comparison of the indices will indicate when new data is added to the VM image. The new data is used to update the replicas. Migration is performed either by copying the primary VM image or by awakening a hibernated secondary VM image replica and hibernating the current primary VM image.

Description

  • This patent application is related to and claims the benefit of Provisional U.S. Patent Application Ser. No. 61/389,748, filed Oct. 5, 2010, which is incorporated herein by reference in its entirety.
  • TECHNICAL FIELD
  • The instant disclosure relates generally to a system and method for automatically replicating and migrating live virtual machines across wide area networks.
  • BACKGROUND
  • A virtual machine (VM) is a software platform capable of replicating a computing device with full operating system (OS) and applications functions. The VM is generally installed on a target machine that functions as the host by contributing physical resources like memory and processing capabilities. A remote device uses client VM software to connect to the target machine and view the VM operating on it. As a result, a virtual machine provides a remote computing device user with a complete software-based computing platform separate from the remote computing device on which the software runs. The level of separation between the VM software and the hardware on which it runs establishes the type of virtual machine, with the primary types being a system virtual machine and an application virtual machine. A system virtual machine type allows a remote user of the VM to access some of the physical hardware devices on which the VM executes. In contrast, the application VM functions as a stand-alone application platform over which other software applications are implemented. The purpose of the application VM is to enable different operating systems with different file structures to function within an existing native operating system.
  • The virtual machine data, operations, and functions are assigned to a virtual machine image file in the native memory of a target machine. Remote devices having client VM software installed within the device access the virtual machine image remotely. The image file renders in the client VM software on the remote device as an OS with its overlying applications and data displayed for the user of the remote machine. Any changes made to the applications, data, or OS are saved to the virtual machine image on the target machine. The VM can be scheduled for execution at geographically disparate cloud locations. However, storing a virtual machine image across networks from one location to another is complicated by the size of the data and the number of users connected to the virtual machine image.
  • One conventional VM method enabled a shared repository of the virtual machine image to be accessible by both the current or primary target machine and a secondary target machine for backup. This required both the primary target machine and the secondary target machine to be on the same sub-net (or within the same local network) for effective results without significant lag. Further, it is difficult to identify remote sites to store replicas of the virtual machine image during a 'live' or in-use session. Network latency and the long-term and short-term costs of candidate remote sites are some of the issues associated with choosing remote sites for replicating virtual machine image data.
  • SUMMARY
  • The systems and methods described herein attempt to overcome the drawbacks discussed above by analyzing the operating costs at a number of remote sites for storing the virtual machine image. A primary remote site is automatically chosen for storing a primary VM image file, and one or more secondary remote sites are automatically chosen for storing secondary replicas of the primary VM image file. Further, the applicable changes instituted in the virtual machine image by a client computer are sent to update the replica virtual machine image at each of the remote sites. Additionally, a replica of the virtual machine image can be activated as the new primary replica, while designating the old primary replica as a secondary replica. Alternatively, a primary VM file can be copied to a new site, where the new site does not have an updated replica available.
  • In one embodiment, a computer-implemented method of automatically replicating and migrating live virtual machines, the method comprising: comparing, in a primary backend computing device, a plurality of first virtual machine image components from a first virtual machine image and a plurality of second virtual machine image components from updates applied to the first virtual machine image, to identify new virtual machine image components; updating, in each of a plurality of secondary backend computing devices, a replica of the first virtual machine image with the new virtual machine image components; calculating, in the primary backend computing device, a plurality of operating parameter values for each of the plurality of secondary backend computing devices and the primary backend computing device; comparing, in the primary backend computing device, the operating parameter values for each of the plurality of secondary backend computing devices and the primary backend computing device, wherein an operating range within limits of the operating parameter values is defined for each operating parameter by a computer-coded business rule; selecting at least one secondary backend computing device from the plurality of backend computing devices, where the operating parameter values of the selected secondary backend computing device are within the range of the limits and the operating parameter values of the primary backend computing device are outside the range of the limits; and activating, in the selected secondary backend computing device, the replica of the updated first virtual machine image as a new first virtual machine, thereby designating the selected secondary backend computing device as a new primary backend computing device and re-designating the primary backend computing device as a secondary backend computing device.
  • In another embodiment, a computer-implemented system of automatically replicating and migrating live virtual machines performs operations comprising: comparing, in a primary backend computing device, a plurality of first virtual machine image components from a first virtual machine image and a plurality of second virtual machine image components from updates applied to the first virtual machine image, to identify new virtual machine image components; updating, in each of a plurality of secondary backend computing devices, a replica of the first virtual machine image with the new virtual machine image components; calculating, in the primary backend computing device, a plurality of operating parameter values for each of the plurality of secondary backend computing devices and the primary backend computing device; comparing, in the primary backend computing device, the operating parameter values for each of the plurality of secondary backend computing devices and the primary backend computing device, wherein an operating range within limits of the operating parameter values is defined for each operating parameter by a computer-coded business rule; selecting at least one secondary backend computing device from the plurality of backend computing devices, where the operating parameter values of the selected secondary backend computing device are within the range of the limits and the operating parameter values of the primary backend computing device are outside the range of the limits; and activating, in the selected secondary backend computing device, the replica of the updated first virtual machine image as a new first virtual machine, thereby designating the selected secondary backend computing device as a new primary backend computing device and re-designating the primary backend computing device as a secondary backend computing device.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings constitute a part of this specification and illustrate an embodiment of the invention and together with the specification, explain the invention.
  • FIG. 1 illustrates a system for replicating VM images for multiple secondary VM storage cloud sites according to an exemplary embodiment.
  • FIG. 2 illustrates a system and method for scheduling and provisioning VM images across multiple secondary cloud sites based on the operating parameters of the secondary cloud sites according to an exemplary embodiment.
  • FIG. 3 illustrates a method of de-duplication and scheduling updates on replica VM images according to an exemplary embodiment.
  • FIG. 4 illustrates a system of checking for VM image data updates according to an exemplary embodiment.
  • FIG. 5 illustrates a system and method of live migration of a VM image according to an exemplary embodiment.
  • FIG. 6 illustrates a system and method of hiber-waking VM images according to an exemplary embodiment.
  • FIG. 7 illustrates the write flow of VM image data across various software modules or sub-modules before hiber-waking according to an exemplary embodiment.
  • FIG. 8 illustrates the write flow of VM image data across various software modules or sub-modules after hiber-waking according to an exemplary embodiment.
  • FIG. 9 a illustrates a method of updating of VM images from disaster management backup sites before hiber-waking according to an exemplary embodiment.
  • FIG. 9 b illustrates a method of updating of VM images from disaster management backup sites after hiber-waking according to an exemplary embodiment.
  • DETAILED DESCRIPTION
  • Reference will now be made in detail to the preferred embodiments, examples of which are illustrated in the accompanying drawings.
  • Virtual machines are widely used in cloud computing applications, where the actual physical machine running the VM software may be located in any of a number of different locations. While virtual machine image files store much of the status information along with data related to the current application and OS in implementation, other image files can be used exclusively for data storage. The term 'image files' is used interchangeably with the term 'images' in this disclosure, both describing a file comprising virtual machine data. Database image files can be accessed by the VM image files for the database information pertaining to a "live" VM. As such, if the VM image has multiple data image files, then the database image file, the virtual machine image file, and any other related image files should be replicated. Remote storage of live VMs across high-latency, low-bandwidth wide area networks (WAN) results in lags and hardware trailing issues that are visible to a client computer accessing the VM. Further, the process of replicating a live VM involves storing the entire state of the VM from a primary remote machine to multiple secondary storage machines. The multiple storage machines are updated with new data from the end-user of the live VM without any data loss or continuity disruptions to the end-user client computer. The replication methods and systems described herein are motivated by various factors, including the price of storage devices, redundancy of the virtual machine image data, and limited network bandwidth at the secondary VM locations.
  • In an exemplary embodiment, to determine multiple eligible sites for replication of the VM image, operating parameters and their values are analyzed by a VM management software application comprising software-based sub-modules for managing replica placement. Software modules and sub-modules are software codes that render independently or within a larger software program, and the terms are used interchangeably in this disclosure. Exemplary operating parameters include the average access costs; perceived latency for the VMs hosted at different cloud sites; available network bandwidth; heat generated; number of access users allowed; cost of resources; memory capacity (e.g., random access memory, read only memory, and read and write memory); and network congestion among the different sites. Further, the long-term costs associated with inter-site and intra-site variations are also analyzed for replica placement. In another embodiment for determining inter-site and intra-site variations, the commonality of different VM images is compared, where the different VMs are stored in different physical machines within the same cloud site (intra-site), or different physical machines in different cloud sites (inter-site). It is further appreciated that an existing VM image at one destination site is compared with the VM image to be replicated to find similarities, thereby enabling the VM management software to determine if the destination site is suited for the VM image replica. Comparison methods can be automated using software to compare virtual machine image metadata of the existing VM against the VM to be replicated. Further, size variations, transmission speeds, and costs of maintaining and operating physical machines at a destination site are analyzed for the existing VM at the destination site, prior to selection and replica placement.
  • In another exemplary embodiment, the replication of virtual machine image files and their associated image files (e.g., data image file, etc.) across multiple secondary VM sites is implemented by a VM management software application resident on a networked backend computing device. The VM software application monitors a VM image file resident at a primary site and being used by a VM end-user on an end-user client computing device. When the end-user makes any updates within the VM environment on the client computing device, the changes generate new data in the VM image file at the primary location. The VM management software application uses this new data to update the replica VM images at each secondary site. The replication methods described herein incorporate exemplary processes for efficient replication and propagation of updates, including write coalescing and data compression methods.
  • In another exemplary embodiment, a de-duplication or removal of duplicate information among multiple replicas is implemented between the secondary replicas at each secondary site. This process reduces the cost of storage of multiple replicas in different secondary VM sites. The de-duplication method described herein, in an exemplary embodiment, implements either a variable-size chunking technique, also called content-based redundancy (CBR) elimination, using sliding-window hashes in the form of Rabin fingerprints, or a fixed-size chunking technique, to find and eliminate redundant data. It is further appreciated that propagation of updates to a primary VM image file and de-duplication can be effected in a single software module. In this case, when update propagation and de-duplication are combined, CBR based on Rabin fingerprints and/or fixed-size chunking is first implemented to de-duplicate the replicated secondary image files and create hash indices to verify updates to a primary VM image file, while write-coalescing and compression methods are used to propagate updates from the primary VM image file to secondary replica image files. Alternatively, update propagation can utilize the CBR and/or hash indices produced as a result of the de-duplication process to identify the need for propagation of primary VM image file updates prior to applying write-coalescing and compression methods for actual propagation. A minimal sketch of the two chunking styles appears after this paragraph.
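  • The following sketch illustrates the two chunking styles named above, assuming Python and a toy rolling hash as a stand-in for true Rabin fingerprints; the function names, boundary mask, and length limits are illustrative assumptions and are not part of the disclosure.

```python
import hashlib

def fixed_size_chunks(data: bytes, size: int = 4096):
    """Fixed-size chunking: split the image into equal-length blocks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def content_defined_chunks(data: bytes, mask: int = 0x1FFF,
                           min_len: int = 2048, max_len: int = 16384):
    """Variable-size (content-based) chunking: cut wherever a rolling
    hash matches a boundary pattern, so an insertion in the image only
    shifts nearby chunk boundaries.  The rolling hash here is a toy
    stand-in for the sliding-window Rabin fingerprints in the text."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF
        length = i - start + 1
        if ((h & mask) == 0 and length >= min_len) or length >= max_len:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def hash_index(chunks):
    """Hash index used to detect redundant chunks across replicas."""
    return {hashlib.sha1(c).hexdigest(): len(c) for c in chunks}
```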
  • In another exemplary embodiment, de-duplication ratios derived from the CBR methods are used to determine the state of secondary replicas (e.g., the amount of redundancy in image data). The state of the secondary replicas enables the VM management software application to replicate non-redundant chunks of data by comparing hash indices of stale and updated VM image files during initial replica placement. The non-redundant data chunks may represent the updates to the primary VM image file, where the updates are generated by an end-user and are replicated using the write-coalescing and compression methods to conserve network bandwidth and enable faster transfer of updated portions of VM image data to remote sites. The de-duplication ratio is a measure of the size of the common content between two different VM image files. A combined or separate implementation of the update propagation and/or de-duplication method can be initiated at any time, and between any set time period, according to pre-defined schedule times. The granularity of the scheduling can be in the order of several hours. Multiple scheduling periods can be arranged for automated replication of the changed data blocks to the secondary sites. In one example, for a separate replication and de-duplication implementation, a replica placement manager module analyzes data collected from previous replications in different cloud sites by a statistics collector module. The data is used to determine a new location for a new replica placement. Further, during scheduling of primary and secondary VM images and site locations, a single replication (replica placement) horizon period comprising multiple replication scheduling periods is defined. An exemplary replication horizon period is about a month in granularity, while exemplary scheduling periods are hours, days, or even minutes. Each period can also comprise one primary VM image and multiple secondary VM images on which de-duplication and replication functions are implemented. Each replication horizon period signals the end of the replication schedule and the start of a de-duplication cycle to remove redundancy among the replicas used in the first cycle, before the next replication horizon begins. It is appreciated that, in the standard literature and in this disclosure, the term replication is used interchangeably for propagation of the incremental changes from a primary replica to a secondary replica and for replica placement. Update propagation happens in intervals of a few seconds. A person skilled in the art will recognize from the context being described that replication is used for either placement of whole VM images at different sites (i.e., replica placement) or propagation of the incremental updates from the primary VM image to the secondary VM images (i.e., update propagation).
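  • As a rough illustration of how a de-duplication ratio might be computed from chunk hash indices of the kind described above, the following sketch (Python; the chunking and hashing choices are assumptions, not part of the disclosure) reports the common content between two images as a percentage of the larger image.

```python
import hashlib

def chunk_sizes(chunks):
    """Map each chunk's SHA-1 digest to its size in bytes."""
    return {hashlib.sha1(c).hexdigest(): len(c) for c in chunks}

def dedup_ratio(chunks_a, chunks_b):
    """Size of the content common to both images, divided by the total
    size of the larger image, expressed as a percentage."""
    index_a, index_b = chunk_sizes(chunks_a), chunk_sizes(chunks_b)
    common = sum(size for h, size in index_a.items() if h in index_b)
    larger = max(sum(index_a.values()), sum(index_b.values()))
    return 100.0 * common / larger if larger else 0.0

# Two toy "images" sharing one 4 KB chunk out of two
a = [b"x" * 4096, b"y" * 4096]
b = [b"x" * 4096, b"z" * 4096]
print(round(dedup_ratio(a, b)))   # 50
```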
  • In yet another exemplary embodiment, the methods and systems described here enable migration of live VM images across wide area networks. In order to limit the disruption or noticeable effects to the end-user client when migrating live VM images, it is beneficial to ensure that network TCP (transmission control protocol) connections survive the VM migration process. Further, applications encapsulated within the VMs should see no disruptions in network connection, even across multiple networks (outside the sub-net of a local network). The methods described herein allow VM management software to manage the migration of large virtual machine image files across networks without consuming excessive network bandwidth and network time. To minimize bandwidth and network time, the exemplary methods described herein implement hash comparisons to find data differences between replicas at remote cloud sites, after which incremental updates are applied to replicas in remote cloud sites with only the updated data sections, and not the entire image file. Finally, any data file that is being accessed by a VM image file should be seamlessly linked to the migrated VM image. The data file might be a database file of all application data belonging to a client user of the VM image, where the data file is stored in a storage area network (SAN). It is further appreciated that data image files associated with VM image files are also stored as multiple replicas in multiple locations, each replica capable of being de-duplicated and replicated at scheduled times.
  • In one exemplary embodiment for managing latencies during the transfer of large data volumes across wide area networks, a VM replication and VM scheduling process is implemented by the VM management software application in a backend computing device. The VM replication process identifies remote cloud sites and intra-cloud physical VMs to store replicas of the VM image from a primary VM site. These secondary VM cloud sites are identified by reviewing the operating parameters and values of each cloud site and of similar VM images, where an existing VM image is of similar size, capabilities, and functions to an intended primary VM image. Operating parameters include the long-term average cost of computation of every VM at each of the candidate cloud sites during different time-periods and the end-user latency requirements associated with the intended primary VM. Sites that meet the end-user latency requirements are classified as eligible for replica placement.
  • In another exemplary embodiment, the methods and systems described herein include prioritization processes within a scheduling software module of the VM management software application. In the case of multiple secondary sites for storing replicas, one of the secondary sites is assigned as a high-priority backup for disaster management. This enables the prioritization processes to follow a priority schedule for initial replication of new data from VM image files to a selected secondary replica VM image at a selected disaster recovery secondary site, and subsequently to other secondary sites of lower priority. If the disaster management secondary site encounters a problem and fails, then the VM management software application can assign responsibility to a different secondary site by changing the priority on an existing secondary site to a high-priority disaster management secondary site. A minimal sketch of this prioritization and failover appears below.
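  • A minimal sketch of the prioritization and failover just described, assuming Python; the site names and the choice of the first healthy site as the new disaster-recovery (DR) site are illustrative assumptions rather than the disclosed method.

```python
def replication_order(secondary_sites, dr_site):
    """Update the DR site first, then the remaining lower-priority
    secondary sites."""
    return [dr_site] + [s for s in secondary_sites if s != dr_site]

def reassign_dr(secondary_sites, failed_dr):
    """If the DR site fails, promote another healthy secondary site to
    the high-priority DR role."""
    healthy = [s for s in secondary_sites if s != failed_dr]
    if not healthy:
        raise RuntimeError("no secondary site available for disaster recovery")
    return healthy[0]

sites = ["site-B", "site-C", "site-D"]
print(replication_order(sites, dr_site="site-D"))   # ['site-D', 'site-B', 'site-C']
print(reassign_dr(sites, failed_dr="site-D"))       # 'site-B'
```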
  • In another exemplary embodiment, migration of a live VM image is implemented by the VM software application using computer coded instructions to move a VM image to a remote cloud site that does not have a replica of the VM image. Another exemplary implementation, using the VM software application, incorporates a hiber-waking method. In this method, a replica VM image at a destination cloud site is transitioned to an active state from a previously passive (hibernated) state, and becomes the primary VM image, while the previously active VM image at the source cloud site is re-designated as a replica VM image. One requirement in the hiber-waking method is for the active VM image at a source site to be hibernated prior to designation as a replica VM image, while the replica VM image at a destination site is awakened from its hibernation state, and is re-designated as the active VM image.
  • An Enterprise Cloud Manager (ECM) software module can be deployed on a centralized backend computing device as the VM management software application to monitor the interactions of multiple VMs from the secondary sites. A statistics collector (SC) sub-module of the ECM software module collects statistics and mines for data from site managers (SM) software modules located at each secondary site. The SC module then presents this data to a replica placement manager sub-module (RPM) within the ECM. The SM is responsible for VM placement and scheduling at the local site. The site manager also monitors optimal conditions defined to meet performance objectives which are pre-defined for each site. The determination of a site objective can be set through the ECM based on such factors, as the hardware, network and software of the remote secondary site. By way of an example, an objective set by one site manager at one site for the optimization of VM storage at the site is minimization of overall energy consumption at the site.
  • Each VM also comprises an application manager (AM), which interacts with the SM at each secondary site. The AM monitors the application behavior and ensures that the VMs are allocated sufficient computing resources so that the service level objectives (SLO) defined by a service level agreement (SLA) are not violated. The SLA can be defined between a company, which wishes to deploy cloud computing capabilities for its business, and the cloud computing service providers. Further, much of the monitoring by the AM software module can be implemented automatically. This is enabled by converting the SLA into computer-coded business rules that can be implemented by a software module to monitor current usage and trigger alarms if pre-defined limits are violated.
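  • One way such SLA-derived business rules might look in code is sketched below (Python); the parameter names and limit values are invented for illustration and are not taken from any actual SLA.

```python
# Hypothetical SLA-derived limits; real values would come from the agreement.
SLA_LIMITS = {
    "latency_ms":        50.0,
    "cost_per_hour_usd": 2.00,
    "power_kw":          5.0,
}

def check_sla(usage, limits=SLA_LIMITS):
    """Return the list of (parameter, observed, limit) violations; an
    empty list means current usage is within the coded business rules."""
    return [(name, usage[name], limit)
            for name, limit in limits.items()
            if name in usage and usage[name] > limit]

# Usage as it might be reported by an application manager (AM)
alarms = check_sla({"latency_ms": 72.0, "cost_per_hour_usd": 1.10, "power_kw": 3.2})
if alarms:
    print("SLA violation, raise alarm:", alarms)
```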
  • The RPM sub-module determines the number of replicas and their site locations after considering the de-duplication ratios and long-term average cost provided by the statistics collector (SC) sub-module. De-duplication is a process of removing redundant data from multiple replicas using mathematical methods. De-duplication can be implemented on multiple replicas at pre-designated time schedules to reduce the amount of data stored between secondary VM sites. In an exemplary embodiment, a de-duplication method comprises reduction of each replica using the mathematical model in equation (1).
  • $l_{ijt} = \dfrac{F}{C_t} + \dfrac{I}{1 - \lambda_t I} + \dfrac{F}{S_{jt} - \lambda_t F} + \dfrac{F(B + \mu Y)}{B\mu - \lambda_t F(B + \mu Y)}$   (1)
  • where $l_{ijt}$ is the expected latency when image $i$ is hosted on site $j$ at time $t$; $F$ is the average file size; $C_t$ is the client network bandwidth at time $t$; $I$ is the initialization time; $\lambda_t$ is the network arrival rate at time $t$; $S_{jt}$ is the server network bandwidth of site $j$ at time $t$; $B$ is the buffer size; $\mu$ is the dynamic server rate; and $Y$ is the static server time. Further, the costs involved with storing and operating a primary copy of the VM image at site $j$ can be derived using equation (2).
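  • Read this way, equation (1) can be evaluated directly; the sketch below (Python) assumes the grouping of terms shown above and uses invented, illustrative parameter values in consistent (arbitrary) units.

```python
def expected_latency(F, C_t, I, lam_t, S_jt, B, mu, Y):
    """Expected latency l_ijt per equation (1): client transfer,
    initialization, server transfer, and buffered-service terms."""
    return (F / C_t
            + I / (1.0 - lam_t * I)
            + F / (S_jt - lam_t * F)
            + F * (B + mu * Y) / (B * mu - lam_t * F * (B + mu * Y)))

# Illustrative values only (e.g., F in MB, bandwidths in MB/s)
print(round(expected_latency(F=2048.0, C_t=100.0, I=0.5, lam_t=0.001,
                             S_jt=250.0, B=64.0, mu=10.0, Y=1.0), 1))
```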
  • $\min \;\; \sum_{j}\sum_{t} \bar{C}_{jt}\, Z_{ijt} \;+\; \sum_{j} C^{j} \sum_{k \in K_i} Y_{kj}\, \mathrm{size}_k$   (2)
  • In equation (2), $i$ is an identifier for a VM image, $i \in I$; $k$ is an identifier for chunks of the VM image, $k \in K$; $j$ is an identifier for a site, $j \in J$; $K_i$ is the set of chunks for VM image $i$; $\mathrm{size}_k$ is the size of the $k$th chunk of the VM image; $\bar{C}_{jt}$ is the operational cost of hosting at site $j$ during time $t$; and $C^{j}$ is the per-unit cost of storage at site $j$. $Y_{kj}$ has a value of 1 if the chunk $k$ is stored at site $j$; otherwise $Y_{kj}$ is 0. Similarly, $Z_{ijt}$ is 1 if the replica of the VM image $i$ at site $j$ is the primary copy at time $t$; otherwise $Z_{ijt}$ is 0.
  • Equation (2) is further subject to the conditions of equation (3), equation (4), equation (5), and equation (6) below.
  • $1 \le \sum_{j} X_{ij} \le N_i^{\max} \quad \forall i$   (3)
    $Y_{kj} \ge X_{ij} \quad \forall i, j, \; k \in K_i$   (4)
    $X_{ij} \ge Z_{ijt} \quad \forall i, j, t$   (5)
    $Z_{ijt}\, l_{ijt} \le l_i^{\max} \quad \forall i, j, t$   (6)
  • where $A_k$ is the number of VM images in which chunk $k$ occurs; $N_i^{\min}$ is the minimum number of replicas of VM image $i$; $N_i^{\max}$ is the maximum number of replicas of VM image $i$; $l_i^{\max}$ is the maximum acceptable latency for VM image $i$; and $X_{ij}$ is 1 if a replica of the VM image $i$ is placed at site $j$, else $X_{ij}$ is 0.
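  • A feasibility check of constraints (3) through (6), as reconstructed above, could be written roughly as follows (Python); the nested-dictionary encoding of X, Y, Z and the latency table is an assumption made for illustration.

```python
def placement_feasible(X, Y, Z, latency, K_i, N_max, l_max):
    """X[i][j]: replica of image i at site j; Y[k][j]: chunk k stored at
    site j; Z[i][j][t]: replica at j is primary at time t; latency[i][j][t]
    is l_ijt.  Returns True if constraints (3)-(6) all hold."""
    for i in X:
        if not 1 <= sum(X[i].values()) <= N_max[i]:            # (3)
            return False
        for j in X[i]:
            if X[i][j] and any(not Y[k][j] for k in K_i[i]):   # (4)
                return False
            for t, primary in Z[i][j].items():
                if primary and not X[i][j]:                    # (5)
                    return False
                if primary and latency[i][j][t] > l_max[i]:    # (6)
                    return False
    return True
```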
  • However, the optimization of equation (2) is computationally expensive to solve exactly for even a moderate cardinality of the set K. A greedy heuristic approach is therefore used to determine the sites for replica placement. Assuming a set $D_{ii'}$ is the result of de-duplicating a pair of VM images, $i$ and $i'$, a high value of $d_{ii'}$, where $d_{ii'} \in D_{ii'}$, indicates that the VM images $i$ and $i'$ share a significant proportion of their content. Further, $d_{ii'}$ is expressed as a percentage, and is calculated as a ratio of the size of the common content to the total size of the content of $i$ or $i'$, whichever is maximum. The objective of this calculation is to create an algorithm to detect sites with complementary cost structures ($\bar{C}_{jt}$). As a result, if $j$ and $j'$ are two sites with complementary cost patterns $\bar{C}_{jt}$ and $\bar{C}_{j't}$, and if $\bar{C}_{j't} > \bar{C}_{jt}$, the cost of maintaining only one replica at site $j'$ is equivalent to the cost of operating VM image $i$ during time $t$ at site $j'$, as against the cost of maintaining two replicas at sites $j$ and $j'$. In this case, the cost of maintaining two replicas includes the additional storage requirement as a consequence of having a replica at site $j$.
  • Using the exemplary equations for latency calculation (equation (1)) and cost comparison (equation (2)), an exemplary algorithm can calculate latency issues and the profitability of reserving multiple replicas at different sites, or, instead, maintaining fewer replicas at the expense of a higher-cost operation. This algorithm is coded into the RPM sub-module and is implemented at various times using data collected by the SC sub-module. The scheduler sub-module interacts with the RPM to trigger the RPM into ensuring that the current costs and latencies are within acceptable limits as defined in an SLA, or following the profitability objectives of the cloud service provider. The SLA business rules can be computer-coded into data comparison algorithms to ensure that the SLA or profitability requirements are maintained at all times. This comparison can be implemented in the enterprise cloud manager (ECM), with full accessibility to the cloud service provider (target VM machine administrators) and limited accessibility to the cloud client (client VM machines).
  • An exemplary algorithm for calculating latency and costs for the RPM utilizes two phases: a distribution phase and a consolidation phase. In the distribution phase, a set of sites ($J_1 \subseteq J$) is identified that adheres to the latency requirements for VM $i_1$, and another set of sites ($J_2 \subseteq J$) is identified that adheres to the latency requirements of VM $i_2$. The members that are common to the sets $J_1$ and $J_2$ fulfill the latency requirements for both virtual machines, $i_1$ and $i_2$. If there are common members (if $J_1 \cap J_2$ is NOT null), then the replicas of the VMs $i_1$ and $i_2$ are placed at sites $j \in J_1 \cap J_2$, and the algorithm proceeds to the next iteration. However, if there are no common members (if $J_1 \cap J_2$ is null), then no replicas are placed in that iteration, and the algorithm proceeds to the next iteration. $d_{ii'}$ is initialized to $\max_{ii'} \{D_{ii'}\}$, with $i = i_1$ and $i' = i_2$; for subsequent iterations, the next best value is chosen among the remaining values within the set $D_{ii'}$ and assigned to $d_{ii'}$. The iterations continue for as long as $d_{ii'}$ is greater than a user-defined threshold. The results of this phase are $I_j$, the set of VM images at site $j$; $K_j$, the set of unique chunks at site $j$; and $R_i$, the set of replicas of VM image $i$. A sketch of this phase follows.
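  • A rough sketch of the distribution phase, under the assumptions that eligible-site sets and pairwise de-duplication ratios are already available (Python; the data shapes and the threshold value are illustrative).

```python
def distribution_phase(latency_ok, D, threshold):
    """Visit image pairs in decreasing order of their de-duplication
    ratio d_ii' and co-locate their replicas at every site that meets
    both images' latency requirements (J1 ∩ J2)."""
    placements = {}                                    # image -> set of sites
    for (i1, i2), d in sorted(D.items(), key=lambda kv: kv[1], reverse=True):
        if d <= threshold:
            break                                      # remaining pairs share too little
        common = latency_ok[i1] & latency_ok[i2]
        if common:                                     # J1 ∩ J2 is not null
            placements.setdefault(i1, set()).update(common)
            placements.setdefault(i2, set()).update(common)
    return placements

latency_ok = {"VM1": {"S1", "S3", "S4"}, "VM2": {"S1", "S2", "S4"}}
D = {("VM1", "VM2"): 70}
print(distribution_phase(latency_ok, D, threshold=50))
# e.g. {'VM1': {'S1', 'S4'}, 'VM2': {'S1', 'S4'}}
```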
  • In the next phase of the algorithm, the consolidation phase, the distribution phase results are consolidated to reduce the number of replicas generated. For each site $j$, the savings in storage space as a consequence of de-duplicating the set $I_j$ are calculated. With $r_{ij} \in I_j \times R_i$ and $Sav_j$ the space saving due to de-duplication of the $I_j$ images at site $j$, the contribution made by each replica is calculated as a ratio of the space savings ($Sav_j$) generated when the replica is part of $I_j$, as disclosed in equation (7), and the space savings ($Sav_{r_{ij}}$) calculated when the replica is left out of the set $I_j$, as illustrated in equation (8).
  • $Sav_j = \sum_{i \in I_j} \sum_{k \in K_i} \mathrm{size}_k \;-\; \sum_{k \in K_j} \mathrm{size}_k$   (7)
    $Sav_{r_{ij}} = \sum_{i' \in I_j,\, i' \neq i} \;\; \sum_{k \in K_{i'}} \mathrm{size}_k \;-\; \sum_{k \in K_j,\, i' \neq i} \mathrm{size}_k$   (8)
  • The ratio between equation (7) and equation (8) is calculated for each VM image $i$ in $I_j$, and then for each site $j$ in $J$. Further, with $\bar{r}_{ij} = \arg\min (Sav_j / Sav_{r_{ij}})$ and $\bar{r}_{ij}$ a member of the set $R_i$, if $|R_i| > N_i^{\max}$, then the replica $\bar{r}_{ij}$ is removed, and $Sav_j$, $Sav_{r_{ij}}$, and $I_j$ are updated. This calculation is performed until all the replicas for all the VM images are within the bounds $N_i^{\max}$. A sketch of the savings calculation appears below.
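  • The space-savings bookkeeping behind equations (7) and (8) could be sketched as below (Python), assuming each image is represented by the set of its chunk hashes with known sizes; the data shapes are illustrative.

```python
def dedup_savings(image_chunks, images, chunk_size):
    """Sav_j: total size of all chunks of the given images minus the
    size of the unique chunks actually stored after de-duplication."""
    total = sum(chunk_size[k] for i in images for k in image_chunks[i])
    unique_chunks = {k for i in images for k in image_chunks[i]}
    return total - sum(chunk_size[k] for k in unique_chunks)

def contribution_ratio(image_chunks, site_images, chunk_size, image):
    """Ratio of the savings with the replica of `image` included (eq. 7)
    to the savings with it left out (eq. 8); low ratios mark candidates
    for removal during consolidation."""
    with_img = dedup_savings(image_chunks, site_images, chunk_size)
    without = dedup_savings(image_chunks,
                            [i for i in site_images if i != image], chunk_size)
    return with_img / without if without else float("inf")

chunks = {"VM1": {"a", "b"}, "VM2": {"b", "c"}}
sizes = {"a": 4096, "b": 4096, "c": 4096}
print(dedup_savings(chunks, ["VM1", "VM2"], sizes))              # 4096
print(contribution_ratio(chunks, ["VM1", "VM2"], sizes, "VM1"))  # inf (no savings without VM1)
```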
  • Finally, the exemplary algorithm detects the sites where cost structures ($\bar{C}_{jt}$) vary in a manner similar to the latency structures. For the VM images with multiple replicas at sites with cost structures enforced, the algorithm calculates whether it is profitable to maintain multiple replicas at these sites. In one example, this is implemented by monitoring any decrease in storage cost if the replica of the VM image $i$ is deleted from the site. If there is no cost benefit, the image is left on the site; if there is a cost benefit, the image is deleted. In an exemplary method of implementing cost-benefit measures, the ratio of the marginal decrease in cost due to de-duplication when a VM image $i$ is retained at a site, and the marginal decrease in cost due to de-duplication when the VM image is deleted from the site, is measured. The VM image with the lowest ratio is considered for deletion, subject to the fulfillment of the other constraints disclosed herein, such as the cost of maintaining the VM image at each secondary site.
  • FIG. 1 illustrates an exemplary embodiment of a method and system for replication of VM images across wide area networks (WAN). An enterprise cloud manager (ECM) 105 software module functions as a VM management software application on a backend computing device to monitor and manage the primary and replica images on remote cloud sites. The ECM can be accessed via a browser or a stand-alone web-enabled software application. While complete control of the sub-module elements of the ECM is extended to the cloud computing service provider hosting the entire method and system 100, partial control is designated to client computing devices. Alternatively, a parent software application is implemented to control the ECM, where the parent application implements business rules defined in a service level agreement (SLA) between the cloud computing service provider and the client computing devices. Computing devices as related to the systems and methods described herein include personal computers (PC), netbooks, laptops, servers, smart phones, and any device with a processor and memory capable of networking.
  • The ECM 105 comprises a scheduler 110 software sub-module, a replica placement manager 115 sub-module and a statistics collector and data miner 125 sub-module. Each of these sub-modules is connected to a database server 120, where the database server can be a remote server with networking capability. Each cloud site location 135 is a geographically disparate location with multiple backend computing devices 145, where each device is managed by an application manager 140. The remote sites 135 are managed by site managers 130, which are connected through a data network to a central ECM software at the location of the cloud computing service provider.
  • The site manager (SM) 130 is a monitoring and enforcement tool that is responsible for VM placement and implementing scheduling decisions sent from the ECM. The SM 130 monitors operating parameters and their values, such as the network bandwidth, CPU (processing) availability, memory capacity, and power usage, among other computing metrics, and transfers these metrics to the statistics collector (SC) 125. The SM 130 also provides the SC 125 with site-specific inputs, for example, the per-unit storage costs and the per-unit cost of computation at different time intervals. The SM 130 also incorporates a de-duplication module for identifying duplicate data blocks for the VM images stored in a centralized shared repository within the cloud site. It is appreciated that if de-duplication and replication (for update propagation) are combined within the SM module at an exemplary primary VM site A 135, then the replication function (or sub-module) within the SM module implements write-coalescing and compression methods over the hash indices maintained by the de-duplication module to transmit non-redundant (new de-duplicated) data of the primary VM image to the secondary VM image replica files. This non-redundant data can then be transmitted to another secondary replica site B 135 chosen earlier by the RPM 115. However, if the SM module incorporates only the de-duplication methods disclosed herein, then the hash index of de-duplication information is presented to the RPM 115 via the SC 125 by the SM module; the RPM determines a replica site, while the hiber-waking and replica provisioning manager 150, via the replication function 720, performs the propagation of the non-redundant data updates to the secondary replicas using the compression and write-coalescing methods of the replication function (or sub-module). A storage area network (SAN) is an example of a centralized shared repository. The meta-data associated with the data blocks, for example, a hash value of the data contents and the number of VM images, are also communicated to the SC module. Additionally, the percentage of similarity between VM images is calculated from the data blocks within the SM. This statistic is also transferred to the SC, where all the data is collated over several scheduling cycles and long-term averages to calculate the operations costs and access costs.
  • The RPM (replica placement manager) 115 periodically communicates with the SC, and uses the statistics collated to resolve any replica placement issues. In a first run of the system, a virtual machine image and associated data image files of a VM are created for a client and stored in a primary cloud site, in a primary VM device. The access to these files is determined in an SLA, and further information on the users is maintained in a user directory. The SLA also defines the extent of support and backup provided to the client. In view of the SLA, the number of secondary cloud sites and secondary VM devices, as well as the locations and costs, are determined, and the VM image and associated image files are replicated to these secondary VM devices.
  • The RPM 115 and scheduler 110 communicate with other modules within the ECM to transmit solutions to any issues uncovered when the SC data is reviewed. A hiber-waking, migration and replica provisioning manager module 150 analyzes the solution from the scheduler 110, and along with input from the RPM 115, implements a VM image at a different site, by either hiber-waking 155 or live migration, depending on the state of the current live VM image. In the live migration process, according to an exemplary implementation, the primary VM image is replicated at a secondary site, where the secondary site did not have a copy of the primary VM image file to begin with. In a hiber-waking method, an up-to-date secondary VM image is activated as primary (awakened), while the primary VM image is hibernated (deactivated or re-designated) as a secondary VM image. A solution of live migration or hiber-waking is provided if there is a determination from the SC that the current cloud site or the physical VM hosting device is deemed to have issues, for example, high costs or latencies that were previously non-existent. The information on the location of replicas is maintained in a centralized database, e.g., database 120, and is available to the hiber-waking, migration and replica provisioning manager module 150, and the ECM 105. Sub-modules 150 and 105 make the list of replicas and information regarding the location of the replicas available for review by a system administrator for reference or manual control of the replicas.
  • In an exemplary embodiment, the scheduler 110 can be either time-based or event-based, or both. In the case that the scheduler is event-based, the scheduler module generates VM scheduling decisions based on event notifications from the SM 130, by way of the scheduler 110. As an example, an SM indicates that operations costs are increasing, and the SC 125 provides this information to the scheduler 110. The scheduler 110, in turn, notifies the hiber-waking manager 150 that a VM image can be moved or activated at a different location and removed or deactivated at the current location. The hiber-waking, migration and replica provisioning manager module 150 performs a hiber-waking or a live migration process to move a primary VM site to a secondary VM site on the same or a different cloud site. The live migration implementation involves copying the current VM image to a secondary VM site, where the secondary VM site does not have an existing and up-to-date replica of the current VM image. The current VM image at the secondary VM site is then activated as the primary VM image. The hiber-waking implementation activates (or wakes) a hibernated replica at the secondary (or destination) VM site, while de-activating (or hibernating) the previously active primary VM image at the primary (or source) VM site. The type of scheduling where the RPM acts on an event is referred to herein as reactive scheduling. The provisioning of VM images across VM sites can be implemented within a cloud site, from one physical VM computing device to another. Such intra-cloud creation and deletion of VM images is implemented if the computing capability of one VM computing device reaches a pre-determined threshold. The newly designated replica VM image (previously the primary VM image) will be in hibernation and will be updated with new data from the primary VM image. The replica VM image does not perform any live services. As such, the operational costs of retaining the replica at this previously live cloud site are at a minimum. A sketch of the reactive decision between hiber-waking and live migration appears below.
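  • A minimal sketch of this reactive decision, assuming Python; the event fields, site names, and the rule of picking the first eligible site are invented for illustration and do not reflect any specific module interface.

```python
def handle_site_event(event, replica_sites, current_primary):
    """Reactive scheduling sketch: when the reporting site is the
    current primary and is out of its cost/latency bounds, pick a new
    site and choose hiber-waking if an up-to-date replica already
    exists there, otherwise live migration (copy the image over)."""
    if event["site"] != current_primary or not event["out_of_bounds"]:
        return None                      # nothing to do
    candidates = [s for s in event["eligible_sites"] if s != current_primary]
    if not candidates:
        return None
    target = candidates[0]
    action = "hiber-wake" if target in replica_sites else "live-migrate"
    return {"action": action, "from": current_primary, "to": target}

decision = handle_site_event(
    {"site": "site-A", "out_of_bounds": True,
     "eligible_sites": ["site-A", "site-B", "site-C"]},
    replica_sites={"site-B"}, current_primary="site-A")
print(decision)  # {'action': 'hiber-wake', 'from': 'site-A', 'to': 'site-B'}
```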
  • The scheduler 110 can also implement scheduling based on time-sensitive strategies, where the module 110 proactively seeks and selects cloud sites and VM devices within cloud sites for replica VM image placement. The RPM can be invoked with a granularity period in the order of months. Once a feasible solution to a replica placement problem is known at the beginning of replication (for replica placement) period, the number of replicas and locations of the replicas remain fixed till the beginning of the next replication (for replica placement) interval. Each interval consists of a number of scheduling periods.
  • FIG. 2 illustrates the provisioning of VM images on different cloud sites at different time intervals depending on the operating parameters over a time period. Sites 1, 2, 3 and 4 (205, 210, 215 and 220) are remote cloud sites in different locations. In the exemplary implementation in FIG. 2, a primary VM image, VM-1, can be stored in site 1 205. The replica VM images for VM-1 are stored in site 3 215 and site 4 220. The sites are chosen by their availability in terms of the operating costs per the schedule. Further, the latency in the network access to the different sites may indicate that a primary VM image would be better served from a different site. The operating costs at times t1 and t2 are low for site 1, and therefore, site 1 can be implemented ahead of the other sites. However, at time t3, when the costs are higher at site 1, the scheduler indicates to the hiber-waking, migration and replica provisioning manager module 150 to select a new site, deactivate or delete the old replica VM image, and transfer control of the primary VM image to the new or activated replica VM image.
  • For storage of VM images across multiple secondary VM devices in secondary cloud sites, the percentage of similarity of the VM images is used to update the replica VM image without having to copy the entire VM image over. Table 1 illustrates the percentage of similarity between each pair of VM images from FIG. 2. The percentage of similarity between VM-1 and VM-2 is 70, which implies that the de-duplication will remove the duplicate data blocks from the primary VM image on site-1 205, while the replication module updates sites 3 and 4 (215 and 220) with the non-redundant data blocks. The algorithms discussed above to check for redundancy and to de-duplicate data blocks are implemented at this stage.
  • TABLE 1
    VM-1 VM-2 VM-3 VM-4
    VM-1 NA 70 30 60
    VM-2 70 NA 20 50
    VM-3 30 20 NA 70
    VM-4 60 50 70 NA
  • Table 2 lists the perceived end-user latencies when the VMs operate from different sites as illustrated in FIG. 2. This information table illustrates one of the operating parameters (latency) and its associated values, which are used to choose a secondary VM cloud site. Further, in another exemplary embodiment, if the VM image data is similar to an unrelated third VM image data, then the operating parameter values of the third VM image data are utilized to find a site for the intended VM image data replica. The comparison between the intended VM image data and the third unrelated VM image data can extend to the third VM image data's latencies, the size of the VM image data, the network bandwidth, power consumption, and the number of users allowed, among other parameters.
  • TABLE 2
    Site-1 Site-2 Site-3 Site-4
    VM-1 4 8 3 2
    VM-2 3 2 4 2
    VM-3 2 3 6 7
    VM-4 8 4 3 2
  • The latency rules are followed according to the combinations set by Table 2 and illustrated in FIG. 2, where the following combinations are never implemented because of high latency: VM4 at Site-1, VM1 at Site-2, and VM3 at Sites 3 and 4. However, VM1 at Site-1, Site-3 and Site-4 are eligible combinations having reasonable latency values approved by a client in, for instance, an SLA. Similarly, eligible sites for VM2, VM3 and VM4 can be determined (see the sketch after this paragraph). Further, using data from Table 1 and Table 2, the virtual machines VM1, VM2 and VM3 can each have a replica at site S1. However, because content commonality between the pairs VM3:VM1 and VM3:VM2 is not high, the cost of maintaining only one replica (an instance of VM3 at S2) is compared against the cost of additional storage due to two replicas (one instance of VM3 at S2 and a new instance of VM3 at S1). In the case of the only instance of VM3 at S2, VM3 will be operational during all the scheduling periods, t1 to t4, as illustrated in FIG. 2 at site 2 210. To overcome the high operation cost at t4 of VM3, VM3 can be implemented at S1; the new instance of VM3 at S1 205 is scheduled for execution only during time-slot t4, as illustrated in element 230 of site 1 205. As a result, the instance of VM3 at S2 is scheduled for execution during periods t1 to t3. When the cost of additional storage due to VM3 at S1 is more than the operating cost of VM3 at S2 during scheduling period t4, it suffices to have only one replica of VM3 (at S2). Similarly, VMs VM2, VM3 and VM4 are candidates for S2. However, since VM2 has little in common with VM3 and VM4, and if we assume that VM2 has replicas at S3 and S4, we decide not to replicate VM2 at S2. The final placement of the replicas for the virtual machine files belonging to the four VMs is shown in FIG. 2. The scheduler 110 in FIG. 1 now draws a schedule for executing the VMs (this involves choosing one of the replicas of a VM as a primary copy) at the four sites in a manner that either balances the load equitably across the different cloud sites or optimizes a cost function.
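  • Eligibility by latency, as walked through above, could be computed directly from Table 2; the sketch below (Python) assumes a hypothetical SLA latency limit of 5 and simply filters sites per VM.

```python
# Perceived end-user latencies from Table 2 (rows: VMs, columns: sites)
LATENCY = {
    "VM-1": {"Site-1": 4, "Site-2": 8, "Site-3": 3, "Site-4": 2},
    "VM-2": {"Site-1": 3, "Site-2": 2, "Site-3": 4, "Site-4": 2},
    "VM-3": {"Site-1": 2, "Site-2": 3, "Site-3": 6, "Site-4": 7},
    "VM-4": {"Site-1": 8, "Site-2": 4, "Site-3": 3, "Site-4": 2},
}

def eligible_sites(vm, max_latency):
    """Sites whose perceived latency for `vm` is within the SLA limit."""
    return sorted(site for site, lat in LATENCY[vm].items() if lat <= max_latency)

# With a hypothetical limit of 5, the high-latency combinations named in
# the text (VM4 at Site-1, VM1 at Site-2, VM3 at Sites 3 and 4) drop out.
for vm in LATENCY:
    print(vm, eligible_sites(vm, max_latency=5))
```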
  • When the scheduler determines that load balancing across clouds is a priority (e.g., a step-wise cost function as shown in FIG. 2), the scheduler will schedule execution of VM3 at S2 and of VM4 at S3 during time slots t2, t3 and t4, and schedule execution of VM4 at S4 during time-slot t1. Similarly, the scheduler schedules execution of VM2 at S4 and VM4 at S1 during time-slot t4. However, if the objective is to minimize the number of inter-cloud migrations (e.g., due to reasons related to performance), then the scheduler schedules execution of VM1 at S4 during time-slot t4.
  • FIG. 3 illustrates the scheduling process implemented according to an exemplary embodiment. The scheduling period 355 indicates the periods when data is collected by the SC 320. The replication horizon 360, for replica placement, occurs every few scheduling cycles and indicates when the replica is updated and when the de-duplication of the replicas is initiated. The replication horizon 360 is also called the replica placement horizon. The scheduling periods are encapsulated by a replication horizon over pre-determined time periods. If the replication and de-duplication processes are combined into a single SM module for the purposes of incremental update propagation, then the replication (used here to describe incremental update propagation) is initiated based on the schedule by analyzing the hash index of each SM at the primary site and the replica sites. The SM module is then capable of de-duplicating and propagating the new data chunks to the various secondary replicas in the secondary sites. Alternatively, the SM module only tracks the hash index via the de-duplication module 305, while the de-duplicated data is sent to the replication function 720 of FIG. 7 for propagation to the secondary replicas. Further, it is appreciated that the replication module 315 can be limited to decisions on the placement of the replicas, and is, therefore, the same as the replica placement manager 115 in FIG. 1, but is different from the replication function 720 in FIG. 7, which serves to propagate the incremental updates from the primary replica to the secondary replicas using different write-coalescing and compression techniques. In this example, only the changes made from the start of the last update to the current update in the primary VM image are allowed for replication by the de-duplication module 305. As a result, the de-duplicated data blocks are placed at the appropriate secondary VM sites. A scheduling module 325 logs the update to the old replica and implements a new schedule 340 for selection of a new primary VM image. This is an illustration of time-based selection of the primary VM image from amongst the replicas of a VM image according to an exemplary embodiment. The timer or event module 330 might, alternatively, trigger a schedule event to be implemented via the scheduling module 325. Scheduling periods occur over intervals in the order of hours, while replication (for placement of VM image replicas) horizons occur in the order of days or even months. Further, propagation of the incremental updates from one primary replica to a secondary replica happens in intervals of a few seconds. A person skilled in the art will be able to recognize from the context being described that 'replication' is used to either describe placement of whole VM images at different sites or propagation of the incremental updates from the primary VM image to the secondary VM images.
  • In an exemplary embodiment for placement of initial replicas 335 of a primary VM image, an event-driven or time-driven replication is initiated by the replication module 315 for replica placement. The statistics collected in the SC module 320 during different scheduling periods (or previous replication horizons) are used to determine initial replica placement for the next replication horizon 360 at a subsequent time 365 in a pre-determined schedule. The combination of replication and scheduling, in conjunction with content-based hashing for duplicate identification and storage mirroring, is used to minimize the network latencies while placing an initial VM image replica in a remote VM cloud site. The granularity in the case of initial replica placement is of the order of weeks or months. However, the granularity for update propagation will be of the order of minutes, seconds, or even sub-seconds.
  • FIG. 4 illustrates a system and method according to an exemplary embodiment of updating VM images efficiently in a WAN by implementing a hash comparison between the active index 410 and the stale index 415 at the primary VM backend computing device. The primary image copy 460 is broken down into constituent data blocks, which are stored as hash objects for comparison. Hash comparison using Rabin fingerprints is implemented between the active and stale indices to identify any update to the image data files at the primary cloud site 405. At scheduled intervals, the indices are compared and asynchronous updates are applied to chosen replicas at the different secondary cloud sites 440, 430 and 420 using only the new data blocks. The index 450 in each secondary cloud site is updated to indicate when the replica VM image was updated and with index data on the new data block. In an exemplary embodiment, one of the secondary cloud sites can be used as a permanent backup site from which the primary VM image cannot be activated, but can be used for retrieval of the primary VM image. A sketch of the index comparison follows.
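  • The active/stale index comparison of FIG. 4 could be sketched as follows (Python), using SHA-1 digests as a stand-in for the Rabin-fingerprint hashes; the block contents and index layout are illustrative.

```python
import hashlib

def build_index(blocks):
    """Index a VM image by the hash of each constituent data block."""
    return {hashlib.sha1(b).hexdigest(): b for b in blocks}

def incremental_update(stale_index, active_blocks):
    """Compare the active index against the stale index and return only
    the new blocks; these are what is pushed asynchronously to the
    secondary replicas."""
    active_index = build_index(active_blocks)
    return {h: blk for h, blk in active_index.items() if h not in stale_index}

stale = build_index([b"block-1", b"block-2"])
updates = incremental_update(stale, [b"block-1", b"block-2", b"block-3"])
print(len(updates))   # 1 -> only the new block travels over the WAN
```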
  • FIG. 5 illustrates the live migration of VM images from site-A 505 to site-B 510 after a shutdown has been effected in VM image 1 525. The other VMs 515 in cloud site-A 505 have been disregarded due to overall costs at site-A rather than the capacity of the physical hosting device at site-A. The live migration method implemented by the ECM through its sub-modules initiates a copy of the VM image file and associated data files to site-B 510. However, if the data file accessed by the VM image is in a SAN and is already shared seamlessly across multiple sites, then there is no need to move additional files. FIG. 6 illustrates the hiber-waking process 600, where the primary VM image 635 is deactivated (or hibernated 640) at site-A 605 and an updated replica VM image in a secondary site-B is activated (or awakened from hibernation 640). As a result, the data files associated with the previously active primary VM image at site 605 are now linked to the new primary VM image (previously a replica VM image) at site 610. The previously active primary VM image at 605 is hibernated and designated as a new replica VM image. The new replica VM image will be updated using asynchronous updates made to the new primary VM image at 610. Storage area 630 in either site represents primary storage without large processing capabilities. Hibernation of a replica VM image, as illustrated in the case of site-A 605, enables the operational costs at 605 to be much lower, because of the decreased computational requirements and, therefore, the lower power and network bandwidth required to sustain the physical machine hosting the replica VM image at site-A 605.
  • FIG. 7 illustrates the interaction between various software modules of the ECM, along with the data flow across the various modules. Site managers (SM) 705 and 795 maintain reference tables for identifying the location of a shared storage (SAN) 780 and 745 in the case that a SAN is used to store the contents of a chunk of the VM image file for the particular site. Alternatively, the contents of a local storage 780 and 745 are indexed by SMs 705 and 795 for identifying new chunks of the image file, where the new chunks are identified by the de-duplication function 775 of the SM module on the primary site. The primary local host 710 provides the physical resources for an exemplary VM image file accessed from the site illustrated in FIG. 7. From the embodiments for combining de-duplication and replication, it is appreciated that such a combined module can be controlled by the SM or the HWPM, where appropriate.
  • The relationship between the target hosts 710 and 790 is also illustrated in FIG. 7. The replication function 720 within the HWPM 715 module performs the replication of new/updated image file data chunks between the primary host 710 and remote secondary or shared devices 790 through networks 725 and 735. A primary host based splitter software is used to duplicate write data for storage in multiple storage devices for the multiple VM image sites. The application write flow from the local host 710 for the primary VM image file is sent to the local SAN 780, and then to function 775 for de-duplication. The de-duplicated data is sent for writing into the local storage 765. The SAN operating system (OS) controls the operations of data input and output between the SM software modules. The splitter in the local SAN 780 assigns the de-duplicated data to the secondary sites via the replication function 720. The use of a splitter-based approach doubles the number of writes that are initiated by the host HWPM 715. The splitter may also allow parallel writes to multiple VM sites. This process requires a small processor load, and negligible incremental input/output load on the network, as long as the host adapter on the primary site is not saturated. The software driver for the splitter is resident in the local SAN operating system 785 driver stack, under the file system and volume manager of the OS, and interacts with the port, miniport and the multi-pathing drivers of the host device as illustrated in FIG. 7. The service manager component polls and negotiates between site managers during the migration process to indicate the control of the primary image data and the status of the de-duplication efforts implemented by the primary site 705.
  • Additionally, FIG. 7 demonstrates the replication of VM image data from the primary local host 710 prior to the movement from a local host and to a remote host for the purposes of migrating the primary VM image between physical devices in the cloud sites. Replication functions can also reside within the SAN switch of the SAN OS for the replication of SAN image data to remote devices. This migration process requires specialized switching hardware components for routing de-duplicated writes to the various storage devices 765, 745, and duplicate writes to the de-duplication functions and storage device 780.
  • FIG. 8 illustrates the system of this disclosure after the HWPM 815 has completed re-designation of the primary VM image to the remote host 890 and has designated the previous primary host 810 as a secondary site. Thereafter, de-duplication of SAN data can be implemented using the local de-duplication function 850, controlled by site SM 895. The de-duplicated write flow from the remote storage 855 of the new primary site comprising host 890 is directed to the replication function for transmission to the old primary site (now an optional remote site) comprising host 810 via remote storage 880 and 865.
  • FIG. 9 a and FIG. 9 b illustrate the incremental updates applied to the secondary sites, site-B and site-C, using the de-duplicated data from primary site-A 900 and 950. Site-D can be used as a permanent backup (for disaster recovery (DR) purposes) for any of the secondary sites or the primary site. Site-D is given high-priority scheduling during replication (for update propagation) for any new data from the primary site-A. Should site-D encounter a failure, priority can be assigned to one of the other secondary sites (B or C) to undertake disaster management responsibilities. FIG. 9 a further illustrates an exemplary embodiment of a VM image file that is stored at site-D for DR purposes, where site-D is set at high priority for replication from primary site-A. This image file can be accessed for backup by the primary VM image at site-A if the primary image file at that site fails or becomes damaged as a result of a natural disaster or any other type of disaster. The DR image file can also be accessed by the secondary sites, B and C. When a secondary site is re-designated as a primary site, the incremental updates are propagated from the primary site for the VM image file and from the DR host site (e.g., site-D in FIG. 9 b) for the backup files, in case of a failure at the primary site during the re-designation process.
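The priority given to the DR site during update propagation, and the hand-off of that priority when site-D fails, can be sketched as a small scheduling routine. The site names, numeric priorities, and the schedule_replication helper below are illustrative assumptions rather than the claimed mechanism.

```python
def schedule_replication(sites, failed=frozenset()):
    """Order the secondary sites for update propagation from the primary.

    `sites` maps site name -> priority (lower number = higher priority), and
    the DR site is expected to carry the highest priority; if the DR site
    has failed, its priority is handed to the best surviving secondary.
    """
    live = {name: prio for name, prio in sites.items() if name not in failed}
    if "site-D" in failed and live:
        promoted = min(live, key=live.get)   # takes over DR responsibilities
        live[promoted] = sites["site-D"]
    return sorted(live, key=live.get)

secondaries = {"site-B": 2, "site-C": 3, "site-D": 1}   # site-D = DR backup
assert schedule_replication(secondaries)[0] == "site-D"
assert schedule_replication(secondaries, failed={"site-D"})[0] == "site-B"
```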
  • The embodiments described above are intended to be exemplary. One skilled in the art will recognize that numerous alternative components and embodiments may be substituted for the particular examples described herein while still falling within the scope of the invention.

Claims (12)

1. A computer-implemented method of automatically replicating and migrating live virtual machines, the method comprising:
comparing, in a primary backend computing device, a plurality of first virtual machine image components from a first virtual machine image and a plurality of second virtual machine image components from updates applied to the first virtual machine image, to identify new virtual machine image components;
updating, in each of a plurality of secondary backend computing devices, a replica of the first virtual machine image with the new virtual machine image components;
calculating, in the primary backend computing device, a plurality of operating parameter values for each of the plurality of secondary backend computing devices and the primary backend computing device;
comparing, in the primary backend computing device, the operating parameter values for each of the plurality of secondary backend computing devices and the primary backend computing device, wherein an operating range within limits of the operating parameter values is defined for each operating parameter by a computer-coded business rule;
selecting at least one secondary backend computing device from the plurality of backend computing devices, where the operating parameter values of the selected secondary backend computing device are within the range of the limits and the operating parameter values of the primary backend computing device are outside the range of the limits; and
activating, in the selected secondary backend computing device, the replica of the updated first virtual machine image as a new first virtual machine, thereby designating the selected secondary backend computing device as a new primary backend computing device and re-designating the primary backend computing device as a secondary backend computing device.
2. The method according to claim 1, wherein the plurality of operating parameter values are calculated from a plurality of input parameters provided from each of the primary and secondary backend computing devices.
3. The method according to claim 2, wherein the operating parameters include network bandwidth, processor consumption, memory capacity, power consumed, heat generated, number of access users allowed and cost of resources.
4. The method according to claim 1, wherein comparing the first and second virtual machine image components is performed by a content-based redundancy elimination method, including Rabin fingerprints.
5. The method according to claim 1, wherein the operating parameters and the operating ranges in the computer-coded business rule are defined by a service level agreement (SLA) between a virtual machine service provider and a client of the virtual machine service provider.
6. The method according to claim 1, wherein updating a replica of the first virtual machine is performed by implementing a write coalescing of the new active virtual machine image components, and then compressing the new active virtual machine image components.
7. A computer-implemented system for automatically replicating and migrating live virtual machines, the system comprising:
comparing, in a primary backend computing device, a plurality of first virtual machine image components from a first virtual machine image and a plurality of second virtual machine image components from updates applied to the first virtual machine image, to identify new virtual machine image components;
updating, in each of a plurality of secondary backend computing devices, a replica of the first virtual machine image with the new virtual machine image components;
calculating, in the primary backend computing device, a plurality of operating parameter values for each of the plurality of secondary backend computing devices and the primary backend computing device;
comparing, in the primary backend computing device, the operating parameter values for each of the plurality of secondary backend computing devices and the primary backend computing device, wherein an operating range within limits of the operating parameter values is defined for each operating parameter by a computer-coded business rule;
selecting at least one secondary backend computing device from the plurality of backend computing devices, where the operating parameter values of the selected secondary backend computing device are within the range of the limits and the operating parameter values of the primary backend computing device are outside the range of the limits; and
activating, in the selected secondary backend computing device, the replica of the updated first virtual machine image as a new first virtual machine, thereby designating the selected secondary backend computing device as a new primary backend computing device and re-designating the primary backend computing device as a secondary backend computing device.
8. The system according to claim 7, wherein the plurality of operating parameter values are calculated from a plurality of input parameters provided from each of the primary and secondary backend computing devices.
9. The system according to claim 8, wherein the operating parameters include network bandwidth, processor consumption, memory capacity, power consumed, heat generated, number of access users allowed and cost of resources.
10. The system according to claim 7, wherein comparing the first and second virtual machine image components is performed by a content-based redundancy elimination method, including Rabin fingerprints.
11. The system according to claim 7, wherein the operating parameters and the operating ranges in the computer-coded business rule are defined by a service level agreement (SLA) between a virtual machine service provider and a client of the virtual machine service provider.
12. The system according to claim 7, wherein updating a replica of the first virtual machine is performed by implementing a write coalescing of the new active virtual machine image components, and then compressing the new active virtual machine image components.
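For illustration, the selection step recited in claim 1 (and mirrored in claim 7) can be sketched as follows: each backend device reports operating parameter values, a computer-coded business rule defines an allowed range per parameter, and a secondary device is selected only when its values are within range while the primary's are not. The parameter names, limits, and helper functions below are assumptions made for the example and are not drawn from the specification.

```python
BUSINESS_RULE = {                     # operating ranges per a coded business rule
    "processor_consumption": (0.0, 0.80),   # fraction of CPU capacity
    "network_bandwidth":     (0.0, 0.90),   # fraction of link utilization
    "power_consumed":        (0.0, 5000.0), # watts
}

def within_limits(params: dict) -> bool:
    """True when every operating parameter falls inside its allowed range."""
    return all(lo <= params[name] <= hi
               for name, (lo, hi) in BUSINESS_RULE.items())

def select_new_primary(primary: dict, secondaries: dict):
    """Pick a secondary backend device whose replica should be activated.

    Migration is considered only when the primary violates the rule; among
    the compliant secondaries the first match is returned (a fuller system
    could rank candidates by cost or another operating parameter).
    """
    if within_limits(primary):
        return None                   # primary is healthy; no migration
    for name, params in secondaries.items():
        if within_limits(params):
            return name               # activate this device's replica
    return None

primary = {"processor_consumption": 0.95, "network_bandwidth": 0.40,
           "power_consumed": 4200.0}
secondaries = {
    "site-B": {"processor_consumption": 0.30, "network_bandwidth": 0.20,
               "power_consumed": 2100.0},
    "site-C": {"processor_consumption": 0.85, "network_bandwidth": 0.10,
               "power_consumed": 1900.0},
}
assert select_new_primary(primary, secondaries) == "site-B"
```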
US12/959,091 2010-10-05 2010-12-02 Automatic replication and migration of live virtual machines Abandoned US20120084445A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US12/959,091 US20120084445A1 (en) 2010-10-05 2010-12-02 Automatic replication and migration of live virtual machines
EP11831543.1A EP2625605A4 (en) 2010-10-05 2011-10-05 Automatic replication and migration of live virtual machines
PCT/US2011/054975 WO2012048037A2 (en) 2010-10-05 2011-10-05 Automatic replication and migration of live virtual machines
CA2813561A CA2813561A1 (en) 2010-10-05 2011-10-05 Automatic replication and migration of live virtual machines
AU2011312036A AU2011312036B2 (en) 2010-10-05 2011-10-05 Automatic replication and migration of live virtual machines

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US38974810P 2010-10-05 2010-10-05
US12/959,091 US20120084445A1 (en) 2010-10-05 2010-12-02 Automatic replication and migration of live virtual machines

Publications (1)

Publication Number Publication Date
US20120084445A1 true US20120084445A1 (en) 2012-04-05

Family

ID=45890763

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/959,086 Active 2031-11-07 US9110727B2 (en) 2010-10-05 2010-12-02 Automatic replication of virtual machines
US12/959,091 Abandoned US20120084445A1 (en) 2010-10-05 2010-12-02 Automatic replication and migration of live virtual machines

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/959,086 Active 2031-11-07 US9110727B2 (en) 2010-10-05 2010-12-02 Automatic replication of virtual machines

Country Status (5)

Country Link
US (2) US9110727B2 (en)
EP (2) EP2625604A2 (en)
AU (2) AU2011312036B2 (en)
CA (2) CA2813560A1 (en)
WO (2) WO2012048037A2 (en)

Cited By (149)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120180045A1 (en) * 2011-01-11 2012-07-12 International Business Machines Corporation Determining an optimal computing environment for running an image
US20120216052A1 (en) * 2011-01-11 2012-08-23 Safenet, Inc. Efficient volume encryption
US20120226699A1 (en) * 2011-03-03 2012-09-06 Mark David Lillibridge Deduplication while rebuilding indexes
US20120239739A1 (en) * 2011-02-09 2012-09-20 Gaurav Manglik Apparatus, systems and methods for dynamic adaptive metrics based application deployment on distributed infrastructures
US20120243795A1 (en) * 2011-03-22 2012-09-27 International Business Machines Corporation Scalable image distribution in virtualized server environments
US20120254131A1 (en) * 2011-03-30 2012-10-04 International Business Machines Corporation Virtual machine image co-migration
US20120271797A1 (en) * 2011-04-22 2012-10-25 Symantec Corporation Reference volume for initial synchronization of a replicated volume group
US20120290460A1 (en) * 2011-05-09 2012-11-15 Curry Jr Steven Lynn Composite Public Cloud, Method and System
US20120304170A1 (en) * 2011-05-27 2012-11-29 Morgan Christopher Edwin Systems and methods for introspective application reporting to facilitate virtual machine movement between cloud hosts
US20120311574A1 (en) * 2011-06-02 2012-12-06 Fujitsu Limited System and method for providing evidence of the physical presence of virtual machines
US20130151558A1 (en) * 2011-12-12 2013-06-13 Telefonaktiebolaget L M Ericsson (Publ) Methods and apparatus for implementing a distributed database
US20130212578A1 (en) * 2012-02-14 2013-08-15 Vipin Garg Optimizing traffic load in a communications network
US20130238786A1 (en) * 2012-03-08 2013-09-12 Empire Technology Development Llc Secure migration of virtual machines
US20130290274A1 (en) * 2012-04-25 2013-10-31 International Business Machines Corporation Enhanced reliability in deduplication technology over storage clouds
US20130290952A1 (en) * 2012-04-25 2013-10-31 Jerry W. Childers, JR. Copying Virtual Machine Templates To Cloud Regions
US20130305046A1 (en) * 2012-05-14 2013-11-14 Computer Associates Think, Inc. System and Method for Virtual Machine Data Protection in a Public Cloud
US20130311989A1 (en) * 2012-05-21 2013-11-21 Hitachi, Ltd. Method and apparatus for maintaining a workload service level on a converged platform
US20130326175A1 (en) * 2012-05-31 2013-12-05 Michael Tsirkin Pre-warming of multiple destinations for fast live migration
US20130326173A1 (en) * 2012-05-31 2013-12-05 Michael Tsirkin Multiple destination live migration
US20130326174A1 (en) * 2012-05-31 2013-12-05 Michael Tsirkin Pre-warming destination for fast live migration
US20140007094A1 (en) * 2012-06-29 2014-01-02 International Business Machines Corporation Method and apparatus to replicate stateful virtual machines between clouds
US20140040893A1 (en) * 2012-08-03 2014-02-06 International Business Machines Corporation Selecting provisioning targets for new virtual machine instances
US20140052973A1 (en) * 2012-08-14 2014-02-20 Alcatel-Lucent India Limited Method And Apparatus For Providing Traffic Re-Aware Slot Placement
US20140059200A1 (en) * 2012-08-21 2014-02-27 Cisco Technology, Inc. Flow de-duplication for network monitoring
US20140229939A1 (en) * 2013-02-14 2014-08-14 International Business Machines Corporation System and method for determining when cloud virtual machines need to be updated
US20140280949A1 (en) * 2013-03-13 2014-09-18 International Business Machines Corporation Load balancing for a virtual networking system
US8850130B1 (en) 2011-08-10 2014-09-30 Nutanix, Inc. Metadata for managing I/O and storage for a virtualization
US8863124B1 (en) 2011-08-10 2014-10-14 Nutanix, Inc. Architecture for managing I/O and storage for a virtualization environment
WO2014189481A1 (en) * 2013-05-20 2014-11-27 Empire Technology Development, Llc Object migration between cloud environments
US8935695B1 (en) * 2012-07-12 2015-01-13 Symantec Corporation Systems and methods for managing multipathing configurations for virtual machines
WO2015016805A1 (en) * 2013-07-29 2015-02-05 Hitachi, Ltd. Method and apparatus to conceal the configuration and processing of the replication by virtual storage
US20150052323A1 (en) * 2013-08-16 2015-02-19 Red Hat Israel, Ltd. Systems and methods for memory deduplication by destination host in virtual machine live migration
US20150052322A1 (en) * 2013-08-16 2015-02-19 Red Hat Israel, Ltd. Systems and methods for memory deduplication by origin host in virtual machine live migration
US20150081910A1 (en) * 2013-09-19 2015-03-19 International Business Machines Corporation System, method and program product for updating virtual machine images
US8997097B1 (en) 2011-08-10 2015-03-31 Nutanix, Inc. System for implementing a virtual disk in a virtualization environment
US9009106B1 (en) 2011-08-10 2015-04-14 Nutanix, Inc. Method and system for implementing writable snapshots in a virtualized storage environment
WO2015054582A1 (en) * 2013-10-11 2015-04-16 Vmware, Inc. Methods and apparatus to manage virtual machines
US20150112941A1 (en) * 2013-10-18 2015-04-23 Power-All Networks Limited Backup management system and method thereof
US9047108B1 (en) * 2012-09-07 2015-06-02 Symantec Corporation Systems and methods for migrating replicated virtual machine disks
US20150199205A1 (en) * 2014-01-10 2015-07-16 Dell Products, Lp Optimized Remediation Policy in a Virtualized Environment
US9104455B2 (en) 2013-02-19 2015-08-11 International Business Machines Corporation Virtual machine-to-image affinity on a physical server
US9110693B1 (en) * 2011-02-17 2015-08-18 Emc Corporation VM mobility over distance
US20150331704A1 (en) * 2014-05-19 2015-11-19 International Business Machines Corporation Agile vm load balancing through micro-checkpointing and multi-architecture emulation
US9201704B2 (en) 2012-04-05 2015-12-01 Cisco Technology, Inc. System and method for migrating application virtual machines in a network environment
US20150370659A1 (en) * 2014-06-23 2015-12-24 Vmware, Inc. Using stretched storage to optimize disaster recovery
US9223634B2 (en) 2012-05-02 2015-12-29 Cisco Technology, Inc. System and method for simulating virtual machine migration in a network environment
US20150378758A1 (en) * 2014-06-26 2015-12-31 Vmware, Inc. Processing Virtual Machine Objects through Multistep Workflows
US9246985B2 (en) * 2011-06-28 2016-01-26 Novell, Inc. Techniques for prevent information disclosure via dynamic secure cloud resources
US20160026501A1 (en) * 2013-03-19 2016-01-28 Emc Corporation Managing provisioning of storage resources
US9354912B1 (en) 2011-08-10 2016-05-31 Nutanix, Inc. Method and system for implementing a maintenance service for managing I/O and storage for a virtualization environment
US9389970B2 (en) * 2013-11-01 2016-07-12 International Business Machines Corporation Selected virtual machine replication and virtual machine restart techniques
US9424058B1 (en) * 2013-09-23 2016-08-23 Symantec Corporation File deduplication and scan reduction in a virtualization environment
US9438670B2 (en) 2013-03-13 2016-09-06 International Business Machines Corporation Data replication for a virtual networking system
US9442792B2 (en) 2014-06-23 2016-09-13 Vmware, Inc. Using stretched storage to optimize disaster recovery
EP2975518A4 (en) * 2013-03-15 2016-11-02 Nec Corp Information processing system and method for relocating application
US9652265B1 (en) 2011-08-10 2017-05-16 Nutanix, Inc. Architecture for managing I/O and storage for a virtualization environment with multiple hypervisor types
US20170147819A1 (en) * 2015-11-20 2017-05-25 Lastline, Inc. Methods and systems for maintaining a sandbox for use in malware detection
US20170180331A1 (en) * 2013-03-15 2017-06-22 Netiq Corporation Techniques for secure data extraction in a virtual or cloud environment
US9747287B1 (en) 2011-08-10 2017-08-29 Nutanix, Inc. Method and system for managing metadata for a virtualization environment
US9772866B1 (en) 2012-07-17 2017-09-26 Nutanix, Inc. Architecture for implementing a virtualization environment and appliance
US20170277555A1 (en) * 2016-03-26 2017-09-28 Vmware, Inc. Efficient vm migration across cloud using catalog aware compression
US20170359221A1 (en) * 2015-04-10 2017-12-14 Hitachi, Ltd. Method and management system for calculating billing amount in relation to data volume reduction function
US20180024854A1 (en) * 2015-03-27 2018-01-25 Intel Corporation Technologies for virtual machine migration
US9935894B2 (en) 2014-05-08 2018-04-03 Cisco Technology, Inc. Collaborative inter-service scheduling of logical resources in cloud platforms
US10034201B2 (en) 2015-07-09 2018-07-24 Cisco Technology, Inc. Stateless load-balancing across multiple tunnels
US10037617B2 (en) 2015-02-27 2018-07-31 Cisco Technology, Inc. Enhanced user interface systems including dynamic context selection for cloud-based networks
US10050862B2 (en) 2015-02-09 2018-08-14 Cisco Technology, Inc. Distributed application framework that uses network and application awareness for placing data
US10067780B2 (en) 2015-10-06 2018-09-04 Cisco Technology, Inc. Performance-based public cloud selection for a hybrid cloud environment
US10083062B2 (en) * 2015-07-31 2018-09-25 Cisco Technology, Inc. Data suppression for faster migration
US10084703B2 (en) 2015-12-04 2018-09-25 Cisco Technology, Inc. Infrastructure-exclusive service forwarding
US10108644B1 (en) * 2014-03-12 2018-10-23 EMC IP Holding Company LLC Method for minimizing storage requirements on fast/expensive arrays for data mobility and migration
US10122605B2 (en) 2014-07-09 2018-11-06 Cisco Technology, Inc Annotation of network activity through different phases of execution
US10129177B2 (en) 2016-05-23 2018-11-13 Cisco Technology, Inc. Inter-cloud broker for hybrid cloud networks
US10140172B2 (en) 2016-05-18 2018-11-27 Cisco Technology, Inc. Network-aware storage repairs
US10142346B2 (en) 2016-07-28 2018-11-27 Cisco Technology, Inc. Extension of a private cloud end-point group to a public cloud
US10140112B2 (en) * 2014-03-28 2018-11-27 Ntt Docomo, Inc. Update management system and update management method
US10205677B2 (en) 2015-11-24 2019-02-12 Cisco Technology, Inc. Cloud resource placement optimization and migration execution in federated clouds
US10212074B2 (en) 2011-06-24 2019-02-19 Cisco Technology, Inc. Level of hierarchy in MST for traffic localization and load balancing
US10222986B2 (en) 2015-05-15 2019-03-05 Cisco Technology, Inc. Tenant-level sharding of disks with tenant-specific storage modules to enable policies per tenant in a distributed storage system
US10243823B1 (en) 2017-02-24 2019-03-26 Cisco Technology, Inc. Techniques for using frame deep loopback capabilities for extended link diagnostics in fibre channel storage area networks
US10243826B2 (en) 2015-01-10 2019-03-26 Cisco Technology, Inc. Diagnosis and throughput measurement of fibre channel ports in a storage area network environment
US10257042B2 (en) 2012-01-13 2019-04-09 Cisco Technology, Inc. System and method for managing site-to-site VPNs of a cloud managed network
US10254991B2 (en) 2017-03-06 2019-04-09 Cisco Technology, Inc. Storage area network based extended I/O metrics computation for deep insight into application performance
US10263898B2 (en) 2016-07-20 2019-04-16 Cisco Technology, Inc. System and method for implementing universal cloud classification (UCC) as a service (UCCaaS)
US10284433B2 (en) * 2015-06-25 2019-05-07 International Business Machines Corporation Data synchronization using redundancy detection
US10303534B2 (en) 2017-07-20 2019-05-28 Cisco Technology, Inc. System and method for self-healing of application centric infrastructure fabric memory
US10320683B2 (en) 2017-01-30 2019-06-11 Cisco Technology, Inc. Reliable load-balancer using segment routing and real-time application monitoring
US10326838B2 (en) * 2016-09-23 2019-06-18 Microsoft Technology Licensing, Llc Live migration of probe enabled load balanced endpoints in a software defined network
US10326817B2 (en) 2016-12-20 2019-06-18 Cisco Technology, Inc. System and method for quality-aware recording in large scale collaborate clouds
US10334029B2 (en) 2017-01-10 2019-06-25 Cisco Technology, Inc. Forming neighborhood groups from disperse cloud providers
US10353800B2 (en) 2017-10-18 2019-07-16 Cisco Technology, Inc. System and method for graph based monitoring and management of distributed systems
US10367914B2 (en) 2016-01-12 2019-07-30 Cisco Technology, Inc. Attaching service level agreements to application containers and enabling service assurance
US10382597B2 (en) 2016-07-20 2019-08-13 Cisco Technology, Inc. System and method for transport-layer level identification and isolation of container traffic
US10382274B2 (en) 2017-06-26 2019-08-13 Cisco Technology, Inc. System and method for wide area zero-configuration network auto configuration
US10382534B1 (en) 2015-04-04 2019-08-13 Cisco Technology, Inc. Selective load balancing of network traffic
US10404596B2 (en) 2017-10-03 2019-09-03 Cisco Technology, Inc. Dynamic route profile storage in a hardware trie routing table
US20190273779A1 (en) * 2018-03-01 2019-09-05 Hewlett Packard Enterprise Development Lp Execution of software on a remote computing system
US10425288B2 (en) 2017-07-21 2019-09-24 Cisco Technology, Inc. Container telemetry in data center environments with blade servers and switches
US10432532B2 (en) 2016-07-12 2019-10-01 Cisco Technology, Inc. Dynamically pinning micro-service to uplink port
US10439877B2 (en) 2017-06-26 2019-10-08 Cisco Technology, Inc. Systems and methods for enabling wide area multicast domain name system
US10454984B2 (en) 2013-03-14 2019-10-22 Cisco Technology, Inc. Method for streaming packet captures from network access devices to a cloud server over HTTP
US10461959B2 (en) 2014-04-15 2019-10-29 Cisco Technology, Inc. Programmable infrastructure gateway for enabling hybrid cloud services in a network environment
US10462136B2 (en) 2015-10-13 2019-10-29 Cisco Technology, Inc. Hybrid cloud security groups
US10467103B1 (en) 2016-03-25 2019-11-05 Nutanix, Inc. Efficient change block training
US10476982B2 (en) 2015-05-15 2019-11-12 Cisco Technology, Inc. Multi-datacenter message queue
US10511534B2 (en) 2018-04-06 2019-12-17 Cisco Technology, Inc. Stateless distributed load-balancing
US10523592B2 (en) 2016-10-10 2019-12-31 Cisco Technology, Inc. Orchestration system for migrating user data and services based on user information
US10523657B2 (en) 2015-11-16 2019-12-31 Cisco Technology, Inc. Endpoint privacy preservation with cloud conferencing
US10541866B2 (en) 2017-07-25 2020-01-21 Cisco Technology, Inc. Detecting and resolving multicast traffic performance issues
US10545914B2 (en) 2017-01-17 2020-01-28 Cisco Technology, Inc. Distributed object storage
US10552191B2 (en) 2017-01-26 2020-02-04 Cisco Technology, Inc. Distributed hybrid cloud orchestration model
US10567344B2 (en) 2016-08-23 2020-02-18 Cisco Technology, Inc. Automatic firewall configuration based on aggregated cloud managed information
US10585830B2 (en) 2015-12-10 2020-03-10 Cisco Technology, Inc. Policy-driven storage in a microserver computing environment
US10601693B2 (en) 2017-07-24 2020-03-24 Cisco Technology, Inc. System and method for providing scalable flow monitoring in a data center fabric
US10608865B2 (en) 2016-07-08 2020-03-31 Cisco Technology, Inc. Reducing ARP/ND flooding in cloud environment
US10664169B2 (en) 2016-06-24 2020-05-26 Cisco Technology, Inc. Performance of object storage system by reconfiguring storage devices based on latency that includes identifying a number of fragments that has a particular storage device as its primary storage device and another number of fragments that has said particular storage device as its replica storage device
US10671571B2 (en) 2017-01-31 2020-06-02 Cisco Technology, Inc. Fast network performance in containerized environments for network function virtualization
US10708342B2 (en) 2015-02-27 2020-07-07 Cisco Technology, Inc. Dynamic troubleshooting workspaces for cloud and network management systems
US10705882B2 (en) 2017-12-21 2020-07-07 Cisco Technology, Inc. System and method for resource placement across clouds for data intensive workloads
US10713203B2 (en) 2017-02-28 2020-07-14 Cisco Technology, Inc. Dynamic partition of PCIe disk arrays based on software configuration / policy distribution
US10728361B2 (en) 2018-05-29 2020-07-28 Cisco Technology, Inc. System for association of customer information across subscribers
US10739983B1 (en) 2019-04-10 2020-08-11 Servicenow, Inc. Configuration and management of swimlanes in a graphical user interface
US10764266B2 (en) 2018-06-19 2020-09-01 Cisco Technology, Inc. Distributed authentication and authorization for rapid scaling of containerized services
US10768961B2 (en) * 2016-07-14 2020-09-08 International Business Machines Corporation Virtual machine seed image replication through parallel deployment
US10778765B2 (en) 2015-07-15 2020-09-15 Cisco Technology, Inc. Bid/ask protocol in scale-out NVMe storage
US10805235B2 (en) 2014-09-26 2020-10-13 Cisco Technology, Inc. Distributed application framework for prioritizing network traffic using application priority awareness
US10819571B2 (en) 2018-06-29 2020-10-27 Cisco Technology, Inc. Network traffic optimization using in-situ notification system
US10826829B2 (en) 2015-03-26 2020-11-03 Cisco Technology, Inc. Scalable handling of BGP route information in VXLAN with EVPN control plane
US10860363B1 (en) * 2019-03-14 2020-12-08 Amazon Technologies, Inc. Managing virtual machine hibernation state incompatibility with underlying host configurations
US10872056B2 (en) 2016-06-06 2020-12-22 Cisco Technology, Inc. Remote memory access using memory mapped addressing among multiple compute nodes
US10892940B2 (en) 2017-07-21 2021-01-12 Cisco Technology, Inc. Scalable statistics and analytics mechanisms in cloud networking
US10904322B2 (en) 2018-06-15 2021-01-26 Cisco Technology, Inc. Systems and methods for scaling down cloud-based servers handling secure connections
US10904342B2 (en) 2018-07-30 2021-01-26 Cisco Technology, Inc. Container networking using communication tunnels
US10917260B1 (en) * 2017-10-24 2021-02-09 Druva Data management across cloud storage providers
US10942666B2 (en) 2017-10-13 2021-03-09 Cisco Technology, Inc. Using network device replication in distributed storage clusters
US11005682B2 (en) 2015-10-06 2021-05-11 Cisco Technology, Inc. Policy-driven switch overlay bypass in a hybrid cloud network environment
US11005710B2 (en) 2015-08-18 2021-05-11 Microsoft Technology Licensing, Llc Data center resource tracking
US11005731B2 (en) 2017-04-05 2021-05-11 Cisco Technology, Inc. Estimating model parameters for automatic deployment of scalable micro services
US11019083B2 (en) 2018-06-20 2021-05-25 Cisco Technology, Inc. System for coordinating distributed website analysis
US11044162B2 (en) 2016-12-06 2021-06-22 Cisco Technology, Inc. Orchestration of cloud and fog interactions
US20210240671A1 (en) * 2020-01-31 2021-08-05 EMC IP Holding Company LLC Intelligent filesystem for container images
US11169883B1 (en) * 2017-05-04 2021-11-09 Amazon Technologies, Inc. User and system initiated instance hibernation
US11182193B2 (en) * 2019-07-02 2021-11-23 International Business Machines Corporation Optimizing image reconstruction for container registries
US11403127B2 (en) * 2016-11-08 2022-08-02 International Business Machines Corporation Generating a virtual machines relocation protocol
US11418588B2 (en) 2020-09-29 2022-08-16 EMC IP Holding Company LLC Intelligent peer-to-peer container filesystem
US11481362B2 (en) 2017-11-13 2022-10-25 Cisco Technology, Inc. Using persistent memory to enable restartability of bulk load transactions in cloud databases
US11563695B2 (en) 2016-08-29 2023-01-24 Cisco Technology, Inc. Queue protection using a shared global memory reserve
US11588783B2 (en) 2015-06-10 2023-02-21 Cisco Technology, Inc. Techniques for implementing IPV6-based distributed storage space
US11595474B2 (en) 2017-12-28 2023-02-28 Cisco Technology, Inc. Accelerating data replication using multicast and non-volatile memory enabled nodes

Families Citing this family (67)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10673952B1 (en) * 2014-11-10 2020-06-02 Turbonomic, Inc. Systems, apparatus, and methods for managing computer workload availability and performance
US8396843B2 (en) * 2010-06-14 2013-03-12 Dell Products L.P. Active file instant cloning
US8799422B1 (en) * 2010-08-16 2014-08-05 Juniper Networks, Inc. In-service configuration upgrade using virtual machine instances
US9805108B2 (en) 2010-12-23 2017-10-31 Mongodb, Inc. Large distributed database clustering systems and methods
US10346430B2 (en) 2010-12-23 2019-07-09 Mongodb, Inc. System and method for determining consensus within a distributed database
US9740762B2 (en) 2011-04-01 2017-08-22 Mongodb, Inc. System and method for optimizing data migration in a partitioned database
US9881034B2 (en) 2015-12-15 2018-01-30 Mongodb, Inc. Systems and methods for automating management of distributed databases
US10740353B2 (en) 2010-12-23 2020-08-11 Mongodb, Inc. Systems and methods for managing distributed database deployments
US11544288B2 (en) 2010-12-23 2023-01-03 Mongodb, Inc. Systems and methods for managing distributed database deployments
US8996463B2 (en) 2012-07-26 2015-03-31 Mongodb, Inc. Aggregation framework system architecture and method
US10614098B2 (en) 2010-12-23 2020-04-07 Mongodb, Inc. System and method for determining consensus within a distributed database
US10977277B2 (en) 2010-12-23 2021-04-13 Mongodb, Inc. Systems and methods for database zone sharding and API integration
US10366100B2 (en) 2012-07-26 2019-07-30 Mongodb, Inc. Aggregation framework system architecture and method
US10713280B2 (en) 2010-12-23 2020-07-14 Mongodb, Inc. Systems and methods for managing distributed database deployments
US8572031B2 (en) * 2010-12-23 2013-10-29 Mongodb, Inc. Method and apparatus for maintaining replica sets
US11615115B2 (en) 2010-12-23 2023-03-28 Mongodb, Inc. Systems and methods for managing distributed database deployments
US10262050B2 (en) 2015-09-25 2019-04-16 Mongodb, Inc. Distributed database systems and methods with pluggable storage engines
US10997211B2 (en) 2010-12-23 2021-05-04 Mongodb, Inc. Systems and methods for database zone sharding and API integration
KR101544482B1 (en) * 2011-03-15 2015-08-21 주식회사 케이티 Cloud center controlling apparatus and cloud center selecting method of the same
US8806485B2 (en) * 2011-05-03 2014-08-12 International Business Machines Corporation Configuring virtual machine images in a networked computing environment
US9311327B1 (en) 2011-06-30 2016-04-12 Emc Corporation Updating key value databases for virtual backups
US8849769B1 (en) 2011-06-30 2014-09-30 Emc Corporation Virtual machine file level recovery
US8849777B1 (en) 2011-06-30 2014-09-30 Emc Corporation File deletion detection in key value databases for virtual backups
US9229951B1 (en) 2011-06-30 2016-01-05 Emc Corporation Key value databases for virtual backups
US8671075B1 (en) 2011-06-30 2014-03-11 Emc Corporation Change tracking indices in virtual machines
US8843443B1 (en) * 2011-06-30 2014-09-23 Emc Corporation Efficient backup of virtual data
US9158632B1 (en) 2011-06-30 2015-10-13 Emc Corporation Efficient file browsing using key value databases for virtual backups
US8949829B1 (en) 2011-06-30 2015-02-03 Emc Corporation Virtual machine disaster recovery
JP5976840B2 (en) 2011-12-29 2016-08-24 ヴイエムウェア インコーポレイテッドVMware,Inc. N-way synchronization of desktop images
US9613045B2 (en) * 2011-12-29 2017-04-04 Vmware, Inc. Synchronization of desktop images with smart image merging
US8850420B2 (en) * 2012-03-22 2014-09-30 Sap Ag Dynamically updating on-demand runtime platforms executing business applications
US8804494B1 (en) * 2012-04-04 2014-08-12 Wichorus, Inc. Methods and apparatus for improving network performance using virtual instances for system redundancy
US11403317B2 (en) 2012-07-26 2022-08-02 Mongodb, Inc. Aggregation framework system architecture and method
US11544284B2 (en) 2012-07-26 2023-01-03 Mongodb, Inc. Aggregation framework system architecture and method
US10872095B2 (en) 2012-07-26 2020-12-22 Mongodb, Inc. Aggregation framework system architecture and method
JP5996787B2 (en) * 2012-10-04 2016-09-21 株式会社日立製作所 System management method and computer system
US9286051B2 (en) 2012-10-05 2016-03-15 International Business Machines Corporation Dynamic protection of one or more deployed copies of a master operating system image
US9208041B2 (en) * 2012-10-05 2015-12-08 International Business Machines Corporation Dynamic protection of a master operating system image
US9311070B2 (en) 2012-10-05 2016-04-12 International Business Machines Corporation Dynamically recommending configuration changes to an operating system image
US8990772B2 (en) 2012-10-16 2015-03-24 International Business Machines Corporation Dynamically recommending changes to an association between an operating system image and an update group
US9069677B2 (en) 2013-04-29 2015-06-30 International Business Machines Corporation Input/output de-duplication based on variable-size chunks
WO2015084308A1 (en) * 2013-12-02 2015-06-11 Empire Technology Development, Llc Computing resource provisioning based on deduplication
US9411627B2 (en) * 2014-07-18 2016-08-09 International Business Machines Corporation Allocating storage for virtual machine instances based on input/output (I/O) usage rate of the disk extents stored in an I/O profile of a previous incarnation of the virtual machine
US9992078B1 (en) * 2015-02-26 2018-06-05 Amdocs Software Systems Limited System, method, and computer program for deploying disk images in a communication network, based on network topology
US11120892B2 (en) 2015-06-18 2021-09-14 Amazon Technologies, Inc. Content testing during image production
US10496669B2 (en) 2015-07-02 2019-12-03 Mongodb, Inc. System and method for augmenting consensus election in a distributed database
US10282092B1 (en) * 2015-09-09 2019-05-07 Citigroup Technology, Inc. Methods and systems for creating and maintaining a library of virtual hard disks
US10394822B2 (en) 2015-09-25 2019-08-27 Mongodb, Inc. Systems and methods for data conversion and comparison
US10673623B2 (en) 2015-09-25 2020-06-02 Mongodb, Inc. Systems and methods for hierarchical key management in encrypted distributed databases
US10846411B2 (en) 2015-09-25 2020-11-24 Mongodb, Inc. Distributed database systems and methods with encrypted storage engines
US10423626B2 (en) 2015-09-25 2019-09-24 Mongodb, Inc. Systems and methods for data conversion and comparison
US9898214B2 (en) 2015-09-29 2018-02-20 International Business Machines Corporation Storage site selection in a multi-target environment using weights
US10671496B2 (en) 2016-05-31 2020-06-02 Mongodb, Inc. Method and apparatus for reading and writing committed data
US10621050B2 (en) 2016-06-27 2020-04-14 Mongodb, Inc. Method and apparatus for restoring data from snapshots
CN106445643B (en) * 2016-11-14 2019-10-22 上海云轴信息科技有限公司 It clones, the method and apparatus of upgrading virtual machine
US10747581B2 (en) 2017-02-15 2020-08-18 International Business Machines Corporation Virtual machine migration between software defined storage systems
CN108632680B (en) * 2017-03-21 2020-12-18 华为技术有限公司 Live broadcast content scheduling method, scheduling server and terminal
US10866868B2 (en) 2017-06-20 2020-12-15 Mongodb, Inc. Systems and methods for optimization of database operations
US10754741B1 (en) 2017-10-23 2020-08-25 Amazon Technologies, Inc. Event-driven replication for migrating computing resources
CN110908741A (en) 2018-09-14 2020-03-24 阿里巴巴集团控股有限公司 Application performance management display method and device
CN111814003B (en) * 2019-04-12 2024-04-23 伊姆西Ip控股有限责任公司 Method, electronic device and computer program product for establishing metadata index
US11934283B2 (en) 2020-05-19 2024-03-19 EMC IP Holding Company LLC Cost-optimized true zero recovery time objective for multiple applications using failure domains
US11797400B2 (en) * 2020-05-19 2023-10-24 EMC IP Holding Company LLC Cost-optimized true zero recovery time objective for multiple applications based on interdependent applications
US11899957B2 (en) 2020-05-19 2024-02-13 EMC IP Holding Company LLC Cost-optimized true zero recovery time objective for multiple applications
US11836512B2 (en) 2020-05-19 2023-12-05 EMC IP Holding Company LLC Virtual machine replication strategy based on predicted application failures
CN111541553B (en) 2020-07-08 2021-08-24 支付宝(杭州)信息技术有限公司 Trusted starting method and device of block chain all-in-one machine
CN112491812B (en) * 2020-07-08 2022-03-01 支付宝(杭州)信息技术有限公司 Hash updating method and device of block chain all-in-one machine

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090307166A1 (en) * 2008-06-05 2009-12-10 International Business Machines Corporation Method and system for automated integrated server-network-storage disaster recovery planning
US20100094999A1 (en) * 2008-10-10 2010-04-15 Netapp Inc. Limiting simultaneous data transfers and efficient throttle management
US20110167221A1 (en) * 2010-01-06 2011-07-07 Gururaj Pangal System and method for efficiently creating off-site data volume back-ups
US20120072903A1 (en) * 2010-09-20 2012-03-22 International Business Machines Corporation Multi-image migration system and method
US20120179778A1 (en) * 2010-01-22 2012-07-12 Brutesoft, Inc. Applying networking protocols to image file management
US20120221820A1 (en) * 2010-08-20 2012-08-30 International Business Machines Corporation Switching visibility between virtual data storage entities
US20120297130A1 (en) * 2011-05-16 2012-11-22 Ramtron International Corporation Stack processor using a ferroelectric random access memory (f-ram) for both code and data space
US20130007402A1 (en) * 2004-04-30 2013-01-03 Commvault Systems, Inc. Systems and methods for storage modeling and costing

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6912645B2 (en) * 2001-07-19 2005-06-28 Lucent Technologies Inc. Method and apparatus for archival data storage
US7080378B1 (en) * 2002-05-17 2006-07-18 Storage Technology Corporation Workload balancing using dynamically allocated virtual servers
GB2419701A (en) * 2004-10-29 2006-05-03 Hewlett Packard Development Co Virtual overlay infrastructure with dynamic control of mapping
US7756544B1 (en) * 2005-01-13 2010-07-13 Enterasys Networks, Inc. Power controlled network devices for security and power conservation
US7937547B2 (en) * 2005-06-24 2011-05-03 Syncsort Incorporated System and method for high performance enterprise data protection
US20070208918A1 (en) * 2006-03-01 2007-09-06 Kenneth Harbin Method and apparatus for providing virtual machine backup
US20080201455A1 (en) * 2007-02-15 2008-08-21 Husain Syed M Amir Moving Execution of a Virtual Machine Across Different Virtualization Platforms
US8126854B1 (en) * 2007-03-05 2012-02-28 Emc Corporation Using versioning to back up multiple versions of a stored object
US8291411B2 (en) * 2007-05-21 2012-10-16 International Business Machines Corporation Dynamic placement of virtual machines for managing violations of service level agreements (SLAs)
US8291180B2 (en) * 2008-03-20 2012-10-16 Vmware, Inc. Loose synchronization of virtual disks
US7856419B2 (en) * 2008-04-04 2010-12-21 Vmware, Inc Method and system for storage replication
US8019861B2 (en) * 2009-01-29 2011-09-13 Vmware, Inc. Speculative virtual machine resource scheduling
US9778718B2 (en) * 2009-02-13 2017-10-03 Schneider Electric It Corporation Power supply and data center control
WO2010102084A2 (en) * 2009-03-05 2010-09-10 Coach Wei System and method for performance acceleration, data protection, disaster recovery and on-demand scaling of computer applications
US8489744B2 (en) * 2009-06-29 2013-07-16 Red Hat Israel, Ltd. Selecting a host from a host cluster for live migration of a virtual machine
US8473557B2 (en) * 2010-08-24 2013-06-25 At&T Intellectual Property I, L.P. Methods and apparatus to migrate virtual machines between distributive computing networks across a wide area network

Cited By (268)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10261819B2 (en) 2011-01-11 2019-04-16 Servicenow, Inc. Determining an optimal computing environment for running an image based on performance of similar images
US20120216052A1 (en) * 2011-01-11 2012-08-23 Safenet, Inc. Efficient volume encryption
US11204793B2 (en) 2011-01-11 2021-12-21 Servicenow, Inc. Determining an optimal computing environment for running an image
JP2014501989A (en) * 2011-01-11 2014-01-23 インターナショナル・ビジネス・マシーンズ・コーポレーション Determining the best computing environment to run an image
US20120180045A1 (en) * 2011-01-11 2012-07-12 International Business Machines Corporation Determining an optimal computing environment for running an image
US9348650B2 (en) 2011-01-11 2016-05-24 International Business Machines Corporation Determining an optimal computing environment for running an image based on performance of similar images
US8572623B2 (en) * 2011-01-11 2013-10-29 International Business Machines Corporation Determining an optimal computing environment for running an image based on performance of similar images
US20120239739A1 (en) * 2011-02-09 2012-09-20 Gaurav Manglik Apparatus, systems and methods for dynamic adaptive metrics based application deployment on distributed infrastructures
US10678602B2 (en) * 2011-02-09 2020-06-09 Cisco Technology, Inc. Apparatus, systems and methods for dynamic adaptive metrics based application deployment on distributed infrastructures
US9110693B1 (en) * 2011-02-17 2015-08-18 Emc Corporation VM mobility over distance
US8589406B2 (en) * 2011-03-03 2013-11-19 Hewlett-Packard Development Company, L.P. Deduplication while rebuilding indexes
US20120226699A1 (en) * 2011-03-03 2012-09-06 Mark David Lillibridge Deduplication while rebuilding indexes
US9467712B2 (en) * 2011-03-22 2016-10-11 International Business Machines Corporation Scalable image distribution in virtualized server environments
US9609345B2 (en) * 2011-03-22 2017-03-28 International Business Machines Corporation Scalable image distribution in virtualized server environments
US20130004089A1 (en) * 2011-03-22 2013-01-03 International Business Machines Corporation Scalable image distribution in virtualized server environments
US9734431B2 (en) * 2011-03-22 2017-08-15 International Business Machines Corporation Scalable image distribution in virtualized server environments
US9326001B2 (en) * 2011-03-22 2016-04-26 International Business Machines Corporation Scalable image distribution in virtualized server environments
US20120243795A1 (en) * 2011-03-22 2012-09-27 International Business Machines Corporation Scalable image distribution in virtualized server environments
US20120254131A1 (en) * 2011-03-30 2012-10-04 International Business Machines Corporation Virtual machine image co-migration
US8442955B2 (en) * 2011-03-30 2013-05-14 International Business Machines Corporation Virtual machine image co-migration
US20120271797A1 (en) * 2011-04-22 2012-10-25 Symantec Corporation Reference volume for initial synchronization of a replicated volume group
US9311328B2 (en) * 2011-04-22 2016-04-12 Veritas Us Ip Holdings Llc Reference volume for initial synchronization of a replicated volume group
US20150178805A1 (en) * 2011-05-09 2015-06-25 Metacloud Inc. Composite public cloud, method and system
US8977754B2 (en) * 2011-05-09 2015-03-10 Metacloud Inc. Composite public cloud, method and system
US20120290460A1 (en) * 2011-05-09 2012-11-15 Curry Jr Steven Lynn Composite Public Cloud, Method and System
US20120304170A1 (en) * 2011-05-27 2012-11-29 Morgan Christopher Edwin Systems and methods for introspective application reporting to facilitate virtual machine movement between cloud hosts
US10102018B2 (en) * 2011-05-27 2018-10-16 Red Hat, Inc. Introspective application reporting to facilitate virtual machine movement between cloud hosts
US11442762B2 (en) * 2011-05-27 2022-09-13 Red Hat, Inc. Systems and methods for introspective application reporting to facilitate virtual machine movement between cloud hosts
US20190050250A1 (en) * 2011-05-27 2019-02-14 Red Hat, Inc. Systems and methods for introspective application reporting to facilitate virtual machine movement between cloud hosts
US8776057B2 (en) * 2011-06-02 2014-07-08 Fujitsu Limited System and method for providing evidence of the physical presence of virtual machines
US20120311574A1 (en) * 2011-06-02 2012-12-06 Fujitsu Limited System and method for providing evidence of the physical presence of virtual machines
US10212074B2 (en) 2011-06-24 2019-02-19 Cisco Technology, Inc. Level of hierarchy in MST for traffic localization and load balancing
US10178183B2 (en) 2011-06-28 2019-01-08 Micro Focus Software Inc. Techniques for prevent information disclosure via dynamic secure cloud resources
US9246985B2 (en) * 2011-06-28 2016-01-26 Novell, Inc. Techniques for prevent information disclosure via dynamic secure cloud resources
US8863124B1 (en) 2011-08-10 2014-10-14 Nutanix, Inc. Architecture for managing I/O and storage for a virtualization environment
US11314421B2 (en) 2011-08-10 2022-04-26 Nutanix, Inc. Method and system for implementing writable snapshots in a virtualized storage environment
US8850130B1 (en) 2011-08-10 2014-09-30 Nutanix, Inc. Metadata for managing I/O and storage for a virtualization
US9619257B1 (en) 2011-08-10 2017-04-11 Nutanix, Inc. System and method for implementing storage for a virtualization environment
US9652265B1 (en) 2011-08-10 2017-05-16 Nutanix, Inc. Architecture for managing I/O and storage for a virtualization environment with multiple hypervisor types
US11853780B2 (en) 2011-08-10 2023-12-26 Nutanix, Inc. Architecture for managing I/O and storage for a virtualization environment
US9389887B1 (en) * 2011-08-10 2016-07-12 Nutanix, Inc. Method and system for managing de-duplication of data in a virtualization environment
US9354912B1 (en) 2011-08-10 2016-05-31 Nutanix, Inc. Method and system for implementing a maintenance service for managing I/O and storage for a virtualization environment
US9747287B1 (en) 2011-08-10 2017-08-29 Nutanix, Inc. Method and system for managing metadata for a virtualization environment
US9575784B1 (en) 2011-08-10 2017-02-21 Nutanix, Inc. Method and system for handling storage in response to migration of a virtual machine in a virtualization environment
US11301274B2 (en) 2011-08-10 2022-04-12 Nutanix, Inc. Architecture for managing I/O and storage for a virtualization environment
US9052936B1 (en) 2011-08-10 2015-06-09 Nutanix, Inc. Method and system for communicating to a storage controller in a virtualization environment
US9256374B1 (en) 2011-08-10 2016-02-09 Nutanix, Inc. Metadata for managing I/O and storage for a virtualization environment
US8997097B1 (en) 2011-08-10 2015-03-31 Nutanix, Inc. System for implementing a virtual disk in a virtualization environment
US9009106B1 (en) 2011-08-10 2015-04-14 Nutanix, Inc. Method and system for implementing writable snapshots in a virtualized storage environment
US9256475B1 (en) 2011-08-10 2016-02-09 Nutanix, Inc. Method and system for handling ownership transfer in a virtualization environment
US9256456B1 (en) 2011-08-10 2016-02-09 Nutanix, Inc. Architecture for managing I/O and storage for a virtualization environment
US10359952B1 (en) 2011-08-10 2019-07-23 Nutanix, Inc. Method and system for implementing writable snapshots in a virtualized storage environment
US20130151558A1 (en) * 2011-12-12 2013-06-13 Telefonaktiebolaget L M Ericsson (Publ) Methods and apparatus for implementing a distributed database
US10257042B2 (en) 2012-01-13 2019-04-09 Cisco Technology, Inc. System and method for managing site-to-site VPNs of a cloud managed network
US8862744B2 (en) * 2012-02-14 2014-10-14 Telefonaktiebolaget L M Ericsson (Publ) Optimizing traffic load in a communications network
US20130212578A1 (en) * 2012-02-14 2013-08-15 Vipin Garg Optimizing traffic load in a communications network
US9678774B2 (en) 2012-03-08 2017-06-13 Empire Technology Development Llc Secure migration of virtual machines
US9054917B2 (en) * 2012-03-08 2015-06-09 Empire Technology Development Llc Secure migration of virtual machines
US20130238786A1 (en) * 2012-03-08 2013-09-12 Empire Technology Development Llc Secure migration of virtual machines
US9201704B2 (en) 2012-04-05 2015-12-01 Cisco Technology, Inc. System and method for migrating application virtual machines in a network environment
US20130290274A1 (en) * 2012-04-25 2013-10-31 International Business Machines Corporation Enhanced reliability in deduplication technology over storage clouds
US8903764B2 (en) * 2012-04-25 2014-12-02 International Business Machines Corporation Enhanced reliability in deduplication technology over storage clouds
US9229819B2 (en) 2012-04-25 2016-01-05 International Business Machines Corporation Enhanced reliability in deduplication technology over storage clouds
US20130290952A1 (en) * 2012-04-25 2013-10-31 Jerry W. Childers, JR. Copying Virtual Machine Templates To Cloud Regions
US9223634B2 (en) 2012-05-02 2015-12-29 Cisco Technology, Inc. System and method for simulating virtual machine migration in a network environment
US8838968B2 (en) * 2012-05-14 2014-09-16 Ca, Inc. System and method for virtual machine data protection in a public cloud
US20130305046A1 (en) * 2012-05-14 2013-11-14 Computer Associates Think, Inc. System and Method for Virtual Machine Data Protection in a Public Cloud
US20130311989A1 (en) * 2012-05-21 2013-11-21 Hitachi, Ltd. Method and apparatus for maintaining a workload service level on a converged platform
US9348724B2 (en) * 2012-05-21 2016-05-24 Hitachi, Ltd. Method and apparatus for maintaining a workload service level on a converged platform
US9201679B2 (en) * 2012-05-31 2015-12-01 Red Hat Israel, Ltd. Multiple destination live migration
US9058199B2 (en) * 2012-05-31 2015-06-16 Red Hat Israel, Ltd. Pre-warming destination for fast live migration
US20130326174A1 (en) * 2012-05-31 2013-12-05 Michael Tsirkin Pre-warming destination for fast live migration
US20130326173A1 (en) * 2012-05-31 2013-12-05 Michael Tsirkin Multiple destination live migration
US9110704B2 (en) * 2012-05-31 2015-08-18 Red Hat Israel, Ltd. Pre-warming of multiple destinations for fast live migration
US20130326175A1 (en) * 2012-05-31 2013-12-05 Michael Tsirkin Pre-warming of multiple destinations for fast live migration
US9256463B2 (en) * 2012-06-29 2016-02-09 International Business Machines Corporation Method and apparatus to replicate stateful virtual machines between clouds
US20140007088A1 (en) * 2012-06-29 2014-01-02 International Business Machines Corporation Method and apparatus to replicate stateful virtual machines between clouds
US9256464B2 (en) * 2012-06-29 2016-02-09 International Business Machines Corporation Method and apparatus to replicate stateful virtual machines between clouds
US20140007094A1 (en) * 2012-06-29 2014-01-02 International Business Machines Corporation Method and apparatus to replicate stateful virtual machines between clouds
US8935695B1 (en) * 2012-07-12 2015-01-13 Symantec Corporation Systems and methods for managing multipathing configurations for virtual machines
US9772866B1 (en) 2012-07-17 2017-09-26 Nutanix, Inc. Architecture for implementing a virtualization environment and appliance
US10747570B2 (en) 2012-07-17 2020-08-18 Nutanix, Inc. Architecture for implementing a virtualization environment and appliance
US10684879B2 (en) 2012-07-17 2020-06-16 Nutanix, Inc. Architecture for implementing a virtualization environment and appliance
US11314543B2 (en) 2012-07-17 2022-04-26 Nutanix, Inc. Architecture for implementing a virtualization environment and appliance
US9135041B2 (en) * 2012-08-03 2015-09-15 International Business Machines Corporation Selecting provisioning targets for new virtual machine instances
US9135040B2 (en) * 2012-08-03 2015-09-15 International Business Machines Corporation Selecting provisioning targets for new virtual machine instances
US9489231B2 (en) 2012-08-03 2016-11-08 International Business Machines Corporation Selecting provisioning targets for new virtual machine instances
US20140040893A1 (en) * 2012-08-03 2014-02-06 International Business Machines Corporation Selecting provisioning targets for new virtual machine instances
US20140040891A1 (en) * 2012-08-03 2014-02-06 International Business Machines Corporation Selecting provisioning targets for new virtual machine instances
US20150278146A1 (en) * 2012-08-14 2015-10-01 Alcatel Lucent Method And Apparatus For Providing Traffic Re-Aware Slot Placement
US9104462B2 (en) * 2012-08-14 2015-08-11 Alcatel Lucent Method and apparatus for providing traffic re-aware slot placement
US20140052973A1 (en) * 2012-08-14 2014-02-20 Alcatel-Lucent India Limited Method And Apparatus For Providing Traffic Re-Aware Slot Placement
US20140059200A1 (en) * 2012-08-21 2014-02-27 Cisco Technology, Inc. Flow de-duplication for network monitoring
US9548908B2 (en) * 2012-08-21 2017-01-17 Cisco Technology, Inc. Flow de-duplication for network monitoring
US9047108B1 (en) * 2012-09-07 2015-06-02 Symantec Corporation Systems and methods for migrating replicated virtual machine disks
US11074057B2 (en) 2013-02-14 2021-07-27 International Business Machines Corporation System and method for determining when cloud virtual machines need to be updated
US20140229939A1 (en) * 2013-02-14 2014-08-14 International Business Machines Corporation System and method for determining when cloud virtual machines need to be updated
CN103995728A (en) * 2013-02-14 2014-08-20 国际商业机器公司 System and method for determining when cloud virtual machines need to be updated
US9983864B2 (en) 2013-02-14 2018-05-29 International Business Machines Corporation System and method for determining when cloud virtual machines need to be updated
US9298443B2 (en) * 2013-02-14 2016-03-29 International Business Machines Corporation System and method for determining when cloud virtual machines need to be updated
US9104455B2 (en) 2013-02-19 2015-08-11 International Business Machines Corporation Virtual machine-to-image affinity on a physical server
US9104457B2 (en) 2013-02-19 2015-08-11 International Business Machines Corporation Virtual machine-to-image affinity on a physical server
US20140280949A1 (en) * 2013-03-13 2014-09-18 International Business Machines Corporation Load balancing for a virtual networking system
US10044622B2 (en) 2013-03-13 2018-08-07 International Business Machines Corporation Load balancing for a virtual networking system
US10230795B2 (en) 2013-03-13 2019-03-12 International Business Machines Corporation Data replication for a virtual networking system
US10700979B2 (en) 2013-03-13 2020-06-30 International Business Machines Corporation Load balancing for a virtual networking system
US11095716B2 (en) 2013-03-13 2021-08-17 International Business Machines Corporation Data replication for a virtual networking system
US9378068B2 (en) * 2013-03-13 2016-06-28 International Business Machines Corporation Load balancing for a virtual networking system
US9438670B2 (en) 2013-03-13 2016-09-06 International Business Machines Corporation Data replication for a virtual networking system
US10454984B2 (en) 2013-03-14 2019-10-22 Cisco Technology, Inc. Method for streaming packet captures from network access devices to a cloud server over HTTP
US20170180331A1 (en) * 2013-03-15 2017-06-22 Netiq Corporation Techniques for secure data extraction in a virtual or cloud environment
EP2975518A4 (en) * 2013-03-15 2016-11-02 Nec Corp Information processing system and method for relocating application
US10454902B2 (en) * 2013-03-15 2019-10-22 Netiq Corporation Techniques for secure data extraction in a virtual or cloud environment
US20160026501A1 (en) * 2013-03-19 2016-01-28 Emc Corporation Managing provisioning of storage resources
US9934069B2 (en) * 2013-03-19 2018-04-03 EMC IP Holding Company LLC Managing provisioning of storage resources
US9648134B2 (en) 2013-05-20 2017-05-09 Empire Technology Development Llc Object migration between cloud environments
WO2014189481A1 (en) * 2013-05-20 2014-11-27 Empire Technology Development, Llc Object migration between cloud environments
WO2015016805A1 (en) * 2013-07-29 2015-02-05 Hitachi, Ltd. Method and apparatus to conceal the configuration and processing of the replication by virtual storage
US9454400B2 (en) * 2013-08-16 2016-09-27 Red Hat Israel, Ltd. Memory duplication by origin host in virtual machine live migration
US9459902B2 (en) * 2013-08-16 2016-10-04 Red Hat Israel, Ltd. Memory duplication by destination host in virtual machine live migration
US20150052322A1 (en) * 2013-08-16 2015-02-19 Red Hat Israel, Ltd. Systems and methods for memory deduplication by origin host in virtual machine live migration
US20150052323A1 (en) * 2013-08-16 2015-02-19 Red Hat Israel, Ltd. Systems and methods for memory deduplication by destination host in virtual machine live migration
US10372435B2 (en) 2013-09-19 2019-08-06 International Business Machines Corporation System, method and program product for updating virtual machine images
US20150081910A1 (en) * 2013-09-19 2015-03-19 International Business Machines Corporation System, method and program product for updating virtual machine images
US9600262B2 (en) * 2013-09-19 2017-03-21 International Business Machines Corporation System, method and program product for updating virtual machine images
US9424058B1 (en) * 2013-09-23 2016-08-23 Symantec Corporation File deduplication and scan reduction in a virtualization environment
US9465834B2 (en) 2013-10-11 2016-10-11 Vmware, Inc. Methods and apparatus to manage virtual machines
US9361336B2 (en) 2013-10-11 2016-06-07 Vmware, Inc. Methods and apparatus to manage virtual machines
US9361335B2 (en) 2013-10-11 2016-06-07 Vmware, Inc. Methods and apparatus to manage virtual machines
US9336266B2 (en) 2013-10-11 2016-05-10 Vmware, Inc. Methods and apparatus to manage deployments of virtual machines
WO2015054582A1 (en) * 2013-10-11 2015-04-16 Vmware, Inc. Methods and apparatus to manage virtual machines
US20150112941A1 (en) * 2013-10-18 2015-04-23 Power-All Networks Limited Backup management system and method thereof
US9389970B2 (en) * 2013-11-01 2016-07-12 International Business Machines Corporation Selected virtual machine replication and virtual machine restart techniques
US20150199205A1 (en) * 2014-01-10 2015-07-16 Dell Products, Lp Optimized Remediation Policy in a Virtualized Environment
US9817683B2 (en) * 2014-01-10 2017-11-14 Dell Products, Lp Optimized remediation policy in a virtualized environment
US10108644B1 (en) * 2014-03-12 2018-10-23 EMC IP Holding Company LLC Method for minimizing storage requirements on fast/expensive arrays for data mobility and migration
US10140112B2 (en) * 2014-03-28 2018-11-27 Ntt Docomo, Inc. Update management system and update management method
US11606226B2 (en) 2014-04-15 2023-03-14 Cisco Technology, Inc. Programmable infrastructure gateway for enabling hybrid cloud services in a network environment
US10461959B2 (en) 2014-04-15 2019-10-29 Cisco Technology, Inc. Programmable infrastructure gateway for enabling hybrid cloud services in a network environment
US10972312B2 (en) 2014-04-15 2021-04-06 Cisco Technology, Inc. Programmable infrastructure gateway for enabling hybrid cloud services in a network environment
US9935894B2 (en) 2014-05-08 2018-04-03 Cisco Technology, Inc. Collaborative inter-service scheduling of logical resources in cloud platforms
US9904571B2 (en) * 2014-05-19 2018-02-27 International Business Machines Corporation Agile VM load balancing through micro-checkpointing and multi-architecture emulation
US20170052813A1 (en) * 2014-05-19 2017-02-23 International Business Machines Corporation Agile vm load balancing through micro-checkpointing and multi-architecture emulation
US10789091B2 (en) * 2014-05-19 2020-09-29 International Business Machines Corporation Agile VM load balancing through micro-checkpointing and multi-architecture emulation
US20150331704A1 (en) * 2014-05-19 2015-11-19 International Business Machines Corporation Agile vm load balancing through micro-checkpointing and multi-architecture emulation
US9513939B2 (en) * 2014-05-19 2016-12-06 International Business Machines Corporation Agile VM load balancing through micro-checkpointing and multi-architecture emulation
US20190188026A1 (en) * 2014-05-19 2019-06-20 International Business Machines Corporation Agile vm load balancing through micro-checkpointing and multi-architecture emulation
US20150370659A1 (en) * 2014-06-23 2015-12-24 Vmware, Inc. Using stretched storage to optimize disaster recovery
US9489273B2 (en) * 2014-06-23 2016-11-08 Vmware, Inc. Using stretched storage to optimize disaster recovery
US9442792B2 (en) 2014-06-23 2016-09-13 Vmware, Inc. Using stretched storage to optimize disaster recovery
US20150378758A1 (en) * 2014-06-26 2015-12-31 Vmware, Inc. Processing Virtual Machine Objects through Multistep Workflows
US9430284B2 (en) * 2014-06-26 2016-08-30 Vmware, Inc. Processing virtual machine objects through multistep workflows
US10122605B2 (en) 2014-07-09 2018-11-06 Cisco Technology, Inc Annotation of network activity through different phases of execution
US10805235B2 (en) 2014-09-26 2020-10-13 Cisco Technology, Inc. Distributed application framework for prioritizing network traffic using application priority awareness
US10243826B2 (en) 2015-01-10 2019-03-26 Cisco Technology, Inc. Diagnosis and throughput measurement of fibre channel ports in a storage area network environment
US10050862B2 (en) 2015-02-09 2018-08-14 Cisco Technology, Inc. Distributed application framework that uses network and application awareness for placing data
US10037617B2 (en) 2015-02-27 2018-07-31 Cisco Technology, Inc. Enhanced user interface systems including dynamic context selection for cloud-based networks
US10708342B2 (en) 2015-02-27 2020-07-07 Cisco Technology, Inc. Dynamic troubleshooting workspaces for cloud and network management systems
US10825212B2 (en) 2015-02-27 2020-11-03 Cisco Technology, Inc. Enhanced user interface systems including dynamic context selection for cloud-based networks
US10826829B2 (en) 2015-03-26 2020-11-03 Cisco Technology, Inc. Scalable handling of BGP route information in VXLAN with EVPN control plane
US20180024854A1 (en) * 2015-03-27 2018-01-25 Intel Corporation Technologies for virtual machine migration
US10382534B1 (en) 2015-04-04 2019-08-13 Cisco Technology, Inc. Selective load balancing of network traffic
US11843658B2 (en) 2015-04-04 2023-12-12 Cisco Technology, Inc. Selective load balancing of network traffic
US11122114B2 (en) 2015-04-04 2021-09-14 Cisco Technology, Inc. Selective load balancing of network traffic
US20170359221A1 (en) * 2015-04-10 2017-12-14 Hitachi, Ltd. Method and management system for calculating billing amount in relation to data volume reduction function
US10938937B2 (en) 2015-05-15 2021-03-02 Cisco Technology, Inc. Multi-datacenter message queue
US11354039B2 (en) 2015-05-15 2022-06-07 Cisco Technology, Inc. Tenant-level sharding of disks with tenant-specific storage modules to enable policies per tenant in a distributed storage system
US10671289B2 (en) 2015-05-15 2020-06-02 Cisco Technology, Inc. Tenant-level sharding of disks with tenant-specific storage modules to enable policies per tenant in a distributed storage system
US10476982B2 (en) 2015-05-15 2019-11-12 Cisco Technology, Inc. Multi-datacenter message queue
US10222986B2 (en) 2015-05-15 2019-03-05 Cisco Technology, Inc. Tenant-level sharding of disks with tenant-specific storage modules to enable policies per tenant in a distributed storage system
US11588783B2 (en) 2015-06-10 2023-02-21 Cisco Technology, Inc. Techniques for implementing IPV6-based distributed storage space
US10284433B2 (en) * 2015-06-25 2019-05-07 International Business Machines Corporation Data synchronization using redundancy detection
US10034201B2 (en) 2015-07-09 2018-07-24 Cisco Technology, Inc. Stateless load-balancing across multiple tunnels
US10778765B2 (en) 2015-07-15 2020-09-15 Cisco Technology, Inc. Bid/ask protocol in scale-out NVMe storage
US20190034225A1 (en) * 2015-07-31 2019-01-31 Cisco Technology, Inc. Data suppression for faster migration
US10733011B2 (en) * 2015-07-31 2020-08-04 Cisco Technology, Inc. Data suppression for faster migration
US10083062B2 (en) * 2015-07-31 2018-09-25 Cisco Technology, Inc. Data suppression for faster migration
US11005710B2 (en) 2015-08-18 2021-05-11 Microsoft Technology Licensing, Llc Data center resource tracking
US10901769B2 (en) 2015-10-06 2021-01-26 Cisco Technology, Inc. Performance-based public cloud selection for a hybrid cloud environment
US10067780B2 (en) 2015-10-06 2018-09-04 Cisco Technology, Inc. Performance-based public cloud selection for a hybrid cloud environment
US11005682B2 (en) 2015-10-06 2021-05-11 Cisco Technology, Inc. Policy-driven switch overlay bypass in a hybrid cloud network environment
US11218483B2 (en) 2015-10-13 2022-01-04 Cisco Technology, Inc. Hybrid cloud security groups
US10462136B2 (en) 2015-10-13 2019-10-29 Cisco Technology, Inc. Hybrid cloud security groups
US10523657B2 (en) 2015-11-16 2019-12-31 Cisco Technology, Inc. Endpoint privacy preservation with cloud conferencing
US10474819B2 (en) * 2015-11-20 2019-11-12 Lastline, Inc. Methods and systems for maintaining a sandbox for use in malware detection
US20170147819A1 (en) * 2015-11-20 2017-05-25 Lastline, Inc. Methods and systems for maintaining a sandbox for use in malware detection
US10205677B2 (en) 2015-11-24 2019-02-12 Cisco Technology, Inc. Cloud resource placement optimization and migration execution in federated clouds
US10084703B2 (en) 2015-12-04 2018-09-25 Cisco Technology, Inc. Infrastructure-exclusive service forwarding
US10949370B2 (en) 2015-12-10 2021-03-16 Cisco Technology, Inc. Policy-driven storage in a microserver computing environment
US10585830B2 (en) 2015-12-10 2020-03-10 Cisco Technology, Inc. Policy-driven storage in a microserver computing environment
US10999406B2 (en) 2016-01-12 2021-05-04 Cisco Technology, Inc. Attaching service level agreements to application containers and enabling service assurance
US10367914B2 (en) 2016-01-12 2019-07-30 Cisco Technology, Inc. Attaching service level agreements to application containers and enabling service assurance
US10467103B1 (en) 2016-03-25 2019-11-05 Nutanix, Inc. Efficient change block training
US20170277555A1 (en) * 2016-03-26 2017-09-28 Vmware, Inc. Efficient vm migration across cloud using catalog aware compression
US10210011B2 (en) * 2016-03-26 2019-02-19 Vmware, Inc. Efficient VM migration across cloud using catalog aware compression
US10140172B2 (en) 2016-05-18 2018-11-27 Cisco Technology, Inc. Network-aware storage repairs
US10129177B2 (en) 2016-05-23 2018-11-13 Cisco Technology, Inc. Inter-cloud broker for hybrid cloud networks
US10872056B2 (en) 2016-06-06 2020-12-22 Cisco Technology, Inc. Remote memory access using memory mapped addressing among multiple compute nodes
US10664169B2 (en) 2016-06-24 2020-05-26 Cisco Technology, Inc. Performance of object storage system by reconfiguring storage devices based on latency that includes identifying a number of fragments that has a particular storage device as its primary storage device and another number of fragments that has said particular storage device as its replica storage device
US10659283B2 (en) 2016-07-08 2020-05-19 Cisco Technology, Inc. Reducing ARP/ND flooding in cloud environment
US10608865B2 (en) 2016-07-08 2020-03-31 Cisco Technology, Inc. Reducing ARP/ND flooding in cloud environment
US10432532B2 (en) 2016-07-12 2019-10-01 Cisco Technology, Inc. Dynamically pinning micro-service to uplink port
US10768961B2 (en) * 2016-07-14 2020-09-08 International Business Machines Corporation Virtual machine seed image replication through parallel deployment
US10263898B2 (en) 2016-07-20 2019-04-16 Cisco Technology, Inc. System and method for implementing universal cloud classification (UCC) as a service (UCCaaS)
US10382597B2 (en) 2016-07-20 2019-08-13 Cisco Technology, Inc. System and method for transport-layer level identification and isolation of container traffic
US10142346B2 (en) 2016-07-28 2018-11-27 Cisco Technology, Inc. Extension of a private cloud end-point group to a public cloud
US10567344B2 (en) 2016-08-23 2020-02-18 Cisco Technology, Inc. Automatic firewall configuration based on aggregated cloud managed information
US11563695B2 (en) 2016-08-29 2023-01-24 Cisco Technology, Inc. Queue protection using a shared global memory reserve
US10326838B2 (en) * 2016-09-23 2019-06-18 Microsoft Technology Licensing, Llc Live migration of probe enabled load balanced endpoints in a software defined network
US10523592B2 (en) 2016-10-10 2019-12-31 Cisco Technology, Inc. Orchestration system for migrating user data and services based on user information
US11716288B2 (en) 2016-10-10 2023-08-01 Cisco Technology, Inc. Orchestration system for migrating user data and services based on user information
US11403127B2 (en) * 2016-11-08 2022-08-02 International Business Machines Corporation Generating a virtual machines relocation protocol
US11044162B2 (en) 2016-12-06 2021-06-22 Cisco Technology, Inc. Orchestration of cloud and fog interactions
US10326817B2 (en) 2016-12-20 2019-06-18 Cisco Technology, Inc. System and method for quality-aware recording in large scale collaborate clouds
US10334029B2 (en) 2017-01-10 2019-06-25 Cisco Technology, Inc. Forming neighborhood groups from disperse cloud providers
US10545914B2 (en) 2017-01-17 2020-01-28 Cisco Technology, Inc. Distributed object storage
US10552191B2 (en) 2017-01-26 2020-02-04 Cisco Technology, Inc. Distributed hybrid cloud orchestration model
US10320683B2 (en) 2017-01-30 2019-06-11 Cisco Technology, Inc. Reliable load-balancer using segment routing and real-time application monitoring
US10917351B2 (en) 2017-01-30 2021-02-09 Cisco Technology, Inc. Reliable load-balancer using segment routing and real-time application monitoring
US10671571B2 (en) 2017-01-31 2020-06-02 Cisco Technology, Inc. Fast network performance in containerized environments for network function virtualization
US10243823B1 (en) 2017-02-24 2019-03-26 Cisco Technology, Inc. Techniques for using frame deep loopback capabilities for extended link diagnostics in fibre channel storage area networks
US11252067B2 (en) 2017-02-24 2022-02-15 Cisco Technology, Inc. Techniques for using frame deep loopback capabilities for extended link diagnostics in fibre channel storage area networks
US10713203B2 (en) 2017-02-28 2020-07-14 Cisco Technology, Inc. Dynamic partition of PCIe disk arrays based on software configuration / policy distribution
US10254991B2 (en) 2017-03-06 2019-04-09 Cisco Technology, Inc. Storage area network based extended I/O metrics computation for deep insight into application performance
US11005731B2 (en) 2017-04-05 2021-05-11 Cisco Technology, Inc. Estimating model parameters for automatic deployment of scalable micro services
US11169883B1 (en) * 2017-05-04 2021-11-09 Amazon Technologies, Inc. User and system initiated instance hibernation
US10382274B2 (en) 2017-06-26 2019-08-13 Cisco Technology, Inc. System and method for wide area zero-configuration network auto configuration
US10439877B2 (en) 2017-06-26 2019-10-08 Cisco Technology, Inc. Systems and methods for enabling wide area multicast domain name system
US11055159B2 (en) 2017-07-20 2021-07-06 Cisco Technology, Inc. System and method for self-healing of application centric infrastructure fabric memory
US10303534B2 (en) 2017-07-20 2019-05-28 Cisco Technology, Inc. System and method for self-healing of application centric infrastructure fabric memory
US10425288B2 (en) 2017-07-21 2019-09-24 Cisco Technology, Inc. Container telemetry in data center environments with blade servers and switches
US11411799B2 (en) 2017-07-21 2022-08-09 Cisco Technology, Inc. Scalable statistics and analytics mechanisms in cloud networking
US11695640B2 (en) 2017-07-21 2023-07-04 Cisco Technology, Inc. Container telemetry in data center environments with blade servers and switches
US10892940B2 (en) 2017-07-21 2021-01-12 Cisco Technology, Inc. Scalable statistics and analytics mechanisms in cloud networking
US11196632B2 (en) 2017-07-21 2021-12-07 Cisco Technology, Inc. Container telemetry in data center environments with blade servers and switches
US11159412B2 (en) 2017-07-24 2021-10-26 Cisco Technology, Inc. System and method for providing scalable flow monitoring in a data center fabric
US10601693B2 (en) 2017-07-24 2020-03-24 Cisco Technology, Inc. System and method for providing scalable flow monitoring in a data center fabric
US11233721B2 (en) 2017-07-24 2022-01-25 Cisco Technology, Inc. System and method for providing scalable flow monitoring in a data center fabric
US11102065B2 (en) 2017-07-25 2021-08-24 Cisco Technology, Inc. Detecting and resolving multicast traffic performance issues
US10541866B2 (en) 2017-07-25 2020-01-21 Cisco Technology, Inc. Detecting and resolving multicast traffic performance issues
US10404596B2 (en) 2017-10-03 2019-09-03 Cisco Technology, Inc. Dynamic route profile storage in a hardware trie routing table
US10999199B2 (en) 2017-10-03 2021-05-04 Cisco Technology, Inc. Dynamic route profile storage in a hardware trie routing table
US11570105B2 (en) 2017-10-03 2023-01-31 Cisco Technology, Inc. Dynamic route profile storage in a hardware trie routing table
US10942666B2 (en) 2017-10-13 2021-03-09 Cisco Technology, Inc. Using network device replication in distributed storage clusters
US10353800B2 (en) 2017-10-18 2019-07-16 Cisco Technology, Inc. System and method for graph based monitoring and management of distributed systems
US10866879B2 (en) 2017-10-18 2020-12-15 Cisco Technology, Inc. System and method for graph based monitoring and management of distributed systems
US10917260B1 (en) * 2017-10-24 2021-02-09 Druva Data management across cloud storage providers
US11481362B2 (en) 2017-11-13 2022-10-25 Cisco Technology, Inc. Using persistent memory to enable restartability of bulk load transactions in cloud databases
US10705882B2 (en) 2017-12-21 2020-07-07 Cisco Technology, Inc. System and method for resource placement across clouds for data intensive workloads
US11595474B2 (en) 2017-12-28 2023-02-28 Cisco Technology, Inc. Accelerating data replication using multicast and non-volatile memory enabled nodes
US20190273779A1 (en) * 2018-03-01 2019-09-05 Hewlett Packard Enterprise Development Lp Execution of software on a remote computing system
US11233737B2 (en) 2018-04-06 2022-01-25 Cisco Technology, Inc. Stateless distributed load-balancing
US10511534B2 (en) 2018-04-06 2019-12-17 Cisco Technology, Inc. Stateless distributed load-balancing
US11252256B2 (en) 2018-05-29 2022-02-15 Cisco Technology, Inc. System for association of customer information across subscribers
US10728361B2 (en) 2018-05-29 2020-07-28 Cisco Technology, Inc. System for association of customer information across subscribers
US10904322B2 (en) 2018-06-15 2021-01-26 Cisco Technology, Inc. Systems and methods for scaling down cloud-based servers handling secure connections
US11968198B2 (en) 2018-06-19 2024-04-23 Cisco Technology, Inc. Distributed authentication and authorization for rapid scaling of containerized services
US11552937B2 (en) 2018-06-19 2023-01-10 Cisco Technology, Inc. Distributed authentication and authorization for rapid scaling of containerized services
US10764266B2 (en) 2018-06-19 2020-09-01 Cisco Technology, Inc. Distributed authentication and authorization for rapid scaling of containerized services
US11019083B2 (en) 2018-06-20 2021-05-25 Cisco Technology, Inc. System for coordinating distributed website analysis
US10819571B2 (en) 2018-06-29 2020-10-27 Cisco Technology, Inc. Network traffic optimization using in-situ notification system
US10904342B2 (en) 2018-07-30 2021-01-26 Cisco Technology, Inc. Container networking using communication tunnels
US10860363B1 (en) * 2019-03-14 2020-12-08 Amazon Technologies, Inc. Managing virtual machine hibernation state incompatibility with underlying host configurations
US10739983B1 (en) 2019-04-10 2020-08-11 Servicenow, Inc. Configuration and management of swimlanes in a graphical user interface
US11182193B2 (en) * 2019-07-02 2021-11-23 International Business Machines Corporation Optimizing image reconstruction for container registries
US11822522B2 (en) * 2020-01-31 2023-11-21 EMC IP Holding Company LLC Intelligent filesystem for container images
US20210240671A1 (en) * 2020-01-31 2021-08-05 EMC IP Holding Company LLC Intelligent filesystem for container images
US11418588B2 (en) 2020-09-29 2022-08-16 EMC IP Holding Company LLC Intelligent peer-to-peer container filesystem

Also Published As

Publication number Publication date
EP2625605A2 (en) 2013-08-14
US20120084414A1 (en) 2012-04-05
WO2012048037A2 (en) 2012-04-12
WO2012048030A3 (en) 2012-07-19
CA2813560A1 (en) 2012-04-12
WO2012048030A2 (en) 2012-04-12
AU2011312036B2 (en) 2016-06-09
EP2625605A4 (en) 2018-01-03
AU2011312029B2 (en) 2016-05-19
US9110727B2 (en) 2015-08-18
CA2813561A1 (en) 2012-04-12
AU2011312029A1 (en) 2013-05-02
WO2012048037A3 (en) 2012-07-19
EP2625604A2 (en) 2013-08-14
AU2011312036A1 (en) 2013-05-02

Similar Documents

Publication Publication Date Title
US9110727B2 (en) Automatic replication of virtual machines
US11716385B2 (en) Utilizing cloud-based storage systems to support synchronous replication of a dataset
US9906598B1 (en) Distributed data storage controller
EP3069274B1 (en) Managed service for acquisition, storage and consumption of large-scale data streams
CA2930101C (en) Partition-based data stream processing framework
US20220083245A1 (en) Declarative provisioning of storage
EP3069495B1 (en) Client-configurable security options for data streams
CA2930026C (en) Data stream ingestion and persistence techniques
Rao et al. Performance issues of heterogeneous Hadoop clusters in cloud computing
AU2011312100B2 (en) Automatic selection of secondary backend computing devices for virtual machine image replication
US8918392B1 (en) Data storage mapping and management
US10942814B2 (en) Method for discovering database backups for a centralized backup system
US11314444B1 (en) Environment-sensitive distributed data management
Bose et al. CloudSpider: Combining replication with scheduling for optimizing live migration of virtual machines across wide area networks
US8930364B1 (en) Intelligent data integration
US10810042B2 (en) Distributed job scheduler with intelligent job splitting
US10909000B2 (en) Tagging data for automatic transfer during backups
US11442817B2 (en) Intelligent scheduling of backups
US11134121B2 (en) Method and system for recovering data in distributed computing system
You et al. Ursa: Scalable load and power management in cloud storage systems

Legal Events

Date Code Title Description
AS Assignment

Owner name: DEUTSCH BANK NATIONAL TRUST COMPANY; GLOBAL TRANSACTION BANKING

Free format text: SECURITY AGREEMENT;ASSIGNOR:UNISYS CORPORATION;REEL/FRAME:025864/0519

Effective date: 20110228

AS Assignment

Owner name: GENERAL ELECTRIC CAPITAL CORPORATION, AS AGENT, ILLINOIS

Free format text: SECURITY AGREEMENT;ASSIGNOR:UNISYS CORPORATION;REEL/FRAME:026509/0001

Effective date: 20110623

AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:DEUTSCHE BANK TRUST COMPANY;REEL/FRAME:030004/0619

Effective date: 20121127

AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:DEUTSCHE BANK TRUST COMPANY AMERICAS, AS COLLATERAL TRUSTEE;REEL/FRAME:030082/0545

Effective date: 20121127

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: UNISYS CORPORATION, PENNSYLVANIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:WELLS FARGO BANK, NATIONAL ASSOCIATION (SUCCESSOR TO GENERAL ELECTRIC CAPITAL CORPORATION);REEL/FRAME:044416/0358

Effective date: 20171005