WO2016109743A1 - Systems and methods for implementing stretch clusters in a virtualization environment - Google Patents

Systems and methods for implementing stretch clusters in a virtualization environment

Info

Publication number
WO2016109743A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
node
destination
source
replication
Prior art date
Application number
PCT/US2015/068178
Other languages
French (fr)
Inventor
Parthasarathy Ramachandran
Brian Byrne
Original Assignee
Nutanix, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US14/586,614 external-priority patent/US9933956B2/en
Application filed by Nutanix, Inc. filed Critical Nutanix, Inc.
Publication of WO2016109743A1 publication Critical patent/WO2016109743A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 - File systems; File servers
    • G06F16/17 - Details of further file system functions
    • G06F16/178 - Techniques for file synchronisation in file systems

Definitions

  • This disclosure concerns a mechanism for performing data replication in a networked virtualization environment.
  • Data replication involves replicating data located at a source location to a destination location. There may be any number of reasons that it is desirable to perform data replication. One possible reason to implement data replication is for the purpose of disaster recovery, where data replicated from the source to the destination may be later recovered at the destination when the source undergoes failure.
  • both the source and destination systems may be implemented as clusters, where each cluster is a collection of datastores having shared resources and possibly a shared management interface.
  • the goal of the data replication is to "stretch" the clusters so that all or part of the datastore from the source cluster is replicated to the destination cluster - so that the datastore appears to be stretched across the two clusters.
  • the problem is that there may be a number of different configuration differences and incompatibilities between the source and destination clusters and datastores.
  • the namespace protocol at the source datastore may be quite different from the namespace protocol at the destination datastore.
  • Embodiments of the present invention provide a method, system, and computer program product for stretching datastores/clusters in a virtualization environment. Some embodiments provide an approach to perform data replication across multiple namespace protocols. In addition, some embodiments can control the granularity of the data replication such that different combinations of data subsets are replicated from one cluster to another.
  • a method, system, and computer program product that operates by receiving a request to replicate data of a first namespace type from a first node to a second node, wherein the data is to be replicated to the second node as a second namespace type, translating the request to replicate the data of the first namespace type into a normalized format that is implemented by a storage system, translating the normalized format into a request corresponding to the second namespace type, and replicating the data to the second node in the second namespace type.
  • the first node corresponds to a first virtualization node
  • the second node corresponds to a second virtualization node
  • the data corresponds to storage in a virtualization environment comprising virtual disks.
  • a controller virtual machine performs namespace translations.
  • a mapping structure may be employed to perform namespace translations. Translations may occur to replicate the data into a different storage architecture.
  • a portion of a storage hierarchy at the first node is not replicated to the second node or is replicated to a different hierarchical location at the second node.
  • multiple nodes may replicate the data to a single node.
  • One embodiment operates by traversing a hierarchy for the data at the first node to identify necessary nodes, constructing metadata at the second node corresponding to the necessary nodes, and replicating the data to the second node in correspondence to the metadata.
  • the first node corresponds to a first hypervisor type and the second node corresponds to a second hypervisor type, where the first hypervisor type is different from the second hypervisor type.
  • a replication policy can be established that dynamically adjusts between asynchronous and synchronous replication for replicating the data.
  • FIG. 1 illustrates a networked virtualization environment for storage management according to some embodiments of the invention.
  • FIG. 2 provides an illustration of an approach to implement stretch clusters in a virtualization environment according to some embodiments of the invention.
  • Fig. 3 shows an approach that can be taken to implement stretch clusters in this situation according to some embodiments of the invention.
  • Fig. 4 shows a flowchart that illustrates the process of Fig. 3.
  • Fig. 5 shows an example structure of source data to be replicated.
  • Fig. 6 shows an approach that can be taken to perform replication according to some embodiments of the invention.
  • Fig. 7 illustrates an application of this process to perform the replication shown in Fig. 5.
  • FIG. 8 illustrates a 1-to-many relationship for data replication.
  • FIG. 9 illustrates a many-to-1 relationship for data replication.
  • Fig. 10 illustrates an example of active-to-active replication.
  • Fig. 11 illustrates data replication in a chained relationship.
  • FIG. 12 is a block diagram of an illustrative computing system suitable for implementing an embodiment of the present invention.
  • Embodiments of the present invention provide a method, system, and computer program product for stretching datastores/clusters in a virtualization environment. Some embodiments provide an approach to perform data replication across multiple namespace protocols. In addition, some embodiments can control the granularity of the data replication such that different combinations of data subsets are replicated from one cluster to another.
  • the embodiments of the invention pertain to a virtualization environment, where a "virtual machine" or a "VM" operates in the virtualization environment.
  • a VM refers to a specific software-based implementation of a machine in the virtualization environment, in which the hardware resources of a real computer (e.g., CPU, memory, etc.) are virtualized or transformed into the underlying support for the fully functional virtual machine that can run its own operating system and applications on the underlying physical resources just like a real computer.
  • Virtualization works by inserting a thin layer of software directly on the computer hardware or on a host operating system. This layer of software contains a virtual machine monitor or "hypervisor" that allocates hardware resources dynamically and transparently.
  • Multiple operating systems run concurrently on a single physical computer and share hardware resources with each other.
  • By encapsulating an entire machine, including CPU, memory, operating system, and network devices, a virtual machine is completely compatible with most standard operating systems, applications, and device drivers. Most modern implementations allow several operating systems and applications to safely run at the same time on a single computer, with each having access to the resources it needs when it needs them.
  • Virtualization allows one to run multiple virtual machines on a single physical machine, with each virtual machine sharing the resources of that one physical computer across multiple environments. Different virtual machines can run different operating systems and multiple applications on the same physical computer.
  • Data Centers are often architected as diskless computers (“application servers”) that communicate with a set of networked storage appliances (“storage servers”) via a network, such as a Fiber Channel or Ethernet network.
  • a storage server exposes volumes that are mounted by the application servers for their storage needs. If the storage server is a block-based server, it exposes a set of volumes that are also called Logical Unit Numbers (LUNs). If, on the other hand, a storage server is file-based, it exposes a set of volumes that are also called file systems.
  • Storage devices comprise one type of physical resources that can be managed and utilized in a virtualization environment.
  • VMWare is a company that provides products to implement virtualization, in which networked storage devices are managed by the VMWare virtualization software to provide the underlying storage infrastructure for the VMs in the computing environment.
  • the VMWare approach implements a file system (VMFS) that exposes storage hardware to the VMs.
  • the VMWare approach uses VMDK "files" to represent virtual disks that can be accessed by the VMs in the system. Effectively, a single volume can be accessed and shared among multiple VMs.
  • Microsoft is another company that offers a virtualization product, known as the Hyper-V product. This type of virtualization product is often used to implement SMB type file shares to store its underlying data.
  • FIG. 1 illustrates an example networked virtualization environment for implementing storage management according to some embodiments of the invention.
  • the networked virtualization environment of FIG. 1 can be implemented for a distributed platform that contains multiple nodes (e.g., servers) 100a and 100b that manage multiple tiers of storage.
  • the multiple tiers of storage include storage that is accessible through a network 140, such as cloud storage 126 or networked storage 128 (e.g., a SAN or "storage area network").
  • the present embodiment also permits local storage 122/124 that is within or directly attached to the node and/or appliance to be managed as part of the storage pool 160.
  • vDisks can be structured from the storage devices in the storage pool 160.
  • the term vDisk refers to the storage abstraction that is exposed by a Service VM to be used by a user VM.
  • the vDisk is exposed via iSCSI ("internet small computer system interface") or NFS ("network file system”) and is mounted as a virtual disk on the user VM.
  • Each node 100a or 100b runs virtualization software, such as VMWare ESX(i), Microsoft Hyper-V, or RedHat KVM.
  • the virtualization software includes a hypervisor 130/132 to manage the interactions between the underlying hardware and the one or more user VMs 102a, 102b, 102c and 102d that run client software.
  • a special VM 110a/110b is used to manage storage and I/O activities according to some embodiments of the invention, which is referred to herein as a "Service VM" or "Controller VM". This is the "Storage Controller" in the currently described networked virtualization environment for storage management. Multiple such storage controllers coordinate within a cluster to form a single system.
  • the Service VMs 110a/110b are not formed as part of specific implementations of hypervisors 130/132.
  • the Service VMs run as virtual machines above hypervisors 130/132 on the various servers 100a and 100b, and work together to form a distributed system 110 that manages all the storage resources, including the locally attached storage 122/124, the networked storage 128, and the cloud storage 126. Since the Service VMs run above the hypervisors 130/132, this means that the current approach can be used and implemented within any virtual machine architecture, since the Service VMs of embodiments of the invention can be used in conjunction with any hypervisor from any virtualization vendor.
  • Each Service VM 110a-b exports one or more block devices or NFS server targets that appear as disks to the client VMs 102a-d. These disks are virtual, since they are implemented by the software running inside the Service VMs 110a-b. Thus, to the user VMs 102a-d, the Service VMs 110a-b appear to be exporting a clustered storage appliance that contains some disks. All user data (including the operating system) in the client VMs 102a-d resides on these virtual disks.
  • the virtualization environment is capable of managing and accessing locally attached storage, as is the case with the present embodiment, various optimizations can then be implemented to improve system performance even further.
  • the data to be stored in the various storage devices can be analyzed and categorized to determine which specific device should optimally be used to store the items of data. Data that needs to be accessed much faster or more frequently can be identified for storage in the locally attached storage 122. On the other hand, data that does not require fast access or which is accessed infrequently can be stored in the networked storage devices 128 or in cloud storage 126.
  • Another advantage provided by this approach is that administration activities can be handled on a much more efficient granular level.
  • Prior art approaches of using a legacy storage appliance in conjunction with VMFS rely heavily on what the hypervisor can do at its own layer with individual "virtual hard disk" files, effectively making all storage array capabilities meaningless. This is because the storage array manages much coarser grained volumes while the hypervisor needs to manage finer-grained virtual disks.
  • the present embodiment can be used to implement administrative tasks at much smaller levels of granularity, one in which the smallest unit of administration at the hypervisor matches exactly with that of the storage tier itself.
  • Yet another advantage of the present embodiment of the invention is that storage- related optimizations for access and storage of data can be implemented directly within the primary storage path.
  • the Service VM 110a can directly perform data deduplication tasks when storing data within the storage devices. This is far more advantageous than prior art approaches that require add-on vendors/products outside of the primary storage path to provide deduplication functionality for a storage system.
  • Other examples of optimizations that can be provided by the Service VMs include quality of service (QOS) functions, encryption and compression.
  • the virtualization environment massively parallelizes storage by placing a storage controller (in the form of a Service VM) at each hypervisor, and thus makes it possible to provide enough CPU and memory resources to achieve the aforementioned optimizations.
  • Data replication involves replicating data located at a source to a destination. This may be performed, for example, to implement a disaster recovery process, where data replicated from the source to the destination may be later recovered at the destination when the source undergoes failure.
  • the networked virtualization environment illustrated in FIG. 1 may be representative of the source networked virtualization environment or destination networked virtualization environment for purposes of data replication.
  • Further details regarding data replication are described in U.S. Application No. 14/019,139, filed on September 5, 2013, entitled "System and Methods for Performing Distributed Data Replication in a Networked Virtualization Environment".
  • a source datastore 202a in a first cluster 1 is to be replicated as a replicated datastore 202b in a second cluster 2.
  • This replication may be necessary for any of multiple possible purposes.
  • the data replication may be necessary to implement disaster recovery, where the source datastore 202a corresponds to a primary data storage location and the destination datastore 202b corresponds to a failover data storage location.
  • the source datastore 202a and the destination datastore 202b are implemented using different namespace types.
  • the source datastore 202a is implemented using SMB (e.g., because its corresponding virtualization system implements a Hyper-V hypervisor 230).
  • the destination datastore 202b is implemented using an entirely different namespace protocol, such as NFS or iSCSI (e.g., because its corresponding virtualization system implements a hypervisor 232 that differs from the hypervisor 230 of the source system).
  • Fig. 3 shows an approach that can be taken to implement stretch clusters in this situation according to some embodiments of the invention.
  • at 301, a request 311 in the appropriate protocol for the source datastore is received from the source virtualization system.
  • This request 311 is specific to the namespace protocol of the source datastore.
  • the request 311 itself would correspond to the appropriate SMB protocol and syntax.
  • a protocol translator 304 is employed to translate the original request 311 into an intermediate and/or normalized format.
  • the intermediate format corresponds to an internal data representation understandable by the storage controller of the system.
  • the internal representation would correspond to any internal data representation that is used by the controller VMs.
  • a protocol translator 306 is employed at the destination to translate the intermediate/normalized request 305 into the format appropriate for the namespace at the destination datastore.
  • a mapping table 307 can be employed by protocol translator 304 to translate the original request 311 from the source namespace format into the normalized request 305.
  • the same or similar mapping table can also be used by protocol translator 306 to translate the normalized request 305 into the final request 313.
  • the mapping table 307 comprises any information necessary to map from the different namespaces in the system to each other and/or to any internal representations used by the system storage controllers (e.g., the controller VMs).
  • Fig. 4 shows a flowchart that illustrates this process.
  • a first request is received for the source datastore, where the first request is in the appropriate format for the namespace type for the first datastore.
  • the problem is that the destination datastore has a completely different namespace type.
  • the first request is translated into an intermediate/normalized format.
  • a mapping table can be employed to translate the first request into the intermediate/normalized format.
  • the intermediate/normalized request is sent to the location of the destination datastore.
  • This location is likely in a second cluster, which is different from the cluster that holds the source datastore.
  • the underlying virtualization technology may also be different, e.g., where the hypervisor at the source system is different from the hypervisor at the destination system in terms of its type, manufacturer, or underlying technology.
  • the intermediate/normalized request is then translated into the appropriate namespace protocol for the destination datastore. Thereafter, at 409, the request is executed at the destination datastore.
  • the replication of the data may thus require the data to be re-formatted and/or reconfigured as necessary so that it can fit within the structure of the destination datastore.
  • the data to be replicated at the original source datastore may be in a first namespace type at a first inode number.
  • An "inode" is an index node that corresponds to a data structure to represent a filesystem object, such as a file or a directory.
  • the destination datastore may correspond to a different namespace type from the source datastore, and the inode numbers at the source datastore may not have any relevance to the inode numbers used at the destination datastore.
  • the file/directory structure of the source data may need to be modified into the appropriate file/directory structure that exists at the destination datastore.
  • the original inode number for the data would be changed to correspond to the appropriate inode number that is usable at the destination data store.
  • the incompatibilities that may need to be addressed are not limited only to namespace type differences between the source datastore and the destination datastore.
  • the granularity and/or quantity of the data to be replicated may also be different between the source datastore and the destination datastore.
  • the below description describes examples of replication from a source node to a destination node. It is noted that replication occurs from a first cluster to a second cluster, and therefore the term "node" can correspond to one or more nodes at a given cluster.
  • Fig. 5 shows an example structure of some source data that is to be replicated.
  • the source data is in a hierarchical form, having a root node 502, a node 504 that is a child of node 502, and nodes 506, 508, and 510 that branch off from node 504.
  • nodes 506 and 510 correspond to VMs for which a replication policy is established that requires them to be replicated to a destination system, where the policy does not specify the same level of replication for the other data within the source datastore.
  • Fig. 6 shows an approach that can be taken to perform this type of replication according to some embodiments of the invention.
  • a request is received to implement replication from a source datastore to a destination datastore.
  • the request may pertain to synchronization of an arbitrary portion of the source datastore (and not the entirety of the source datastore). Therefore, the replication approach needs to understand which portions of the source datastore need to be processed in order to fulfill the replication request.
  • a traversal is made from the specific leaf nodes identified for the replication to the root of the data in the source datastore. This is performed to determine all of the intermediate nodes which were not specifically identified for replication, but which may need to be processed to ensure that proper dependencies are handled for the replication.
  • replication will proceed to replicate the desired data at the destination datastore (e.g., using the process described above to send the replication request from the source to the destination in the appropriate formats).
  • the destination will first construct the metadata for the replicated data. This is implemented by modifying the metadata of the destination data to account for the new directories and/or files to be added to the destination datastore. For example, the directory file object that tracks the directories and files in the destination datastore would be modified as necessary to account for the inclusion of the replicated data.
  • any other metadata managed by the storage system to account for data at the destination datastore would also be modified at this point.
  • the actual data would be replicated to the destination datastore. This may involve an immediate copy of all of the to-be-replicated data from the source datastore to the destination datastore. The data would be placed into the appropriate locations configured for that data (based at least upon the modifications made to the metadata).
  • a multi-phase approach can be taken to replicate the data, where only a portion of the data to be replicated is immediately copied, and where the bulk of the data is copied in the background at a later point in time.
  • This approach can be taken to reduce the immediate latency of the replication operation. For example, as described in more detail below, only newly modified data can be immediately replicated, whereas source data that has not been modified is replicated later on.
  • Fig. 7 illustrates an application of this process to perform the replication shown in Fig. 5.
  • nodes 506 and 510 have been specifically identified in the source datastore to be replicated to the destination datastore. However, by traversing from these nodes through the data hierarchy, it can be seen that these nodes have dependencies that may exist through intermediate node 504 back upwards in the hierarchy to the root node 502. Therefore, when replication occurs, in addition to nodes 506 and 510, their parent nodes 502 and 504 will also be identified for replication. Any files within these directories/nodes that are required for consistency purposes will also be identified for replication.
  • the first intermediate synchronization will involve construction of metadata for these objects at the destination datastore.
  • the next stage of the replication will then involve replication of the data for these objects to be replicated to the destination datastore.
  • Asynchronous data replication occurs where a write operation for a piece of data at a source is committed as soon as the source acknowledges the completion of the write operation. Replication of the data at the destination may occur at a later time after the write operation at the source has been committed.
  • Synchronous data replication occurs where a write operation for a piece of data at a source is committed only after the destination has replicated the data and acknowledged completion of the write operation.
  • a committed write operation for data at the source is guaranteed to have a copy at the destination.
  • Asynchronous data replication is advantageous in certain situations because it may be performed with more efficiency due to the fact that a write operation for data at the source can be committed without having to wait for the destination to replicate the data and acknowledge completion of the write operation.
  • asynchronous data replication may result in potential data loss where the source fails prior to the replication of data at the destination.
  • Synchronous data replication guarantees that data loss will not occur when the source fails because the write operation for data is not committed until the destination has verified that it too has a copy of the data.
  • having to wait for data to be written at both the source and the destination before committing a write operation may lead to latency as well as strain on system resources (e.g., CPU usage, memory usage, network traffic, etc.).
  • data replication involves setting a fixed data replication policy (either synchronous or asynchronous).
  • a fixed synchronous data replication policy may be defined by various timing parameters such as the time taken for performing the data replication process or the wait time between successive data replication processes.
  • the fixed synchronous data replication policy may dictate the exact timing parameters for data replication such that performance of synchronous data replication under a particular policy must be made using the exact timing parameters of that particular policy.
  • the fixed synchronous data replication policy may provide a guideline for data replication such that performance of synchronous data replication under a particular policy attempts to meet the timing parameters of that particular policy without necessarily exactly meeting those timing parameters.
  • Setting a fixed data replication policy may be efficient where the source and destination operate at a steady resource consumption rate and the amount of data to be replicated remains steady. However, where the rate of resource consumption or amount of data to be replicated exhibits volatility, the fixed data replication policy may lead to the underutilization of resources when additional resources are available or where the amount of data to be replicated significantly decreases. Similarly, inefficiency may occur where the fixed data replication policy overutilizes resource availability when fewer resources are available or where the amount of data to be replicated significantly increases.
  • any number of sources may be servicing any number of destinations.
  • a one-to-one configuration may exist between a source and a destination.
  • as shown in FIG. 8, it is possible for a 1-to-many relationship to exist, where a source set of data 802 is replicated to multiple destination datastores 802' and 802". This may be used, for example, to take a single set (or subset) of data, and break that data into even smaller subsets at the multiple destinations. Of course, the original set/subset of data can be merely replicated in its entirety to the multiple destinations.
  • as shown in FIG. 9, it is also possible for a many-to-1 relationship to exist, where multiple source sets of data 902/904 are replicated to a single destination (902/904)'. This situation may be used, for example, to replicate multiple small subsets of data to form a single larger set of data at the destination. It is noted that in the many-to-1 scenario, there could be multiple different mount points at the destination (and not just one mount point as shown in Fig. 9).
  • FIG. 10 shows another possible architecture, where active-to-active replication occurs.
  • each datastore will replicate some or all of its data to the other datastore.
  • the data 1002 is replicated from cluster 1 to cluster 2 as data 1002'.
  • data 1004 is replicated from cluster 2 to cluster 1 as data 1004'. It is important to note, however, that each destination may additionally be a source to any number of other destinations in a chained arrangement, as shown in Fig. 11 where the data object 1102 at cluster 1 is replicated to cluster 2, and the replicated data object 1102' is in turn replicated to cluster 3 as data 1102".
  • parameters associated with data replication may vary over time, with the number of source(s) and destination(s) changing for the different data objects/VMs. Even when the numbers for the data replication remain the same at the source, a corresponding destination may experience various parameter changes that decrease the efficiency of using a fixed data replication policy.
  • dynamic adjustments can be made between synchronous and asynchronous data replication policies.
  • dynamically adjusting between synchronous and asynchronous data replication policies may refer to the act of switching from an asynchronous data replication policy to a synchronous data replication policy or vice versa and may additionally refer to the act of transitioning between an asynchronous data replication policy with a first set of timing parameters to an asynchronous data replication policy with a different set of timing parameters.
  • the data replication policy may shift from a synchronous data replication policy to an asynchronous data replication policy.
  • the data replication policy may shift from an asynchronous data replication policy to a synchronous data replication policy.
  • the data replication policy may shift from a synchronous data replication policy with a long replication time to a data replication policy with a shorter replication time to account for the low resource utilization.
  • the process for dynamically adjusting between data replication policies may initiate under various different circumstances. In some circumstances, the process may begin at periodic intervals, such as every two hours. Alternatively, the process may begin when a resource utilization level at either the source or the destination rises above or falls below a particular threshold. As another example, the process for dynamically adjusting between data replication policies may initiate whenever a service VM loses or gains additional user VMs.
  • the data replication policy may be a synchronous data replication policy, where every write operation of data for the user VM is not committed until the data is replicated and the write operation is acknowledged at the destination.
  • the data replication policy may be an asynchronous data replication policy, where a write operation of data for the user VM is committed once the source has acknowledged the write operation.
  • the asynchronous data replication policy may indicate a time period for performing data replication. For example, the asynchronous data replication policy may indicate that data replication is to be performed in five minutes. Alternatively, the asynchronous data replication policy may indicate a time period between successive data replications.
  • the synchronous data replication policy may indicate that a five minute period of time passes between successive replication steps.
  • the asynchronous data replication policy may indicate a total time period for data replication.
  • the synchronous data replication policy may indicate that data replication is to be performed for five minutes with a five minute pause between successive replication steps.
  • a load level may then be determined by the source service VM.
  • the load level may indicate the current amount of resources being utilized by the source service VM.
  • the load level being utilized by the source service VM may be important in determining how the data replication policy should be adjusted because it indicates the amount of additional load the source service VM can take on or the amount of load the source service VM needs to be reduced by in order to perform at an optimal level.
  • the load level may indicate the current amount of resources being utilized by the source service VM as well as the amount of resources being utilized by the destination service VM.
  • the load level being utilized by the destination service VM may be important in determining how the data replication policy should be adjusted because it indicates the amount of additional load the destination service VM can take on or the amount of load the destination service VM needs to be reduced by in order to perform at an optimal level.
  • service VMs may monitor their own resource usage.
  • the source service VM may determine its load level by consulting its monitored resource usage and the source service VM may determine the load level at the destination service VM by communicating with the destination service VM to determine the amount of resource usage at the destination.
  • a central controller may monitor the resource usage of both the source service VM and the destination service VM, and the source service VM may communicate with the central controller to determine the load level for both the source and the destination.
  • the load level at either the source service VM, destination service VM or their combination may include various resource usage parameters such as, for example, CPU usage, memory usage, and network bandwidth utilization.
  • the shift in replication time from the current data replication policy to the desired data replication policy may involve lengthening or shortening the time for performing data replication.
  • the shift in replication time from a current data replication policy to a desired data replication policy may involve lengthening or shortening the time between successive data replications.
  • the approach provides for transitioning from an asynchronous data replication policy to a synchronous data replication policy without having to first place the destination (e.g., destination vDisk) into the same state as the source (e.g., source vDisk); a simplified code sketch of this transition is provided at the end of this list. Initially, a snapshot of the source vDisk is taken at a first point in time where the state of the source vDisk snapshot is the same as the destination vDisk prior to the data replication policy transitioning from an asynchronous data replication policy to a synchronous data replication policy.
  • the service VM facilitating data replication at the source may determine that a snapshot should be taken based on a user indicating that the data replication policy should be transitioned from an asynchronous data replication policy to a synchronous data replication policy. The snapshot is then taken at the point in time where the state of the source vDisk is equivalent to state of the destination vDisk prior to the data replication policy transitioning from an asynchronous data replication policy to a synchronous data replication policy.
  • the service VM facilitating data replication at the source may determine that a snapshot should be taken based on the service VM at the source losing connection with the service VM at the destination. The snapshot is taken at the last point in time where the source vDisk and the destination vDisk have the same state (e.g., point in time immediately preceding the loss of connection).
  • the snapshot taken of the source vDisk provides the state of destination vDisk at the last point in time prior to the data replication policy transitioning from an asynchronous mode to a synchronous mode. After the first point in time, the source vDisk may continue to perform I/O operations that may change the contents of the source vDisk. Because an asynchronous data replication policy is used to replicate data between the source vDisk and destination vDisk after the first point in time, but prior to the data replication policy transitioning into a synchronous data replication policy, I/O operations performed on the source vDisk during that time are not immediately replicated at the destination vDisk. Thus, at a second point in time when the data replication policy transitions from the asynchronous data replication policy to the synchronous data replication policy, the changes made to the source vDisk after the first snapshot are not yet replicated at the destination vDisk.
  • a second snapshot of the source vDisk is taken.
  • the second snapshot provides the state of the source vDisk at the point where the data replication policy transitions from an asynchronous data replication policy to a synchronous data replication policy.
  • the service VM facilitating data replication at the source provides enough metadata to the service VM facilitating data replication at the destination to allow for a shell destination vDisk to be generated.
  • metadata is provided from the source to the destination.
  • the metadata provided from the source to the destination includes the structural parameters of the source vDisk, but does not identify the content changes that occurred between the first point in time and the second point in time.
  • a shell destination vDisk that is structurally similar to the source vDisk at the time of the second snapshot, but does not have all of the contents of the source vDisk may be generated at the destination.
  • the shell destination vDisk is later populated with the contents of the source vDisk through a background process.
  • any write operations performed on the source vDisk are synchronously replicated at the shell destination vDisk.
  • while the shell destination vDisk does not yet have all the contents of the source vDisk, it is structurally similar to the source vDisk, and so any write operations performed at the source vDisk after the second point in time may be replicated on the shell destination vDisk.
  • the shell destination vDisk may have an empty structure with the same number of data blocks as the source vDisk, and may perform a write operation to any of those empty data blocks to replicate a write operation that occurs at the source vDisk.
  • synchronous data replication between the source and the destination may begin immediately without having to first place the destination vDisk into the same state as the source vDisk. This significantly decreases the amount of delay incurred during a transition from an asynchronous data replication policy to a synchronous data replication policy.
  • the differences between the snapshot at the first point in time and the snapshot at the second point in time are then provided to the destination as a background process. Those differences are then used to populate the shell destination vDisk so that the shell destination vDisk may have the same state as the source vDisk.
  • the differences between the snapshot at the first point in time and the snapshot at the second point in time may be provided in a single operation (e.g., batch process). In other embodiments, the differences between the snapshot at the first point in time and the snapshot at the second point in time may be provided over several different operations.
  • the above-described techniques for implementing stretch clusters can be applied even when the source and destination datastores have the same namespace protocols. This may occur due to different storage-related properties between the source and the destination. For example, when the same namespace type is on both ends (such as NFS on both ends), the above-described mapping may still occur due to the need for a new inode at the destination end.
  • FIG. 12 is a block diagram of an illustrative computing system 1400 suitable for implementing an embodiment of the present invention.
  • Computer system 1400 includes a bus 1406 or other communication mechanism for communicating information, which interconnects subsystems and devices, such as processor 1407, system memory 1408 (e.g., RAM), static storage device 1409 (e.g., ROM), disk drive 1410 (e.g., magnetic or optical), communication interface 1414 (e.g., modem or Ethernet card), display 1411 (e.g., CRT or LCD), input device 1412 (e.g., keyboard), and cursor control.
  • computer system 1400 performs specific operations by processor 1407 executing one or more sequences of one or more instructions contained in system memory 1408. Such instructions may be read into system memory 1408 from another computer readable/usable medium, such as static storage device 1409 or disk drive 1410.
  • hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention.
  • embodiments of the invention are not limited to any specific combination of hardware circuitry and/or software.
  • the term "logic" shall mean any combination of software or hardware that is used to implement all or part of the invention.
  • Non-volatile media includes, for example, optical or magnetic disks, such as disk drive 1410.
  • Volatile media includes dynamic memory, such as system memory 1408.
  • Common forms of computer readable media include, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can read.
  • execution of the sequences of instructions to practice the invention is performed by a single computer system 1400.
  • two or more computer systems 1400 coupled by communication link 1415 may perform the sequence of instructions required to practice the invention in coordination with one another.
  • Computer system 1400 may transmit and receive messages, data, and instructions, including program, i.e., application code, through communication link 1415 and communication interface 1414.
  • Received program code may be executed by processor 1407 as it is received, and/or stored in disk drive 1410, or other non-volatile storage for later execution.
  • a database 1432 in storage medium 1431 may be accessed through a data interface 1433.
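The asynchronous-to-synchronous transition outlined in the bullets above (first snapshot, second snapshot, shell destination vDisk, background catch-up) can be pictured with the minimal Python sketch below. The snapshot representation, the block-level diff, and all names are assumptions made purely for illustration; they are not taken from the patent or any specific product.

```python
from dataclasses import dataclass, field

@dataclass
class VDiskSnapshot:
    blocks: dict[int, bytes]          # block number -> contents at snapshot time

@dataclass
class ShellVDisk:
    num_blocks: int                   # same structure as the source vDisk
    blocks: dict[int, bytes] = field(default_factory=dict)  # sparsely populated

def diff(first: VDiskSnapshot, second: VDiskSnapshot) -> dict[int, bytes]:
    """Blocks that changed between the first and second snapshot."""
    return {n: data for n, data in second.blocks.items()
            if first.blocks.get(n) != data}

def transition_to_sync(first: VDiskSnapshot, second: VDiskSnapshot,
                       num_blocks: int) -> tuple[ShellVDisk, dict[int, bytes]]:
    """Create a structurally matching shell at the destination; return the
    background work (the snapshot diff) that will later fill it in."""
    shell = ShellVDisk(num_blocks=num_blocks)   # metadata only, no contents yet
    return shell, diff(first, second)

# After the transition, new writes are replicated synchronously to the shell...
def synchronous_write(shell: ShellVDisk, block: int, data: bytes) -> None:
    shell.blocks[block] = data

# ...while the diff is applied in the background, without overwriting newer
# synchronous writes that already landed on the shell.
def background_populate(shell: ShellVDisk, pending: dict[int, bytes]) -> None:
    for block, data in pending.items():
        shell.blocks.setdefault(block, data)
```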

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Described is an approach for stretching datastores/clusters in a virtualization environment. In this approach, data replication can be performed across multiple namespace protocols. In addition, the granularity of the data replication can be controlled such that different combinations of data subsets are replicated from one cluster to another.

Description

SYSTEMS AND METHODS FOR IMPLEMENTING STRETCH CLUSTERS IN A
VIRTUALIZATION ENVIRONMENT
FIELD
[0001] This disclosure concerns a mechanism for performing data replication in a networked virtualization environment.
BACKGROUND
[0002] Data replication involves replicating data located at a source location to a destination location. There may be any number of reasons that it is desirable to perform data replication. One possible reason to implement data replication is for the purpose of disaster recovery, where data replicated from the source to the destination may be later recovered at the destination when the source undergoes failure.
[0003] However, the process of performing data replication is made much more complicated in virtualization environments. In virtualization environments, both the source and destination systems may be implemented as clusters, where each cluster is a collection of datastores having shared resources and possibly a shared management interface. The goal of the data replication is to "stretch" the clusters so that all or part of the datastore from the source cluster is replicated to the destination cluster - so that the datastore appears to be stretched across the two clusters.
[0004] The problem is that there may be a number of different configuration differences and incompatibilities between the source and destination clusters and datastores. For example, the namespace protocol at the source datastore may be quite different from the namespace protocol at the destination datastore. Moreover, it may also be desirable to change the granularity and quantity of the data from the source cluster that is replicated to the destination cluster. With these problems, it often becomes very difficult to perform data replication in real-world virtualization environments, particularly with respect to the specific data replication policies and granularities desired by administrators of those environments.
SUMMARY
[0005] Embodiments of the present invention provide a method, system, and computer program product for stretching datastores/clusters in a virtualization environment. Some embodiments provide an approach to perform data replication across multiple namespace protocols. In addition, some embodiments can control the granularity of the data replication such that different combinations of data subsets are replicated from one cluster to another.
[0006] According to some embodiments, disclosed is a method, system, and computer program product that operates by receiving a request to replicate data of a first namespace type from a first node to a second node, wherein the data is to be replicated to the second node as a second namespace type, translating the request to replicate the data of the first namespace type into a normalized format that is implemented by a storage system, translating the normalized format into a request corresponding to the second namespace type, and replicating the data to the second node in the second namespace type.
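To make the flow above concrete, the following is a minimal, hypothetical Python sketch of the two-step translation: a source-namespace request is first mapped into a normalized internal representation, and that representation is then rendered into the destination namespace before the data is shipped. All class and function names (NormalizedRequest, translate_to_normalized, and so on) are illustrative assumptions rather than identifiers from the patent or any specific product.

```python
from dataclasses import dataclass

# Hypothetical normalized (namespace-neutral) form of a replication request.
@dataclass
class NormalizedRequest:
    operation: str          # e.g. "replicate"
    relative_path: str      # path relative to the datastore root
    payload: bytes          # data (or a reference to it) being replicated

def translate_to_normalized(namespace: str, request: dict) -> NormalizedRequest:
    """Translate a source-namespace request (e.g. SMB) into the internal form."""
    if namespace == "SMB":
        # SMB-style paths use backslashes; normalize to forward slashes.
        path = request["path"].lstrip("\\").replace("\\", "/")
    elif namespace == "NFS":
        path = request["path"].lstrip("/")
    else:
        raise ValueError(f"unsupported source namespace: {namespace}")
    return NormalizedRequest("replicate", path, request["data"])

def translate_to_destination(norm: NormalizedRequest, namespace: str) -> dict:
    """Render the normalized request in the destination namespace's conventions."""
    if namespace == "NFS":
        return {"path": "/" + norm.relative_path, "data": norm.payload}
    if namespace == "SMB":
        return {"path": "\\" + norm.relative_path.replace("/", "\\"), "data": norm.payload}
    raise ValueError(f"unsupported destination namespace: {namespace}")

# Example: an SMB request at the source becomes an NFS request at the destination.
smb_request = {"path": r"\vm1\disk0.vhdx", "data": b"..."}
nfs_request = translate_to_destination(translate_to_normalized("SMB", smb_request), "NFS")
print(nfs_request["path"])   # /vm1/disk0.vhdx
```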
[0007] In some embodiments, the first node corresponds to a first virtualization node, the second node corresponds to a second virtualization node, and the data corresponds to storage in a virtualization environment comprising virtual disks.
[0008] According to some embodiments, a controller virtual machine performs namespace translations. A mapping structure may be employed to perform namespace translations. Translations may occur to replicate the data into a different storage architecture.
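As one way to picture the mapping structure mentioned above, the sketch below keeps a simple table that associates a source-namespace identifier (such as a source inode number) with the corresponding destination-side identifier, so that a request referring to source objects can be rewritten for the destination datastore. The table layout, the example inode numbers, and the function name are assumptions made for illustration only.

```python
# Hypothetical mapping table: source (namespace, inode) -> destination (namespace, inode).
# In a real system this would be persistent metadata maintained by the Controller VMs.
namespace_map = {
    ("SMB", 1042): ("NFS", 77),   # a replicated directory
    ("SMB", 1043): ("NFS", 78),   # a replicated vDisk file
}

def map_identifier(src_namespace: str, src_inode: int) -> tuple[str, int]:
    """Look up the destination-side identifier for a source object."""
    try:
        return namespace_map[(src_namespace, src_inode)]
    except KeyError:
        raise LookupError(
            f"no destination mapping for {src_namespace} inode {src_inode}"
        ) from None

# A request that names SMB inode 1043 would be rewritten to target NFS inode 78.
print(map_identifier("SMB", 1043))   # ('NFS', 78)
```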
[0009] In some cases, a portion of a storage hierarchy at the first node is not replicated to the second node or is replicated to a different hierarchical location at the second node. In addition, multiple nodes may replicate the data to a single node.
[0010] One embodiment operates by traversing a hierarchy for the data at the first node to identify necessary nodes, constructing metadata at the second node corresponding to the necessary nodes, and replicating the data to the second node in correspondence to the metadata.
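The following sketch illustrates one way the traversal step could work: starting from the leaf objects selected for replication, walk parent links up to the root so that every intermediate directory needed for consistency is included, then build the destination metadata before copying any data. The node structure and helper names are illustrative assumptions, not the patent's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    parent: "Node | None" = None

def ancestors_and_self(leaf: Node) -> list[Node]:
    """Walk from a selected leaf up to the root, collecting required nodes."""
    chain = []
    node = leaf
    while node is not None:
        chain.append(node)
        node = node.parent
    return list(reversed(chain))  # root first

def plan_replication(selected: list[Node]) -> list[Node]:
    """Union of all nodes needed to replicate the selected leaves consistently."""
    needed, seen = [], set()
    for leaf in selected:
        for node in ancestors_and_self(leaf):
            if id(node) not in seen:
                seen.add(id(node))
                needed.append(node)
    return needed

# Mirror of the Fig. 5 example: 502 -> 504 -> {506, 508, 510}, replicating 506 and 510.
root = Node("502")
n504 = Node("504", parent=root)
n506, n508, n510 = (Node(n, parent=n504) for n in ("506", "508", "510"))
plan = plan_replication([n506, n510])
print([n.name for n in plan])  # ['502', '504', '506', '510'] -- 508 is not replicated

# Phase 1 would construct metadata (directories/inodes) at the destination for each
# planned node; phase 2 would copy the actual data according to that metadata.
```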
[0011] In some embodiments, the first node replicates data to multiple other nodes.
[0012] In some embodiments, the data is replicated from a first cluster having a first collection of multiple datastores to a second cluster having a second collection of multiple datastores, so that the data extends across multiple clusters.
[0013] In some embodiments, the first node corresponds to a first hypervisor type and the second node corresponds to a second hypervisor type, where the first hypervisor type is different from the second hypervisor type.
[0014] According to some embodiments, a replication policy can be established that dynamically adjusts between asynchronous and synchronous replication for replicating the data.
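A minimal sketch of such a dynamically adjusting policy is shown below: the replication mode is re-evaluated from observed load (CPU, memory, network) at the source and destination, switching to asynchronous replication under heavy load and back to synchronous replication when capacity is available. The thresholds and field names are assumptions chosen only to illustrate the idea.

```python
from dataclasses import dataclass

@dataclass
class LoadLevel:
    cpu: float      # fraction of CPU in use, 0.0 - 1.0
    memory: float   # fraction of memory in use
    network: float  # fraction of network bandwidth in use

def choose_policy(source: LoadLevel, destination: LoadLevel,
                  high_water: float = 0.80) -> str:
    """Pick 'synchronous' or 'asynchronous' replication from current load.

    Assumption: if any monitored resource at either end is above the
    high-water mark, fall back to asynchronous replication to avoid
    adding write latency; otherwise prefer synchronous replication.
    """
    busiest = max(source.cpu, source.memory, source.network,
                  destination.cpu, destination.memory, destination.network)
    return "asynchronous" if busiest > high_water else "synchronous"

# Re-evaluated periodically (e.g. on a timer or when a threshold is crossed).
print(choose_policy(LoadLevel(0.35, 0.50, 0.20), LoadLevel(0.40, 0.45, 0.30)))  # synchronous
print(choose_policy(LoadLevel(0.92, 0.50, 0.20), LoadLevel(0.40, 0.45, 0.30)))  # asynchronous
```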
[0015] Further details of aspects, objects, and advantages of the invention are described below in the detailed description, drawings, and claims. Both the foregoing general description and the following detailed description are exemplary and explanatory, and are not intended to be limiting as to the scope of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The drawings illustrate the design and utility of embodiments of the present invention, in which similar elements are referred to by common reference numerals. In order to better appreciate the advantages and objects of embodiments of the invention, reference should be made to the accompanying drawings. However, the drawings depict only certain embodiments of the invention, and should not be taken as limiting the scope of the invention.
[0017] FIG. 1 illustrates a networked virtualization environment for storage management according to some embodiments of the invention.
[0018] Fig. 2 provides an illustration of an approach to implement stretch clusters in a virtualization environment according to some embodiments of the invention.
[0019] Fig. 3 shows an approach that can be taken to implement stretch clusters in this situation according to some embodiments of the invention.
[0020] Fig. 4 shows a flowchart that illustrates the process of Fig. 3.
[0021] Fig. 5 shows an example structure of source data to be replicated.
[0022] Fig. 6 shows an approach that can be taken to perform replication according to some embodiments of the invention.
[0023] Fig. 7 illustrates an application of this process to perform the replication shown in
Fig. 5.
[0024] Fig. 8 illustrates a 1-to-many relationship for data replication.
[0025] Fig. 9 illustrates a many-to-1 relationship for data replication.
[0026] Fig. 10 illustrates an example of active-to-active replication.
[0027] Fig. 11 illustrates data replication in a chained relationship.
[0028] FIG. 12 is a block diagram of an illustrative computing system suitable for implementing an embodiment of the present invention.
DETAILED DESCRIPTION OF THE EMBODIMENTS OF THE INVENTION
[0029] Embodiments of the present invention provide a method, system, and computer program product for stretching datastores/clusters in a virtualization environment. Some embodiments provide an approach to perform data replication across multiple namespace protocols. In addition, some embodiments can control the granularity of the data replication such that different combinations of data subsets are replicated from one cluster to another.
[0030] It is noted that various embodiments of the invention are described hereinafter with reference to the figures. It should also be noted that the figures are not necessarily drawn to scale, that the figures are only intended to facilitate the description of the embodiments, and that the figures are not intended as an exhaustive description of the invention or as a limitation on the scope of the invention. In addition, an illustrated embodiment need not have all the aspects or advantages shown. An aspect or advantage described in conjunction with a particular embodiment is not necessarily limited to that embodiment and can be practiced in any other embodiments even if not so illustrated. Also, reference throughout this specification to "some embodiments" or "other embodiments" means that a particular feature, structure, material, or characteristic described in connection with the embodiments is included in at least one embodiment. Thus, the appearances of the phrase "in some embodiments" or "in other embodiments", in various places throughout this specification are not necessarily referring to the same embodiment or embodiments.
[0031] The embodiments of the invention pertain to a virtualization environment, where a "virtual machine" or a "VM" operates in the virtualization environment. A VM refers to a specific software-based implementation of a machine in the virtualization environment, in which the hardware resources of a real computer (e.g., CPU, memory, etc.) are virtualized or transformed into the underlying support for the fully functional virtual machine that can run its own operating system and applications on the underlying physical resources just like a real computer. Virtualization works by inserting a thin layer of software directly on the computer hardware or on a host operating system. This layer of software contains a virtual machine monitor or "hypervisor" that allocates hardware resources dynamically and transparently. Multiple operating systems run concurrently on a single physical computer and share hardware resources with each other. By encapsulating an entire machine, including CPU, memory, operating system, and network devices, a virtual machine is completely compatible with most standard operating systems, applications, and device drivers. Most modern implementations allow several operating systems and applications to safely run at the same time on a single computer, with each having access to the resources it needs when it needs them. Virtualization allows one to run multiple virtual machines on a single physical machine, with each virtual machine sharing the resources of that one physical computer across multiple environments. Different virtual machines can run different operating systems and multiple applications on the same physical computer.
[0032] One reason for the broad adoption of virtualization in modern business and computing environments is because of the resource utilization advantages provided by virtual machines. Without virtualization, if a physical machine is limited to a single dedicated operating system, then during periods of inactivity by the dedicated operating system the physical machine is not utilized to perform useful work. This is wasteful and inefficient if there are users on other physical machines which are currently waiting for computing resources. To address this problem, virtualization allows multiple VMs to share the underlying physical resources so that during periods of inactivity by one VM, other VMs can take advantage of the resource availability to process workloads. This can produce great efficiencies for the utilization of physical devices, and can result in reduced redundancies and better resource cost management.
[0033] Data Centers are often architected as diskless computers ("application servers") that communicate with a set of networked storage appliances ("storage servers") via a network, such as a Fiber Channel or Ethernet network. A storage server exposes volumes that are mounted by the application servers for their storage needs. If the storage server is a block-based server, it exposes a set of volumes that are also called Logical Unit Numbers (LUNs). If, on the other hand, a storage server is file-based, it exposes a set of volumes that are also called file systems.
[0034] Storage devices comprise one type of physical resources that can be managed and utilized in a virtualization environment. For example, VMWare is a company that provides products to implement virtualization, in which networked storage devices are managed by the VMWare virtualization software to provide the underlying storage infrastructure for the VMs in the computing environment. The VMWare approach implements a file system (VMFS) that exposes storage hardware to the VMs. The VMWare approach uses VMDK "files" to represent virtual disks that can be accessed by the VMs in the system. Effectively, a single volume can be accessed and shared among multiple VMs. Microsoft is another company that offers a virtualization product, known as the Hyper-V product. This type of virtualization product is often used to implement SMB type file shares to store its underlying data.
[0035] FIG. 1 illustrates an example networked virtualization environment for implementing storage management according to some embodiments of the invention. The networked virtualization environment of FIG. 1 can be implemented for a distributed platform that contains multiple nodes (e.g., servers) 100a and 100b that manage multiple tiers of storage. The multiple tiers of storage include storage that is accessible through a network 140, such as cloud storage 126 or networked storage 128 (e.g., a SAN or "storage area network"). Unlike the prior art, the present embodiment also permits local storage 122/124 that is within or directly attached to the node and/or appliance to be managed as part of the storage pool 160. Examples of such storage include Solid State Drives (henceforth "SSDs") 125 or Hard Disk Drives (henceforth "HDDs" or "spindle drives") 127. These collected storage devices, both local and networked, form a storage pool 160. Virtual disks (or "vDisks") can be structured from the storage devices in the storage pool 160. As used herein, the term vDisk refers to the storage abstraction that is exposed by a Service VM to be used by a user VM. In some embodiments, the vDisk is exposed via iSCSI ("internet small computer system interface") or NFS ("network file system") and is mounted as a virtual disk on the user VM.
[0036] Each node 100a or 100b runs virtualization software, such as VMWare ESX(i), Microsoft Hyper-V, or RedHat KVM. The virtualization software includes a hypervisor 130/132 to manage the interactions between the underlying hardware and the one or more user VMs 102a, 102b, 102c and 102d that run client software.
[0037] A special VM 110a/110b is used to manage storage and I/O activities according to some embodiments of the invention, which is referred to herein as a "Service VM" or "Controller VM". This is the "Storage Controller" in the currently described networked virtualization environment for storage management. Multiple such storage controllers coordinate within a cluster to form a single system. The Service VMs 110a/110b are not formed as part of specific implementations of hypervisors 130/132. Instead, the Service VMs run as virtual machines above hypervisors 130/132 on the various servers 100a and 100b, and work together to form a distributed system 110 that manages all the storage resources, including the locally attached storage 122/124, the networked storage 128, and the cloud storage 126. Since the Service VMs run above the hypervisors 130/132, this means that the current approach can be used and implemented within any virtual machine architecture, since the Service VMs of embodiments of the invention can be used in conjunction with any hypervisor from any virtualization vendor.
[0038] Each Service VM 110a-b exports one or more block devices or NFS server targets that appear as disks to the client VMs 102a-d. These disks are virtual, since they are implemented by the software running inside the Service VMs 110a-b. Thus, to the user VMs 102a-d, the Service VMs 110a-b appear to be exporting a clustered storage appliance that contains some disks. All user data (including the operating system) in the client VMs 102a-d resides on these virtual disks.
[0039] Significant performance advantages can be gained by allowing the virtualization environment to access and utilize local (e.g., server-internal) storage 122. This is because I/O performance is typically much faster when performing access to local storage 122 as compared to performing access to networked storage 128 across a network 140. This faster performance for locally attached storage 122 can be increased even further by using certain types of optimized local storage devices, such as SSDs 125.
[0040] Once the virtualization environment is capable of managing and accessing locally attached storage, as is the case with the present embodiment, various optimizations can then be implemented to improve system performance even further. For example, the data to be stored in the various storage devices can be analyzed and categorized to determine which specific device should optimally be used to store the items of data. Data that needs to be accessed much faster or more frequently can be identified for storage in the locally attached storage 122. On the other hand, data that does not require fast access or which is accessed infrequently can be stored in the networked storage devices 128 or in cloud storage 126.
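By way of illustration only, the following Python sketch shows one way such a tiering decision could be expressed. The thresholds, the access-frequency metric, and the tier labels are assumptions made for this sketch and are not specified by the embodiments described above.

```python
# Illustrative sketch only: a simplified tier-selection rule of the kind described
# above. The thresholds and tier names are assumptions, not values from this document.

def choose_tier(accesses_per_hour: int, latency_sensitive: bool) -> str:
    """Pick a storage tier for an item of data based on how it is accessed."""
    if latency_sensitive or accesses_per_hour > 100:
        return "local_ssd"        # hot data: locally attached SSDs (e.g., SSDs 125)
    if accesses_per_hour > 10:
        return "local_hdd"        # warm data: locally attached spindle drives
    return "networked_or_cloud"   # cold data: networked storage 128 or cloud storage 126

print(choose_tier(500, False))    # -> local_ssd
print(choose_tier(2, False))      # -> networked_or_cloud
```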
[0041] Another advantage provided by this approach is that administration activities can be handled at a much more efficient, granular level. Prior art approaches that use a legacy storage appliance in conjunction with VMFS rely heavily on what the hypervisor can do at its own layer with individual "virtual hard disk" files, effectively making all storage array capabilities meaningless. This is because the storage array manages much coarser-grained volumes while the hypervisor needs to manage finer-grained virtual disks. In contrast, the present embodiment can be used to implement administrative tasks at much smaller levels of granularity, one in which the smallest unit of administration at the hypervisor matches exactly with that of the storage tier itself.
[0042] Yet another advantage of the present embodiment of the invention is that storage-related optimizations for access and storage of data can be implemented directly within the primary storage path. For example, in some embodiments of the invention, the Service VM 110a can directly perform data deduplication tasks when storing data within the storage devices. This is far more advantageous than prior art approaches that require add-on vendors/products outside of the primary storage path to provide deduplication functionality for a storage system. Other examples of optimizations that can be provided by the Service VMs include quality of service (QoS) functions, encryption, and compression. The networked virtualization environment massively parallelizes storage by placing a storage controller (in the form of a Service VM) at each hypervisor, and thus makes it possible to dedicate enough CPU and memory resources to achieve the aforementioned optimizations.
[0043] Additional details regarding networked virtualization environments for storage management are described in U.S. Patent No. 8,601,473, entitled "Architecture for Managing I/O and Storage for a Virtualization Environment".
[0044] Data replication involves replicating data located at a source to a destination. This may be performed, for example, to implement a disaster recovery process, where data replicated from the source to the destination may be later recovered at the destination when the source undergoes failure. The networked virtualization environment illustrated in FIG. 1 may be representative of the source networked virtualization environment or destination networked virtualization environment for purposes of data replication. A source
service/controller VM may be utilized to perform data replication for its corresponding user VM. The source service VM does so by identifying the file(s) to be replicated for a particular user VM and coordinating with one or more destination service VMs for performing replication of the file(s) at the destination. At the destination, one or more destination service VMs are assigned to the source service VM for receiving file(s) to be replicated and storing those files. Additional details for performing such distributed data replication may be found in co-pending Application Ser. No. 14/019,139, filed on September 5, 2013, entitled "System and Methods for Performing Distributed Data Replication in a Networked Virtualization Environment".
[0045] Fig. 2 provides an illustration of an approach to implement stretch clusters in a virtualization environment according to some embodiments of the invention. Here, a source datastore 202a in a first cluster 1 is to be replicated as a replicated datastore 202b in a second cluster 2. This replication may be necessary for any of multiple possible purposes. For example, the data replication may be necessary to implement disaster recovery, where the source datastore 202a corresponds to a primary data storage location and the destination datastore 202b corresponds to a failover data storage location.
[0046] The issue here is that the source datastore 202a and the destination datastore 202b are implemented using different namespace types. For example, assume that the source datastore 202a is implemented using SMB (e.g., because its corresponding virtualization system implements a Hyper-V hypervisor 230). Further assume that the destination datastore 202b is implemented using an entirely different namespace protocol, such as NFS or iSCSI (e.g., because its corresponding virtualization system implements a hypervisor 232 that differs from the hypervisor 230 of the source system).
[0047] Fig. 3 shows an approach that can be taken to implement stretch clusters in this situation according to some embodiments of the invention. Here, a request 311 in the appropriate protocol for the source datastore is received from the source virtualization system 301. This request 311 is specific to the namespace protocol of the source datastore. For example, for an SMB datastore, the request 311 itself would correspond to the appropriate SMB protocol and syntax.
[0048] A protocol translator 304 is employed to translate the original request 311 into an intermediate and/or normalized format. In some embodiments, the intermediate format corresponds to an internal data representation understandable by the storage controller of the system. Here, since the storage controller functionality is implemented using controller VMs, the internal representation would correspond to any internal data representation that is used by the controller VMs.
[0049] A protocol translator 306 is employed at the destination to translate the intermediate/normalized request 305 into the format appropriate for the namespace at the destination datastore.
[0050] A mapping table 307 can be employed by protocol translator 304 to translate the original request 311 from the source namespace format into the normalized request 305. The same or similar mapping table can also be used by protocol translator 306 to translate the normalized request 305 into the final request 313. The mapping table 307 comprises any information necessary to map from the different namespaces in the system to each other and/or to any internal representations used by the system storage controllers (e.g., the controller VMs).
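By way of illustration only, the following Python sketch shows one possible shape for the translation pipeline formed by protocol translators 304 and 306 and mapping table 307. The request field names, the toy SMB and NFS path forms, and the dictionary-based mapping table are assumptions of this sketch, not the actual protocol syntax or the internal representation used by the controller VMs.

```python
# Minimal sketch of the Fig. 3 pipeline, assuming a toy normalized request format.
# The field names ("op", "path", "payload") and path forms are illustrative only.

NAMESPACE_MAP = {
    # mapping table 307: source namespace path -> internal (normalized) identifier
    ("SMB", r"\\share\vm1\disk.vhdx"): "container1/vm1/disk",
    # internal identifier -> destination namespace path
    ("NFS", "container1/vm1/disk"): "/container1/vm1/disk",
}

def translate_to_normalized(smb_request: dict) -> dict:
    """Protocol translator 304: SMB-style request 311 -> normalized request 305."""
    internal_object = NAMESPACE_MAP[("SMB", smb_request["path"])]
    return {"op": smb_request["op"], "object": internal_object,
            "payload": smb_request.get("payload")}

def translate_to_destination(normalized: dict) -> dict:
    """Protocol translator 306: normalized request 305 -> NFS-style request 313."""
    nfs_path = NAMESPACE_MAP[("NFS", normalized["object"])]
    return {"op": normalized["op"], "path": nfs_path,
            "payload": normalized.get("payload")}

request_311 = {"op": "WRITE", "path": r"\\share\vm1\disk.vhdx", "payload": b"..."}
request_305 = translate_to_normalized(request_311)
request_313 = translate_to_destination(request_305)
print(request_313["path"])   # -> /container1/vm1/disk
```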
[0051] Fig. 4 shows a flowchart that illustrates this process. At 401, a first request is received for the source datastore, where the first request is in the appropriate format for the namespace type for the first datastore.
[0052] The problem is that the destination datastore has a completely different namespace type. To address this, at 403, the first request is translated into an intermediate/normalized format. As noted above, a mapping table can be employed to translate the first request into the intermediate/normalized format.
[0053] Next, at 405, the intermediate/normalized request is sent to the location of the destination datastore. This location is likely in a second cluster, which is different from the cluster that holds the source datastore. In the current scenario, since the second datastore is in a completely different namespace type from the first datastore, the underlying virtualization technology may also be different, e.g., where the hypervisor at the source system is different from the hypervisor at the destination system in terms of its type, manufacturer, or underlying technology.
[0054] At 407, the intermediate/normalized request is then translated into the appropriate namespace protocol for the destination datastore. Thereafter, at 409, the request is executed at the destination datastore.
[0055] The replication of the data may thus require the data to be re-formatted and/or reconfigured as necessary so that it can fit within the structure of the destination datastore. For example, the data to be replicated at the original source datastore may be in a first namespace type at a first inode number. An "inode" is an index node that corresponds to a data structure to represent a filesystem object, such as a file or a directory. The destination datastore may correspond to a different namespace type from the source datastore, and the inode numbers at the source datastore may not have any relevance to the inode numbers used at the destination datastore. Therefore, to satisfy the replication request, the file/directory structure of the source data may need to be modified into the appropriate file/directory structure that exists at the destination datastore. In addition, the original inode number for the data would be changed to correspond to the appropriate inode number that is usable at the destination data store.
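By way of illustration only, the following sketch shows the kind of inode renumbering described above, assuming the destination simply allocates inode numbers from its own sequence and remembers the source-to-destination mapping. The class, the starting number, and the dictionary-based map are illustrative assumptions.

```python
# Illustrative sketch: remapping source inode numbers to destination inode numbers.
# The data structures here are assumptions made for the sketch only.

import itertools

class DestinationNamespace:
    def __init__(self):
        self._next_inode = itertools.count(start=1000)  # destination's own numbering
        self.inode_map = {}   # source inode number -> destination inode number

    def map_inode(self, source_inode: int) -> int:
        """Assign (or look up) the destination inode for a replicated object."""
        if source_inode not in self.inode_map:
            self.inode_map[source_inode] = next(self._next_inode)
        return self.inode_map[source_inode]

dest = DestinationNamespace()
print(dest.map_inode(42))   # source inode 42 -> 1000 at the destination
print(dest.map_inode(42))   # repeated requests reuse the same destination inode
```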
[0056] The incompatibilities that may need to be addressed are not limited only to namespace type differences between the source datastore and the destination datastore. The granularity and/or quantity of the data to be replicated may also be different between the source datastore and the destination datastore. The below description describes examples of replication from a source node to a destination node. It is noted that replication occurs from a first cluster to a second cluster, and therefore the term "node" can correspond to one or more nodes at a given cluster.
[0057] Fig. 5 shows an example structure of some source data that is to be replicated. Here, the source data is in a hierarchical form, having a root node 502, a node 504 that is a child of node 502, and nodes 506, 508, and 510 that branch off from node 504. Consider if it is desired to replicate only a portion of the source data. For example, assume that nodes 506 and 510 correspond to VMs for which a replication policy is established that requires them to be replicated to a destination system, where the policy does not specify the same level of replication for the other data within the source datastore. This may occur, for example, if there are quality of service (QoS) guarantees for users of the system that may affect different items of data differently and/or create performance requirements that require data items to be handled differently to allow the system to meet performance expectations. In this situation, it may very well be the case that the data for certain VMs will have a replication policy that dictates different handling compared to the replication requirements for other VMs.
[0058] Fig. 6 shows an approach that can be taken to perform this type of replication according to some embodiments of the invention. At 601, a request is received to implement replication from a source datastore to a destination datastore.
[0059] It is possible that the request may pertain to synchronization of an arbitrary portion of the source datastore (and not the entirety of the source datastore). Therefore, the replication approach needs to understand which portions of the source datastore need to be processed in order to fulfill the replication request. At 603, a traversal is made from the specific leaf nodes identified for the replication to the root of the data in the source datastore. This is performed to determine all of the intermediate nodes which were not specifically identified for replication, but which may need to be processed to ensure that proper dependencies are handled for the replication.
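By way of illustration only, the following sketch shows the leaf-to-root traversal of step 603 applied to the hierarchy of Fig. 5, assuming a simple parent-pointer representation of the hierarchy; that representation is an assumption of the sketch, not a required data structure.

```python
# Illustrative sketch of step 603: walk from each explicitly selected node up to
# the root so that ancestors needed for consistency are also replicated.

def nodes_to_replicate(selected, parent_of):
    """Return the selected nodes plus every ancestor up to the root."""
    required = set()
    for node in selected:
        while node is not None and node not in required:
            required.add(node)
            node = parent_of.get(node)   # None once the root is reached
    return required

# Hierarchy of Fig. 5: 502 is the root, 504 its child, 506/508/510 under 504.
parent_of = {504: 502, 506: 504, 508: 504, 510: 504}
print(sorted(nodes_to_replicate({506, 510}, parent_of)))   # -> [502, 504, 506, 510]
```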
[0060] Next, replication will proceed to replicate the desired data at the destination datastore (e.g., using the process described above to send the replication request from the source to the destination in the appropriate formats). To implement the replication at the destination, at 605, the destination will first construct the metadata for the replicated data. This is implemented by modifying the metadata of the destination data to account for the new directories and/or files to be added to the destination datastore. For example, the directory file object that tracks the directories and files in the destination datastore would be modified as necessary to account for the inclusion of the replicated data. In addition, any other metadata managed by the storage system to account for data at the destination datastore would also be modified at this point.
[0061] Thereafter, at 607, the actual data would be replicated to the destination datastore. This may involve an immediate copy of all of the to-be-replicated data from the source datastore to the destination datastore. The data would be placed into the appropriate locations configured for that data (based at least upon the modifications made to the metadata).
[0062] As an optional step, at 609, a multi-phase approach can be taken to replicate the data, where only a portion of the data to be replicated is immediately copied, and where the bulk of the data is copied in the background at a later point in time. This approach can be taken to reduce the immediate latency of the replication operation. For example, as described in more detail below, only newly modified data can be immediately replicated, whereas source data that has not been modified is replicated later on.
[0063] Fig. 7 illustrates an application of this process to perform the replication shown in Fig. 5. Recall that only nodes 506 and 510 have been specifically identified in the source datastore to be replicated to the destination datastore. However, by traversing from these nodes through the data hierarchy, it can be seen that these nodes have dependencies that may exist through intermediate node 504 back upwards in the hierarchy to the root node 502. Therefore, when replication occurs, in addition to nodes 506 and 510, their parent nodes 502 and 504 will also be identified for replication. Any files within these directories/nodes that are required for consistency purposes will also be identified for replication. The first intermediate synchronization will involve construction of metadata for these objects at the destination datastore. The next stage of the replication will then involve replication of the data for these objects to be replicated to the destination datastore.
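By way of illustration only, the following sketch shows one way the metadata-first, multi-phase flow of steps 605 through 609 could be arranged. The dictionary-based stand-ins for datastore metadata and data, the "recently modified" set, and the queue used for the deferred background copy are assumptions of this sketch rather than the actual mechanism.

```python
# Illustrative sketch of the two-phase flow: metadata is constructed first, recently
# modified items are copied immediately, and the unmodified bulk is deferred.

from collections import deque

def replicate(source_meta, source_data, recently_modified, destination):
    # Phase 1 (605): construct metadata for the replicated objects at the destination.
    destination["metadata"].update(source_meta)

    # Phase 2 (607/609): copy recently modified items immediately; defer the
    # unmodified bulk of the data to a background queue.
    background = deque()
    for path, blocks in source_data.items():
        if path in recently_modified:
            destination["data"][path] = blocks
        else:
            background.append(path)
    return background    # drained later by a background worker

dest = {"metadata": {}, "data": {}}
pending = replicate({"/vm1": {"inode": 1000}}, {"/vm1": b"blocks"}, set(), dest)
print(dest["metadata"], list(pending))   # metadata exists before the bulk data copy
```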
[0064] Two modes of data replication can be used for the data replication: asynchronous data replication and synchronous data replication. Asynchronous data replication occurs where a write operation for a piece of data at a source is committed as soon as the source acknowledges the completion of the write operation. Replication of the data at the destination may occur at a later time after the write operation at the source has been committed. Synchronous data replication occurs where a write operation for a piece of data at a source is committed only after the destination has replicated the data and acknowledged completion of the write operation. Thus, in a synchronous data replication mode, a committed write operation for data at the source is guaranteed to have a copy at the destination.
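By way of illustration only, the following sketch contrasts the two commit rules just described. The RemoteReplicator stand-in and the in-memory pending queue are assumptions of the sketch, not components of the described system.

```python
# Illustrative sketch: synchronous commit waits for the destination's acknowledgment,
# asynchronous commit completes locally and defers replication.

import queue

class RemoteReplicator:
    def replicate(self, key, value) -> bool:
        # Stand-in for sending the write to the destination and awaiting its ack.
        return True

def write_synchronous(store, remote, key, value):
    """Commit only after the destination acknowledges that it holds a copy."""
    if not remote.replicate(key, value):
        raise IOError("destination did not acknowledge; write not committed")
    store[key] = value           # committed: a remote copy is guaranteed

def write_asynchronous(store, pending: queue.Queue, key, value):
    """Commit as soon as the source acknowledges; replicate at a later time."""
    store[key] = value           # committed immediately at the source
    pending.put((key, value))    # replicated to the destination later

store, pending = {}, queue.Queue()
write_synchronous(store, RemoteReplicator(), "a", 1)
write_asynchronous(store, pending, "b", 2)
print(store, pending.qsize())    # -> {'a': 1, 'b': 2} 1
```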
[0065] Asynchronous data replication is advantageous in certain situations because it may be performed with more efficiency due to the fact that a write operation for data at the source can be committed without having to wait for the destination to replicate the data and acknowledge completion of the write operation. However, asynchronous data replication may result in potential data loss where the source fails prior to the replication of data at the destination.
[0066] Synchronous data replication guarantees that data loss will not occur when the source fails because the write operation for data is not committed until the destination has verified that it too has a copy of the data. However, having to wait for data to be written at both the source and the destination before committing a write operation may lead to latency as well as strain on system resources (e.g., CPU usage, memory usage, network traffic, etc.).
[0067] In some embodiments, data replication involves setting a fixed data replication policy (either synchronous or asynchronous). A fixed synchronous data replication policy may be defined by various timing parameters such as the time taken for performing the data replication process or the wait time between successive data replication processes. In some situations, the fixed synchronous data replication policy may dictate the exact timing parameters for data replication such that performance of synchronous data replication under a particular policy must be made using the exact timing parameters of that particular policy. In other situations, the fixed synchronous data replication policy may provide a guideline for data replication such that performance of synchronous data replication under a particular policy attempts to meet the timing parameters of that particular policy without necessarily exactly meeting those timing parameters.
[0068] By setting a fixed data replication policy, the manner in which data replication occurs remains static regardless of the changing nature of the system (e.g., source networked virtualization environment or destination networked virtualization environment). System parameters such as the amount of data being replicated or the amount of resources being consumed by the source or destination may vary over time. Thus, fixing the data replication policy for a system fails to account for the dynamic nature of system parameters and may lead to inefficiencies where the system parameters change substantially or frequently over the course of system operation.
[0069] Setting a fixed data replication policy may be efficient where the source and destination operate at a steady resource consumption rate and the amount of data to be replicated remains steady. However, where the rate of resource consumption or amount of data to be replicated exhibits volatility, the fixed data replication policy may lead to the underutilization of resources when additional resources are available or where the amount of data to be replicated significantly decreases. Similarly, inefficiency may occur where the fixed data replication policy overutilizes resource availability when fewer resources are available or where the amount of data to be replicated significantly increases.
[0070] With the introduction of networked virtualization environments for storage management, various configurations may exist for performing data replication. At any given time, any number of sources may be servicing any number of destinations. For example, a one-to-one configuration may exist between a source and a destination.
[0071] As shown in Fig. 8, it is possible for a 1-to-many relationship to exist, where a source set of data 802 is to be replicated to multiple destination datastores 802' and 802". This may be used, for example, to take a single set (or subset) of data, and break that data into even smaller subsets at the multiple destinations. Of course, the original set/subset of data can be merely replicated in its entirety to the multiple destinations.
[0072] As shown in Fig. 9, it is also possible for a many-to-1 relationship to exist, where multiple source sets of data 902/904 are replicated to a single destination (902/904)'. This situation may be used, for example, to replicate multiple small subsets of data to form a single larger set of data at the destination. It is noted that in the many-to-1 scenario, there could be multiple different mount points at the destination (and not just one mount point as shown in Fig. 9).
[0073] Fig. 10 shows another possible architecture, where active-to-active replication occurs. In this situation, each datastore will replicate some or all of its data to the other datastore. Here, the data 1002 is replicated from cluster 1 to cluster 2 as data 1002'. However, data 1004 is replicated from cluster 2 to cluster 1 as data 1004'. It is important to note, however, that each destination may additionally be a source to any number of other destinations in a chained arrangement, as shown in Fig. 11 where the data object 1102 at cluster 1 is replicated to cluster 2, and the replicated data object 1102' is in turn replicated to cluster 3 as data 1102".
[0074] In these configurations, parameters associated with data replication may vary over time, with the number of source(s) and destination(s) changing for the different data objects/VMs. Even when the numbers for the data replication remain the same at the source, a corresponding destination may experience various parameter changes that decrease the efficiency of using a fixed data replication policy.
[0075] Because of the various different configurations that may be used for performing data replication and the changing nature of parameters associated with the networked virtualization environments, various inefficiencies may arise when a fixed data replication policy is used. The dynamic nature of networked virtualization parameters such as the amount of data being replicated or the amount of resources available to and being consumed by the source service VM or destination service VM may vary over time regardless of whether an asynchronous data replication policy or synchronous data replication policy is used. As such, using a fixed data replication policy will necessarily lead to the inefficiencies described above.
[0076] In some embodiments, dynamic adjustments can be made between synchronous and asynchronous data replication policies. As used herein, the term dynamically adjusting between synchronous and asynchronous data replication policies may refer to the act of switching from an asynchronous data replication policy to a synchronous data replication policy or vice versa, and may additionally refer to the act of transitioning from an asynchronous data replication policy with a first set of timing parameters to an asynchronous data replication policy with a different set of timing parameters.
[0077] By dynamically adjusting between different data replication policies, the fluctuations in system parameters during operation may be accounted for and the utilization of system resources may be made more optimal and efficient. For example, where the resources available to a service VM at the source or destination are heavily utilized (due to the number of user VMs being serviced or amount of data being replicated), the data replication policy may shift from a synchronous data replication policy to an asynchronous data replication policy. Alternatively, the data replication policy may shift from a
synchronous data replication policy with a short replication time to a data replication policy with a longer replication time to account for the heavy resource utilization.
[0078] As another example, where the resources available to a service VM at the source or destination are underutilized (due to the number of user VMs being serviced or amount of data being replicated), the data replication policy may shift from an asynchronous data replication policy to a synchronous data replication policy. Alternatively, the data replication policy may shift from a synchronous data replication policy with a long replication time to a data replication policy with a shorter replication time to account for the low resource utilization. The process for dynamically adjusting between data replication policies may initiate under various different circumstances. In some circumstances, the process may begin at periodic intervals, such as every two hours. Alternatively, the process may begin when a resource utilization level at either the source or the destination rises above or falls below a particular threshold. As another example, the process for dynamically adjusting between data replication policies may initiate whenever a service VM loses or gains additional user VMs.
[0079] An administrator or user of the user VM may establish a preferred data replication policy. In some embodiments, the data replication policy may be a synchronous data replication policy, where every write operation of data for the user VM is not committed until the data is replicated and the write operation is acknowledged at the destination. In some other embodiments, the data replication policy may be an asynchronous data replication policy, where a write operation of data for the user VM is committed once the source has acknowledged the write operation. The asynchronous data replication policy may indicate a time period for performing data replication. For example, the asynchronous data replication policy may indicate that data replication is to be performed in five minutes. Alternatively, the asynchronous data replication policy may indicate a time period between successive data replications. For example, the asynchronous data replication policy may indicate that a five minute period of time passes between successive replication steps. Additionally, the asynchronous data replication policy may indicate a total time period for data replication. For example, the asynchronous data replication policy may indicate that data replication is to be performed for five minutes with a five minute pause between successive replication steps.
[0080] A load level may then be determined by the source service VM. In some embodiments, the load level may indicate the current amount of resources being utilized by the source service VM. The load level being utilized by the source service VM may be important in determining how the data replication policy should be adjusted because it indicates the amount of additional load the source service VM can take on or the amount of load the source service VM needs to be reduced by in order to perform at an optimal level.
[0081] In other embodiments, the load level may indicate the current amount of resources being utilized by the source service VM as well as the amount of resources being utilized by the destination service VM. The load level being utilized by the destination service VM may be important in determining how the data replication policy should be adjusted because it indicates the amount of additional load the destination service VM can take on or the amount of load the destination service VM needs to be reduced by in order to perform at an optimal level.
[0082] In some embodiments, service VMs may monitor their own resource usage. In this situation, the source service VM may determine its load level by consulting its monitored resource usage, and the source service VM may determine the load level at the destination service VM by communicating with the destination service VM to determine the amount of resource usage at the destination. In some other embodiments, a central controller may monitor the resource usage of both the source service VM and the destination service VM, and the source service VM may communicate with the central controller to determine the load level for both the source and the destination.
[0083] The load level at either the source service VM, destination service VM or their combination may include various resource usage parameters such as, for example, CPU usage, memory usage, and network bandwidth utilization.
[0084] In some embodiments, the shift in replication time from the current data replication policy to the desired data replication policy may involve lengthening or shortening the time for performing data replication. In other embodiments, the shift in replication time from a current data replication policy to a desired data replication policy may involve lengthening or shortening the time between successive data replications.
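By way of illustration only, the following sketch shows one way a load level could be folded into a policy decision of the kind described in the preceding paragraphs. The 0-to-1 load scale, the thresholds, and the expression of an asynchronous policy as a replication interval in seconds are all assumptions of the sketch, not values from the described embodiments.

```python
# Illustrative sketch: derive a single load level from resource usage parameters and
# choose a replication policy from it. Thresholds and units are assumptions only.

def combined_load(cpu: float, memory: float, network: float) -> float:
    """Collapse the resource usage parameters into a single 0-1 load level."""
    return max(cpu, memory, network)   # the most constrained resource dominates

def choose_policy(source_load: float, destination_load: float):
    load = max(source_load, destination_load)
    if load < 0.5:
        return ("synchronous", None)    # resources to spare: replicate in line
    if load < 0.8:
        return ("asynchronous", 300)    # moderate load: replicate every 5 minutes
    return ("asynchronous", 1800)       # heavy load: lengthen the replication interval

print(choose_policy(combined_load(0.2, 0.3, 0.1), 0.4))   # -> ('synchronous', None)
print(choose_policy(0.9, 0.4))                            # -> ('asynchronous', 1800)
```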
[0085] In some embodiments, the approach provides for transitioning from an asynchronous data replication policy to a synchronous data replication policy without having to first place the destination (e.g., destination vDisk) into the same state as the source (e.g., source vDisk). Initially, a snapshot of the source vDisk is taken at a first point in time where the state of the source vDisk snapshot is the same as the destination vDisk prior to the data replication policy transitioning from an asynchronous data replication policy to a synchronous data replication policy. In some embodiments, the service VM facilitating data replication at the source may determine that a snapshot should be taken based on a user indicating that the data replication policy should be transitioned from an asynchronous data replication policy to a synchronous data replication policy. The snapshot is then taken at the point in time where the state of the source vDisk is equivalent to the state of the destination vDisk prior to the data replication policy transitioning from an asynchronous data replication policy to a synchronous data replication policy. In some other embodiments, the service VM facilitating data replication at the source may determine that a snapshot should be taken based on the service VM at the source losing connection with the service VM at the destination. The snapshot is taken at the last point in time where the source vDisk and the destination vDisk have the same state (e.g., the point in time immediately preceding the loss of connection).
[0086] The snapshot taken of the source vDisk provides the state of the destination vDisk at the last point in time prior to the data replication policy transitioning from an asynchronous mode to a synchronous mode. After the first point in time, the source vDisk may continue to perform I/O operations that may change the contents of the source vDisk. Because an asynchronous data replication policy is used to replicate data between the source vDisk and destination vDisk after the first point in time, but prior to the data replication policy transitioning into a synchronous data replication policy, I/O operations performed on the source vDisk during that time are not immediately replicated at the destination vDisk. Thus, at a second point in time when the data replication policy transitions from the asynchronous data replication policy to the synchronous data replication policy, the changes made to the source vDisk after the first snapshot are not yet replicated at the destination vDisk.
[0087] At this second point in time, a second snapshot of the source vDisk is taken. The second snapshot provides the state of the source vDisk at the point where the data replication policy transitions from an asynchronous data replication policy to a synchronous data replication policy.
[0088] In order to allow for the destination vDisk to begin replicating data from the source vDisk in a synchronous manner without having to first place the destination vDisk into the same state as the source vDisk, the service VM facilitating data replication at the source provides enough metadata to the service VM facilitating data replication at the destination to allow for a shell destination vDisk to be generated. Thus, metadata is provided from the source to the destination. The metadata provided from the source to the destination includes the structural parameters of the source vDisk, but does not identify the content changes that occurred between the first point in time and the second point in time. Using the metadata provided by the source, a shell destination vDisk that is structurally similar to the source vDisk at the time of the second snapshot, but does not have all of the contents of the source vDisk may be generated at the destination. The shell destination vDisk is later populated with the contents of the source vDisk through a background process.
[0089] Once the shell destination vDisk has been generated at the destination, any write operations performed on the source vDisk are synchronously replicated at the shell destination vDisk. Although the shell destination vDisk does not yet have all the contents of the source vDisk, it is structurally similar to the source vDisk and so any write operations performed at the source vDisk after the second point in time may be replicated on the shell destination vDisk. For example, the shell destination vDisk may have an empty structure with the same number of data blocks as the source vDisk, and may perform a write operation to any of those empty data blocks to replicate a write operation that occurs at the source vDisk.
[0090] By generating the shell destination vDisk, synchronous data replication between the source and the destination may begin immediately without having to first place the destination vDisk into the same state as the source vDisk. This significantly decreases the amount of delay incurred during a transition from an asynchronous data replication policy to a synchronous data replication policy.
[0091] The differences between the snapshot at the first point in time and the snapshot at the second point in time are then provided to the destination as a background process. Those differences are then used to populate the shell destination vDisk so that the shell destination vDisk may have the same state as the source vDisk. In some embodiments, the differences between the snapshot at the first point in time and the snapshot at the second point in time may be provided in a single operation (e.g., batch process). In other embodiments, the differences between the snapshot at the first point in time and the snapshot at the second point in time may be provided over several different operations.
[0092] By providing the differences between the snapshot at the first point in time and the second point in time as a background process after synchronous data replication has begun rather than providing the differences as a foreground process prior to initiating synchronous data replication, significant reduction in delay may be achieved while at the same time providing a destination vDisk that has the same state as the source vDisk.
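By way of illustration only, the following sketch condenses the snapshot, shell-vDisk, and background-population sequence into block-indexed lists. It assumes the shell starts from whatever the earlier asynchronous replication already delivered and that the snapshot diff is applied without overwriting blocks that have since been written synchronously; these structures are assumptions of the sketch, not the actual vDisk implementation.

```python
# Illustrative sketch of the async-to-sync transition, using lists of blocks.
# Snapshot bookkeeping and the diff logic are simplified assumptions.

def make_shell(async_copy: list) -> list:
    """Shell destination vDisk: same block layout, contents to be filled in later."""
    return list(async_copy)            # starts from what async replication delivered

def synchronous_write(source: list, shell: list, block: int, data: bytes):
    """Once the shell exists, new writes are applied to both vDisks in lockstep."""
    source[block] = data
    shell[block] = data

def populate_in_background(snap1: list, snap2: list, shell: list, synced: set):
    """Apply the snapshot diff without clobbering blocks written synchronously."""
    for i, (old, new) in enumerate(zip(snap1, snap2)):
        if old != new and i not in synced:
            shell[i] = new

snap1 = [b"a", b"b", None, None]   # state shared by source and destination at time 1
snap2 = [b"a", b"B", b"c", None]   # source state at time 2 (policy transition)
source = list(snap2)
shell = make_shell(snap1)
synced = set()
synchronous_write(source, shell, 3, b"d"); synced.add(3)   # sync replication starts now
populate_in_background(snap1, snap2, shell, synced)        # diff applied in background
print(shell == source)             # -> True: the shell has caught up to the source
```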
[0093] Further details regarding an example approach that can be taken to implement data replication using synch/asynch processing is described in co-pending U.S. Application Serial No. 14/270,705, filed on May 6, 2014, entitled "System and Methods for Dynamically Adjusting Between Asynchronous and Synchronous Data Replication Policies in a
Networked Virtualization Environment".
[0094] The above-described techniques for implementing stretch clusters can be applied even when the source and destination datastores have the same namespace protocols. This may occur due to different storage-related properties between the source and the destination. For example, when the same namespace type is on both ends (such as NFS on both ends), the above-described mapping may still occur due to the need for a new inode at the destination end.
[0095] Therefore, what has been described is an improved approach for implementing stretch clusters in a virtualization environment, where data replication can be performed across multiple namespace protocols. In addition, the granularity of the data replication can be controlled such that different combinations of data subsets are replicated from one cluster to another.
SYSTEM ARCHITECTURE
[0096] FIG. 12 is a block diagram of an illustrative computing system 1400 suitable for implementing an embodiment of the present invention. Computer system 1400 includes a bus 1406 or other communication mechanism for communicating information, which interconnects subsystems and devices, such as processor 1407, system memory 1408 (e.g., RAM), static storage device 1409 (e.g., ROM), disk drive 1410 (e.g., magnetic or optical), communication interface 1414 (e.g., modem or Ethernet card), display 1411 (e.g., CRT or LCD), input device 1412 (e.g., keyboard), and cursor control.
[0097] According to one embodiment of the invention, computer system 1400 performs specific operations by processor 1407 executing one or more sequences of one or more instructions contained in system memory 1408. Such instructions may be read into system memory 1408 from another computer readable/usable medium, such as static storage device 1409 or disk drive 1410. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and/or software. In one embodiment, the term "logic" shall mean any combination of software or hardware that is used to implement all or part of the invention.
[0098] The term "computer readable medium" or "computer usable medium" as used herein refers to any medium that participates in providing instructions to processor 1407 for execution. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as disk drive 1410. Volatile media includes dynamic memory, such as system memory 1408.
[0099] Common forms of computer readable media include, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can read.
[0100] In an embodiment of the invention, execution of the sequences of instructions to practice the invention is performed by a single computer system 1400. According to other embodiments of the invention, two or more computer systems 1400 coupled by communication link 1415 (e.g., LAN, PSTN, or wireless network) may perform the sequence of instructions required to practice the invention in coordination with one another.
[0101] Computer system 1400 may transmit and receive messages, data, and instructions, including program code (i.e., application code), through communication link 1415 and communication interface 1414. Received program code may be executed by processor 1407 as it is received, and/or stored in disk drive 1410, or other non-volatile storage for later execution. A database 1432 in storage medium 1431 may be accessed through a data interface 1433.
[0102] In the foregoing specification, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention. For example, the above-described process flows are described with reference to a particular ordering of process actions. However, the ordering of many of the described process actions may be changed without affecting the scope or operation of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense.

Claims

WHAT IS CLAIMED IS:
1. A method, comprising:
receiving a request to replicate data of a first namespace type from a first node to a second node, wherein the data is to be replicated to the second node as a second namespace type;
translating the request to replicate the data of the first namespace type into a normalized format that is implemented by a storage system;
translating the normalized format into a request corresponding to the second namespace type; and
replicating the data to the second node in the second namespace type.
2. The method of claim 1, wherein the first node corresponds to a first virtualization node, the second node corresponds to a second virtualization node, and the data corresponds to storage in a virtualization environment comprising virtual disks.
3. The method of claim 1, wherein a controller virtual machine performs namespace translations.
4. The method of claim 1, wherein a mapping structure is used to perform namespace translations.
5. The method of claim 1, wherein translations occur to replicate the data into a different storage architecture.
6. The method of claim 1, wherein a portion of a storage hierarchy at the first node is not replicated to the second node or is replicated to a different hierarchical location at the second node.
7. The method of claim 1, wherein multiple nodes replicate the data to a single node.
8. The method of claim 1, further comprising:
traversing a hierarchy for the data at the first node to identify necessary nodes;
constructing metadata at the second node corresponding to the necessary nodes; and
replicating the data to the second node in correspondence to the metadata.
9. The method of claim 1, wherein the first node replicates data to multiple other nodes.
10. The method of claim 1, wherein the data is replicated from a first cluster having a first collection of multiple datastores to a second cluster having a second collection of multiple datastores, so that the data extends across multiple clusters.
11. The method of claim 1, wherein the first node corresponds to a first hypervisor type and the second node corresponds to a second hypervisor type, the first hypervisor type being different from the second hypervisor type.
12. The method of claim 1, wherein a replication policy is established that dynamically adjusts between asynchronous and synchronous replication for replicating the data.
13. A computer program product embodied on a computer readable medium, the computer readable medium having stored thereon a sequence of instructions which, when executed by a processor causes the processor to execute any of the methods of claims 1-12.
14. A system, comprising: a computer processor to execute a set of program code instructions; a memory to hold the program code instructions, in which the program code instructions comprise program code to perform any of the methods of claims 1-12.
PCT/US2015/068178 2014-12-30 2015-12-30 Systems and methods for implementing stretch clusters in a virtualization environment WO2016109743A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/586,614 US9933956B2 (en) 2013-09-05 2014-12-30 Systems and methods for implementing stretch clusters in a virtualization environment
US14/586,614 2014-12-30

Publications (1)

Publication Number Publication Date
WO2016109743A1 true WO2016109743A1 (en) 2016-07-07

Family

ID=56287640

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/068178 WO2016109743A1 (en) 2014-12-30 2015-12-30 Systems and methods for implementing stretch clusters in a virtualization environment

Country Status (1)

Country Link
WO (1) WO2016109743A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8589574B1 (en) * 2005-12-29 2013-11-19 Amazon Technologies, Inc. Dynamic application instance discovery and state management within a distributed system
US8850528B2 (en) * 2010-06-15 2014-09-30 Oracle International Corporation Organizing permission associated with a cloud customer in a virtual computing infrastructure
US20140258616A1 (en) * 2010-07-07 2014-09-11 Nexenta System, Inc Method and system for heterogeneous data volume
US8769105B2 (en) * 2012-09-14 2014-07-01 Peaxy, Inc. Software-defined network attachable storage system and method
US20140330787A1 (en) * 2013-05-01 2014-11-06 Netapp, Inc. Namespace mirroring in an expandable storage volume

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9946569B1 (en) 2016-02-08 2018-04-17 Nutanix, Inc. Virtual machine bring-up with on-demand processing of storage requests

Similar Documents

Publication Publication Date Title
US9933956B2 (en) Systems and methods for implementing stretch clusters in a virtualization environment
US9817606B1 (en) System and methods for dynamically adjusting between asynchronous and synchronous data replication policies in a networked virtualization environment
US9671967B2 (en) Method and system for implementing a distributed operations log
EP2176747B1 (en) Unified provisioning of physical and virtual disk images
EP3117311B1 (en) Method and system for implementing virtual machine images
US11573714B2 (en) Compressibility instrumented dynamic volume provisioning
US8086808B2 (en) Method and system for migration between physical and virtual systems
US9665386B2 (en) Method for leveraging hypervisor functionality for maintaining application consistent snapshots in a virtualization environment
US9286344B1 (en) Method and system for maintaining consistency for I/O operations on metadata distributed amongst nodes in a ring structure
US20190332473A1 (en) Dynamic erasure coding
US10379759B2 (en) Method and system for maintaining consistency for I/O operations on metadata distributed amongst nodes in a ring structure
US9952782B1 (en) Method and system for accessing data between different virtual disk formats in a virtualization environment
US9582221B2 (en) Virtualization-aware data locality in distributed data processing
US20190235904A1 (en) Cloning services in virtualized computing systems
US10740133B2 (en) Automated data migration of services of a virtual machine to containers
US10353872B2 (en) Method and apparatus for conversion of virtual machine formats utilizing deduplication metadata
KR20090025204A (en) Converting machines to virtual machines
US8732427B2 (en) Systems and methods for collapsing a derivative version of a primary storage volume
US9971785B1 (en) System and methods for performing distributed data replication in a networked virtualization environment
US10002173B1 (en) System and methods for dynamically adjusting between asynchronous and synchronous data replication policies in a networked virtualization environment
WO2016109743A1 (en) Systems and methods for implementing stretch clusters in a virtualization environment
US10846011B2 (en) Moving outdated data from a multi-volume virtual disk to a backup storage device
US10339011B1 (en) Method and system for implementing data lossless synthetic full backups
US20190332413A1 (en) Migration of services of infrastructure management virtual machines to containers
US11853317B1 (en) Creating replicas using queries to a time series database

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15876312

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15876312

Country of ref document: EP

Kind code of ref document: A1