WO2015034827A1 - Replication of snapshots and clones - Google Patents

Replication of snapshots and clones

Info

Publication number
WO2015034827A1
Authority
WO
WIPO (PCT)
Prior art keywords
snapshot
data
time
snapshot data
source
Prior art date
Application number
PCT/US2014/053709
Other languages
French (fr)
Inventor
Shobhit Dayal
Gideon W. GLASS
Edward K. Lee
Original Assignee
Tintri Inc.
Priority date
Filing date
Publication date
Application filed by Tintri Inc. filed Critical Tintri Inc.
Priority to EP14841628.2A priority Critical patent/EP3042289A4/en
Priority to JP2016537937A priority patent/JP6309103B2/en
Publication of WO2015034827A1 publication Critical patent/WO2015034827A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/11 File system administration, e.g. details of archiving or snapshots
    • G06F 16/128 Details of file system snapshots on the file-level, e.g. snapshot creation, administration, deletion

Definitions

  • a snapshot can be represented as a snapshot index that tracks the changes made to a storage system between two given points in time.
  • many conventional approaches "expand" the state of the data at the point-in-time corresponding to the snapshot.
  • the expanded state of the data contains all data values that exist or can be accessed at that point-in-time and is usually much larger than the delta representation, which only contains changes that have been made since the next older snapshot. Transmission to and storage of expanded states of data at a destination system can be inefficient.
  • FIG. 1 is a diagram showing an embodiment of a storage system for the storage of VMs using virtual machine storage abstractions.
  • FIG. 2 is a block diagram illustrating an embodiment of a storage system including data and metadata.
  • FIG. 3 is a diagram showing an example of a set of metadata associated with a set of data.
  • FIG. 4 is a diagram showing an example of a set of metadata associated with source data and a set of metadata associated with a clone.
  • FIG. 5 is a diagram showing an example of snapshots that can be stored at a source system and a destination system.
  • FIG. 6 is a diagram showing an embodiment of a system for performing replication of snapshots between storage systems.
  • FIG. 7 is a diagram showing an example of how snapshot indices associated with the same expanded data state may differ at different storage systems.
  • FIG. 8 is a flow diagram showing an embodiment of a process for performing replication of a selected snapshot from a source system to a destination system.
  • FIGS. 9A and 9B are diagrams showing an example of replicating the snapshot at time t2 from a source system to a destination system.
  • FIGS. 10A and 10B are diagrams showing an example of replicating the snapshot at time t3 from a source system to a destination system.
  • FIGS. 11A and 11B are diagrams showing another example of replicating the snapshot at time t3 from a source system to a destination system.
  • FIGS. 12A and 12B are diagrams showing another example of replicating the snapshot at time t4 from a source system to a destination system.
  • FIG. 13 is a flow diagram showing an example of a process of refactoring a younger snapshot index relative to an older snapshot index.
  • FIG. 14 is a flow diagram showing an embodiment of a process for performing replication of a selected snapshot associated with a clone from a source system to a destination system.
  • FIGS. 15A, 15B, and 15C are diagrams showing an example of replicating a snapshot at time t4 (S4) associated with a clone from a source system to a destination system.
  • the invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor.
  • these implementations, or any other form that the invention may take, may be referred to as techniques.
  • the order of the steps of disclosed processes may be altered within the scope of the invention.
  • a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task.
  • the term 'processor' refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions.
  • Embodiments of performing efficient, flexible replication of snapshots and clones between storage systems are described herein.
  • Storage systems for which the replication of snapshots is performed may be located great distances apart from each other. Snapshots and clones allow space-efficient representation of point-in-time copies of data. Snapshots are generally read-only copies, while clones are generally copies that can be read or written to. Typical replication of snapshots and clones at a remote storage system often results in the loss of space efficiency or places restrictions on the subset or order in which snapshots and clones must be replicated.
  • Embodiments described herein enable replication of snapshots and clones to be performed using a minimal amount of information, represented as changes or deltas, that is transmitted and stored between the replicating storage systems. Any subset of snapshots and clones may be replicated in any order to any system, while preserving a minimal representation of data and metadata on the storage systems.
  • a "snapshot" comprises a point-in-time state of a set of data and in various embodiments, a subsequently generated snapshot includes mappings to data that was modified since the previous snapshot was created.
  • a set of data may be associated with a virtual machine (also sometimes referred to as a "VM"), a virtual disk (also sometimes referred to as a "vdisk”), or a file, for example.
  • the metadata associated with a set of data (e.g., a VM, a vdisk, or a file) comprises one or more snapshots.
  • a snapshot associated with a point-in-time state of a set of data is physically represented/stored as an index at a storage system.
  • a “snapshot” is sometimes used to refer to a state of a set of data at a particular point-in-time and/or the physical representation (e.g., an index) that represents that state of the set of data at that particular point-in-time at a particular storage system.
  • a "user” performs read operations on a snapshot using "logical offsets," which are mapped to "physical offsets” using the indices associated with the snapshots comprising the set of data.
  • the physical offsets can then be used to read and write data from the underlying physical storage devices. Read operations lookup the logical offset in one or more indices to find the corresponding physical offset, while write operations create new entries or update existing entries in indices.
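The lookup and update behavior described above can be sketched in a few lines of Python. This is an illustration only, not the patent's implementation; the offsets shown are invented:

```python
index = {}                        # logical offset -> physical offset

def write(logical_offset, physical_offset):
    # Write operations create new entries or update existing entries.
    index[logical_offset] = physical_offset

def read(logical_offset):
    # Read operations look up the logical offset to find the physical offset.
    return index.get(logical_offset)

write(1, 4096)
write(2, 8192)
write(1, 12288)                   # a later write updates the existing entry
print(read(1))                    # 12288
print(read(3))                    # None: offset 3 was never mapped
```

In practice the patent notes the index may be a hash table or a B-tree; a plain dictionary stands in for either here.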
  • each snapshot index includes mappings to data modified since the immediately previously generated (i.e., older) snapshot index.
  • each snapshot index (other than the oldest snapshot index) associated with the set of data may depend on (e.g., point to, link to, and/or otherwise reference) at least a next older snapshot index.
  • snapshots associated with different points-in-time states of the set of data can be represented as a sequence of snapshot indices at a storage system. Due to the dependencies among snapshot indices in a sequence, as will be described in further detail below, different storage systems with the same points-in-time snapshots associated with the same set of data may store indices that map somewhat different sets of logical offsets to correspond to their respective sequences of snapshots.
  • a "clone” refers to a copy of an existing set of data (the existing set of data is sometimes referred to as "source data"). In various embodiments, a clone is generated from a snapshot of the source data.
  • the snapshot of the source data from which a clone is created is referred to as a "shared snapshot.”
  • a new set of metadata is created and data associating the clone's new set of metadata to the source data's set of metadata is stored such that at least some of the snapshot indices associated with the source data are to be shared with the new set of metadata associated with the clone and at least some of the data associated with source data is shared with the clone.
  • FIG. 1 is a diagram showing an embodiment of a storage system for the storage of VMs using virtual machine storage abstractions.
  • system 100 includes server 106, network 104, and storage system 102.
  • network 104 includes various high-speed data networks and/or telecommunications networks.
  • storage system 102 communicates with server 106 via network 104.
  • in some embodiments, the system for the storage of VMs using virtual machine storage abstractions does not include network 104, and storage system 102 is a component of server 106.
  • in some embodiments, server 106 is configured to communicate with one or more storage systems in addition to storage system 102.
  • server 106 runs several VMs.
  • VMs 108, 110, and 112 running on server 106 are examples of VMs.
  • a VM is a software implementation of a physical machine (e.g., a computer) that executes programs like a physical machine.
  • Each VM may run a different operating system.
  • different operating systems may concurrently run and share the resources of the same physical machine.
  • a VM may span more than one physical machine and/or may be moved (e.g., migrated) from one physical machine to another.
  • a VM includes one or more virtual disks (vdisks) and other data related to the specific VM (e.g., configuration files and utility files for implementing functionality, such as snapshots, that are supported by the VM management infrastructure).
  • a vdisk appears to be an ordinary physical disk drive to the guest operating system running on a VM.
  • one or more files may be used to store the contents of vdisks.
  • a VM management infrastructure (e.g., a hypervisor) may create a set of files in a directory for each specific VM. Examples of files created by the hypervisor store the content of one or more vdisks, the state of the VM's BIOS, information and metadata about snapshots created by the hypervisor, configuration information of the specific VM, etc.
  • data associated with a particular VM is stored on a storage system as one or more files.
  • the files are examples of virtual machine storage abstractions.
  • the respective files associated with (at least) VMs 108, 110, and 112 running on server 106 are stored on storage system 102.
  • storage system 102 is configured to store meta-information identifying which stored data objects, such as files or other virtual machine storage abstractions, are associated with which VM or vdisk.
  • storage system 102 stores the data of VMs running on server 106 and also stores the metadata that provides mapping or other identification of specific VMs.
  • mapping or identification of specific VMs includes mapping to the files on the storage that are associated with each specific VM.
  • storage system 102 also stores at least a portion of the files associated with the specific VMs in addition to the mappings to those files.
  • storage system 102 refers to one or more physical systems and/or associated hardware and/or software components configured to work together to store and manage stored data, such as files or other stored data objects.
  • a hardware component that is used to (at least in part) implement the storage system may be comprised of either disk or flash, or a combination of disk and flash.
  • FIG. 2 is a block diagram illustrating an embodiment of a storage system including data and metadata.
  • storage system 102 includes a network connection 202 and a communication interface 204, such as a network interface card or other interface, which enable the storage system to be connected to and communicate via a network such as network 104 of FIG. 1.
  • the storage system 102 further includes a network file system front end 206 configured to handle NFS requests from virtual machines running on systems such as server 106 of FIG. 1.
  • the network file system front end is configured to associate NFS requests as received and processed with a corresponding virtual machine and/or vdisk with which the request is associated, for example, using meta-information stored on storage system 102 or elsewhere.
  • the storage system 102 includes a file system 208 configured and optimized to store VM data.
  • metadata 210 is configured to store sets of metadata associated with various sets of data and their associated snapshots and clones.
  • a set of metadata may be associated with a VM, a vdisk, or a file.
  • Storage 212 may comprise at least one tier of storage. In some embodiments, storage 212 may comprise at least two tiers of storage, where the first tier of storage comprises flash or other solid state disk (SSD) and the second tier of storage comprises a hard disk drive (HDD) or other disk storage.
  • a set of metadata stored at metadata 210 includes at least one index that includes mappings to locations in storage 212 at which a set of data (e.g., VM, vdisk, or file) associated with the set of metadata is stored.
  • a set of metadata stored at metadata 210 includes at least an index that is a snapshot associated with a set of data stored in storage 212.
  • a set of metadata stored at metadata 210 includes a sequence of one or more snapshot indices associated with a set of data stored in storage 212, where each snapshot index (physically) depends on at least an older (i.e., an earlier generated) snapshot index, if one exists.
  • a clone may be generated based on an existing (or source) set of data stored in storage 212.
  • the clone may be generated using a snapshot of the source set of data in the source data's set of metadata that is stored in metadata 210.
  • the snapshot of the source data from which a clone is generated is referred to as a "shared snapshot.”
  • a new set of metadata is created for the clone and data associating the clone (and/or the clone's set of metadata) with the set of metadata associated with the (e.g., shared snapshot of the) source data is stored at metadata 210. At least some of the metadata associated with the source data is shared with the clone.
  • when a received request includes an operation (e.g., read or write) to access (e.g., a current state or a past state of) data from a set of data (e.g., a VM, a vdisk, or a file), the set of metadata associated with that data is retrieved.
  • if the data associated with the request comprises a clone, at least a portion of the set of metadata associated with the source data may be accessed as well.
  • FIG. 3 is a diagram showing an example of a set of metadata associated with a set of data.
  • a set of metadata may be associated with a set of data (e.g., a VM, a vdisk, or a file).
  • the set of metadata is associated with a file.
  • the set of metadata includes a current snapshot index, a snapshot at time t2, and a snapshot at time t1.
  • the current snapshot index depends on (e.g., is linked to) the snapshot at time t2 and the snapshot at time t2 depends on (e.g., is linked to) the snapshot at time t1.
  • data associated with the file may be stored at offsets 1, 2, 3, and 4.
  • Metadata may be thought of as the mapping used to translate a logical location (e.g., a logical offset) to a physical location (e.g., a physical offset) of underlying storage for data that a user may have written.
  • the metadata may be organized as an efficient index data structure such as a hash table or a B-tree.
  • the relationship between a logical offset of a data, the index, and the physical offset of the data may be described as follows: logical offset → index → physical offset.
  • each set of metadata includes at least one active index: the current snapshot index. The current snapshot index is active in the sense that it can be modified. In some embodiments, the current snapshot index stores all offsets in the file that have been mapped since the previous snapshot was created.
  • a snapshot is typically a read-only file, but the current snapshot index is modifiable until the next prescribed snapshot creation event occurs.
  • a prescribed snapshot creation event may be configured by a user and may comprise the elapse of an interval of time, the detection of a particular event, or a receipt of a user selection to create a new snapshot. Once the next prescribed snapshot creation event is reached, the state of the current snapshot index is preserved to create a new snapshot and a new empty current snapshot index is created.
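The snapshot creation event described above can be sketched with a small hypothetical Python model. The class and method names are invented for illustration; the patent does not prescribe an implementation:

```python
class FileMetadata:
    """Hypothetical model: one modifiable current index plus frozen snapshots."""

    def __init__(self):
        self.current = {}     # the active, modifiable current snapshot index
        self.snapshots = []   # frozen (read-only) snapshot indices, oldest first

    def write(self, logical_offset, physical_offset):
        # Write operations to the set of data update the current snapshot index.
        self.current[logical_offset] = physical_offset

    def create_snapshot(self):
        # On a snapshot creation event, preserve the state of the current
        # snapshot index and start a new, empty current snapshot index.
        self.snapshots.append(dict(self.current))
        self.current = {}

md = FileMetadata()
md.write(1, 100)
md.create_snapshot()      # snapshot at t1 captures offset 1
md.write(2, 200)
md.create_snapshot()      # snapshot at t2 captures only the delta since t1
print(md.snapshots)       # [{1: 100}, {2: 200}]
```

Note that each frozen snapshot holds only the offsets written since the previous snapshot, which is the delta representation the patent relies on.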
  • write operations to the set of data result in the update of the current snapshot index.
  • read operations on the set of data result in a search of the current snapshot index and subsequently, if the desired data is not found in the current snapshot index, a search through the sequence of snapshots; each index is searched in a prescribed manner.
  • a snapshot of a file is the point-in-time state of the file at the time the snapshot was created.
  • a snapshot of a VM is the collection of file-level snapshots of files that comprise the VM.
  • a snapshot is represented as an index that stores mappings to the data that was modified after the previous snapshot was created.
  • each snapshot only includes the updates to a file (i.e., deltas) for a given time period (since the creation of the previous snapshot).
  • the snapshot may be represented by a compact space-efficient structure.
  • the current snapshot index becomes the index of that snapshot, and a new empty current snapshot index is created in preparation for the next snapshot.
  • Each snapshot is linked to (or otherwise physically dependent on) the next younger and next older snapshot.
  • the links that go backward in time (i.e., the links to the next older snapshots) are traversed during snapshot and clone read operations.
  • the current snapshot index is linked to (e.g., points to) the snapshot at time t2 and the snapshot at time t2 is linked to the snapshot at time t1.
  • each of the snapshot at time t2 and the snapshot at time t1 is represented by a corresponding index.
  • the snapshot at time t1 can be referred to as being "older" than the snapshot at time t2, and the snapshot at time t2 as being "younger" than the snapshot at time t1, because time t1 is earlier than time t2.
  • each snapshot index of the set of metadata associated with the file is associated with a stored "file global ID" that identifies that the sequence of snapshots belongs to the file.
  • Read operations to the current state of the file can be serviced from the current snapshot index and/or the snapshot at time t2 and the snapshot at time t1, while write operations to the file update the current snapshot index.
  • data A is written before time t1 at offset 1 and then the snapshot at time t1 is created.
  • a read operation on a specified snapshot for a logical block offset may proceed in the following manner: First, a lookup of the specified snapshot index is performed for the logical block offset of the read operation.
  • if the mapping exists, then data is read from the physical device (underlying storage) at the corresponding physical address and returned. Otherwise, if the mapping does not exist within the specified snapshot index, the link to the next older snapshot is traversed and a search of this older snapshot's index is performed. This process continues until a mapping for the logical block offset is found in a snapshot index or the last snapshot in the chain has been examined. For example, assume that a read operation to the set of data requests current data associated with offset 1. First, the current snapshot index of the set of data is searched for a mapping to data associated with offset 1.
  • the mapping is not found in the current snapshot index, so the link (e.g., the stored associating data) from the current snapshot index to the snapshot at time t2 is traversed and a search of the snapshot at time t2 is performed.
  • the mapping is not found in the snapshot at time t2, so the link from the snapshot at time t2 to the next older snapshot, the snapshot at time t1, is traversed and a search of the snapshot at time t1 is performed.
  • the mapping associated with offset 1 is found in the snapshot at time t1, the search ends, and the snapshot at time t1 is used to service the request.
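The traversal in this example can be reproduced with a minimal sketch. The stored values are stand-ins (data A at offset 1 matches the example above; the entry at time t2 is assumed for illustration):

```python
def read_snapshot(chain, logical_offset):
    """Search each index, youngest to oldest, until a mapping is found."""
    for index in chain:
        if logical_offset in index:
            return index[logical_offset]
    return None   # the last snapshot in the chain has been examined

snap_t1 = {1: "A"}   # data A was written at offset 1 before time t1
snap_t2 = {2: "B"}   # assumed delta captured at time t2
current = {}         # nothing written since time t2

# A read of current data at offset 1 searches the current snapshot index,
# then the snapshot at t2, and finally finds the mapping in the snapshot at t1.
print(read_snapshot([current, snap_t2, snap_t1], 1))   # A
```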
  • FIG. 4 is a diagram showing an example of a set of metadata associated with source data and a set of metadata associated with a clone.
  • a clone may be created from an existing snapshot of a set of data.
  • a snapshot of the source data was first created, then a clone was created from this snapshot.
  • snapshots are represented in a compact format that only stores the changes that have been made to the associated set of data since the previous snapshot was created.
  • the set of metadata associated with the source data includes a snapshot at time t3, a snapshot at time t2, and a snapshot at time t1. As shown in the example of FIG. 4, each of the snapshot at time t3, the snapshot at time t2, and the snapshot at time t1 is represented by a corresponding index.
  • the clone is created from the snapshot at time t2 of the source metadata. Therefore, the snapshot at time t2 is now also referred to as a shared snapshot because it is now shared between the source data and its clone. While not shown in the example, one or more other clones besides the one shown may be created from the snapshot at time t2 of the source metadata.
  • each snapshot has an associated reference count that tracks the total number of clones that have been created from the snapshot.
  • the reference count of the shared snapshot is incremented by the number of new clones that were created from the snapshot.
  • when a clone is deleted, the reference count associated with the shared snapshot from which the clone was created is decremented by one.
  • the reference count of a shared snapshot is considered when it is determined whether the shared snapshot should be deleted. For example, a snapshot cannot be deleted if it has a non-zero reference count, thus preserving the data shared by the clones.
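The reference-counting rule above can be sketched as follows (illustrative Python; the names are invented, not the patent's):

```python
class Snapshot:
    def __init__(self, name):
        self.name = name
        self.ref_count = 0   # number of clones created from this snapshot

def create_clone(shared_snapshot):
    # Creating a clone increments the shared snapshot's reference count.
    shared_snapshot.ref_count += 1
    return {"shared": shared_snapshot}

def delete_clone(clone):
    # Deleting a clone decrements the shared snapshot's reference count.
    clone["shared"].ref_count -= 1

def can_delete(snapshot):
    # A snapshot with a non-zero reference count cannot be deleted,
    # preserving the data shared by its clones.
    return snapshot.ref_count == 0

s2 = Snapshot("t2")
clone = create_clone(s2)
print(can_delete(s2))   # False: a clone still depends on s2
delete_clone(clone)
print(can_delete(s2))   # True: no clones remain
```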
  • creating clones does not require copying metadata and/or data. Instead, a new set of metadata is created for a new clone.
  • the new set of metadata created for a new clone may include at least one or more of the following: a new file global ID, a current snapshot index (not shown in the diagram), an identifier associated with the shared snapshot from which the clone was generated, and an identifier associated with the set of source metadata (e.g., source sequence of snapshots).
  • information associating each clone with the shared snapshot of the source data is stored.
  • information associating each clone with the shared snapshot of the source data may include the identifier ("snapshot global ID," which will be described in further detail below) that identifies the particular snapshot that is the shared snapshot from the sequence of snapshots associated with the source data.
  • the snapshot itself may be composed of snapshots of data in multiple files.
  • the snapshot metadata in turn identifies the files using the identifier file global ID and the relevant snapshot of the file using the local snapshot ID.
  • the information associating the clone with the shared snapshot may be stored with the clone metadata, the source metadata, and/or elsewhere.
  • the associating data is a pointer or another type of reference that the clone can use to point to the index of the shared snapshot from which the clone was created. This link to the shared snapshot may be traversed during reads of the clone.
  • Snapshots may also be generated for a clone in the same manner that snapshots are generated for a non-clone.
  • a snapshot at time t4 which is represented by a corresponding index, was generated (e.g., using a current snapshot index associated with the clone). Because the clone shares each snapshot of the source data including the shared snapshot (the snapshot at time t2 in the example of FIG. 4) and any older snapshots (the snapshot at time tl in the example of FIG. 4), the clone's snapshot at time t4 includes data (D at logical offset 4) that has been modified since the shared snapshot has been created.
  • the clone now includes data value B and data value A (via the pointer back to the shared snapshot of the source data), which it cloned from the source, and also data value D, which was written to the clone after it was created and captured in a snapshot of the clone.
  • the source data is not aware that data D has been written to the clone and/or captured in a snapshot of the clone.
  • when a read operation on a snapshot of the clone is received, the index of that snapshot is accessed first. If the desired data is not in the clone's snapshot index, then the clone's snapshots are traversed backwards in time.
  • if one of the clone's snapshot indices includes a mapping for the logical block offset of the requested data, then data is read from the corresponding physical address and returned.
  • otherwise, the source's snapshots are traversed backwards in time starting from the shared snapshot on which the clone was based (i.e., if the mapping to the requested data is not found in the shared snapshot of the source metadata, then the link to the next older snapshot is traversed and searched, and so forth).
  • for example, a read operation to the clone requests the current data associated with offset 4.
  • the only snapshot of the clone, the snapshot at time t4, is searched for a mapping to data associated with offset 4.
  • the mapping associated with offset 4 is found in the clone's snapshot at time t4, the search ends, and the data from the clone's snapshot index is used to service the request.
  • a read operation to the clone requests data associated with offset 1.
  • the only snapshot of the clone, the snapshot at time t4, is searched for a mapping to data associated with offset 1.
  • the mapping is not found in the only snapshot of the clone, the snapshot at time t4, so the link (e.g., the stored associating data) from the clone's snapshot at time t4 to the shared snapshot is traversed and a search of the shared snapshot, the snapshot at time t2, is performed.
  • the mapping is not found in the shared snapshot, so the link from the shared snapshot to the next older snapshot, the snapshot at time t1, is traversed and a search of the snapshot at time t1 is performed.
  • the mapping associated with offset 1 is found in the snapshot at time t1 of the source data, the search ends, and the snapshot at time t1 is used to service the request.
  • that is, the mapping found in the snapshot at time t1 of the source data is used to service the read operation to the clone.
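The clone read path in this example (offsets 4 and 1 of FIG. 4) can be reproduced with a minimal sketch; the values A, B, and D stand in for the mapped data, and the class is an illustration rather than the patent's structure:

```python
class SnapshotIndex:
    def __init__(self, mappings, older=None):
        self.mappings = dict(mappings)   # logical offset -> data
        self.older = older               # link to the next older index

def read(index, logical_offset):
    """Traverse backwards in time until a mapping is found."""
    while index is not None:
        if logical_offset in index.mappings:
            return index.mappings[logical_offset]
        index = index.older
    return None

# Source chain: t1 maps offset 1 (A); t2, the shared snapshot, maps offset 2 (B).
t1 = SnapshotIndex({1: "A"})
t2 = SnapshotIndex({2: "B"}, older=t1)
# The clone's snapshot at t4 maps offset 4 (D) and links back to the shared snapshot.
t4 = SnapshotIndex({4: "D"}, older=t2)

print(read(t4, 4))   # D: found in the clone's own snapshot index
print(read(t4, 1))   # A: falls through t4 and the shared snapshot t2 into t1
```

The same `read` function serves both non-clone and clone chains: the clone's jump into the source metadata is just one more `older` link.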
  • metadata (e.g., snapshots) may be shared between a non-clone and its clone and therefore, in some instances, read operations to the clone may be serviced by metadata associated with the source data.
  • FIG. 5 is a diagram showing an example of snapshots that can be stored at a source system and a destination system.
  • in various embodiments, snapshots (e.g., the indices thereof) and the dependencies among them (e.g., links, pointers, or other references) can be stored at the source system and the destination system.
  • the snapshot at time t3 is linked to its next older snapshot at time t2, which is in turn linked to its next older snapshot at time t1.
  • the snapshot at time t1 contains all changes made to the storage system since the beginning of time up to and including time t1.
  • the snapshot at time t2 contains changes made up to and including time t2 but after time t1.
  • the snapshot at time t3 contains any changes made up to and including time t3 but after time t2.
  • an expanded state of the data at a point-in-time contains all data values that exist or can be accessed at that point-in-time and is usually much larger than the delta representation at the same point-in-time, which only contains changes that have been made since the next older snapshot was generated. Therefore, these conventional approaches transmit the complete expanded state of the data at time tl, then the expanded state of the data at time t2 and so on, instead of just the deltas. Transmission of expanded states of data results in much more state information being transmitted and stored than if only the deltas were sent.
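The difference in transmitted state can be made concrete with a small calculation (the counts are illustrative, not from the patent):

```python
def expand(deltas):
    """Merge deltas, oldest to youngest, into the full point-in-time state."""
    state = {}
    for delta in deltas:
        state.update(delta)
    return state

d1 = {i: "x" for i in range(1000)}   # snapshot at t1: the initial writes
d2 = {1005: "y"}                     # snapshot at t2: one change after t1
d3 = {1007: "z"}                     # snapshot at t3: one change after t2

# Replicating deltas sends each mapping exactly once.
deltas_sent = len(d1) + len(d2) + len(d3)
# Replicating expanded states re-sends all earlier data at every point in time.
expanded_sent = sum(len(expand([d1, d2, d3][:i])) for i in (1, 2, 3))
print(deltas_sent)     # 1002
print(expanded_sent)   # 3003
```

With only two small changes after t1, expanded-state replication transmits roughly three times as many mappings as delta replication, and the gap grows with each additional snapshot.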
  • a clone can be represented as an index that depends on (e.g., links to, points to, or otherwise references) a "shared snapshot" index.
  • a clone, c1, is created from the snapshot at time t2.
  • a snapshot may be shared by any number of clones and clones may themselves have snapshots, which may be shared by other clones.
  • a clone, c11, is created from the snapshot at time t1, and other clones, c10 and c3, may be created from c11.
  • Embodiments described herein enable replication of snapshots and clones to be performed using a minimal amount of information, represented as changes or deltas, that is transmitted and stored between the replicating storage systems.
  • the term "snapshot" is used herein to refer collectively to non-clone snapshots and clone snapshots.
  • the term "clone" is used herein to refer specifically to clone snapshots rather than non-clone snapshots.
  • FIG. 6 is a diagram showing an embodiment of a system for performing replication of snapshots between storage systems.
  • system 600 includes first storage system 602, network 604, second storage system 606, and snapshot replication system 608.
  • Network 604 comprises high-speed data networks and/or telecommunications networks.
  • each of first storage system 602 and second storage system 606 is implemented with storage system 102 of system 100 of FIG. 1.
  • First storage system 602 and second storage system 606 may communicate to each other and to snapshot replication system 608 over network 604.
  • First storage system 602 and second storage system 606 may each store a corresponding sequence of snapshots associated with the same set of data.
  • the set of data may be associated with a VM, a vdisk, or a file.
  • first storage system 602 is associated with a production system and is configured to generate a snapshot for the set of data every configured interval.
  • First storage system 602 would store a copy of each newly created snapshot and send a copy of the snapshot to second storage system 606, a backup system.
  • while first storage system 602 and second storage system 606 may each store a corresponding sequence of snapshot indices associated with the same set of data, the two systems may not necessarily store snapshot indices that represent the same points-in-time data states for that set of data.
  • first storage system 602, the production system, may merge the index of a to-be-deleted snapshot into the index of an adjacent snapshot in its stored sequence of snapshots and then delete the index of the to-be-deleted snapshot. After deleting a snapshot index from the sequence of snapshot indices, the stored physical dependencies associated with the snapshot indices previously adjacent to the deleted snapshot index can be changed at first storage system 602 to accommodate the removal of that snapshot index. While first storage system 602, the production system, may have deleted a snapshot index from its sequence (e.g., to enforce a retention policy associated with that snapshot), second storage system 606, the backup system, may still maintain a copy of the snapshot.
  • first storage system 602 the production system, has deleted the snapshot from its sequence of snapshots, it may be desired to replicate the deleted snapshot back at first storage system 602.
  • A snapshot may be desired to be replicated at a storage system, for example, in the event of a disaster recovery or a desired reversion to an earlier point-in-time data state associated with the snapshot.
  • To replicate a desired snapshot, a set of snapshot data (e.g., an index) is sent from the source system to the destination system; in some embodiments, the set of snapshot data comprises a delta between two snapshot indices stored at the source system (second storage system 606 in this example).
  • The set of snapshot data sent by second storage system 606 can be used to generate a snapshot index to represent the point-in-time data state associated with the desired snapshot, and then the snapshot index can be inserted into the sequence of snapshot indices stored at first storage system 602, as will be described in further detail below.
  • Conversely, a desired snapshot can be replicated from first storage system 602 and inserted into the sequence of snapshots stored at second storage system 606.
  • Identifying information associated with each snapshot includes the following:
  • A "snapshot global ID" associated with each snapshot comprises a combination of a "creator file system ID" and a "creator snapshot ID."
  • The "creator file system ID" comprises a (e.g., 64-bit) global unique identifier of the storage system that created the snapshot.
  • The storage system that created a clone from the set of data can generate new snapshots for the clone.
  • Other storage systems may create new clones based on the snapshots of the aforementioned clone and create new snapshots for the new clones.
  • In FIG. 6, first storage system 602 and second storage system 606 may each store a corresponding sequence of snapshots associated with the same set of data.
  • The "creator snapshot ID" comprises an (e.g., 64-bit) identifier that is determined by storing a counter that is incremented each time a new snapshot is created on the storage system. As such, a younger snapshot will have a higher creator snapshot ID than an older snapshot created by the same storage system.
  • The "snapshot global ID" of a snapshot uniquely identifies the "expanded" state of the set of data at the point-in-time associated with the snapshot.
  • Snapshots associated with the same snapshot global ID may be represented using different physical representations (e.g., indices) at different storage systems, depending on the other snapshots of the same sequence that are stored at those storage systems.
  • Snapshot global IDs allow any storage system or snapshot replication system 608 to determine the ordering relationship of a set of snapshots even if one or more of the snapshots are deleted.
  • The management of snapshot global IDs allows each storage system to build a graph of "next younger" relationships that can be used to replicate and store snapshots efficiently as deltas rather than expanded states.
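The ID structure and ordering rule above can be sketched as follows; the tuple layout, field names, and example values are illustrative assumptions, not the patent's on-disk format:

```python
from typing import NamedTuple

# Sketch of a snapshot global ID: the combination of the creator file
# system ID and the creator snapshot ID.
class SnapshotGlobalID(NamedTuple):
    creator_file_system_id: int  # (e.g., 64-bit) unique ID of the creating system
    creator_snapshot_id: int     # per-system counter incremented per new snapshot

def younger_of(a: SnapshotGlobalID, b: SnapshotGlobalID) -> SnapshotGlobalID:
    # For two snapshots created by the same storage system, the snapshot
    # with the higher creator snapshot ID is the younger one.
    assert a.creator_file_system_id == b.creator_file_system_id
    return a if a.creator_snapshot_id > b.creator_snapshot_id else b

s1 = SnapshotGlobalID(creator_file_system_id=0xA1, creator_snapshot_id=7)
s2 = SnapshotGlobalID(creator_file_system_id=0xA1, creator_snapshot_id=9)
assert younger_of(s1, s2) == s2  # s2 was created later on the same system
```

Because the counter is monotonic per creating system, this ordering can be recovered even after intermediate snapshots have been deleted.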
  • A "snapshot file global ID" associated with each snapshot comprises a combination of a "creator file system ID," a "creator snapshot ID," and a "file global ID."
  • The "creator file system ID" and the "creator snapshot ID" for the "snapshot file global ID" are the same as for the "snapshot global ID," as described above.
  • The "file global ID" comprises an identifier of the set of data with which the snapshot is associated.
  • The file global ID can identify the particular file or vdisk, or a cloned file of any of the aforementioned sets of data, with which the snapshot is associated.
  • The file global ID can be used to determine which snapshots belong to which sequence of snapshots and/or set of metadata.
  • For example, the file global IDs of two snapshots associated with the same creator file system ID but different creator snapshot IDs can help determine that the two snapshots belong to two different sequences of snapshots, one belonging to a particular VM and the other to a clone of that VM.
  • The identifying information described above associated with each snapshot that is stored at each of first storage system 602 and second storage system 606 may be stored by one or more of first storage system 602, second storage system 606, and snapshot replication system 608.
  • The identifying information can be used by at least one of first storage system 602, second storage system 606, and snapshot replication system 608 to determine which snapshots are stored at which systems and also to deduce the ordering of snapshots. Therefore, such identifying information can be used to perform replication of snapshots from first storage system 602 to second storage system 606, and vice versa, regardless of which storage system created the snapshots and/or the order in which the snapshots are replicated.
  • Snapshot replication system 608 may not necessarily store snapshots, but it may be configured to determine the set of snapshot data that should be sent from a source system to a destination system in order to replicate a desired snapshot at the destination system.
  • The identifying information may be pruned only when a snapshot has been deleted from all systems. In some embodiments, however, some information may be pruned when it is deemed that the benefits of keeping the information are low.
  • FIG. 7 is a diagram showing an example of how snapshot indices associated with the same expanded data state may differ at different storage systems.
  • The first system stores a sequence of three snapshots associated with a particular set of data (e.g., a VM, a vdisk, or a file) and the second system stores a sequence of two snapshots associated with the same set of data.
  • Each snapshot is represented as an index that maps logical offsets (e.g., 1, 2, 3, or 4) to data values stored at corresponding physical offsets.
  • The expanded states (i.e., point-in-time states) of the set of data that can be accessed from the first system include the snapshot at time t1, the snapshot at time t2, and the snapshot at time t3.
  • Each of the respective indices associated with the snapshot at time t1, the snapshot at time t2, and the snapshot at time t3 at the first system links back to (e.g., physically depends on) the next older snapshot (e.g., the snapshot at time t3 links to the snapshot at time t2 and the snapshot at time t2 links to the snapshot at time t1).
  • The expanded states (i.e., point-in-time states) of the set of data that can be accessed from the second system include the snapshot at time t1 and the snapshot at time t3.
  • The second system may have previously stored a copy of the snapshot at time t2 but then deleted it. As such, index 704 associated with the snapshot at time t3 stored at the second system includes the data (e.g., a mapping of offset 2 to data value B) from the snapshot at time t2 and is also modified to link to the index associated with the snapshot at time t1 at the second system.
  • The respective snapshot indices used to represent the snapshot at time t3 at each of the two systems are different due to the presence of different snapshots at each system.
  • The snapshot at time t3 stored at the first system is represented by index 702 and the snapshot at time t3 stored at the second system is represented by index 704.
  • Snapshot at time t3 index 702 at the first system includes only a mapping of offset 3 to data value C, while snapshot at time t3 index 704 at the second system includes a mapping of offset 2 to data value B and a mapping of offset 3 to data value C. Furthermore, snapshot at time t3 index 702 links to an index associated with the snapshot at time t2, while snapshot at time t3 index 704 links to an index associated with the snapshot at time t1.
  • Snapshots associated with the same expanded state may therefore be represented using different indices at different systems.
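The FIG. 7 indices can be sketched as chained dictionaries. The contents of indices 702 and 704 follow the text; the contents assumed for the snapshot at time t1 (offset 1 mapped to data value A) are an assumption made for illustration:

```python
# Each snapshot index maps logical offsets to data values and links to
# the next older index. A read walks the chain from youngest to oldest.
class SnapshotIndex:
    def __init__(self, entries, older=None):
        self.entries = dict(entries)  # logical offset -> data value
        self.older = older            # next older snapshot index, if any

    def read(self, offset):
        node = self
        while node is not None:
            if offset in node.entries:
                return node.entries[offset]
            node = node.older
        return None  # offset never written as of this snapshot

# First system (FIG. 7): t1 <- t2 <- t3, where index 702 holds only {3: 'C'}.
t1 = SnapshotIndex({1: 'A'})                     # assumed contents of t1
t2 = SnapshotIndex({2: 'B'}, older=t1)
t3_first = SnapshotIndex({3: 'C'}, older=t2)     # index 702

# Second system: t2 was deleted, so index 704 absorbed t2's entry
# and links directly to t1.
t3_second = SnapshotIndex({2: 'B', 3: 'C'}, older=t1)  # index 704

# Both chains expose the same expanded state at time t3.
assert [t3_first.read(o) for o in (1, 2, 3)] == ['A', 'B', 'C']
assert [t3_second.read(o) for o in (1, 2, 3)] == ['A', 'B', 'C']
```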
  • Once a snapshot has been replicated, the source system and the destination system will both store a copy of that snapshot, but each system may store a different physical representation (e.g., index) to represent that snapshot, depending on the other snapshots that the system stores.
  • FIG. 8 is a flow diagram showing an embodiment of a process for performing replication of a selected snapshot from a source system to a destination system.
  • The source system and the destination system of process 800 can be implemented using first storage system 602 and second storage system 606 of system 600 of FIG. 6, respectively, or second storage system 606 and first storage system 602, respectively.
  • Process 800 is implemented at first storage system 602, second storage system 606, or snapshot replication system 608 of system 600 of FIG. 6.
  • A request to store at a destination system a snapshot data to represent at the destination system a state of a set of data at a first point-in-time is received, wherein a first source system snapshot data that represents at a source system the state of the set of data at the first point-in-time depends on a second source system snapshot data that represents at the source system a state of the set of data at a second point-in-time.
  • A snapshot stored at a source system that is identified by its corresponding point-in-time (i.e., expanded state) (e.g., associated with identifying information such as the snapshot global ID or the snapshot file global ID) is selected (e.g., by a system administrator and/or computer program) to be replicated at a destination system.
  • The snapshot data (e.g., index or other type of physical representation) of the selected snapshot at the source system depends on the snapshot data (e.g., index or other type of physical representation) of a next older snapshot in a sequence of snapshot data (e.g., indices) stored at the source system.
  • The snapshot index of the selected snapshot being dependent on the snapshot index of the next older snapshot means that the data stored in the snapshot index of the selected snapshot comprises only the changes made to the set of data since the next older snapshot was created.
  • The selected snapshot does not need to be replicated in a particular chronological order. Put another way, the selected snapshot does not need to be replicated only after the next older snapshot was replicated.
  • The snapshot data to represent at the destination system the state of the set of data at the first point-in-time is determined, wherein the snapshot data is determined based at least in part on data comprising the first source system snapshot data and a destination system snapshot data that represents at the destination system a state of the set of data at a third point-in-time.
  • The point-in-time (i.e., expanded state) of an existing snapshot at the destination system can be identified (e.g., using the stored identifying information) to help determine the older/younger and/or difference in point-in-time relationships between the selected snapshot at the source system and the existing snapshot at the destination system.
  • Such ordering relationships can be used to determine how data can be (e.g., efficiently) sent from the source system to replicate the selected snapshot at the destination system.
  • The snapshot data (e.g., index) determined to be sent from the source system to the destination system can include metadata (e.g., logical mappings to underlying data) and underlying data.
  • The snapshot data determined to be sent from the source system to the destination system comprises a delta determined based at least in part on the snapshot index of the selected snapshot and the snapshot index of another snapshot stored at the source system.
  • The other snapshot may be one on which the selected snapshot depends (e.g., links to).
  • Sending deltas between snapshot data is much more efficient than sending expanded states of snapshots, as is conventionally done.
  • In some cases, a smaller delta may be created by comparing with a younger, rather than an older, snapshot.
  • Inserting the snapshot data into the existing snapshot sequence at the destination system includes removing entries from the snapshot index of an existing snapshot relative to the snapshot data ("refactoring," as will be described in further detail below), adding new snapshot data (e.g., an index) to the snapshot data sequence at the destination system to represent at the destination system the point-in-time data state associated with the selected snapshot, and/or changing the dependencies of the snapshot indices at the snapshot data sequence at the destination system to accommodate the addition of the new snapshot index.
  • FIGS. 9A and 9B are diagrams showing an example of replicating the snapshot at time t2 from a source system to a destination system.
  • The snapshot sequence at the source system includes the snapshot at time t2 linking to the snapshot at time t1.
  • The snapshot at time t2 is represented by index 904 at the source system.
  • The snapshot sequence at the destination system, prior to the snapshot at time t2 being replicated, includes the snapshot at time t3 linking to the snapshot at time t1.
  • The snapshot at time t3 is represented by index 902 at the destination system.
  • Snapshot at time t3 index 902 contained all the changes up to and including time t3 and after t1, which includes a mapping of offset 1 to data value A (stored at time t3) and a mapping of offset 3 to data value C (stored at time t2).
  • To replicate the snapshot at time t2 at the destination system, an index associated with the snapshot at time t2 would need to be "spliced" in between the snapshot at time t3 and the snapshot at time t1 at the destination system.
  • "Splicing" is the process by which a snapshot is inserted in a sequence of snapshots.
  • A snapshot can be spliced as an intermediate snapshot in between two existing snapshots of a sequence, as the youngest snapshot in the sequence, or as the oldest snapshot in the sequence.
  • Splicing includes changing the physical dependencies between snapshots such that a younger existing snapshot that becomes adjacent to the spliced snapshot in the sequence is caused to depend on (e.g., link to, point to, and/or otherwise reference) the spliced snapshot.
  • Splicing also includes causing the spliced snapshot to depend on an older existing snapshot that becomes adjacent to the spliced snapshot in the sequence.
  • The source can send the delta between the snapshot at time t2 at the source system and the snapshot at time t1 at the source system.
  • This delta between the snapshot at time t2 and the snapshot at time t1 at the source system may be represented by index 904 of FIG. 9B.
  • Given that index 904 already contains only the changes since the snapshot at time t1 was generated, the delta between the snapshot at time t2 at the source system and the snapshot at time t1 at the source system is the same as the index that is used to represent the snapshot at time t2 at the source system.
  • The delta between the snapshot at time t2 and the snapshot at time t1 at the source system, as represented by index 904, is sent from the source system to the destination system and spliced into the existing snapshot sequence, in between the snapshot at time t3 and the snapshot at time t1, to represent the snapshot at time t2 at the destination system.
  • Because snapshot at time t3 index 902 at the destination system had contained all the changes up to and including time t3 and after t1, after the insertion of index 904 at the destination system, the redundant entries between index 902 representing the snapshot at time t3 and index 904 representing the snapshot at time t2 need to be removed from index 902 at the destination system.
  • Replication as described herein can take advantage of a snapshot that is already present on the destination system by "refactoring" the replicated snapshot data at the destination.
  • "Refactoring" is the process by which redundant metadata entries are removed from either the younger or the older snapshot index between the replicated snapshot and an adjacent existing snapshot of the destination system. Redundant entries are often created when a snapshot is replicated and spliced into an existing snapshot sequence at a different system: an adjacent existing younger or older snapshot will sometimes contain some of the same entries as the replicated snapshot, making the shared entries in the index of the replicated snapshot or its adjacent existing snapshot redundant.
  • As shown in FIG. 9B, after the splice and refactoring, the snapshot sequence includes the snapshot at time t3 (which has been modified to link to the snapshot at time t2), the snapshot at time t2 (which has been modified to link to the snapshot at time t1), and the snapshot at time t1.
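The FIG. 9A/9B splice and refactor can be sketched as follows, assuming a dict-per-index representation; the contents of the snapshot at time t1 are not material here and are left empty:

```python
# Minimal snapshot index: entries plus a link to the next older index.
class Snap:
    def __init__(self, entries, older=None):
        self.entries = dict(entries)  # logical offset -> data value
        self.older = older

# Destination before replication (FIG. 9A): t3 (index 902) links to t1.
t1 = Snap({})                           # contents of t1 not material here
t3 = Snap({1: 'A', 3: 'C'}, older=t1)   # index 902: all changes after t1

# Replicated delta for t2 (index 904): the changes between t1 and t2.
t2 = Snap({3: 'C'})                     # index 904

# Splice: t3 now depends on t2, and t2 depends on t1.
t2.older = t1
t3.older = t2

# Refactor: entries of index 902 that index 904 already carries are redundant.
for offset, value in list(t3.entries.items()):
    if t2.entries.get(offset) == value:
        del t3.entries[offset]

assert t3.entries == {1: 'A'}           # only the change made at time t3 remains
assert t3.older is t2 and t2.older is t1
```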
  • Replication of a snapshot to a destination system can modify the physical dependencies among snapshots of a sequence at the destination system.
  • FIGS. 10A and 10B are diagrams showing an example of replicating the snapshot at time t3 from a source system to a destination system.
  • The snapshot sequence at the source system includes the snapshot at time t3 linking to the snapshot at time t1.
  • The snapshot sequence at the destination system, prior to the snapshot at time t3 being replicated, includes the snapshot at time t2 linking to the snapshot at time t1.
  • The snapshot at time t3 is represented by index 1002 at the source system.
  • Snapshot at time t3 index 1002 contained all the changes up to and including time t3 and after t1, which includes a mapping of offset 1 to data value A (stored at time t3) and a mapping of offset 3 to data value C (stored at time t2).
  • To replicate the snapshot at time t3 at the destination system, an index associated with the snapshot at time t3 would need to be "spliced" to link to the snapshot at time t2 at the destination system.
  • Ideally, the source system would generate a delta between the snapshot at time t3 and the snapshot at time t2, but the source does not have the snapshot at time t2. In this case, the source can generate a delta relative to the snapshot at time t1, the most recent snapshot older than the snapshot at time t3 that is stored at both the source and destination systems.
  • This delta between the snapshot at time t3 and the snapshot at time t1 may be represented by index 1002 of FIG. 10B. (Given that index 1002 associated with the snapshot at time t3 at the source system already contains only the changes since the snapshot at time t1 was generated, the delta between the snapshot at time t3 at the source system and the snapshot at time t1 at the source system is therefore the same as the index that is used to represent the snapshot at time t3 at the source system.)
  • The delta is spliced to point to the snapshot at time t2 at the destination system and then refactored to create index 1004 to represent the snapshot at time t3 at the destination system by eliminating entries from the delta comprising index 1002 that are common with an existing older snapshot at the destination system, the snapshot at time t2, which is represented by index 1006.
  • The entry associated with a mapping of offset 3 to data value C is removed from the delta comprising index 1002 to create index 1004 to represent the snapshot at time t3, because the same entry is already present in index 1006 representing the snapshot at time t2.
  • The refactoring of the replicated snapshot at time t3 (i.e., the delta comprising index 1002) at the destination system can also be done as the delta is received at the destination.
  • Refactoring can be performed before the entire set of snapshot data (the delta), including the logical to physical offset mappings and the underlying data to which they map, is completely sent from the source to the destination. For example, referring to FIG. 10B, when the mapping of offset 3 to data value C in the delta is found to match an entry already present at the destination, the two offset entries are determined to be redundant, and therefore, the entry is excluded from index 1004 that is used to represent the snapshot at time t3 at the destination system and the underlying data pointed to by the redundant offset is also not sent from the source system.
  • The replication need not be completed before refactoring is performed.
  • Alternatively, the destination system could send the source system a temporary copy of the snapshot at time t2 (e.g., as a delta between the snapshot at time t2 and the snapshot at time t1) that the source system can use to generate a delta between the snapshot at time t3 and the snapshot at time t2. Then, the delta between the snapshot at time t3 and the snapshot at time t2 can be sent from the source system to be spliced to point to the snapshot at time t2 at the destination system.
  • FIGS. 11A and 11B are diagrams showing another example of replicating the snapshot at time t3 from a source system to a destination system.
  • The snapshots at times t1, t2, and t3 were created sequentially as a part of the same snapshot sequence.
  • A "common snapshot" refers to a snapshot, stored at both the source and destination systems, against which a delta is generated at the source system and against which the delta will be spliced at the destination system.
  • Prior to the snapshot at time t3 being replicated, the source has only the snapshot at time t3 and the snapshot at time t1, while the destination has only the snapshot at time t2.
  • As such, prior to the snapshot at time t3 being replicated, the source and the destination systems do not have the snapshot at time t2 as a common snapshot.
  • The snapshot at time t3 is represented by index 1102 at the source system.
  • Index 1102 at the source system contained all the changes up to and including time t3 and after t1, which includes a mapping of offset 2 to data value B (stored at time t3) and a mapping of offset 4 to data value D (stored at time t2).
  • To replicate the snapshot at time t3 at the destination system, an index associated with the snapshot at time t3 would need to be "spliced" to link to the snapshot at time t2 at the destination system.
  • Snapshot at time t3 index 1102 at the source system has not been refactored with respect to the index of the snapshot at time t1 at the source system, because both indices include a mapping of offset 2 to data value B.
  • The same data values associated with the same offset at adjacent snapshot indices at the source system may be preserved and used in generating a delta at the source system, as will be described further below.
  • It can be determined, from the identifying information stored by either or both of the source system and the destination system and/or a third system (e.g., a snapshot replication system), which point-in-times (e.g., expanded states) already have snapshots stored at the destination system and which order in the sequence they would occupy relative to the selected snapshot.
  • The source generates a delta of the snapshot at time t3 relative to the snapshot at time t1.
  • This delta between the snapshot at time t3 and the snapshot at time t1 may be represented by index 1102 of FIG. 11B.
  • Given that index 1102 already contains only the changes since the snapshot at time t1 was generated, the delta between the snapshot at time t3 at the source system and the snapshot at time t1 at the source system is the same as the index that is used to represent the snapshot at time t3 at the source system.
  • The delta must contain all "offsets" that were modified between the snapshots at times t3 and t1 even if the data values are the same.
  • For example, offset 2 was modified between when the snapshots at times t1 and t3 were generated, even though the snapshots at times t1 and t3 store the same data value of B for offset 2.
  • As such, index 1102, associated with the delta between the snapshot at time t3 at the source system and the snapshot at time t1 at the source system, includes offset 2 even though the data values are the same at offset 2 in both snapshots.
  • In some embodiments, however, the delta may exclude data values that are the same between the two snapshots even if the corresponding offsets were modified in the younger snapshot.
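The FIG. 11 delta can be sketched as follows. Per the text, index 1102 includes offset 2 (rewritten at t3 with the same value B that t1 holds) and offset 4 (written at t2); only the entries relevant here are shown for each snapshot:

```python
# Per-snapshot entries at the source (logical offset -> data value).
t1_entries = {2: 'B'}   # t1's index also maps offset 2 to B
t2_entries = {4: 'D'}   # stored at time t2
t3_entries = {2: 'B'}   # offset 2 rewritten at t3 with the same value

# Delta of t3 relative to t1: every offset modified after t1 and up to t3,
# regardless of whether the data value actually changed.
delta_t3_vs_t1 = {**t2_entries, **t3_entries}   # index 1102
assert delta_t3_vs_t1 == {2: 'B', 4: 'D'}
# Offset 2 is included even though t1 stores the same value B there,
# because the offset was modified between t1 and t3.
```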
  • The delta is spliced to point to the snapshot at time t2 at the destination system and then refactored to represent the snapshot at time t3 at the destination system by eliminating entries from the delta comprising index 1102 that are common with the snapshot at time t2, which is represented by index 1104 at the destination system.
  • The entry associated with a mapping of offset 4 to data value D is removed from index 1102 representing the snapshot at time t3, because the same entry is already present in index 1104 representing the snapshot at time t2 at the destination system.
  • The refactoring may be performed prior to the completion of the replication of the snapshot at time t3 at the destination system.
  • FIGS. 12A and 12B are diagrams showing another example of replicating the snapshot at time t4 from a source system to a destination system.
  • The snapshots at times t1, t2, t3, and t4 were created sequentially as a part of the same snapshot sequence.
  • Prior to the snapshot at time t4 being replicated, the source has the snapshots at times t4, t3, t2, and t1, while the destination has only the snapshot at time t2 and the snapshot at time t1.
  • The source can send the delta of the snapshot at time t4 at the source system relative to the snapshot at time t2 at the source system.
  • Prior to generating the delta of the snapshot at time t4 relative to the snapshot at time t2, the entries of the snapshot at time t3 are first merged into the snapshot at time t4. For example, in generating the delta of the snapshot at time t4 relative to the snapshot at time t2, the entries of the snapshot at time t3 are first logically merged into the snapshot at time t4 (while not actually deleting the index of the snapshot at time t3 from the source system) and then the delta is generated between the index of the snapshot at time t4 merged with the offsets of the snapshot at time t3 and the snapshot at time t2.
  • This delta between the snapshot at time t4 (with the merged entries of the snapshot at time t3) and the snapshot at time t2 may be represented by index 1202 of FIG. 12B.
  • Index 1202 representing the delta can be used directly as the snapshot at time t4 index at the destination system.
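The merge-then-delta step can be sketched as follows. The per-snapshot entries below are assumptions chosen to be consistent with the expanded state stated for the snapshot at time t4 ({1: A, 2: E, 3: C, 4: D}); only the merge and delta logic itself follows the text:

```python
# Assumed per-snapshot entries at the source (logical offset -> data value).
t1 = {1: 'A'}
t2 = {4: 'D'}
t3 = {3: 'C'}
t4 = {2: 'E'}

# To send a delta of t4 relative to t2 (the most recent snapshot the
# destination also has), the entries of the skipped snapshot t3 are first
# logically merged into t4; t3's own index is not deleted at the source.
merged = {**t3, **t4}   # entries of t4 win on any overlapping offset

# The merged result is the delta of t4 relative to t2: every offset
# modified after t2 and up to t4, with the value visible at time t4.
index_1202 = merged
assert index_1202 == {2: 'E', 3: 'C'}

# Sanity check: applying the chain oldest-to-youngest reproduces the
# expanded state of t4 stated in the text.
expanded_t4 = {**t1, **t2, **t3, **t4}
assert expanded_t4 == {1: 'A', 2: 'E', 3: 'C', 4: 'D'}
```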
  • Alternatively, instead of the source sending the delta of the snapshot at time t4 relative to the snapshot at time t2 at the destination system, the source could generate the expanded state of data at the snapshot at time t4.
  • An index representing the expanded state of data at the snapshot at time t4 would include mappings of offset 1 to data value A, offset 2 to data value E, offset 3 to data value C, and offset 4 to data value D.
  • The index representing the expanded state of data at the snapshot at time t4 would be sent to the destination.
  • The index representing the expanded state of data at the snapshot at time t4 would then be refactored to remove entries that are redundant with the snapshot at time t2 at the destination system and the snapshot at time t1 at the destination system.
  • The resulting index used to represent the snapshot at time t4 at the destination system would still be index 1202 of FIG. 12B.
  • FIG. 13 is a flow diagram showing an example of a process of refactoring a younger snapshot index relative to an older snapshot index.
  • Process 1300 is implemented at first storage system 602, second storage system 606, or snapshot replication system 608 of system 600 of FIG. 6.
  • In some embodiments, process 1300 is implemented after a selected snapshot has been completely replicated at a destination system.
  • In other embodiments, process 1300 is implemented before a selected snapshot has been completely replicated at a destination system (e.g., process 1300 can be implemented at least partially concurrently with the replication of snapshot data at the destination system).
  • In process 1300, the younger snapshot index between the replicated snapshot and an adjacent existing snapshot at the destination system is refactored to remove redundant entries.
  • The replicated snapshot refers to the delta data that is to be, or was, sent from the source system.
  • The "younger snapshot index" described in process 1300 below refers to the relatively younger of the replicated snapshot index and an adjacent existing snapshot index at the destination system, and the "older snapshot index" refers to the relatively older of the two.
  • In the example of FIGS. 9A and 9B, the existing snapshot index at time t3 at the destination was the younger snapshot index that was refactored relative to the replicated snapshot index at time t2.
  • In the example of FIGS. 10A and 10B, the replicated snapshot index at time t3 was the younger snapshot index that was refactored relative to the existing snapshot index at time t2 at the destination.
  • In some cases, the replicated snapshot may be refactored with respect to both an adjacent existing older snapshot and an adjacent existing younger snapshot at the destination. Therefore, process 1300 may be applied twice in splicing a snapshot into an existing snapshot sequence at the destination: a first time where the replicated snapshot comprises the "younger snapshot index" and an adjacent existing older snapshot at the destination comprises the "older snapshot index," and a second time where the replicated snapshot comprises the "older snapshot index" and an adjacent existing younger snapshot at the destination comprises the "younger snapshot index."
  • A first fingerprint corresponding to an offset associated with a younger snapshot index is determined.
  • A fingerprint is determined based on the data value mapped to by a logical offset that is included in the younger snapshot index.
  • The data value corresponding to the offset is read from the younger snapshot index, and the fingerprint of the data value can be determined based on a (e.g., SHA1) hash technique.
  • A second fingerprint corresponding to the offset associated with an older snapshot index is determined.
  • A fingerprint is determined based on the data value mapped to by the same logical offset that is included in the older snapshot index.
  • The data value corresponding to the offset is read from the older snapshot index. This fingerprint of the data value can be determined based on the same technique that was used to obtain the fingerprint in step 1302.
  • In the event that the first fingerprint matches the second fingerprint, the offset is removed from the younger snapshot index.
  • The redundant offset is removed from the younger snapshot index, and the underlying data is deleted from the destination system and/or prevented from being transferred from the source system to the destination system.
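The fingerprint comparison of process 1300 can be sketched as follows; SHA1 follows the example in the text, while the dict-based index layout and the sample values are assumptions made for illustration:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Fingerprint of a data value, e.g., a SHA1 hash (per the text).
    return hashlib.sha1(data).hexdigest()

def refactor(younger: dict, older: dict) -> None:
    """Remove from `younger` (offset -> data bytes) every offset whose data
    value fingerprint matches that of the same offset in `older`."""
    for offset in list(younger):
        if offset in older and fingerprint(younger[offset]) == fingerprint(older[offset]):
            # Redundant: the adjacent older index already carries this data,
            # so the entry (and its underlying data) need not be kept or sent.
            del younger[offset]

younger = {2: b'B', 4: b'D'}
older = {1: b'A', 4: b'D'}
refactor(younger, older)
assert younger == {2: b'B'}  # offset 4 was redundant and removed
```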
  • A snapshot associated with a clone can be replicated from a source system to a destination system in a manner similar to that in which a non-clone snapshot is replicated.
  • A sequence of snapshots associated with a clone is generated from a snapshot of a set of source data also stored at the source system; the source snapshot is referred to as the "shared snapshot." Because the clone is generated from (and therefore depends on) the shared snapshot of the source data, in various embodiments, a clone snapshot is replicated as a delta of the shared snapshot rather than as an expanded state.
  • Replicating a clone snapshot from a source system to a destination system takes into consideration whether the shared snapshot of the source data is already present at the destination system.
  • FIG. 14 is a flow diagram showing an embodiment of a process for performing replication of a selected snapshot associated with a clone from a source system to a destination system.
  • the source system and the destination system of process 1400 can be implemented using first storage system 602 and second storage system 606 of system 600 of FIG. 6, respectively, or second storage system 606 and first storage system 602, respectively.
  • process 1400 is implemented at first storage system 602, second storage system 606, or snapshot replication system 608 of system 600 of FIG. 6.
  • a request to replicate at a destination system a selected snapshot is received, wherein the selected snapshot is associated with a set of clone data, wherein the set of clone data is associated with a shared snapshot of a set of source data.
  • a snapshot associated with a clone is requested to be replicated at a destination system.
  • a set of metadata stored for the clone can be used to identify which particular shared snapshot of which particular source data is the shared snapshot from which the clone was generated.
  • the shared snapshot and its associated source data may be identified by a snapshot global ID, which indicates the system that created the snapshot, the expanded state associated with the snapshot, and also the set of data (e.g., vdisks or files) with which the snapshot is associated.
  • it is determined whether the shared snapshot of the set of source data (i.e., the shared snapshot from which the clone was generated) already exists at the destination system. For example, whether the shared snapshot of the source data is already present at the destination system can be determined from the stored identifying information as described above, e.g., by checking whether the destination system currently stores a snapshot associated with the snapshot global ID of the shared snapshot. In the event that the shared snapshot does not already exist at the destination system, control is transferred to 1406. Otherwise, in the event that the shared snapshot already exists at the destination system, control is transferred to 1408.
  • the shared snapshot of the set of source data is replicated at the destination system. If the shared snapshot of the source data is not already present at the destination system, then the shared snapshot is first replicated at the destination system.
  • a process such as process 800 of FIG. 8 is used to replicate the shared snapshot from the source system to the destination system.
  • the shared snapshot can be replicated as a delta of an existing snapshot (at either the source system or the destination system).
  • the shared snapshot may also be subsequently reused for other replicated clones that link to the shared snapshot.
  • a shared snapshot that is only used by a single clone may be automatically deleted and merged with the next younger snapshot of the clone to save space.
  • a set of metadata associated with the set of clone data is caused to be generated at the destination system. If the clone does not already exist at the destination, then the clone is generated at the destination by at least generating a set of metadata associated with the clone.
  • the set of metadata associated with the clone may include a set of file global IDs associated with the clone, data linking the clone to the shared snapshot at the destination system, and/or a current snapshot index associated with the clone to use to create subsequent snapshots associated with the clone.
  • the file global IDs associated with the clone can be used to identify which snapshots belong to the clone and which do not (e.g., snapshots that belong to the source data from which the clone was generated).
  • the selected snapshot is replicated based at least in part on the shared snapshot at the destination system.
  • the selected clone snapshot can be replicated to the destination system.
  • a process such as process 800 of FIG. 8 is used to replicate the selected clone snapshot from the source system to the destination system.
  • the selected clone snapshot can be replicated as a delta of another snapshot (e.g., the shared snapshot).
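Process 1400 can be sketched in a few lines. The dictionary shapes, field names, and the delta-against-shared-snapshot computation below are illustrative assumptions rather than the patent's actual implementation:

```python
def replicate_clone_snapshot(selected, clone_meta, source, destination):
    """Replicate a clone's snapshot: ensure the shared snapshot is present
    at the destination (1404/1406), create the clone's metadata there if
    needed (1408), then ship the selected snapshot as a delta (1410)."""
    shared = clone_meta["shared_snapshot_global_id"]
    if shared not in destination["snapshots"]:
        # Shared snapshot missing: replicate it first (cf. process 800).
        destination["snapshots"][shared] = dict(source["snapshots"][shared])
    clone_id = clone_meta["file_global_id"]
    if clone_id not in destination["clones"]:
        # Link the new clone metadata to the shared snapshot at the destination.
        destination["clones"][clone_id] = {"shared_snapshot": shared}
    # Send only the offsets whose values differ from the shared snapshot.
    delta = {off: val for off, val in source["snapshots"][selected].items()
             if source["snapshots"][shared].get(off) != val}
    destination["snapshots"][selected] = delta
    return delta
```

A clone snapshot that changed only one offset relative to the shared snapshot thus travels as a one-entry delta rather than as its full expanded state.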
  • FIGS. 15A, 15B, and 15C are diagrams showing an example of replicating a snapshot at time t4 (S4) associated with a clone from a source system to a destination system.
  • the source system includes two snapshot sequences.
  • the first snapshot sequence at the source system is associated with a set of source data with the file global ID of "VM1" and comprises the snapshot at time t3 (S3), the snapshot at time t2 (S2), and the snapshot at time t1 (S1).
  • the second snapshot sequence at the source system is associated with a clone with the file global ID of "Clone VM1" and comprises the snapshot at time t5 (S5) and the snapshot at time t4 (S4).
  • the snapshot at time t2 (S2) is the shared snapshot associated with the snapshot sequence of "Clone VM1."
  • the snapshot sequence at the destination system, prior to the snapshot at time t4 (S4) being replicated, includes the snapshot at time t1 (S1) associated with file global ID "VM1." Because the shared snapshot, the snapshot at time t2 (S2), is not present at the destination system, the snapshot at time t2 (S2) will be replicated at the destination system first.
  • FIG. 15B shows the result of replicating the snapshot at time t2 (S2) at the destination system, which includes splicing the snapshot at time t2 (S2) to point to the snapshot at time t1 (S1) in a snapshot sequence associated with "VM1."
  • FIG. 15B shows the result of generating the metadata associated with clone "Clone VM1" at the destination system as a box with dotted lines labeled "Clone VM1" that links to the snapshot at time t2 (S2) of the snapshot sequence associated with "VM1."
  • the snapshot at time t4 (S4) associated with "Clone VM1" is replicated at the destination system.
  • the snapshot at time t4 (S4) can be replicated at the destination system in a manner similar to replicating a non-clone snapshot.
  • the snapshot at time t4 (S4) can be replicated at the destination system by sending the delta of the snapshot at time t4 (S4) relative to the shared snapshot, the snapshot at time t2 (S2), to the destination system, for example.
  • FIG. 15C shows the result of replicating the snapshot at time t4 (S4) at the destination system, which includes associating the snapshot at time t4 (S4) with file global ID "Clone VM1" and splicing the snapshot at time t4 (S4) to point to the snapshot at time t2 (S2) in a snapshot sequence associated with "VM1."
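The splicing in FIGS. 15A through 15C can be pictured as maintaining, per snapshot, a pointer to the next older snapshot. The toy model below is an assumption for illustration only; it is not a structure from the patent:

```python
# Each snapshot records the next older snapshot it points to (None = oldest).
chain = {"S1": None}   # FIG. 15A: the destination holds only S1 for "VM1"
chain["S2"] = "S1"     # FIG. 15B: shared snapshot S2 spliced to point to S1
chain["S4"] = "S2"     # FIG. 15C: clone snapshot S4 spliced to point to S2

def older_snapshots(chain, snapshot):
    """Walk backwards in time from `snapshot`, as a read operation would."""
    walk = []
    while snapshot is not None:
        walk.append(snapshot)
        snapshot = chain[snapshot]
    return walk
```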


Abstract

Performing replication of snapshots between storage systems is disclosed, including: receiving a request to store at a destination system a snapshot data to represent at the destination system a state of a set of data at a first point-in-time, wherein a first source system snapshot data that represents at a source system the state of the set of data at the first point-in-time depends on a second source system snapshot data that represents at the source system a state of the set of data at a second point-in-time; and determining the snapshot data to represent at the destination system the state of the set of data at the first point-in-time, wherein the snapshot data is determined based on data comprising the first source system snapshot data and a destination system snapshot data that represents at the destination system a state of the set of data at a third point-in-time.

Description

REPLICATION OF SNAPSHOTS AND CLONES
CROSS REFERENCE TO OTHER APPLICATIONS
[0001] This application claims priority to U.S. Provisional Patent Application No.
61/873,241, entitled REPLICATION OF SNAPSHOTS AND CLONES, filed September 3, 2013, which is incorporated herein by reference for all purposes.
BACKGROUND OF THE INVENTION
[0002] In some systems, a snapshot can be represented as a snapshot index that tracks the changes made to a storage system between two given points in time. When replicating a snapshot associated with a state of data at a point-in-time, many conventional approaches "expand" the state of the data at the point-in-time corresponding to the snapshot. The expanded state of the data contains all data values that exist or can be accessed at that point-in-time and is usually much larger than the delta representation, which only contains changes that have been made since the next older snapshot. Transmission to and storage of expanded states of data at a destination system can be inefficient.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] Various embodiments of the invention are disclosed in the following detailed description and the accompanying drawings.
[0004] FIG. 1 is a diagram showing an embodiment of a storage system for the storage of
VMs using virtual machine storage abstractions.
[0005] FIG. 2 is a block diagram illustrating an embodiment of a storage system including data and metadata.
[0006] FIG. 3 is a diagram showing an example of a set of metadata associated with a set of data.
[0007] FIG. 4 is a diagram showing an example of a set of metadata associated with source data and a set of metadata associated with a clone.
[0008] FIG. 5 is a diagram showing an example of snapshots that can be stored at a source system and a destination system.
[0009] FIG. 6 is a diagram showing an embodiment of a system for performing replication of snapshots between storage systems.
[0010] FIG. 7 is a diagram showing an example of how snapshot indices associated with the same expanded data state may differ at different storage systems.
[0011] FIG. 8 is a flow diagram showing an embodiment of a process for performing replication of a selected snapshot from a source system to a destination system.
[0012] FIGS. 9A and 9B are diagrams showing an example of replicating the snapshot at time t2 from a source system to a destination system.
[0013] FIGS. 10A and 10B are diagrams showing an example of replicating the snapshot at time t3 from a source system to a destination system.
[0014] FIGS. 11A and 11B are diagrams showing another example of replicating the snapshot at time t3 from a source system to a destination system.
[0015] FIGS. 12A and 12B are diagrams showing an example of replicating the snapshot at time t4 from a source system to a destination system.
[0016] FIG. 13 is a flow diagram showing an example of a process of refactoring a younger snapshot index relative to an older snapshot index.
[0017] FIG. 14 is a flow diagram showing an embodiment of a process for performing replication of a selected snapshot associated with a clone from a source system to a destination system.
[0018] FIGS. 15A, 15B, and 15C are diagrams showing an example of replicating a snapshot at time t4 (S4) associated with a clone from a source system to a destination system.
DETAILED DESCRIPTION
[0019] The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term 'processor' refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions.
[0020] A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention
encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.
[0021] Embodiments of performing efficient, flexible replication of snapshots and clones between storage systems are described herein. Storage systems for which the replication of snapshots is performed may be located great distances apart from each other. Snapshots and clones allow space-efficient representation of point-in-time copies of data. Snapshots are generally read-only copies, while clones are generally copies that can be read or written to. Typical replication of snapshots and clones at a remote storage system often results in the loss of space efficiency or places restrictions on the subset or order in which snapshots and clones must be replicated.
Embodiments described herein enable replicating snapshots and clones to be performed using a minimal amount of information, represented as changes or deltas, and to be transmitted and stored between the replicating storage systems. Any subset of snapshots and clones may be replicated in any order to any system, while preserving a minimal representation of data and metadata on the storage systems.
[0022] A "snapshot" comprises a point-in-time state of a set of data and in various embodiments, a subsequently generated snapshot includes mappings to data that was modified since the previous snapshot was created. A set of data may be associated with a virtual machine (also sometimes referred to as a "VM"), a virtual disk (also sometimes referred to as a "vdisk"), or a file, for example. In various embodiments, the metadata associated with a set of data (e.g., a VM, a vdisk, or a file) comprises one or more snapshots. In various embodiments, a snapshot associated with a point-in-time state of a set of data is physically represented/stored as an index at a storage system. As used herein, a "snapshot" is sometimes used to refer to a state of a set of data at a particular point-in-time and/or the physical representation (e.g., an index) that represents that state of the set of data at that particular point-in-time at a particular storage system. A "user" performs read operations on a snapshot using "logical offsets," which are mapped to "physical offsets" using the indices associated with the snapshots comprising the set of data. The physical offsets can then be used to read and write data from the underlying physical storage devices. Read operations lookup the logical offset in one or more indices to find the corresponding physical offset, while write operations create new entries or update existing entries in indices. Because each snapshot index includes mappings to data modified since the immediately previously generated (i.e., older) snapshot index, each snapshot index (other than the oldest snapshot index) associated with the set of data may depend on (e.g., point to, link to, and/or otherwise reference) at least a next older snapshot index. As such, snapshots associated with different points-in-time states of the set of data can be represented as a sequence of snapshot indices at a storage system. 
Due to the dependencies among snapshot indices in a sequence, as will be described in further detail below, different storage systems with the same points-in-time snapshots associated with the same set of data may store indices that map somewhat different sets of logical offsets to correspond to their respective sequences of snapshots.
[0023] In various embodiments, a "clone" refers to a copy of an existing set of data (the existing set of data is sometimes referred to as "source data"). In various embodiments, a clone is generated from a snapshot of the source data. In various embodiments, the snapshot of the source data from which a clone is created is referred to as a "shared snapshot." To generate the clone, a new set of metadata is created and data associating the clone's new set of metadata to the source data's set of metadata is stored such that at least some of the snapshot indices associated with the source data are to be shared with the new set of metadata associated with the clone and at least some of the data associated with source data is shared with the clone.
[0024] FIG. 1 is a diagram showing an embodiment of a storage system for the storage of
VMs using virtual machine storage abstractions. In the example shown, system 100 includes server 106, network 104, and storage system 102. In various embodiments, network 104 includes various high-speed data networks and/or telecommunications networks. In some embodiments, storage system 102 communicates with server 106 via network 104. In some embodiments, the file system for the storage of VMs using virtual machine storage abstractions does not include network 104, and storage system 102 is a component of server 106. In some embodiments, server 106 is configured to communicate with one or more storage systems in addition to storage system 102.
[0025] In various embodiments, server 106 runs several VMs. In the example shown, VMs
108, 110, and 112 (and other VMs) are running on server 106. A VM is a software implementation of a physical machine that executes programs like a physical machine. For example, a physical machine (e.g., a computer) may be provisioned to run more than one VM. Each VM may run a different operating system. As such, different operating systems may concurrently run and share the resources of the same physical machine. In various embodiments, a VM may span more than one physical machine and/or may be moved (e.g., migrated) from one physical machine to another. In various embodiments, a VM includes one or more virtual disks (vdisks) and other data related to the specific VM (e.g., configuration files and utility files for implementing functionality, such as snapshots, that are supported by the VM management infrastructure). A vdisk appears to be an ordinary physical disk drive to the guest operating system running on a VM. In various
embodiments, one or more files may be used to store the contents of vdisks. In some embodiments, a VM management infrastructure (e.g., a hypervisor) creates the files that store the contents of the vdisks (e.g., the guest operating system, program files and data files) and the other data associated with the specific VM. For example, the hypervisor may create a set of files in a directory for each specific VM. Examples of files created by the hypervisor store the content of one or more vdisks, the state of the VM's BIOS, information and metadata about snapshots created by the hypervisor, configuration information of the specific VM, etc. In various embodiments, data associated with a particular VM is stored on a storage system as one or more files. In various embodiments, the files are examples of virtual machine storage abstractions. In some embodiments, the respective files associated with (at least) VMs 108, 110, and 112 running on server 106 are stored on storage system 102.
[0026] In various embodiments, storage system 102 is configured to store meta-information identifying which stored data objects, such as files or other virtual machine storage abstractions, are associated with which VM or vdisk. In various embodiments, storage system 102 stores the data of VMs running on server 106 and also stores the metadata that provides mapping or other
identification of which data objects are associated with which specific VMs. In various
embodiments, mapping or identification of specific VMs includes mapping to the files on the storage that are associated with each specific VM. In various embodiments, storage system 102 also stores at least a portion of the files associated with the specific VMs in addition to the mappings to those files. In various embodiments, storage system 102 refers to one or more physical systems and/or associated hardware and/or software components configured to work together to store and manage stored data, such as files or other stored data objects. In some embodiments, a hardware component that is used to (at least in part) implement the storage system may be comprised of either disk or flash, or a combination of disk and flash.
[0027] FIG. 2 is a block diagram illustrating an embodiment of a storage system including data and metadata. In the example shown, storage system 102 includes a network connection 202 and a communication interface 204, such as a network interface card or other interface, which enable the storage system to be connected to and communicate via a network such as network 104 of FIG. 1. The storage system 102 further includes a network file system front end 206 configured to handle NFS requests from virtual machines running on systems such as server 106 of FIG. 1. In various embodiments, the network file system front end is configured to associate NFS requests as received and processed with a corresponding virtual machine and/or vdisk with which the request is associated, for example, using meta-information stored on storage system 102 or elsewhere. The storage system 102 includes a file system 208 configured and optimized to store VM data. In the example shown, metadata 210 is configured to store sets of metadata associated with various sets of data and their associated snapshots and clones. For example, a set of metadata may be associated with a VM, a vdisk, or a file. Storage 212 may comprise at least one tier of storage. In some embodiments, storage 212 may comprise at least two tiers of storage, where the first tier of storage comprises flash or other solid state disk (SSD) and the second tier of storage comprises a hard disk drive (HDD) or other disk storage.
[0028] In various embodiments, a set of metadata stored at metadata 210 includes at least one index that includes mappings to locations in storage 212 at which a set of data (e.g., VM, vdisk, or file) associated with the set of metadata is stored. In some embodiments, a set of metadata stored at metadata 210 includes at least an index that is a snapshot associated with a set of data stored in storage 212. In various embodiments, a set of metadata stored at metadata 210 includes a sequence of one or more snapshot indices associated with a set of data stored in storage 212, where each snapshot index (physically) depends on at least an older (i.e., an earlier generated) snapshot index, if one exists.
[0029] A clone may be generated based on an existing (or source) set of data stored in storage 212. In various embodiments, the clone may be generated using a snapshot of the source set of data in the source data's set of metadata that is stored in metadata 210. In various
embodiments, the snapshot of the source data from which a clone is generated is referred to as a "shared snapshot." A new set of metadata is created for the clone and data associating the clone (and/or the clone's set of metadata) with the set of metadata associated with the (e.g., shared snapshot of the) source data is stored at metadata 210. At least some of the metadata associated with the source data is shared with the clone. In various embodiments, when a received request includes an operation (e.g., read or write) to access (e.g., a current state or a past state of) data from a set of data (e.g., a VM, a vdisk, or a file), the set of metadata associated with that data is retrieved. In the event that the data associated with the request comprises a clone, then in some instances, at least a portion of the set of metadata associated with the source data may be accessed as well.
[0030] FIG. 3 is a diagram showing an example of a set of metadata associated with a set of data. A set of metadata may be associated with a set of data (e.g., a VM, a vdisk, or a file). In the example of FIG. 3, assume that the set of metadata is associated with a file. In the example, the set of metadata includes a current snapshot index, a snapshot at time t2, and a snapshot at time t1. The current snapshot index depends on (e.g., is linked to) the snapshot at time t2 and the snapshot at time t2 depends on (e.g., is linked to) the snapshot at time t1. In the example, data associated with the file may be stored at offsets 1, 2, 3, and 4.
[0031] Metadata may be thought of as the mapping used to translate a logical location (e.g., a logical offset) to a physical location (e.g., a physical offset) of underlying storage for data that a user may have written. In various embodiments, the metadata may be organized as an efficient index data structure such as a hash table or a B-tree. For example, the relationship between the logical offset of a data value, the index, and the physical offset of the data may be described as follows: logical-offset → INDEX → physical-offset.
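As a minimal sketch of the translation just described, a plain dictionary can stand in for the hash table or B-tree; the concrete structure and the sample offsets are assumptions for illustration:

```python
# An index maps logical offsets to physical offsets in underlying storage.
index = {1: 0x1000, 2: 0x2000, 3: 0x3000}

def translate(index, logical_offset):
    """logical-offset -> INDEX -> physical-offset (None if never mapped)."""
    return index.get(logical_offset)
```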
[0032] In various embodiments, each set of metadata includes at least one active index: the
"current snapshot index." The current snapshot index is active in the sense that it can be modified. In some embodiments, the current snapshot index stores all offsets in the file that have been mapped since the previous snapshot was created. A snapshot is typically a read-only file, but the current snapshot index is modifiable until the next prescribed snapshot creation event occurs. For example, a prescribed snapshot creation event may be configured by a user and may comprise the elapse of an interval of time, the detection of a particular event, or a receipt of a user selection to create a new snapshot. Once the next prescribed snapshot creation event is reached, the state of the current snapshot index is preserved to create a new snapshot and a new empty current snapshot index is created. In some embodiments, write operations to the set of data result in the update of the current snapshot index. In some embodiments, read operations of the set of data result in the search of a current snapshot index and subsequently, a search through the sequence of snapshots if the desired data is not found in the current snapshot index, In various embodiments, each index is searched in a prescribed manner.
[0033] In some embodiments, a snapshot of a file is the point-in-time state of the file at the time the snapshot was created. A snapshot of a VM is the collection of file-level snapshots of files that comprise the VM. In some embodiments, at a storage system, a snapshot is represented as an index that stores mappings to the data that was modified after the previous snapshot was created. In other words, in some embodiments, each snapshot only includes the updates to a file (i.e., deltas) for a given time period (since the creation of the previous snapshot). As a result, the snapshot may be represented by a compact space-efficient structure.
[0034] When a snapshot is created, the current snapshot index becomes the index of that snapshot, and a new empty current snapshot index is created in preparation for the next snapshot. Each snapshot is linked to (or otherwise physically dependent on) the next younger and next older snapshot. In some embodiments, the links that go backward in time (i.e., the links to the next older snapshots) are traversed during snapshot and clone read operations.
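The snapshot-creation step in paragraph [0034] might be sketched as follows; the metadata layout (a dict holding a current index plus a youngest-first list of frozen snapshots) is an illustrative assumption:

```python
def create_snapshot(metadata, label):
    """Freeze the current snapshot index as a new read-only snapshot and
    start a fresh, empty current index. Snapshots are kept youngest-first,
    so each one implicitly links to the next older snapshot behind it."""
    metadata["snapshots"].insert(0, (label, dict(metadata["current"])))
    metadata["current"] = {}

meta = {"current": {1: "A"}, "snapshots": []}
create_snapshot(meta, "t1")   # snapshot at time t1 captures offset 1 -> A
meta["current"][2] = "B"      # data B written after t1
create_snapshot(meta, "t2")   # snapshot at time t2 captures only the delta
```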
[0035] Returning to the example of FIG. 3, the current snapshot index is linked to (e.g., points to) the snapshot at time t2 and the snapshot at time t2 is linked to the snapshot at time t1. As shown in the example of FIG. 3, each of the snapshot at time t2 and the snapshot at time t1 is represented by a corresponding index. The snapshot at time t1 can be referred to as being "older" than the snapshot at time t2 and the snapshot at time t2 can be referred to as being "younger" than the snapshot at time t1 because time t1 is earlier than time t2. Because the snapshot at time t2 is linked to the snapshot at time t1, the snapshot at time t2 and the snapshot at time t1 can be referred to as a chain or sequence of snapshots associated with the file. In some embodiments, each snapshot index of the set of metadata associated with the file is associated with a stored "file global ID" that identifies that the sequence of snapshots belongs to the file. Read operations to the current state of the file can be serviced from the current snapshot index and/or the snapshot at time t2 and the snapshot at time t1, while write operations to the file update the current snapshot index. In the example of FIG. 3, data A is written before time t1 at offset 1 and then the snapshot at time t1 is created. The data B is written before time t2 and after time t1 at offset 2 and then the snapshot at time t2 is created. The data value C is written after time t2, at time t3, at offset 3 and tracked in the current snapshot index. For example, if a new data value D (not shown) is to overwrite the data currently at offset 3, data value C, at time t4, then offset 3 of the current snapshot index would be updated to map to data value D.
[0036] In various embodiments, a read operation on a specified snapshot for a logical block offset may proceed in the following manner: First, a lookup of the specified snapshot index is performed for the logical block offset of the read operation.
If a mapping exists, then data is read from the physical device (underlying storage) at the corresponding physical address and returned. Otherwise, if the mapping does not exist within the specified snapshot index, the link to the next older snapshot is traversed and a search of this older snapshot's index is performed. This process continues until a mapping for the logical block offset is found in a snapshot index or the last snapshot in the chain has been examined. For example, assume that a read operation to the set of data requests current data associated with offset 1. First, the current snapshot index of the set of data is searched for a mapping to data associated with offset 1. The mapping is not found in the current snapshot index, so the link (e.g., the stored associating data) from the current snapshot index to the snapshot at time t2 is traversed and a search of the snapshot at time t2 is performed. The mapping is not found in the snapshot at time t2, so the link from the snapshot at time t2 to the next older snapshot, the snapshot at time t1, is traversed and a search of the snapshot at time t1 is performed. The mapping associated with offset 1 is found in the snapshot at time t1, the search ends, and the snapshot at time t1 is used to service the request.
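The read path just described can be sketched as a walk over the chain of indices from youngest to oldest; the list-of-dicts representation below is an assumption for illustration:

```python
def read(logical_offset, indices):
    """Return the first mapping found for `logical_offset`, searching the
    current snapshot index first and then each older snapshot in turn."""
    for index in indices:          # indices ordered youngest-first
        if logical_offset in index:
            return index[logical_offset]
    return None                    # not mapped anywhere in the chain

# Mirroring the FIG. 3 example: offset 1 is mapped only in the snapshot at t1.
current, snap_t2, snap_t1 = {3: "C"}, {2: "B"}, {1: "A"}
```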
[0037] FIG. 4 is a diagram showing an example of a set of metadata associated with source data and a set of metadata associated with a clone. In some embodiments, a clone may be created from an existing snapshot of a set of data. In the example, a snapshot of the source data was first created, then a clone was created from this snapshot. As previously described, in order to reduce metadata and data space consumption, snapshots are represented in a compact format that only stores the changes that have been made to the associated set of data since the previous snapshot was created. The set of metadata associated with the source data (the source metadata) includes a snapshot at time t3, a snapshot at time t2, and a snapshot at time t1. As shown in the example of FIG. 4, each of the snapshot at time t3, the snapshot at time t2, and the snapshot at time t1 is represented by a corresponding index. In the example, the clone is created from the snapshot at time t2 of the source metadata. Therefore, the snapshot at time t2 is now also referred to as a shared snapshot because it is now shared between the source data and its clone. While not shown in the example, one or more other clones besides the one shown may be created from the snapshot at time t2 of the source metadata. In some embodiments, each snapshot has an associated reference count that tracks the total number of clones that have been created from the snapshot. After a clone creation operation has completed, the reference count of the shared snapshot is incremented by the number of new clones that were created from the snapshot. When a clone is deleted, the reference count associated with the shared snapshot from which the clone was created is decremented by one. In some embodiments, the reference count of a shared snapshot is considered when it is determined whether the shared snapshot should be deleted.
For example, a snapshot cannot be deleted if it has a non-zero reference count, thus preserving the data shared by the clones.
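The reference-counting behavior described in paragraph [0037] can be sketched as follows (a minimal Python illustration; all names here are hypothetical):

```python
# Sketch of the clone reference count on a shared snapshot. A snapshot
# with a non-zero reference count is still in use by clones and must
# not be deleted.

class Snapshot:
    def __init__(self, name):
        self.name = name
        self.ref_count = 0  # total number of clones created from this snapshot

def create_clones(shared_snapshot, num_new_clones):
    # After a clone creation operation completes, the reference count is
    # incremented by the number of new clones created from the snapshot.
    shared_snapshot.ref_count += num_new_clones

def delete_clone(shared_snapshot):
    # Deleting a clone decrements the shared snapshot's count by one.
    shared_snapshot.ref_count -= 1

def can_delete(snapshot):
    # A snapshot cannot be deleted while it has a non-zero reference count.
    return snapshot.ref_count == 0

shared = Snapshot("snapshot-t2")
create_clones(shared, 2)
print(can_delete(shared))  # False: two clones still depend on the snapshot
delete_clone(shared)
delete_clone(shared)
print(can_delete(shared))  # True: no clones remain
```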
[0038] In various embodiments, creating clones (e.g., of snapshots of VMs) does not require copying metadata and/or data. Instead, a new set of metadata is created for a new clone. In some embodiments, the new set of metadata created for a new clone may include one or more of the following: a new file global ID, a current snapshot index (not shown in the diagram), an identifier associated with the shared snapshot from which the clone was generated, and an identifier associated with the set of source metadata (e.g., source sequence of snapshots).
Furthermore, information associating each clone with the shared snapshot of the source data is stored. For example, information associating each clone with the shared snapshot of the source data may include the identifier ("snapshot global ID," which will be described in further detail below) that identifies the particular snapshot that is the shared snapshot from the sequence of snapshots associated with the source data. The snapshot itself may be composed of snapshots of data in multiple files. The snapshot metadata in turn identifies the files using the identifier file global ID and the relevant snapshot of the file using the local snapshot ID. The information associating the clone with the shared snapshot may be stored with the clone metadata, the source metadata, and/or elsewhere. For example, the associating data is a pointer or another type of reference that the clone can use to point to the index of the shared snapshot from which the clone was created. This link to the shared snapshot may be traversed during reads of the clone.
[0039] Snapshots may also be generated for a clone in the same manner that snapshots are generated for a non-clone. In the example of FIG. 4, after the clone was created, a snapshot at time t4, which is represented by a corresponding index, was generated (e.g., using a current snapshot index associated with the clone). Because the clone shares each snapshot of the source data including the shared snapshot (the snapshot at time t2 in the example of FIG. 4) and any older snapshots (the snapshot at time t1 in the example of FIG. 4), the clone's snapshot at time t4 includes data (D at logical offset 4) that has been modified since the shared snapshot was created. The clone now includes data value B and data value A (via the pointer back to the shared snapshot of the source data), which it cloned from the source, and also data value D, which was written to the clone after it was created and captured in a snapshot of the clone. Note that the source data is not aware that data D has been written to the clone and/or captured in a snapshot of the clone. [0040] To perform a read of a snapshot of the clone, the index of the snapshot is accessed first. If the desired data is not in the clone's snapshot index, then the clone's snapshots are traversed backwards in time. If one of the clone's snapshot indices includes a mapping for the logical block offset of the requested data, then data is read from the corresponding physical address and returned. However, if the desired data is not in any of the clone's snapshot indices, then the source's snapshots are traversed backwards in time starting from the shared snapshot on which the clone was based (i.e., if the mapping to the requested data is not found in the shared snapshot of the source metadata, then the link to the next older snapshot is traversed and searched, and so forth). In a first example, assume that a read operation to the clone requests the current data associated with offset 4.
First, the only snapshot of the clone, the snapshot at time t4, is searched for a mapping to data associated with offset 4. The mapping associated with offset 4 is found in the clone's snapshot at time t4, the search ends, and the data from the clone's snapshot index is used to service the request. In a second example, assume that a read operation to the clone requests data associated with offset 1. First, the only snapshot of the clone, the snapshot at time t4, is searched for a mapping to data associated with offset 1. The mapping is not found in the only snapshot of the clone, the snapshot at time t4, so the link (e.g., the stored associating data) from the clone's snapshot at time t4 to the shared snapshot is traversed and a search of the shared snapshot, the snapshot at time t2, is performed. The mapping is not found in the shared snapshot, so the link from the shared snapshot to the next older snapshot, the snapshot at time t1, is traversed and a search of the snapshot at time t1 is performed. The mapping associated with offset 1 is found in the snapshot at time t1 of the source data, the search ends, and the snapshot at time t1 is used to service the request. Therefore, the mapping found in the snapshot at time t1 of the source data is used to service the read operation to the clone. As shown in the second example, metadata (e.g., snapshots) may be shared between a non-clone and its clone and therefore, in some instances, read operations to the clone may be serviced by metadata associated with the source data.
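The two worked examples above can be modeled as follows (an illustrative Python sketch assuming dictionary-based indices; because the clone's oldest snapshot links to the shared snapshot of the source, the same backwards traversal crosses from clone metadata into source metadata):

```python
# Sketch of servicing clone reads that may fall back to the source's
# snapshot chain via the shared snapshot.

def lookup(index, offset):
    """Walk a chain of indices backwards in time for the given offset."""
    node = index
    while node is not None:
        if offset in node["entries"]:
            return node["entries"][offset]
        node = node["older"]
    return None

# Source chain: t3 -> t2 (shared snapshot) -> t1
src_t1 = {"entries": {1: "A"}, "older": None}
src_t2 = {"entries": {2: "B"}, "older": src_t1}   # shared snapshot
src_t3 = {"entries": {3: "C"}, "older": src_t2}

# The clone's snapshot at t4 links back to the shared snapshot, not to t3.
clone_t4 = {"entries": {4: "D"}, "older": src_t2}

print(lookup(clone_t4, 4))  # D: found in the clone's own snapshot at t4
print(lookup(clone_t4, 1))  # A: found in source snapshot t1 via the shared snapshot
print(lookup(clone_t4, 3))  # None: data written to the source after t2 is not visible to the clone
```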
[0041] FIG. 5 is a diagram showing an example of snapshots that can be stored at a source system and a destination system. As shown in the example, snapshots (e.g., the indices thereof) with various different dependencies (e.g., links, pointers, or other references) on other snapshots (e.g., the indices thereof) can be stored at the source system and the destination system. Consider the following sequence of snapshots that is also shown at the source system:
[0042] t3 -> t2 -> t1
[0043] Here, the snapshot at time t3 is linked to its next older snapshot at time t2, which is in turn linked to its next older snapshot at time t1. [0044] The snapshot at time t1 contains all changes made to the storage system since the beginning of time up to and including time t1.
[0045] The snapshot at time t2 contains changes made up to and including time t2 but after time t1.
[0046] The snapshot at time t3 contains any changes made up to and including time t3 but after time t2.
[0047] Given this set of changes, the state of the data at time t1, t2, or t3 can be recreated with their corresponding snapshots.
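Recreating an expanded state from delta snapshots can be sketched as follows (an illustrative Python model of paragraphs [0043] through [0047]; the specific data values are hypothetical):

```python
# Sketch of recreating the expanded (point-in-time) state of the data
# from delta snapshots: merge deltas from the oldest snapshot up to the
# requested snapshot, with younger entries overriding older ones.

deltas = {
    "t1": {1: "A"},  # all changes up to and including t1
    "t2": {2: "B"},  # changes after t1, up to and including t2
    "t3": {3: "C"},  # changes after t2, up to and including t3
}
order = ["t1", "t2", "t3"]  # oldest to youngest

def expand(up_to):
    """Recreate the expanded state at the requested point-in-time."""
    state = {}
    for t in order[: order.index(up_to) + 1]:
        state.update(deltas[t])  # younger deltas override older values
    return state

print(expand("t2"))  # {1: 'A', 2: 'B'}
print(expand("t3"))  # {1: 'A', 2: 'B', 3: 'C'}
```

Note that each delta is much smaller than the expanded state it contributes to, which is the motivation for transmitting deltas rather than expanded states.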
[0048] When replicating snapshots, many conventional approaches "expand" the state of the data at each point-in-time. In some embodiments, an expanded state of the data at a point-in-time contains all data values that exist or can be accessed at that point-in-time and is usually much larger than the delta representation at the same point-in-time, which only contains changes that have been made since the next older snapshot was generated. Therefore, these conventional approaches transmit the complete expanded state of the data at time t1, then the expanded state of the data at time t2 and so on, instead of just the deltas. Transmission of expanded states of data results in much more state information being transmitted and stored than if only the deltas were sent. Other conventional systems send only the deltas between states, but require that the deltas be sent in chronological order, in this example, first t1, then t2 and then t3. Yet other conventional systems may impose other significant constraints in either the order or subset in which snapshots and clones may be replicated.
[0049] In particular, due to ordering constraints of conventional systems, it may not be possible to skip t1, or to send t1 after t2, or if t1 is sent after t2, then only after expansion rather than as a delta, or if t2 is sent before t1, then only after expansion rather than as a delta. Similarly, it may not be possible in some conventional systems to delete t2 on the destination and then send t3 from the source as a delta. Some conventional systems may be unable to replicate younger snapshots until t2 is sent again.
[0050] On some conventional systems, it may not be possible to continue replication at all unless there is a common snapshot that can be used to generate deltas for replicating between the source and destination. In the above example, assume that the source has only t3 and t2 and the destination has only t1 (t2 having been deleted from the destination); then it may not be possible to replicate t3 in its minimal form. Re-establishing a common snapshot usually requires resending a snapshot previously deleted on the destination, sometimes in fully expanded form. In this case, t2 may need to be resent to the destination.
[0051] On yet other conventional systems, it may not be possible to continue replication without losing data points. Consider a system where the source system has the snapshot at time t3 and the destination has the snapshot at time t1. Since there is neither a common snapshot nor an incremental delta that the source system can send, it may not be possible to continue replication without deleting either t3 or t1.
[0052] As described above, a clone can be represented as an index that depends on (e.g., links to, points to, or otherwise references) a "shared snapshot" index. In the example shown in FIG. 5, a clone, c1, is created from the snapshot at time t2.
[0053] A snapshot may be shared by any number of clones and clones may themselves have snapshots, which may be shared by other clones. In the example shown in FIG. 5, a clone, c11, is created from the snapshot at time t1 and other clones, c10 and c3, may be created from c11.
[0054] Similar to snapshots, most conventional approaches replicate clones as expanded states or place restrictions on the subset or order in which the clones may be replicated. In particular, once clones are replicated they are no longer represented as deltas from shared snapshots, and therefore use much more space for data and metadata at the replication destination.
[0055] Considering the full set of use cases and operations involving snapshots, cloning, and replication is complex. One can, for example, replicate a set of snapshots and clones, clone from the replicated copies, create snapshots and clones from the new clones and then replicate these new snapshots and clones back to the original storage system. In particular, sometimes there may be constraints on replicating snapshots that originated at the originating system, or that were subsequently derived from such snapshots, back to that system. At other times, replicating the original or derived snapshots back to the originating system may result in loss of minimal representation.
[0056] Additional complexities arise if the conventional system supports the following two features:
[0057] The replication of snapshots and clones in arbitrary subsets or order, rather than in the strict order in which they were created.
[0058] The deletion of snapshots and clones in arbitrary order, irrespective of their replication status. [0059] Preserving the minimal representation for replication and storage of snapshots and clones under such conditions is extremely difficult.
[0060] Embodiments described herein enable replicating snapshots and clones to be performed using a minimal amount of information, represented as changes or deltas, and to be transmitted and stored between the replicating storage systems. In the following, without loss of generality, the term "snapshot" refers collectively to non-clone snapshots and clone snapshots. However, the term "clone" specifically refers to clone snapshots rather than non-clone snapshots.
[0061] FIG. 6 is a diagram showing an embodiment of a system for performing replication of snapshots between storage systems. As shown in the example, system 600 includes first storage system 602, network 604, second storage system 606, and snapshot replication system 608.
Network 604 comprises high-speed data networks and/or telecommunications networks. In some embodiments, each of first storage system 602 and second storage system 606 is implemented with storage system 102 of system 100 of FIG. 1. First storage system 602 and second storage system 606 may communicate to each other and to snapshot replication system 608 over network 604.
[0062] First storage system 602 and second storage system 606 may each store a corresponding sequence of snapshots associated with the same set of data. For example, the set of data may be associated with a VM, a vdisk, or a file. In one example scenario in which both first storage system 602 and second storage system 606 would maintain snapshots associated with the same set of data, first storage system 602 is associated with a production system and is configured to generate a snapshot for the set of data every configured interval. First storage system 602 would store a copy of each newly created snapshot and send a copy of the snapshot to second storage system 606, a backup system.
[0063] However, while both first storage system 602 and second storage system 606 may each store a corresponding sequence of snapshot indices associated with the same set of data, each of first storage system 602 and second storage system 606 may not necessarily store snapshot indices that represent the same points-in-time data states for the same set of data. For example, in the same scenario that is described above, first storage system 602, the production system, may have a shorter retention policy for at least some of its stored snapshots (e.g., because storage space is more scarce at the production system) than second storage system 606. As such, first storage system 602, the production system, may merge the index of a to-be-deleted snapshot into the index of an adjacent snapshot in its stored sequence of snapshots and then delete the index of the to-be-deleted snapshot. After deleting a snapshot index from the sequence of snapshot indices, the stored physical dependencies associated with the snapshot indices previously adjacent to the deleted snapshot index can be changed at first storage system 602 to accommodate the removal of that snapshot index. While first storage system 602, the production system, may have deleted a snapshot index from its sequence (e.g., to enforce a retention policy associated with that snapshot), second storage system 606, the backup system, may still maintain a copy of the snapshot.
Sometime after first storage system 602, the production system, has deleted the snapshot from its sequence of snapshots, it may be desired to replicate the deleted snapshot back at first storage system 602. For example, a snapshot may be desired to be replicated at a storage system in the event of a disaster recovery or a desired reversion to an earlier point-in-time data state associated with the snapshot. As such, a set of snapshot data (e.g., an index) that can be used to represent the point-in-time data state associated with the desired snapshot at first storage system 602 can be sent from second storage system 606, the backup system, to first storage system 602, the production system. In some embodiments, the set of snapshot data comprises a delta between two snapshot indices stored at the source system (second storage system 606 in this example). The set of snapshot data sent by second storage system 606 can be used to generate a snapshot index to represent the point-in-time data state associated with the desired snapshot and then the snapshot index can be inserted into the sequence of snapshot indices stored at first storage system 602, as will be described in further detail below. Likewise, a desired snapshot can be replicated from first storage system 602 and inserted into the sequence of snapshots stored at second storage system 606.
[0064] When replicating a snapshot from one system to another, the ordering, that is the younger/older relationship between snapshots maintained by different systems, may be deduced using identifying information associated with each snapshot. Examples of identifying information associated with each snapshot include the following:
[0065] A "snapshot global ID" associated with each snapshot comprises a combination
(e.g., a concatenation) of the following two identifiers: 1) a "creator file system ID" and 2) a "creator snapshot ID." The "creator file system ID" comprises a (e.g., 64-bit) global unique identifier of the storage system that created the snapshot. In some embodiments, while several different storage systems can store snapshots associated with a set of data, only the storage system that created a clone from the set of data can generate new snapshots for the clone. However, other storage systems may create new clones based on the snapshots of the aforementioned clone and create new snapshots for the new clones. In FIG. 6, in some embodiments, while both first storage system 602 and second storage system 606 may each store a corresponding sequence of snapshots associated with the same set of data, assume that in this example, only first storage system 602 may generate new snapshots for that set of data. The "creator snapshot ID" comprises an (e.g., 64-bit) identifier that is determined from a stored counter that is incremented each time a new snapshot is created on the storage system. As such, a younger snapshot will also have a higher creator snapshot ID than an older snapshot created by the same storage system. The "snapshot global ID" of a snapshot uniquely identifies the "expanded" state of the set of data at the point-in-time associated with the snapshot. That is, two "copies" of a snapshot on two different storage systems with the same snapshot global ID correspond to the same expanded state. However, snapshots associated with the same snapshot global ID may be represented using different physical representations (e.g., indices) at different storage systems depending on the other snapshots of the same sequence that are stored at those storage systems.
For example, if the storage system with the creator file system ID of "PRODUCTION1" creates a sequence of snapshots with respective creator snapshot IDs of "S1," "S2," and "S3," then the snapshot global IDs of the sequence of snapshots would be "PRODUCTION1-S1," "PRODUCTION1-S2," and "PRODUCTION1-S3." The use of snapshot global IDs allows any storage system or snapshot replication system 608 to determine the ordering relationship of the three snapshots even if one or more of the snapshots are deleted. The management of snapshot global IDs allows each storage system to build a graph of "next younger" relationships that can be used to replicate and store snapshots efficiently as deltas rather than expanded states.
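The construction and ordering of snapshot global IDs can be sketched as follows (a minimal Python illustration; the identifier formats and widths are assumptions):

```python
# Sketch of snapshot global IDs built from a creator file system ID and
# a per-system counter (the creator snapshot ID). IDs created by the
# same system remain orderable even after intermediate snapshots are
# deleted from every system.

class StorageSystem:
    def __init__(self, file_system_id):
        self.file_system_id = file_system_id
        self.counter = 0  # incremented for every snapshot this system creates

    def create_snapshot(self):
        self.counter += 1
        # The snapshot global ID combines the two identifiers.
        return (self.file_system_id, self.counter)

prod = StorageSystem("PRODUCTION1")
ids = [prod.create_snapshot() for _ in range(3)]
print(ids)  # [('PRODUCTION1', 1), ('PRODUCTION1', 2), ('PRODUCTION1', 3)]

# Even if the middle snapshot is deleted everywhere, the survivors'
# relative age is still recoverable from their creator snapshot IDs.
remaining = [ids[0], ids[2]]
print(sorted(remaining) == remaining)  # True: older ID sorts before younger
```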
[0066] A "snapshot file global ID" associated with each snapshot comprises a combination
(e.g., a concatenation) of the following three identifiers: 1) a "creator file system ID," 2) a "creator snapshot ID," and 3) a "file global ID." The "creator file system ID" and the "creator snapshot ID" for the "snapshot file global ID" are the same as for the "snapshot global ID," as described above. The "file global ID" comprises an identifier of the set of data with which the snapshot is associated. For example, the file global ID can identify the particular file or vdisk or a cloned file of any of the aforementioned sets of data with which the snapshot is associated. The file global ID can be used to determine which snapshots belong to which sequence of snapshots and/or set of metadata. For example, the file global IDs of two snapshots associated with the same creator file system ID can help determine that the two snapshots belong to two different sequences of snapshots, one associated with a particular VM and the other with a clone of that VM.
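Grouping snapshots into sequences by file global ID can be sketched as follows (an illustrative Python model in which a snapshot file global ID is a triple; the example identifiers are hypothetical):

```python
# Sketch of using file global IDs to sort snapshots into sequences.
# Each snapshot file global ID is modeled as a triple of
# (creator file system ID, creator snapshot ID, file global ID).

from collections import defaultdict

snapshot_file_global_ids = [
    ("PRODUCTION1", 1, "vm-7"),
    ("PRODUCTION1", 2, "vm-7"),
    ("PRODUCTION1", 3, "vm-7-clone"),  # same creator, different set of data
]

sequences = defaultdict(list)
for fs_id, snap_id, file_id in snapshot_file_global_ids:
    # Snapshots with the same file global ID belong to the same sequence.
    sequences[file_id].append((fs_id, snap_id))

print(sorted(sequences))  # ['vm-7', 'vm-7-clone']: two distinct sequences
```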
[0067] The identifying information described above associated with each snapshot that is stored at each of first storage system 602 and second storage system 606 may be stored by one or more of first storage system 602, second storage system 606, and snapshot replication system 608. The identifying information can be used by at least one of first storage system 602, second storage system 606, and snapshot replication system 608 to determine which snapshots are stored at which systems and also deduce the ordering of snapshots. Therefore, such identifying information can be used to perform replication of snapshots from first storage system 602 to second storage system 606, and vice versa, regardless of which storage system had created the snapshots and/or the order in which snapshots are replicated. Such identifying information can also be used to preserve the younger/older relationship between snapshots in a sequence at a system when replicating a snapshot into the sequence stored at that system. Snapshot replication system 608 may not necessarily store snapshots but it may be configured to determine the set of snapshot data that should be sent from a source system to a destination system in order to replicate a desired snapshot at the destination system.
[0068] To guarantee that snapshots are always replicated in the most efficient manner, the identifying information may be pruned only when a snapshot has been deleted from all systems. In some embodiments, however, some information may be pruned when it is deemed that the benefits of keeping the information are low.
[0069] In various examples below, for purposes of illustration, it is assumed that all the snapshots are associated with the same creator file system ID and file global ID (unless otherwise noted), and therefore the expanded state (i.e., the point-in-time data state) that is accessible from a snapshot is uniquely denoted by the creator snapshot ID, which may be written in the format of "snapshot at time t<creator snapshot ID>" or "snapshot at t<creator snapshot ID>."
[0070] FIG. 7 is a diagram showing an example of how snapshot indices associated with the same expanded data state may differ at different storage systems. In the example, the first system stores a sequence of three snapshots associated with a particular set of data (e.g., a VM, a vdisk, or a file) and the second system stores a sequence of two snapshots associated with the same set of data. Each snapshot is represented as an index that maps logical offsets (e.g., 1, 2, 3, or 4) to data values stored at corresponding physical offsets.
[0071] The expanded states (i.e., point-in-time states) of the set of data that can be accessed from the first system include the snapshot at time t1, the snapshot at time t2, and the snapshot at time t3. As shown in the example of FIG. 7, each of the respective indices associated with the snapshot at time t1, the snapshot at time t2, and the snapshot at time t3 at the first system links back to (e.g., physically depends on) the next older snapshot (e.g., the snapshot at time t3 links to snapshot at time t2 and snapshot at time t2 links to snapshot at time t1). [0072] The expanded states (i.e., point-in-time states) of the set of data that can be accessed from the second system include the snapshot at time t1 and the snapshot at time t3. For example, the second system may have previously stored a copy of the snapshot at time t2 but then
determined to delete the snapshot at time t2. Prior to deleting its copy of the snapshot at time t2, the second system merged the data from the index associated with the snapshot at time t2 into index 704 associated with its snapshot at time t3. As such, index 704 associated with the snapshot at time t3 stored at the second system includes the data (e.g., a mapping of offset 2 to data value B) from the snapshot at time t2 and is also modified to link to the index associated with the snapshot at time t1 at the second system.
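The merge-on-delete behavior performed by the second system can be sketched as follows (an illustrative Python model; the dictionary-based indices and entry values are assumptions for illustration):

```python
# Sketch of deleting an intermediate snapshot by merging its index into
# the adjacent younger snapshot's index: entries the younger index does
# not already contain are copied in, and the younger snapshot is
# relinked to the deleted snapshot's older link.

def merge_and_delete(younger, doomed):
    for offset, value in doomed["entries"].items():
        # The younger snapshot's own entries are newer and take precedence.
        younger["entries"].setdefault(offset, value)
    younger["older"] = doomed["older"]  # relink past the deleted snapshot

t1 = {"entries": {1: "A"}, "older": None}
t2 = {"entries": {2: "B"}, "older": t1}
t3 = {"entries": {3: "C"}, "older": t2}

merge_and_delete(t3, t2)
print(t3["entries"])      # {3: 'C', 2: 'B'}: t2's change folded into t3
print(t3["older"] is t1)  # True: t3 now links directly to t1
```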
[0073] Note that while both the first and second system store the expanded state of the set of data accessible from the snapshot at time t3, the respective snapshot indices used to represent the snapshot at time t3 at each of the two systems are different due to the presence of different snapshots at each system. The snapshot at time t3 stored at the first system is represented by index 702 and the snapshot at time t3 stored at the second system is represented by index 704. Snapshot at time t3 index 702 at the first system, which includes only a mapping of offset 3 to data value C, differs from snapshot at time t3 index 704 at the second system, which includes a mapping of offset 2 to data value B and a mapping of offset 3 to data value C, because the snapshot at time t3 index 702 links to an index associated with the snapshot at time t2 while snapshot at time t3 index 704 links to an index associated with the snapshot at time t1. As such, due to the presence of different snapshots (e.g., and therefore, different dependencies between the snapshots) associated with the same set of data at different systems, snapshots associated with the same expanded state may be represented using different indices at different systems.
[0074] As will be described in various examples below, after a snapshot is replicated at a destination system and inserted into an existing sequence of snapshots at the destination system, the source system and the destination system will both store a copy of that snapshot, but each system may store a different physical representation (e.g., index) to represent that snapshot, depending on the other snapshots that the system stores.
[0075] FIG. 8 is a flow diagram showing an embodiment of a process for performing replication of a selected snapshot from a source system to a destination system. In some
embodiments, the source system and the destination system of process 800 can be implemented using first storage system 602 and second storage system 606 of system 600 of FIG. 6, respectively, or second storage system 606 and first storage system 602, respectively. In some embodiments, process 800 is implemented at first storage system 602, second storage system 606, or snapshot replication system 608 of system 600 of FIG. 6.
[0076] At 802, a request to store at a destination system a snapshot data to represent at the destination system a state of a set of data at a first point-in-time is received, wherein a first source system snapshot data that represents at a source system the state of the set of data at the first point-in-time depends on a second source system snapshot data that represents at the source system a state of the set of data at a second point-in-time.
[0077] For example, a snapshot stored at a source system that is identified by its corresponding point-in-time (i.e., expanded state) (e.g., associated with identifying information such as the snapshot global ID or the snapshot file global ID) is selected (e.g., by a system administrator and/or computer program) to be replicated at a destination system. The snapshot data (e.g., index or other type of physical representation) of the selected snapshot at the source system depends on the snapshot data (e.g., index or other type of physical representation) of a next older snapshot in a sequence of snapshot data (e.g., indices) stored at the source system. The snapshot index of the selected snapshot being dependent on the snapshot index of the next older snapshot describes that the data stored in the snapshot index of the selected snapshot comprises
new/modified data relative to the time at which the snapshot index of the next older snapshot was generated. In some embodiments, the selected snapshot does not need to be replicated in a particular chronological order. Put another way, the selected snapshot does not need to be replicated only after the next older snapshot was replicated.
[0078] At 804, the snapshot data to represent at the destination system the state of the set of data at the first point-in-time is determined, wherein the snapshot data is determined based at least in part on data comprising the first source system snapshot data and a destination system snapshot data that represents at the destination system a state of the set of data at a third point-in-time.
[0079] The point-in-time (i.e., expanded state) of an existing snapshot at the destination system can be identified (e.g., using the stored identifying information) to help determine the older/younger and/or difference in point-in-time relationships between the selected snapshot at the source system and the existing snapshot at the destination system. Such ordering relationships can be used to determine how data can be (e.g., efficiently) sent from the source system to replicate the selected snapshot at the destination system. The snapshot data (e.g., index) determined to be sent from the source system to the destination system can include metadata (e.g., logical mappings to underlying data) and underlying data. In various embodiments, the snapshot data determined to be sent from the source system to the destination system comprises a delta determined based at least in part on the snapshot index of the selected snapshot and the snapshot index of another snapshot stored at the source system. For example, the other snapshot may be one on which the selected snapshot depends (e.g., links to). Sending deltas between snapshot data (e.g., indices) is much more efficient than sending expanded states of snapshots, as is conventionally done. In some cases, a smaller delta may be created by comparing with a younger, rather than older snapshot.
In some embodiments, inserting the snapshot data into the existing snapshot sequence at the destination system includes removing entries from the snapshot index of an existing snapshot relative to the snapshot data (refactoring, as will be described in further detail below), adding a new snapshot data (e.g., index) to the snapshot data sequence at the destination system to represent at the destination system the point-in-time data state associated with the selected snapshot, and/or changing the dependencies of the snapshot indices at the snapshot data sequence at the destination system to accommodate the addition of the new snapshot index.
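Determining the delta to send at step 804 can be sketched as follows (an illustrative Python helper; the expanded states shown are hypothetical values, and the actual implementation would operate on indices rather than fully expanded states):

```python
# Sketch of computing the delta to send when replicating a snapshot:
# the entries of the selected snapshot's expanded state that differ
# from the base snapshot's expanded state.

def delta(selected_state, base_state):
    return {off: val
            for off, val in selected_state.items()
            if base_state.get(off) != val}

# Hypothetical expanded states at the source for snapshots t1 and t2.
state_t1 = {1: "A"}
state_t2 = {1: "A", 2: "B", 3: "C"}

# The destination already holds t1, so only the changes since t1 are sent.
print(delta(state_t2, state_t1))  # {2: 'B', 3: 'C'}
```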
[0080] FIGS. 9A and 9B are diagrams showing an example of replicating the snapshot at time t2 from a source system to a destination system. As shown in FIG. 9A, the snapshot sequence at the source system includes the snapshot at time t2 linking to the snapshot at time t1. Note that the snapshot at time t2 is represented by index 904 at the source system. Also, as shown in FIG. 9A, the snapshot sequence at the destination system, prior to the snapshot at time t2 being replicated, includes the snapshot at time t3 linking to the snapshot at time t1. Note that prior to the snapshot at time t2 being replicated, the snapshot at time t3 is represented by index 902 at the destination system. Before replication of the snapshot at time t2, snapshot at time t3 index 902 contained all the changes up to and including time t3 and after t1, which includes a mapping of offset 1 to data value A (stored at time t3) and a mapping of offset 3 to data value C (stored at time t2). In performing the replication of the snapshot at time t2, an index associated with the snapshot at time t2 would need to be "spliced" in between the snapshot at time t3 and the snapshot at time t1 at the destination system. In various embodiments, "splicing" is the process by which a snapshot is inserted in a sequence of snapshots. A snapshot can be spliced as an intermediate snapshot in between two existing snapshots of a sequence, as the youngest snapshot in the sequence, or as the oldest snapshot in the sequence. Splicing includes changing the physical dependencies between snapshots such that a younger existing snapshot that becomes adjacent to the spliced snapshot in the sequence is caused to depend on (e.g., link to, point to, and/or otherwise reference) the spliced snapshot. Splicing also includes causing the spliced snapshot to depend on an older existing snapshot that becomes adjacent to the spliced snapshot in the sequence.
[0081] It can be determined, from the identifying information stored by either or both of the source system and the destination system and/or a third system (e.g., a snapshot replication system), which point-in-times (e.g., expanded states) already have snapshots stored at the destination system and where in the sequence they would fall relative to the selected snapshot. Given that the destination system already stored the snapshot at time t1, to minimize the amount of data to transmit from the source system to the destination system, the source can send the delta between the snapshot at time t2 at the source system and the snapshot at time t1 at the source system. This delta between the snapshot at time t2 and the snapshot at time t1 at the source system may be represented by index 904 of FIG. 9B. (Given that index 904 associated with the snapshot at time t2 at the source system already contains only the changes since the snapshot at time t1 was generated, the delta between the snapshot at time t2 at the source system and the snapshot at time t1 at the source system is therefore the same as the index that is used to represent the snapshot at time t2 at the source system.)
[0082] The delta between the snapshot at time t2 and the snapshot at time t1 at the source system as represented by index 904 is sent from the source system to the destination system and spliced into the existing snapshot sequence, in between the snapshot at time t3 and the snapshot at time t1, to represent the snapshot at time t2 at the destination system. Because the snapshot at time t3 index 902 at the destination system had contained all the changes up to and including time t3 and after t1, after the insertion of index 904 at the destination system, the redundant entries between index 902 representing the snapshot at time t3 and index 904 representing the snapshot at time t2 need to be removed from index 902 at the destination system. As such, replication as described herein can take advantage of a snapshot that is already present on the destination system by
"refactoring" the replicated snapshot data at the destination. In various embodiments, "refactoring" is the process by which redundant metadata entries are removed from either the younger or the older of the replicated snapshot and an adjacent existing snapshot at the destination system. Redundant entries are often created when a snapshot is replicated and spliced into an existing snapshot sequence at a different system. An adjacent existing younger or older snapshot will sometimes contain some of the same entries as the replicated snapshot, making the shared entries in the index of the replicated snapshot or its adjacent existing snapshot redundant. As shown in FIG. 9B, after replication of the snapshot at time t2 at the destination, the entry associated with a mapping of offset 3 to data value C is removed from index 902 representing the snapshot at time t3 at the destination system because the same entry is already present in index 904 representing the adjacent older snapshot at the destination system, the snapshot at time t2. [0083] After replication, at the destination system, the snapshot sequence includes the snapshot at time t3 (which has been modified to link to the snapshot at time t2), the snapshot at time t2 (which has been modified to link to the snapshot at time t1), and the snapshot at time t1. As such, replication of a snapshot to a destination system can modify the physical dependencies among snapshots of a sequence at the destination system.
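The refactoring step can likewise be sketched as a dictionary filter, assuming each snapshot index maps a logical offset to its data value; the values mirror the FIG. 9B example, and the names are illustrative.

```python
def refactor(younger_index, older_index):
    """Drop entries of the younger index whose offset-to-value mapping is
    already present in the adjacent older index (i.e., redundant entries)."""
    return {offset: value for offset, value in younger_index.items()
            if older_index.get(offset) != value}

index_904 = {3: "C"}           # spliced snapshot at time t2 (offset 3 -> C)
index_902 = {1: "A", 3: "C"}   # snapshot at time t3 before refactoring
index_902 = refactor(index_902, index_904)
# index_902 now maps only offset 1 -> A; the redundant offset-3 entry is gone.
```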
[0084] FIGS. 10A and 10B are diagrams showing an example of replicating the snapshot at time t3 from a source system to a destination system. As shown in FIG. 10A, the snapshot sequence at the source system includes the snapshot at time t3 linking to the snapshot at time t1. Also, as shown in FIG. 10A, the snapshot sequence at the destination system, prior to the snapshot at time t3 being replicated, includes the snapshot at time t2 linking to the snapshot at time t1. Note that the snapshot at time t3 is represented by index 1002 at the source system. Before replication of the snapshot at time t3, the snapshot at time t3 index 1002 contained all the changes up to and including time t3 and after t1, which includes a mapping of offset 1 to data value A (stored at time t3) and a mapping of offset 3 to data value C (stored at time t2). In performing the replication of the snapshot at time t3, an index associated with the snapshot at time t3 would need to be "spliced" to link to the snapshot at time t2 at the destination system.
[0085] It can be determined, from identifying information stored by either or both of the source system and the destination system and/or a third system (e.g., a snapshot replication system), which point-in-times (e.g., expanded states) are associated with snapshots already stored at the destination system and which order in the sequence those snapshots would occupy relative to the selected snapshot. Ideally, the source system would generate a delta between the snapshot at time t3 and the snapshot at time t2, but the source does not have the snapshot at time t2. In this case, the source can generate a delta relative to the snapshot at time t1, the most recent snapshot older than the snapshot at time t3 that is stored at both the source and destination systems. This delta between the snapshot at time t3 and the snapshot at time t1 may be represented by index 1002 of FIG. 10B. (Given that index 1002 associated with the snapshot at time t3 at the source system already contains only the changes since the snapshot at time t1 was generated, the delta between the snapshot at time t3 at the source system and the snapshot at time t1 at the source system is therefore the same as the index that is used to represent the snapshot at time t3 at the source system.)
[0086] Once the delta is received at the destination, it is spliced to point to the snapshot at time t2 at the destination system and then refactored to create index 1004 to represent the snapshot at time t3 at the destination system by eliminating entries from the delta comprising index 1002 that are common with an existing older snapshot at the destination system, the snapshot at time t2, which is represented by index 1006. As shown in FIG. 10B, after replication of the snapshot at time t3 at the destination, the entry associated with a mapping of offset 3 to data value C is removed from the delta comprising index 1002 to create index 1004 to represent the snapshot at time t3 because the same entry is already present in index 1006 representing the snapshot at time t2.
[0087] In some embodiments, the refactoring of the replicated snapshot at time t3 at the destination system (i.e., the delta comprising index 1002) relative to the snapshot at time t2 can also be done as the delta is received at the destination. Put another way, refactoring can be performed before the entire set of snapshot data (the delta) (including the logical to physical offset mappings and the underlying data to which they map) is completely sent from the source to the destination. For example, referring to FIG. 10B, for each offset of delta snapshot index 1002 that is common to index 1006, which represents the snapshot at time t2 at the destination system, prior to sending the underlying data to which the offset points from the source system, a determination can be made as to whether a fingerprint associated with the underlying data pointed to by the offset in the delta snapshot index 1002 matches the fingerprint associated with the underlying data pointed to by the same offset in the index 1006. In the event that the two fingerprints match, then the two offset entries are determined to be redundant, and therefore, the entry is excluded from index 1004 that is used to represent the snapshot at time t3 at the destination system and the underlying data pointed to by the redundant offset is also not sent from the source system. Put another way, the replication need not be completed before refactoring is performed.
[0088] In another embodiment, the destination system could send the source system a temporary copy of the snapshot at time t2 (e.g., as a delta between the snapshot at time t2 and the snapshot at time tl) that the source system can use to generate a delta between the snapshot at time t3 and the snapshot at time t2. Then, the delta between the snapshot at time t3 and the snapshot at time t2 can be sent from the source system to be spliced to point to the snapshot at time t2 at the destination system.
[0089] FIGS. 11A and 11B are diagrams showing another example of replicating the snapshot at time t3 from a source system to a destination system. In this example, the snapshots at times t1, t2, and t3 were created sequentially as a part of the same snapshot sequence. In some embodiments, a "common snapshot" refers to a snapshot associated with the same point-in-time that is present at both the source and destination systems; it is the snapshot against which a delta is generated at the source system and against which the delta will be spliced at the destination system. As shown in FIG. 11A, prior to the snapshot at time t3 being replicated, the source has only the snapshot at time t3 and the snapshot at time t1 while the destination has only the snapshot at time t2. In other words, prior to the snapshot at time t3 being replicated, the source and the destination systems do not have a common snapshot of the snapshot at time t2. Note that the snapshot at time t3 is represented by index 1102 at the source system. Before replication of the snapshot at time t3, snapshot at time t3 index 1102 at the source system contained all the changes up to and including time t3 and after t1, which includes a mapping of offset 2 to data value B (stored at time t3) and a mapping of offset 4 to data value D (stored at time t2). In performing the replication of the snapshot at time t3, an index associated with the snapshot at time t3 would need to be "spliced" to link to the snapshot at time t2 at the destination system. In this example, snapshot at time t3 index 1102 at the source system has not been refactored with respect to the index of the snapshot at time t1 at the source system because both indices include a mapping of offset 2 to data value B. However, given that there is no common snapshot of the snapshot at time t2 on the source and destination systems, the same data values associated with the same offset at adjacent snapshot indices at the source system may be preserved and used in generating a delta at the source system, as will be described further below.
[0090] It can be determined, from identifying information stored by either or both of the source system and the destination system and/or a third system (e.g., a snapshot replication system), which point-in-times (e.g., expanded states) are associated with snapshots already stored at the destination system and which order in the sequence those snapshots would occupy relative to the selected snapshot. In this case, the source generates a delta of the snapshot at time t3 relative to the snapshot at time t1. This delta between the snapshot at time t3 and the snapshot at time t1 may be represented by index 1102 of FIG. 11B. (Given that index 1102 associated with the snapshot at time t3 at the source system already contains only the changes since the snapshot at time t1 was generated, the delta between the snapshot at time t3 at the source system and the snapshot at time t1 at the source system is therefore the same as the index that is used to represent the snapshot at time t3 at the source system.) Note that in this case, where the source system and the destination system do not have a common snapshot of the snapshot at time t2, the delta must contain all "offsets" that were modified between the snapshots at times t3 and t1 even if the data values are the same. In particular, it is possible for the same offset to be modified at the snapshots at times t1, t2, and t3 such that the data values in the snapshots at times t1 and t3 are the same but different in the snapshot at time t2. In such a case, comparing the data values between the snapshots at times t3 and t1 would detect no change even though this data value must be applied to the snapshot at time t2 in order to create the snapshot at time t3. Note that by including all offsets that were modified between the snapshots at times t3 and t1, we ensure that any offsets that were modified between the snapshots at times t2 and t1 are also included. [0091] As shown in FIG. 11A, at the source system, offset 2 has been modified in between when the snapshots at times t1 and t3 were generated even though the snapshots at times t1 and t3 store the same data value of B for offset 2. Given that the source system and the destination system do not have a common snapshot of the snapshot at time t2, index 1102 associated with the delta between the snapshot at time t3 at the source system and the snapshot at time t1 at the source system includes offset 2 even though the data values are the same at offset 2 in both snapshots at times t1 and t3.
[0092] By contrast, when there is a common snapshot, as described in previous examples, the delta may exclude data values that are the same between the two snapshots even if the corresponding offsets were modified in the younger snapshot.
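The distinction between the two cases can be sketched as follows, modeling each snapshot's expanded state as a dictionary of offset-to-value mappings; the names and the `modified` set are illustrative assumptions rather than the disclosed implementation.

```python
def delta(modified, younger_state, older_state, common_snapshot_exists):
    """Compute the delta to send for the younger snapshot relative to the older.

    `modified` is the set of offsets written between the two snapshots.
    """
    if common_snapshot_exists:
        # Safe to drop offsets whose data value is unchanged between the two.
        return {o: younger_state[o] for o in modified
                if younger_state.get(o) != older_state.get(o)}
    # No common snapshot: an offset may have changed at t2 and changed back by
    # t3, so every modified offset must be included, even if the values match.
    return {o: younger_state[o] for o in modified}

# FIG. 11A: offset 2 was rewritten at t3 with the same value B it held at t1.
modified = {2, 4}
t3_state = {2: "B", 4: "D"}
t1_state = {2: "B"}
without_common = delta(modified, t3_state, t1_state, common_snapshot_exists=False)
with_common = delta(modified, t3_state, t1_state, common_snapshot_exists=True)
```

Without a common snapshot, offset 2 is retained in the delta; with one, it may safely be dropped.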
[0093] Once the delta is received at the destination, it is spliced to point to the snapshot at time t2 at the destination system and then refactored to represent the snapshot at time t3 at the destination system by eliminating entries from the delta comprising index 1102 that are common with the snapshot at time t2, which is represented by index 1104 at the destination system. As shown in FIG. 1 IB, after replication of the snapshot at time t3 at the destination, the entry associated with a mapping of offset 4 to data value D is removed from index 1102 representing the snapshot at time t3 because the same entry is already present in index 1104 representing the snapshot at time t2 at the destination system. In some embodiments, the refactoring may be performed prior to the completion of the replication of the snapshot at time t3 at the destination system.
[0094] FIGS. 12A and 12B are diagrams showing an example of replicating the snapshot at time t4 from a source system to a destination system. In this example, the snapshots at times t1, t2, t3, and t4 were created sequentially as a part of the same snapshot sequence. However, as shown in FIG. 12A, prior to the snapshot at time t4 being replicated, the source has the snapshots at times t4, t3, t2, and t1 while the destination has only the snapshot at time t2 and the snapshot at time t1.
[0095] It can be determined, from identifying information stored by either or both of the source system and the destination system and/or a third system (e.g., a snapshot replication system), which point-in-times (e.g., expanded states) are associated with snapshots already stored at the destination system and which order in the sequence those snapshots would occupy relative to the selected snapshot. Given that the destination system already stores the snapshot at time t2, to minimize the amount of data to transmit from the source system to the destination system, in one example, the source can send the delta of the snapshot at time t4 at the source system relative to the snapshot at time t2 at the source system. Prior to generating the delta of the snapshot at time t4 relative to the snapshot at time t2, the entries of the snapshot at time t3 are first merged into the snapshot at time t4. For example, in generating the delta of the snapshot at time t4 relative to the snapshot at time t2, the entries of the snapshot at time t3 are first logically merged into the snapshot at time t4 (while not actually deleting the index of the snapshot at time t3 from the source system) and then the delta is generated between the index of the snapshot at time t4 merged with the offsets of the snapshot at time t3 and the snapshot at time t2. This delta between the snapshot at time t4 (with the merged entries of the snapshot at time t3) and the snapshot at time t2 may be represented by index 1202 of FIG. 12B. Once the delta is received at the destination, it is spliced to point to the snapshot at time t2 at the destination system. In this example, because the delta is generated and spliced with respect to the same snapshot on the source and destination systems, the snapshot at time t2, there is no opportunity for refactoring at the destination system. As such, index 1202 representing the delta can be used directly as the snapshot at time t4 at the destination system.
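The logical merge of intermediate snapshot entries can be sketched as below; the specific offsets and values are hypothetical, as FIG. 12A's per-snapshot writes are not reproduced here.

```python
def merge_indices(younger_index, intermediate_index):
    """Logically merge an intermediate snapshot's entries into the younger
    snapshot's index; the younger entry wins for any offset present in both."""
    merged = dict(intermediate_index)
    merged.update(younger_index)
    return merged

# Hypothetical writes: t4 stored offset 2 -> E; t3 stored offset 1 -> A.
index_t4 = {2: "E"}
index_t3 = {1: "A"}
merged_t4 = merge_indices(index_t4, index_t3)
# merged_t4 holds both snapshots' entries; relative to the snapshot at t2,
# it forms the delta to send to the destination.
```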
[0096] In another example (not shown in FIG. 12B), instead of the source sending the delta of the snapshot at time t4 relative to the snapshot at time t2, the source generates the expanded state of data at the snapshot at time t4. An index representing the expanded state of data at the snapshot at time t4 would include mappings of offset 1 to data value A, offset 2 to data value E, offset 3 to data value C, and offset 4 to data value D. The index representing the expanded state of data at the snapshot at time t4 would be sent to the destination. At the destination, the index representing the expanded state of data at the snapshot at time t4 would be refactored to remove entries redundant with the snapshot at time t2 at the destination system and the snapshot at time t1 at the destination system. The resulting index to use to represent the snapshot at time t4 at the destination system would still be index 1202 of FIG. 12B.
[0097] FIG. 13 is a flow diagram showing an example of a process of refactoring a younger snapshot index relative to an older snapshot index. In some embodiments, process 1300 is implemented at first storage system 602, second storage system 606, or snapshot replication system 608 of system 600 of FIG. 6. In some embodiments, process 1300 is implemented after a selected snapshot has been completely replicated at a destination system. In some embodiments, process 1300 is implemented before a selected snapshot has been completely replicated at a destination system (e.g., process 1300 can be implemented at least partially concurrently with the replication of snapshot data at the destination system). [0098] As described in various examples above, when a snapshot is replicated at a destination system and spliced into an existing snapshot sequence at the destination system, in some embodiments, the younger snapshot index between the replicated snapshot and an adjacent existing snapshot at the destination system is refactored to remove entries. In some embodiments, the replicated snapshot refers to the delta data that is to be or was sent from the source system. The "younger snapshot index" described in process 1300 below refers to the relatively younger of the replicated snapshot index and an adjacent existing snapshot index at the destination system, and the "older snapshot index" described in process 1300 below refers to the relatively older of the replicated snapshot index and an adjacent existing snapshot index at the destination system. Referring back to the example of FIG. 9B, the existing snapshot index at time t3 at the destination was the younger snapshot index that was refactored relative to the replicated snapshot index at time t2. Referring back to the example of FIG. 10B, the replicated snapshot index at time t3 was the younger snapshot index that was refactored relative to the existing snapshot index at time t2 at the destination.
[0099] In some embodiments, the replicated snapshot may be refactored with respect to both an adjacent existing older snapshot and an adjacent existing younger snapshot at the destination. Therefore, for example, process 1300 may be applied twice in splicing a snapshot into an existing snapshot sequence at the destination - process 1300 may be applied a first time where the replicated snapshot comprises the "younger snapshot index" and an adjacent existing older snapshot at the destination comprises the "older snapshot index" of process 1300, and process 1300 may be applied a second time where the replicated snapshot comprises the "older snapshot index" and an adjacent existing younger snapshot at the destination comprises the "younger snapshot index" of process 1300.
[00100] Returning to FIG. 13, at 1302, a first fingerprint corresponding to an offset associated with a younger snapshot index is determined. A fingerprint is determined based on the data value mapped to by a logical offset that is included in the younger snapshot index. In some embodiments, the data value corresponding to the offset is read from the younger snapshot index and the fingerprint of the data value can be determined based on a (e.g., SHA1) hash technique.
[00101] At 1304, a second fingerprint corresponding to the offset associated with an older snapshot index is determined. A fingerprint is determined based on the data value mapped to by the same logical offset that is included in the older snapshot index. In some embodiments, the data value corresponding to the offset is read from the older snapshot index. This fingerprint of the data value can be determined based on the same technique that was used to obtain the fingerprint in step 1302.
[00102] At 1306, it is determined whether the first fingerprint and the second fingerprint match. In the event that the first fingerprint and the second fingerprint match, the data values pointed to by the same offset in the two snapshot indices are the same (redundant) and control is transferred to 1308. Otherwise, in the event that the first fingerprint and the second fingerprint do not match, the data values pointed to by the same offset in the two snapshot indices are not the same and control is transferred to 1310.
[00103] At 1308, the offset is removed from the younger snapshot index. The redundant offset is removed from the younger snapshot index and the underlying data is deleted from the destination system and/or prevented from being transferred from the source system to the destination system.
[00104] At 1310, it is determined whether there are more offsets common to the younger snapshot index and the older snapshot index. In the event that there are more common offsets, control is transferred to 1312. Otherwise, in the event that there are no more common offsets, process 1300 ends. At 1312, a next offset common to the younger snapshot index and the older snapshot index is selected.
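Steps 1302-1312 can be sketched as a loop over the offsets common to the two indices; here each index maps an offset to its underlying data bytes, and SHA-1 stands in for the fingerprint technique mentioned at step 1302. The names are illustrative.

```python
import hashlib

def fingerprint(data):
    """Fingerprint of a data block (e.g., a SHA1 hash, per step 1302)."""
    return hashlib.sha1(data).hexdigest()

def refactor_by_fingerprint(younger_index, older_index):
    """For each offset common to both indices (1310-1312), remove the entry
    from the younger index when the two fingerprints match (1306-1308)."""
    for offset in set(younger_index) & set(older_index):
        if fingerprint(younger_index[offset]) == fingerprint(older_index[offset]):
            del younger_index[offset]  # step 1308: redundant entry removed
    return younger_index

younger = {1: b"A", 3: b"C"}
older = {3: b"C", 4: b"D"}
refactor_by_fingerprint(younger, older)
# younger now retains only offset 1, whose data is absent from the older index.
```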
[00105] In various embodiments, a snapshot associated with a clone can be replicated from a source system to a destination system similar to the manner in which a non-clone snapshot can be replicated. As described above, a sequence of snapshots associated with a clone is generated from a snapshot of a set of source data also stored at the source system; the source snapshot is referred to as the "shared snapshot." Because the clone is generated from (and therefore depends on) the shared snapshot of the source data, in various embodiments, a clone snapshot is replicated as a delta of the shared snapshot rather than as an expanded state. As will be described in further detail below, replicating a clone snapshot from a source system to a destination system takes into consideration whether the shared snapshot of the source data is already present at the destination system.
[00106] FIG. 14 is a flow diagram showing an embodiment of a process for performing replication of a selected snapshot associated with a clone from a source system to a destination system. In some embodiments, the source system and the destination system of process 1400 can be implemented using first storage system 602 and second storage system 606 of system 600 of FIG. 6, respectively, or second storage system 606 and first storage system 602, respectively. In some embodiments, process 1400 is implemented at first storage system 602, second storage system 606, or snapshot replication system 608 of system 600 of FIG. 6.
[00107] At 1402, a request to replicate at a destination system a selected snapshot is received, wherein the selected snapshot is associated with a set of clone data, wherein the set of clone data is associated with a shared snapshot of a set of source data. A snapshot associated with a clone is requested to be replicated at a destination system. A set of metadata stored for the clone can be used to identify which particular shared snapshot of which particular source data is the shared snapshot from which the clone was generated. For example, the shared snapshot and its associated source data may be identified by a snapshot global ID, which indicates the system that created the snapshot, the expanded state associated with the snapshot, and also the set of data (e.g., vdisks or files) with which the snapshot is associated.
[00108] At 1404, it is determined whether the shared snapshot of the set of source data already exists at the destination system. It is determined whether the shared snapshot from which the clone was generated is already present at the destination system. For example, whether the shared snapshot of the source data is already present at the destination system can be determined from the stored identifying information as described above. For example, it can be determined from the stored identifying information whether the destination system currently stores a snapshot associated with the snapshot global ID of the shared snapshot. In the event that the shared snapshot does not already exist at the destination system, control is transferred to 1406. Otherwise, in the event that the shared snapshot already exists at the destination system, control is transferred to 1408.
[00109] At 1406, the shared snapshot of the set of source data is replicated at the destination system. If the shared snapshot of the source data is not already present at the destination system, then the shared snapshot is first replicated at the destination system. In some embodiments, a process such as process 800 of FIG. 8 is used to replicate the shared snapshot from the source system to the destination system. For example, the shared snapshot can be replicated as a delta of an existing snapshot (at either the source system or the destination system). The shared snapshot may also be subsequently reused for other replicated clones that link to the shared snapshot.
[00110] In some cases, a shared snapshot that is only used by a single clone may be automatically deleted and merged with the next younger snapshot of the clone to save space.
Information about the deleted base of the clone is, however, retained by the clone. [00111] At 1408, it is determined whether the set of metadata associated with the set of clone data already exists at the destination system. In the event that the set of metadata associated with the set of clone data does not already exist at the destination system, control is transferred to 1410. Otherwise, in the event that the set of metadata associated with the set of clone data already exists at the destination system, control is transferred to 1412.
[00112] At 1410, a set of metadata associated with the set of clone data is caused to be generated at the destination system. If the clone does not already exist at the destination, then the clone is generated at the destination by at least generating a set of metadata associated with the clone. For example, the set of metadata associated with the clone may include a set of file global IDs associated with the clone, data linking the clone to the shared snapshot at the destination system, and/or a current snapshot index associated with the clone to use to create subsequent snapshots associated with the clone. In particular, the file global IDs associated with the clone can be used to identify which snapshots belong to the clone and which do not (e.g., snapshots that belong to the source data from which the clone was generated).
[00113] At 1412, the selected snapshot is replicated based at least in part on the shared snapshot at the destination system. Once it is determined that the shared snapshot and the clone metadata are present at the destination system, the selected clone snapshot can be replicated to the destination system. In some embodiments, a process such as process 800 of FIG. 8 is used to replicate the selected clone snapshot from the source system to the destination system. For example, the selected clone snapshot can be replicated as a delta of another snapshot (e.g., the shared snapshot).
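Process 1400 can be summarized in the following sketch; the data-structure shapes (a set of snapshot IDs, a dict of clone metadata) are illustrative assumptions rather than the disclosed implementation.

```python
def replicate_clone_snapshot(dest, clone_id, shared_snapshot_id, selected_snapshot_id):
    """Ensure the shared snapshot (1404/1406) and clone metadata (1408/1410)
    exist at the destination, then replicate the selected snapshot (1412)."""
    if shared_snapshot_id not in dest["snapshots"]:      # step 1404
        dest["snapshots"].add(shared_snapshot_id)        # step 1406: replicate shared snapshot
    if clone_id not in dest["clones"]:                   # step 1408
        dest["clones"][clone_id] = {                     # step 1410: generate clone metadata
            "shared_snapshot": shared_snapshot_id}
    dest["snapshots"].add(selected_snapshot_id)          # step 1412: replicate selected snapshot
    return dest

# FIGS. 15A-15C scenario: the destination starts with only S1 of "VM1".
dest = {"snapshots": {"S1"}, "clones": {}}
replicate_clone_snapshot(dest, "Clone VM1", "S2", "S4")
```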
[00114] FIGS. 15A, 15B, and 15C are diagrams showing an example of replicating a snapshot at time t4 (S4) associated with a clone from a source system to a destination system. As shown in FIG. 15A, the source system includes two snapshot sequences. The first snapshot sequence at the source system is associated with a set of source data with the file global ID of "VM1" and comprises the snapshot at time t3 (S3), the snapshot at time t2 (S2), and the snapshot at time t1 (S1). The second snapshot sequence at the source system is associated with a clone with the file global ID of "Clone VM1" and comprises the snapshot at time t5 (S5) and the snapshot at time t4 (S4). "Clone VM1" was generated from (and therefore depends on) the snapshot at time t2 (S2) of "VM1." Therefore, the snapshot at time t2 (S2) is the shared snapshot associated with the snapshot sequence of "Clone VM1." Also, as shown in FIG. 15A, the snapshot sequence at the destination system, prior to the snapshot at time t4 (S4) being replicated, includes the snapshot at time t1 (S1) associated with file global ID "VM1." Because the shared snapshot, the snapshot at time t2 (S2), is not present at the destination system, the snapshot at time t2 (S2) will be replicated at the destination system first.
[00115] The shared snapshot, the snapshot at time t2 (S2), can be replicated at the destination system by sending the delta of the snapshot at time t2 (S2) relative to the snapshot at time t1 (S1) to the destination system, for example. FIG. 15B shows the result of replicating the snapshot at time t2 (S2) at the destination system, which includes splicing the snapshot at time t2 (S2) to point to the snapshot at time t1 (S1) in a snapshot sequence associated with "VM1."
[00116] After the shared snapshot, the snapshot at time t2 (S2), is replicated at the destination system, the metadata of clone "Clone VM1" is generated at the destination system. FIG. 15B shows the result of generating the metadata associated with clone "Clone VM1" at the destination system as a box with dotted lines labeled "Clone VM1" and that links to the snapshot at time t2 (S2) of the snapshot sequence associated with "VM1."
[00117] After the metadata of clone "Clone VM1" is generated at the destination system, the snapshot at time t4 associated with "Clone VM1" is replicated at the destination system. The snapshot at time t4 (S4) can be replicated at the destination system in a manner similar to replicating a non-clone snapshot. The snapshot at time t4 (S4) can be replicated at the destination system by sending the delta of the snapshot at time t4 (S4) relative to the shared snapshot, the snapshot at time t2 (S2), to the destination system, for example. FIG. 15C shows the result of replicating the snapshot at time t4 (S4) at the destination system, which includes associating the snapshot at time t4 (S4) with file global ID "Clone VM1" and splicing the snapshot at time t4 (S4) to point to the snapshot at time t2 (S2) in the snapshot sequence associated with "VM1."
[00118] Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive.

Claims

1. A system, comprising:
a processor configured to:
receive a request to store at a destination system a snapshot data to represent at the destination system a state of a set of data at a first point-in-time, wherein a first source system snapshot data that represents at a source system the state of the set of data at the first point-in-time depends on a second source system snapshot data that represents at the source system a state of the set of data at a second point-in-time; and
determine the snapshot data to represent at the destination system the state of the set of data at the first point-in-time, wherein the snapshot data is determined based at least in part on data comprising the first source system snapshot data and a destination system snapshot data that represents at the destination system a state of the set of data at a third point-in-time; and
a memory coupled to the processor and configured to store the request.
2. The system of claim 1, wherein each of the first source system snapshot data and the second source system snapshot data is included in a sequence of snapshot data stored at the source system.
3. The system of claim 1, wherein the processor is configured to insert the snapshot data into a sequence of snapshot data stored at the destination system.
4. The system of claim 1, wherein the processor is further configured to store identifying information usable to determine an order among the first source system snapshot data, the second source system snapshot data, and the destination system snapshot data.
5. The system of claim 1, wherein determining the snapshot data includes determining a delta based at least in part on the first source system snapshot data and the second source system snapshot data.
6. The system of claim 1, wherein the destination system snapshot data comprises a first destination system snapshot data and wherein the processor is further configured to: generate a new destination system snapshot data that represents at the destination system the state of the set of data at the first point-in-time to be inserted into a sequence of snapshot data stored at the destination system based at least in part on the snapshot data; and cause the new destination system snapshot data to depend on the first destination system snapshot data.
7. The system of claim 6, wherein the sequence of snapshot data stored at the destination system includes a second destination system snapshot data that represents at the destination system a state of the set of data at a fourth point-in-time, wherein the first point-in-time is earlier than the fourth point-in-time and wherein generating the new destination system snapshot data includes: determining an offset common to the snapshot data and the second destination system snapshot data;
determining that a first data value associated with the offset associated with the snapshot data matches a second data value associated with the offset associated with the second destination system snapshot data; and
in response to the determination that the first data value matches the second data value, removing the offset from the second destination system snapshot data.
8. The system of claim 7, wherein, further in response to the determination that the first data value matches the second data value, the processor is configured to prevent the first data value associated with the offset associated with the snapshot data from being sent from the source system to the destination system.
9. The system of claim 6, wherein the third point-in-time is earlier than the first point-in-time and wherein generating the new destination system snapshot data includes: determining an offset common to the snapshot data and the first destination system snapshot data;
determining that a first data value associated with the offset associated with the snapshot data matches a second data value associated with the offset associated with the first destination system snapshot data; and
in response to the determination that the first data value matches the second data value, removing the offset from the snapshot data.
10. The system of claim 9, wherein, further in response to the determination that the first data value matches the second data value, the processor is configured to prevent the first data value associated with the offset associated with the snapshot data from being sent from the source system to the destination system.
11. The system of claim 6, wherein the new destination system snapshot data is inserted into the sequence of snapshot data stored at the destination system based at least in part on the first point-in-time and the third point-in-time.
12. The system of claim 1, wherein the snapshot data comprises a first snapshot data and wherein the destination system snapshot data comprises a first destination system snapshot data, wherein the processor is further configured to determine a second snapshot data to represent at the source system a state of the set of data at a desired point-in-time, wherein the second snapshot data is determined based at least in part on a second destination system snapshot data that represents at the destination system the state of the set of data at the desired point-in-time.
13. The system of claim 1, wherein the second point-in-time is earlier than the first point-in-time and is the same as the third point-in-time.
14. The system of claim 1, wherein the second point-in-time is earlier than the first point-in-time and wherein the third point-in-time is earlier than the first point-in-time but later than the second point-in-time.
15. The system of claim 1, wherein the set of data comprises a set of clone data generated from a third source system snapshot data that represents at the source system a state of a set of source data at a fourth point-in-time, wherein the second source system snapshot data associated with the set of clone data depends on the third source system snapshot data associated with the set of source data.
16. The system of claim 15, wherein the processor is configured to:
merge one or more entries included in the third source system snapshot data associated with the set of source data into the second source system snapshot data associated with the set of clone data; and
delete the third source system snapshot data associated with the set of source data.
17. A method, comprising:
receiving a request to store at a destination system a snapshot data to represent at the destination system a state of a set of data at a first point-in-time, wherein a first source system snapshot data that represents at a source system the state of the set of data at the first point-in-time depends on a second source system snapshot data that represents at the source system a state of the set of data at a second point-in-time; and
determining the snapshot data to represent at the destination system the state of the set of data at the first point-in-time, wherein the snapshot data is determined based at least in part on data comprising the first source system snapshot data and a destination system snapshot data that represents at the destination system a state of the set of data at a third point-in-time.
18. The method of claim 17, further comprising inserting the snapshot data into a sequence of snapshot data stored at the destination system.
19. The method of claim 17, wherein determining the snapshot data includes determining a delta based at least in part on the first source system snapshot data and the second source system snapshot data.
20. The method of claim 17, wherein determining the snapshot data includes determining a delta based at least in part on the first source system snapshot data and the second source system snapshot data.
21. The method of claim 17, wherein the destination system snapshot data comprises a first destination system snapshot data and further comprising: generating a new destination system snapshot data that represents at the destination system the state of the set of data at the first point-in-time to be inserted into a sequence of snapshot data stored at the destination system based at least in part on the snapshot data; and
causing the new destination system snapshot data to depend on the first destination system snapshot data.
22. The method of claim 21, wherein the new destination system snapshot data is inserted into the sequence of snapshot data stored at the destination system based at least in part on the first point-in-time and the third point-in-time.
23. The method of claim 17, wherein the snapshot data comprises a first snapshot data and wherein the destination system snapshot data comprises a first destination system snapshot data, the method further comprising determining a second snapshot data to represent at the source system a state of the set of data at a desired point-in-time, wherein the second snapshot data is determined based at least in part on a second destination system snapshot data that represents at the destination system the state of the set of data at the desired point-in-time.
24. The method of claim 17, wherein the set of data comprises a set of clone data generated from a third source system snapshot data that represents at the source system a state of a set of source data at a fourth point-in-time, wherein the second source system snapshot data associated with the set of clone data depends on the third source system snapshot data associated with the set of source data.
25. A computer program product, the computer program product being embodied in a non-transitory computer readable storage medium and comprising computer instructions for:
receiving a request to store at a destination system a snapshot data to represent at the destination system a state of a set of data at a first point-in-time, wherein a first source system snapshot data that represents at a source system the state of the set of data at the first point-in-time depends on a second source system snapshot data that represents at the source system a state of the set of data at a second point-in-time; and
determining the snapshot data to represent at the destination system the state of the set of data at the first point-in-time, wherein the snapshot data is determined based at least in part on data comprising the first source system snapshot data and a destination system snapshot data that represents at the destination system a state of the set of data at a third point-in-time.
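The offset deduplication recited in claims 7 through 10 — dropping an offset when its data value already matches the value at the same offset in an adjacent destination snapshot, so that the matching value is never transmitted — can be illustrated with the following sketch. This is an illustrative reading of the claims, not the patented implementation; the function name and dict-based delta representation are hypothetical.

```python
def dedupe_delta(snapshot_delta, dest_snapshot_delta):
    """Return the subset of snapshot_delta that must actually be sent.

    For each offset common to both deltas, if the data values match, the
    offset is removed from the outgoing delta (claims 9-10); the remaining
    entries are the only ones transmitted from source to destination.
    """
    deduped = {}
    for offset, value in snapshot_delta.items():
        if dest_snapshot_delta.get(offset) == value:
            continue  # matching value at a common offset: skip transmission
        deduped[offset] = value
    return deduped
```

For example, if the outgoing delta and the destination snapshot both hold the value "a" at offset 0, only the offsets whose values differ survive deduplication and cross the wire.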

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP14841628.2A EP3042289A4 (en) 2013-09-03 2014-09-02 Replication of snapshots and clones
JP2016537937A JP6309103B2 (en) 2013-09-03 2014-09-02 Snapshot and clone replication

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201361873241P 2013-09-03 2013-09-03
US61/873,241 2013-09-03
US14/472,834 US10628378B2 (en) 2013-09-03 2014-08-29 Replication of snapshots and clones
US14/472,834 2014-08-29

Publications (1)

Publication Number Publication Date
WO2015034827A1 true WO2015034827A1 (en) 2015-03-12

Family

ID=52584674

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/053709 WO2015034827A1 (en) 2013-09-03 2014-09-02 Replication of snapshots and clones

Country Status (4)

Country Link
US (1) US10628378B2 (en)
EP (1) EP3042289A4 (en)
JP (1) JP6309103B2 (en)
WO (1) WO2015034827A1 (en)

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160092313A1 (en) * 2014-09-25 2016-03-31 Empire Technology Development Llc Application Copy Counting Using Snapshot Backups For Licensing
US9940378B1 (en) * 2014-09-30 2018-04-10 EMC IP Holding Company LLC Optimizing replication of similar backup datasets
US9778990B2 (en) * 2014-10-08 2017-10-03 Hewlett Packard Enterprise Development Lp Methods and systems for concurrently taking snapshots of a plurality of virtual machines
US9658924B2 (en) * 2014-12-12 2017-05-23 Schneider Electric Software, Llc Event data merge system in an event historian
US10606704B1 (en) * 2014-12-31 2020-03-31 Acronis International Gmbh Creation of consistent copies of application data
CN105893171B (en) * 2015-01-04 2019-02-19 伊姆西公司 Store the method and apparatus that fault recovery is used in equipment
US10678650B1 (en) * 2015-03-31 2020-06-09 EMC IP Holding Company LLC Managing snaps at a destination based on policies specified at a source
US10860546B2 (en) * 2015-05-28 2020-12-08 Hewlett Packard Enterprise Development Lp Translation of source m-node identifier to target m-node identifier
US10262004B2 (en) * 2016-02-29 2019-04-16 Red Hat, Inc. Native snapshots in distributed file systems
US11080242B1 (en) * 2016-03-30 2021-08-03 EMC IP Holding Company LLC Multi copy journal consolidation
US10394482B2 (en) 2016-04-14 2019-08-27 Seagate Technology Llc Snap tree arbitrary replication
US10055149B2 (en) 2016-04-14 2018-08-21 Seagate Technology Llc Intelligent snapshot tree replication
US10353590B2 (en) 2016-05-19 2019-07-16 Hewlett Packard Enterprise Development Lp Methods and systems for pre-processing sensor measurements
US10642784B2 (en) * 2016-09-15 2020-05-05 International Business Machines Corporation Reducing read operations and branches in file system policy checks
US10474629B2 (en) * 2016-09-28 2019-11-12 Elastifile Ltd. File systems with global and local naming
US10140039B1 (en) * 2016-12-15 2018-11-27 EMC IP Holding Company LLC I/O alignment for continuous replication in a storage system
US10613939B2 (en) * 2017-03-28 2020-04-07 Commvault Systems, Inc. Backup index generation process
CN108733541A (en) * 2017-04-17 2018-11-02 伊姆西Ip控股有限责任公司 The method and apparatus for replicating progress for determining data in real time
US10379777B2 (en) * 2017-06-19 2019-08-13 Synology Incorporated Method for performing replication control in storage system with aid of relationship tree within database, and associated apparatus
US20190155936A1 (en) * 2017-11-22 2019-05-23 Rubrik, Inc. Replication Catch-up Strategy
US11874794B2 (en) * 2018-10-19 2024-01-16 Oracle International Corporation Entity snapshots partitioning and combining
US11150831B2 (en) * 2019-03-27 2021-10-19 Red Hat, Inc. Virtual machine synchronization and recovery
CN110769062A (en) * 2019-10-29 2020-02-07 广东睿江云计算股份有限公司 Distributed storage remote disaster recovery method
US11789971B1 (en) * 2019-12-02 2023-10-17 Amazon Technologies, Inc. Adding replicas to a multi-leader replica group for a data set
US11144233B1 (en) * 2020-03-18 2021-10-12 EMC IP Holding Company LLC Efficiently managing point-in-time copies of data within a primary storage system
US11531644B2 (en) * 2020-10-14 2022-12-20 EMC IP Holding Company LLC Fractional consistent global snapshots of a distributed namespace
US20220214903A1 (en) * 2021-01-06 2022-07-07 Baidu Usa Llc Method for virtual machine migration with artificial intelligence accelerator status validation in virtualization environment
US12039356B2 (en) 2021-01-06 2024-07-16 Baidu Usa Llc Method for virtual machine migration with checkpoint authentication in virtualization environment
US11741076B2 (en) 2021-03-22 2023-08-29 Kyndryl, Inc. Adaptive snapshot controller
US11748300B2 (en) * 2021-11-18 2023-09-05 Vmware, Inc. Reverse deletion of a chain of snapshots
US20230409522A1 (en) 2022-06-16 2023-12-21 Oracle International Corporation Scalable and secure cross region and optimized file system delta transfer for cloud scale
WO2023244446A1 (en) * 2022-06-16 2023-12-21 Oracle International Corporation Scalable and secure cross-region and optimized file system replication for cloud scale

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6594744B1 (en) * 2000-12-11 2003-07-15 Lsi Logic Corporation Managing a snapshot volume or one or more checkpoint volumes with multiple point-in-time images in a single repository
US20040267836A1 (en) * 2003-06-25 2004-12-30 Philippe Armangau Replication of snapshot using a file system copy differential
US20060112151A1 (en) * 2002-03-19 2006-05-25 Manley Stephen L System and method for storage of snapshot metadata in a remote file
US20090307450A1 (en) * 2007-04-11 2009-12-10 Dot Hill Systems Corporation Snapshot Preserved Data Cloning
US20120016839A1 (en) 2010-07-15 2012-01-19 Delphix Corp. De-Duplication Based Backup Of File Systems
US8468174B1 (en) 2010-11-30 2013-06-18 Jedidiah Yueh Interfacing with a virtual database system

Family Cites Families (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU8397298A (en) 1997-07-15 1999-02-10 Pocket Soft, Inc. System for finding differences between two computer files and updating the computer files
CA2345661A1 (en) 1998-10-02 2000-04-13 International Business Machines Corporation Conversational browser and conversational systems
US6772172B2 (en) * 2001-04-27 2004-08-03 Sun Microsystems, Inc. Method, system, program, and computer readable medium for indexing object oriented objects in an object oriented database
US7055058B2 (en) 2001-12-26 2006-05-30 Boon Storage Technologies, Inc. Self-healing log-structured RAID
US6826666B2 (en) 2002-02-07 2004-11-30 Microsoft Corporation Method and system for transporting data content on a storage area network
US7016913B2 (en) * 2002-03-20 2006-03-21 Sun Microsystems, Inc. Method, system, data structures, and article of manufacture for implementing a persistent object
US6934822B2 (en) 2002-08-06 2005-08-23 Emc Corporation Organization of multiple snapshot copies in a data storage system
EP1486886A1 (en) 2003-06-12 2004-12-15 Hewlett-Packard Development Company, L.P. Systems, protocols and propagation mechanisms for managing information in a network environment
US8959299B2 (en) 2004-11-15 2015-02-17 Commvault Systems, Inc. Using a snapshot as a data source
US7548939B2 (en) 2005-04-15 2009-06-16 Microsoft Corporation Generating storage reports using volume snapshots
US7426618B2 (en) 2005-09-06 2008-09-16 Dot Hill Systems Corp. Snapshot restore method and apparatus
US8364638B2 (en) 2005-09-15 2013-01-29 Ca, Inc. Automated filer technique for use in virtualized appliances and applications
JP4749112B2 (en) 2005-10-07 2011-08-17 株式会社日立製作所 Storage control system and method
US20070088729A1 (en) 2005-10-14 2007-04-19 International Business Machines Corporation Flexible history manager for manipulating data and user actions
US8549051B2 (en) 2005-11-04 2013-10-01 Oracle America, Inc. Unlimited file system snapshots and clones
US20070208918A1 (en) 2006-03-01 2007-09-06 Kenneth Harbin Method and apparatus for providing virtual machine backup
US7676514B2 (en) 2006-05-08 2010-03-09 Emc Corporation Distributed maintenance of snapshot copies by a primary processor managing metadata and a secondary processor providing read-write access to a production dataset
US8122108B2 (en) 2006-05-16 2012-02-21 Oracle International Corporation Database-less leasing
US8571882B1 (en) 2006-07-05 2013-10-29 Ronald J. Teitelbaum Peer to peer database
US7809759B1 (en) 2006-08-18 2010-10-05 Unisys Corporation Dynamic preconditioning of A B+tree
US7680996B2 (en) 2006-09-28 2010-03-16 Paragon Software GmbH Method and system for shrinking a set of data using a differential snapshot, a watch-list structure along with identifying and retaining updated blocks
US9189265B2 (en) 2006-12-21 2015-11-17 Vmware, Inc. Storage architecture for virtual machines
US7941470B2 (en) 2007-03-29 2011-05-10 Vmware, Inc. Synchronization and customization of a clone computer
JP2009146389A (en) 2007-11-22 2009-07-02 Hitachi Ltd Backup system and method
JP4498409B2 (en) * 2007-12-28 2010-07-07 株式会社エスグランツ Database index key update method and program
US8365167B2 (en) 2008-04-15 2013-01-29 International Business Machines Corporation Provisioning storage-optimized virtual machines within a virtual desktop environment
US20090276774A1 (en) 2008-05-01 2009-11-05 Junji Kinoshita Access control for virtual machines in an information system
JP5156518B2 (en) * 2008-07-23 2013-03-06 株式会社日立製作所 Storage control apparatus and method
US8566821B2 (en) 2008-11-11 2013-10-22 Netapp Inc. Cloning virtual machines
JP2010191647A (en) 2009-02-18 2010-09-02 Hitachi Ltd File sharing system, file server, and method for managing file
US20100257403A1 (en) 2009-04-03 2010-10-07 Microsoft Corporation Restoration of a system from a set of full and partial delta system snapshots across a distributed system
US8200633B2 (en) 2009-08-07 2012-06-12 International Business Machines Corporation Database backup and restore with integrated index reorganization
US8463825B1 (en) 2010-04-27 2013-06-11 Tintri Inc. Hybrid file system for virtual machine storage
US8386462B2 (en) 2010-06-28 2013-02-26 International Business Machines Corporation Standby index in physical data replication
US8434081B2 (en) 2010-07-02 2013-04-30 International Business Machines Corporation Storage manager for virtual machines with virtual storage
US20120005672A1 (en) 2010-07-02 2012-01-05 International Business Machines Corporation Image management for virtual machine instances and associated virtual storage
US8412689B2 (en) * 2010-07-07 2013-04-02 Microsoft Corporation Shared log-structured multi-version transactional datastore with metadata to enable melding trees
US8612488B1 (en) 2010-09-15 2013-12-17 Symantec Corporation Efficient method for relocating shared memory
US9304867B2 (en) * 2010-09-28 2016-04-05 Amazon Technologies, Inc. System and method for providing flexible storage and retrieval of snapshot archives
US8620870B2 (en) 2010-09-30 2013-12-31 Commvault Systems, Inc. Efficient data management improvements, such as docking limited-feature data management modules to a full-featured data management system
US8966191B2 (en) 2011-03-18 2015-02-24 Fusion-Io, Inc. Logical interface for contextual storage
JP5445504B2 (en) 2011-03-31 2014-03-19 日本電気株式会社 Data replication apparatus, data replication control method, and data replication control program
US9519496B2 (en) 2011-04-26 2016-12-13 Microsoft Technology Licensing, Llc Detecting and preventing virtual disk storage linkage faults
US8433683B2 (en) 2011-06-08 2013-04-30 Oracle International Corporation Systems and methods of data replication of a file system
US9286182B2 (en) 2011-06-17 2016-03-15 Microsoft Technology Licensing, Llc Virtual machine snapshotting and analysis
US8595238B2 (en) 2011-06-22 2013-11-26 International Business Machines Corporation Smart index creation and reconciliation in an interconnected network of systems
US9116633B2 (en) 2011-09-30 2015-08-25 Commvault Systems, Inc. Information management of virtual machines having mapped storage devices
US9292521B1 (en) 2011-10-20 2016-03-22 Amazon Technologies, Inc. Archiving and querying data updates associated with an electronic catalog system
US10509776B2 (en) * 2012-09-24 2019-12-17 Sandisk Technologies Llc Time sequence data management
US9116726B2 (en) * 2012-09-28 2015-08-25 Vmware, Inc. Virtual disk snapshot consolidation using block merge
US20140136578A1 (en) 2012-11-15 2014-05-15 Microsoft Corporation Techniques to manage virtual files


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3042289A4

Also Published As

Publication number Publication date
JP6309103B2 (en) 2018-04-11
EP3042289A1 (en) 2016-07-13
US20150066857A1 (en) 2015-03-05
JP2016529633A (en) 2016-09-23
EP3042289A4 (en) 2017-03-15
US10628378B2 (en) 2020-04-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 14841628; Country of ref document: EP; Kind code of ref document: A1
REEP Request for entry into the european phase
    Ref document number: 2014841628; Country of ref document: EP
WWE Wipo information: entry into national phase
    Ref document number: 2014841628; Country of ref document: EP
ENP Entry into the national phase
    Ref document number: 2016537937; Country of ref document: JP; Kind code of ref document: A
NENP Non-entry into the national phase
    Ref country code: DE