US20230251997A1 - Recovering the Metadata of Data Backed Up in Cloud Object Storage - Google Patents

Recovering the Metadata of Data Backed Up in Cloud Object Storage

Info

Publication number
US20230251997A1
Authority
US
United States
Prior art keywords
metadata
storage platform
cloud
segment
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US18/303,478
Inventor
Wenguang Wang
Vamsi Gunturu
Junlong Gao
Petr Vandrovec
Ilya Languev
Maxime AUSTRUY
Ilia Sokolinski
Satish Pudi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
VMware LLC
Original Assignee
VMware LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by VMware LLC filed Critical VMware LLC
Priority to US18/303,478
Assigned to VMWARE, INC. reassignment VMWARE, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LANGUEV, ILYA, AUSTRUY, MAXIME, GAO, Junlong, GUNTURU, VAMSI, PUDI, SATISH, SOKOLINKSI, ILIA, VANDROVEC, PETR, WANG, WENGUANG
Publication of US20230251997A1
Assigned to VMware LLC reassignment VMware LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: VMWARE, INC.

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 - File systems; File servers
    • G06F16/11 - File system administration, e.g. details of archiving or snapshots
    • G06F16/128 - Details of file system snapshots on the file-level, e.g. snapshot creation, administration, deletion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 - Error detection; Error correction; Monitoring
    • G06F11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14 - Error detection or correction of the data by redundancy in operation
    • G06F11/1402 - Saving, restoring, recovering or retrying
    • G06F11/1415 - Saving, restoring, recovering or retrying at system level
    • G06F11/1435 - Saving, restoring, recovering or retrying at system level using file system or storage system metadata
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 - Error detection; Error correction; Monitoring
    • G06F11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14 - Error detection or correction of the data by redundancy in operation
    • G06F11/1402 - Saving, restoring, recovering or retrying
    • G06F11/1471 - Saving, restoring, recovering or retrying involving logging of persistent data for recovery
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G06F16/273 - Asynchronous replication or reconciliation
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/1095 - Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L67/01 - Protocols
    • H04L67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/40 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 - Error detection; Error correction; Monitoring
    • G06F11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14 - Error detection or correction of the data by redundancy in operation
    • G06F11/1402 - Saving, restoring, recovering or retrying
    • G06F11/1446 - Point-in-time backing up or restoration of persistent data
    • G06F11/1448 - Management of the data involved in backup or backup restore
    • G06F11/1451 - Management of the data involved in backup or backup restore by selection of backup contents
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/25 - Integrating or interfacing systems involving database management systems
    • G06F16/258 - Data format conversion from or to a database
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2201/00 - Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F2201/80 - Database-specific techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F2201/00 - Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F2201/84 - Using snapshots, i.e. a logical point-in-time copy of the data

Definitions

  • Object storage is a data storage model that manages data in the form of logical containers known as objects, rather than in the form of files (as in file storage) or blocks (as in block storage).
  • Cloud object storage is an implementation of object storage that maintains these objects on a cloud infrastructure, which is a server infrastructure that is accessible via the Internet. Due to its high scalability, high durability, and relatively low cost, cloud object storage is commonly used by companies to back up large volumes of data for disaster recovery and long-term retention/archival.
  • the software systems that are employed to create and manage these backups are referred to herein as cloud object storage-based data backup (COS-DB) systems.
  • the process of backing up a data set to a cloud object storage platform involves (1) uploading incremental point-in-time versions (i.e., snapshots) of the data set to the cloud object storage platform and (2) uploading associated metadata (which identifies, among other things, the storage objects (e.g., “log segments”) used to hold the data of each snapshot) to a separate cloud block storage platform.
  • a COS-DB system can more efficiently execute certain snapshot management operations.
  • cloud block storage generally offers lower durability than cloud object storage, which makes the metadata stored in cloud block storage more vulnerable to data loss.
  • FIG. 1 depicts an operating environment and example cloud object storage-based data backup (COS-DB) system according to certain embodiments.
  • FIG. 2 depicts a snapshot upload workflow according to certain embodiments.
  • FIGS. 3A, 3B, and 3C depict example snapshot upload scenarios.
  • FIG. 4 depicts a garbage collection workflow according to certain embodiments.
  • FIG. 5 depicts an enhanced version of the COS-DB system of FIG. 1 according to certain embodiments.
  • FIG. 6 depicts an enhanced snapshot upload workflow according to certain embodiments.
  • FIG. 7 depicts a metadata recovery workflow according to certain embodiments.
  • Embodiments of the present disclosure are directed to techniques that can be implemented by a COS-DB system for recovering metadata associated with data backed up in a cloud object storage platform.
  • the COS-DB system can upload, as a series of log segments, a snapshot of the data set to the cloud object storage platform, where each log segment in the series includes one or more data blocks in the snapshot and a first set of metadata usable to generate mappings between the one or more data blocks and the log segment.
  • this first set of metadata can include, for each data block in the log segment, an identifier (ID) of the data set, an identifier of the snapshot, and a logical block address (LBA) of the data block.
  • the COS-DB system can (1) populate the mappings between data blocks and log segments in a first metadata database maintained in a cloud block storage platform, (2) populate a second set of metadata pertaining to the snapshot in a second metadata database in the cloud block storage platform, and (3) using a hybrid “asynchronous/synchronous” approach, replicate a transaction log of the second metadata database to a remote site.
  • the COS-DB system can carry out a recovery process that involves reading the log segments in the cloud object storage platform, extracting the first set of metadata included in each log segment, and rebuilding the contents of the first metadata database using the extracted information. Further, at the time of a failure in the cloud block storage platform that causes the second metadata database to be lost, the COS-DB system can carry out a recovery process that involves retrieving the replicated transaction log from the remote site and rebuilding the contents of the second metadata database using the retrieved transaction log.
  • FIG. 1 depicts an operating environment 100 and an example COS-DB system 102 in which embodiments of the present disclosure may be implemented.
  • operating environment 100 includes a source data center 104 that is communicatively coupled with a cloud infrastructure 106 comprising a cloud object storage platform 108 and a cloud compute and block storage platform 110 .
  • cloud object storage platform 108 include Amazon S3, Microsoft Azure Blob Storage, and Google Cloud Storage.
  • cloud compute and block storage platform 110 include Amazon Elastic Compute Cloud (EC2) and Elastic Block Store (EBS), Microsoft Azure Virtual Machines (VMs) and Managed Disks (MDs), and Google Compute Engine (CE) and Persistent Disks (PDs).
  • COS-DB system 102, whose components are depicted via dotted lines, includes a diff block generator 112 and uploader agent 114 in source data center 104 and an uploader server 116, a garbage collector 118, a first metadata database 120 (comprising a versioned data set map 122, a chunk map 124, and a segment usage table (SUT) 126), and a second metadata database 128 in cloud compute and block storage platform 110.
  • the primary objective of COS-DB system 102 is to back up, on an ongoing basis, a data set X (reference numeral 130) maintained at source data center 104 to cloud object storage platform 108 for disaster recovery, long-term retention, and/or other purposes.
  • Data set X may be, e.g., a virtual disk file, a Kubernetes persistent volume, a virtual storage area network (vSAN) object, or any other logical collection of data.
  • Diff block generator 112 , uploader agent 114 , and uploader server 116 are components of COS-DB system 102 that work in concert to upload snapshots of data set X from source data center 104 to cloud object storage platform 108 , thereby backing up data set X in platform 108 .
  • FIG. 2 depicts a workflow 200 that can be executed by components 112 - 116 for uploading a given snapshot S of X to platform 108 according to certain embodiments.
  • diff block generator 112 can identify data blocks in data set X that have changed since the creation/upload of the last snapshot for X and can provide these modified data blocks, along with their logical block addresses (LBAs), to uploader agent 114 . In the case where no snapshot has previously been created/uploaded for data set X, diff block generator 112 can provide all data blocks of X to uploader agent 114 at step 204 .
  • uploader agent 114 can receive the data block information from diff block generator 112 and assemble it into a snapshot S composed of, e.g., <LBA, data block> tuples. Uploader agent 114 can then take a portion of snapshot S that fits into a fixed-size data object conforming to the object format of cloud object storage platform 108 (referred to herein as a “log segment”), package that portion into a log segment L (step 208), and upload (i.e., write) log segment L to cloud object storage platform 108 (step 210).
  • uploader agent 114 performs the upload of these segments in a log-structured manner, such that they do not overwrite existing log segments which contain data for overlapping LBAs of data set X. Stated another way, uploader agent 114 uploads/writes every log segment as an entirely new object in cloud object storage platform 108, regardless of whether it includes LBAs that overlap previously uploaded/written log segments.
  • uploader agent 114 can communicate metadata pertaining to L to uploader server 116 (step 212 ).
  • This metadata can include a first set of metadata that is usable to generate mappings between the snapshot data blocks included in L and L itself (e.g., an ID of data set X, an ID of snapshot S, the LBA of each data block, an ID of log segment L, etc.) and a second set of metadata comprising certain bookkeeping information (e.g., user authentication information, upload timestamp of L, etc.).
  • uploader server 116 can convert the first set of metadata into a first set of metadata entries that conform to the schemas of versioned data set map 122 , chunk map 124 , and SUT 126 and can write the first set of entries to these maps/tables (step 214 ).
  • Uploader server 116 can also convert the second set of metadata into a second set of metadata entries that conform to the schema of second metadata database 128 and write the second set of entries to database 128 (step 216 ).
  • uploader server 116 can check whether there are any remaining portions of snapshot S that have not yet been uploaded. If the answer is yes, uploader server 116 can return an acknowledgement to uploader agent 114 that metadata databases 120 and 128 have been updated with the metadata for log segment L (step 220 ), thereby causing workflow 200 to return to step 208 (so that uploader agent 114 can package the next portion of S into a new log segment for uploading).
  • uploader server 116 can return a final acknowledgement to uploader agent 114 indicating that the upload of snapshot S and all of its metadata is complete (step 222 ) and workflow 200 can end.
  • FIGS. 3A, 3B, and 3C depict three example snapshots of data set X (i.e., snap1 (reference numeral 300), snap2 (reference numeral 310), and snap3 (reference numeral 320)) that may be uploaded to cloud object storage platform 108 in accordance with workflow 200 and the log segments that may be created in platform 108 per step 210 of the workflow.
  • snapshot snap1 includes twenty data blocks having LBAs L0-L19 and the upload of this snapshot creates four log segments in cloud object storage platform 108 (assuming a max segment size of five data blocks): seg1 (reference numeral 302 ) comprising data blocks L0-L4 of snap1, seg2 (reference numeral 304 ) comprising data blocks L5-L9 of snap1, seg3 (reference numeral 306 ) comprising data blocks L10-L14 of snap1, and seg4 (reference numeral 308 ) comprising data blocks L15-L19 of snap1.
  • snapshot snap2 includes five data blocks L1-L3, L5, and L6 (which represent the content of data set X that has changed since snap1) and the upload of snap2 creates one additional log segment in cloud object storage platform 108 : seg5 (reference numeral 312 ) comprising data blocks L1-L3, L5, and L6 of snap2.
  • the prior versions of data blocks L1-L3, L5, and L6 associated with snap1 and included in existing log segments seg1 and seg2 are not overwritten by the upload of snap2; however, these prior data block versions are considered “superseded” by snap2 because they no longer reflect the current data content of LBAs L1-L3, L5, and L6.
  • snapshot snap3 includes nine data blocks L5-L10 and L17-L19 (which represent the content of data set X that has changed since snap2) and the upload of snap3 creates two additional log segments in cloud object storage platform 108 : seg6 (reference numeral 322 ) comprising data blocks L5-L9 of snap3 and seg7 (reference numeral 324 ) comprising data blocks L10 and L17-L19 of snap3.
  • listings 1-3 below present example metadata entries that may be populated by uploader server 116 in versioned data set map 122, chunk map 124, and SUT 126 respectively (per step 214 of workflow 200) as a result of the uploading of snap1, snap2, and snap3.
  • the metadata entries presented here can be understood as mapping the data blocks/LBAs of snap1, snap2, and snap3 (which are all different versions of data set X) to the log segments in which they are stored (i.e., seg1-seg7) per FIGS. 3A-3C.
  • the particular schema employed by these metadata entries comprises a first mapping between each snapshot data block LBA and a “chunk ID” (e.g., C1) via versioned data set map 122 and a second mapping between each chunk ID and a log segment ID (e.g., seg1) via chunk map 124 .
  • This schema provides a level of indirection between the snapshot data blocks and their log segment locations, which allows for more efficient implementation of certain features in COS-DB system 102 such as data deduplication.
  • the chunk ID attribute can be removed and each snapshot data block LBA can be directly mapped to its corresponding log segment ID.
  • the metadata entries presented in listings 1 and 2 make use of a range value (i.e., “N20”, “N5,” etc.) that effectively compresses multiple consecutive metadata entries in maps 122 and 124 into a single entry.
  • the first metadata entry shown in listing 1 includes the range value “N20,” which indicates that this entry actually represents twenty metadata entries in versioned data set map 122 with sequentially increasing LBAs and chunk IDs (see Listing 4 below).
  • the first metadata entry shown in listing 2 includes the range value “N5,” which indicates that this entry actually represents five metadata entries in chunk map 124 with sequentially increasing chunk IDs (see Listing 5 below).
  • a “live” data block is one that is currently a part of, or referenced by, an existing (i.e., non-deleted) snapshot in cloud object storage platform 108 .
  • seg1 has five live data blocks because it includes data blocks L0-L4 of snap1, which is an existing snapshot in platform 108 per the upload operation depicted in FIG. 3 A .
  • a “dead” data block is one that is not currently a part of, or referenced by, an existing snapshot in cloud object storage platform 108 (and thus can be deleted). The significance of this live/dead distinction is discussed with respect to garbage collector 118 below.
  • a dead data block is one that is not part of, or referenced by, any existing (i.e., non-deleted) snapshot in cloud object storage platform 108 , and thus should ideally be deleted to free the storage space it consumes.
  • garbage collector 118 of COS-DB system 102 can periodically carry out a garbage collection (also known as “segment cleaning”) process to identify and delete dead data blocks from the log segments maintained in cloud object storage platform 108 .
  • FIG. 4 depicts a workflow 400 of this garbage collection process according to certain embodiments.
  • Workflow 400 assumes that, at the time a given snapshot is deleted from cloud object storage platform 108 , the metadata entries mapping the data blocks of that snapshot to their corresponding log segments are removed from versioned data set map 122 and chunk map 124 . Workflow 400 also assumes that the SUT entries of the affected segments in SUT 126 are updated to reflect an appropriately reduced live data block count for those log segments.
  • garbage collector 118 can enter a loop for each log segment in SUT 126 and determine, from the log segment's SUT entry, whether the log segment's “utilization rate” (i.e., its number of live data blocks divided by its number of total data blocks) is less than or equal to some low watermark (e.g., 50%). If the answer is yes, garbage collector 118 can add that log segment to a list of “candidate” log segments that will be garbage collected (step 406 ). If the answer is no, garbage collector 118 can take no action. Garbage collector 118 can then reach the end of the current loop iteration (step 408 ) and repeat the foregoing steps for each additional log segment in SUT 126 .
  • garbage collector 118 can enter a loop for each candidate log segment identified per step 406 (step 410 ) and another loop for each data block of the candidate log segment (step 412 ). Within the data block loop, garbage collector 118 can read the chunk ID of the data block (step 414 ) and check whether the data block's chunk ID exists in chunk map 124 and points to the current candidate log segment within the chunk map (step 416 ). If the answer is yes, garbage collector 118 can conclude that the current data block is a live data block and add the data block's LBA to a list of live data blocks (step 418 ).
  • garbage collector 118 can conclude that the current data block is a dead data block and take no action. Garbage collector 118 can then reach the end of the current iteration for the data block loop (step 420 ) and repeat steps 412 - 420 until all data blocks within the current candidate log segment have been processed.
  • garbage collector 118 can write out all of the live data blocks identified for the current candidate log segment (per step 418 ) to a new log segment, delete the current candidate log segment, and set the ID of the new log segment created at block 422 to the ID of the (now deleted) current candidate log segment, thereby effectively “shrinking” the current candidate log segment to include only its live data blocks (and exclude the dead data blocks).
  • Garbage collector 118 can also update the total data block count for the current candidate log segment in SUT 126 accordingly (step 428 ).
  • garbage collector 118 can reach the end of the current iteration of the candidate log segment loop and repeat steps 410 - 430 for the next candidate log segment. Once all candidate log segments have been processed, workflow 400 can end.
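  • By way of illustration only, the sketch below condenses the segment-cleaning pass of workflow 400 into Python. The dict-based stand-ins (segments, chunk_map, sut) and the 50% watermark constant are assumptions made for readability, not the actual COS-DB schemas or tunables.

```python
LOW_WATERMARK = 0.5  # utilization rate at or below which a segment is cleaned

def garbage_collect(segments, chunk_map, sut):
    """Simplified, in-memory rendition of workflow 400.

    segments:  segment ID -> list of (lba, chunk_id, data) tuples
    chunk_map: chunk ID -> segment ID (stand-in for chunk map 124)
    sut:       segment ID -> (live, total) counts (stand-in for SUT 126)
    """
    # Steps 402-408: collect candidate segments whose utilization rate
    # (live / total) is at or below the low watermark.
    candidates = [seg_id for seg_id, (live, total) in sut.items()
                  if total and live / total <= LOW_WATERMARK]

    for seg_id in candidates:
        # Steps 412-420: a block is live iff its chunk ID still exists in
        # the chunk map and still points at this candidate segment.
        live_blocks = [(lba, chunk_id, data)
                       for (lba, chunk_id, data) in segments[seg_id]
                       if chunk_map.get(chunk_id) == seg_id]

        # Steps 422-428: write the live blocks to a new segment, delete the
        # candidate, give the new segment the old segment's ID, and update
        # the SUT's total block count accordingly.
        segments[seg_id] = live_blocks
        sut[seg_id] = (len(live_blocks), len(live_blocks))

    return candidates
```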
  • COS-DB system 102 can more efficiently execute certain snapshot management operations.
  • cloud compute/block storage platform 110 typically provides a lower degree of durability than cloud object storage platform 108 .
  • this configuration can lead to a scenario in which the metadata of the snapshots of data set X becomes lost (due to, e.g., a failure in platform 110 that causes metadata databases 120 and 128 to become unreadable), while the data content of the snapshots remains accessible via cloud object storage platform 108. If metadata databases 120 and 128 cannot be rebuilt/recovered in this scenario, the snapshots will be rendered unusable (as the metadata needed to understand the structure and organization of the snapshots will be gone).
  • FIG. 5 depicts a system environment 500 comprising an enhanced version of COS-DB system 102 of FIG. 1 (i.e., COS-DB system 502 ) that includes a modified uploader agent 504 , a modified uploader server 506 , and a novel metadata recovery agent 508 .
  • metadata recovery agent 508 is shown as running on cloud compute and block storage platform 110 ; however, in alternative embodiments metadata recovery agent 508 may run at other locations/systems, such as at source data center 104 or some other component/platform of cloud infrastructure 106 .
  • uploader agent 504 and uploader server 506 can carry out an enhanced snapshot upload process that involves (1) including, by uploader agent 504 in each log segment uploaded to cloud object storage platform 108 , metadata usable to reconstruct the metadata entries in versioned data set map 122 , chunk map 124 , and SUT 126 of first metadata database 120 , and (2) replicating, by uploader server 506 via a hybrid “asynchronous/synchronous” approach, a transaction log of second metadata database 128 to a remote site.
  • This hybrid asynchronous/synchronous approach can comprise “asynchronously” replicating changes to the transaction log during the majority of the snapshot upload (i.e., replicating the transaction log changes in the background, without blocking upload progress), but “synchronously” replicating final changes to the transaction log (i.e., waiting for an acknowledgement from the remote site that those final changes have been successfully replicated, before sending an acknowledgement to uploader agent 504 that the snapshot upload is complete).
  • metadata recovery agent 508 can execute a metadata recovery process that involves (1) rebuilding first metadata database 120 (and constituent maps/tables 122-126) by reading the log segments stored in cloud object storage platform 108 and extracting the metadata included in each log segment, and (2) rebuilding second metadata database 128 by retrieving the replicated transaction log from the remote site and replaying it.
  • COS-DB system 502 can efficiently recover the contents of metadata databases 120 and 128 in cloud compute and block storage platform 110, thereby addressing the durability concerns of platform 110. For example, by incorporating appropriate metadata information in each log segment uploaded to cloud object storage platform 108, COS-DB system 502 can reconstruct databases 120 and 128 directly from those log segments.
  • COS-DB system 502 can carry out this replication in a manner that (1) has relatively low performance impact (because there is no need to wait for the remote transaction log to be updated each time the local transaction log is updated during the snapshot upload), and (2) is crash consistent (because by synchronizing the completion of snapshot upload to the completion of transaction log replication, the snapshot metadata maintained by uploader agent 504 at source data center 104 will not be discarded before the transaction log is fully replicated).
  • the foregoing techniques can advantageously enable the implementation of new metadata designs/schemas for databases 120 and 128 in a seamless manner. For example, if a new metadata design/schema is desired for versioned data set map 122 , chunk map 124 , and/or SUT 126 of first metadata database 120 , new versions of those maps/tables can be constructed from the log segments in cloud object storage platform 108 , without affecting the operation of existing maps/tables 122 - 126 . Then, once the construction of those new versions is complete, COS-DB system 502 can simply switch over to using the new maps/tables.
  • FIG. 5 is illustrative and not intended to limit embodiments of the present disclosure.
  • FIG. 5 depicts a particular arrangement of entities/components within operating environment 500 and COS-DB system 502 , other arrangements are possible (e.g., the functionality attributed to one entity/component may be split into multiple entities/components, certain entities/components may be combined, etc.).
  • each entity/component may include sub-components or implement functionality that is not specifically described.
  • One of ordinary skill in the art will recognize other variations, modifications, and alternatives.
  • FIG. 6 depicts an enhanced version of workflow 200 of FIG. 2 (i.e., workflow 600 ) that can be executed by diff block generator 112 , uploader agent 504 , and uploader server 506 of FIG. 5 for uploading a given snapshot S of data set X to cloud object storage platform 108 in accordance with the metadata recovery techniques of the present disclosure.
  • Workflow 600 assumes that second metadata database 128 in cloud compute and block storage platform 110 implements a transaction log (sometimes referred to as a “recovery log” or “binary log”) that records historical transactions applied to database 128 and can be replayed to rebuild the contents of database 128 in the case of a crash or other failure.
  • diff block generator 112 can identify data blocks in data set X that have changed since the creation/upload of the last snapshot for X and can provide these modified data blocks, along with their LBAs, to uploader agent 504 . In the case where no snapshot has previously been created/uploaded for data set X, diff block generator 112 can provide all data blocks of X to uploader agent 504 at step 604 .
  • uploader agent 504 can receive the data block information from diff block generator 112 and assemble it into a snapshot S composed of, e.g., <LBA, data block> tuples. Uploader agent 504 can then package a portion of snapshot S into a log segment L (step 608) and upload L to cloud object storage platform 108 (step 610).
  • uploader agent 504 can include metadata in L that is usable for creating corresponding metadata entries in versioned data set map 122 , chunk map 124 , and SUT 126 of first metadata database 120 .
  • uploader agent 504 can include in L the ID of data set X (i.e., the data set being backed up via L), the ID of L, and the LBA, snapshot ID, and chunk ID of each data block in L.
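  • As a rough sketch of such a self-describing segment, the snippet below packages per-block recovery metadata next to the block data. The JSON header layout and the field names are illustrative assumptions, not the actual on-object format used by the system.

```python
import json

def build_log_segment(dataset_id, snapshot_id, segment_id, blocks):
    """Package data blocks plus the metadata needed to later rebuild the
    versioned data set map, chunk map, and SUT entries for this segment.

    `blocks` is an iterable of (lba, chunk_id, data_bytes) tuples; the
    serialization (a JSON header line followed by the concatenated block
    data) is a hypothetical choice made only for readability.
    """
    header = {
        "dataset_id": dataset_id,
        "segment_id": segment_id,
        "blocks": [
            {"lba": lba, "snapshot_id": snapshot_id, "chunk_id": chunk_id}
            for (lba, chunk_id, _data) in blocks
        ],
    }
    payload = b"".join(data for (_lba, _chunk_id, data) in blocks)
    return json.dumps(header).encode("utf-8") + b"\n" + payload
```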
  • uploader agent 504 can communicate metadata pertaining to L to uploader server 506 (step 612 ).
  • This metadata can include a first set of metadata that is similar or identical to the metadata incorporated into L at step 608 and a second set of metadata comprising bookkeeping information such as user authentication information, an upload timestamp of S, and so on.
  • uploader server 506 can convert the first set of metadata into a first set of metadata entries that conform to the schemas of versioned data set map 122 , chunk map 124 , and SUT 126 and can write the first set of entries to these maps/tables (step 614 ).
  • Uploader server 506 can also convert the second set of metadata into a second set of metadata entries that conform to the schema of second metadata database 128 and write the second set of entries to database 128 (step 616 ).
  • uploader server 506 can check whether there are any remaining portions of snapshot S that have not been uploaded yet. If the answer is yes, uploader server 506 can return an acknowledgement to uploader agent 504 that metadata databases 120 and 128 have been updated with the metadata for log segment L (step 620 ), thereby causing workflow 600 to return to step 608 (so that uploader agent 504 can package the next portion of S into a new log segment for uploading). After sending this acknowledgement, a background process of uploader server 506 can, at some later time, replicate changes in the transaction log of second metadata database 128 caused by the updating of database 128 at step 616 to a remote site.
  • uploader server 506 can replicate all of the remaining changes in the transaction log to the remote site (i.e., all of the changes that have not yet been replicated) and wait for an acknowledgement from the remote site that the replication is complete/successful (step 622 ). In this way, uploader server 506 can ensure that the copy of the transaction log at the remote site is consistent with the copy in cloud compute and block storage platform 110 . Upon receiving this acknowledgment from the remote site, uploader server 506 can return a final acknowledgement to uploader agent 504 that the upload of snapshot S and its metadata is complete (step 624 ) and workflow 600 can end.
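  • The hybrid behavior of steps 616-624 can be sketched as follows. The send transport callable and the buffering scheme are assumed stand-ins; in this simplified model, "asynchronous" only means the shipping happens off the upload's critical path, while "synchronous" means the final flush completes before the final acknowledgement is returned.

```python
class TransactionLogReplicator:
    """Sketch of the hybrid asynchronous/synchronous replication of the
    transaction log of second metadata database 128 (workflow 600)."""

    def __init__(self, send):
        # `send` is a hypothetical callable(records) that blocks until the
        # remote site acknowledges receipt of the records.
        self._send = send
        self._unreplicated = []

    def log_change(self, record):
        # Steps 616/620: record the change locally and return immediately,
        # so the next log segment upload is not blocked on the remote site.
        self._unreplicated.append(record)

    def replicate_async(self):
        # Invoked by a background process at some later time: ship whatever
        # changes have accumulated so far.
        batch, self._unreplicated = self._unreplicated, []
        if batch:
            self._send(batch)

    def replicate_sync_final(self):
        # Step 622: ship every remaining change and wait for the remote
        # acknowledgement before the final ack goes back to the uploader
        # agent (step 624), keeping the remote log copy consistent.
        self.replicate_async()
```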
  • FIG. 7 depicts a workflow 700 that can be executed by metadata recovery agent 508 of FIG. 5 for recovering metadata databases 120 and 128 in cloud compute and block storage platform 110 in the scenario where these databases (or portions thereof) are lost due to a failure.
  • Workflow 700 assumes that the snapshots/log segments to which the metadata in databases 120 and 128 pertain are accessible via cloud object storage platform 108 .
  • metadata recovery agent 508 can retrieve the copy of the transaction log of second metadata database 128 maintained at the remote site and can rebuild the metadata entries of database 128 by replaying the retrieved transaction log.
  • metadata recovery agent 508 can enter a loop for each log segment maintained in cloud object storage platform 108 . Within this loop, metadata recovery agent 508 can extract the metadata included in the log segment per step 608 of workflow 600 (step 708 ). As mentioned previously, this metadata can include the data set ID, snapshot ID, LBA, and chunk ID of each data block included in the log segment, the ID of the log segment itself, and so on.
  • metadata recovery agent 508 can rebuild the metadata entries of the maps/tables in first metadata database 120 (i.e., versioned data set map 122 , chunk map 124 , and SUT 126 ) using the log segment metadata extracted at step 708 .
  • metadata recovery agent 508 can create, for each data block in the log segment, an entry in map 122 mapping the data block's data set ID, snapshot ID, and LBA to its chunk ID.
  • For chunk map 124, metadata recovery agent 508 can create, for each data block in the log segment, an entry in map 124 mapping the data block's chunk ID to the log segment ID.
  • At step 712, metadata recovery agent 508 can reach the end of the current loop iteration and return to step 706 to process additional log segments. Once all of the log segments in cloud object storage platform 108 have been processed, workflow 700 can end.
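  • A condensed sketch of workflow 700 follows. The four callables and the dict-based maps mirror the earlier sketches and are illustrative assumptions rather than the system's real interfaces; in particular, counting every recovered block as live in the SUT is a simplification.

```python
def recover_metadata(list_segments, read_segment_header,
                     fetch_remote_txn_log, replay_txn_log):
    """Rebuild both metadata databases after a cloud block storage failure.

    `list_segments` and `read_segment_header` read the log segments in
    cloud object storage; `fetch_remote_txn_log` and `replay_txn_log`
    recover second metadata database 128 from the replicated transaction
    log. All four are hypothetical callables.
    """
    # Steps 702-704: rebuild second metadata database 128 by replaying the
    # transaction log copy kept at the remote site.
    replay_txn_log(fetch_remote_txn_log())

    # Steps 706-712: rebuild versioned data set map 122, chunk map 124, and
    # SUT 126 from the metadata embedded in every log segment.
    versioned_map, chunk_map, sut = {}, {}, {}
    for seg_id in list_segments():
        header = read_segment_header(seg_id)  # metadata written at step 608
        for blk in header["blocks"]:
            key = (header["dataset_id"], blk["snapshot_id"], blk["lba"])
            versioned_map[key] = blk["chunk_id"]
            chunk_map[blk["chunk_id"]] = seg_id
        n = len(header["blocks"])
        sut[seg_id] = (n, n)  # simplification: treat every block as live
    return versioned_map, chunk_map, sut
```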
  • Certain embodiments described herein can employ various computer-implemented operations involving data stored in computer systems. For example, these operations can require physical manipulation of physical quantities—usually, though not necessarily, these quantities take the form of electrical or magnetic signals, where they (or representations of them) are capable of being stored, transferred, combined, compared, or otherwise manipulated. Such manipulations are often referred to in terms such as producing, identifying, determining, comparing, etc. Any operations described herein that form part of one or more embodiments can be useful machine operations.
  • one or more embodiments can relate to a device or an apparatus for performing the foregoing operations.
  • the apparatus can be specially constructed for specific required purposes, or it can be a generic computer system comprising one or more general purpose processors (e.g., Intel or AMD x86 processors) selectively activated or configured by program code stored in the computer system.
  • general purpose processors e.g., Intel or AMD x86 processors
  • various generic computer systems may be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
  • the various embodiments described herein can be practiced with other computer system configurations including handheld devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like.
  • one or more embodiments can be implemented as one or more computer programs or as one or more computer program modules embodied in one or more non-transitory computer readable storage media.
  • the term non-transitory computer readable storage medium refers to any data storage device that can store data which can thereafter be input to a computer system.
  • the non-transitory computer readable media may be based on any existing or subsequently developed technology for embodying computer programs in a manner that enables them to be read by a computer system.
  • Examples of non-transitory computer readable media include a hard drive, network attached storage (NAS), read-only memory, random-access memory, flash-based nonvolatile memory (e.g., a flash memory card or a solid state disk), persistent memory, an NVMe device, a CD (Compact Disc) (e.g., CD-ROM, CD-R, CD-RW, etc.), a DVD (Digital Versatile Disc), a magnetic tape, and other optical and non-optical data storage devices.
  • the non-transitory computer readable media can also be distributed over a network coupled computer system so that the computer readable code is stored and executed in a distributed fashion.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • Computing Systems (AREA)
  • Library & Information Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Techniques for recovering metadata associated with data backed up in cloud object storage are provided. In one set of embodiments, a computer system can create a snapshot of a data set, where the snapshot includes a plurality of data blocks of the data set that have been modified since the creation of a prior snapshot of the data set. The computer system can further upload the snapshot to a cloud object storage platform of a cloud infrastructure, where the snapshot is uploaded as a plurality of log segments conforming to an object format of the cloud object storage platform, and where each log segment includes one or more data blocks in the plurality of data blocks, and a set of metadata comprising, for each of the one or more data blocks, an identifier of the data set, an identifier of the snapshot, and a logical block address (LBA) of the data block. The computer system can then communicate the set of metadata to a server component running in a cloud compute and block storage platform of the cloud infrastructure.

Description

    CROSS-REFERENCES TO RELATED APPLICATIONS
  • The present application is a continuation of U.S. patent application Ser. No. 17/002,669, filed Aug. 25, 2020 and entitled “Recovering the Metadata of Data Backed Up in the Cloud Storage.” The entire contents of this application are incorporated herein by reference for all purposes.
  • BACKGROUND
  • Object storage is a data storage model that manages data in the form of logical containers known as objects, rather than in the form of files (as in file storage) or blocks (as in block storage). Cloud object storage is an implementation of object storage that maintains these objects on a cloud infrastructure, which is a server infrastructure that is accessible via the Internet. Due to its high scalability, high durability, and relatively low cost, cloud object storage is commonly used by companies to back up large volumes of data for disaster recovery and long-term retention/archival. The software systems that are employed to create and manage these backups are referred to herein as cloud object storage-based data backup (COS-DB) systems.
  • In some COS-DB systems, the process of backing up a data set to a cloud object storage platform involves (1) uploading incremental point-in-time versions (i.e., snapshots) of the data set to the cloud object storage platform and (2) uploading associated metadata (which identifies, among other things, the storage objects (e.g., “log segments”) used to hold the data of each snapshot) to a separate cloud block storage platform. By maintaining snapshot data and metadata in these two different storage platforms (and via different types of data structures), a COS-DB system can more efficiently execute certain snapshot management operations.
  • However, cloud block storage generally offers lower durability than cloud object storage, which makes the metadata stored in cloud block storage more vulnerable to data loss. For example, in the case of Amazon's AWS cloud infrastructure, its cloud block storage platform (i.e., Elastic Block Store (EBS)) guarantees approximately “three nines” of durability, which means there is approximately a 0.1% chance that a customer will lose an EBS volume within a single year. In contrast, Amazon's cloud object storage platform (i.e., Simple Storage Service (S3)) guarantees “eleven nines” of durability, which means there is only a 0.000000001% chance that a customer will lose an S3 object in a single year.
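  • As a quick back-of-the-envelope check (not part of the original text), the annual loss probability implied by "n nines" of durability is simply 10^(-n):

```python
# Annual probability of loss implied by "n nines" of durability
# (e.g., three nines = 99.9% durable over one year).
def annual_loss_probability(nines: int) -> float:
    return 10.0 ** -nines

print(f"{annual_loss_probability(3):.1%}")   # 0.1%  (EBS-like, three nines)
print(f"{annual_loss_probability(11):.9%}")  # 0.000000001%  (S3-like, eleven nines)
```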
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 depicts an operating environment and example cloud object storage-based data backup (COS-DB) system according to certain embodiments.
  • FIG. 2 depicts a snapshot upload workflow according to certain embodiments.
  • FIGS. 3A, 3B, and 3C depict example snapshot upload scenarios.
  • FIG. 4 depicts a garbage collection workflow according to certain embodiments.
  • FIG. 5 depicts an enhanced version of the COS-DB system of FIG. 1 according to certain embodiments.
  • FIG. 6 depicts an enhanced snapshot upload workflow according to certain embodiments.
  • FIG. 7 depicts a metadata recovery workflow according to certain embodiments.
  • DETAILED DESCRIPTION
  • In the following description, for purposes of explanation, numerous examples and details are set forth in order to provide an understanding of various embodiments. It will be evident, however, to one skilled in the art that certain embodiments can be practiced without some of these details or can be practiced with modifications or equivalents thereof.
  • 1. Overview
  • Embodiments of the present disclosure are directed to techniques that can be implemented by a COS-DB system for recovering metadata associated with data backed up in a cloud object storage platform. In one set of embodiments, the COS-DB system can upload, as a series of log segments, a snapshot of the data set to the cloud object storage platform, where each log segment in the series includes one or more data blocks in the snapshot and a first set of metadata usable to generate mappings between the one or more data blocks and the log segment. For example, this first set of metadata can include, for each data block in the log segment, an identifier (ID) of the data set, an identifier of the snapshot, and a logical block address (LBA) of the data block. In addition, as part of the snapshot upload process, the COS-DB system can (1) populate the mappings between data blocks and log segments in a first metadata database maintained in a cloud block storage platform, (2) populate a second set of metadata pertaining to the snapshot in a second metadata database in the cloud block storage platform, and (3) using a hybrid “asynchronous/synchronous” approach, replicate a transaction log of the second metadata database to a remote site.
  • Then, at the time of a failure in the cloud block storage platform that causes the first metadata database to be “lost” (e.g., corrupted, deleted, or otherwise unreadable), the COS-DB system can carry out a recovery process that involves reading the log segments in the cloud object storage platform, extracting the first set of metadata included in each log segment, and rebuilding the contents of the first metadata database using the extracted information. Further, at the time of a failure in the cloud block storage platform that causes the second metadata database to be lost, the COS-DB system can carry out a recovery process that involves retrieving the replicated transaction log from the remote site and rebuilding the contents of the second metadata database using the retrieved transaction log.
  • The foregoing and other aspects of the present disclosure are described in further detail below.
  • 2. Operating Environment and COS-DB System Architecture
  • FIG. 1 depicts an operating environment 100 and an example COS-DB system 102 in which embodiments of the present disclosure may be implemented. As shown, operating environment 100 includes a source data center 104 that is communicatively coupled with a cloud infrastructure 106 comprising a cloud object storage platform 108 and a cloud compute and block storage platform 110. Examples of cloud object storage platform 108 include Amazon S3, Microsoft Azure Blob Storage, and Google Cloud Storage. Examples of cloud compute and block storage platform 110 include Amazon Elastic Compute Cloud (EC2) and Elastic Block Store (EBS), Microsoft Azure Virtual Machines (VMs) and Managed Disks (MDs), and Google Compute Engine (CE) and Persistent Disks (PDs).
  • COS-DB system 102—whose components are depicted via dotted lines—includes a diff block generator 112 and uploader agent 114 in source data center 104 and an uploader server 116, a garbage collector 118, a first metadata database 120 (comprising a versioned data set map 122, a chunk map 124, and a segment usage table (SUT) 126), and a second metadata database 128 in cloud compute and block storage platform 110. The primary objective of COS-DB system 102 is to back up, on an ongoing basis, a data set X (reference numeral 130) maintained at source data center 104 to cloud object storage platform 108 for disaster recovery, long-term retention, and/or other purposes. Data set X may be, e.g., a virtual disk file, a Kubernetes persistent volume, a virtual storage area network (vSAN) object, or any other logical collection of data. The following sub-sections provide brief descriptions of components 112-128 and how they enable COS-DB system 102 to achieve this objective.
  • 2.1 Diff Block Generator, Uploader Agent, and Uploader Server
  • Diff block generator 112, uploader agent 114, and uploader server 116 are components of COS-DB system 102 that work in concert to upload snapshots of data set X from source data center 104 to cloud object storage platform 108, thereby backing up data set X in platform 108. FIG. 2 depicts a workflow 200 that can be executed by components 112-116 for uploading a given snapshot S of X to platform 108 according to certain embodiments.
  • Starting with steps 202 and 204, diff block generator 112 can identify data blocks in data set X that have changed since the creation/upload of the last snapshot for X and can provide these modified data blocks, along with their logical block addresses (LBAs), to uploader agent 114. In the case where no snapshot has previously been created/uploaded for data set X, diff block generator 112 can provide all data blocks of X to uploader agent 114 at step 204.
  • At step 206, uploader agent 114 can receive the data block information from diff block generator 112 and assemble it into a snapshot S composed of, e.g., <LBA, data block> tuples. Uploader agent 114 can then take a portion of snapshot S that fits into a fixed-size data object conforming to the object format of cloud object storage platform 108 (referred to herein as a “log segment”), package that portion into a log segment L (step 208), and upload (i.e., write) log segment L to cloud object storage platform 108 (step 210). As suggested by the name “log segment,” uploader agent 114 performs the upload of these segments in a log-structured manner, such that they do not overwrite existing log segments which contain data for overlapping LBAs of data set X. Stated another way, uploader agent 114 uploads/writes every log segment as an entirely new object in cloud object storage platform 108, regardless of whether it includes LBAs that overlap previously uploaded/written log segments.
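  • The packaging loop of steps 206-212 can be sketched as below (the metadata hand-off of step 212 is described in the next paragraph). The five-block segment size mirrors the FIG. 3 examples, and put_object / notify_uploader_server are hypothetical callables standing in for the cloud object storage API and the uploader agent-to-uploader server RPC.

```python
MAX_BLOCKS_PER_SEGMENT = 5  # matches the FIG. 3 examples; real value differs

def upload_snapshot(dataset_id, snapshot_id, changed_blocks,
                    put_object, notify_uploader_server):
    """Split a snapshot (a list of (lba, data) tuples from the diff block
    generator) into fixed-size log segments and upload each one as a brand
    new object, log-structured style."""
    for i in range(0, len(changed_blocks), MAX_BLOCKS_PER_SEGMENT):
        portion = changed_blocks[i:i + MAX_BLOCKS_PER_SEGMENT]
        seg_no = i // MAX_BLOCKS_PER_SEGMENT
        segment_id = f"{dataset_id}-{snapshot_id}-seg{seg_no}"
        # Step 210: every segment is written as a new object; existing
        # segments holding older versions of the same LBAs are never
        # overwritten.
        put_object(segment_id, portion)
        # Step 212: hand the per-segment metadata to the uploader server so
        # it can populate metadata databases 120 and 128.
        notify_uploader_server(dataset_id, snapshot_id, segment_id,
                               [lba for (lba, _data) in portion])
```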
  • Upon (or concurrently with) uploading log segment L at step 210, uploader agent 114 can communicate metadata pertaining to L to uploader server 116 (step 212). This metadata can include a first set of metadata that is usable to generate mappings between the snapshot data blocks included in L and L itself (e.g., an ID of data set X, an ID of snapshot S, the LBA of each data block, an ID of log segment L, etc.) and a second set of metadata comprising certain bookkeeping information (e.g., user authentication information, upload timestamp of L, etc.). In response, uploader server 116 can convert the first set of metadata into a first set of metadata entries that conform to the schemas of versioned data set map 122, chunk map 124, and SUT 126 and can write the first set of entries to these maps/tables (step 214). Uploader server 116 can also convert the second set of metadata into a second set of metadata entries that conform to the schema of second metadata database 128 and write the second set of entries to database 128 (step 216).
  • At step 218, uploader server 116 can check whether there are any remaining portions of snapshot S that have not yet been uploaded. If the answer is yes, uploader server 116 can return an acknowledgement to uploader agent 114 that metadata databases 120 and 128 have been updated with the metadata for log segment L (step 220), thereby causing workflow 200 to return to step 208 (so that uploader agent 114 can package the next portion of S into a new log segment for uploading).
  • However, if the answer at step 218 is no, uploader server 116 can return a final acknowledgement to uploader agent 114 indicating that the upload of snapshot S and all of its metadata is complete (step 222) and workflow 200 can end.
  • To clarify the foregoing, FIGS. 3A, 3B, and 3C depict three example snapshots of data set X (i.e., snap1 (reference numeral 300), snap2 (reference numeral 310), and snap3 (reference numeral 320)) that may be uploaded to cloud object storage platform 108 in accordance with workflow 200 and the log segments that may be created in platform 108 per step 210 of the workflow. As shown in FIG. 3A, snapshot snap1 includes twenty data blocks having LBAs L0-L19 and the upload of this snapshot creates four log segments in cloud object storage platform 108 (assuming a max segment size of five data blocks): seg1 (reference numeral 302) comprising data blocks L0-L4 of snap1, seg2 (reference numeral 304) comprising data blocks L5-L9 of snap1, seg3 (reference numeral 306) comprising data blocks L10-L14 of snap1, and seg4 (reference numeral 308) comprising data blocks L15-L19 of snap1.
  • Further, as shown in FIG. 3B, snapshot snap2 includes five data blocks L1-L3, L5, and L6 (which represent the content of data set X that has changed since snap1) and the upload of snap2 creates one additional log segment in cloud object storage platform 108: seg5 (reference numeral 312) comprising data blocks L1-L3, L5, and L6 of snap2. Note that the prior versions of data blocks L1-L3, L5, and L6 associated with snap1 and included in existing log segments seg1 and seg2 are not overwritten by the upload of snap2; however, these prior data block versions are considered “superseded” by snap2 because they no longer reflect the current data content of LBAs L1-L3, L5, and L6.
  • Yet further, as shown in FIG. 3C, snapshot snap3 includes nine data blocks L5-L10 and L17-L19 (which represent the content of data set X that has changed since snap2) and the upload of snap3 creates two additional log segments in cloud object storage platform 108: seg6 (reference numeral 322) comprising data blocks L5-L9 of snap3 and seg7 (reference numeral 324) comprising data blocks L10 and L17-L19 of snap3. Like the scenario of snap2, the prior versions of data blocks L5-L10 and L17-L19 remain in their existing log segments but are considered superseded by the new versions associated with snap3.
  • As a supplement to FIGS. 3A-3C, listings 1-3 below present example metadata entries that may be populated by uploader server 116 in versioned data set map 122, chunk map 124, and SUT 126 respectively (per step 214 of workflow 200) as a result of the uploading of snap1, snap2, and snap3:
  • Listing 1: Metadata Populated in Versioned Data Set Map

  • <X,snap1,L0>→<C1,N20>

  • <X,snap2,L1>→<C21,N3>

  • <X,snap2,L5>→<C24,N2>

  • <X,snap3,L5>→<C26,N6>

  • <X,snap3,L17>→<C32,N3>
  • Listing 2: Metadata Populated in Chunk Map

  • C1→<seg1,N5>

  • C6→<seg2,N5>

  • C11→<seg3,N5>

  • C16→<seg4,N5>

  • C21→<seg5,N3>

  • C24→<seg5,N2>

  • C26→<seg6,N5>

  • C31→<seg7,N1>

  • C32→<seg7,N3>
  • Listing 3: Metadata Populated in Segment Usage Table

  • seg1→<LIVE5,TOTAL5>

  • seg2→<LIVE5,TOTAL5>

  • seg3→<LIVE5,TOTAL5>

  • seg4→<LIVE5,TOTAL5>

  • seg5→<LIVE5,TOTAL5>

  • seg6→<LIVE5,TOTAL5>

  • seg7→<LIVE4,TOTAL4>
  • Regarding listings 1 and 2, the metadata entries presented here can be understood as mapping the data blocks/LBAs of snap1, snap2, and snap3 (which are all different versions of data set X) to the log segments in which they are stored (i.e., seg1-seg7) per FIGS. 3A-3C. The particular schema employed by these metadata entries comprises a first mapping between each snapshot data block LBA and a “chunk ID” (e.g., C1) via versioned data set map 122 and a second mapping between each chunk ID and a log segment ID (e.g., seg1) via chunk map 124. This schema provides a level of indirection between the snapshot data blocks and their log segment locations, which allows for more efficient implementation of certain features in COS-DB system 102 such as data deduplication. In alternative embodiments, the chunk ID attribute can be removed and each snapshot data block LBA can be directly mapped to its corresponding log segment ID.
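  • The two-level lookup can be exercised with a small example built from listings 1 and 2 (range values expanded into a few individual entries); the dict encoding is an illustrative stand-in for the real map schemas.

```python
# Versioned data set map: (data set, snapshot, LBA) -> chunk ID (a few
# entries from listing 1, written out without range values).
versioned_data_set_map = {
    ("X", "snap1", "L0"): "C1",
    ("X", "snap2", "L5"): "C24",
    ("X", "snap3", "L5"): "C26",
}

# Chunk map: chunk ID -> log segment ID (the matching entries from listing 2).
chunk_map = {"C1": "seg1", "C24": "seg5", "C26": "seg6"}

def locate_block(dataset, snapshot, lba):
    """Resolve a snapshot data block to the log segment that stores it."""
    chunk_id = versioned_data_set_map[(dataset, snapshot, lba)]
    return chunk_map[chunk_id]

print(locate_block("X", "snap1", "L0"))  # -> seg1
print(locate_block("X", "snap2", "L5"))  # -> seg5 (snap2's version of L5)
print(locate_block("X", "snap3", "L5"))  # -> seg6 (snap3's version of L5)
```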
  • Further, the metadata entries presented in listings 1 and 2 make use of a range value (i.e., “N20”, “N5,” etc.) that effectively compresses multiple consecutive metadata entries in maps 122 and 124 into a single entry. For example, the first metadata entry shown in listing 1 (i.e., <X, snap1, L0>→<C1, N20>) includes the range value “N20,” which indicates that this entry actually represents twenty metadata entries in versioned data set map 122 with sequentially increasing LBAs and chunk IDs as shown below:
  • Listing 4

  • <X,snap1,L0>→C1

  • <X,snap1,L1>→C2

  • . . .

  • <X,snap1,L19>→C20
  • Similarly, the first metadata entry shown in listing 2 (i.e., C1→<seg1, N5>) includes the range value “N5,” which indicates that this entry actually represents five metadata entries in chunk map 124 with sequentially increasing chunk IDs as shown below:
  • Listing 5

  • C1→seg1

  • C2→seg1

  • C3→seg1

  • C4→seg1

  • C5→seg1
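  • The Python sketch below is a minimal, illustrative rendering (not the disclosed schema code) of how a single <data set, snapshot, LBA> key could be resolved to a chunk ID and log segment ID through the range-compressed entries of listings 1 and 2; the dictionary layout and the function name are assumptions made for this example:

    # versioned data set map: (data_set, snapshot, start_lba) -> (start_chunk, run)
    versioned_map = {
        ("X", "snap1", 0): (1, 20),
        ("X", "snap2", 1): (21, 3),
        ("X", "snap2", 5): (24, 2),
        ("X", "snap3", 5): (26, 6),
        ("X", "snap3", 17): (32, 3),
    }
    # chunk map: start_chunk -> (segment, run)
    chunk_map = {1: ("seg1", 5), 6: ("seg2", 5), 11: ("seg3", 5), 16: ("seg4", 5),
                 21: ("seg5", 3), 24: ("seg5", 2), 26: ("seg6", 5),
                 31: ("seg7", 1), 32: ("seg7", 3)}

    def resolve(data_set, snapshot, lba):
        """Return (chunk_id, segment_id) for one data block, or None."""
        for (ds, snap, start), (chunk0, run) in versioned_map.items():
            if ds == data_set and snap == snapshot and start <= lba < start + run:
                chunk = chunk0 + (lba - start)
                for start_chunk, (seg, seg_run) in chunk_map.items():
                    if start_chunk <= chunk < start_chunk + seg_run:
                        return chunk, seg
        return None

    print(resolve("X", "snap1", 7))   # -> (8, 'seg2')
    print(resolve("X", "snap3", 10))  # -> (31, 'seg7')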
  • Regarding listing 3, the metadata entries presented here indicate the number of live data blocks and total data blocks included in each log segment seg1-seg7 shown in FIGS. 3A-3C. As used herein, a “live” data block is one that is currently a part of, or referenced by, an existing (i.e., non-deleted) snapshot in cloud object storage platform 108. Thus, for example, seg1 has five live data blocks because it includes data blocks L0-L4 of snap1, which is an existing snapshot in platform 108 per the upload operation depicted in FIG. 3A. Conversely, a “dead” data block is one that is not currently a part of, or referenced by, an existing snapshot in cloud object storage platform 108 (and thus can be deleted). The significance of this live/dead distinction is discussed with respect to garbage collector 118 below.
  • 2.2 Garbage Collector
  • One consequence of deleting a snapshot from cloud object storage platform 108 that has been uploaded in accordance with workflow 200 of FIG. 2 is that the deletion can result in dead data blocks in certain log segments. As noted above, a dead data block is one that is not part of, or referenced by, any existing (i.e., non-deleted) snapshot in cloud object storage platform 108, and thus should ideally be deleted to free the storage space it consumes.
  • To understand this phenomenon, consider the scenarios shown in FIGS. 3A-3C where snapshots snap1-snap3 of data set X are sequentially uploaded to cloud object storage platform 108. Assume that after the upload of snap3, snap1 is deleted from platform 108. In this case, data blocks L1-L3, L5-L10, and L17-L19 of snap1 in log segments seg1-seg4 are rendered dead because, while they are still stored in cloud object storage platform 108 via these log segments, their corresponding snapshot snap1 is now gone/deleted and these data blocks will never be referenced by another, later snapshot (by virtue of being superseded by the new versions of these data blocks in snap2 and snap3). Accordingly, these dead data blocks in seg1-seg4 are unnecessarily consuming storage space and should be deleted.
  • To handle the foregoing and other similar scenarios, garbage collector 118 of COS-DB system 102 can periodically carry out a garbage collection (also known as “segment cleaning”) process to identify and delete dead data blocks from the log segments maintained in cloud object storage platform 108. FIG. 4 depicts a workflow 400 of this garbage collection process according to certain embodiments. Workflow 400 assumes that, at the time a given snapshot is deleted from cloud object storage platform 108, the metadata entries mapping the data blocks of that snapshot to their corresponding log segments are removed from versioned data set map 122 and chunk map 124. Workflow 400 also assumes that the SUT entries of the affected segments in SUT 126 are updated to reflect an appropriately reduced live data block count for those log segments.
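  • As an illustration of these assumptions only (the helper name, and the use of an expanded per-chunk form of chunk map 124, are simplifications introduced for this sketch), the deletion-time bookkeeping that workflow 400 relies on might look as follows in Python:

    def delete_snapshot_metadata(data_set_id, snapshot_id,
                                 versioned_map, chunk_map, sut):
        # versioned_map: (data_set, snapshot, lba) -> chunk_id (expanded form)
        # chunk_map: chunk_id -> segment_id (expanded form)
        # sut: segment_id -> (live_count, total_count)
        doomed = [k for k in versioned_map
                  if k[0] == data_set_id and k[1] == snapshot_id]
        for key in doomed:
            chunk_id = versioned_map.pop(key)
            seg = chunk_map.pop(chunk_id, None)
            if seg is not None:
                live, total = sut[seg]
                sut[seg] = (live - 1, total)   # one fewer live block in seg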
  • Starting with steps 402 and 404, garbage collector 118 can enter a loop for each log segment in SUT 126 and determine, from the log segment's SUT entry, whether the log segment's “utilization rate” (i.e., its number of live data blocks divided by its number of total data blocks) is less than or equal to some low watermark (e.g., 50%). If the answer is yes, garbage collector 118 can add that log segment to a list of “candidate” log segments that will be garbage collected (step 406). If the answer is no, garbage collector 118 can take no action. Garbage collector 118 can then reach the end of the current loop iteration (step 408) and repeat the foregoing steps for each additional log segment in SUT 126.
  • Once all log segments have been processed, garbage collector 118 can enter a loop for each candidate log segment identified per step 406 (step 410) and another loop for each data block of the candidate log segment (step 412). Within the data block loop, garbage collector 118 can read the chunk ID of the data block (step 414) and check whether the data block's chunk ID exists in chunk map 124 and points to the current candidate log segment within the chunk map (step 416). If the answer is yes, garbage collector 118 can conclude that the current data block is a live data block and add the data block's LBA to a list of live data blocks (step 418). On the other hand, if the answer at step 416 is no, garbage collector 118 can conclude that the current data block is a dead data block and take no action. Garbage collector 118 can then reach the end of the current iteration for the data block loop (step 420) and repeat steps 412-420 until all data blocks within the current candidate log segment have been processed.
  • At steps 422-426, garbage collector 118 can write out all of the live data blocks identified for the current candidate log segment (per step 418) to a new log segment, delete the current candidate log segment, and set the ID of the new log segment created at step 422 to the ID of the (now deleted) candidate log segment, thereby effectively “shrinking” the candidate log segment to include only its live data blocks (and exclude the dead data blocks). Garbage collector 118 can also update the total data block count for the current candidate log segment in SUT 126 accordingly (step 428).
  • Finally, at step 430, garbage collector 118 can reach the end of the current iteration of the candidate log segment loop and repeat steps 410-430 for the next candidate log segment. Once all candidate log segments have been processed, workflow 400 can end.
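  • For readers who prefer code, the following Python sketch is a simplified, in-memory approximation of workflow 400; the data structures, the expanded per-chunk form of chunk map 124, and the function name are assumptions made for illustration and are not the claimed implementation:

    from collections import namedtuple

    Block = namedtuple("Block", "lba chunk_id data")
    LOW_WATERMARK = 0.5  # the 50% example threshold from steps 402-404

    def garbage_collect(segments, sut, chunk_to_segment):
        # segments: segment_id -> list[Block]
        # sut: segment_id -> (live_count, total_count)
        # chunk_to_segment: expanded chunk map, chunk_id -> segment_id
        candidates = [s for s, (live, total) in sut.items()
                      if total and live / total <= LOW_WATERMARK]
        for seg in candidates:
            # Steps 412-420: a block is live if its chunk ID still maps to seg.
            live_blocks = [b for b in segments[seg]
                           if chunk_to_segment.get(b.chunk_id) == seg]
            # Steps 422-426: rewrite the segment under its old ID with only
            # the live blocks (the dead blocks are dropped).
            segments[seg] = live_blocks
            # Step 428: the SUT counts now reflect the shrunken segment.
            sut[seg] = (len(live_blocks), len(live_blocks))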
  • 3. High-Level Solution Description
  • As mentioned in the Background section, by separating out the storage of data snapshots and their associated metadata into two different cloud storage locations with different data structures—namely, the storage of data snapshots in the form of log segments in cloud object storage platform 108 and the storage of snapshot metadata in the form of databases 120 and 128 in cloud compute/block storage platform 110—COS-DB system 102 can more efficiently execute certain snapshot management operations. However, because cloud compute/block storage platform 110 typically provides a lower degree of durability than cloud object storage platform 108, this configuration can lead to a scenario in which the metadata of the snapshots of data set X becomes lost (due to, e.g., a failure in platform 110 that causes metadata databases 120 and 128 to become unreadable), while the data content of the snapshots remains accessible via cloud object storage platform 108. If metadata databases 120 and 128 cannot be rebuilt/recovered in this scenario, the snapshots will be rendered unusable (as the metadata needed to understand the structure and organization of the snapshots will be gone).
  • To address the foregoing and other similar issues, FIG. 5 depicts a system environment 500 comprising an enhanced version of COS-DB system 102 of FIG. 1 (i.e., COS-DB system 502) that includes a modified uploader agent 504, a modified uploader server 506, and a novel metadata recovery agent 508. In the example of FIG. 5 , metadata recovery agent 508 is shown as running on cloud compute and block storage platform 110; however, in alternative embodiments metadata recovery agent 508 may run at other locations/systems, such as at source data center 104 or some other component/platform of cloud infrastructure 106.
  • At a high level, uploader agent 504 and uploader server 506 can carry out an enhanced snapshot upload process that involves (1) including, by uploader agent 504 in each log segment uploaded to cloud object storage platform 108, metadata usable to reconstruct the metadata entries in versioned data set map 122, chunk map 124, and SUT 126 of first metadata database 120, and (2) replicating, by uploader server 506 via a hybrid “asynchronous/synchronous” approach, a transaction log of second metadata database 128 to a remote site. This hybrid asynchronous/synchronous approach can comprise “asynchronously” replicating changes to the transaction log during the majority of the snapshot upload (i.e., replicating the transaction log changes in the background, without blocking upload progress), but “synchronously” replicating final changes to the transaction log (i.e., waiting for an acknowledgement from the remote site that those final changes have been successfully replicated, before sending an acknowledgement to uploader agent 504 that the snapshot upload is complete).
  • Further, at the time of a failure in cloud compute and block storage platform 110 that causes metadata databases 120 and 128 to be lost, metadata recovery agent 508 can execute a metadata recovery process that involves (1) rebuilding first metadata database 120 (and constituent maps/tables 122-126) by reading the log segments stored in cloud object storage platform 108 and extracting the metadata included in each log segment, and (2) rebuilding second metadata database 128 by retrieving the replicated transaction log from the remote site and replaying the transaction log.
  • With the general techniques above, COS-DB system 502 can efficiently recover the contents of metadata databases 120 and 128 in cloud compute and block storage platform 110, thereby addressing the durability concerns of platform 110. For example, by incorporating appropriate metadata information in each log segment uploaded to cloud object storage platform 108, COS-DB system 502 can reconstruct database 120 directly from those log segments. And by employing the hybrid asynchronous/synchronous approach noted above for replicating the transaction log of second metadata database 128 to a remote site, COS-DB system 502 can carry out this replication in a manner that (1) has relatively low performance impact (because there is no need to wait for the remote transaction log to be updated each time the local transaction log is updated during the snapshot upload), and (2) is crash consistent (because by synchronizing the completion of the snapshot upload to the completion of transaction log replication, the snapshot metadata maintained by uploader agent 504 at source data center 104 will not be discarded before the transaction log is fully replicated).
  • In addition, the foregoing techniques can advantageously enable the implementation of new metadata designs/schemas for databases 120 and 128 in a seamless manner. For example, if a new metadata design/schema is desired for versioned data set map 122, chunk map 124, and/or SUT 126 of first metadata database 120, new versions of those maps/tables can be constructed from the log segments in cloud object storage platform 108, without affecting the operation of existing maps/tables 122-126. Then, once the construction of those new versions is complete, COS-DB system 502 can simply switch over to using the new maps/tables.
  • It should be appreciated that FIG. 5 is illustrative and not intended to limit embodiments of the present disclosure. For example, although FIG. 5 depicts a particular arrangement of entities/components within system environment 500 and COS-DB system 502, other arrangements are possible (e.g., the functionality attributed to one entity/component may be split into multiple entities/components, certain entities/components may be combined, etc.). In addition, each entity/component may include sub-components or implement functionality that is not specifically described. One of ordinary skill in the art will recognize other variations, modifications, and alternatives.
  • 4. Enhanced Snapshot Upload Workflow
  • FIG. 6 depicts an enhanced version of workflow 200 of FIG. 2 (i.e., workflow 600) that can be executed by diff block generator 112, uploader agent 504, and uploader server 506 of FIG. 5 for uploading a given snapshot S of data set X to cloud object storage platform 108 in accordance with the metadata recovery techniques of the present disclosure. Workflow 600 assumes that second metadata database 128 in cloud compute and block storage platform 110 implements a transaction log (sometimes referred to as a “recovery log” or “binary log”) that records historical transactions applied to database 128 and can be replayed to rebuild the contents of database 128 in the case of a crash or other failure.
  • Starting with steps 602 and 604, diff block generator 112 can identify data blocks in data set X that have changed since the creation/upload of the last snapshot for X and can provide these modified data blocks, along with their LBAs, to uploader agent 504. In the case where no snapshot has previously been created/uploaded for data set X, diff block generator 112 can provide all data blocks of X to uploader agent 504 at step 604.
  • At step 606, uploader agent 504 can receive the data block information from diff block generator 112 and assemble it into a snapshot S composed of, e.g., <LBA, data block> tuples. Uploader agent 504 can then package a portion of snapshot S into a log segment L (step 608) and upload L to cloud object storage platform 108 (step 610). Significantly, as part of packaging step 608, uploader agent 504 can include metadata in L that is usable for creating corresponding metadata entries in versioned data set map 122, chunk map 124, and SUT 126 of first metadata database 120. For example, uploader agent 504 can include in L the ID of data set X (i.e., the data set being backed up via L), the ID of L, and the LBA, snapshot ID, and chunk ID of each data block in L.
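  • By way of illustration only, the Python sketch below shows one plausible way such a self-describing log segment could be laid out; the JSON header, the length prefix, and the helper name are assumptions of this example, as the disclosure does not prescribe a particular on-object format:

    import json

    def build_log_segment(data_set_id, snapshot_id, segment_id, blocks):
        # blocks: list of (lba, chunk_id, data_bytes) tuples for this segment
        header = {
            "data_set_id": data_set_id,
            "snapshot_id": snapshot_id,
            "segment_id": segment_id,
            "blocks": [{"lba": lba, "chunk_id": cid} for lba, cid, _ in blocks],
        }
        header_bytes = json.dumps(header).encode()
        payload = b"".join(data for _, _, data in blocks)
        # One cloud object per segment: a 4-byte length prefix, the JSON
        # header (which metadata recovery later reads back), then raw blocks.
        return len(header_bytes).to_bytes(4, "big") + header_bytes + payload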
  • Upon (or concurrently with) uploading log segment L at step 610, uploader agent 504 can communicate metadata pertaining to L to uploader server 506 (step 612). This metadata can include a first set of metadata that is similar or identical to the metadata incorporated into L at step 608 and a second set of metadata comprising bookkeeping information such as user authentication information, an upload timestamp of S, and so on.
  • In response, uploader server 506 can convert the first set of metadata into a first set of metadata entries that conform to the schemas of versioned data set map 122, chunk map 124, and SUT 126 and can write the first set of entries to these maps/tables (step 614). Uploader server 506 can also convert the second set of metadata into a second set of metadata entries that conform to the schema of second metadata database 128 and write the second set of entries to database 128 (step 616).
  • At step 618, uploader server 506 can check whether there are any remaining portions of snapshot S that have not been uploaded yet. If the answer is yes, uploader server 506 can return an acknowledgement to uploader agent 504 that metadata databases 120 and 128 have been updated with the metadata for log segment L (step 620), thereby causing workflow 600 to return to step 608 (so that uploader agent 504 can package the next portion of S into a new log segment for uploading). After sending this acknowledgement, a background process of uploader server 506 can, at some later time, replicate changes in the transaction log of second metadata database 128 caused by the updating of database 128 at step 616 to a remote site.
  • However, if the answer at step 618 is no, uploader server 506 can replicate all of the remaining changes in the transaction log to the remote site (i.e., all of the changes that have not yet been replicated) and wait for an acknowledgement from the remote site that the replication is complete/successful (step 622). In this way, uploader server 506 can ensure that the copy of the transaction log at the remote site is consistent with the copy in cloud compute and block storage platform 110. Upon receiving this acknowledgment from the remote site, uploader server 506 can return a final acknowledgement to uploader agent 504 that the upload of snapshot S and its metadata is complete (step 624) and workflow 600 can end.
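  • The following Python sketch is a minimal, illustrative model of this hybrid behavior; the queue-and-thread structure, the class name, and the send_to_remote_site callback are assumptions made for the example rather than the disclosed implementation. Per-segment transaction-log changes are shipped in the background, and only the final replication call blocks until the remote site acknowledges.

    import queue
    import threading

    class HybridReplicator:
        """Illustrative model of asynchronous/synchronous log replication."""

        def __init__(self, send_to_remote_site):
            self._send = send_to_remote_site   # callable taking a change batch
            self._pending = queue.Queue()
            threading.Thread(target=self._drain, daemon=True).start()

        def _drain(self):
            # Background replication: ship queued log changes in order.
            while True:
                changes, done = self._pending.get()
                self._send(changes)
                done.set()

        def replicate_async(self, log_changes):
            # Steps 618-620: queue the changes and return without waiting.
            self._pending.put((log_changes, threading.Event()))

        def replicate_sync(self, log_changes):
            # Step 622: because the queue is drained in order, waiting on this
            # final batch also waits for every earlier asynchronous batch.
            done = threading.Event()
            self._pending.put((log_changes, done))
            done.wait()

    # Usage mirroring workflow 600: replicate_async(...) after each
    # intermediate segment's metadata update, replicate_sync(...) before the
    # final acknowledgement is returned to uploader agent 504.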
  • 5. Metadata Recovery Workflow
  • FIG. 7 depicts a workflow 700 that can be executed by metadata recovery agent 508 of FIG. 5 for recovering metadata databases 120 and 128 in cloud compute and block storage platform 110 in the scenario where these databases (or portions thereof) are lost due to a failure. Workflow 700 assumes that the snapshots/log segments to which the metadata in databases 120 and 128 pertain are accessible via cloud object storage platform 108.
  • Starting with steps 702 and 704, metadata recovery agent 508 can retrieve the copy of the transaction log of second metadata database 128 maintained at the remote site and can rebuild the metadata entries of database 128 by replaying the retrieved transaction log.
  • At step 706, metadata recovery agent 508 can enter a loop for each log segment maintained in cloud object storage platform 108. Within this loop, metadata recovery agent 508 can extract the metadata included in the log segment per step 608 of workflow 600 (step 708). As mentioned previously, this metadata can include the data set ID, snapshot ID, LBA, and chunk ID of each data block included in the log segment, the ID of the log segment itself, and so on.
  • At step 710, metadata recovery agent 508 can rebuild the metadata entries of the maps/tables in first metadata database 120 (i.e., versioned data set map 122, chunk map 124, and SUT 126) using the log segment metadata extracted at step 708. For example, with respect to versioned data set map 122, metadata recovery agent 508 can create, for each data block in the log segment, an entry in map 122 mapping the data block's data set ID, snapshot ID, and LBA to its chunk ID. Further, with respect to chunk map 124, metadata recovery agent 508 can create, for each data block in the log segment, an entry in map 124 mapping the data block's chunk ID to the log segment ID.
  • Finally, at step 712, metadata recovery agent 508 can reach the end of the current loop iteration and return to step 706 to process additional log segments. Once all of the log segments in cloud object storage platform 108 have been processed, workflow 700 can end.
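  • As an illustrative companion to workflow 700, the Python sketch below rebuilds in-memory versions of the three maps/tables from the per-segment metadata assumed in the earlier packaging sketch; the header layout and function name remain assumptions, and a real rebuild would also reconcile live counts against any snapshots deleted after upload:

    def rebuild_first_metadata_db(segment_headers):
        # segment_headers: iterable of dicts with data_set_id, snapshot_id,
        # segment_id, and a blocks list of {"lba": ..., "chunk_id": ...} entries.
        versioned_map, chunk_map, sut = {}, {}, {}
        for hdr in segment_headers:
            seg = hdr["segment_id"]
            for blk in hdr["blocks"]:
                key = (hdr["data_set_id"], hdr["snapshot_id"], blk["lba"])
                versioned_map[key] = blk["chunk_id"]     # entry for map 122
                chunk_map[blk["chunk_id"]] = seg          # entry for map 124
            n = len(hdr["blocks"])
            sut[seg] = (n, n)  # SUT 126 entry; live count approximated by total
        return versioned_map, chunk_map, sut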
  • Certain embodiments described herein can employ various computer-implemented operations involving data stored in computer systems. For example, these operations can require physical manipulation of physical quantities—usually, though not necessarily, these quantities take the form of electrical or magnetic signals, where they (or representations of them) are capable of being stored, transferred, combined, compared, or otherwise manipulated. Such manipulations are often referred to in terms such as producing, identifying, determining, comparing, etc. Any operations described herein that form part of one or more embodiments can be useful machine operations.
  • Further, one or more embodiments can relate to a device or an apparatus for performing the foregoing operations. The apparatus can be specially constructed for specific required purposes, or it can be a generic computer system comprising one or more general purpose processors (e.g., Intel or AMD x86 processors) selectively activated or configured by program code stored in the computer system. In particular, various generic computer systems may be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations. The various embodiments described herein can be practiced with other computer system configurations including handheld devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like.
  • Yet further, one or more embodiments can be implemented as one or more computer programs or as one or more computer program modules embodied in one or more non-transitory computer readable storage media. The term non-transitory computer readable storage medium refers to any data storage device that can store data which can thereafter be input to a computer system. The non-transitory computer readable media may be based on any existing or subsequently developed technology for embodying computer programs in a manner that enables them to be read by a computer system. Examples of non-transitory computer readable media include a hard drive, network attached storage (NAS), read-only memory, random-access memory, flash-based nonvolatile memory (e.g., a flash memory card or a solid state disk), persistent memory, NVMe device, a CD (Compact Disc) (e.g., CD-ROM, CD-R, CD-RW, etc.), a DVD (Digital Versatile Disc), a magnetic tape, and other optical and non-optical data storage devices. The non-transitory computer readable media can also be distributed over a network coupled computer system so that the computer readable code is stored and executed in a distributed fashion.
  • Finally, boundaries between various components, operations, and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of the invention(s). In general, structures and functionality presented as separate components in exemplary configurations can be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component can be implemented as separate components.
  • As used in the description herein and throughout the claims that follow, “a,” “an,” and “the” includes plural references unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.
  • The above description illustrates various embodiments along with examples of how aspects of particular embodiments may be implemented. These examples and embodiments should not be deemed to be the only embodiments and are presented to illustrate the flexibility and advantages of particular embodiments as defined by the following claims. Other arrangements, embodiments, implementations and equivalents can be employed without departing from the scope hereof as defined by the claims.

Claims (21)

What is claimed is:
1. A method comprising:
receiving, by a server that is part of a cloud compute and block storage platform of a cloud infrastructure, metadata pertaining to a segment of a data snapshot, wherein the metadata is received from an uploader agent of a source data center while the uploader agent is uploading the segment to a cloud object storage platform of the cloud infrastructure;
converting, by the server, the metadata into a set of metadata entries that conform to a schema of a metadata database of the cloud compute and block storage platform;
storing, by the server, the set of metadata entries in the metadata database; and
upon determining that there are further segments of the data snapshot to be uploaded to the cloud object storage platform:
returning, by the server, a first acknowledgement to the uploader agent that the metadata database has been updated with the metadata for the segment; and
after returning the first acknowledgement, replicating, by the server, changes in a transaction log of the metadata database caused by the storing of the metadata entries to a remote site.
2. The method of claim 1 further comprising, upon determining that there are no further segments of the data snapshot to be uploaded to the cloud object storage platform:
replicating remaining changes in the transaction log to the remote site;
waiting for a second acknowledgement from the remote site that the replicating of the remaining changes is successful; and
in response to receiving the second acknowledgement, returning a third acknowledgement to the uploader agent indicating that upload of the data snapshot is complete.
3. The method of claim 1 wherein, at a time of a failure at the cloud compute and block storage platform that causes contents of the metadata database to be lost, a metadata recovery agent of the cloud compute and block storage platform:
retrieves the transaction log from the remote site; and
rebuilds the metadata database by replaying the retrieved transaction log.
4. The method of claim 1 wherein the metadata comprises user authentication information and an upload timestamp for the segment.
5. The method of claim 1 wherein the replicating is performed by a background process of the server while the server receives and processes another segment of the data snapshot.
6. The method of claim 1 wherein the segment is a fixed-size portion of the data snapshot that conforms to an object format supported by the cloud object storage platform.
7. The method of claim 1 wherein the cloud compute and block storage platform provides a lower degree of storage durability than the cloud object storage platform.
8. A non-transitory computer readable storage medium having stored thereon program code executable by a server that is part of a cloud compute and block storage platform of a cloud infrastructure, the program code embodying a method comprising:
receiving metadata pertaining to a segment of a data snapshot, wherein the metadata is received from an uploader agent of a source data center while the uploader agent is uploading the segment to a cloud object storage platform of the cloud infrastructure;
converting the metadata into a set of metadata entries that conform to a schema of a metadata database of the cloud compute and block storage platform;
storing the set of metadata entries in the metadata database; and
upon determining that there are further segments of the data snapshot to be uploaded to the cloud object storage platform:
returning a first acknowledgement to the uploader agent that the metadata database has been updated with the metadata for the segment; and
after returning the first acknowledgement, replicating changes in a transaction log of the metadata database caused by the storing of the metadata entries to a remote site.
9. The non-transitory computer readable storage medium of claim 8 wherein the method further comprises, upon determining that there are no further segments of the data snapshot to be uploaded to the cloud object storage platform:
replicating remaining changes in the transaction log to the remote site;
waiting for a second acknowledgement from the remote site that the replicating of the remaining changes is successful; and
in response to receiving the second acknowledgement, returning a third acknowledgement to the uploader agent indicating that upload of the data snapshot is complete.
10. The non-transitory computer readable storage medium of claim 8 wherein, at a time of a failure at the cloud compute and block storage platform that causes contents of the metadata database to be lost, a metadata recovery agent of the cloud compute and block storage platform:
retrieves the transaction log from the remote site; and
rebuilds the metadata database by replaying the retrieved transaction log.
11. The non-transitory computer readable storage medium of claim 8 wherein the metadata comprises user authentication information and an upload timestamp for the segment.
12. The non-transitory computer readable storage medium of claim 8 wherein the replicating is performed by a background process of the server while the server receives and processes another segment of the data snapshot.
13. The non-transitory computer readable storage medium of claim 8 wherein the segment is a fixed-size portion of the data snapshot that conforms to an object format supported by the cloud object storage platform.
14. The non-transitory computer readable storage medium of claim 8 wherein the cloud compute and block storage platform provides a lower degree of storage durability than the cloud object storage platform.
15. A server that is part of a cloud compute and block storage platform of a cloud infrastructure, the server comprising:
a processor; and
a non-transitory computer readable medium having stored thereon program code that, when executed, causes the processor to:
receive metadata pertaining to a segment of a data snapshot, wherein the metadata is received from an uploader agent of a source data center while the uploader agent is uploading the segment to a cloud object storage platform of the cloud infrastructure;
convert the metadata into a set of metadata entries that conform to a schema of a metadata database of the cloud compute and block storage platform;
store the set of metadata entries in the metadata database; and
upon determining that there are further segments of the data snapshot to be uploaded to the cloud object storage platform:
return a first acknowledgement to the uploader agent that the metadata database has been updated with the metadata for the segment; and
after returning the first acknowledgement, replicate changes in a transaction log of the metadata database caused by the storing of the metadata entries to a remote site.
16. The server of claim 15 wherein the program code further causes the processor to, upon determining that there are no further segments of the data snapshot to be uploaded to the cloud object storage platform:
replicate remaining changes in the transaction log to the remote site;
wait for a second acknowledgement from the remote site that the replicating of the remaining changes is successful; and
in response to receiving the second acknowledgement, return a third acknowledgement to the uploader agent indicating that upload of the data snapshot is complete.
17. The server of claim 15 wherein, at a time of a failure at the cloud compute and block storage platform that causes contents of the metadata database to be lost, a metadata recovery agent of the cloud compute and block storage platform:
retrieves the transaction log from the remote site; and
rebuilds the metadata database by replaying the retrieved transaction log.
18. The server of claim 15 wherein the metadata comprises user authentication information and an upload timestamp for the segment.
19. The server of claim 15 wherein the processor performs the replicating via a background process while the processor receives and processes another segment of the data snapshot.
20. The server of claim 15 wherein the segment is a fixed-size portion of the data snapshot that conforms to an object format supported by the cloud object storage platform.
21. The server of claim 15 wherein the cloud compute and block storage platform provides a lower degree of storage durability than the cloud object storage platform.
US18/303,478 2020-08-25 2023-04-19 Recovering the Metadata of Data Backed Up in Cloud Object Storage Pending US20230251997A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/303,478 US20230251997A1 (en) 2020-08-25 2023-04-19 Recovering the Metadata of Data Backed Up in Cloud Object Storage

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17/002,669 US11663160B2 (en) 2020-08-25 2020-08-25 Recovering the metadata of data backed up in cloud object storage
US18/303,478 US20230251997A1 (en) 2020-08-25 2023-04-19 Recovering the Metadata of Data Backed Up in Cloud Object Storage

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US17/002,669 Continuation US11663160B2 (en) 2020-08-25 2020-08-25 Recovering the metadata of data backed up in cloud object storage

Publications (1)

Publication Number Publication Date
US20230251997A1 true US20230251997A1 (en) 2023-08-10

Family

ID=80356944

Family Applications (2)

Application Number Title Priority Date Filing Date
US17/002,669 Active 2041-01-29 US11663160B2 (en) 2020-08-25 2020-08-25 Recovering the metadata of data backed up in cloud object storage
US18/303,478 Pending US20230251997A1 (en) 2020-08-25 2023-04-19 Recovering the Metadata of Data Backed Up in Cloud Object Storage

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US17/002,669 Active 2041-01-29 US11663160B2 (en) 2020-08-25 2020-08-25 Recovering the metadata of data backed up in cloud object storage

Country Status (1)

Country Link
US (2) US11663160B2 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11968080B2 (en) * 2020-12-30 2024-04-23 Oracle Corporation Synchronizing communication channel state information for high flow availability
US11860817B2 (en) * 2021-07-19 2024-01-02 Rubrik, Inc. Online data format conversion
CN114706725B (en) * 2022-03-14 2023-05-09 广州慧思软件科技有限公司 Equipment data processing method and system based on cloud platform
US20240176713A1 (en) * 2022-11-28 2024-05-30 Dell Products L.P. Eliminating data resynchronization in cyber recovery solutions
US20240176712A1 (en) * 2022-11-28 2024-05-30 Dell Products L.P. Optimizing data resynchronization in cyber recovery solutions
US20240192849A1 (en) * 2022-12-12 2024-06-13 Google Llc Garbage-collection in Log-based Block Devices with Snapshots

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11151082B1 (en) * 2015-03-31 2021-10-19 EMC IP Holding Company LLC File system operation cancellation
US10909105B2 (en) * 2016-11-28 2021-02-02 Sap Se Logical logging for in-memory metadata store
US10613923B2 (en) * 2017-11-03 2020-04-07 EMC IP Holding Company LLC Recovering log-structured filesystems from physical replicas
US11048590B1 (en) * 2018-03-15 2021-06-29 Pure Storage, Inc. Data consistency during recovery in a cloud-based storage system
US10909071B2 (en) * 2018-07-13 2021-02-02 Vmware, Inc. Batch-based deletion of snapshots archived in cloud/object storage
US10976938B2 (en) * 2018-07-30 2021-04-13 Robin Systems, Inc. Block map cache
US11243909B2 (en) * 2018-10-31 2022-02-08 Alibaba Group Holding Limited Journaling overhead reduction with remapping interface
US20200167238A1 (en) * 2018-11-23 2020-05-28 Hewlett Packard Enterprise Development Lp Snapshot format for object-based storage
US11263173B2 (en) * 2019-07-30 2022-03-01 Commvault Systems, Inc. Transaction log index generation in an enterprise backup system
US11029851B2 (en) * 2019-09-27 2021-06-08 Amazon Technologies, Inc. Sub-block modifications for block-level snapshots
US11347684B2 (en) * 2019-10-04 2022-05-31 Robin Systems, Inc. Rolling back KUBERNETES applications including custom resources
US11119862B2 (en) * 2019-10-11 2021-09-14 Seagate Technology Llc Delta information volumes to enable chained replication of data by uploading snapshots of data to cloud

Also Published As

Publication number Publication date
US20220066883A1 (en) 2022-03-03
US11663160B2 (en) 2023-05-30

Similar Documents

Publication Publication Date Title
US20230251997A1 (en) Recovering the Metadata of Data Backed Up in Cloud Object Storage
US11288129B2 (en) Tiering data to a cold storage tier of cloud object storage
US9703640B2 (en) Method and system of performing incremental SQL server database backups
US7650341B1 (en) Data backup/recovery
US10146640B2 (en) Recovering a volume table and data sets
US10318648B2 (en) Main-memory database checkpointing
US10936441B2 (en) Write-ahead style logging in a persistent memory device
US8250033B1 (en) Replication of a data set using differential snapshots
US8433867B2 (en) Using the change-recording feature for point-in-time-copy technology to perform more effective backups
US11409616B2 (en) Recovery of in-memory databases after a system crash
US11397706B2 (en) System and method for reducing read amplification of archival storage using proactive consolidation
US11797397B2 (en) Hybrid NVRAM logging in filesystem namespace
US11741005B2 (en) Using data mirroring across multiple regions to reduce the likelihood of losing objects maintained in cloud object storage
US11556423B2 (en) Using erasure coding in a single region to reduce the likelihood of losing objects maintained in cloud object storage
US11544147B2 (en) Using erasure coding across multiple regions to reduce the likelihood of losing objects maintained in cloud object storage
KR100775141B1 (en) An implementation method of FAT file system which the journaling is applied method
US11455255B1 (en) Read performance of log-structured file system (LFS)-based storage systems that support copy-on-write (COW) snapshotting
Hwang et al. Design of linux based transactional NAND flash memory file system

Legal Events

Date Code Title Description
AS Assignment

Owner name: VMWARE, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WANG, WENGUANG;GUNTURU, VAMSI;GAO, JUNLONG;AND OTHERS;SIGNING DATES FROM 20200820 TO 20220419;REEL/FRAME:063382/0814

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: VMWARE LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:VMWARE, INC.;REEL/FRAME:066692/0103

Effective date: 20231121

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED