WO2023244447A1 - Techniques for efficient replication and recovery - Google Patents


Info

Publication number
WO2023244447A1
Authority
WO
WIPO (PCT)
Prior art keywords
snapshot
file system
target
region
source
Prior art date
Application number
PCT/US2023/024236
Other languages
French (fr)
Inventor
Vikram Singh BISHT
Niharika SALADY
Parth Singhal
Satish Kumar Kashi Visvanathan
Original Assignee
Oracle International Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US18/169,124 external-priority patent/US20230409539A1/en
Application filed by Oracle International Corporation filed Critical Oracle International Corporation
Publication of WO2023244447A1 publication Critical patent/WO2023244447A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/11File system administration, e.g. details of archiving or snapshots
    • G06F16/119Details of migration of file systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/11File system administration, e.g. details of archiving or snapshots
    • G06F16/128Details of file system snapshots on the file-level, e.g. snapshot creation, administration, deletion

Definitions

  • The present disclosure generally relates to file systems. More specifically, but not by way of limitation, techniques are described for efficient replication and maintaining snapshot data consistency during file storage replication between file systems in different cloud infrastructure regions (e.g., data centers in particular geographic regions).
  • Techniques are described, including a method that comprises: generating, by a computing system, a first snapshot and a second snapshot in a source file system in a source region; assigning, by the computing system, a first provenance identification to the first snapshot and a second provenance identification to the second snapshot in the source file system, the first provenance identification being unique among all snapshots in all regions and the second provenance identification being unique among all snapshots in all regions; receiving, by the computing system, a request to perform a replication between the source file system in the source region and a target file system in a target region, the source region and the target region being different regions; in response to the request, comparing, by the computing system, the first provenance identification in the source file system to provenance identifications of existing snapshots in the target region; identifying, by the computing system, a matched snapshot with the first provenance identification in the target region to use as a base snapshot for the replication based at least in part on the comparison; and performing, by the computing system, the replication using deltas between the second snapshot and the base snapshot.
  • the method further comprises selecting the matched snapshot as the base snapshot at least in response to the matched snapshot with the first provenance identification in the target region being in the target file system.
  • the target region comprises a non-target file system having a snapshot associated with the first provenance identification
  • the method further comprises performing an in-region copying of the matched snapshot with the first provenance identification from the non-target file system to the target file system at least in response to the matched snapshot with the first provenance identification in the target region not being in the target file system; and selecting the in-region copy of the matched snapshot in the target file system as the base snapshot.
  • the in-region copy of the matched snapshot in the target file system has the same first provenance identification but different resource identification from the matched snapshot in the non-target file system.
  • the method further comprises selecting the first snapshot with the first provenance identification in the source file system as the base snapshot at least in response to no matched snapshot with the first provenance identification being found in the target region.
  • the method further comprises performing a cross-region copying of the first snapshot with the first provenance identification from the source file system to the target file system before generating the deltas between the second snapshot and the base snapshot in the source file system.
  • a system includes one or more data processors and a non-transitory computer readable medium containing instructions which, when executed on the one or more data processors, cause the one or more data processors to perform part or all of one or more methods disclosed herein.
  • a non-transitory computer-readable medium storing computer-executable instructions which, when executed by one or more processors, cause the one or more processors of a computer system to perform one or more methods disclosed herein.
  • a computer-program product comprising computer program/instructions which, when executed by a processor, cause the processor to perform any of the methods disclosed herein.
  • FIG. 1 depicts an example concept of recovery time objective (RTO) and recovery point objective (RPO), according to certain embodiments.
  • FIG. 4 is a simplified flow diagram illustrating cross-region remote replication, according to certain embodiments.
  • FIG. 5 is a simplified diagram illustrating the high-level concept of B-tree walk, according to certain embodiments.
  • FIG. 6B is a diagram illustrating pipeline stages of cross-region replication, according to certain embodiments.
  • FIG. 7 is a diagram illustrating a layered structure in file storage service (FSS) data plane, according to certain embodiments.
  • FIG. 9 depicts an example replication bucket format, according to certain embodiments.
  • FIG. 15 is a diagram illustrating delayed snapshot deletion and replication for maintaining consistency between a source FS and a target FS, according to certain embodiments.
  • FIG. 16 is a flow chart illustrating the process of delayed snapshot deletion and replication after detecting a snapshot deletion request, according to certain embodiments.
  • FIG. 18 is a flow diagram illustrating a control plane workflow for a source region and a target region, according to certain embodiments.
  • FIG. 19 is a block diagram illustrating one pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment.
  • FIG. 21 is a block diagram illustrating another pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment.
  • FIG. 22 is a block diagram illustrating another pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment.
  • The efficient replication and recovery techniques utilize provenance IDs to efficiently identify a starting point (e.g., a base snapshot) for a cross-region (or x-region) replication process.
  • the non-target file system may perform an in-region cloning of the snapshot with the matched provenance ID to the target FS to create the base snapshot. Thereafter, the x-region replication can be performed between the source file system and the target file system.
  • the in-region cloning conserves cloud resources as well because an in-region cloning does not involve extra encryption/decryption, data transfer through object storage, etc.
  • the snapshot data consistency techniques disclosed herein can help safeguard data integrity when snapshot creation and deletion requests occur during cross-region replications by temporarily withholding certain requests until appropriate times to execute such requests safely.
  • The control plane communication between the source FS and the target FS for the snapshot and data model exchanges snapshot metadata information during the replication process to help achieve the goal of maintaining snapshot data consistency during a replication.
  • system snapshots are created and deleted periodically by FSS, while user snapshots may be created and deleted by users at any time according to the scheduled snapshot policy.
  • Snapshot deletion and replication may be delayed to ensure snapshot consistency.
  • If a user snapshot is created during a replication cycle (e.g., replication cycle N) but is not requested to be deleted until more than a replication cycle later (e.g., after replication cycle N+1), the replication of the user snapshot may be delayed for a cycle (i.e., occur in replication cycle N+1).
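  • The withholding behavior described above can be pictured as a small guard that defers a deletion request for any snapshot that is still part of an in-flight replication cycle and releases the request once that cycle completes. The following sketch is illustrative only; the names (ReplicationCycle, SnapshotConsistencyGuard, request_delete) are hypothetical and not taken from the disclosure.

        from dataclasses import dataclass, field
        from typing import List, Set


        @dataclass
        class ReplicationCycle:
            number: int
            snapshots_in_flight: Set[str]   # snapshot IDs being replicated this cycle
            complete: bool = False


        @dataclass
        class SnapshotConsistencyGuard:
            deferred_deletes: List[str] = field(default_factory=list)

            def request_delete(self, snapshot_id: str, cycle: ReplicationCycle) -> bool:
                """Return True if the deletion may execute now, False if it is withheld."""
                if not cycle.complete and snapshot_id in cycle.snapshots_in_flight:
                    # Deleting now could break the delta chain the target is applying,
                    # so hold the request until the cycle finishes.
                    self.deferred_deletes.append(snapshot_id)
                    return False
                return True

            def on_cycle_complete(self, cycle: ReplicationCycle) -> List[str]:
                """Return deferred deletions that can now be executed safely."""
                cycle.complete = True
                ready = [s for s in self.deferred_deletes if s in cycle.snapshots_in_flight]
                self.deferred_deletes = [s for s in self.deferred_deletes if s not in ready]
                return ready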
  • a “delta generator” may refer to a component in a file system’s data plane for either extracting the deltas (i.e., the changes) between the key-values of two snapshots if the component is located in a source region or applying the deltas to the latest snapshot in a B-tree of the file system if the component is located in a target region.
  • the delta generator in the source region may use several threads (called delta generator threads or range threads for multiple partitioned B-tree key ranges) to perform the extraction of deltas (or B-tree walk) in parallel.
  • the delta generator in the target region may use several threads to apply the downloaded deltas to its latest snapshot in parallel.
  • a “file system communicator” may refer to a file manager layer running on the storage nodes in a file system’s data plane.
  • The service helps with file create, delete, read and write requests, and works with an NFS server (e.g., Orca) to service IOs to clients.
  • Replicator fleet may communicate with many storage nodes thereby distributing the work of reading/writing the file system data among the storage nodes.
  • a “blob,” in certain embodiments, may refer to a data type for storing information (e.g., a formatted binary file) in a database. Blobs are generated during replication by a source region and uploaded to an Object Store (i.e., an object storage) in a target region.
  • a blob may include binary tree (B-tree) keys and values and file data. Blobs in the Object Store are called objects. B-tree key-value pairs and their associated data are packed together in blobs to be uploaded to the Object Store in a target region.
  • “Delta application,” in certain embodiments, may refer to the process of applying the deltas downloaded by a target file system to its latest snapshot to create a new snapshot. This may include analyzing manifest files, applying snapshot metadata, inserting the B-tree keys and values into its B-tree, and storing data associated with the B-tree keys (i.e., file data or data portion of blobs) to its local storage. Snapshot metadata is created and applied at the beginning of a replication cycle.
  • a “region,” in certain embodiments, may refer to a logical abstraction corresponding to a geographic area. Each region can include one or more connected data centers. Regions are independent of other regions and can be separated by vast distances.
  • the File Storage Service (FSS) of the present disclosure supports full disaster recovery for failover or fallback with minimal administrative work. Failover is a sequence of actions to make a secondary/target site become primary/source (i.e., start serving workloads) and may include planned and/or unplanned failover.
  • a planned failover (which may also be referred to as planned migration) is initiated by a user to execute a planned failover from the source side (e.g., a source region) to the target side (e.g., a target region) without data loss.
  • An unplanned failover is when the source side stops unexpectedly due to, for example, a disaster, and the user needs to start using the target side because the source side is lost.
  • the secondary site B 104 stops its service at time 122.
  • the secondary site B 104 becomes fully operational at time 126. Therefore, the RTO is the time between 122 and 126. The secondary site B 104 can now assume the role of the primary site. However, for customers who use primary site A 102, the loss of service is between time 120 and 126.
  • FIG. 2 is a simplified block diagram illustrating an architecture for cross-region remote replication, according to certain embodiments.
  • the end-to-end replication architecture illustrated has two regions, a source region 290 and a target region 292. Each region may contain one or more file systems.
  • the end-to-end replication architecture includes data planes 202 & 212, control planes (only control APIs 208a-n & 218a-n are shown), local storages 204 & 214, Object Store 260, and Key Management Service (KMS) 250 for both source region 290 and target region 292.
  • FIG. 2 illustrates only one file system 280 in the source region 290, and one file system 282 in the target region 292 for simplicity.
  • the data 230a and 230b transferred between the source file system 280 and the target file system 282 is a general term, and may include the initial snapshot, keys and values of a B-tree that differ between two snapshots, file data (e.g., FMAP), snapshot metadata (i.e., a set of snapshot B-tree keys that reflect various snapshots taken in the source file system), and other information (e.g., manifest files) useful for facilitating the replication process.
  • the upload process may take longer.
  • the source file system 280 can take a replication snapshot at a specific interval, such as one hour. The source side 280 can then transfer all data within that one hour to the target side 282, and take a new snapshot every hour. If there are some caches with a lot of changes, the replication may be set to a lower replication interval.
  • multiple threads also run in parallel for storage IO access (e.g., DASD) 204a-n & 214a-n.
  • all processing related to the replication process, including accessing the storage, uploading snapshots and data 230a from the source file system 280 to the Object Store 260, and downloading the snapshots and data 230b to the target file system 282, has multiple threads running in parallel to perform the data streaming.
  • File storage is an AD-local service. When a file system is created, it is in a specific AD. For a customer to transfer or replicate data from one file system to another file system within the same region or different regions, an artifact (also referred to as manifest) transfer may need to be used.
  • VCN peering may be used to set up network connections between remote machines (e.g., between replicator nodes of source and target) and use Classless Inter-Domain Routing (“CIDR”) for each region.
  • the shared databases (SDBs) 316 & 336 of both regions are key-value stores through which both the control plane and data plane (e.g., replicator fleet) can read and write in order to communicate with each other.
  • Control planes 320 & 340 of both regions may queue a new job into their respective shared databases 316 & 336, and replicator fleet 318 & 338 may read the queues in the shared databases 316 & 336 constantly and start file system replication once the replicator fleet 318 & 338 detect the job request.
  • the shared databases 316 & 336 are a conduit between the replicator fleet and the control planes.
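  • As a concrete illustration of this conduit, the sketch below (a minimal, in-memory stand-in; the SharedDB class, record layout, and state names are assumptions, not the actual FSS schema) shows the control plane enqueueing a replication job as a key-value record and a replicator claiming it on its next poll.

        import json
        import time
        import uuid


        class SharedDB:
            """Minimal in-memory stand-in for the key-value shared database (SDB)."""
            def __init__(self):
                self.kv = {}

            def put(self, key, value):
                self.kv[key] = value

            def scan(self, prefix):
                return {k: v for k, v in self.kv.items() if k.startswith(prefix)}


        def control_plane_enqueue_job(sdb, file_system_id, base_snapshot, new_snapshot):
            """Control plane writes a replication job record for the replicator fleet."""
            job_id = str(uuid.uuid4())
            sdb.put(f"jobs/{job_id}", json.dumps({
                "file_system": file_system_id,
                "base_snapshot": base_snapshot,
                "new_snapshot": new_snapshot,
                "state": "READY",
            }))
            return job_id


        def replicator_poll_once(sdb, replicator_id):
            """A replicator scans the job queue and claims the first READY job it finds."""
            for key, raw in sdb.scan("jobs/").items():
                job = json.loads(raw)
                if job["state"] == "READY":
                    job.update(state="CLAIMED", owner=replicator_id, claimed_at=time.time())
                    sdb.put(key, json.dumps(job))
                    return key, job
            return None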
  • Replicator fleet 318 in source region A can work with DG 310 to start walking the B-tree in the file system in source region A to collect key-values and convert them into flat files or blobs to be uploaded to the Object Store. Once the data blobs (including key-values and actual data) are uploaded, the target can immediately apply them without waiting for a large number of blobs to be present in the Object Store 360.
  • the Object Store 360 is located in the target region B for disaster recovery reasons. The goal is to push from source to the target region B as soon as possible and keep the data safe.
  • Replicator fleet 318 & 338 in both regions run on virtual machines that can be scaled up and down automatically to build an entire fleet for performing replication.
  • the replicators and replication service can dynamically adjust based on the capacity to support each job. If one replicator is heavily loaded, another can pick up to share the load. Different replicators in the fleet can balance load among themselves to ensure the jobs can continue and do not stop due to overloading individual replicators.
  • Step S3: CP-A 410 notifies replicator 412 (or uploader), a component in the data plane, to copy the latest snapshot.
  • Step S4: CP-A 410 notifies the target (B) control plane (CP-B) 450 about the completion of the upload.
  • Step S5: CP-B 450 calls the target replicator-B 452 (or downloader) to apply the deltas:
  • Replicator-B 452 downloads the data 454 from Object Store 430.
  • Step S6: CP-A 410 is notified of the new snapshot now available on the target (B) after the delta application is complete.
  • FIG. 5 is a simplified diagram illustrating the high-level concept of B-tree walk, according to certain embodiments.
  • B-tree structure may be used in a file system.
  • a delta generator walks the B-tree and guarantees consistency for the walk. In other words, the walk ensures that the key-values are what is expected at the end of the walk and captures all information between any two snapshots, such that no data corruption may occur.
  • the file system is a transactional type of file system that may be modified, and the users need to know about the modification and redo the transactions because another user may update the same transaction or data.
  • snapshots are immutable (e.g., they cannot be modified, except that a garbage collector can remove them). As illustrated in FIG. 5, there are many snapshots (snapshot 1 to snapshot N) in the file systems. When a delta generator is walking the B-tree keys (510 - 560) in a source file system, snapshots may be removed because a garbage collector 580 may come in to clean the keys of the snapshots that are deemed garbage. When a delta generator walks the B-tree keys, it needs to ensure the keys associated with the remaining snapshots (e.g., those not removed by the garbage collector) are copied.
  • the B-tree keys may give a picture of what has changed.
  • the techniques disclosed in the present disclosure can determine what B-tree keys are new and what have been updated between two snapshots.
  • a delta generator may collect the metadata part, keys and values, and associated data, then send them to the target. The target can figure out that the received information is between two snapshot ranges and applies it in the target file system. After the delta generator (or delta generator threads) walks a section between two keys and confirms its consistency, it uses the last ending key as the next starting key for its next walk. The process is repeated until all keys have been checked, and the delta generator collects the associated data every time consistency is confirmed.
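  • A minimal sketch of that walk loop is shown below, assuming a hypothetical btree interface (changed_keys returning changed keys strictly after the cursor, and is_consistent verifying a section); it illustrates the restart-and-advance pattern rather than the actual FSS delta generator.

        def walk_key_range(btree, range_start, range_end, base_snap, new_snap, batch=1024):
            """Walk one key range in sections, keeping only sections confirmed consistent."""
            deltas = []
            cursor = range_start
            while True:
                # Keys changed between the two snapshots, strictly after `cursor`.
                section = btree.changed_keys(base_snap, new_snap,
                                             after=cursor, upto=range_end, limit=batch)
                if not section:
                    break                      # the whole range has been walked
                if not btree.is_consistent(section):
                    continue                   # drop the section and walk it again
                deltas.extend(section)         # collect keys/values (and associated data)
                cursor = section[-1].key       # last ending key becomes the next start
            return deltas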
  • the disclosed techniques can ensure the consistency between the source file system and the target file system by detecting a whiteout file (i.e., a modified file affected by the garbage collector) during B-tree walk, retrieving an unaffected version of the modified file, and providing relevant information to the target file system during the same replication cycle to properly reconstruct the correct snapshot chain.
  • FIG. 6A and 6B are diagrams illustrating pipeline stages of cross-region replication, according to certain embodiments.
  • the cross-region replication for a source file system disclosed in the present disclosure has four pipeline stages in the source file system, namely initiation of the cross-region replication, B-tree walk in the source file system (i.e., delta generation pipeline stage), storage IO access for retrieving data (i.e., data read pipeline stage), and data upload to the Object Store (i.e., data upload pipeline stage).
  • the target file system has four similar pipeline stages but in reverse order, namely preparation of cross-region replication, data download from the Object Store, delta application in the target file system, and storage IO access for storing data.
  • FIG. 6A illustrates the four pipeline stages in the source file system, but a similar concept applies to the target file system.
  • FIG. 6B illustrates the interaction among the processes and components involved in the pipeline stages. All of these pipeline stages may operate in parallel. Each pipeline stage may operate independently and hand off information to the next pipeline stage when the processing in the current stage completes. Each pipeline stage is ensured to take a share of the entire bandwidth and not use more than necessary. In other words, resources are allocated fairly among all jobs. If no other job is working in the system, the working job can get as many resources as possible.
  • the threads in each pipeline stage also perform their tasks in parallel (or concurrently) and independently of each other in the same pipeline stage (i.e., if a thread fails, it will not affect other threads).
  • the tasks (or replication jobs) performed by the threads in each pipeline stage are restartable, which means when a thread fails, a new thread (also referred to as substitute thread) may take over the failed thread to continue the original task from the last successful point.
  • a B-tree walk may be performed with parallel processing threads in the source file system 280.
  • a B-tree may be partitioned into multiple key ranges between the first key and the last key in the file system. The number of key ranges may be determined by customers. Multiple range threads (e.g., around 8 to 16) per file system may be used for the B-tree walk.
  • One range thread can perform the B-tree walk for a key range, and all range threads operate concurrently and in parallel.
  • the number of threads to be used depends on factors such as the size of the file system, availability of resources, and bandwidth in order to balance the resource and traffic congestion.
  • the number of key ranges is usually more than the number of range threads available to utilize the range threads fully.
  • the B-tree walk can be scalable and processed by concurrent parallel walks (e.g., with multiple threads).
  • the system may drop a transaction that is in progress and has not been committed yet, and go back to the starting point to walk again.
  • the delta generator may ignore the missing keys and their associated data by not collecting them to minimize the amount of information to be processed or uploaded to the target side since these associated data are deemed garbage.
  • the B-tree walk and data transfer can be more efficient.
  • a delta generator does not need to wait for the garbage collector to remove the information to be deleted before walking the B-tree keys. For example, keys have dependencies on each other.
  • Delta generators typically do not modify anything on the source side (e.g., they do not delete the keys or blocks of data deemed garbage) but simply do not copy them to the target side.
  • the B-tree walk process and garbage collection are asynchronous processes. For example, when a block of data that a key points to no longer exists, the file system can flag the key as garbage and note that it should not be modified (e.g., immutable), but only the garbage collector can remove it.
  • a delta generator can continue to walk the next key without waiting for the garbage collector. In other words, delta generators and garbage collectors can proceed at their own pace.
  • These range threads 612a-n are performed by the delta generator 620. They initialize their GETKEYVAL buffers 640 (shown in FIG. 6B), update their checkpoint records 642 in SDB 622 (shown in FIG. 6B), and perform storage IO access 644 by interacting with DASD IO threads 614a-n.
  • each main thread 610 is responsible for overseeing all the range threads 612a-n it creates.
  • the main thread 610 may generate a master manifest file outlining the whole replication.
  • the range threads 612a-n generate a range manifest file including the number of key ranges (i.e., a sub-division of the whole replication), and then checkpoint manifest (CM) files for each range to provide updates to the target file system about the number of blobs per checkpoint, where checkpoints are created during the B-tree walk.
  • the number of range threads 612a-n to be used depends on factors such as the size of the file system, availability of resources and bandwidth to balance the resource, amount of data to generate and traffic congestion.
  • the number of key ranges is usually more than the number of range threads 612a-n available to fully utilize the range threads, around a 2x to 4x ratio.
  • Each of the range threads 612a-n has a dedicated buffer (GETKEYVAL) 640 containing available jobs to work on.
  • Each range thread 612 operates independently of other range threads, and updates its checkpoint records 642 in SDB 622 periodically.
  • When the range threads 612a-n are walking the B-tree (i.e., recursively visiting every node of the B-tree), they may need to collect file data (e.g., FMAP) associated with B-tree keys and request IO access 644 to storage. These IO requests are enqueued by each range thread 612 to allow DASD IO threads 614a-n (i.e., data read pipeline stage) to work on them. These DASD IO threads 614a-n are common threads shared by all range threads 612a-n.
  • After DASD IO threads 614a-n have obtained the requested data, the data is put into an output buffer 646 to serialize it into blobs for object threads 616a-n (i.e., data upload pipeline stage) of the replicators to upload to the Object Store located in the target region.
  • Each object thread picks up an upload job that may contain a portion of all data to be uploaded, and all object threads perform the upload in parallel.
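  • The hand-off between these stages can be sketched with queues, as below; one worker per stage is shown for brevity (FSS runs many per stage), and the walk_fn, read_fn, and upload_fn callables are placeholders for the delta generation, DASD read, and Object Store upload steps.

        import queue

        io_requests = queue.Queue()      # range threads -> DASD IO threads
        output_buffers = queue.Queue()   # DASD IO threads -> object threads


        def range_thread(key_range, walk_fn):
            """Delta generation stage: walk one key range and enqueue IO requests."""
            for delta in walk_fn(key_range):
                io_requests.put(delta)
            io_requests.put(None)                  # signal that this range is done


        def dasd_io_thread(read_fn):
            """Data read stage: fetch FMAP blocks for each delta."""
            while (delta := io_requests.get()) is not None:
                output_buffers.put(read_fn(delta))
            output_buffers.put(None)


        def object_thread(upload_fn):
            """Data upload stage: serialize buffers into blobs and upload them."""
            while (buf := output_buffers.get()) is not None:
                upload_fn(buf)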
  • FIG. 7 is a diagram illustrating a layered structure in the FSS data plane, according to certain embodiments.
  • the replicator fleet 710 has four layers, job layer 712, delta generator client 714, encryption/DASD IO 716, and Object 718.
  • the replicator fleet 710 is a single process responsible for interacting with the storage fleet 720, KMS 730, and Object Storage 740.
  • the job layer 712 polls the SDB 704 for enqueued jobs 706, either upload jobs or download jobs.
  • the replicator fleet 710 includes VMs (or threads) that pick up the enqueued replication jobs up to their maximum capacity.
  • a replicator thread may own a part of a replication job, but it will work together with another replicator thread that owns the rest of the same replication job to complete the entire replication job concurrently.
  • the replication jobs performed by the replicator fleet 710 are restartable in that if a replicator thread fails in the middle of replication, another replicator thread can take over and continue from the last successful point to complete the job the failed replicator thread initially owns. If a strayed replicator thread (e.g., fails and wakes up again) conflicts with another replicator thread, FSS can use a mechanism called generation number to avoid the conflict by making both replicator threads update different records.
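  • The disclosure only states that conflicting replicator threads are made to update different records; one common way to sketch that kind of protection is a generation (fencing) check, shown below with hypothetical names and a plain dict standing in for the SDB.

        def take_over_job(sdb, job_key):
            """A substitute replicator bumps the job's generation when it takes over."""
            record = sdb[job_key]
            record["generation"] += 1
            sdb[job_key] = record
            return record["generation"]


        def update_checkpoint(sdb, job_key, my_generation, checkpoint):
            """Only the current generation's owner may advance the checkpoint record."""
            record = sdb[job_key]
            if record["generation"] != my_generation:
                return False                # strayed thread: its update is rejected
            record["checkpoint"] = checkpoint
            sdb[job_key] = record
            return True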
  • the delta generator client layer 714 performs B-tree walking by accessing the delta generator server 724, where the B-tree is located, in the storage fleet 720.
  • the encryption / DASD IO layer 716 is responsible for security and storage access.
  • the replicator fleet 710 may request IO access through the encryption / DASD IO layer 716 to access DASD extents 722 for file data associated with the deltas identified during the B-tree walk.
  • Both the replicator fleet 710 and storage fleet 720 regularly update their status (e.g., checkpoints and leasing for the replicator fleet 710) to the control API 702 through SDB 704 to allow the control API 702 to trigger alarms or take actions when necessary.
  • the encryption / DASD IO layer 716 interacts with the KMS and FSK fleet 730 at the target side to create session keys (or snapshot encryption keys) during a cross-region replication process, and uses FSK for encrypting and decrypting the session keys.
  • object layer 718 is responsible for uploading deltas and file data from the source file system to the Object Store 740 and downloading them to the target file system from the Object Store 740.
  • B-tree keys are processed by replicators and delta generators in the data plane together.
  • Algorithms for computing the changed key-value pairs (i.e., part of deltas) between two given snapshots in a file system can continuously read the keys, and return the keys back to replicators using transaction budgets, and ensure that transactions are confirmed at the end to get consistent key-value pairs for processing.
  • the delta generation and calculation may be scalable.
  • the scalable approach can utilize multiple threads to compute deltas (i.e., the changes of key-value pairs) between two snapshots by breaking a B-tree into many key ranges.
  • FIG. 8 depicts a simplified example binary large object (BLOB) format, according to certain embodiments.
  • a blob is a data type for storing information (e.g., binary data) in a database. Blobs are generated during replication by the source region and uploaded to the Object Store. The target region needs to download and apply the blobs. Blobs and objects may be used interchangeably depending on the context.
  • the delta generator works with replicators to traverse all the pages in the blocks (FMAP blocks) inside the DASD extent that the FMAP points to, read them into a data buffer, decrypt the data using a local encryption file key, and put it into an output buffer to serialize it into blobs for replicators to upload to the Object Store.
  • the delta generators need to collect all FMAPs for an identified delta to get all the data related to the differences between the two snapshots.
  • a snapshot delta stored in the Object Store may span over many blobs (or objects if stored in the Object Store).
  • the blob format for these blobs has keys, values, and data associated with the keys if they exist.
  • the snapshot delta 800 includes at least three blobs, 802, 804 and 806.
  • the first blob 802 has a prefix 810 indicating the key-value type, key length and value length, followed by its key 812 (key1) and value 814 (val1).
  • the second blob 804 has a prefix 820 (key-value type, key length and value length), key 822 (key 2), value 824 (val2), data length 826 and data 828 (data2).
  • the third blob 806 has a similar format to that of the first blob 802, for example, prefix 830, key 832 (key3), and value 834 (val3).
  • Data is decrypted, collected, and then written into the blob. All processes are performed in parallel. Multiple blobs can be processed and updated at the same time. Once all processes are done, data can be written into the blob format (shown in FIG. 8), then uploaded to the Object Store with a format or path names (illustrated in FIG. 9).
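  • The entry layout of FIG. 8 can be sketched as below; the field widths, byte order, and type codes are assumptions for illustration, not the FSS wire format.

        import struct

        PREFIX = struct.Struct("<BII")   # key-value type (1 byte), key length, value length
        DATALEN = struct.Struct("<Q")    # data length (8 bytes), present only when data exists


        def pack_entry(kv_type, key, value, data=None):
            """Pack one blob entry: prefix, key, value, and optional data length + data."""
            out = PREFIX.pack(kv_type, len(key), len(value)) + key + value
            if data is not None:
                out += DATALEN.pack(len(data)) + data
            return out


        # A snapshot delta may span several such entries packed into blobs, e.g.:
        blob = (pack_entry(1, b"key1", b"val1")
                + pack_entry(2, b"key2", b"val2", data=b"data2")
                + pack_entry(1, b"key3", b"val3"))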
  • FIG. 9 depicts an example replication bucket format, according to certain embodiments.
  • a “bucket” may refer to a container storing objects in a compartment within an object storage namespace.
  • buckets are used by source replicators to store secured data using server-side encryption (SSE) technique and also used by target replicators to download for applying changes to snapshots.
  • the replication data for all filesystems for a target region may share a bucket in that region.
  • the second object 930 also has two deltas 932 & 940 with a similar format starting with a path name 931.
  • the two objects 910 & 930 in the bucket come from different source regions, IAD for object 910 and PHX for object 930, respectively. Once a blob is applied, the corresponding information in the layout can be removed to reduce space utilization.
  • a final manifest object (i.e., the checkpoint manifest, CM file) is uploaded from the source region to the Object Store to indicate to the target region that the source file system has completed the snapshot delta upload for a particular object.
  • the source CP will communicate this event to the target CP, where the target CP can inform the target DP via SDB to trigger the download process for that object by target replicators.
  • the control plane in a source region or target region orchestrates all of the replication workflows, and drives the replication of data.
  • the control plane performs the following functions: 1) creating system snapshots that are the basis for creating the deltas; 2) deciding when such snapshots need to be created; 3) initiating replication based on the snapshots; 4) monitoring the replication; 5) triggering the deltas to be downloaded by the secondary (or target) side; and 6) indicating to the primary (or source) side that the snapshot has reached the secondary.
  • a file system has a few operations to handle its resources, including, but not limited to, creating, reading, updating, and deleting (CRUD). These operations are generally synchronous within the same region, and take up workflows as the file system gets HTTPS requests from API servers, makes changes in the backend for creating resources, and gets responses back to customers.
  • the resources are split between source and target regions.
  • the states are maintained for the same resources between the source and target regions. Thus, asynchronous communication between the source and target regions exists.
  • Customers can contact the source region to create or update resources, which can be automatically reflected to the secondary or auxiliary resources in the target region.
  • the state machine in the control plane also covers recovery in many aspects, including but not limited to, failure in the fleet, key management failure, disk failure, and object failure, etc.
  • Control APIs for any new resource work only in the region where the object is created.
  • a field called “IsTargetable” in its APIs can be set to ensure that the target file system undergoing replication cannot be accidentally used by a consumer. In other words, setting this field to false means that although a consumer can see the target file system, no one can export the target file system or access any data in the live system. Any export may change the data because the export is a read/write permission to export, not a read-only permission. Thus, export is not allowed, to prevent any change to the target file system during the replication process.
  • When the source creates necessary security and other related artifacts, it uploads them to the Object Store, initiates a job on the target (i.e., notifies the target that a job is available), and the target can start downloading the artifacts (e.g., snapshots or deltas). Thereafter, the target keeps looking in the Object Store for an end-of-file marker (also referred to herein as a checkpoint manifest (CM) file).
  • the CM file is used as a mechanism for the source side and target side to communicate the completion of the upload of an object during the replication process.
  • the source side uploads this CM file containing information, such as the number of blobs that have been uploaded up to this checkpoint, such that the target side can download this number of blobs to apply to its current snapshot.
  • This CM file is a mechanism for the source side to communicate to the target side that the upload of an object to the Object Store is complete for the target to start working on that object. In other words, the target will continue to download until there are no more objects in the Object Storage.
  • this scheme enables the concurrent processing of both the source side and the target side.
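  • The CM hand-off can be sketched as below, assuming a hypothetical object_store interface (put/get, with get returning None when an object is absent) and assumed object names; the source publishes per-checkpoint blob counts plus an eof flag, and the target polls, downloads exactly that many blobs, and stops at the final CM.

        import json
        import time


        def source_upload_checkpoint(object_store, prefix, checkpoint_no, blob_count, eof=False):
            """Source side: publish how many blobs exist up to this checkpoint."""
            cm = {"checkpoint": checkpoint_no, "blob_count": blob_count, "eof": eof}
            object_store.put(f"{prefix}/cm-{checkpoint_no:08d}", json.dumps(cm).encode())


        def target_poll_and_download(object_store, prefix, apply_blob, poll_seconds=5):
            """Target side: keep looking for CM files and apply blobs as they appear."""
            next_cm, downloaded = 0, 0
            while True:
                raw = object_store.get(f"{prefix}/cm-{next_cm:08d}")
                if raw is None:
                    time.sleep(poll_seconds)            # keep looking for the next CM file
                    continue
                cm = json.loads(raw)
                while downloaded < cm["blob_count"]:
                    apply_blob(object_store.get(f"{prefix}/blob-{downloaded:08d}"))
                    downloaded += 1
                if cm["eof"]:                           # final CM: the upload is complete
                    return downloaded
                next_cm += 1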
  • FIG. 10 is a flow chart illustrating state machines for concurrent source upload and target download, according to certain embodiments.
  • both the source file system and the target file system can perform the replication concurrently and thus have their respective state machines.
  • each file system may have its own state machine while sharing some common job level states.
  • the source file system has states 1002 to 1018 for performing the data upload plus states 1030 to 1034 for session key generation and transfer.
  • the target file system has states 1050 to 1068 for data download.
  • a session key may be generated at any time in the source file system while the deltas are being uploaded to the Object Storage. Thus, the session key transfer has its own state sequence 1030 to 1034.
  • the target file system cannot start the replication download process until certain conditions, described below, are satisfied.
  • In a source file system, several functional blocks, such as the snapshot generator, control API, and delta monitor, are part of the CP.
  • Replicator fleet is part of the DP.
  • the snapshot generator is responsible for periodically generating snapshots.
  • the delta monitor monitors the progress of the replicators on replication-related tasks, including snapshot creation and replication schedule on a periodic basis. Once the delta monitor detects that the replicator has completed the replication jobs, it moves the states to a copied state (e.g., Manifest_Copied state 1014) on the source side or a replicated state (e.g., Replicated state 1058) on the target side.
  • several file systems can perform replication at the same time from a source region to a target region.
  • In the source file system, in a concurrent mode state machine, the snapshot generator, after creating a snapshot, signals to the delta monitor that a snapshot has been generated.
  • the delta monitor which runs a CP replication state (CpRpSt) workflow, is responsible for initiating snapshot metadata upload to the Object Store on the target side.
  • Snapshot metadata may include snapshot type, snapshot identification information, snapshot time, etc.
  • the CpRpSt workflow sets the Ready_to_Copy_Metadata state 1002 for the replicator fleet to begin copying metadata.
  • When a replicator gets a replication job, it makes copies of snapshot metadata (i.e., Snapshot_Metadata_Copying state 1004) and uploads the copies to the Object Store.
  • When all replicators complete the snapshot metadata upload, the state is set to Snapshot_Metadata_Copied state 1006.
  • the CpRpSt workflow then continues polling the source SDB for a session key.
  • the session key may be generated by the source file system while the data upload is in progress.
  • the replicator of the source file system communicates with the target KMS vault to obtain a master key, which may be provided by customers, to create a session key (referred to herein as delta encryption key or DEK).
  • the replicator uses a local file system key (FSK) to encrypt the session key (which then becomes an encrypted DEK, also referred to herein as a delta transfer key (DTK)).
  • DTK is then stored in SDB in the source region for reuse by replicator threads during a replication cycle.
  • the state machine moves to Ready_to_Copy_DTK state 1030.
  • the source file system transfers DTK and KMS’s resource identification to the target API, which then puts them into SDB in the target region.
  • the state machine is set to Copying_DTK state 1032.
  • When the CpRpSt workflow in the source file system finishes polling the source SDB for the session key, it sends a notification to the target side signaling that the session key (DTK) is ready for the target file system to download and use to decrypt its downloaded deltas for application.
  • the state machine then moves to Copied_DTK state 1034.
  • the target side replicator retrieves DTK from its SDB and requests KMS’s API to decrypt it to become a plain text DEK (i.e., decrypted session key).
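  • The session-key flow above follows an envelope-encryption pattern, sketched below; Fernet is only a stand-in for the KMS master-key and FSK operations, and the variable names are assumptions.

        from cryptography.fernet import Fernet

        # Source side
        wrapping_key = Fernet.generate_key()        # stand-in for the KMS/FSK key material
        dek = Fernet.generate_key()                 # session key used to encrypt delta blobs
        dtk = Fernet(wrapping_key).encrypt(dek)     # wrapped DEK = DTK, parked in the SDB

        blob_ciphertext = Fernet(dek).encrypt(b"serialized delta blob")

        # Target side
        recovered_dek = Fernet(wrapping_key).decrypt(dtk)         # unwrap DTK back to the DEK
        assert Fernet(recovered_dek).decrypt(blob_ciphertext) == b"serialized delta blob"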
  • When the source file system completes the upload of data for a particular replication cycle, including the session key transfer, its delta monitor notifies the target control API of such status as validation information and enters X-region_Copied_Done state 1016. This may occur before the target file system completes the data download and application. The source file system also cleans up its memory and removes all the keys. The source file system then enters Awaiting_Target_Response state 1018 to wait for a response from the target file system to start a new replication cycle.
  • the target file system cannot start the replication download process until it has received the indication that at least an object has been uploaded by the source file system (i.e., Manifest_Copied state 1014) to the Object Storage and that a session key is ready for it to download (i.e., Copied_DTK state 1034). Once these two conditions are satisfied, the state machine moves to Ready_to_Reconcile state 1050. Then, at Reconciling state 1052, the target file system starts a reconciliation process with the source side, such as synchronizing snapshots of the source file system and the target file system, and also performs some internal CP administrative work, including taking snapshots and generating statistics. This internal state involves communication within the target file system between its delta monitor and CP API.
  • the replication job is passed to the target replicator (i.e., Ready_to_Replicate state 1054).
  • the target replicator monitors a checkpoint manifest (CM) file that will be uploaded by the source file system.
  • the target replicator threads then start downloading the manifests and applying the downloaded and decrypted deltas (i.e., Replicating state 1056).
  • the target replicator threads also read the FMAP data blocks from the blobs downloaded from the Object Store, and communicate with local FSK services to get the file system key (FSK), which is used to re-encrypt each FMAP data block and store it in its local storage.
  • If the source file system has finished the data upload, it will update a final CM file by setting an end-of-file (eof) field to true and upload it to the Object Store. As soon as the target file system detects this final CM file, it will finish the download of blobs, apply them, and the state machine moves to Replicated state 1058.
  • After the target file system has applied all deltas (or blobs), it continues to download snapshot metadata from the Object Store and populates the target file system’s snapshots with the information of the source file system’s snapshots.
  • This is the Snapshot_Metadata_Populating state 1060. Once the target file system’s snapshots are populated, the state machine moves to Snapshot_Metadata_Populated state 1062.
  • the target file system deletes all the blobs in the Object Store for those that have been downloaded and applied to its latest snapshot.
  • the target control API will then notify the target delta monitor once the blobs in the Object Store have been deleted, and proceeds to Snapshot_Deleted state 1066.
  • the target file system also cleans up its memory and removes all keys as well.
  • the FSS service also releases the KMS key.
  • When the target DP finishes the delta application and the clean-up, it validates with the target control API the status of the source file system and whether it has received the X-region_Copied_Done notification from the source file system. If the notification has been received, the target delta monitor enters X-region_DONE state 1068 and sends an X-region_DONE notification to the source file system.
  • the target file system is also able to detect whether the source file system has completed the upload by checking whether the end-of-file markers are present for all the key ranges and all the upload processing threads, because every object uploaded to the Object Store has a special marker, such as an end-of-file marker in a CM file.
  • the source side and target side operate asynchronously.
  • When the source file system completes its replication upload, it notifies the target control API with an X-region_Copied_Done notification.
  • When the target file system later completes its replication process, its delta monitor communicates back to the source control API with an X-region_DONE notification.
  • the source file system goes back to Ready_to_Copy_Metadata state 1002 to start another replication cycle.
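  • The two concurrent state machines and their hand-off can be summarized as below, using the state names called out above for FIG. 10; intermediate and error states not named in the text are omitted, and the gating helper is an illustration only.

        from enum import Enum, auto


        class SourceState(Enum):
            READY_TO_COPY_METADATA = auto()
            SNAPSHOT_METADATA_COPYING = auto()
            SNAPSHOT_METADATA_COPIED = auto()
            MANIFEST_COPIED = auto()
            X_REGION_COPIED_DONE = auto()
            AWAITING_TARGET_RESPONSE = auto()


        class DtkState(Enum):
            READY_TO_COPY_DTK = auto()
            COPYING_DTK = auto()
            COPIED_DTK = auto()


        class TargetState(Enum):
            READY_TO_RECONCILE = auto()
            RECONCILING = auto()
            READY_TO_REPLICATE = auto()
            REPLICATING = auto()
            REPLICATED = auto()
            SNAPSHOT_METADATA_POPULATING = auto()
            SNAPSHOT_METADATA_POPULATED = auto()
            SNAPSHOT_DELETED = auto()
            X_REGION_DONE = auto()


        def target_may_start(source: SourceState, dtk: DtkState) -> bool:
            """The target download starts only after a manifest object has been uploaded
            and the session key (DTK) has been copied."""
            return (source.value >= SourceState.MANIFEST_COPIED.value
                    and dtk is DtkState.COPIED_DTK)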
  • FIG. 11 is an example flow diagram illustrating the interaction between the data plane and control plane in a source region, according to certain embodiments.
  • Data plane components and control plane components communicate with each other using a shared database (SDB), for example, 1106.
  • SDB is a key-value store that both control plane components and data plane components can read and write.
  • Data plane components include replicators and delta generators. The interaction between components in source region A 1101 and target region B 1102 is also illustrated.
  • a source control plane (CPa) 1103 requests the Object Store in target region B (OSb) 1112 to create a bucket.
  • a source replicator (REPLICATORa) 1108 updates its heartbeat status to the source SDB (SDBa) 1106 regularly.
  • Heartbeat is a concept used to track the replication progress performed by replicators. It uses a mechanism called leasing in which a replicator can keep on updating the heartbeat whenever it works on a job to allow the control plane to be aware of the whole leasing information; for example, the byte count is continuously moving on the job.
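  • A minimal sketch of the heartbeat/leasing idea follows; the record layout, lease TTL, and dict-based SDB are assumptions used only to illustrate how a stalled lease lets another replicator take the job over.

        import time

        LEASE_TTL_SECONDS = 60


        def heartbeat(sdb, job_key, replicator_id, bytes_copied):
            """Replicator side: refresh the lease (with progress) while working on the job."""
            sdb[job_key] = {
                "owner": replicator_id,
                "bytes_copied": bytes_copied,
                "heartbeat_at": time.time(),
            }


        def lease_expired(sdb, job_key, now=None):
            """Control-plane side: a lease whose heartbeat stops advancing is considered lost."""
            record = sdb.get(job_key)
            if record is None:
                return True
            return ((now or time.time()) - record["heartbeat_at"]) > LEASE_TTL_SECONDS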
  • CPa 1103 also requests the file system service workflow (FSW_CPa) 1104 to create a snapshot periodically, and at step S4, FSW_CPa 1104 informs CPa 1103 about the new snapshot.
  • CPa 1103 then stores snapshot information in SDBa 1106.
  • REPLICATORa 1108 polls SDB 1106 for any changes to existing snapshots, and retrieves job spec at step S7 if a change is detected.
  • When REPLICATORa 1108 detects a change to snapshots, this kicks off the replication process.
  • REPLICATORa 1108 provides information about two snapshots (SNa and SNb) with changes between them to delta generator (DGa) 1110.
  • REPLICATORa 1108 puts work item information, such as the number of key ranges, into the SDBa 1106.
  • REPLICATORa 1108 checks the replication job queue in SDBa 1106 to obtain work items, and at step S11, assigns them to the delta generator (DGa) 1110 to scan the B-tree keys of the snapshots (i.e., walking the B-tree) to compute deltas and the corresponding key-value pairs.
  • REPLICATORa 1108 decrypts file data associated with the identified B-tree keys, and packs them together with the key-value pairs into blobs.
  • REPLICATORa 1108 encrypts the blobs with a session key and uploads them to the OSb 1112 as objects.
  • REPLICATORa performs a checkpoint and stores the checkpoint record in SDBa 1106.
  • This replication process (S8 to S14) repeats (as a loop) until all deltas have been identified and data has been uploaded to OSb 1112.
  • REPLICATORa 1108 then notifies SDBa 1106 with the replication job details, which are then passed to CPa 1103 at step S16, and further relayed to CPb 1114 as the final CM file at step S17.
  • CPb 1114 stores the job details in SDBb 1116.
  • Authentication is performed on every component. From replicators to a file system key (FSK), an authentication mechanism exists by using replication ID and file system number. The key can be given to a replicator only when it provides the right content. Thus, the authentication mechanism can prevent an imposter from obtaining decryption keys. Other security mechanisms include blocking network ports.
  • a component called the file system key server (FSKS) is a gatekeeper for checking that requesters are appropriate by checking metadata such as the jobs the requesters will perform and other information. For example, suppose a replicator tries to request a key for a file system. In that case, the FSKS can check whether the replicator is associated with a particular job (e.g., a replication is actually associated with that file system) to validate the requester.
  • Availability addresses the situation that a machine can be restarted automatically after going down, or that a service continues to be available while software deployments are going on. For example, all replicators are stateless, so losing a replicator is transparent to customers because another replicator can take over to continue working on the jobs.
  • the states of the jobs are kept in a shared database and other reliable locations, not locally.
  • the shared database is a database-like service that the control plane uses to preserve information about file systems, and is based on B-tree.
  • Control plane availability is high by utilizing many machines that can take over for each other in case of any failures. For example, replication progress is not hindered simply due to one control plane’s failure. Thus, there is no single point of failure.
  • Network access availability utilizes congestion management involving various types of throttling to ensure source nodes are not overloaded.
  • Replication is durable by utilizing checkpointing, where replication states are written to a shared database, and the replicators are stateless.
  • the replication process is idempotent. Idempotency may refer to deterministic re-application: when an operation fails, the retry of the same operation should work and lead to the same result, by using, for example, the same key, upload process, or walking process, etc.
  • Atomic replay allows the application of deltas to start as soon as the first delta object reaches the Object Store when snapshots are rolled back, for example, from snapshot 10 back to snapshot 5.
  • the entire deltas need to be preserved in the Object Store before the deltas can be applied.
  • the FSS of the present disclosure allows adding as many replication machines (e.g., replicator virtual machines (“VMs”)) as needed to support many file systems.
  • the number of replicators may dynamically increase or decrease by taking into account the bandwidth requirement and availability of resources.
  • thousands of storage nodes can be used to parallelize the process and increase the speed of work.
  • bandwidth rationing ensures each workload does not overuse or cross its predefined throughput limit by automatically throttling, such as throttling all inter-region bandwidth by figuring out the latency increase and slowing down requests. All replicator processors (or threads) have this capability.
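  • One way to sketch this latency-based throttling is shown below; the thresholds and backoff factors are arbitrary illustrative choices, not values from the disclosure.

        import time


        class BandwidthThrottle:
            """Back off when observed inter-region latency climbs well above its baseline."""
            def __init__(self, baseline_latency_s, max_delay_s=2.0):
                self.baseline = baseline_latency_s
                self.max_delay = max_delay_s
                self.delay = 0.0

            def observe(self, measured_latency_s):
                if measured_latency_s > 1.5 * self.baseline:
                    # Latency increase detected: slow down subsequent requests.
                    self.delay = min(self.max_delay, self.delay * 2 or 0.05)
                else:
                    self.delay = max(0.0, self.delay / 2)

            def before_request(self):
                if self.delay:
                    time.sleep(self.delay)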
  • snapshot 4 1246 in the target file system 1208 will be the one to use after fallback because it was the latest change in the target file system 1208.
  • the fallback process 1252 for this option involves reverse replication (i.e., reversing the roles of the source file system and the target file system for a replication process), and FSS performs the following steps:
  • Step 4: The FSS services start a reverse replication 1252 with a similar process as discussed in relation to FIG. 4 but in the reverse direction.
  • both the source file system 1206 and the target file system 1208 need to synchronize, then the target file system 1208 can upload deltas to an Object Store in the primary AD 1202.
  • the source file system 1206 can download the deltas from the Object Store to complete the application to snapshot 3 1224 to create a new snapshot 4 1226.
  • Provenance ID is a special identification that uniquely identifies a snapshot among regions, whether it’s a system snapshot or a user snapshot.
  • Suppose two file systems have the same provenance ID for a particular snapshot. In that case, the snapshot in each of these two file systems is very similar up to that point, either having a common ancestor or the same known point in time, and can be used as a base snapshot for cross-region (or x-region) replication.
  • Provenance ID applies to both system snapshots and user snapshots.
  • Replication and cloning may be different in that a replica is achieved by first copying the full data from a source region to a target region, and thereafter copying the deltas between snapshots.
  • cloning copies only necessary data to create a thin client.
  • In-region cloning is much faster than cross-region replication because cloning does not involve extra encryption/decryption, Object Storage transfer, and many stages of pipelines that a replication requires. Once a clone is created, it does not receive more changes in the future, so it only gets a point-in-time snapshot.
  • the provenance ID may be useful for all file systems in the same region by cloning snapshots from another file system to a target FS in the same target region if the snapshots to be replicated from a source region already exist in the target region but not in the target FS. This may be illustrated in FIG. 13.
  • FIG. 13 is a diagram illustrating an example use of the provenance ID, according to certain embodiments.
  • FSS then creates replicas (i.e., step 1320) for snapshots 1, 2, 3, and 5 of file system FS2 to become snapNum 1/ProvID S1/OCID M1, snapNum 2/ProvID S2/OCID M2, snapNum 3/ProvID S3/OCID M3 and snapNum 5/ProvID K5/OCID M5 of a file system FS3 in region 2.
  • the replication is deleted (i.e., step 1322) after snapNum 5 is replicated, meaning region 1 and region 2 do not communicate anymore.
  • snapshots snapNum 6/ProvID G6/OCID M6 and snapNum 7/ProvID G7/OCID M7 are created in FS3 in region 2 afterward.
  • FS1, which is located in the same region 1 as FS4, can first create clones (i.e., step 1342) for snapshots 1, 2 and 3 (snapNum 1/ProvID S1/OCID S1, snapNum 2/ProvID S2/OCID S2 and snapNum 3/ProvID S3/OCID S3) of FS1 to become (snapNum 1/ProvID S1/OCID P1, snapNum 2/ProvID S2/OCID P2 and snapNum 3/ProvID S3/OCID P3) of FS4 in the same region 1 as base copies of snapshots.
  • FIG. 14 is a flow chart illustrating the process of using the provenance ID to identify a base snapshot for cross-region replication, according to certain embodiments.
  • a source FS in a source region may periodically generate system snapshots and also generate user snapshots at users’ requests.
  • each snapshot may be assigned a unique provenance ID, and other identifications (e.g., snapshot ID and resource ID).
  • a source FS may receive a request to perform an x-region replication between the source FS and a target FS, either due to an outage or a planned failover.
  • if no matching provenance ID is found in the target region, the x-region replication process may use the latest snapshot of the source FS as the selected base snapshot. In other words, the source FS may need to transfer the whole base snapshot copy (i.e., the selected base snapshot) to the target FS, as indicated in step 1420, and then perform any necessary delta transfers to the target FS afterward.
  • if a matched provenance ID (i.e., a matched snapshot with the same provenance ID) is found, the process further determines whether the matched snapshot belongs to the target FS or to a non-target FS in the target region.
  • if the matched snapshot belongs to a non-target FS, that non-target FS may perform an in-region cloning of the snapshot with the matched provenance ID to the target FS to create the base snapshot.
  • the x-region replication can use the cloned base snapshot for the target FS as the selected base snapshot.
  • the source FS can generate the deltas between its latest snapshot and the selected base snapshot with the matched provenance ID, and transfer only the deltas to the target FS via an Object Store.
  • the non-target FS1 may clone snapshots S1, S2, and S3 (i.e., step 1342) to target FS4 in the same region 1. Since all three snapshots (S1, S2 and S3) have matched provenance IDs, any of them may be used as a base snapshot.
  • the source FS can use the latest snapshot (i.e., S3) among the three as the selected base snapshot and generate deltas between snapshots S3 and G7 for the x-region replication (i.e., step 1344), as shown in the sketch below.
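The base-snapshot selection of FIG. 14 can be sketched roughly as follows. This is a simplified illustration under assumed data structures (a Snapshot record carrying a snapshot number, provenance ID, resource OCID, and owning file system); clone_in_region is a hypothetical stand-in for the FSS in-region cloning call, not an actual API.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Snapshot:
    snap_num: int
    provenance_id: str
    ocid: str           # resource ID of this particular copy
    file_system: str    # file system that owns this copy


def clone_in_region(snap: Snapshot, target_fs: str) -> Snapshot:
    """Hypothetical stand-in for in-region cloning into the target FS."""
    return Snapshot(snap.snap_num, snap.provenance_id, "clone-" + snap.ocid, target_fs)


def select_base_snapshot(source_snaps: List[Snapshot],
                         target_region_snaps: List[Snapshot],
                         target_fs: str) -> Tuple[Optional[Snapshot], Optional[Snapshot]]:
    """Return (source base, target base) per the FIG. 14 flow: prefer the newest
    source snapshot whose provenance ID already exists in the target region, and
    clone it in-region if it lives in a non-target FS. A (None, None) result means
    a full base-copy transfer is required."""
    by_prov = {s.provenance_id: s for s in target_region_snaps}
    for snap in sorted(source_snaps, key=lambda s: s.snap_num, reverse=True):
        match = by_prov.get(snap.provenance_id)
        if match is None:
            continue
        if match.file_system != target_fs:
            match = clone_in_region(match, target_fs)   # in-region clone, no x-region copy
        return snap, match      # only deltas newer than `snap` need to cross regions
    return None, None
```

In the FIG. 13 scenario, the cloned copies of S1-S3 in FS4 would match, so S3 becomes the selected base and only the deltas between S3 and G7 cross regions.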
  • the first aspect for maintaining snapshot consistency between a source FS and a target FS is the order of processing snapshot keys and file data.
  • the snapshot and data model of the FSS processes snapshot keys and file data in a specific order: snapshot keys first, then the file data.
  • Snapshot keys (may also be referred to as snapkeys) are the B-tree keys for snapshots. Whenever a new snapshot is created in the source region, a source data plane performs delta generation, which involves identifying the new snapshot keys of the new snapshot and transferring them to the target region; the target FS then applies and inserts the new snapshot keys into its B-tree.
  • the new snapshot keys may be collected by the garbage collector in the source region.
  • Snapshot keys need to be processed (i.e., identified and transferred to the target region) first before reading data blocks in the source region because snapshot keys represent a snapshot and help distinguish the differences between snapshots.
  • file data is associated with B-tree keys. Thus, accessing file data before a B-tree key is created in the target FS may lead to filesystem inconsistency.
  • snapshot keys are involved in billing metering and need to be established first.
  • A snapkey is a marker key for a snapshot. When an epoch is created, a marker key is also created. The snapshot number is created based on the epoch, which tracks time for a file system. For example, when the epoch advances from N to N+1, the source file system number is N+1, and the source FS creates snapshot number N (for either system snapshots or user snapshots).
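A minimal sketch of the ordering constraint described above, assuming a simplified key representation (a Key tuple with a kind field) rather than the actual B-tree key format:

```python
from collections import namedtuple

# Simplified key: kind is "snapkey" (marker key) or "data" (file data key).
Key = namedtuple("Key", ["kind", "snap_num", "name"])


def order_delta(keys_in_range):
    """Order a delta so snapshot keys are transferred before file-data keys."""
    snap_keys = [k for k in keys_in_range if k.kind == "snapkey"]
    data_keys = [k for k in keys_in_range if k.kind == "data"]
    return snap_keys + data_keys


def apply_delta(target_btree, delta):
    """Apply keys in order; a data key is only inserted once its snapkey exists."""
    for key in delta:
        if key.kind == "data":
            assert any(k.kind == "snapkey" and k.snap_num == key.snap_num
                       for k in target_btree), "snapkey must be inserted first"
        target_btree[key] = True
```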
  • the second aspect of maintaining snapshot consistency between a source FS and a target FS is handling snapshot deletions.
  • the FSS uses a data plane (DP) to handle snapshot creation and a control plane (CP) to handle snapshot deletion.
  • a snapshot generator in the source DP generates system snapshots periodically in addition to user snapshots generated by customers. Deltas are computed between two given system snapshots and replicated from a source FS to a target FS. However, snapshots may be deleted during the replication process.
  • while system snapshots are preserved in both the source FS and the target FS until the target FS has completed its delta application, user snapshots may be updated or deleted at any time in the source region during a replication process, but not in the target region.
  • Both the source CP and the target CP need to track and execute snapshot deletions according to the replication policy; otherwise, improper handling of a snapshot deletion may lead to inconsistency between the source FS and the target FS.
  • Snapshots created in the source region may not be visible to the user until these snapshots have been applied by the target file system. For example, if a source FS has three user snapshots S1, S2 and S3, the source and target CPs do not inform the user that snapshots S1, S2 and S3 are available in the target region until these snapshots have been recreated in the target FS. The purpose is to prevent the user from cloning any of these snapshots in the target region when they are not ready. In some embodiments, multiple replications may be performed on several existing user snapshots (e.g., S1, S2 and S3) from a source FS to one or more target file systems in different regions.
  • Those existing user snapshots in the source FS may need to be copied to one or more target file systems. But the source FS may create a new system snapshot (e.g., snapshot S4) as a base copy for initial synchronization between the source FS and one or more target file systems before performing the replications.
  • the deletion of snapshot keys is tracked and temporarily held by the source CP in its persistence memory, then is applied to both the source FS and the target FS at the end of a replication cycle.
  • A temporary hold (or withhold) means the deletion is not immediate and is postponed for a short period of time depending on other factors. The reason is that if the deletion is applied immediately during the replication window/process, the garbage collector may interfere with the replication process by removing some of the snapshot keys before they can be applied by the target FS, leading to inconsistency. In other words, if a file deletion happens during a replication window, the deletion is temporarily blocked until the replication completes and then is applied in both the source FS and the target FS. Thus, the deletion application is the final step of the snapshot model.
  • FSS utilizes a scheme called delayed snapshot deletion, which is applicable to user snapshots only.
  • FIG. 15 is a diagram illustrating delayed snapshot deletion and replication for maintaining consistency between a source FS and a target FS, according to certain embodiments.
  • the FSS has three replication cycles starting from the source FS 1510 and ending at the target FS 1530, where the source FS and target FS are in different regions.
  • Replication cycle 1 includes source cycle 1 (1512) and target cycle 1 (1532).
  • Replication cycle 2 includes source cycle 2 (1514) and target cycle 2 (1534).
  • Replication cycle 3 includes source cycle 3 (1516) and target cycle 3 (1536).
  • Each replication cycle starts with a system snapshot, for example, system snapshot S10 for replication cycle 1, system snapshot S20 for replication cycle 2, and system snapshot S30 for replication cycle 3.
  • the source FS holds the snapshot deletion until it receives notification from the target FS that the deleted snapshot has been applied by the target FS, typically at the end of the current replication cycle.
  • the snapshot deletion does not take effect in the target FS until the end of a next replication cycle. This delayed deletion prevents uncertainty and ensures consistency between the source FS and the target FS.
  • snapshot S5 is being deleted (i.e., a deletion request is received, as shown in FIG. 15) in the source FS 1510 during source cycle 1 (1512), before the target FS 1530 starts applying these snapshots at time 18:05 UTC.
  • the source CP allows S5 to continue to be transferred to the target FS 1530, but temporarily holds the deletion (i.e., keeps snapshot S5 in the "deleting" state) and then deletes S5 (i.e., the CP changes S5 to the "deleted" state) at the end of the whole replication cycle 1 (or target cycle 1 (1532)) after receiving a notification from the target FS 1530 indicating S5 has been applied by the target FS 1530 at time 18:15 UTC.
  • the internal state is set to the "deleting" state (i.e., pending delete), so any other requests related to S5 may receive an HTTP 409 response (i.e., indicating a conflict between those requests and the current state of the resource).
  • snapshot S5 is not actually deleted by the target FS 1530 until the end of replication cycle 2 (or target cycle 2 (1534)) at 19:15 UTC (shown as "-S5").
  • This postponed deletion of S5 may be referred to as blocked deletion because the snapshot is blocked from instant deletion.
  • more snapshots are created (e.g., S16 and S18) while some snapshots are deleted (e.g., S7 and S16) after replication cycle 1 (1512 & 1532) and before replication cycle 2 (1514 & 1534).
  • Replication cycle 2 starts from the source cycle 2 (1514) at 19:00 UTC and ends at 19:15 UTC in the target cycle 2 (1534).
  • User snapshot S7 was deleted (shown as "-S7") between replication cycles 1 and 2, so S7 is deleted by the source FS at the time of the deletion request (which may also be referred to as a non-blocked deletion), and deleted by the target FS at the end of replication cycle 2 at time 19:15 UTC.
  • when a snapshot is deleted, the state of the corresponding snapkey (a type of marker key) is changed from visible to invisible, so no user is able to read it.
  • the state changes from invisible to irretrievable, and the snapkey is removed from the B-tree (i.e., no longer exists in memory). This may be illustrated in FIG. 15 for snapshot S16 below.
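These visibility transitions can be captured as a small state machine. The sketch below is illustrative; the state names follow the description above rather than any FSS-internal naming.

```python
from enum import Enum

class SnapKeyState(Enum):
    VISIBLE = "visible"              # snapshot readable by users
    INVISIBLE = "invisible"          # deletion requested: hidden from reads, key still in the B-tree
    IRRETRIEVABLE = "irretrievable"  # deletion applied: key removed from the B-tree

def on_deletion_requested(state: SnapKeyState) -> SnapKeyState:
    return SnapKeyState.INVISIBLE if state is SnapKeyState.VISIBLE else state

def on_deletion_applied(state: SnapKeyState) -> SnapKeyState:
    return SnapKeyState.IRRETRIEVABLE if state is SnapKeyState.INVISIBLE else state
```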
  • if a snapshot is created after a replication cycle has started (i.e., deltas have been calculated between two existing snapshots) in a source FS, that snapshot may not be transferred from the source FS to a target FS until the next replication cycle.
  • snapshot 22 is created (shown as “+S22”) in the source FS 1510 during the source replication cycle 2 (1514).
  • S22 may not be replicated to the target FS 1530 during the current replication cycle (i.e., cycle 2, 1514 & 1534), which is already underway, but must wait until the next replication cycle (i.e., cycle 3, 1516 & 1536).
  • the delayed snapshot deletion scheme may be applied just like snapshot S5 discussed above.
  • the source FS may hold (or temporarily withhold) the snapshot deletion but still allow the x-region replication to be performed on the requested snapshot. In other words, the source FS may transfer the requested snapshot to the target FS, which can perform the delta application on this requested snapshot and then notify the source FS.
  • the source FS may delete the requested snapshot at the end of the replication cycle when the target FS has completed the x-region replication.
  • the target FS may not delete the requested snapshot that it has applied in the current replication cycle until the end of the next replication cycle (i.e., it waits for another replication cycle).
  • FIG. 17 is a flow chart illustrating the process of delayed snapshot deletion and replication after detecting a snapshot creation event, according to certain embodiments.
  • if a snapshot is created during an x-region replication cycle (e.g., replication cycle N) but no snapshot deletion request is received by the source FS before or during the next x-region replication cycle (e.g., replication cycle N+1), the replication for the newly created snapshot is delayed until the next replication cycle.
  • the source FS may delay replicating the new snapshot until the next replication cycle if no snapshot deletion request is received before or during the next replication cycle. For example, in FIG. 15, snapshot S22 is created during the source cycle 2 (1514), and no snapshot deletion request is received before or during the source cycle 3 (1516). Therefore, the source FS 1510 does not start replicating S22 until the source cycle 3 (1516) and then transfers S22 to the target FS 1530 for application during the target cycle 3 (1536). These timing rules are summarized in the sketch below.
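The following is a minimal sketch of the deletion and creation timing rules from FIGS. 15-17. The Snap record, return codes, and helper names are illustrative assumptions, not the FSS implementation.

```python
from dataclasses import dataclass

@dataclass
class Snap:
    name: str
    created_at: float
    state: str = "active"      # "active" -> "deleting" (pending delete) -> "deleted"


def handle_delete_request(snap: Snap, replication_in_progress: bool) -> int:
    """Source-side handling of a user-snapshot deletion request."""
    if replication_in_progress:
        snap.state = "deleting"    # blocked deletion: the snapshot still transfers to the target;
        return 202                 # other requests against it would see an HTTP 409 conflict
    snap.state = "deleted"         # non-blocked deletion: the source deletes immediately
    return 200


def end_of_source_cycle(snapshots, target_applied_ack: bool):
    """Finalize blocked deletions once the target confirms it has applied the snapshots.
    The target itself removes them only at the end of its *next* replication cycle."""
    if target_applied_ack:
        for snap in snapshots:
            if snap.state == "deleting":
                snap.state = "deleted"


def snapshots_for_cycle(snapshots, cycle_start: float):
    """A snapshot created after the cycle's deltas were computed (e.g., S22 in FIG. 15)
    waits for the next replication cycle."""
    return [s for s in snapshots if s.created_at <= cycle_start and s.state != "deleted"]
```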
  • the source FS may extract snapshot metadata and upload it to the Object Store at the beginning of a replication cycle, for example, involving the state machine's states (which may also be referred to herein as delta states) from Ready_to_Copy_Metadata to Snapshot_Metadata_Copied. Uploading metadata at the beginning of a replication cycle can help detect and resolve any problems for the replication early, before heavy data transfer starts.
  • the target FS populates snapshot metadata and performs snapshot deletion after the delta application has been completed to add metadata to the existing data.
  • the snapshot metadata transfer may include, but is not limited to, the provenance ID, snapshot type (e.g., system snapshot or user snapshot), and snapshot time.
  • snapshot records are also part of this snapshot metadata transfer.
  • FIG. 18 is about control plane communications, specifically related to snapshot metadata information, between a source region and a target region.
  • the source CP tracks the status of snapshot copying and deletion activities in the source region and receives validation from the target CP.
  • the source CP and target CP communicate through the SDBs in both regions.
  • the CP API 1810 may start recording snapshot status, including any deleted snapshots after a replication process begins.
  • the source snapshot generator 1812 (a separate thread in CP API service) scans replication policies and creates a system snapshot.
  • when the source CP API 1810 detects a snapshot deletion request during a replication cycle (which may also be referred to as a delta range from a data perspective), it records the pending delete into the source SDB 1814 (e.g., the schema table described above).
  • the source data plane (DP)/replicator 1816 may check the status of snapshot creation (e.g., whether a system snapshot has been created). If a new system snapshot has been created, at step S5, the delta monitor of the source CP API 1810 may update its delta state to Snapshot_Metadata_Copying (referring to step 1004 in FIG. 10). The delta monitor may be implemented as threads on the CP API, managing and transitioning delta states.
  • the source CP 1810 may prepare information about user snapshots in the current replication cycle by extracting metadata, such as the provenance ID, snapshot type, and snapshot time, plus snapshot records. The source CP may then store the extracted information in the source SDB 1814.
  • the target DP 1836 obtains the deltas from the Object Store.
  • the target replicator 1836 may apply the deltas to the target FS’s base snapshot in DP.
  • once the target DP 1836 completes the delta application, it notifies the target CP 1830 (e.g., the delta monitor) to update the delta state to the Replicated state (referring to step 1058 in FIG. 10).
  • the target FS may then proceed to prepare for metadata download and application.
  • the target CP 1830 may change the delta state to Snapshot_Metadata_Populating (referring to step 1060 in FIG. 10).
  • the target DP 1836 can download snapshot metadata from the Object Store for the current replication cycle (or between last snapshot number and current snapshot number in the schema) and populate metadata for all the snapshots within this range.
  • the target DP 1836 also downloads deleted snapshot records for the current replication cycle from the Object Store.
  • the target CP 1830 then updates the delta state to Snapshot_Metadata_Populated (referring to step 1062 in FIG. 10), and notifies the source CP 1810.
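The delta-state progression walked through in the steps above can be summarized in an abridged sketch. Only the states named in this section are listed; the full state machine of FIG. 10 contains additional states, and the metadata fields shown are the ones called out in the text (the timestamp value is illustrative).

```python
from enum import Enum

class DeltaState(Enum):
    READY_TO_COPY_METADATA = "Ready_to_Copy_Metadata"
    SNAPSHOT_METADATA_COPYING = "Snapshot_Metadata_Copying"
    SNAPSHOT_METADATA_COPIED = "Snapshot_Metadata_Copied"
    REPLICATED = "Replicated"
    SNAPSHOT_METADATA_POPULATING = "Snapshot_Metadata_Populating"
    SNAPSHOT_METADATA_POPULATED = "Snapshot_Metadata_Populated"

# Abridged transition table as driven by the source/target delta monitors.
NEXT_STATE = {
    DeltaState.READY_TO_COPY_METADATA: DeltaState.SNAPSHOT_METADATA_COPYING,
    DeltaState.SNAPSHOT_METADATA_COPYING: DeltaState.SNAPSHOT_METADATA_COPIED,
    DeltaState.SNAPSHOT_METADATA_COPIED: DeltaState.REPLICATED,            # after delta application
    DeltaState.REPLICATED: DeltaState.SNAPSHOT_METADATA_POPULATING,
    DeltaState.SNAPSHOT_METADATA_POPULATING: DeltaState.SNAPSHOT_METADATA_POPULATED,
}

def advance(state: DeltaState) -> DeltaState:
    """Move the delta to its next state; raises KeyError once fully populated."""
    return NEXT_STATE[state]

# Per-snapshot metadata uploaded to the Object Store at the start of a cycle.
example_snapshot_metadata = {
    "provenance_id": "S3",
    "snapshot_type": "user",                  # "system" or "user"
    "snapshot_time": "2023-06-14T18:00:00Z",  # illustrative timestamp
}
```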
  • IaaS customers may access resources and services through a wide area network (WAN), such as the Internet, and can use the cloud provider's services to install the remaining elements of an application stack.
  • the user can log in to the IaaS platform to create virtual machines (VMs), install operating systems (OSs) on each VM, deploy middleware such as databases, create storage buckets for workloads and backups, and even install enterprise software into that VM.
  • Customers can then use the provider's services to perform various functions, including balancing network traffic, troubleshooting application issues, monitoring performance, managing disaster recovery, etc.
  • IaaS provisioning may refer to acquiring computers or virtual hosts for use, and even installing needed libraries or services on them. In most cases, deployment does not include provisioning, and the provisioning may need to be performed first.
  • the service operators 1902 may be using one or more client computing devices, which may be portable handheld devices (e.g., an iPhone®, cellular telephone, an iPad®, computing tablet, a personal digital assistant (PDA)) or wearable devices (e.g., a Google Glass® head mounted display), running software such as Microsoft Windows Mobile®, and/or a variety of mobile operating systems such as iOS, Windows Phone, Android, BlackBerry 8, Palm OS, and the like, and being Internet, e-mail, short message service (SMS), BlackBerry®, or other communication protocol enabled.
  • the client computing devices can be general purpose personal computers including, by way of example, personal computers and/or laptop computers running various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems.
  • the VCN 1906 can include a local peering gateway (LPG) 1910 that can be communicatively coupled to a secure shell (SSH) VCN 1912 via an LPG 1910 contained in the SSH VCN 1912.
  • the SSH VCN 1912 can include an SSH subnet 1914, and the SSH VCN 1912 can be communicatively coupled to a control plane VCN 1916 via the LPG 1910 contained in the control plane VCN 1916.
  • the SSH VCN 1912 can be communicatively coupled to a data plane VCN 1918 via an LPG 1910.
  • the control plane VCN 1916 and the data plane VCN 1918 can be contained in a service tenancy 1919 that can be owned and/or operated by the IaaS provider.
  • the LB subnet(s) 1922 contained in the control plane DMZ tier 1920 can be communicatively coupled to the app subnet(s) 1926 contained in the control plane app tier 1924 and an Internet gateway 1934 that can be contained in the control plane VCN 1916, and the app subnet(s) 1926 can be communicatively coupled to the DB subnet(s) 1930 contained in the control plane data tier 1928 and a service gateway 1936 and a network address translation (NAT) gateway 1938.
  • the control plane VCN 1916 can include the service gateway 1936 and the NAT gateway 1938.
  • the control plane VCN 1916 can include a data plane mirror app tier 1940 that can include app subnet(s) 1926.
  • the data plane VCN 1918 can include the data plane app tier 1946, a data plane DMZ tier 1948, and a data plane data tier 1950.
  • the data plane DMZ tier 1948 can include LB subnet(s) 1922 that can be communicatively coupled to the app subnet(s) 1926 of the data plane app tier 1946 and the Internet gateway 1934 of the data plane VCN 1918.
  • the app subnet(s) 1926 can be communicatively coupled to the service gateway 1936 of the data plane VCN 1918 and the NAT gateway 1938 of the data plane VCN 1918.
  • the data plane data tier 1950 can also include the DB subnet(s) 1930 that can be communicatively coupled to the app subnet(s) 1926 of the data plane app tier 1946.
  • the Internet gateway 1934 of the control plane VCN 1916 and of the data plane VCN 1918 can be communicatively coupled to a metadata management service 1952 that can be communicatively coupled to public Internet 1954.
  • Public Internet 1954 can be communicatively coupled to the NAT gateway 1938 of the control plane VCN 1916 and of the data plane VCN 1918.
  • the service gateway 1936 of the control plane VCN 1916 and of the data plane VCN 1918 can be communicatively coupled to cloud services 1956.
  • the secure host tenancy 1904 can be directly connected to the service tenancy 1919, which may be otherwise isolated.
  • the secure host subnet 1908 can communicate with the SSH subnet 1914 through an LPG 1910 that may enable two-way communication over an otherwise isolated system. Connecting the secure host subnet 1908 to the SSH subnet 1914 may give the secure host subnet 1908 access to other entities within the service tenancy 1919.
  • the control plane VCN 1916 may allow users of the service tenancy 1919 to set up or otherwise provision desired resources. Desired resources provisioned in the control plane VCN 1916 may be deployed or otherwise used in the data plane VCN 1918.
  • the control plane VCN 1916 can be isolated from the data plane VCN 1918, and the data plane mirror app tier 1940 of the control plane VCN 1916 can communicate with the data plane app tier 1946 of the data plane VCN 1918 via VNICs 1942 that can be contained in the data plane mirror app tier 1940 and the data plane app tier 1946.
  • users of the system, or customers can make requests, for example create, read, update, or delete (CRUD) operations, through public Internet 1954 that can communicate the requests to the metadata management service 1952.
  • the metadata management service 1952 can communicate the request to the control plane VCN 1916 through the Internet gateway 1934.
  • the request can be received by the LB subnet(s) 1922 contained in the control plane DMZ tier 1920.
  • the LB subnet(s) 1922 may determine that the request is valid, and in response to this determination, the LB subnet(s) 1922 can transmit the request to app subnet(s) 1926 contained in the control plane app tier 1924. If the request is validated and requires a call to public Internet 1954, the call to public Internet 1954 may be transmitted to the NAT gateway 1938 that can make the call to public Internet 1954.
  • Metadata that may be desired to be stored by the request can be stored in the DB subnet(s) 1930.
  • the data plane mirror app tier 1940 can facilitate direct communication between the control plane VCN 1916 and the data plane VCN 1918. For example, changes, updates, or other suitable modifications to configuration may be desired to be applied to the resources contained in the data plane VCN 1918.
  • the control plane VCN 1916 can directly communicate with, and can thereby execute the changes, updates, or other suitable modifications to configuration to, resources contained in the data plane VCN 1918.
  • control plane VCN 1916 and the data plane VCN 1918 can be contained in the service tenancy 1919.
  • the user, or the customer, of the system may not own or operate either the control plane VCN 1916 or the data plane VCN 1918.
  • the IaaS provider may own or operate the control plane VCN 1916 and the data plane VCN 1918, both of which may be contained in the service tenancy 1919.
  • This embodiment can enable isolation of networks that may prevent users or customers from interacting with other users', or other customers', resources. Also, this embodiment may allow users or customers of the system to store databases privately without needing to rely on public Internet 1954, which may not have a desired level of threat prevention, for storage.
  • the LB subnet(s) 1922 contained in the control plane VCN 1916 can be configured to receive a signal from the service gateway 1936.
  • the control plane VCN 1916 and the data plane VCN 1918 may be configured to be called by a customer of the IaaS provider without calling public Internet 1954.
  • Customers of the IaaS provider may desire this embodiment since database(s) that the customers use may be controlled by the IaaS provider and may be stored on the service tenancy 1919, which may be isolated from public Internet 1954.
  • FIG. 20 is a block diagram 2000 illustrating another example pattern of an IaaS architecture, according to at least one embodiment.
  • Service operators 2002 (e.g., service operators 1902 of FIG. 19) can be communicatively coupled to a secure host tenancy 2004 (e.g., the secure host tenancy 1904 of FIG. 19) that can include a virtual cloud network (VCN) 2006 (e.g., the VCN 1906 of FIG. 19).
  • the VCN 2006 can include a local peering gateway (LPG) 2010 (e.g., the LPG 1910 of FIG. 19) that can be communicatively coupled to a secure shell (SSH) VCN 2012 (e.g., the SSH VCN 1912 of FIG. 19) via an LPG 2010 contained in the SSH VCN 2012.
  • the SSH VCN 2012 can include an SSH subnet 2014 (e.g., the SSH subnet 1914 of FIG. 19), and the SSH VCN 2012 can be communicatively coupled to a control plane VCN 2016 (e.g., the control plane VCN 1916 of FIG. 19) via an LPG 2010 contained in the control plane VCN 2016.
  • the control plane VCN 2016 can be contained in a service tenancy 2019 (e.g., the service tenancy 1919 of FIG. 19), and the data plane VCN 2018 (e.g., the data plane VCN 1918 of FIG. 19) can be contained in a customer tenancy 2021 that may be owned or operated by users, or customers, of the system.
  • the control plane VCN 2016 can include a control plane DMZ tier 2020 (e.g., the control plane DMZ tier 1920 of FIG. 19) that can include LB subnet(s) 2022 (e.g., LB subnet(s) 1922 of FIG. 19), a control plane app tier 2024 (e.g., the control plane app tier 1924 of FIG. 19) that can include app subnet(s) 2026 (e.g., app subnet(s) 1926 of FIG. 19), and a control plane data tier 2028 (e.g., the control plane data tier 1928 of FIG. 19) that can include DB subnet(s) 2030.
  • the LB subnet(s) 2022 contained in the control plane DMZ tier 2020 can be communicatively coupled to the app subnet(s) 2026 contained in the control plane app tier 2024 and an Internet gateway 2034 (e.g., the Internet gateway 1934 of FIG. 19) that can be contained in the control plane VCN 2016, and the app subnet(s) 2026 can be communicatively coupled to the DB subnet(s) 2030 contained in the control plane data tier 2028 and a service gateway 2036 (e.g., the service gateway 1936 of FIG. 19) and a network address translation (NAT) gateway 2038 (e.g., the NAT gateway 1938 of FIG. 19).
  • the control plane VCN 2016 can include the service gateway 2036 and the NAT gateway 2038.
  • the control plane VCN 2016 can include a data plane mirror app tier 2040 (e.g., the data plane mirror app tier 1940 of FIG. 19) that can include app subnet(s) 2026.
  • the app subnet(s) 2026 contained in the data plane mirror app tier 2040 can include a virtual network interface controller (VNIC) 2042 (e.g., the VNIC 1942 of FIG. 19) that can execute a compute instance 2044 (e.g., similar to the compute instance 1944 of FIG. 19).
  • the compute instance 2044 can facilitate communication between the app subnet(s) 2026 of the data plane mirror app tier 2040 and the app subnet(s) 2026 that can be contained in a data plane app tier 2046 (e.g., the data plane app tier 1946 of FIG. 19) via the VNIC 2042 contained in the data plane mirror app tier 2040 and the VNIC 2042 contained in the data plane app tier 2046.
  • the Internet gateway 2034 contained in the control plane VCN 2016 can be communicatively coupled to a metadata management service 2052 (e.g., the metadata management service 1952 of FIG. 19) that can be communicatively coupled to public Internet 2054 (e.g., public Internet 1954 of FIG. 19).
  • Public Internet 2054 can be communicatively coupled to the NAT gateway 2038 contained in the control plane VCN 2016.
  • the service gateway 2036 contained in the control plane VCN 2016 can be communicatively coupled to cloud services 2056 (e.g., cloud services 1956 of FIG. 19).
  • the data plane VCN 2018 can be contained in the customer tenancy 2021.
  • the IaaS provider may provide the control plane VCN 2016 for each customer, and the IaaS provider may, for each customer, set up a unique compute instance 2044 that is contained in the service tenancy 2019.
  • Each compute instance 2044 may allow communication between the control plane VCN 2016, contained in the service tenancy 2019, and the data plane VCN 2018 that is contained in the customer tenancy 2021.
  • the compute instance 2044 may allow resources, that are provisioned in the control plane VCN 2016 that is contained in the service tenancy 2019, to be deployed or otherwise used in the data plane VCN 2018 that is contained in the customer tenancy 2021.
  • the customer of the IaaS provider may have databases that live in the customer tenancy 2021.
  • the control plane VCN 2016 can include the data plane mirror app tier 2040 that can include app subnet(s) 2026.
  • the data plane mirror app tier 2040 can reside in the data plane VCN 2018, but the data plane mirror app tier 2040 may not live in the data plane VCN 2018. That is, the data plane mirror app tier 2040 may have access to the customer tenancy 2021, but the data plane mirror app tier 2040 may not exist in the data plane VCN 2018 or be owned or operated by the customer of the IaaS provider.
  • the data plane mirror app tier 2040 may be configured to make calls to the data plane VCN 2018 but may not be configured to make calls to any entity contained in the control plane VCN 2016.
  • the customer may desire to deploy or otherwise use resources in the data plane VCN 2018 that are provisioned in the control plane VCN 2016, and the data plane mirror app tier 2040 can facilitate the desired deployment, or other usage of resources, of the customer.
  • the customer of the IaaS provider can apply filters to the data plane VCN 2018.
  • the customer can determine what the data plane VCN 2018 can access, and the customer may restrict access to public Internet 2054 from the data plane VCN 2018.
  • the IaaS provider may not be able to apply filters or otherwise control access of the data plane VCN 2018 to any outside networks or databases. Applying filters and controls by the customer onto the data plane VCN 2018, contained in the customer tenancy 2021, can help isolate the data plane VCN 2018 from other customers and from public Internet 2054.
  • cloud services 2056 can be called by the service gateway 2036 to access services that may not exist on public Internet 2054, on the control plane VCN 2016, or on the data plane VCN 2018.
  • the connection between cloud services 2056 and the control plane VCN 2016 or the data plane VCN 2018 may not be live or continuous.
  • Cloud services 2056 may exist on a different network owned or operated by the IaaS provider.
  • Cloud services 2056 may be configured to receive calls from the service gateway 2036 and may be configured to not receive calls from public Internet 2054.
  • Some cloud services 2056 may be isolated from other cloud services 2056, and the control plane VCN 2016 may be isolated from cloud services 2056 that may not be in the same region as the control plane VCN 2016.
  • control plane VCN 2016 may be located in "Region 1," and cloud service "Deployment 19" may be located in Region 1 and in "Region 2." If a call to Deployment 19 is made by the service gateway 2036 contained in the control plane VCN 2016 located in Region 1, the call may be transmitted to Deployment 19 in Region 1.
  • the control plane VCN 2016, or Deployment 19 in Region 1, may not be communicatively coupled to, or otherwise in communication with, Deployment 19 in Region 2.
  • FIG. 21 is a block diagram 2100 illustrating another example pattern of an IaaS architecture, according to at least one embodiment.
  • Service operators 2102 (e.g., service operators 1902 of FIG. 19) can be communicatively coupled to a secure host tenancy 2104 (e.g., the secure host tenancy 1904 of FIG. 19) that can include a virtual cloud network (VCN) 2106 (e.g., the VCN 1906 of FIG. 19).
  • the VCN 2106 can include an LPG 2110 (e.g., the LPG 1910 of FIG. 19) that can be communicatively coupled to an SSH VCN 2112 (e.g., the SSH VCN 1912 of FIG. 19) via an LPG 2110 contained in the SSH VCN 2112.
  • the SSH VCN 2112 can include an SSH subnet 2114 (e.g., the SSH subnet 1914 of FIG. 19), and the SSH VCN 2112 can be communicatively coupled to a control plane VCN 2116 (e.g., the control plane VCN 1916 of FIG. 19) via an LPG 2110 contained in the control plane VCN 2116 and to a data plane VCN 2118 (e.g., the data plane 1918 of FIG. 19) via an LPG 2110 contained in the data plane VCN 2118.
  • the control plane VCN 2116 and the data plane VCN 2118 can be contained in a service tenancy 2119 (e.g., the service tenancy 1919 of FIG. 19).
  • the control plane VCN 2116 can include a control plane DMZ tier 2120 (e.g., the control plane DMZ tier 1920 of FIG. 19) that can include load balancer (LB) subnet(s) 2122 (e.g., LB subnet(s) 1922 of FIG. 19), a control plane app tier 2124 (e.g., the control plane app tier 1924 of FIG. 19) that can include app subnet(s) 2126 (e.g., similar to app subnet(s) 1926 of FIG. 19), and a control plane data tier 2128 (e.g., the control plane data tier 1928 of FIG. 19) that can include DB subnet(s) 2130.
  • the LB subnet(s) 2122 contained in the control plane DMZ tier 2120 can be communicatively coupled to the app subnet(s) 2126 contained in the control plane app tier 2124 and to an Internet gateway 2134 (e.g., the Internet gateway 1934 of FIG. 19) that can be contained in the control plane VCN 2116, and the app subnet(s) 2126 can be communicatively coupled to the DB subnet(s) 2130 contained in the control plane data tier 2128 and to a service gateway 2136 (e.g., the service gateway of FIG. 19) and a network address translation (NAT) gateway 2138 (e.g., the NAT gateway 1938 of FIG. 19).
  • the data plane DMZ tier 2148 can include LB subnet(s) 2122 that can be communicatively coupled to trusted app subnet(s) 2160 and untrusted app subnet(s) 2162 of the data plane app tier 2146 and the Internet gateway 2134 contained in the data plane VCN 2118.
  • the trusted app subnet(s) 2160 can be communicatively coupled to the service gateway 2136 contained in the data plane VCN 2118, the NAT gateway 2138 contained in the data plane VCN 2118, and DB subnet(s) 2130 contained in the data plane data tier 2150.
  • the untrusted app subnet(s) 2162 can be communicatively coupled to the service gateway 2136 contained in the data plane VCN 2118 and DB subnet(s) 2130 contained in the data plane data tier 2150.
  • the data plane data tier 2150 can include DB subnet(s) 2130 that can be communicatively coupled to the service gateway 2136 contained in the data plane VCN 2118.
  • the untrusted app subnet(s) 2162 can include one or more primary VNICs 2164(1)-(N) that can be communicatively coupled to tenant virtual machines (VMs) 2166(1)-(N). Each tenant VM 2166(1)-(N) can be communicatively coupled to a respective app subnet 2167(1)-(N) that can be contained in respective container egress VCNs 2168(1)-(N) that can be contained in respective customer tenancies 2170(1)-(N).
  • Respective secondary VNICs 2172(1)-(N) can facilitate communication between the untrusted app subnet(s) 2162 contained in the data plane VCN 2118 and the app subnet contained in the container egress VCNs 2168(1)-(N).
  • Each container egress VCNs 2168(1)-(N) can include a NAT gateway 2138 that can be communicatively coupled to public Internet 2154 (e.g., public Internet 1954 of FIG. 19).
  • the Internet gateway 2134 contained in the control plane VCN 2116 and contained in the data plane VCN 2118 can be communicatively coupled to a metadata management service 2152 (e.g., the metadata management system 1952 of FIG. 19) that can be communicatively coupled to public Internet 2154.
  • Public Internet 2154 can be communicatively coupled to the NAT gateway 2138 contained in the control plane VCN 2116 and contained in the data plane VCN 2118.
  • the service gateway 2136 contained in the control plane VCN 2116 and contained in the data plane VCN 2118 can be communicatively coupled to cloud services 2156.
  • the data plane VCN 2118 can be integrated with customer tenancies 2170.
  • This integration can be useful or desirable for customers of the IaaS provider in some cases, such as a case that may desire support when executing code.
  • the customer may provide code to run that may be destructive, may communicate with other customer resources, or may otherwise cause undesirable effects.
  • the IaaS provider may determine whether to run code given to the IaaS provider by the customer.
  • the customer of the IaaS provider may grant temporary network access to the IaaS provider and request a function to be attached to the data plane app tier 2146.
  • Code to run the function may be executed in the VMs 2166(1)-(N), and the code may not be configured to run anywhere else on the data plane VCN 2118.
  • Each VM 2166(1)-(N) may be connected to one customer tenancy 2170.
  • Respective containers 2171(1)-(N) contained in the VMs 2166(1)-(N) may be configured to run the code.
  • control plane VCN 2116 and the data plane VCN 2118 may not be directly communicatively coupled. In this embodiment, there may be no direct communication between the control plane VCN 2116 and the data plane VCN 2118. However, communication can occur indirectly through at least one method.
  • An LPG 2110 may be established by the IaaS provider that can facilitate communication between the control plane VCN 2116 and the data plane VCN 2118.
  • the control plane VCN 2116 or the data plane VCN 2118 can make a call to cloud services 2156 via the service gateway 2136.
  • a call to cloud services 2156 from the control plane VCN 2116 can include a request for a service that can communicate with the data plane VCN 2118.
  • the LB subnet(s) 2222 contained in the control plane DMZ tier 2220 can be communicatively coupled to the app subnet(s) 2226 contained in the control plane app tier 2224 and to an Internet gateway 2234 (e.g., the Internet gateway 1934 of FIG. 19) that can be contained in the control plane VCN 2216, and the app subnet(s) 2226 can be communicatively coupled to the DB subnet(s) 2230 contained in the control plane data tier 2228 and to a service gateway 2236 (e.g., the service gateway of FIG. 19) and a network address translation (NAT) gateway 2238 (e.g., the NAT gateway 1938 of FIG. 19).
  • the control plane VCN 2216 can include the service gateway 2236 and the NAT gateway 2238.
  • the trusted app subnet(s) 2260 can be communicatively coupled to the service gateway 2236 contained in the data plane VCN 2218, the NAT gateway 2238 contained in the data plane VCN 2218, and DB subnet(s) 2230 contained in the data plane data tier 2250.
  • the untrusted app subnet(s) 2262 can be communicatively coupled to the service gateway 2236 contained in the data plane VCN 2218 and DB subnet(s) 2230 contained in the data plane data tier 2250.
  • the data plane data tier 2250 can include DB subnet(s) 2230 that can be communicatively coupled to the service gateway 2236 contained in the data plane VCN 2218.
  • the untrusted app subnet(s) 2262 can include primary VNICs 2264(1)-(N) that can be communicatively coupled to tenant virtual machines (VMs) 2266(1)-(N) residing within the untrusted app subnet(s) 2262.
  • Each tenant VM 2266(1)-(N) can run code in a respective container 2267(1)-(N), and be communicatively coupled to an app subnet 2226 that can be contained in a data plane app tier 2246 that can be contained in a container egress VCN 2268.
  • the pattern illustrated by the architecture of block diagram 2200 of FIG. 22 may be considered an exception to the pattern illustrated by the architecture of block diagram 2100 of FIG. 21 and may be desirable for a customer of the IaaS provider if the IaaS provider cannot directly communicate with the customer (e.g., a disconnected region).
  • the respective containers 2267(1)-(N) that are contained in the VMs 2266(1)-(N) for each customer can be accessed in real-time by the customer.
  • the containers 2267(1)-(N) may be configured to make calls to respective secondary VNICs 2272(1)-(N) contained in app subnet(s) 2226 of the data plane app tier 2246 that can be contained in the container egress VCN 2268.
  • the secondary VNICs 2272(1)-(N) can transmit the calls to the NAT gateway 2238 that may transmit the calls to public Internet 2254.
  • the containers 2267(1)-(N) that can be accessed in real-time by the customer can be isolated from the control plane VCN 2216 and can be isolated from other entities contained in the data plane VCN 2218.
  • the containers 2267(1)-(N) may also be isolated from resources from other customers.
  • the customer can use the containers 2267(1)-(N) to call cloud services 2256.
  • the customer may run code in the containers 2267(1)-(N) that requests a service from cloud services 2256.
  • the containers 2267(1)-(N) can transmit this request to the secondary VNICs 2272(1)-(N) that can transmit the request to the NAT gateway that can transmit the request to public Internet 2254.
  • Public Internet 2254 can transmit the request to LB subnet(s) 2222 contained in the control plane VCN 2216 via the Internet gateway 2234.
  • the LB subnet(s) can transmit the request to app subnet(s) 2226 that can transmit the request to cloud services 2256 via the service gateway 2236.
  • IaaS architectures 1900, 2000, 2100, 2200 depicted in the figures may have other components than those depicted. Further, the embodiments shown in the figures are only some examples of a cloud infrastructure system that may incorporate an embodiment of the disclosure. In some other embodiments, the IaaS systems may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration or arrangement of components. [0272] In certain embodiments, the IaaS systems described herein may include a suite of applications, middleware, and database service offerings that are delivered to a customer in a self-service, subscription-based, elastically scalable, reliable, highly available, and secure manner. An example of such an IaaS system is the Oracle Cloud Infrastructure (OCI) provided by the present assignee.
  • FIG. 23 illustrates an example computer system 2300, in which various embodiments may be implemented.
  • the system 2300 may be used to implement any of the computer systems described above.
  • computer system 2300 includes a processing unit 2304 that communicates with a number of peripheral subsystems via a bus subsystem 2302. These peripheral subsystems may include a processing acceleration unit 2306, an I/O subsystem 2308, a storage subsystem 2318 and a communications subsystem 2324.
  • Storage subsystem 2318 includes tangible computer-readable storage media 2322 and a system memory 2310.
  • Bus subsystem 2302 provides a mechanism for letting the various components and subsystems of computer system 2300 communicate with each other as intended. Although bus subsystem 2302 is shown schematically as a single bus, alternative embodiments of the bus subsystem may utilize multiple buses. Bus subsystem 2302 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
  • such architectures may include an Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, which can be implemented as a Mezzanine bus manufactured to the IEEE P1386.1 standard.
  • Processing unit 2304, which can be implemented as one or more integrated circuits (e.g., a conventional microprocessor or microcontroller), controls the operation of computer system 2300.
  • One or more processors may be included in processing unit 2304. These processors may include single core or multicore processors.
  • processing unit 2304 may be implemented as one or more independent processing units 2332 and/or 2334 with single or multicore processors included in each processing unit.
  • processing unit 2304 may also be implemented as a quad-core processing unit formed by integrating two dual-core processors into a single chip.
  • processing unit 2304 can execute a variety of programs in response to program code and can maintain multiple concurrently executing programs or processes.
  • Computer system 2300 may additionally include a processing acceleration unit 2306, which can include a digital signal processor (DSP), a special-purpose processor, and/or the like.
  • I/O subsystem 2308 may include user interface input devices and user interface output devices.
  • User interface input devices may include a keyboard, pointing devices such as a mouse or trackball, a touchpad or touch screen incorporated into a display, a scroll wheel, a click wheel, a dial, a button, a switch, a keypad, audio input devices with voice command recognition systems, microphones, and other types of input devices.
  • User interface input devices may include, for example, motion sensing and/or gesture recognition devices such as the Microsoft Kinect® motion sensor that enables users to control and interact with an input device, such as the Microsoft Xbox® 360 game controller, through a natural user interface using gestures and spoken commands.
  • User interface input devices may also include eye gesture recognition devices such as the Google Glass® blink detector that detects eye activity (e.g., ‘blinking’ while taking pictures and/or making a menu selection) from users and transforms the eye gestures as input into an input device (e.g., Google Glass®). Additionally, user interface input devices may include voice recognition sensing devices that enable users to interact with voice recognition systems (e.g., Siri® navigator), through voice commands.
  • User interface input devices may also include, without limitation, three dimensional (3D) mice, joysticks or pointing sticks, gamepads and graphic tablets, and audio/visual devices such as speakers, digital cameras, digital camcorders, portable media players, webcams, image scanners, fingerprint scanners, barcode reader 3D scanners, 3D printers, laser rangefinders, and eye gaze tracking devices.
  • user interface input devices may include, for example, medical imaging input devices such as computed tomography, magnetic resonance imaging, positron emission tomography, and medical ultrasonography devices.
  • User interface input devices may also include, for example, audio input devices such as MIDI keyboards, digital musical instruments and the like.
  • User interface output devices may include a display subsystem, indicator lights, or non-visual displays such as audio output devices, etc.
  • the display subsystem may be a cathode ray tube (CRT), a flat-panel device, such as one using a liquid crystal display (LCD) or plasma display, a projection device, a touch screen, and the like.
  • the term "output device" is intended to include all possible types of devices and mechanisms for outputting information from computer system 2300 to a user or other computer.
  • user interface output devices may include, without limitation, a variety of display devices that visually convey text, graphics and audio/video information such as monitors, printers, speakers, headphones, automotive navigation systems, plotters, voice output devices, and modems.
  • Computer system 2300 may comprise a storage subsystem 2318 that provides a tangible non-transitory computer-readable storage medium for storing software and data constructs that provide the functionality of the embodiments described in this disclosure.
  • the software can include programs, code modules, instructions, scripts, etc., that when executed by one or more cores or processors of processing unit 2304 provide the functionality described above.
  • Storage subsystem 2318 may also provide a repository for storing data used in accordance with the present disclosure.
  • storage subsystem 2318 can include various components including a system memory 2310, computer-readable storage media 2322, and a computer readable storage media reader 2320.
  • System memory 2310 may store program instructions that are loadable and executable by processing unit 2304.
  • System memory 2310 may also store data that is used during the execution of the instructions and/or data that is generated during the execution of the program instructions.
  • Various different kinds of programs may be loaded into system memory 2310 including but not limited to client applications, Web browsers, mid-tier applications, relational database management systems (RDBMS), virtual machines, containers, etc.
  • System memory 2310 may also store an operating system 2316.
  • operating system 2316 may include various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems, a variety of commercially-available UNIX® or UNIX-like operating systems (including without limitation the variety of GNU/Linux operating systems, the Google Chrome® OS, and the like) and/or mobile operating systems such as iOS, Windows® Phone, Android® OS, BlackBerry® OS, and Palm® OS operating systems.
  • the virtual machines along with their guest operating systems (GOSs) may be loaded into system memory 2310 and executed by one or more processors or cores of processing unit 2304.
  • System memory 2310 can come in different configurations depending upon the type of computer system 2300.
  • system memory 2310 may be volatile memory (such as random access memory (RAM)) and/or non-volatile memory (such as read-only memory (ROM), flash memory, etc.)
  • system memory 2310 may include a basic input/output system (BIOS) containing basic routines that help to transfer information between elements within computer system 2300, such as during start-up.
  • Computer-readable storage media 2322 can include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information.
  • This can include tangible computer-readable storage media such as RAM, ROM, electronically erasable programmable ROM (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disk (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible computer readable media.
  • computer-readable storage media 2322 may include a hard disk drive that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk, and an optical disk drive that reads from or writes to a removable, nonvolatile optical disk such as a CD ROM, DVD, and Blu-Ray® disk, or other optical media.
  • Computer-readable storage media 2322 may include, but is not limited to, Zip® drives, flash memory cards, universal serial bus (USB) flash drives, secure digital (SD) cards, DVD disks, digital video tape, and the like.
  • Computer-readable storage media 2322 may also include solid-state drives (SSD) based on non-volatile memory such as flash-memory based SSDs, enterprise flash drives, solid state ROM, and the like, SSDs based on volatile memory such as solid state RAM, dynamic RAM, static RAM, DRAM-based SSDs, magnetoresistive RAM (MRAM) SSDs, and hybrid SSDs that use a combination of DRAM and flash memory based SSDs.
  • the disk drives and their associated computer-readable media may provide non-volatile storage of computer-readable instructions, data structures, program modules, and other data for computer system 2300.
  • Machine-readable instructions executable by one or more processors or cores of processing unit 2304 may be stored on a non-transitory computer-readable storage medium.
  • a non-transitory computer-readable storage medium can include physically tangible memory or storage devices that include volatile memory storage devices and/or non-volatile storage devices. Examples of non-transitory computer-readable storage medium include magnetic storage media (e.g., disks or tapes), optical storage media (e.g., DVDs, CDs), various types of RAM, ROM, or flash memory, hard drives, floppy drives, detachable memory drives (e.g., USB drives), or other types of storage devices.
  • Communications subsystem 2324 provides an interface to other computer systems and networks. Communications subsystem 2324 serves as an interface for receiving data from and transmitting data to other systems from computer system 2300. For example, communications subsystem 2324 may enable computer system 2300 to connect to one or more devices via the Internet.
  • communications subsystem 2324 can include radio frequency (RF) transceiver components for accessing wireless voice and/or data networks (e.g., using cellular telephone technology, advanced data network technology such as 3G, 4G or EDGE (enhanced data rates for global evolution), WiFi (IEEE 802.11 family standards), or other mobile communication technologies, or any combination thereof), global positioning system (GPS) receiver components, and/or other components.
  • communications subsystem 2324 can provide wired network connectivity (e.g., Ethernet) in addition to or instead of a wireless interface.
  • communications subsystem 2324 may also receive input communication in the form of structured and/or unstructured data feeds 2326, event streams 2328, event updates 2330, and the like on behalf of one or more users who may use computer system 2300.
  • communications subsystem 2324 may be configured to receive data feeds 2326 in real-time from users of social networks and/or other communication services such as Twitter® feeds.
  • Computer system 2300 can be one of various types, including a handheld portable device (e.g., an iPhone® cellular phone, an iPad® computing tablet, a PDA), a wearable device (e.g., a Google Glass® head mounted display), a PC, a workstation, a mainframe, a kiosk, a server rack, or any other data processing system.
  • Although embodiments have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are also within the scope of the present disclosure. Embodiments may be implemented only in hardware, or only in software, or using combinations thereof.
  • the various processes described herein can be implemented on the same processor or different processors in any combination. Accordingly, where components or services are described as being configured to perform certain operations, such configuration can be accomplished, e.g., by designing electronic circuits to perform the operation, by programming programmable electronic circuits (such as microprocessors) to perform the operation, or any combination thereof. Processes can communicate using a variety of techniques including but not limited to conventional techniques for inter-process communication, and different pairs of processes may use different techniques, or the same pair of processes may use different techniques at different times.
  • Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is intended to be understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
  • Clause 5 The method of any of clauses 1 to 4, wherein the second operation comprises deleting the snapshot by the source file system without transferring the snapshot to the target file system.
• a non-transitory computer-readable medium storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: generating, by a computing system, a snapshot of a source file system in a source region; performing, by the computing system, a first cross-region replication and a second cross-region replication between the source file system in the source region and a target file system in a target region, the source region and the target region being in different regions; receiving, by the computing system, a snapshot deletion request in the source file system to delete the snapshot; determining, by the computing system, a timing of the snapshot deletion request in the source file system; performing, by the computing system, a first operation in accordance with the timing of the snapshot deletion request being determined to be during the first cross-region replication; and performing, by the computing system, a second operation in accordance with the timing of the snapshot deletion request being determined to be between the first and the second cross-region replications.
• Clause 13 The non-transitory computer-readable medium of any of clauses 8 to 12, the operations further comprising determining a second timing of generating the snapshot of the source file system.
  • Clause 16 The system of clause 15, wherein the first operation comprises withholding the snapshot deletion request by the source file system until an end of the first cross-region replication.
• Clause 18 The system of any of clauses 1 to 17, wherein the second operation comprises deleting the snapshot by the source file system without transferring the snapshot to the target file system.
• Clause 19 The system of any of clauses 1 to 18, wherein the system is further caused to determine a second timing of generating the snapshot of the source file system.
• Clause 20 The system of clause 19, wherein the system is further caused to transfer the generated snapshot to the target file system during the second cross-region replication in accordance with the second timing of the snapshot generation being determined to be during the first cross-region replication and in accordance with the timing of the snapshot deletion request being determined to be after the second cross-region replication.

Abstract

Techniques are described for efficient replication and maintaining snapshot data consistency during file storage replication between file systems in different cloud infrastructure regions. In certain embodiments, provenance IDs are used to efficiently identify a starting point (e.g., a base snapshot) for a cross-region replication process, conserving cloud resources and reducing network and IO traffic. In certain embodiments, snapshot creation and deletion requests that occur during cross-region replications may be temporarily withheld until appropriate times to execute such requests safely, depending on the timing relationship between such requests and cross-region replication cycles.

Description

TECHNIQUES FOR EFFICIENT REPLICATION AND RECOVERY
CROSS-REFERENCES TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. Non-Provisional Application No. 18/169,121, filed on February 14, 2023, entitled “TECHNIQUES FOR EFFICIENT REPLICATION AND RECOVERY,” Attorney Docket No. 088325-1342650 (347410US), and U.S. Non-Provisional Application No. 18/169,124, filed on February 14, 2023, entitled “TECHNIQUES FOR MAINTAINING SNAPSHOT DATA CONSISTENCY DURING FILE SYSTEM CROSS-REGION REPLICATION,” Attorney Docket No. 088325-1364772 (347480US), all of which claim the benefit and priority under 35 U.S.C. 119(e) of U.S.
Provisional Application No. 63/352,992, filed on June 16, 2022, U.S. Provisional Application No. 63/357,526, filed on June 30, 2022, U.S. Provisional Application No. 63/412,243, filed on September 30, 2022, and U.S. Provisional Application No. 63/378,486, filed on October 5, 2022, the disclosures of which are incorporated herein by reference in their entirety for all purposes.
FIELD [0002] The present disclosure generally relates to file systems. More specifically, but not by way of limitation, techniques are described for efficient replication and maintaining snapshot data consistency during file storage replications between file systems in different cloud infrastructure regions (e.g., data centers in particular geographic regions).
BACKGROUND [0003] Enterprise businesses contain critical data. File system replication enhances the availability of critical data and provides fault tolerance. However, there is a need to improve the efficiency of file system replication and snapshot data consistency during the replication.
BRIEF SUMMARY [0004] The present disclosure generally relates to file systems. More specifically, but not by way of limitation, techniques are described for efficient replication and maintaining snapshot data consistency during file storage replication between file systems in different cloud infrastructure regions (e.g., data centers in particular geographic regions).
[0005] In certain embodiments, techniques are provided including a method that comprises generating, by the computing system, a first snapshot and a second snapshot in a source file system in a source region; assigning, by the computing system, a first provenance identification to the first snapshot and a second provenance identification to the second snapshot in the source file system, the first provenance identification being unique among all snapshots in all regions and the second provenance identification being unique among all snapshots in all regions, receiving, by a computing system, a request to perform a replication between the source file system in the source region and a target file system in a target region, the source region and the target region being in different regions, in response to the request, comparing, by the computing system, the first provenance identification in the source file system to provenance identification of existing snapshots in the target region; identifying, by a computing system, a matched snapshot with the first provenance identification in the target region to use as a base snapshot for the replication based at least in part on the comparison, and performing, by the computing system, the replication using deltas between the second snapshot and the base snapshot in the source file system,
[0006] In yet another embodiment, the method further comprises selecting the matched snapshot as the base snapshot at least in response to the matched snapshot with the first provenance identification in the target region being in the target file system.
[0007] In yet another embodiment, the target region comprises a non-target file system having a snapshot associated with the first provenance identification.
[0008] In yet another embodiment, the method further comprises performing an in-region copying of the matched snapshot with the first provenance identification from the non-target file system to the target file system at least in response to the matched snapshot with the first provenance identification in the target region not being in the target file system; and selecting the in-region copy of the matched snapshot in the target file system as the base snapshot.
[0009] In yet another embodiment, the in-region copy of the matched snapshot in the target file system has the same first provenance identification but different resource identification from the matched snapshot in the non-target file system.
[0010] In yet another embodiment, the method further comprises selecting the first snapshot with the first provenance identification in the source file system as the base snapshot at least in response to no matched snapshot with the first provenance identification being found in the target region.
[0011] In yet another embodiment, the method further comprises performing a cross-region copying of the first snapshot with the first provenance identification from the source file system to the target file system before generating the deltas between the second snapshot and the base snapshot in the source file system.
[0012] In various embodiments, a system is provided that includes one or more data processors and a non-transitory computer readable medium containing instructions which, when executed on the one or more data processors, cause the one or more data processors to perform part or all of one or more methods disclosed herein.
[0013] In various embodiments, a non-transitory computer-readable medium is provided, storing computer-executable instructions which, when executed by one or more processors, cause the one or more processors of a computer system to perform one or more methods disclosed herein.
[0014] In various embodiments, a computer-program product is provided, comprising computer program/instructions which, when executed by a processor, cause the processor to perform any of the methods disclosed herein.
[0015] The techniques described above and below may be implemented in a number of ways and in a number of contexts. Several example implementations and contexts are provided with reference to the following figures, as described below in more detail. However, the following implementations and contexts are but a few of many.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] Features, embodiments, and advantages of the present disclosure are better understood when the following Detailed Description is read with reference to the accompanying drawings.
[0017] FIG. 1 depicts an example concept of recovery time objective (RTO) and recovery point objective (RPO), according to certain embodiments.
[0018] FIG. 2 is a simplified block diagram illustrating an architecture for cross-region remote replication, according to certain embodiments.
[0019] FIG. 3 is a simplified schematic illustration of components involved in cross-region remote replication, according to certain embodiments.
[0020] FIG. 4 is a simplified flow diagram illustrating cross-region remote replication, according to certain embodiments.
[0021] FIG. 5 is a simplified diagram illustrating the high-level concept of B-tree walk, according to certain embodiments.
[0022] FIG. 6A is a diagram illustrating pipeline stages of cross-region replication, according to certain embodiments.
[0023] FIG. 6B is a diagram illustrating pipeline stages of cross-region replication, according to certain embodiments.
[0024] FIG. 7 is a diagram illustrating a layered structure in file storage service (FSS) data plane, according to certain embodiments.
[0025] FIG. 8 depicts a simplified example binary large object (BLOB) format, according to certain embodiments.
[0026] FIG. 9 depicts an example replication bucket format, according to certain embodiments.
[0027] FIG. 10 is a flow chart illustrating state machines for concurrent source upload and target download, according to certain embodiments.
[0028] FIG. 11 is an example flow diagram illustrating the interaction between the data plane and control plane in a source region, according to certain embodiments.
[0029] FIG. 12 is a simplified diagram illustrating fallback mode, according to certain embodiments.
[0030] FIG. 13 is a diagram illustrating an example use of provenance ID, according to certain embodiments.
[0031] FIG. 14 is a flow chart illustrating the process of using provenance ID to identify a base snapshot for cross-region replication, according to certain embodiments.
[0032] FIG. 15 is a diagram illustrating delayed snapshot deletion and replication for maintaining consistency between a source FS and a target FS, according to certain embodiments.
[0033] FIG. 16 is a flow chart illustrating the process of delayed snapshot deletion and replication after detecting a snapshot deletion request, according to certain embodiments.
[0034] FIG. 17 is a flow chart illustrating the process of delayed snapshot deletion and replication after detecting a snapshot creation event, according to certain embodiments.
[0035] FIG. 18 is a flow diagram illustrating a control plane workflow for a source region and a target region, according to certain embodiments.
[0036] FIG. 19 is a block diagram illustrating one pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment.
[0037] FIG. 20 is a block diagram illustrating another pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment.
[0038] FIG. 21 is a block diagram illustrating another pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment.
[0039] FIG. 22 is a block diagram illustrating another pattern for implementing a cloud infrastructure as a service system, according to at least one embodiment.
[0040] FIG. 23 is a block diagram illustrating an example computer system, according to at least one embodiment.
DETAILED DESCRIPTION
[0041] In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of certain embodiments. However, it will be apparent that various embodiments may be practiced without these specific details. The figures and description are not intended to be restrictive.
[0042] Techniques are disclosed herein for file system services (FSS) that utilize a snapshot and data model to create, process, and replicate snapshots and their associated data to ensure efficient replication, recovery, and consistency between a source file system (FS) and a target FS. Efficient replication and recovery relies on provenance IDs to identify a starting point (e.g., a base snapshot) for a cross-region (or x-region) replication process.
[0043] Provenance ID is a special identification that uniquely identifies a snapshot among regions, whether it is a system snapshot or a user snapshot. Before x-region replication starts, the source region and the target region may compare provenance IDs of the existing snapshots in their respective regions. Once two snapshots with the same provenance ID are found in both the source file system in the source region and the target file system in the target region, these two snapshots may be used as base snapshots without copying a full base snapshot from the source file system to the target file system. Only the deltas between a later snapshot and the base snapshot in the source file system need to be transferred over to the target file system during the x-region replication. Thus, the provenance ID techniques conserve valuable cloud resources while reducing network and IO traffic for performing the x-region replication.
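As a minimal, illustrative sketch (in Python) of the provenance ID matching described above, the snippet below selects a base snapshot by comparing provenance IDs; the data structures and function names are assumptions for explanation only and do not represent the actual FSS implementation.

    from dataclasses import dataclass

    @dataclass
    class Snapshot:
        resource_id: str      # region-local identifier (differs across regions)
        provenance_id: str    # globally unique across all regions

    def find_base_snapshot(source_snapshots, target_region_snapshots):
        # Return a source snapshot whose provenance ID also exists in the
        # target region; None means no common base exists and a full base
        # copy would be required.
        target_ids = {snap.provenance_id for snap in target_region_snapshots}
        for snap in source_snapshots:          # e.g., iterate newest to oldest
            if snap.provenance_id in target_ids:
                return snap
        return None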
[0044] In some embodiments, if the snapshot with the matched provenance ID is in a non-target file system in the target region, the non-target file system may perform an in-region cloning of the snapshot with the matched provenance ID to the target FS to create the base snapshot. Thereafter, the x-region replication can be performed between the source file system and the target file system. The in-region cloning conserves cloud resources as well because an in-region cloning does not involve extra encryption/decryption, data transfer through object storage, etc.
[0045] Provenance ID may also help efficient recovery during a replication failure by quickly identifying a common starting point between a source FS and a target FS in different regions without the need of a full base copy to resume the failed replication. Finally, the control planes of the source FS and the target FS exchange snapshot metadata information for the snapshot and data model during the replication process, which helps achieve the goal of improving the efficiency of file system replication.
[0046] The snapshot data consistency techniques disclosed herein can help safeguard data integrity when snapshot creation and deletion requests occur during cross-region replications by temporarily withholding certain requests until appropriate times to execute such requests safely. The control planes of the source FS and the target FS exchange snapshot metadata information for the snapshot and data model during the replication process, which helps achieve the goal of maintaining snapshot data consistency during a replication.
[0047] In some embodiments, system snapshots are created and deleted periodically by FSS, while user snapshots may be created and deleted by users at any time according to the scheduled snapshot policy. Depending on the request timings of creation and deletion of a user snapshot and how these timings coincide with cross-region replication cycles, several possibilities may occur and may potentially lead to snapshot consistency issues between a source FS and a target FS. Snapshot deletion and replication may be delayed to ensure snapshot consistency.
[0048] In one embodiment, if a request to delete a user snapshot occurs during a replication cycle (e.g., replication cycle N), FSS may withhold the deletion until the end of the replication cycle (e.g., replication cycle N) in the source FS and the end of the next replication cycle (e.g., replication cycle N+1) in the target FS. In another embodiment, if a user snapshot is both created and then requested to be deleted between two replication cycles (e.g., replication cycles N and N+1), the snapshot may not be replicated at all. Yet, in another embodiment, if a user snapshot is created during a replication cycle (e.g., replication cycle N) but not requested to be deleted until more than a replication cycle later (e.g., after replication cycle N+1), the replication of the user snapshot may be delayed for a cycle (i.e., occur in replication cycle N+1).
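The timing-dependent handling of user snapshot deletions described in the preceding paragraph can be summarized with the following minimal sketch; the labels and function name are illustrative assumptions, not the actual FSS logic.

    def plan_user_snapshot_handling(created: str, deleted: str) -> str:
        # "created"/"deleted" describe when the events fall relative to
        # replication cycles N and N+1; the string labels are illustrative.
        if deleted == "during_cycle_N":
            return ("withhold deletion until cycle N ends on the source "
                    "and cycle N+1 ends on the target")
        if created == "between_N_and_N+1" and deleted == "between_N_and_N+1":
            return "do not replicate the snapshot at all"
        if created == "during_cycle_N" and deleted == "after_cycle_N+1":
            return "delay replication of the snapshot to cycle N+1"
        return "default handling"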
Explanation of Terms in Certain Embodiments
[0049] “Recovery time objective” (RTO), in certain embodiments, refers to the time duration users require for their replica to be available in a secondary (or target) region after a failure occurs in a primary (or source) region's availability domain (AD), whether the failure is planned or unplanned.
[0050] “Recovery point objective” (RPO), in certain embodiments, refers to a maximum acceptable tolerance in terms of time for data loss between the failure of a primary region (typically due to unplanned failure) and the availability of a secondary region.
[0051] A “replicator,” in certain embodiments, may refer to a component (e.g., a virtual machine (VM)) in a file system's data plane for either uploading deltas to a remote Object Store (i.e., an object storage service) if the component is located in a source region or downloading the deltas from the Object Storage for delta application if the component is located in a target region. Replicators may be formed as a fleet (i.e., multiple VMs or replicator threads) called replicator fleet to perform cross-region (or x-region) replication process (e.g., uploading deltas to target region) in parallel.
[0052] A “delta generator” (DG), in certain embodiments, may refer to a component in a file system's data plane for either extracting the deltas (i.e., the changes) between the key-values of two snapshots if the component is located in a source region or applying the deltas to the latest snapshot in a B-tree of the file system if the component is located in a target region. The delta generator in the source region may use several threads (called delta generator threads or range threads for multiple partitioned B-tree key ranges) to perform the extraction of deltas (or B-tree walk) in parallel. The delta generator in the target region may use several threads to apply the downloaded deltas to its latest snapshot in parallel.
[0053] A “shared database” (SDB), for the purpose of the present disclosure and in certain embodiments, may refer to a key-value store through which components in both the control plane and data plane (e.g., replicator fleet) of a file system can read and write to communicate with each other. In certain embodiments, the SDB may be part of a B-tree.
[0054] A “file system communicator” (FSC), in certain embodiments, may refer to a file manager layer running on the storage nodes in a file system's data plane. The service helps with file create, delete, read and write requests, and works with a NFS server (e.g., Orca) to service IOs to clients. Replicator fleet may communicate with many storage nodes thereby distributing the work of reading/writing the file system data among the storage nodes.
[0055] A “blob,” in certain embodiments, may refer to a data type for storing information (e.g., a formatted binary file) in a database. Blobs are generated during replication by a source region and uploaded to an Object Store (i.e., an object storage) in a target region. A blob may include binary tree (B-tree) keys and values and file data. Blobs in the Object Store are called objects. B-tree key-value pairs and their associated data are packed together in blobs to be uploaded to the Object Store in a target region.
[0056] A “manifest,” in certain embodiments, may refer to information communicated by a file system in a source region (referred to herein as source file system) to a file system in a target region (referred to herein as target file system) for facilitating a cross-region replication process. There are two types of manifest files: master manifest and checkpoint manifest. A range manifest file (or master manifest file) is created by a source file system at the beginning of a replication process, describing information (e.g., B-tree key ranges) desired by the target file system. A checkpoint manifest file is created after a checkpoint in a source file system, informing a target file system of the number of blobs included in a checkpoint and uploaded to the Object Store, such that the target file system can download the number of blobs accordingly.
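A minimal sketch of plausible shapes for the two manifest types is shown below; the field names are assumptions made for illustration and are not taken from the actual manifest format.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class MasterManifest:
        # Created by the source file system at the start of a replication
        # cycle; describes information desired by the target file system.
        key_ranges: List[Tuple[str, str]]   # partitioned B-tree key ranges

    @dataclass
    class CheckpointManifest:
        # Created after each source-side checkpoint; tells the target how
        # many blobs were uploaded so it can download exactly that many.
        checkpoint_number: int
        blob_count: int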
[0057] “Deltas,” in certain embodiments, may refer to the differences identified between two given snapshots after replicators recursively visit every node of a B-tree (also referred to herein as walking a B-tree). A delta generator identifies B-tree key-value pairs for the differences and traverses the B-tree nodes to obtain file data associated with the B-tree keys. A delta between two snapshots may contain multiple blobs. The term “deltas” may include blobs and manifests when used in the context of uploading information to an Object Store by a source file system and downloading from an Object Store by a target file system.
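The following is a minimal sketch of delta computation that models each snapshot's B-tree keys and values as a simple dictionary; this simplification is an assumption made for clarity and does not reflect the actual B-tree walk implementation.

    def compute_deltas(base_snapshot: dict, later_snapshot: dict) -> dict:
        # Keys inserted or modified since the base snapshot, plus tombstones
        # (None) for keys that were deleted from it.
        deltas = {}
        for key, value in later_snapshot.items():
            if base_snapshot.get(key) != value:
                deltas[key] = value        # new or changed key-value pair
        for key in base_snapshot:
            if key not in later_snapshot:
                deltas[key] = None         # deleted key (tombstone)
        return deltas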
[0058] An “object,” in certain embodiments, may refer to a partial collection of information representing the entire deltas during a cross-region replication cycle and is stored in an Object Store. An object may be a few MBs in size stored in a specific location in a bucket of the Object Store. An object may contain many deltas (i.e., blobs and manifests). Blobs uploaded to and stored in the Object Store are called objects.
[0059] A “bucket,” in certain embodiments, may refer to a container storing objects in a compartment within an Object Storage namespace (tenancy). In the present disclosure, buckets are used by source replicators to store secured deltas using server-side encryption (SSE) and also by target replicators to download for applying changes to snapshots.
[0060] “Delta application,” in certain embodiments, may refer to the process of applying the deltas downloaded by a target file system to its latest snapshot to create a new snapshot. This may include analyzing manifest files, applying snapshot metadata, inserting the B-tree keys and values into its B-tree, and storing data associated with the B-tree keys (i.e., file data or data portion of blobs) to its local storage. Snapshot metadata is created and applied at the beginning of a replication cycle.
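Complementing the delta-computation sketch above, the following minimal sketch shows how downloaded deltas could be applied to the target's latest snapshot to produce a new snapshot, again using a dictionary model as an assumption rather than the actual B-tree insertion path.

    def apply_deltas(latest_target_snapshot: dict, deltas: dict) -> dict:
        # Produce the new snapshot by inserting/updating changed keys and
        # removing keys marked with a tombstone (None).
        new_snapshot = dict(latest_target_snapshot)
        for key, value in deltas.items():
            if value is None:
                new_snapshot.pop(key, None)
            else:
                new_snapshot[key] = value
        return new_snapshot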
[0061] A "region," in certain embodiments, may refer to a logical abstraction corresponding to a geographic area. Each region can include one or more connected data centers. Regions are independent of other regions and can be separated by vast distances.
End-to-end Cross-Region Replication Architecture
[0062] End-to-end cross-region replication architecture provides novel techniques for end-to-end file storage replication and security between file systems in different cloud infrastructure regions. In certain embodiments, a file storage service generates deltas between snapshots in a source file system, and transfers the deltas and associated data through a high-throughput object storage to recreate a new snapshot in a target file system located in a different region during disaster recovery. The file storage service utilizes novel techniques to achieve scalable, reliable, and restartable end-to-end replication. Novel techniques are also described to ensure a secure transfer of information and consistency during the end-to-end replication.
[0063] In the context of the cloud, a realm refers to a logical collection of one or more regions. Realms are typically isolated from each other and do not share data. Within a region, the data centers in the region may be organized into one or more availability domains (ADs). Availability domains are isolated from each other, fault-tolerant, and very unlikely to fail simultaneously. ADs are configured such that a failure at one AD within a region is unlikely to impact the availability of the other ADs within the same region.
[0064] Current practices for disaster recovery can include taking regular snapshots and resyncing them to another filesystem in a different Availability Domain (AD) or region. Although resync is manageable and maintained by customers, it lacks a user interface for viewing progress, is a slow and serialized process, and is not easy to manage as data grow over time.
[0065] Accordingly, different approaches are needed to address these challenges and others. The cloud service provider (e.g., Oracle Cloud Infrastructure (OCI)) file storage replication disclosed in the present disclosure is based on incremental snapshots to provide consistent point-in-time view of an entire file system by propagating deltas of changing data from a primary AD in a region to a secondary AD, either in the same or different region. As used herein, a primary site (or source side) may refer to a location where a file system is located (e.g., AD, or region) and initiates a replication process for disaster recovery. A secondary site (or target side) may refer to a location (e.g., AD or region) where a file system receives information from the file system in the primary site during the replication process to become a new operational file system after the disaster recovery. The file system located in the primary site is referred to as the source file system, and the file system located in the secondary site is referred to as the target file system. Thus, the primary site, source side, source region, primary file system or source file system (referring to one of the file systems on the source side) may be used interchangeably. Similarly, the secondary site, target side, target region, secondary file system, or target file system (referring to one of the file systems on the target side) may be used interchangeably.
[0066] The File Storage Service (FSS) of the present disclosure supports full disaster recovery for failover or fallback with minimal administrative work. Failover is a sequence of actions to make a secondary/target site become primary/source (i.e., start serving workloads) and may include planned and/or unplanned failover. A planned failover (may also be referred to as planned migration) is initiated by a user to execute a planned failover from the source side (e.g., source region) to the target side (e.g., a target region) without data loss. An unplanned failover is when the source side stops unexpectedly due to, for example, a disaster, and the user needs to start using the target side because the source side is lost. A fallback is to restore the primary/source side before failover to become the primary/source again. A fallback may occur when, after a planned or unplanned failover and the trigger event (e.g., an outage) has ended, users would like to reuse the source side as their primary AD by reversing the failover process. The users can resume either from the last point-in-time on the source side prior to the triggering event, or resume from the latest changes on the target side. The replication process described in the present disclosure can preserve the file system identity after a round-trip replication. In other words, the source file system, after performing a failover and then fallback, can serve the workload again.
[0067] The techniques (e.g., methods, computer-readable medium, and systems) disclosed in the present disclosure include a cross-region replication of file system data and/or metadata by using consistent snapshot information to replicate the deltas between snapshots to multiple remote (or target) regions from a source region, then walking through (or recursively visiting) all the keys and values in one or more file trees (e.g., B-trees) of the source file system (sometimes referred to herein as "walking a B-tree" or "walking the keys") to construct coherent information (e.g., the deltas or the differences between keys and values of two snapshots created at different times). The constructed coherent information is put into a blob format and transferred to a remote side (e.g., a target region) using an object interface, for example Object Store (to be described later), such that the target file system on the remote side can download immediately and start applying the information once it detects the transferred information on the object interface. The process is accomplished by using a control plane, and the process can be scaled to thousands of file systems and hundreds of replication machines. Both the source file system and the target file system can operate concurrently and asynchronously. Operating concurrently means that the data upload process by the source file system and the data download process by the target file system may occur at the same time. Operating asynchronously means the source file system and the target file system can each operate at its own pace without waiting for each other at every stage, for example, different start time, end time, processing speed, etc.
[0068] In certain embodiments, multiple file systems may exist in the same region and are represented by the same B-tree. Each of these file systems in the same region may be replicated across regions independently. For example, file system A may have a set of parallel running replicator threads walking a B-tree to perform replication for file system A. File system B represented by the same B-tree may have another set of such parallel running replicator threads walking the same B-tree to perform replication for file system B.
[0069] With respect to security, the cross-region replication is completely secure. Information is securely transferred, and securely applied. The disclosed techniques provide isolation between the source region and the target region such that keys are not shared unencrypted between the two. Thus, if the source keys are comprised, the target is not affected. Additionally, the disclosed techniques include how to read the keys, convert, them into certain formats, and upload and download them securely. Different keys are created and used in different regions, so separate keys are created on the target and applied to information in a target-centric security mechanism. For example, the FSS generates a session key, which is valid for only one replication cycle or session, to encrypt data to be uploaded from the source region to the Object Store, and decrypt the data downloaded from the Object Store to the target region. Separate keys are used locally in the source region and the target region.
[0070] In the disclosed techniques, each upload and download process through the Object Store during replication has different pipeline stages. For example, the upload process has several pipeline stages, including walking a B-tree to generate deltas, accessing storage IO, and uploading data (or blobs) to the Object Store. The download process has several pipeline stages, including downloading data, applying deltas to snapshots, and storing data in storage. Each of these pipelines also has parallel processing threads to increase the throughput and performance of the replication process. Additionally, the parallel processing threads can take over any failed processing threads and resume the replication process from the point of failure without restarting from the beginning. Thus, the replication process is highly scalable and reliable.
[0071] FIG. 1 depicts an example concept of recovery point objective (RPO) and recovery time objective (RTO) for an unplanned failover, according to certain embodiments. RPO is the maximum tolerance for data loss (usually specified as minutes) between the failure of a primary site and the availability of a secondary site. As shown in FIG. 1, the primary site A 102 encounters an unplanned incident at time 110, which triggers a failover replication process by copying the latest snapshot and its deltas to the secondary site B 104. The initially copied information reaches the secondary site B 104 at time 112. The primary site A 102 completes its copying of information to the secondary site B 104 at time 114, and the secondary site B 104 completes its replication process at time 116. Thus, the secondary site B 104 becomes fully operational at time 116. As a result, the user's data is not accessible in the primary site A 102, starting from point 110 until point 116, when that data is available again. Therefore, RPO is the time between point 110 and point 116. For example, if there is 10-minute worth of data that a user does not care about, then RPO is 10 minutes. If the data loss is more than 10 minutes, the RPO is not met. A zero RPO means a synchronous replication.
[0072] RTO is the time it takes for the secondary to be fully operational (usually specified as minutes), so a user can access the data again after the failure happens. It is considered from the secondary site’s perspective. Referring back to FIG. 1, the primary' site A 102 starts the failover replication process at time 120. However, the secondary site B 104 is still operational until time 122 when it is aware of the incident (or outage) at the primary' site A 102.
Therefore, the secondary site B 104 stops its service at time 122. Using the similar failover replication process described for RPO, the secondary site B 104 becomes fully operational at time 126. Therefore, the RTO is the time between 122 and 126. The secondary site B 104 can now assume the role of the primary site. However, for customers who use primary site A 102, the loss of service is between time 120 and 126.
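The following arithmetic sketch illustrates how RPO and RTO follow from the timeline of FIG. 1; the minute values below are assumptions chosen only for illustration and are not taken from the specification.

    # Timestamps expressed in minutes from the start of the incident.
    t_outage_at_primary       = 0    # point 110: primary site A fails
    t_secondary_operational   = 25   # points 116/126: site B fully operational
    t_secondary_stops_service = 10   # point 122: site B learns of the outage

    rpo_minutes = t_secondary_operational - t_outage_at_primary        # 110 -> 116
    rto_minutes = t_secondary_operational - t_secondary_stops_service  # 122 -> 126
    print(rpo_minutes, rto_minutes)   # 25 15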
[0073] The primary (or source) site is where the action is happening, and the secondary (or target) site is inactive and not usable until there is a disaster. However, customers can be provided some point in time for them to continue to use for testing-related activities in the secondary site. It’s about how customers set up the replication and how they can start using the target when something goes wrong, and how they come back to the source once their sources have failover.
[0074] FIG. 2 is a simplified block diagram illustrating an architecture for cross-region remote replication, according to certain embodiments. In FIG. 2, the end-to-end replication architecture illustrated has two regions, a source region 290 and a target region 292. Each region may contain one or more file systems. In certain embodiments, the end-to-end replication architecture includes data planes 202 & 212, control planes (only control APIs 208a-n & 218a-n are shown), local storages 204 & 214, Object Store 260, and Key Management Service (KMS) 250 for both source region 290 and target region 292. FIG. 2 illustrates only one file system 280 in the source region 290, and one file system 282 in the target region 292 for simplicity. If there is more than one file system in a region, the same replication architecture applies to each pair of source and target file systems. In certain embodiments, multiple cross-region replications may occur concurrently between each pair of source and target file systems by utilizing parallel processing threads. In some embodiments, one source file system may be replicated to different target file systems located in the same target region. Additionally, file systems in a region may share resources. For example, KMS 250, Object Store 260, and certain resources in data plane may be shared by many file systems in the same region depending on implementations.
[0075] The data planes in the architecture include local storage nodes 204a-n & 214a-n and replicators (or a replicator fleet) 206a-n & 216a-n. A control API host in each region does all the orchestration between different regions. The FSS receives a request from a customer to set up a replication between a source file system 280 and a target file system 282 to which the customer wants to move its data. The control plane 208 gets the request, does the resource allocation, and informs the replicator fleet 206a-n in the source data plane 202 to start uploading the data 230a (or may be referred to as deltas being uploaded) from different snapshots to an object storage 260. APIs are available to help customers set replication time objective and recovery time objective (RTO). The replication model disclosed in the present disclosure is a "push-based" model based on snapshot deltas, meaning that the source region initiates the replication.
[0076] As used herein, the data 230a and 230b transferred between the source file system 280 and the target file system 282 is a general term, and may include the initial snapshot, keys and values of a B-tree that differ between two snapshots, file data (e.g., finap), snapshot metadata (i.e., a set of snapshot B-tree keys that reflect various snapshots taken in the source file system), and other information (e.g., manifest files) useful for facilitating the replication process.
[0077] Turning to the data planes of the cross-region replication architecture, a replicator is a component in the data plane of a file system. It performs either delta generation or delta application for that file system depending on the region where the file system locates. For example, replicator fleet 206 in a source region file system 280 performs delta 230a generation and replication. Replicator fleet 216 in a target region file system 282 downloads deltas 230b and applies them to the latest snapshot in the target region file system 282. The target region file system 282 can also use its control plane and workflows to ensure end-to- end transfer.
[0078] All the incremental work is based on the snapshot, an existing resource in file storage as a service. A snapshot is a point in time, data point, or picture of what is happening in the file system, and performed periodically in the source region file system 280. For a very first replication, the FSS takes the base snapshot (e.g., no replication has ever been taken), which is a snapshot of all the content of the source file system, and transfers all of that content to the target system. In other words, replicators read from the storage layer for that specific file system and puts all the data in the object storage buckets.
[0079] Once the data plane 202 of the source file system 280 uploads all the data 230a to the object storage (or Object Store) 260, the source side control plane 208 will notify the target side control plane 218 that there is a new work to be done on the target side, which is then relayed to the replicators of the target side. Target side replicators 216a-n then start downloading the objects (e.g., initial snapshot and deltas) from the object storage bucket 260 and applying the deltas captured on the source side.
[0080] If it is a base copy (e.g., the whole file system content up to the point of time, for example, ranging from past five days to five years), the upload process may take longer. To help achieve service level objective about time and performance, the source system 280 can take replication snapshot at a specific duration, such as one hour. The source side 280 can then transfer all data within that one hour to the target side 282, and take a new snapshot every one hour. If there are some caches with a lot of changes, the replication may be set to a lower replication interval.
[0081] To illustrate the above discussion, consider a scenario that a first snapshot is created in a file system in a source region (called source file system). Replication is performed regularly; thus, the first snapshot is replicated to a file system in a target region (called the target file system). When some updates are performed in the source file system afterward, a second snapshot is created. If an unplanned outage occurs after the second snapshot is created, the source file system will try to replicate the second snapshot to the target file system. During the failover, the source file system may identify the differences (i.e., deltas) between the first, and second snapshots, which include the B-tree keys and values and their associated file data in a B-tree representing both the first and second snapshots. The deltas 230a & 230b are then transferred from the source file system to the target file system through an Object Store 260 in the target region for the target file system to re-create the second snapshot by applying the deltas to its previously established first snapshot in the target region. Once the second snapshot is created in the target file system, the replication process of the failover completes, and the target file system is ready to operate.
[0082] Turning to control plan and its Application Programming Interfaces (“API”), a control plane provides instructions for data plane which includes replicators as the executor that performs the instructions. Both storage (204 & 214) and replicator fleet (206 & 216) are in the data planes. Control plane is not shown in FIG, 2. As used herein a “cycle” may refer to a time duration beginning at the time when a source file system 280 starts transferring data 230a to a target file system 282 and ending at the time when the target file system 282 receives all data 230b and completes its application of the received data. The data 230a-b is captured on the source side, and then applied on the target side. Once all changes on the target side are applied for a cycle, the source file system 280 takes another snapshot and starts another cycle.
[0083] Control APIs (208a-n & 218 a-n) are a set of hosts in the control plane’s overall architecture, and perform file system configuration. Control APIs are responsible for communicating state information among different regions. State machines that keep track of various state activities within regions, such as the progress of jobs, locations of keys and future tasks to be performed, are distributed among multiple regions. All of these information is stored in control plane of each region, and are communicated among regions through the control APIs. In other words, the state information is about the lifecycle details, details of the delta, and the lifecycle of the resources. The state machines can also track the progress of the replication and work with the data plan to help estimate the time taken for replication. Thus, the state machines can provide status to the users on whether replications are proceeding on time and the health of jobs.
[0084] Additionally, the communication between control APIs (208a-n) of the source file system 280 and control APIs (218a-n) of target file system 218 in different regions includes the transfer of snapshots, and metadata to make exact copies from the source to the target. For example, when a customer takes snapshots periodically in the source file system, the control plane can ensure the same user snapshots are created on the target file system, including metadata tracking, transferring, and recreation. [0085] Object Store 260 (also referred to herein as “Object”) in FIG. 2 is an object storage service (e.g., Oracle’s object storage service) allowing to read blobs, and write files for archival purposes. The benefits of using Object. Store are: first, it is easy to configure, second, it is easy to stream data into the Object Store; and third, it has the benefit of security streaming as a reliable repository to keep information; all because there is no network loss, the data can be immediately downloaded and is permanently there. Although direct communication between Replicators in the source and target regions is possible, direct communication requires a cross-region network setup, which is not scalable and hard to manage.
[0086] For example, if there is a large amount of data to be moved from source to target, the source can upload it to the Object Store 260, and the target 282 does not have to wait for all the information to be uploaded to the Object Store 260 to start downloading. Thus, both source 280 and target 282 can operate concurrently and continuously. The use of Object Store allows the system to scale and achieve faster throughput. Furthermore, key management service (KMS) 250 can control the access to the Object Store 260 to ensure security. In other words, the source tries to move the data out of the source region as fast as possible, and persist the data somewhere before the data can be applied to the target such that the data is not lost.
[0087] Compared to using a network pipe which has packet loss and recovery issues, the utilization of Object Store 260 between the source and target regions enables continuous data streaming that allows hundreds of file systems from the source region to write to the Object Store, while at the same time, the target region can apply hundreds of files concurrently.
Thus, the data streaming through the Object Store can achieve high throughput. Additionally, both the source and target regions can operate at their own rates for uploading and downloading.
[0088] Whenever a user changes certain data in the source file system 280, a snapshot is taken, and deltas before and after the change are updated. The changes may be accumulated on the source file system 280 and streamed to the Object Store 260. The target file system 282 can detect that data is available in the Object Store 260 and immediately download and apply the changes to its file system. In some embodiments, only the deltas are uploaded to the object storage after the base snapshot.
[0089] In some embodiments, replicators can communicate to many different regions (e.g., Phoenix to Ashburn to other remote regions), and the file system can manage many different endpoints on replicators. Each replicator 206 in the source file system 280 can keep a cache of these object storage endpoints, and also works with KMS 250 to generate transfer keys (e.g., session keys) to encrypt data for the Object Storage 260 (e.g., Server Side Encryption or SSE) to secure data stored in the buckets. One master bucket is for every AD in a target region. A bucket is a container storing objects in a compartment within an Object Storage namespace (tenancy). All remote clients can communicate to a bucket and write information in a particular format so that each file system's information can be uniquely identified to avoid mixing up the data for different customers or file systems.
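As a hypothetical illustration of writing information to the shared bucket in a format that keeps each file system's data uniquely identifiable, an object key could be composed as in the sketch below; the key layout and example identifiers are assumptions, not the actual FSS naming scheme.

    def object_key(source_fs_id: str, target_fs_id: str,
                   cycle_number: int, blob_sequence: int) -> str:
        # Prefix every object with identifiers that keep each file system's
        # data separate within the shared bucket.
        return (f"{source_fs_id}/{target_fs_id}/"
                f"cycle-{cycle_number:06d}/blob-{blob_sequence:06d}")

    # object_key("fs-src-123", "fs-tgt-456", 42, 7)
    # -> "fs-src-123/fs-tgt-456/cycle-000042/blob-000007"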
[0090] The Object Store 260 is a high-throughput system and the techniques disclosed in the present disclosure can utilize the Object Store. In certain embodiments, the replication process has several pipeline stages, B-tree walk in the source file system 280, storage IO access, data upload to the Object Store 260, data download from the Object Store 260, and delta application in the target file system 282. Each stage has parallel processing threads involved to increase the performance of data streaming from the source region 290 to a target region 292 through the Object Store 260.
[0091] In certain embodiments, each file system in the source region may have a set of replicator threads 206a-n running in parallel to upload deltas to the Object Store 260. Each file system in the target region may also have a set of replicator threads 216a-n running in parallel to download deltas from the Object Store 260. Since both the source side and the target side operate concurrently and asynchronously, the source can upload as fast as possible, while the target can start downloading once the target detects the deltas are available in the Object Store. The target file system then applies the deltas to the latest snapshot and deletes the deltas in the Object Store after its application. Thus, the FSS consumes very little space in the Object Store, and the Object Store has very high throughput (e.g., gigabytes of transfer).
[0092] In certain embodiments, multiple threads also run in parallel for storage IO access (e.g., DASD) 204a-n & 214a-n. Thus, all processing related to the replication process, including accessing the storage, uploading snapshots and data 230a from the source file system 280 to the Object Store 260, and downloading the snapshots and data 230b to the target file system 282, have multiple threads running in parallel to perform the data streaming.
[0093] File storage is an AD local service. When a file system is created, it is in a specific AD. For a customer to transfer or replicate data from one file system to another file system within the same region or different regions, an artifact (also referred to as manifest) transfer may need to be used.
[0094] As an alternative to transferring data using Object Store, VCN peering may be used to set up network connections between remote machines (e.g., between replicator nodes of source and target) and use Classless Inter-Domain Routing (“CIDR”) for each region.
[0095] Referring back to FIG. 2, Key Management System (KMS) 250 provides security for the replication, and provides storage service for cloud service providers (e.g., OCI). In certain embodiments, the file systems 280 at the source (or primary) side and target (or secondary) side use separate KMS keys, and the key management is hierarchical. The reason for using separate keys is that if the source is compromised, the bad actor cannot use the same keys to decrypt the target. The FSS has a three-layer key architecture. Because the source and target use different keys when transferring data, the source needs to decrypt the data first, re-encrypt with an intermediate key, and then re-encrypt the data on the target side. FSS defines sessions, and each session is one data cycle. A key is created for that session to transfer data. In other words, a new key is used for each new session. In other embodiments, a key may be used for more than one session (e.g., more than one data transfer) before creating another key. No key is transferred through the Object Store 260, and the keys are available only in the source side, and not visible outside the source for security reasons.
[0096] A replication cycle (also referred to as a session) is periodic and adjustable. For example, once every hour, the replicators (206a-n & 216a-n) perform a replication. A cycle starts when a new snapshot is created in the source side 280, and ends when all deltas 230b have been applied in the target side 282 (i.e., the target reaches DONE state). Each session completes before another session starts. Thus, only one session exists at any time, and there is no overlap between sessions.
[0097] Secret management (i.e., replication using KMS) handles secret material transfer between the source (primary) file system 290 and the target (or secondary) file system 292 utilizing KMS 250. The source file system 280 computes deltas, reads file data, and then uses local file system encryption keys, and works with Key Management Service to decrypt the file data. Then, the source file system 280 generates a session key (called delta encryption key (DEK)), encrypts it to become an encrypted session key (called delta transfer key (DTK)), and transfers the DTK to the target file system 282 through their respective control planes 208 & 218. The source file system 280 also uses DEK to encrypt data 230a and upload them to the Object Store 260 through Transport Layer Security (TLS) protocol. The Object Store 260 then uses server side encryption (SSE) to ensure the security of the data (e.g., deltas, manifests, and metadata) 230a for storing.
[0098] The target file system 282 obtains the encrypted session key DTK securely through its control plane 218 (using HTTPS via cross-region API communication), decrypts it via KMS 250 to obtain DEK, and places it in a location in the target region 292. When a replication job is scheduled in the target file system 282, the DEK is given to the replicator (one of the replication fleet 216a-n), and the replicator uses the key to decrypt the data (e.g., deltas including file data) 230b downloaded from the Object Store 260 for application and re-encrypts file data with its local file system keys.
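The session-key flow described in the two preceding paragraphs is sketched conceptually below using the Python cryptography package's Fernet construct as a stand-in for the KMS-backed key wrapping and data encryption; this illustrates only the sequence of operations and is not the actual FSS or KMS implementation.

    from cryptography.fernet import Fernet

    # Source side: create a per-cycle session key (DEK) and wrap it into an
    # encrypted transfer key (DTK) handed over via the control planes.
    kek = Fernet(Fernet.generate_key())     # stand-in for a KMS-managed key
    dek_bytes = Fernet.generate_key()       # delta encryption key (DEK)
    dtk = kek.encrypt(dek_bytes)            # delta transfer key (DTK)

    # Source side: encrypt delta data with the DEK before uploading.
    encrypted_blob = Fernet(dek_bytes).encrypt(b"delta bytes for this cycle")

    # Target side: unwrap the DTK (via KMS in practice), decrypt the blob
    # downloaded from the Object Store, then re-encrypt with local FS keys.
    dek_on_target = Fernet(kek.decrypt(dtk))
    plain_blob = dek_on_target.decrypt(encrypted_blob)
    assert plain_blob == b"delta bytes for this cycle"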
[0099] The replication between the source file system 280 and target file system 282 is a concurrent process, and both the source file system 280 and target file system 282 operate at their own pace. When the source side completes the upload, which may occur earlier than the target’s download process, the source side cleans up its memory and remove all the keys. When the target completes its application of the deltas to its latest snapshot, it cleans up its memory and removes all keys as well. The FSS service also releases the KMS key. In other words, there are two copies of the session key, one in the source file system 280 and another in the target file system 282. Both copies are removed by the end of each session, and a new session key is generated in the next replication cycle. This process ensures that the same keys are not used for different purposes. Additionally, the session key is encrypted by a file system key to create a double protection. This is to ensure only a particular file system can use this session key.
[0100] FIG. 3 is a simplified schematic illustration of components involved in cross-region remote replication, according to certain embodiments. In certain embodiments, a component called delta generator (DG) 310 in source region A 302 and 330 in target region B 304 is part of the replicator fleet 318 and runs on thousands of storage nodes in the fleet. A replicator 318 in source region A does Remote Procedure Call (RPC) (e.g., getting key-value set, lock blocks, etc.) to a delta generator 310 to collect B-tree keys and values, and data pages from Direct-Access Storage Device (DASD) 314, which is a replication storage service for accessing the storage, and considered a data server. The DG 310 in source region A is a helper to the replicator 318 to break the key ranges for a delta and pack all the key/values for a given range into a blob to be sent back to the replicator 318. There are multiple storage nodes 322 & 342 attached to DASDs 314 & 334 in both regions, where each node has many disks (e.g., 10 TBs or more).
[0101] In certain embodiments, the file system communicators (FSC) 312 & 332 in both regions are metadata servers that help update the source file system for user updates to the system. FSCs 312 & 332 are used for file system communication, and the delta generator 310 is used for replication. Both the DGs 310 & 330 and the FSCs 312 & 332 are metadata servers. User traffic goes through the FSCs 312 & 332 and DASDs 314 & 334, while replication traffic goes through the DGs. In an alternative embodiment, the FSC's function may be merged into that of DG.
[0102] In certain embodiments, the shared databases (SDBs) 316 & 336 of both regions are key-value stores through which components in both the control plane and data plane (e.g., replicator fleet) can read and write to communicate with each other. Control planes 320 & 340 of both regions may queue a new job into their respective shared databases 316 & 336, and replicator fleet 318 & 338 may read the queues in the shared databases 316 & 336 constantly and start file system replication once the replicator fleet 318 & 338 detect the job request. In other words, the shared databases 316 & 336 are a conduit between the replicator fleet and the control planes. Further, the shared databases 316 & 336 are a distributed resource throughout different regions, and the IO traffic to/from the shared databases 316 & 336 should be minimized. Similarly, the IO traffic to/from DASD needs to be minimized to avoid affecting the user's performance. However, the replication process may occasionally be throttled because it is a secondary service, compared to the primary service.
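A minimal sketch of the shared database acting as a conduit between the control plane and the replicator fleet is shown below; the key layout, field names, and in-memory dictionary stand-in are assumptions made only for illustration.

    import json

    sdb = {}   # stand-in for the shared key-value store (SDB)

    def control_plane_enqueue(job_id: str, source_fs: str, target_fs: str):
        # Control plane queues a new replication job for the replicator fleet.
        sdb[f"jobs/{job_id}"] = json.dumps(
            {"source_fs": source_fs, "target_fs": target_fs, "state": "QUEUED"})

    def replicator_poll_once():
        # A replicator scans the queue and claims the first queued job.
        for key, value in list(sdb.items()):
            job = json.loads(value)
            if job["state"] == "QUEUED":
                job["state"] = "RUNNING"
                sdb[key] = json.dumps(job)
                return key, job
        return None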
[0103] Replicator fleet 318 in source region A can work with DG 310 to start walking the B-tree in the file system in source region A to collect key-values and convert them into flat files or blobs to be uploaded to the Object Store. Once the data blobs (including key-values and actual data) are uploaded, the target can immediately apply them without waiting for a large number of blobs to be present in the Object Store 360. The Object Store 360 is located in the target region B for disaster recovery reasons. The goal is to push from the source to the target region B as soon as possible and keep the data safe.

[0104] There are many replicators to replicate thousands of file systems by utilizing low-cost machines with smaller footprints to optimize space, and by scheduling as many replications as possible while ensuring a fair share of bandwidth among them. Replicator fleets 318 & 338 in both regions run on virtual machines that can be scaled up and down automatically to build an entire fleet for performing replication. The replicators and replication service can dynamically adjust based on the capacity to support each job. If one replicator is heavily loaded, another can pick up to share the load. Different replicators in the fleet can balance load among themselves to ensure the jobs can continue and do not stop due to overloading individual replicators.
[0105] FIG. 4 is a simplified flow diagram illustrating the steps executed during cross-region remote replication, according to certain embodiments.
[0106] Step S1: When a customer sets up replication, the customer provides the source (or primary) file system (A) 402, the target (or secondary) file system (B) 404, and the RPO. A file system is uniquely identified by a file system identification (e.g., Oracle Cloud ID or OCID), a globally unique identifier for a file system. Data is stored in the file storage service (“FSS”) control plane database.
[0107] Step S2: The source (A) control plane (CP-A) 410 orchestrates creating system snapshots periodically at an interval (smaller than the RPO) and notifies the data plane (including replicator/uploader 412) of the latest snapshot and the last snapshot that was successfully copied to the target (B) file system 404.
[0108] Step S3: CP-A 410 notifies replicator 412 (or uploader), a component in the data plane, to copy the latest snapshot:
S3a: Replicator 412 in Source (A) walks the B-Tree to compute the deltas between the two given snapshots. The existing key infrastructure is used to decrypt the file system data.
S3b: These deltas 414 are uploaded to the Object Store 430 in the target (B) region (the data may be compressed and/or de-duplicated during the copy). This upload may be performed by multiple replicator threads 412 in parallel.
[0109] Step S4: CP-A 410 notifies the target (B) control plane (CP-B) 450 about the completion of the upload.

[0110] Step S5: CP-B 450 calls the target replicator-B 452 (or downloader) to apply the deltas:
S5a: Replicator-B 452 downloads the data 454 from Object Store 430.
S5b: Replicator-B 452 applies these deltas to the target file system (B).
[0111] Step S6: CP-A 410 is notified of the new snapshot now available on target (B) after the delta application is complete.
[0112] Step S7: The cross-region remote replication process repeats from step S2 to step S6.
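Taken together, steps S2 through S6 form one replication cycle. The following Python sketch summarizes that loop; the object and method names are assumptions used only to mirror the figure, not an actual FSS API:

    # Sketch of one replication cycle (steps S2-S6 of FIG. 4), with assumed interfaces.
    def replication_cycle(cp_a, replicator_a, cp_b, replicator_b, object_store):
        latest, last_copied = cp_a.create_system_snapshot()         # S2: snapshot + bookkeeping
        deltas = replicator_a.compute_deltas(last_copied, latest)   # S3a: B-tree walk
        replicator_a.upload(object_store, deltas)                   # S3b: upload to target-region Object Store
        cp_a.notify_upload_complete(cp_b)                           # S4: tell the target CP
        blobs = replicator_b.download(object_store)                 # S5a: download deltas
        replicator_b.apply(blobs)                                   # S5b: apply to the target file system
        cp_b.notify_snapshot_available(cp_a, latest)                # S6: report new snapshot on target
        # S7: the cycle repeats with `latest` as the new base snapshot.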
[0113] FIG. 5 is a simplified diagram illustrating the high-level concept of B-tree walk, according to certain embodiments. B-tree structure may be used in a file system. A delta generator walks the B-tree and guarantees consistency for the walk. In other words, the walk ensures that the key-values are what is expected at the end of the walk and captures all information between any two snapshots, such that no data corruption may occur. The file system is a transactional type of file system that may be modified, and the users need to know about the modification and redo the transactions because another user may update the same transaction or data.
[0114] Key-values and snapshots are immutable (e.g., they cannot be modified, although the garbage collector can remove them). As illustrated in FIG. 5, there are many snapshots (snapshot 1 ~ snapshot N) in the file systems. When a delta generator is walking the B-tree keys (510 - 560) in a source file system, snapshots may be removed because a garbage collector 580 may come in to clean the keys of the snapshots that are deemed garbage. When a delta generator walks the B-tree keys, it needs to ensure the keys associated with the remaining snapshots (e.g., those not removed by the garbage collector) are copied. When keys, for example, 540 and 550, are removed by garbage collector 580, the B-tree pages may shrink, for example from two pages before garbage collection down to one page after garbage collection. The way a delta generator can ensure consistency when walking B-tree keys is to confirm that the garbage collector 580 has not modified or deleted any keys for the page (or a section between two snapshots) that the delta generator has just walked (e.g., between two keys). Once the consistency is confirmed, the delta generator collects the keys and sends them to the replicator to process and upload.
[0115] The B-tree keys may give a picture of what has changed. The techniques disclosed in the present disclosure can determine which B-tree keys are new and which have been updated between two snapshots. A delta generator may collect the metadata part, keys and values, and associated data, then send them to the target. The target can figure out that the received information is between two snapshot ranges and applies it in the target file system. After the delta generator (or delta generator threads) walks a section between two keys and confirms its consistency, it uses the last ending key as the next starting key for its next walk. The process is repeated until all keys have been checked, and the delta generator collects the associated data every time consistency is confirmed.
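A minimal sketch of this consistency-checked walk is shown below in Python; the helper methods on the B-tree object are assumptions chosen to illustrate the retry-from-the-same-key behavior described above:

    # Sketch of a consistency-checked range walk (assumed B-tree helpers).
    def walk_range(btree, start_key, end_key):
        collected = []
        cursor = start_key
        while cursor < end_key:
            section, next_key, version = btree.read_section(cursor)
            if btree.section_version(cursor, next_key) != version:
                # The garbage collector changed this section mid-walk: drop the
                # in-progress work and re-walk from the same starting key.
                continue
            collected.extend(section)    # consistency confirmed: keep the keys
            cursor = next_key            # last ending key becomes the next starting key
        return collected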
[0116] For example, in a file system, when a file is modified (e.g., created, deleted, and then re-created), this process creates several versions of corresponding file directory entries. During a replication process, the garbage collector may clean up (or remove) a version of the file directory entry corresponding to the deleted file and cause a consistency problem called whiteout. Whiteout occurs if there is an inconsistency between the source file system and the target file system, because the target file system may fail to reconstruct the original snapshot chain involving the modified file. The disclosed techniques can ensure the consistency between the source file system and the target file system by detecting a whiteout file (i.e., a modified file affected by the garbage collector) during B-tree walk, retrieving an unaffected version of the modified file, and providing relevant information to the target file system during the same replication cycle to properly reconstruct the correct snapshot chain.
[0117] FIGS. 6A and 6B are diagrams illustrating pipeline stages of cross-region replication, according to certain embodiments. The cross-region replication for a source file system disclosed in the present disclosure has four pipeline stages in the source file system, namely initiation of the cross-region replication, the B-tree walk in the source file system (i.e., the delta generation pipeline stage), storage IO access for retrieving data (i.e., the data read pipeline stage), and data upload to the Object Store (i.e., the data upload pipeline stage). The target file system has four similar pipeline stages but in reverse order, namely preparation of cross-region replication, data download from the Object Store, delta application in the target file system, and storage IO access for storing data. FIG. 6A illustrates the four pipeline stages in the source file system, but a similar concept applies to the target file system. FIG. 6B illustrates the interaction among the processes and components involved in the pipeline stages. All of these pipeline stages may operate in parallel. Each pipeline stage may operate independently and hand off information to the next pipeline stage when the processing in the current stage completes. Each pipeline stage is ensured to take a share of the entire bandwidth and not use more than necessary. In other words, resources are allocated fairly among all jobs. If no other job is working in the system, the working job can get as many resources as possible.
[0118] The threads in each pipeline stage also perform their tasks in parallel (or concurrently) and independently of each other in the same pipeline stage (i.e., if a thread fails, it will not affect other threads). Additionally, the tasks (or replication jobs) performed by the threads in each pipeline stage are restartable, which means when a thread fails, a new thread (also referred to as a substitute thread) may take over the failed thread to continue the original task from the last successful point.
[0119] In some embodiments, a B-tree walk may be performed with parallel processing threads in the source file system 280. A B-tree may be partitioned into multiple key ranges between the first key and the last key in the file system. The number of key ranges may be determined by customers. Multiple range threads (e.g., around 8 to 16) per file system may be used for the B-tree walk. One range thread can perform the B-tree walk for a key range, and all range threads operate concurrently and in parallel. The number of threads to be used depends on factors such as the size of the file system, availability of resources, and bandwidth in order to balance the resource and traffic congestion. The number of key ranges is usually more than the number of range threads available to utilize the range threads fully. Thus, the B-tree walk can be scalable and processed by concurrent parallel walks (e.g., with multiple threads).
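As a rough illustration of this partitioning scheme, the Python sketch below splits a key space into more ranges than worker threads and walks them in parallel; the over-partitioning factor and the helper names are assumptions, not FSS internals:

    # Sketch: partition the key space into more ranges than threads so range threads
    # stay fully utilized, then walk the ranges in parallel.
    from concurrent.futures import ThreadPoolExecutor

    def partition_key_ranges(first_key: int, last_key: int, num_ranges: int):
        step = max(1, (last_key - first_key) // num_ranges)
        bounds = list(range(first_key, last_key, step)) + [last_key]
        return list(zip(bounds[:-1], bounds[1:]))

    def parallel_walk(walk_fn, first_key: int, last_key: int,
                      num_threads: int = 8, ranges_per_thread: int = 3):
        ranges = partition_key_ranges(first_key, last_key, num_threads * ranges_per_thread)
        with ThreadPoolExecutor(max_workers=num_threads) as pool:
            # Each submitted range is walked independently; a failed range can be retried
            # without affecting the others.
            return list(pool.map(lambda r: walk_fn(*r), ranges))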
[0120] If some keys are not consistent after the delta generator walks a page because some keys do not exist, the system may drop a transaction that is in progress and has not been committed yet, and go back to the starting point to walk again. During the repeated B-tree walk due to inconsistency, the delta generator may ignore the missing keys and their associated data by not collecting them, to minimize the amount of information to be processed or uploaded to the target side, since these associated data are deemed garbage. Thus, the B-tree walk and data transfer can be more efficient. Additionally, a delta generator does not need to wait for the garbage collector to remove the information to be deleted before walking the B-tree keys. For example, keys have dependencies on each other. If a key or an iNode points to a block that is deleted or should be deleted by the garbage collector, the system (or the delta generators) can figure out by itself that the particular block is garbage, and the delta generators do not need to carry it.

[0121] Delta generators typically do not modify anything on the source side (e.g., they do not delete the keys or blocks of data deemed garbage) but simply do not copy them to the target side. The B-tree walk process and garbage collection are asynchronous processes. For example, when a block of data that a key points to no longer exists, the file system can flag the key as garbage and note that it should not be modified (e.g., immutable), but only the garbage collector can remove it. A delta generator can continue to walk the next key without waiting for the garbage collector. In other words, delta generators and garbage collectors can proceed at their own pace.
[0122] In FIG. 6A, when a source region initiates a cross-region replication process, which may involve many file systems, main threads 610a-n pick up the replication jobs, one job per file system. A main thread (e.g., 610a, or 610 for later use) of a file system in the source region (i.e., the source file system) communicates to delta generator 620 (shown in FIG. 6B) to obtain the number of key ranges requested by a customer, and updates a corresponding record in SDB 622. Once the main thread 610 of the source file system figures out the required number of key ranges, it further creates a set of range threads 612a-n based on the required number of key ranges. These range threads 612a-n are performed by the delta generator 620. They initialize their GETKEYVAL buffers 640 (shown in FIG. 6B), update their checkpoint records 642 in SDB 622 (shown in FIG. 6B), and perform storage IO access 644 by interacting with DASD IO threads 614a-n.
[0123] In certain embodiments, each main thread 610 is responsible for overseeing all the range threads 612a-n it creates. During the replication, the main thread 610 may generate a master manifest file outlining the whole replication. The range threads 612a-n generate a range manifest file including the number of key ranges (i.e., a sub-division of the whole replication), and then checkpoint manifest (CM) files for each range to provide updates to the target file system about the number of blobs per checkpoint, where checkpoints are created during the B-tree walk. One checkpoint is created by a range thread 612. Once the main thread 610 determines all the range threads 612a-n have been completed, it creates a final checkpoint manifest (CM) file with an end-of-file marking, and then uploads the CM file to the Object Store for the target file system to figure out the progress in the source file system. The CM file contains a summary of all individual ranges, such as the range count, the final state of the checkpoint record, and other information.

[0124] The range threads 612a-n are used for parallel processing to reduce time significantly for the B-tree walk for a big source file system. In certain embodiments, the B-tree keys are partitioned into roughly equal-sized ranges. One range thread can perform the B-tree walk for a key range. The number of range threads 612a-n to be used depends on factors such as the size of the file system, availability of resources and bandwidth to balance the resources, the amount of data to generate, and traffic congestion. The number of key ranges is usually more than the number of range threads 612a-n available, around a 2x to 4x ratio, to fully utilize the range threads. Each of the range threads 612a-n has a dedicated buffer (GETKEYVAL) 640 containing available jobs to work on. Each range thread 612 operates independently of other range threads, and updates its checkpoint records 642 in SDB 622 periodically.
[0125] When the range threads 612a-n are walking the B-tree (i.e., recursively visiting every node of the B-tree), they may need to collect file data (e.g., FMAP) associated with B-tree keys and request IO access 644 to storage. These IO requests are enqueued by each range thread 612 to allow DASD IO threads 614a-n (i.e., the data read pipeline stage) to work on them. These DASD IO threads 614a-n are common threads shared by all range threads 612a-n. After DASD IO threads 614a-n have obtained the requested data, the data is put into an output buffer 646 to serialize it into blobs for object threads 616a-n (i.e., the data upload pipeline stage) of the replicators to upload to the Object Store located in the target region. Each object thread picks up an upload job that may contain a portion of all data to be uploaded, and all object threads perform the upload in parallel.
[0126] FIG. 7 is a diagram illustrating a layered structure in the FSS data plane, according to certain embodiments. In FIG. 7, the replicator fleet 710 has four layers: job layer 712, delta generator client 714, encryption/DASD IO 716, and Object 718. The replicator fleet 710 is a single process responsible for interacting with the storage fleet 720, KMS 730, and Object Storage 740. In certain embodiments, the job layer 712 polls the SDB 704 for enqueued jobs 706, either upload jobs or download jobs. The replicator fleet 710 includes VMs (or threads) that pick up the enqueued replication jobs to their maximum capacity. Sometimes, a replicator thread may own a part of a replication job, but it will work together with another replicator thread that owns the rest of the same replication job to complete the entire replication job concurrently. The replication jobs performed by the replicator fleet 710 are restartable in that if a replicator thread fails in the middle of replication, another replicator thread can take over and continue from the last successful point to complete the job the failed replicator thread initially owned. If a strayed replicator thread (e.g., one that fails and wakes up again) conflicts with another replicator thread, FSS can use a mechanism called a generation number to avoid the conflict by making both replicator threads update different records.
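The generation-number idea can be sketched as a simple optimistic-ownership check, as in the Python fragment below; the record fields are assumptions used only to illustrate how a strayed thread's stale updates are rejected:

    # Sketch of generation-number conflict avoidance (assumed record layout).
    def take_over_job(sdb: dict, job_id: str) -> int:
        record = sdb[job_id]
        record["generation"] += 1          # bumping the generation invalidates the stale owner
        return record["generation"]

    def update_job_record(sdb: dict, job_id: str, my_generation: int, progress: dict) -> bool:
        record = sdb[job_id]
        if record["generation"] != my_generation:
            return False                   # a strayed thread with an old generation cannot overwrite
        record.update(progress)            # the current owner records its checkpoint/progress
        return True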
[0127] The delta generator client layer 714 performs the B-tree walking by accessing the delta generator server 724, where the B-tree is located, in the storage fleet 720. The encryption / DASD IO layer 716 is responsible for security and storage access. After the B-tree walk, the replicator fleet 710 may request IO access through the encryption / DASD IO layer 716 to access DASD extents 722 for file data associated with the deltas identified during the B-tree walk. Both the replicator fleet 710 and the storage fleet 720 regularly update the control API 702 with their status (e.g., checkpoints and leasing for the replicator fleet 710) through SDB 704 to allow the control API 702 to trigger alarms or take actions when necessary.
[0128] The encryption / DASD IO layer 716 interacts with the KMS and FSK fleet 730 at the target side to create session keys (or snapshot encryption keys) during a cross-region replication process, and uses the FSK for encrypting and decrypting the session keys. Finally, the object layer 718 is responsible for uploading deltas and file data from the source file system to the Object Store 740 and downloading them to the target file system from the Object Store 740.
[0129] The data plane of FSS is responsible for delta generation. The data plane uses a B-tree to store FSS data, and the B-tree has different types of key-value pairs, including but not limited to, leader block, superblock, iNode, file name keys, cookie map (cookies related to directory entries), and block map (for file contents data, also referred to as FMAP).
[0130] These B-tree keys are processed by the replicators and delta generators in the data plane together. Algorithms for computing the changed key-value pairs (i.e., part of the deltas) between two given snapshots in a file system can continuously read the keys, return them to the replicators using transaction budgets, and ensure that transactions are confirmed at the end to obtain consistent key-value pairs for processing.
[0131] In other embodiments, the delta generation and calculation may be scalable. The scalable approach can utilize multiple threads to compute deltas (i.e., the changes of key-value pairs) between two snapshots by breaking a B-tree into many key ranges. A pool of threads (i.e., the delta generators) can perform the scanning of the B-tree (i.e., walking the B-tree) and calculate the deltas in parallel.

[0132] FIG. 8 depicts a simplified example binary large object (BLOB) format, according to certain embodiments. A blob is a data type for storing information (e.g., binary data) in a database. Blobs are generated during replication by the source region and uploaded to the Object Store. The target region needs to download and apply the blobs. Blobs and objects may be used interchangeably depending on the context.
[0133] During the B-tree walk, when a delta generator encounters an iNode and its block map (also referred to as FMAP, data associated with a B-tree key) for a given file (i.e., the data content), the delta generator works with the replicators to traverse all the pages in the blocks (FMAP blocks) inside the DASD extent that the FMAP points to, read them into a data buffer, decrypt the data using a local file encryption key, and put the data into an output buffer to serialize it into a blob for the replicators to upload to the Object Store. In other words, the delta generators need to collect all FMAPs for an identified delta to get all the data related to the differences between the two snapshots.
[0134] A snapshot delta stored in the Object Store may span many blobs (or objects, if stored in the Object Store). The blob format for these blobs has keys, values, and data associated with the keys if they exist. For example, in FIG. 8, the snapshot delta 800 includes at least three blobs, 802, 804 and 806. The first blob 802 has a prefix 810 indicating the key-value type, key length and value length, followed by its key 812 (key1) and value 814 (val1). The second blob 804 has a prefix 820 (key-value type, key length and value length), key 822 (key2), value 824 (val2), data length 826 and data 828 (data2). In the prefix 820 of this second blob 804, the key-value type is fmap because this blob has additional data 828 associated with the key 822. The third blob 806 has a similar format to that of the first blob 802, for example, prefix 830, key 832 (key3), and value 834 (val3).
[0135] Data is decrypted, collected, and then written into the blob. All processes are performed in parallel. Multiple blobs can be processed and updated at the same time. Once all processes are done, data can be written into the blob format (shown in FIG. 8) and then uploaded to the Object Store with a format or path names (illustrated in FIG. 9).
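The blob layout of FIG. 8 can be sketched as a simple length-prefixed serialization, as in the Python fragment below; the field widths and packing order are assumptions chosen for illustration and do not reflect the exact on-wire format:

    # Sketch of the FIG. 8 blob layout: a prefix (key-value type, key length, value
    # length), then the key and value, and, for FMAP-type entries, a data length and data.
    import struct

    def pack_entry(kv_type: int, key: bytes, value: bytes, data: bytes = b"") -> bytes:
        prefix = struct.pack("!BII", kv_type, len(key), len(value))
        entry = prefix + key + value
        if data:                                      # e.g., FMAP entries carry file content
            entry += struct.pack("!I", len(data)) + data
        return entry

    def pack_blob(entries) -> bytes:
        # entries: iterable of (kv_type, key, value) or (kv_type, key, value, data) tuples
        return b"".join(pack_entry(*e) for e in entries)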
[0136] FIG. 9 depicts an example replication bucket format, according to certain embodiments. A “bucket” may refer to a container storing objects in a compartment within an object storage namespace. In certain embodiments, buckets are used by source replicators to store secured data using a server-side encryption (SSE) technique and are also used by target replicators to download data for applying changes to snapshots. The replication data for all file systems for a target region may share a bucket in that region.
[0137] The data layout of a bucket in the Object Store has a directory structure that includes, but is not limited to, the file system ID (e.g., Oracle Cloud ID), deltas with a starting snapshot number and an ending snapshot number, a manifest describing the content of the information in the layout of the objects, and blobs. For example, the bucket in FIG. 9 contains two objects 910 & 930. The first object 910 has two deltas 912 & 920. It starts with a path name 911 using the source file system ID as a prefix (e.g., ocid1.filesystem.oc1.iad. ...), the first delta 912 that is generated from snapshot 1 and snapshot 2, and a second delta 920 generated from snapshot 2 and snapshot 3. Each delta has one or more blobs representing the content for that delta. The first delta 912 has two blobs 914 & 916 stored in the sequence of their generation. The second delta 920 has only one blob 922. Each delta also has a manifest describing the content of the information in the layout of this delta, for example, manifest 918 for the first delta 912 and manifest 924 for the second delta 920. A manifest in a bucket is content that describes the deltas, for example, the file system numbers and snapshot ranges, etc. The manifest may be a master manifest, range manifest or checkpoint manifest, depending on the stage of the replication process.
[0138] The second object 930 also has two deltas 932 & 940 with a similar format starting with a path name 931. The two objects 910 & 930 in the bucket come from different source regions, IAD for object 910 and PHX for object 930, respectively. Once a blob is applied, the corresponding information in the layout can be removed to reduce space utilization.
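A lightweight sketch of how such a bucket layout might be constructed is shown below; the path scheme mirrors FIG. 9, but the exact separators, the example OCID suffix, and the function names are illustrative assumptions:

    # Sketch of object naming inside a replication bucket, following the FIG. 9 layout:
    # <file system OCID>/<delta start_end>/<blob or manifest>.
    def delta_object_name(fs_ocid: str, start_snap: int, end_snap: int, blob_seq: int) -> str:
        return f"{fs_ocid}/delta_{start_snap}_{end_snap}/blob_{blob_seq:06d}"

    def manifest_object_name(fs_ocid: str, start_snap: int, end_snap: int) -> str:
        return f"{fs_ocid}/delta_{start_snap}_{end_snap}/manifest"

    # Example (hypothetical OCID suffix):
    #   delta_object_name("ocid1.filesystem.oc1.iad.example", 1, 2, 0)
    #   -> "ocid1.filesystem.oc1.iad.example/delta_1_2/blob_000000"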
[0139] A final manifest object (i.e., the checkpoint manifest, CM file) is uploaded from the source region to the Object Store to indicate to the target region that the source file system has completed the snapshot delta upload for a particular object. The source CP will communicate this event to the target CP, where the target CP can inform the target DP via SDB to trigger the download process for that object by target replicators.
[0140] The control plane in a source region or target region orchestrates all of the replication workflows, and drives the replication of data. The control plane performs the following functions: 1) creating system snapshots that are the basis for creating the deltas; 2) deciding when such snapshots need to be created; 3) initiating replication based on the snapshots; 4) monitoring the replication; 5) triggering the deltas to be downloaded by the secondary (or target) side; and 6) indicating to the primary (or source) side that the snapshot has reached the secondary.
[0141] A file system has a few operations to handle its resources, including, but not limited to, creating, reading, updating, and deleting (CRUD). These operations are generally synchronous within the same region, and take up workflows as the file system gets HTTPS requests from API servers, makes changes in the backend for creating resources, and gets responses back to customers. The resources are split between the source and target regions. The states are maintained for the same resources between the source and target regions. Thus, asynchronous communication between the source and target regions exists. Customers can contact the source region to create or update resources, which can be automatically reflected to the secondary or auxiliary resources in the target region. The state machine in the control plane also covers recovery in many aspects, including but not limited to, failure in the fleet, key management failure, disk failure, and object failure, etc.
[0142] Turning to the Application Programming Interface (API) in the control plane, there are different APIs for users to configure the replication. Control APIs for any new resource work only in the region where the object is created. In a target file system, a field called “IsTargetable” in its APIs can be set to ensure that a target file system undergoing replication cannot be accidentally used by a consumer. In other words, setting this field to false means that although a consumer can see the target file system, no one can export the target file system or access any data in the live system. Any export may change the data because an export grants read/write permission, not read-only permission. Thus, export is not allowed, to prevent any change to the target file system during the replication process. The consumer can only access data in old snapshots that have already been replicated. All newly created or cloned file systems can have this field set to true. The reason is that a target can only get data from a single source. Otherwise, a collision may occur when data is written or deleted. The system needs to know whether or not the target file system being used is already part of some replication. A "true" setting for the “IsTargetable” field means no replication is on-going, and a "false" setting means the target file system cannot be used.
[0143] Regarding cross-region communication between control plane components, a primary resource on the source file system is called an application, and an auxiliary (or secondary) resource on the target file system is called an application target. When a source object and a target object are created, they have a single replication relationship. Both objects can only be updated from the source side, including changing compartments, editing or deleting details. When a user wants to delete the target side, the replication can be deleted by itself. For a planned failover, the source side can be deleted, and both the source side and target replication are deleted. For an unplanned failover, the source side is not available, so only the target replication can be deleted. In other words, there are two resources for a single replication, and they should be kept in sync. There are various workflows for updating metadata on both the source and target sides. Additionally, retries, failure handling, and cross-region APIs for failover are also part of the cross-region communication process.
[0144] When the source creates the necessary security and other related artifacts, it uploads the security and the artifacts to the Object Store, and initiates a job on the target (i.e., notifies the target that a job is available), and the target can start downloading the artifacts (e.g., snapshots or deltas). Thereafter, the target continues to keep looking in the Object Store for an end-of-file marker (also referred to herein as a checkpoint manifest (CM) file). The CM file is used as a mechanism for the source side and target side to communicate the completion of the upload of an object during the replication process. At every checkpoint, the source side uploads this CM file containing information, such as the number of blobs that have been uploaded up to this checkpoint, such that the target side can download this number of blobs to apply to its current snapshot. This CM file is a mechanism for the source side to communicate to the target side that the upload of an object to the Object Store is complete for the target to start working on that object. In other words, the target will continue to download until there are no more objects in the Object Storage. Thus, this scheme enables the concurrent processing of both the source side and the target side.
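The checkpoint-manifest hand-off can be sketched as a simple polling loop on the target side, as below; the object-store client interface, the manifest field names, and the object naming are assumptions used only to illustrate the mechanism:

    # Sketch of the target-side download loop driven by checkpoint manifest (CM) files.
    import json, time

    def target_download_loop(object_store, delta_prefix: str, apply_blob) -> None:
        applied = 0
        while True:
            cm_bytes = object_store.get(f"{delta_prefix}/checkpoint_manifest")
            if cm_bytes is None:
                time.sleep(5)                        # no new CM yet: keep polling
                continue
            manifest = json.loads(cm_bytes)
            while applied < manifest["blob_count"]:  # download the blobs the CM reports
                apply_blob(object_store.get(f"{delta_prefix}/blob_{applied:06d}"))
                applied += 1
            if manifest.get("eof"):                  # final CM file: the source upload is complete
                break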
[0145] FIG. 10 is a flow chart illustrating state machines for concurrent source upload and target download, according to certain embodiments. As discussed earlier, both the source file system and the target file system can perform the replication concurrently and thus have their respective state machines. In certain embodiments, each file system may have its own state machine while sharing some common job-level states. In FIG. 10, the source file system has states 1002 to 1018 for performing the data upload plus states 1030 to 1034 for session key generation and transfer. The target file system has states 1050 to 1068 for data download. A session key may be generated at any time in the source file system while the deltas are being uploaded to the Object Storage. Thus, the session key transfer has its own state sequence 1030 to 1034. In FIG. 10, the target file system cannot start the replication download process (i.e., Ready to Reconcile state 1050) until it has received the indication that at least an object has been uploaded by the source file system to the Object Storage (i.e., Manifest_Copied state 1014) and that a session key is ready for it to download (i.e., Copied DTK state 1034).
[0146] In a source file system, several functional blocks, such as snapshot generator, control API and delta monitor, are part of the CP. Replicator fleet is part of the DP. The snapshot generator is responsible for periodically generating snapshots. The delta monitor monitors the progress of the replicators on replication-related tasks, including snapshot creation and replication schedule on a periodic basis. Once the delta monitor detects that the replicator has completed the replication jobs, it moves the states to copied state (e.g., Manifest Copied state 1014) on the source side or replicated state (e.g., Replicated state 1058) on the target side. In certain embodiments, several file systems can perform replication at the same time from a source region to a target region.
[0147] Referring to FIG. 10, in certain embodiments, in the source file system’s concurrent-mode state machine, the snapshot generator, after creating a snapshot, signals to a delta monitor that a snapshot has been generated. The delta monitor, which runs a CP replication state (CpRpSt) workflow, is responsible for initiating the snapshot metadata upload to the Object Store on the target side. Snapshot metadata may include the snapshot type, snapshot identification information, snapshot time, etc. The CpRpSt workflow sets the Ready to Copy Metadata state 1002 for the replicator fleet to begin copying metadata. When a replicator gets a replication job, it makes copies of snapshot metadata (i.e., Snapshot-Metadata Copying state 1004) and uploads the copies to the Object Store. When all replicators complete the snapshot metadata upload, the state is set to Snapshot Metadata Copied state 1006. The CpRpSt workflow then continues polling the source SDB for a session key.
[0148] Now the CpRpSt workflow hands control back to the delta monitor to monitor the delta upload process and move into Ready_to_Copy state 1008, which indicates that the delta computation has been scheduled. Then the source CP API sends a request to a replicator to start the next stage of replication by making copies of manifests along with uploading deltas. A replicator that picks up a replication job can start making copies of manifests (i.e., Manifest_Copying state 1010). When the source file system completes the manifest copying, it moves to Manifest_Copied state 1014 and, at the same time, notifies the target file system that it can start its internal state (Ready to Reconcile state 1050).

[0149] As discussed above, the session key may be generated by the source file system while the data upload is in progress. The replicator of the source file system communicates with the target KMS vault to obtain a master key, which may be provided by customers, to create a session key (referred to herein as the delta encryption key or DEK). The replicator then uses a local file system key (FSK) to encrypt the session key (which now becomes an encrypted DEK, also referred to herein as the delta transfer key (DTK)). The DTK is then stored in the SDB in the source region for reuse by replicator threads during a replication cycle. The state machine moves to Ready_to_Copy_DTK state 1030.
[0150] The source file system transfers the DTK and the KMS’s resource identification to the target API, which then puts them into the SDB in the target region. During this transfer process, the state machine is set to Copying_DTK state 1032. When the CpRpSt workflow in the source file system finishes polling the source SDB for the session key, it sends a notification to the target side signaling that the session key (DTK) is ready for the target file system to download and use to decrypt its downloaded deltas for application. The state machine then moves to Copied DTK state 1034. The target side replicator retrieves the DTK from its SDB and requests the KMS’s API to decrypt it into a plain-text DEK (i.e., the decrypted session key).
[0151] When the source file system completes the upload of data for a particular replication cycle, including the session key transfer, its delta monitor notifies the target control API of such status as validation information and enters X-region_Copied_Done state 1016. This may occur before the target file system completes the data download and application. The source file system also cleans up its memory and removes all the keys. The source file system then enters Awaiting Target Response state 1018 to wait for a response from the target file system to start a new replication cycle.
[0152] As mentioned earlier, the target file system cannot start the replication download process until it has received the indication that at least an object has been uploaded by the source file system (i.e., Manifest_Copied state 1014) to the Object Storage and that a session key is ready for it to download (i.e., Copied DTK state 1034). Once these two conditions are satisfied, the state machine moves to Ready To Reconcile state 1050. Then, at Reconciling state 1052, the target file system starts a reconciliation process with the source side, such as synchronizing snapshots of the source file system and the target file system, and also performs some internal CP administrative work, including taking snapshots and generating statistics. This internal state involves communication within the target file system between its delta monitor and CP API.
[0153] After the reconciliation process is complete, the replication job is passed to the target replicator (i.e., Ready to Replicate state 1054). The target replicator monitors a checkpoint manifest (CM) file that will be uploaded by the source file system. The CM file is marked by the target. The target replicator threads then start downloading the manifests and applying the downloaded and decrypted deltas (i.e., Replicating state 1056). The target replicator threads also read the FMAP data blocks from the blobs downloaded from the Object Store, and communicate with local FSK services to get the file system key FSK, which is used to re-encrypt each FMAP data block and store it in its local storage.
[0154] If the source file system has finished the data upload, it will update a final CM file by setting an end-of-file (eof) field to true and upload it to the Object Store. As soon as the target file system detects this final CM file, it will finish the download of blobs and apply them, and the state machine moves to Replicated state 1058.
[0155] After the target file system has applied all deltas (or blobs), it continues to download snapshot metadata from the Object Store and populates the target file system’s snapshots with the information of the source file system’s snapshots (i.e., Snapshot_Metadata_Populating state 1060). Once the target file system’s snapshots are populated, the state machine moves to Snapshot_Metadata_Populated state 1062.
[0156] At Snapshot Deleting state 1064, the target file system deletes all the blobs in the Object Store that have been downloaded and applied to its latest snapshot. The target control API will then notify the target delta monitor once the blobs in the Object Store have been deleted, and the state proceeds to Snapshot Deleted state 1066. The target file system also cleans up its memory and removes all keys as well. The FSS service also releases the KMS key.
[0157] When the target DP finishes the delta application and the clean-up, it validates with the target control API the status of the source file system and whether it has received the X-region_Copied_Done notification from the source file system. If the notification has been received, the target delta monitor enters X-region DONE state 1068 and sends the X-region DONE notification to the source file system. In some embodiments, the target file system is also able to detect whether the source file system has completed the upload by checking whether the end-of-file markers are present for all the key ranges and all the upload processing threads, because every object uploaded to the Object Store has a special marker, such as an end-of-file marker in a CM file.
[0158] Referring back to the source file system state machine, while the source file system is in the Awaiting Target Response state 1018, it checks whether the status of the target CP has changed to complete, indicating that all downloaded deltas have been applied by the target and the file data has been stored locally. If so, this concludes a cycle of replication.
[0159] The source side and target side operate asynchronously. When the source file system completes its replication upload, it notifies the target control API with the X-region_Copied_Done notification. When the target file system later completes its replication process, its delta monitor communicates back to the source control API with the X-region DONE notification. The source file system goes back to Ready_to_Copy_Metadata state 1002 to start another replication cycle.
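For readability, the states of FIG. 10 can be summarized as two enumerations, one per file system; the Python sketch below lists the state names from the figure (transition logic is omitted, and the enum itself is illustrative rather than part of the FSS implementation):

    # Sketch of the FIG. 10 state machines as plain enumerations.
    from enum import Enum, auto

    class SourceState(Enum):
        READY_TO_COPY_METADATA = auto()        # 1002
        SNAPSHOT_METADATA_COPYING = auto()     # 1004
        SNAPSHOT_METADATA_COPIED = auto()      # 1006
        READY_TO_COPY = auto()                 # 1008
        MANIFEST_COPYING = auto()              # 1010
        MANIFEST_COPIED = auto()               # 1014
        READY_TO_COPY_DTK = auto()             # 1030
        COPYING_DTK = auto()                   # 1032
        COPIED_DTK = auto()                    # 1034
        X_REGION_COPIED_DONE = auto()          # 1016
        AWAITING_TARGET_RESPONSE = auto()      # 1018

    class TargetState(Enum):
        READY_TO_RECONCILE = auto()            # 1050
        RECONCILING = auto()                   # 1052
        READY_TO_REPLICATE = auto()            # 1054
        REPLICATING = auto()                   # 1056
        REPLICATED = auto()                    # 1058
        SNAPSHOT_METADATA_POPULATING = auto()  # 1060
        SNAPSHOT_METADATA_POPULATED = auto()   # 1062
        SNAPSHOT_DELETING = auto()             # 1064
        SNAPSHOT_DELETED = auto()              # 1066
        X_REGION_DONE = auto()                 # 1068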
[0160] FIG. 11 is an example flow diagram illustrating the interaction between the data plane and control plane in a source region, according to certain embodiments. Data plane components and control plane components communicate with each other using a shared database (SDB), for example, 1106. The SDB is a key-value store that both control plane components and data plane components can read and write. Data plane components include replicators and delta generators. The interaction between components in source region A 1101 and target region B 1102 is also illustrated.
[0161] In FIG. 11, at step S1, a source control plane (CPa) 1103 requests the Object Store in target region B (OSb) 1112 to create a bucket. At step S2, a source replicator (REPLICATORa) 1108 updates its heartbeat status to the source SDB (SDBa) 1106 regularly. Heartbeat is a concept used to track the replication progress performed by replicators. It uses a mechanism called leasing, in which a replicator keeps updating the heartbeat whenever it works on a job to allow the control plane to be aware of the whole leasing information; for example, the byte count is continuously moving on the job. If a replicator fails to work properly, the heartbeat may become stale, and then another replicator can detect this and take over to continue working on the job left behind. Thus, if the system crashes in the middle, it can start exactly from the last point in time based on the checkpoint mechanism. A checkpoint helps the system know where the last point of progress is to allow it to continue from that point without re-performing the entire work.

[0162] At step S3, CPa 1103 also requests the file system service workflow (FSW_CPa) 1104 to create a snapshot periodically, and at step S4, FSW_CPa 1104 informs CPa 1103 about the new snapshot. At step S5, CPa 1103 then stores the snapshot information in SDBa 1106. At step S6, REPLICATORa 1108 polls SDBa 1106 for any changes to existing snapshots, and retrieves the job spec at step S7 if a change is detected. At step S8, once REPLICATORa 1108 detects a change to snapshots, this kicks off the replication process: REPLICATORa 1108 provides information about two snapshots (SNa and SNb) with changes between them to the delta generator (DGa) 1110. At step S9, REPLICATORa 1108 puts work item information, such as the number of key ranges, into SDBa 1106. At step S10, REPLICATORa 1108 checks the replication job queue in SDBa 1106 to obtain work items, and at step S11, assigns them to the delta generator (DGa) 1110 to scan the B-tree keys of the snapshots (i.e., walking the B-tree) to compute deltas and the corresponding key-value pairs. At step S12, REPLICATORa 1108 decrypts file data associated with the identified B-tree keys, and packs it together with the key-value pairs into blobs. At step S13, REPLICATORa 1108 encrypts the blobs with a session key and uploads them to OSb 1112 as objects. At step S14, REPLICATORa performs a checkpoint and stores the checkpoint record in SDBa 1106. This replication process (S8 to S14) repeats (as a loop) until all deltas have been identified and data has been uploaded to OSb 1112. At step S15, REPLICATORa 1108 then notifies SDBa 1106 with the replication job details, which are then passed to CPa 1103 at step S16, and further relayed to CPb 1114 as the final CM file at step S17. At step S18, CPb 1114 stores the job details in SDBb 1116.
[0163] The interaction between the data plane and control plane in target region B is similar. At the end of the application of deltas to the target file system, the control plane in target region B notifies the control plane in source region A that the snapshot has been successfully applied. This enables the control plane in source region A to start all over again with a new snapshot.
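The heartbeat-based leasing described at step S2 can be sketched as follows; the lease timeout, record fields, and function names are illustrative assumptions rather than the actual SDB schema:

    # Sketch of heartbeat leasing: a replicator refreshes its heartbeat while it owns a
    # job; if the heartbeat goes stale, another replicator takes over and resumes from
    # the last checkpoint instead of restarting the whole job.
    import time

    LEASE_TIMEOUT_S = 60   # assumed value

    def refresh_heartbeat(sdb: dict, job_id: str, replicator_id: str) -> None:
        sdb[job_id].update(owner=replicator_id, heartbeat=time.time())

    def try_take_over(sdb: dict, job_id: str, replicator_id: str):
        record = sdb[job_id]
        if time.time() - record["heartbeat"] > LEASE_TIMEOUT_S:   # stale lease detected
            record["owner"] = replicator_id
            record["heartbeat"] = time.time()
            return record.get("checkpoint")    # resume from the last successful point
        return None                            # lease still held by the current owner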
[0164] Authentication is performed on every component. From the replicators to a file system key (FSK), an authentication mechanism exists by using the replication ID and file system number. The key can be given to a replicator only when it provides the right content. Thus, the authentication mechanism can prevent an imposter from obtaining decryption keys. Other security mechanisms include blocking network ports. A component called the file system key server (FSKS) is a gatekeeper that checks for appropriate requesters by checking metadata such as the jobs the requesters will perform and other information. For example, suppose a replicator tries to request a key for a file system. In that case, the FSKS can check whether the replicator is associated with a particular job (e.g., a replication is actually associated with that file system) to validate the requester.
[0165] Availability addresses the situation that a machine can be restarted automatically after going down or that a service continues to be available while software deployments are going on. For example, all replicators are stateless, so losing a replicator is transparent to customers because another replicator can take over to continue working on the jobs. The states of the jobs are kept in a shared database and other reliable locations, not locally. The shared database is a database-like service that the control plane uses to preserve information about file systems, and is based on a B-tree.
[0166] Storage availability in the FSS of the present disclosure is high because the system has thousands of storage nodes to allow any storage node to perform delta replication. Control plane availability is high by utilizing many machines that can take over for each other in case of any failures. For example, replication progress is not hindered simply due to one control plane’s failure. Thus, there is no single point of failure. Network access availability utilizes congestion management involving various types of throttling to ensure source nodes are not overloaded.
[0167] Replication is durable by utilizing checkpointing, where replication states are written to a shared database, and the replicators are stateless. The replication process is idempotent. Idempotency may refer to deterministic re-application: when an operation fails, the retry of the same operation should work and lead to the same result, by using, for example, the same key, upload process, or walking process, etc.
[0168] Operations in several areas are idempotent. In the control plane, an action that has been taken needs to be remembered. For example, if an HTTP request repeats itself, an idempotency cache can help remember that the particular operation has been performed and is the same operation. In the data plane, for example, when a block is allocated, the block and the file system file map key are written together. Thus, when the block is allocated again, it can be identified. If the block has been sealed, a write operation will fail. The idempotent mechanism can know that the block was sealed in the past, and the write operation need not be redone. In yet another example, the idempotent mechanism remembers the chain of the steps required to be performed for a particular key-value processing. In other words, the idempotency mechanism allows every operation to be checked to see if it is in the right state.
Therefore, the system can just move on to the next step without repeating.
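As a small illustration of the idempotency-cache idea described above, the Python sketch below keys each operation by an identifier and skips re-execution on retry; the keying scheme and function names are assumptions:

    # Sketch of an idempotency cache: a repeated operation returns the earlier result
    # instead of being performed again.
    def idempotent_apply(cache: dict, op_id: str, apply_fn):
        if op_id in cache:
            return cache[op_id]        # operation already performed: do not redo it
        result = apply_fn()
        cache[op_id] = result          # remember the action that has been taken
        return result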
[0169] Atomic replay allows the application of deltas to start as soon as the first delta object reaches the Object Store when snapshots are rolled back, for example, from snapshot 10 back to snapshot 5. To make a replay atomic, the entire set of deltas needs to be preserved in the Object Store before the deltas can be applied.
[0170] With respect to scaling of the replicator, the FSS of the present disclosure allows adding as many replication machines (e.g., replicator virtual machines (“VMs”)) as needed to support many file systems. The number of replicators may dynamically increase or decrease by taking into account the bandwidth requirement and availability of resources. With respect to scaling storage, thousands of storage nodes can be used to parallelize the process and increase the speed of work. With respect to inter-region bandwidth, bandwidth rationing ensures each workload does not overuse or cross its predefined throughput limit by automatic throttling, such as throttling all inter-region bandwidth by figuring out the latency increase and slowing down requests. All replicator processors (or threads) have this capability.
[0171] For checkpoint storage scaling, uploaders and downloaders checkpoint their progress to persistent storage, and the shared storage is used as a work queue for splitting key ranges. If checkpoint workloads overwhelm the shared database, checkpoint storage functionality can be added to delta generators for scaling purposes. Current shared database workloads may consume less than 10 IOPS.
[0172] FIG. 12 is a simplified diagram illustrating fallback mode, according to certain embodiments. Fallback mode allows the primary/source side to be restored after a failover so that it becomes the primary/source again. As shown in FIG. 12, the primary AD 1202 includes a source file system 1206, and the secondary AD 1204 includes a target file system 1208. The secondary AD 1204 may be in the same region or a different region as that of the primary AD 1202.
[0173] In FIG. 12, snapshot 1 1220 and snapshot 2 1222 in the source file system 1206 exist prior to the failover due to an outage event. Similarly, snapshot 1 1240 and snapshot 2 1242 in the target file system 1208 exist prior to the failover. When the outage occurred in the primary AD 1202 at snapshot 3 1224, FSS made an unplanned failover 1250, and snapshot 3 1224 in the source file system 1206 was replicated to the target file system 1208 to become a new snapshot 3 1244. After the target file system 1208 went live, a customer might make changes to the target file system 1208, which created a snapshot 4 1246.
[0174] If the customer decides to use the source file system again, the FSS service may perform a fallback. The user has two options when performing the fallback: 1) the last point-in-time in the source file system prior to the triggering event 1251, or 2) the latest changes in the target file system 1252.
[0175] For the first option, the user can resume from the last point-in-time (i.e., snapshot 3 1224) in the source file system 1206 prior to the triggering event. In other words, snapshot 3 1224 will be the one to use after the fallback because it previously successfully failed over to the target file system 1208. To perform the fallback 1251, the state of the source file system 1206 is changed to not accessible. Then, FSS services identify the last point-in-time in the source file system 1206 prior to the successful failover, which is snapshot 3 1224. FSS may perform a clone (i.e., a duplicate in the same region) of snapshot 3 1224 in the primary AD 1202. Now the primary AD 1202 is back to its initial setup before the outage, and the user can reuse the source file system 1206 again. Because snapshot 3 1224 is already in the file system to be used, no data transfer is required from the secondary AD 1204 to the primary AD 1202.
[0176] For the second option, the user wants to reuse the source file system with the latest changes in the target file system 1208. In other words, snapshot 4 1246 in the target file system 1208 will be the one to use after the fallback because it was the latest change in the target file system 1208. The fallback process 1252 for this option involves reverse replication (i.e., reversing the roles of the source file system and the target file system for a replication process), and FSS performs the following steps:
Step 1. The state of the source file system 1206 is changed to not accessible.
Step 2. Then, FSS services identify the latest snapshot in the target file system 1208 that has been successfully replicated, for example, snapshot 3 1244.
Step 3. The FSS services also find the corresponding snapshot 3 1224 in the source file system 1206, and perform a clone (i.e., a duplicate in the same region).
Step 4. The FSS services start a reverse replication 1252 with a similar process as discussed in relation to FIG. 4 but in the reverse direction. In other words, both the source file system 1206 and the target file system 1208 need to synchronize, then the target file system 1208 can upload deltas to an Object Store in the primary AD 1202. The source file system 1206 can download the deltas from the Object Store to complete the application to snapshot 3 1224 to create a new snapshot 4 1226.
[0177] Now the primary AD 1202 is back to its initial setup before the outage, and the user can reuse the source file system 1206 again without transferring data that is already in both the source file system 1206 and the target file system 1208, for example, snapshots 1~3 (1220-1224) in the source file system 1206. This saves time and avoids unnecessary bandwidth usage.
Snapshot and Data Model
Snapshots
[0178] In certain embodiments, there are two types of snapshots, system snapshots and user (or customer) snapshots. System snapshots are controlled by FSS while user snapshots are controlled by customers. System snapshots are created periodically by a snapshot generator in the source FS and cleaned up in both the source and target file systems at the end of replication cycles. Customers can also create user snapshots in a source region under the scheduled snapshot policy. Customers can distinguish between system snapshots and user snapshots based on their details, for example, different names.
[0179] A system snapshot may be used to designate the start of a replication cycle, so there is one system snapshot per replication cycle. On the other hand, a user snapshot can be generated and deleted at any time by a user, and may not be used for designating the start of a replication cycle.
[0180] System snapshots cannot be modified or deleted by customers. However, FSS may delete system snapshots after the target FS successfully completes the delta application. In certain embodiments, at least one system snapshot is preserved in both the source FS and the target FS. For example, when both the source FS and target FS complete a replication cycle N, both file systems may delete the system snapshot for replication cycle N-l, not immediately delete the system snapshot of the replication cycle N they just completed.
[0181] The replication process identifies changes (i.e., deltas) between two system snapshots. The replication process starts with a base snapshot (i.e., established as a starting point) for both the source FS and the target FS. For example, if the base snapshot exists only in the source FS, FSS needs to create a base snapshot copy in the target FS by transferring the whole base snapshot from the source FS to the target FS. If the source FS and the target FS each already has the same base snapshot, then the replication process can start the calculation of the deltas (i.e., differences between a new snapshot and the base snapshot) in the source FS and transfer the deltas to the target FS immediately.
Provenance ID
[0182] Techniques disclosed in the file system service (FSS) utilize a provenance ID to achieve efficient replication, including saving cloud resources and reducing network and IO traffic. A provenance ID is a special identification that uniquely identifies a snapshot among regions, whether it is a system snapshot or a user snapshot. Suppose two file systems have the same provenance ID for a particular snapshot. In that case, the snapshot in each of these two file systems is very similar up to that point, either having a common ancestor or the same known point in time, and can be used as a base snapshot for cross-region (or x-region) replication. The provenance ID applies to both system snapshots and user snapshots.
[0183] A snapshot is a point-in-time picture of a file system, and it is immutable (i.e., not writable). A snapshot may have two types of duplicates, a clone or a replica. A clone may be referred to as a writable snapshot and is typically created in the same region. When clones are created, each clone can be written independently with its own IO. All of these clones have the same lineage. If a clone is created between two file systems, then both file systems share the same copy of the snapshot for reading. A separate copy is created only when one of the file systems needs to write to the clone. A replica is a duplicated snapshot created in a different region (i.e., cross-region or different data centers) through a replication process.
[0184] A replica and a clone differ in that a replica is achieved by first copying the full data from a source region to a target region, and thereafter copying the deltas between snapshots. On the other hand, cloning copies only the data necessary to create a thin client. In-region cloning is much faster than cross-region replication because cloning does not involve the extra encryption/decryption, Object Storage transfer, and many stages of pipelines that a replication requires. Once a clone is created, it does not receive more changes in the future, so it only gets a point-in-time snapshot.
[0185] In certain embodiments, every snapshot may have three pieces of information associated with it, namely a snapshot number (snapNum), a provenance ID (ProvID or PID), and a resource ID (e.g., OCID). The resource ID is a globally unique ID for identifying resources, because a snapshot consumes resources. The snapshot number is for internal house-keeping use and for tracking purposes in a file system. The provenance ID is for external use and is unique among all snapshots, either in-region or cross-region. The provenance ID is set at the moment a snapshot is created, and is not changed when the snapshot is cloned or replicated. These three pieces of information together can uniquely identify a snapshot’s history (e.g., the parent-child relationship among all snapshots) and differentiate the snapshot from other resources in a cloud infrastructure. Additionally, the file system number (FS#) helps track clones in-region and replicas cross-region. Between different regions, the provenance ID helps track a snapshot’s history by carrying the original parent snapshot’s provenance ID.
[0186] In certain embodiments, before a replication starts, the source FS and target FS can compare the provenance IDs of their respective snapshots to find a matched pair of snapshots. If a particular pair of snapshots has the same provenance ID, the source FS and the target FS can start replication from the identified pair without the need to transfer an entire base snapshot copy from the source FS to the target FS at the beginning of the replication. As a result, this saves resources and avoids traffic associated with data transfer. For example, suppose a previous replication between a source FS and a target FS had replicated snapshots S1 to S100, and then stopped. After a while, these two file systems plan to have another replication, and they need to figure out a starting point for this new replication. Suppose the source FS is already at snapshot S200. In that case, it may compare the provenance IDs of its snapshots from S200 backward to S1 with the provenance ID of the last snapshot of the target FS (the comparing process is also referred to herein as tracing), and find that S100 in both the source FS and the target FS is a matched pair. At that point, S100 can be used as a starting point (i.e., base snapshot) in both the source and the target file systems for the new replication process. The source FS can calculate deltas between snapshot S100 (i.e., the base snapshot) and snapshot S200 (i.e., the new snapshot), then transfer the deltas to the target FS, which can apply them to its S100 to create S200 in the target FS. There is no need for the source FS to transfer snapshot S100 again to the target FS as a base snapshot copy for the replication process to begin with. This saves a lot of data and IO transfer.
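The tracing step in this example might be sketched as follows; the function below is a simplification that works on plain lists of provenance IDs and is not the service's actual implementation.

```python
def trace_common_base(source_pids, target_pids):
    """Walk the source snapshots from newest to oldest and return the newest
    provenance ID the target also has, or None if no common base exists."""
    target_set = set(target_pids)
    for pid in reversed(source_pids):
        if pid in target_set:
            return pid
    return None

# The S1..S200 example above: the source is at S200, the target stopped at S100,
# so only the S100-to-S200 deltas need to cross regions.
source = [f"S{i}" for i in range(1, 201)]
target = [f"S{i}" for i in range(1, 101)]
assert trace_common_base(source, target) == "S100"
```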
[0187] In some embodiments, the provenance ID may be useful for all file systems in the same region by cloning snapshots from another file system to a target FS in the same target region if the snapshots to be replicated from a source region already exist in the target region but not in the target FS. This may be illustrated in FIG. 13. [0188] FIG. 13 is a diagram illustrating an example use of the provenance ID, according to certain embodiments. In FIG. 13, FSS creates clones (step 1310) for three snapshots, snapNum 1/ProvID S1/OCID S1, snapNum 2/ProvID S2/OCID S2 and snapNum 3/ProvID S3/OCID S3, of a file system FS1 in the same region 1 to become snapshots snapNum 1/ProvID S1/OCID K1, snapNum 2/ProvID S2/OCID K2 and snapNum 3/ProvID S3/OCID K3, of a file system FS2. Additionally, a new snapshot snapNum 5/ProvID K5/OCID K5 is also created in FS2. The clones in FS2 have different resource IDs (S* becomes K*) because they use different resources in the same region. Note that snapshot 4 of FS1 is not cloned.
[0189] FSS then creates replicas (i.e., step 1320) for snapshots 1, 2, 3, and 5 of file system FS2 to become snapNum 1/ProvID S1/OCID M1, snapNum 2/ProvID S2/OCID M2, snapNum 3/ProvID S3/OCID M3 and snapNum 5/ProvID K5/OCID M5 of a file system FS3 in region 2. Thereafter, the replication is deleted (i.e., step 1322) after snapNum 5 is replicated, meaning region 1 and region 2 do not communicate anymore. Additionally, snapshots snapNum 6/ProvID G6/OCID M6 and snapNum 7/ProvID G7/OCID M7 are created in FS3 in region 2 afterward.
[0190] Sometime later, FSS tries to perform replication (i.e., create replicas at step 1330) for snapshots 1, 2, 3, and 7 of FS3 in region 2 to FS4 in region 1. Because FS4 (i.e., the target FS) does not exist in region 1 but FS1 (i.e., a non-target FS) already exists in the same region, before the replication, FS3 in region 2 and FS1 in region 1 compare the provenance IDs of their snapshots (i.e., step 1340). The comparison may find that snapshots 1, 2 and 3 of FS3 have the same provenance IDs (S1, S2, and S3) as snapshots 1, 2 and 3 of FS1 in region 1. Therefore, to save resources and network bandwidth, FS1, which is located in the same region 1 as FS4, can first create clones (i.e., step 1342) for snapshots 1, 2 and 3 (snapNum 1/ProvID S1/OCID S1, snapNum 2/ProvID S2/OCID S2 and snapNum 3/ProvID S3/OCID S3) of FS1 to become (snapNum 1/ProvID S1/OCID P1, snapNum 2/ProvID S2/OCID P2 and snapNum 3/ProvID S3/OCID P3) of FS4 in the same region 1 as base copies of snapshots. Thereafter, FS3 only needs to replicate (i.e., step 1344) snapshot 7 (snapNum 7/ProvID G7/OCID M7) of FS3 in region 2 to become snapshot 7 (snapNum 7/ProvID G7/OCID P4) of FS4 in region 1 by transferring the deltas between snapshot 3 (ProvID S3) and snapshot 7 (ProvID G7). In other words, a regular cross-region replication of four snapshots 1, 2, 3 and 7 from FS3 in region 2 to FS4 in region 1 can be simplified to become three in-region clones of snapshots 1, 2 and 3 between FS1 and FS4 in the same region plus a cross-region replication of snapshot 7 between FS3 in region 2 and FS4 in region 1. As a result, the use of the provenance ID saves resources, traffic for data transfer (i.e., network or IO traffic), and time.
[0191] FIG. 14 is a flow chart illustrating the process of using the provenance ID to identify a base snapshot for cross-region replication, according to certain embodiments. As shown in FIG. 14, at step 1401, a source FS in a source region may periodically generate system snapshots and also generate user snapshots upon user requests. At step 1402, each snapshot may be assigned a unique provenance ID, and other identifications (e.g., snapshot ID and resource ID). At step 1404, a source FS may receive a request to perform a x-region replication between the source FS and a target FS, either due to an outage or planned failover. At step 1408, as discussed above, in some embodiments, both the source FS in a source region and the file systems in the target region may compare the provenance IDs of their respective snapshots, in response to the request to perform the x-region replication, to identify a base snapshot for x-region replication purposes (i.e., a matched snapshot with the same or matched provenance ID). For example, in FIG. 13, FS3 (i.e., the source FS) in source region 2 compares the provenance IDs of its snapshots (i.e., step 1340) with the provenance IDs of snapshots of both the target FS (i.e., FS4) and the non-target FS (i.e., FS1). In other embodiments, the provenance ID comparison may be performed between the source FS and the target FS in the target region first. If no match is found, then the source FS can perform the provenance ID comparison with the non-target FS in the target region.
[0192] At step 1410, if no matched provenance ID is found between the source FS and the file systems in the target region, then at step 1412, the x-region replication process may use the latest snapshot of the source FS as the selected base snapshot. In other words, the source FS may need to transfer the whole base snapshot copy (i.e., the selected base snapshot) to the target FS, as indicated in step 1420, then perform any necessary delta transfer to the target FS afterward. At step 1410, if a matched provenance ID is found between the source FS and the file systems in the target region, then at step 1414, the process further determines whether the matched provenance ID belongs to a snapshot of the target FS or non-target FS in the target region.
[0193] At step 1414, if a matched provenance ID (i.e., a matched snapshot with the same provenance ID) does not belong to a snapshot of the target FS (i.e., belonging to a snapshot of a non-target FS), then at step 1416, the non-target FS may perform an in-region cloning of the snapshot with the matched provenance ID to the target FS to create the base snapshot. Then, at 1420, the x-region replication can use the cloned base snapshot for the target FS as the selected base snapshot. In other words, the source FS can generate the deltas between its latest snapshot and the selected base snapshot with the matched provenance ID, and transfer only the deltas to the target FS via an Object Store. This obviates the need to transfer a full base snapshot copy. For example, in FIG. 13, the non-target FS1 may clone snapshots S1, S2, and S3 (i.e., step 1342) to target FS4 in the same region 1. Since the three snapshots (S1, S2 and S3) have matched provenance IDs, all three snapshots may be used as base snapshots. In certain embodiments, the source FS can use the latest snapshot (i.e., S3) among the three snapshots as the selected base snapshot to generate deltas between snapshots S3 and G7 for x-region replication (i.e., step 1344).
[0194] At step 1414, if the matched provenance ID belongs to a snapshot of the target FS, then at step 1418, both the source FS and the target FS use the snapshot with the matched provenance ID as the selected base snapshot. At step 1420, the source FS can generate deltas between its latest snapshot and the selected base snapshot, and transfer the deltas to the target FS for delta application during the x-region replication.
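The selection logic of steps 1410 through 1420 can be sketched as below; the inputs and the returned action labels are hypothetical simplifications of the flow in FIG. 14.

```python
def select_base_snapshot(source_pids, target_fs_pids, non_target_fs_pids):
    """Return (base_provenance_id, action) following the FIG. 14 flow.

    source_pids: source FS snapshot provenance IDs, oldest first.
    target_fs_pids: provenance IDs already present in the target FS.
    non_target_fs_pids: dict of {file_system_name: [provenance IDs]} for the
    other file systems in the target region.
    """
    # Steps 1414/1418: prefer a match that is already in the target FS.
    target_set = set(target_fs_pids)
    for pid in reversed(source_pids):
        if pid in target_set:
            return pid, "delta_transfer_only"
    # Step 1416: otherwise look in non-target file systems and clone in-region.
    for fs_name, pids in non_target_fs_pids.items():
        pid_set = set(pids)
        for pid in reversed(source_pids):
            if pid in pid_set:
                return pid, f"in_region_clone_from_{fs_name}_then_delta_transfer"
    # Step 1412: no match anywhere, so the latest source snapshot is the base
    # and must be transferred in full.
    return source_pids[-1], "full_base_copy"

# FIG. 13 example: FS3 replicating to a brand-new FS4 while FS1 already holds S1-S3.
print(select_base_snapshot(
    source_pids=["S1", "S2", "S3", "K5", "G6", "G7"],
    target_fs_pids=[],
    non_target_fs_pids={"FS1": ["S1", "S2", "S3"]},
))  # -> ('S3', 'in_region_clone_from_FS1_then_delta_transfer')
```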
[0195] In addition to selecting a base snapshot for cross-region replication, in some embodiments, provenance ID may also help resumability when a replication fails or is accidentally deleted. For example, multiple x-region replications may occur between regions, as discussed above. If one x-region replication fails during its replication process, the corresponding source and target file systems can use the provenance ID to search and find a snapshot of a target file system or a non-target file system in the target region to use as a base snapshot to resume its x-region replication. Since FSS uses incremental deltas to perform replications, the easier and faster FSS can identify a unique common starting point for both the source and target file systems, the better FSS can resume the replication process and recover from failures. Provenance ID can avoid the need for a full base copy every time there is a failure.
Snapshot Data Consistency
[0196] Techniques are also disclosed in the present disclosure to maintain snapshot consistency between a source FS and a target FS involving snapshot creation and deletion. The first aspect of maintaining snapshot consistency between a source FS and a target FS is the order of processing snapshot keys and file data. In certain embodiments, the snapshot and data model of the FSS processes snapshot keys and file data in a certain order, by processing snapshot keys first and then the file data. Snapshot keys (which may also be referred to as snapkeys) are the B-tree keys for snapshots. Whenever a new snapshot is created in the source region, a source data plane performs delta generation, which involves identifying the new snapshot keys of the new snapshot and transferring them to the target region, where the target FS applies and inserts the new snapshot keys into its B-tree. Otherwise, the new snapshot keys may be collected by the garbage collector in the source region. Snapshot keys need to be processed (i.e., identified and transferred to the target region) first, before reading data blocks in the source region, because snapshot keys represent a snapshot and help distinguish the differences between snapshots. Additionally, file data is associated with B-tree keys. Thus, accessing file data before a B-tree key is created in the target FS may lead to file system inconsistency. Finally, in some embodiments, snapshot keys are involved in billing metering and need to be established first.
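A minimal ordering sketch for this rule is shown below (the store and key layout are purely illustrative, not the FSS B-tree format): snapshot keys are inserted before any of the file data that refers to them.

```python
def apply_delta_batch(target_store, new_snapshot_keys, file_data_blocks):
    """Apply snapshot keys first, then file data, per the ordering rule above."""
    # 1) Insert snapkeys so every later data block has a snapshot to belong to;
    #    applying data before its snapkey exists could leave the target
    #    file system in an inconsistent state.
    for snapkey in new_snapshot_keys:
        target_store[("snapkey", snapkey)] = {"state": "visible"}
    # 2) Only then apply the file data associated with those keys.
    for (snapkey, block_id), payload in file_data_blocks.items():
        assert ("snapkey", snapkey) in target_store
        target_store[("data", snapkey, block_id)] = payload
```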
[0197] A snapkey is a marker key for a snapshot. When an epoch is created, a marker key is also created. The snapshot number is created based on the epoch, which tracks time for a file system. For example, when the epoch advances from N to N+1, the source file system number is N+1, and the source FS creates snapshot number N (for either system snapshots or user snapshots).
[0198] The second aspect of maintaining snapshot consistency between a source FS and a target FS is handling snapshot deletions. In certain embodiments, the FSS uses a data plane (DP) to handle snapshot creation and a control plane (CP) to handle snapshot deletion. As discussed earlier, a snapshot generator in the source DP generates system snapshots periodically in addition to user snapshots generated by customers. Deltas are computed between two given system snapshots and replicated from a source FS to a target FS. However, snapshots may be deleted during the replication process. Although system snapshots are preserved in both the source FS and target FS until the target FS has completed its delta application, the user snapshots may be updated or deleted at any time in the source region during a replication process but not in the target region. Both the source CP and target CP need to track and execute the snapshot deletion according to the replication policy. Otherwise, improper handling of the snapshot deletion may lead to inconsistency between the source FS and target FS.
[0199] Snapshots created in the source region may not be visible to the user until these snapshots have been applied by the target file system. For example, if a source FS has three user snapshots S1, S2 and S3, the source and target CPs do not inform the user that snapshots S1, S2 and S3 are available in the target region until these snapshots have been recreated in the target FS. The purpose is to prevent the user from cloning any of these snapshots in the target region when they are not ready. In some embodiments, multiple replications may be performed on several existing user snapshots (e.g., S1, S2 and S3) from a source FS to one or more target file systems in different regions. Those existing user snapshots in the source FS may need to be copied to one or more target file systems. But the source FS may create a new system snapshot (e.g., snapshot S4) as a base copy for initial synchronization between the source FS and one or more target file systems before performing the replications.
[0200] In certain embodiments, the deletion of snapshot keys is tracked and temporarily held by the source CP in its persistence memory, and is then applied to both the source FS and the target FS at the end of a replication cycle. A temporary hold or withhold means the deletion is not immediate and is postponed for a short period of time depending on other factors. The reason is that if the deletion is applied immediately during the replication window/process, the garbage collector may interfere with the replication process by removing some of the snapshot keys before they can be applied by the target FS, leading to inconsistency. In other words, if a deletion happens during a replication window, the deletion is temporarily blocked until the replication completes and then is applied in both the source FS and the target FS. Thus, the deletion application is the final step of the snapshot model. FSS utilizes a scheme called delayed snapshot deletion, which is applicable to user snapshots only.
[0201] FIG. 15 is a diagram illustrating delayed snapshot deletion and replication for maintaining consistency between a source FS and a target FS, according to certain embodiments. In FIG. 15, the FSS has three replication cycles starting from the source FS 1510 and ending at the target FS 1530, where the source FS and target FS are in different regions. Replication cycle 1 includes source cycle 1 (1512) and target cycle 1 (1532). Replication cycle 2 includes source cycle 2 (1514) and target cycle 2 (1534). Replication cycle 3 includes source cycle 3 (1516) and target cycle 3 (1536). Each replication cycle starts with a system snapshot, for example, system snapshot S10 for replication cycle 1, system snapshot S20 for replication cycle 2, and system snapshot S30 for replication cycle 3.
[0202] As shown in FIG. 15, in certain embodiments, when a user snapshot is deleted, the source FS holds the snapshot deletion until it receives notification from the target FS that the deleted snapshot has been applied by the target FS, typically at the end of the current replication cycle. However, the snapshot deletion does not take effect in the target FS until the end of a next replication cycle. This delayed deletion prevents uncertainty and ensures consistency between the source FS and the target FS.
[0203] To illustrate, in FIG. 15, the source FS 1510 creates two user snapshots S5 and S7 (shown as "+" for creating a snapshot), then a system snapshot S10, which starts the replication cycle 1 (1512) in the source region at time 18:00 UTC. After some time, snapshots S5, S7 and S10 are applied (shown as +S5, +S7 and +S10) during target cycle 1 (1532) by the target FS 1530, starting at 18:05 UTC (when deltas are available for the target FS to download) and ending at 18:15 UTC (i.e., when delta application of snapshots S5, S7 and S10 completes). This (1512 & 1532) completes the replication cycle 1 for both file systems.
[0204] In certain embodiments, while these snapshots are being replicated and transferred from the source FS 1510 to the target FS 1530 during replication cycle 1 (1512 & 1532), user snapshot S5 is being deleted (shown as "-" for deleting a snapshot) in the source FS 1510 during the source cycle 1 (1512) before the target FS 1530 starts applying these snapshots at time 18:05 UTC. The source CP 1510 allows S5 to continue to be transferred to the target FS 1530, but temporarily holds the deletion (i.e., keeps snapshot S5 in a "deleting" state) and then deletes S5 (i.e., the CP changes to a "deleted" state for S5) at the end of the whole replication cycle 1 (or target cycle 1 (1532)) after receiving a notification from the target FS 1530 indicating S5 has been applied by the target FS 1530 at time 18:15 UTC. At this point, the internal state is set to the "deleting" state (i.e., pending delete), so any other requests related to S5 may receive an HTTP 409 response (i.e., indicating a conflict between the other requests and the current state of the resources). However, as shown in FIG. 15, snapshot S5 is not actually deleted by the target FS 1530 until the end of replication cycle 2 (or target cycle 2 (1534)) at 19:15 UTC (shown as "-S5"). This postponed deletion of S5 may be referred to as blocked deletion because the snapshot is blocked from instant deletion.
[0205] In the source FS 1510, more snapshots are created (e.g., S16 and S18) while some snapshots are deleted (e.g., S7 and S16) after replication cycle 1 (1512 & 1532) and before replication cycle 2 (1514 & 1534). Replication cycle 2 starts from the source cycle 2 (1514) at 19:00 UTC and ends at 19:15 UTC in the target cycle 2 (1534). User snapshot S7 was deleted (shown as "-S7") between replication cycles 1 and 2, so S7 is deleted by the source FS at the time of the deletion request (which may also be referred to as non-blocked deletion), and deleted by the target FS at the end of the replication cycle 2 at time 19:15 UTC. [0206] In certain embodiments, when a snapshot is deleted, the state for the corresponding snapkey (a type of marker key) is changed from visible to invisible, so no user is able to read it. Once the snapkey is removed by the garbage collection, the state changes from invisible to irretrievable, and the snapkey is removed from the B-tree (i.e., no longer exists in memory). This may be illustrated in FIG. 15 for snapshot S16 below.
[0207] In FIG. 15, in source FS 1510, snapshot S16 is created (shown as "+S16") and deleted (shown as "-S16") in the same delta range or replication cycle window (i.e., after replication cycle 1 (1532) completes and before replication cycle 2 (1514) starts for delta calculation), so it may not be replicated to the target FS 1530 because it is not visible even within the source FS 1510 for replication purposes. So, S16 becomes an unreachable entry and is not replicated to the target FS at all. Here, S16 becomes visible after being created (i.e., "+S16"), and then becomes invisible when it is deleted (i.e., "-S16") in the source FS. After the garbage collector removes S16, it becomes irretrievable in the same replication cycle. This scheme may help save some replication resources. Therefore, snapshots may have a gap (i.e., missing S16) from S15 to S17 when they are replicated from the source FS 1510 to the target FS 1530.
[0208] As mentioned earlier, a user snapshot is controlled by users. A user snapshot may be deleted only when the user requests to delete the created snapshot. For example, in FIG. 15, user snapshot S18 is created between replication cycle 1 (1512 & 1532) and replication cycle 2 (1514 & 1534), but is never deleted by a user. Thus, snapshot S18 may continue to exist and is not cleaned up by the FSS. In contrast, snapshot S7 is created before replication cycle 1 (1512 & 1532) and later deleted by a user between replication cycle 1 and cycle 2.
[0209] In some embodiments, if a snapshot is created after a replication cycle has started (i.e., deltas have been calculated between two existing snapshots) in a source FS, that snapshot may not be transferred from the source FS to a target FS until the next replication cycle. For example, in FIG. 15, snapshot S22 is created (shown as "+S22") in the source FS 1510 during the source replication cycle 2 (1514). Since the deltas between the system snapshot S20 and an earlier snapshot have been calculated and are in the process of being transferred from the source FS 1510 to the target FS 1530, S22 may not be replicated to the target FS 1530 during the current replication cycle (i.e., cycle 2, 1514 & 1534) already underway, until the next replication cycle (i.e., cycle 3, 1516 & 1536). However, if S22 is deleted before replication cycle 3 (1516) starts, S22 may not be replicated to the target FS 1530 because it has become irretrievable, similar to the scenario for snapshot S16 discussed above. Furthermore, if S22 receives a deletion request during source replication cycle 3 (1516), the delayed snapshot deletion scheme may be applied just like snapshot S5 discussed above.
[0210] FIG. 16 is a flow chart illustrating the process of delayed snapshot deletion and replication after detecting a snapshot deletion request, according to certain embodiments. At step 1601, a source FS may generate one or more snapshots in a source region. At step 1602, a source FS and a target FS may perform x-region replications periodically. At step 1604, when the source FS detects a snapshot deletion request, for example, a request to delete a user snapshot, at step 1606, the source FS needs to determine whether the snapshot deletion request occurs during a x-region replication cycle of the source FS. If not, then it means the snapshot deletion request occurs between two replication cycles, for example, after replication cycle N but before replication cycle N+1. At step 1608, the source FS can just delete the requested snapshot without performing x-region replication on this deleted snapshot.
[0211] The deleted snapshot at step 1608 may or may not have been replicated in the previous replication cycle (i.e., replication cycle N), depending on when the snapshot was created. Neither scenario affects the operation in the current replication cycle (i.e., replication cycle N+1). For example, in FIG. 15, snapshot S7 is created before source cycle 1 (1512) and then requested to be deleted between source cycle 1 (1512) and source cycle 2 (1514). Since S7 has gone through replication cycle 1 (1512 & 1532), there is no need to replicate S7 again. In contrast, snapshot S16 is created and then requested to be deleted between source cycle 1 (1512) and source cycle 2 (1514). Once the source FS 1510 deletes S16, it is never replicated.
[0212] At step 1606, if the snapshot deletion request occurs during a x-region replication cycle of the source FS, then at step 1620, the source FS may hold (or temporarily withhold) the snapshot deletion but still allow the x-region replication to be performed on this requested snapshot. In other words, the source FS may transfer the requested snapshot to the target FS, which can perform the delta application on this requested snapshot and then notify the source FS. At step 1622, the source FS may delete the requested snapshot at the end of the replication cycle when the target FS has completed the x-region replication. At step 1624, the target FS may not delete the requested snapshot that it has applied in the current replication cycle until the end of the next replication cycle (i.e., it waits for another replication cycle). For example, in FIG. 15, the deletion request for snapshot S5 occurs during the source cycle 1 (1512). The source FS 1510 holds the deletion and passes S5 to the target FS 1530. After the target FS has applied S5 in target cycle 1 (1532), completed the replication, and notified the source FS, the source FS deletes S5 at time 18:15 UTC.
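Steps 1606 through 1624 can be summarized with the following sketch, which operates on plain dictionaries; the structure and names are hypothetical stand-ins for the CP/DP interactions described above.

```python
def handle_deletion_request(source_snaps, target_snaps, pending_ops, snap_id,
                            during_replication_cycle):
    """Blocked vs. non-blocked deletion per FIG. 16 (illustrative only).

    source_snaps/target_snaps: {snap_id: state}; pending_ops: deferred work items.
    """
    if not during_replication_cycle:
        # Step 1608: request arrived between cycles -> delete immediately and
        # never replicate this snapshot (non-blocked deletion, e.g., S7 or S16).
        source_snaps.pop(snap_id, None)
        return
    # Step 1620: request arrived mid-cycle -> hold the deletion in a "deleting"
    # state (other requests against it would get HTTP 409) while the in-flight
    # replication still carries the snapshot to the target.
    source_snaps[snap_id] = "deleting"
    target_snaps[snap_id] = "applied"          # target applies the delta, then notifies
    # Step 1622: once the target acknowledges, the source deletes at cycle end.
    source_snaps.pop(snap_id, None)
    # Step 1624: the target defers its own deletion to the end of the next cycle.
    pending_ops.append(("delete_at_end_of_next_cycle", snap_id))
```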
[0213] FIG. 17 is a flow chart illustrating the process of delayed snapshot deletion and replication after detecting a snapshot creation event, according to certain embodiments. In certain embodiments, if a snapshot is created during an x-region replication cycle (e.g., replication cycle N) but no snapshot deletion request is received by the source FS before or during the next x-region replication cycle (e.g., replication cycle N+1), the replication for the newly created snapshot is delayed until the next replication cycle.
[0214] In FIG. 17, at step 1702, a source FS and a target FS may perform x-region replications periodically. At step 1704, when the source FS detects a new snapshot creation event, for example, a newly created user snapshot, at step 1706, the source FS needs to determine whether the new snapshot is created during a x-region replication cycle of the source FS. If it is not, then it means the new snapshot is created between two replication cycles, for example, after replication cycle N but before replication cycle N+1. At step 1708, the source FS and the target FS may replicate the new snapshot during the upcoming replication cycle. For example, in FIG. 15, snapshot S18 is created between the source cycle 1 (1512) and the source cycle 2 (1514). Then, the source FS 1510 and the target FS 1530 may replicate S18 during replication cycle 2, the source cycle 2 (1514) and the target cycle 2 (1534).
[0215] At step 1706, if the new snapshot is created during a x-region replication cycle of the source FS, then at step 1720, the source FS may delay replicating the new snapshot until the next replication cycle if no snapshot deletion request is received before or during the next replication cycle. For example, in FIG. 15, snapshot S22 is created during the source cycle 2 (1514), and no snapshot deletion request is received before or during the source cycle 3 (1516). Therefore, the source FS 1510 does not start replicating S22 until the source cycle 3 (1516) and then transfers S22 to the target FS 1530 for application during the target cycle 3 (1536).
[0216] The delayed snapshot deletion and replication techniques may use a schema table in an SDB of the source FS and another schema table in an SDB of the target FS to temporarily store metadata of the snapshots being deleted (for example, snapshot S5 during the source cycle 1 (1512) in FIG. 15) and track deleted snapshots. [0217] In certain embodiments, the schema table may be a key-value store that contains metadata information including, but is not limited to, replication numbers, snapshot numbers that have been applied, snapshot numbers of the deleted snapshots, and workflow IDs of the replicators handling those snapshots. In other words, such a schema table can help track the snapshots of the blocked and non-blocked deletions in the source FS, and the snapshots that have been applied in the target FS.
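A hypothetical shape for such a schema-table entry is sketched below; the actual column names and layout of the SDB tables are not specified in this disclosure.

```python
# Illustrative key-value entry tracking one replication cycle's snapshot state.
schema_table_entry = {
    "replication_number": 2,                    # which replication cycle this row covers
    "applied_snapshot_numbers": [5, 7, 10],     # snapshots the target FS has applied
    "deleted_snapshot_numbers": [5],            # blocked/non-blocked deletions to settle
    "replicator_workflow_ids": {"S5": "workflow-001"},  # replicators handling them
}
```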
Snapshot Metadata Transfer Between Source CP and Target CP
[0218] FIG. 18 is a flow diagram illustrating a control plane workflow for a source region and a target region, according to certain embodiments. The workflow may involve snapshot metadata collection by the source region, transfer between the source and target regions, and metadata application by the target region. As discussed above in relation to FIG. 10, in certain embodiments, the state machine for a x-region replication process can be roughly divided into five parts, 1) metadata collection and transfer on the source region, 2) delta generation and transfer on the source region, 3) session key generation and transfer between the source and target regions, 4) delta download and application on the target region, and 5) metadata download and application on the target region. FIG. 18 focuses on part one (i.e., metadata collection and transfer on the source region), and part five (i.e., metadata download and application on the target region) of the x-region replication process.
[0219] At a high level regarding metadata processing, the source FS may extract snapshot metadata and upload it to the Object Store at the beginning of a replication cycle, for example, involving the state machine's states (which may also be referred to herein as delta states) from Ready_to_Copy_Metadata to Snapshot_Metadata_Copied. Uploading metadata at the beginning of a replication cycle can help detect and resolve any problems early for the replication before heavy data transfer starts. The target FS populates snapshot metadata and performs snapshot deletion after the delta application has been completed to add metadata to the existing data. The snapshot metadata transfer may include, but is not limited to, provenance ID, snapshot type (e.g., system snapshot and user snapshot), and snapshot time. Additionally, snapshot records, such as the creation and deletion of snapshots, are also part of this snapshot metadata transfer. As compared to FIG. 2, which discusses delta transfer between a source region and a target region, FIG. 18 is about control plane communications, specifically related to snapshot metadata information, between a source region and a target region. [0220] In general, the source CP tracks the status of snapshot copying and deletion activities in the source region and receives validation from the target CP. The source CP and target CP communicate through the SDBs in both regions. In FIG. 18, at step S1, the CP API 1810 may start recording snapshot status, including any deleted snapshots, after a replication process begins. At step S2, the source snapshot generator 1812 (a separate thread in the CP API service) scans replication policies and creates a system snapshot. At step S3, if the source CP API 1810 detects a snapshot deletion request during a replication cycle (which may also be referred to as a delta range from a data perspective), it records the pending delete into the source SDB 1814 (e.g., the schema table described above). At step S4, the source data plane (DP)/replicator 1816 may check the status of snapshot creation (e.g., whether a system snapshot has been created). If a new system snapshot has been created, at step S5, the delta monitor of the source CP API 1810 may update its delta state to Snapshot_Metadata_Copying (referring to step 1004 in FIG. 10). The delta monitor may be threads on the CP API, managing and transitioning delta states. At step S6, the source CP 1810 may prepare information about user snapshots in the current replication cycle by extracting metadata, such as provenance ID, snapshot type, and snapshot time, plus snapshot records. The source CP may then store the extracted information in the source SDB 1814.
[0221] Then, at step S7, the source DP replicator 1816 may obtain the metadata information from the source SDB 1814 and upload it to the Object Store 1850. For regular deltas between system snapshots, they may be uploaded to the Object Store in different stages of the same replication cycle. At step S8, the source CP 1810 may change its delta state to Snapshot_Metadata_Copied (referring to step 1006 in FIG. 10) and update the source SDB 1814 accordingly. The source CP API 1810 then notifies the target CP API 1830 (i.e., the FSS target CP host) that the snapshot metadata is in the copied state. The source CP 1810 may also clean up the deleted snapshot records stored in the source SDB 1814 for the current replication cycle. Please note that part two (i.e., delta generation and transfer on the source region) and part three (i.e., session key generation and transfer) of the x-region replication process mentioned above are not discussed in FIG. 18.
[0222] After the source FS completes its delta generation and transfer process, at step S9, the target CP 1830 may update its target SDB 1834 upon receiving notification from the source CP API 1810, and change its delta state to Ready_To_Replicate (referring to step 1054 in FIG. 10) accordingly. At step S10, once the target data plane (DP) (e.g., replicator) 1836 detects that system snapshots are in the Copied state (in the source region) and the Ready_To_Replicate state (in the target region), and it is ready to replicate, the target DP moves to the next step for delta application, then metadata application.
[0223] At step S11, for delta replication, the target DP 1836 obtains deltas from the Object Store. At step S12, the target replicator 1836 may apply the deltas to the target FS's base snapshot in the DP. At step S13, when the target DP 1836 completes the delta application, it notifies the target CP 1830 (e.g., the delta monitor) to update the delta state to the Replicated state (referring to step 1058 in FIG. 10).
[0224] The target FS may then proceed to prepare for metadata download and application. At step S14, the target CP 1830 may change the delta state to Snapshot_Metadata_Populating (referring to step 1060 in FIG. 10). At step S15, the target DP 1836 can download snapshot metadata from the Object Store for the current replication cycle (or between last snapshot number and current snapshot number in the schema) and populate metadata for all the snapshots within this range. The target DP 1836 also downloads deleted snapshot records for the current replication cycle from the Object Store. At step S16, the target CP 1830 then updates the delta state to Snapshot_Metadata_Populated (referring to step 1062 in FIG. 10), and notifies the source CP 1810.
[0225] At step S17, the target DP 1836 may then delete system snapshots locally and clean up the corresponding snapshot metadata and deleted snapshot records. This may complete the current replication cycle (or delta cycle). At step S18, the target CP 1830 can then move to the X-region Done state (referring to step 1068 in FIG. 10) and notify the source CP 1810 about the completion of the current x-region replication.
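The delta-state progression walked through in steps S1 to S18 can be summarized with the sketch below; the state names follow the disclosure, while the ordering lists and helper are an illustrative simplification rather than the service's actual state machine.

```python
SOURCE_DELTA_STATES = [
    "Ready_to_Copy_Metadata",
    "Snapshot_Metadata_Copying",     # step S5: new system snapshot detected
    "Snapshot_Metadata_Copied",      # step S8: metadata uploaded to the Object Store
]

TARGET_DELTA_STATES = [
    "Ready_To_Replicate",            # step S9: target notified by the source CP
    "Replicated",                    # step S13: delta application completed
    "Snapshot_Metadata_Populating",  # step S14: metadata download begins
    "Snapshot_Metadata_Populated",   # step S16: metadata applied, source notified
    "X-region Done",                 # step S18: replication cycle complete
]

def advance(states, current):
    """Return the next delta state, or the current one if already terminal."""
    index = states.index(current)
    return states[min(index + 1, len(states) - 1)]

print(advance(TARGET_DELTA_STATES, "Replicated"))  # -> Snapshot_Metadata_Populating
```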
Example Infrastructure as a Service Architectures
[0226] As noted above, infrastructure as a service (IaaS) is one particular type of cloud computing. IaaS can be configured to provide virtualized computing resources over a public network (e.g., the Internet). In an IaaS model, a cloud computing provider can host the infrastructure components (e.g., servers, storage devices, network nodes (e.g., hardware), deployment software, platform virtualization (e.g., a hypervisor layer), or the like). In some cases, an IaaS provider may also supply a variety of services to accompany those infrastructure components (example services include billing software, monitoring software, logging software, load balancing software, clustering software, etc.). Thus, as these services may be policy-driven, IaaS users may be able to implement policies to drive load balancing to maintain application availability and performance. [0227] In some instances, IaaS customers may access resources and services through a wide area network (WAN), such as the Internet, and can use the cloud provider's services to install the remaining elements of an application stack. For example, the user can log in to the IaaS platform to create virtual machines (VMs), install operating systems (OSs) on each VM, deploy middleware such as databases, create storage buckets for workloads and backups, and even install enterprise software into that VM. Customers can then use the provider's services to perform various functions, including balancing network traffic, troubleshooting application issues, monitoring performance, managing disaster recovery, etc.
[0228] In most cases, a cloud computing model will require the participation of a cloud provider. The cloud provider may, but need not be, a third-party service that specializes in providing (e.g., offering, renting, selling) IaaS. An entity might also opt to deploy a private cloud, becoming its own provider of infrastructure services.
[0229] In some examples, IaaS deployment is the process of putting a new application, or a new version of an application, onto a prepared application server or the like. It may also include the process of preparing the server (e.g., installing libraries, daemons, etc.). This is often managed by the cloud provider, below the hypervisor layer (e.g., the servers, storage, network hardware, and virtualization). Thus, the customer may be responsible for handling (OS), middleware, and/or application deployment (e.g., on self-service virtual machines (e.g., that can be spun up on demand)) or the like.
[0230] In some examples, IaaS provisioning may refer to acquiring computers or virtual hosts for use, and even installing needed libraries or services on them. In most cases, deployment does not include provisioning, and the provisioning may need to be performed first.
[0231] In some cases, there are two different challenges for IaaS provisioning. First, there is the initial challenge of provisioning the initial set of infrastructure before anything is running. Second, there is the challenge of evolving the existing infrastructure (e.g., adding new services, changing services, removing services, etc.) once everything has been provisioned. In some cases, these two challenges may be addressed by enabling the configuration of the infrastructure to be defined declaratively. In other words, the infrastructure (e.g., what components are needed and how they interact) can be defined by one or more configuration files. Thus, the overall topology of the infrastructure (e.g., what resources depend on which, and how they each work together) can be described declaratively. In some instances, once the topology is defined, a workflow can be generated that creates and/or manages the different components described in the configuration files.
[0232] In some examples, an infrastructure may have many interconnected elements. For example, there may be one or more virtual private clouds (VPCs) (e.g., a potentially on-demand pool of configurable and/or shared computing resources), also known as a core network. In some examples, there may also be one or more inbound/outbound traffic group rules provisioned to define how the inbound and/or outbound traffic of the network will be set up and one or more virtual machines (VMs). Other infrastructure elements may also be provisioned, such as a load balancer, a database, or the like. As more and more infrastructure elements are desired and/or added, the infrastructure may incrementally evolve.
[0233] In some instances, continuous deployment techniques may be employed to enable deployment of infrastructure code across various virtual computing environments. Additionally, the described techniques can enable infrastructure management within these environments. In some examples, service teams can write code that is desired to be deployed to one or more, but often many, different production environments (e.g., across various different geographic locations, sometimes spanning the entire world). However, in some examples, the infrastructure on which the code will be deployed must first be set up. In some instances, the provisioning can be done manually, a provisioning tool may be utilized to provision the resources, and/or deployment tools may be utilized to deploy the code once the infrastructure is provisioned.
[0234] FIG. 19 is a block diagram 1900 illustrating an example pattern of an IaaS architecture, according to at least one embodiment. Service operators 1902 can be communicatively coupled to a secure host tenancy 1904 that can include a virtual cloud network (VCN) 1906 and a secure host subnet 1908. In some examples, the service operators 1902 may be using one or more client computing devices, which may be portable handheld devices (e.g., an iPhone®, cellular telephone, an iPad®, computing tablet, a personal digital assistant (PDA)) or wearable devices (e.g., a Google Glass® head mounted display), running software such as Microsoft Windows Mobile®, and/or a variety of mobile operating systems such as iOS, Windows Phone, Android, BlackBerry 8, Palm OS, and the like, and being Internet, e-mail, short message service (SMS), Blackberry®, or other communication protocol enabled. Alternatively, the client computing devices can be general purpose personal computers including, by way of example, personal computers and/or laptop computers running various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems. The client computing devices can be workstation computers running any of a variety of commercially-available UNIX® or UNIX-like operating systems, including without limitation the variety of GNU/Linux operating systems, such as for example, Google Chrome OS. Alternatively, or in addition, client computing devices may be any other electronic device, such as a thin-client computer, an Internet-enabled gaming system (e.g., a Microsoft Xbox gaming console with or without a Kinect® gesture input device), and/or a personal messaging device, capable of communicating over a network that can access the VCN 1906 and/or the Internet.
[0235] The VCN 1906 can include a local peering gateway (LPG) 1910 that can be communicatively coupled to a secure shell (SSH) VCN 1912 via an LPG 1910 contained in the SSH VCN 1912. The SSH VCN 1912 can include an SSH subnet 1914, and the SSH VCN 1912 can be communicatively coupled to a control plane VCN 1916 via the LPG 1910 contained in the control plane VCN 1916. Also, the SSH VCN 1912 can be communicatively coupled to a data plane VCN 1918 via an LPG 1910. The control plane VCN 1916 and the data plane VCN 1918 can be contained in a service tenancy 1919 that can be owned and/or operated by the IaaS provider.
[0236] The control plane VCN 1916 can include a control plane demilitarized zone (DMZ) tier 1920 that acts as a perimeter network (e.g., portions of a corporate network between the corporate intranet and external networks). The DMZ-based servers may have restricted responsibilities and help keep breaches contained. Additionally, the DMZ tier 1920 can include one or more load balancer (LB) subnet(s) 1922, a control plane app tier 1924 that can include app subnet(s) 1926, a control plane data tier 1928 that can include database (DB) subnet(s) 1930 (e.g., frontend DB subnet(s) and/or backend DB subnet(s)). The LB subnet(s) 1922 contained in the control plane DMZ tier 1920 can be communicatively coupled to the app subnet(s) 1926 contained in the control plane app tier 1924 and an Internet gateway 1934 that can be contained in the control plane VCN 1916, and the app subnet(s) 1926 can be communicatively coupled to the DB subnet(s) 1930 contained in the control plane data tier 1928 and a service gateway 1936 and a network address translation (NAT) gateway 1938. The control plane VCN 1916 can include the service gateway 1936 and the NAT gateway 1938. [0237] The control plane VCN 1916 can include a data plane mirror app tier 1940 that can include app subnet(s) 1926. The app subnet(s) 1926 contained in the data plane mirror app tier 1940 can include a virtual network interface controller (VNIC) 1942 that can execute a compute instance 1944. The compute instance 1944 can communicatively couple the app subnet(s) 1926 of the data plane mirror app tier 1940 to app subnet(s) 1926 that can be contained in a data plane app tier 1946.
[0238] The data plane VCN 1918 can include the data plane app tier 1946, a data plane DMZ tier 1948, and a data plane data tier 1950. The data plane DMZ tier 1948 can include LB subnet(s) 1922 that can be communicatively coupled to the app subnet(s) 1926 of the data plane app tier 1946 and the Internet gateway 1934 of the data plane VCN 1918. The app subnet(s) 1926 can be communicatively coupled to the service gateway 1936 of the data plane VCN 1918 and the NAT gateway 1938 of the data plane VCN 1918. The data plane data tier 1950 can also include the DB subnet(s) 1930 that can be communicatively coupled to the app subnet(s) 1926 of the data plane app tier 1946.
[0239] The Internet gateway 1934 of the control plane VCN 1916 and of the data plane VCN 1918 can be communicatively coupled to a metadata management service 1952 that can be communicatively coupled to public Internet 1954. Public Internet 1954 can be communicatively coupled to the NAT gateway 1938 of the control plane VCN 1916 and of the data plane VCN 1918. The service gateway 1936 of the control plane VCN 1916 and of the data plane VCN 1918 can be communicatively coupled to cloud services 1956.
[0240] In some examples, the service gateway 1936 of the control plane VCN 1916 or of the data plane VCN 1918 can make application programming interface (API) calls to cloud services 1956 without going through public Internet 1954. The API calls to cloud services 1956 from the service gateway 1936 can be one-way: the service gateway 1936 can make API calls to cloud services 1956, and cloud services 1956 can send requested data to the service gateway 1936. But, cloud services 1956 may not initiate API calls to the service gateway 1936.
[0241] In some examples, the secure host tenancy 1904 can be directly connected to the service tenancy 1919, which may be otherwise isolated. The secure host subnet 1908 can communicate with the SSH subnet 1914 through an LPG 1910 that may enable two-way communication over an otherwise isolated system. Connecting the secure host subnet 1908 to the SSH subnet 1914 may give the secure host subnet 1908 access to other entities within the service tenancy 1919.
[0242] The control plane VCN 1916 may allow users of the service tenancy 1919 to set up or otherwise provision desired resources. Desired resources provisioned in the control plane VCN 1916 may be deployed or otherwise used in the data plane VCN 1918. In some examples, the control plane VCN 1916 can be isolated from the data plane VCN 1918, and the data plane mirror app tier 1940 of the control plane VCN 1916 can communicate with the data plane app tier 1946 of the data plane VCN 1918 via VNICs 1942 that can be contained in the data plane mirror app tier 1940 and the data plane app tier 1946.
[0243] In some examples, users of the system, or customers, can make requests, for example create, read, update, or delete (CRUD) operations, through public Internet 1954 that can communicate the requests to the metadata management service 1952. The metadata management service 1952 can communicate the request to the control plane VCN 1916 through the Internet gateway 1934. The request can be received by the LB subnet(s) 1922 contained in the control plane DMZ tier 1920. The LB subnet(s) 1922 may determine that the request is valid, and in response to this determination, the LB subnet(s) 1922 can transmit the request to app subnet(s) 1926 contained in the control plane app tier 1924. If the request is validated and requires a call to public Internet 1954, the call to public Internet 1954 may be transmitted to the NAT gateway 1938 that can make the call to public Internet 1954.
Metadata that may be desired to be stored by the request can be stored in the DB subnet(s) 1930.
[0244] In some examples, the data plane mirror app tier 1940 can facilitate direct communication between the control plane VCN 1916 and the data plane VCN 1918. For example, changes, updates, or other suitable modifications to configuration may be desired to be applied to the resources contained in the data plane VCN 1918. Via a VNIC 1942, the control plane VCN 1916 can directly communicate with, and can thereby execute the changes, updates, or other suitable modifications to configuration to, resources contained in the data plane VCN 1918.
[0245] In some embodiments, the control plane VCN 1916 and the data plane VCN 1918 can be contained in the service tenancy 1919. In this case, the user, or the customer, of the system may not own or operate either the control plane VCN 1916 or the data plane VCN 1918. Instead, the IaaS provider may own or operate the control plane VCN 1916 and the data plane VCN 1918, both of which may be contained in the service tenancy 1919. This embodiment can enable isolation of networks that may prevent users or customers from interacting with other users', or other customers', resources. Also, this embodiment may allow users or customers of the system to store databases privately without needing to rely on public Internet 1954, which may not have a desired level of threat prevention, for storage.
[0246] In other embodiments, the LB subnet(s) 1922 contained in the control plane VCN 1916 can be configured to receive a signal from the service gateway 1936. In this embodiment, the control plane VCN 1916 and the data plane VCN 1918 may be configured to be called by a customer of the IaaS provider without calling public Internet 1954. Customers of the IaaS provider may desire this embodiment since database(s) that the customers use may be controlled by the IaaS provider and may be stored on the service tenancy 1919, which may be isolated from public Internet 1954.
[0247] FIG. 20 is a block diagram 2000 illustrating another example pattern of an IaaS architecture, according to at least one embodiment. Service operators 2002 (e.g., service operators 1902 of FIG. 19) can be communicatively coupled to a secure host tenancy 2004 (e.g., the secure host tenancy 1904 of FIG. 19) that can include a virtual cloud network (VCN) 2006 (e.g., the VCN 1906 of FIG. 19) and a secure host subnet 2008 (e.g., the secure host subnet 1908 of FIG. 19). The VCN 2006 can include a local peering gateway (LPG) 2010 (e.g., the LPG 1910 of FIG. 19) that can be communicatively coupled to a secure shell (SSH) VCN 2012 (e.g., the SSH VCN 1912 of FIG. 19) via an LPG 1910 contained in the SSH VCN 2012. The SSH VCN 2012 can include an SSH subnet 2014 (e.g., the SSH subnet 1914 of FIG. 19), and the SSH VCN 2012 can be communicatively coupled to a control plane VCN 2016 (e.g., the control plane VCN 1916 of FIG. 19) via an LPG 2010 contained in the control plane VCN 2016. The control plane VCN 2016 can be contained in a service tenancy 2019 (e.g., the service tenancy 1919 of FIG. 19), and the data plane VCN 2018 (e.g., the data plane VCN 1918 of FIG. 19) can be contained in a customer tenancy 2021 that may be owned or operated by users, or customers, of the system.
[0248] The control plane VCN 2016 can include a control plane DMZ tier 2020 (e.g., the control plane DMZ tier 1920 of FIG. 19) that can include LB subnet(s) 2022 (e.g., LB subnet(s) 1922 of FIG. 19), a control plane app tier 2024 (e.g., the control plane app tier 1924 of FIG. 19) that can include app subnet(s) 2026 (e.g., app subnet(s) 1926 of FIG. 19), a control plane data tier 2028 (e.g., the control plane data tier 1928 of FIG. 19) that can include database (DB) subnet(s) 2030 (e.g., similar to DB subnet(s) 1930 of FIG. 19). The LB subnet(s) 2022 contained in the control plane DMZ tier 2020 can be communicatively coupled to the app subnet(s) 2026 contained in the control plane app tier 2024 and an Internet gateway 2034 (e.g., the Internet gateway 1934 of FIG. 19) that can be contained in the control plane VCN 2016, and the app subnet(s) 2026 can be communicatively coupled to the DB subnet(s) 2030 contained in the control plane data tier 2028 and a service gateway 2036 (e.g., the service gateway 1936 of FIG. 19) and a network address translation (NAT) gateway 2038 (e.g., the NAT gateway 1938 of FIG. 19). The control plane VCN 2016 can include the service gateway 2036 and the NAT gateway 2038.
[0249] The control plane VCN 2016 can include a data plane mirror app tier 2040 (e.g., the data plane mirror app tier 1940 of FIG. 19) that can include app subnet(s) 2026. The app subnet(s) 2026 contained in the data plane mirror app tier 2040 can include a virtual network interface controller (VNIC) 2042 (e.g., the VNIC of 1942) that can execute a compute instance 2044 (e.g., similar to the compute instance 1944 of FIG. 19). The compute instance 2044 can facilitate communication between the app subnet(s) 2026 of the data plane mirror app tier 2040 and the app subnet(s) 2026 that can be contained in a data plane app tier 2046 (e.g., the data plane app tier 1946 of FIG. 19) via the VNIC 2042 contained in the data plane mirror app tier 2040 and the VNIC 2042 contained in the data plane app tier 2046.
[0250] The Internet gateway 2034 contained in the control plane VCN 2016 can be communicatively coupled to a metadata management service 2052 (e.g., the metadata management service 1952 of FIG. 19) that can be communicatively coupled to public Internet 2054 (e.g., public Internet 1954 of FIG. 19). Public Internet 2054 can be communicatively coupled to the NAT gateway 2038 contained in the control plane VCN 2016. The service gateway 2036 contained in the control plane VCN 2016 can be communicatively coupled to cloud services 2056 (e.g., cloud services 1956 of FIG. 19).
[0251] In some examples, the data plane VCN 2018 can be contained in the customer tenancy 2021. In this case, the IaaS provider may provide the control plane VCN 2016 for each customer, and the IaaS provider may, for each customer, set up a unique compute instance 2044 that is contained in the service tenancy 2019. Each compute instance 2044 may allow communication between the control plane VCN 2016, contained in the service tenancy 2019, and the data plane VCN 2018 that is contained in the customer tenancy 2021. The compute instance 2044 may allow resources, that are provisioned in the control plane VCN 2016 that is contained in the service tenancy 2019, to be deployed or otherwise used in the data plane VCN 2018 that is contained in the customer tenancy 2021.
[0252] In other examples, the customer of the IaaS provider may have databases that live in the customer tenancy 2021. In this example, the control plane VCN 2016 can include the data plane mirror app tier 2040 that can include app subnet(s) 2026. The data plane mirror app tier 2040 can reside in the data plane VCN 2018, but the data plane mirror app tier 2040 may not live in the data plane VCN 2018. That is, the data plane mirror app tier 2040 may have access to the customer tenancy 2021, but the data plane mirror app tier 2040 may not exist in the data plane VCN 2018 or be owned or operated by the customer of the IaaS provider. The data plane mirror app tier 2040 may be configured to make calls to the data plane VCN 2018 but may not be configured to make calls to any entity contained in the control plane VCN 2016. The customer may desire to deploy or otherwise use resources in the data plane VCN 2018 that are provisioned in the control plane VCN 2016, and the data plane mirror app tier 2040 can facilitate the desired deployment, or other usage of resources, of the customer.
[0253] In some embodiments, the customer of the IaaS provider can apply filters to the data plane VCN 2018. In this embodiment, the customer can determine what the data plane VCN 2018 can access, and the customer may restrict access to public Internet 2054 from the data plane VCN 2018. The IaaS provider may not be able to apply filters or otherwise control access of the data plane VCN 2018 to any outside networks or databases. Applying filters and controls by the customer onto the data plane VCN 2018, contained in the customer tenancy 2021, can help isolate the data plane VCN 2018 from other customers and from public Internet 2054.
[0254] In some embodiments, cloud services 2056 can be called by the service gateway 2036 to access services that may not exist on public Internet 2054, on the control plane VCN 2016, or on the data plane VCN 2018. The connection between cloud services 2056 and the control plane VCN 2016 or the data plane VCN 2018 may not be live or continuous. Cloud services 2056 may exist on a different network owned or operated by the IaaS provider. Cloud services 2056 may be configured to receive calls from the service gateway 2036 and may be configured to not receive calls from public Internet 2054. Some cloud services 2056 may be isolated from other cloud services 2056, and the control plane VCN 2016 may be isolated from cloud services 2056 that may not be in the same region as the control plane VCN 2016. For example, the control plane VCN 2016 may be located in "Region 1," and cloud service "Deployment 19," may be located in Region 1 and in "Region 2." If a call to Deployment 19 is made by the service gateway 2036 contained in the control plane VCN 2016 located in Region 1, the call may be transmitted to Deployment 19 in Region 1. In this example, the control plane VCN 2016, or Deployment 19 in Region 1, may not be communicatively coupled to, or otherwise in communication with, Deployment 19 in Region 2.
[0255] FIG. 21 is a block diagram 2100 illustrating another example pattern of an IaaS architecture, according to at least one embodiment. Service operators 2102 (e.g., service operators 1902 of FIG. 19) can be communicatively coupled to a secure host tenancy 2104 (e.g., the secure host tenancy 1904 of FIG. 19) that can include a virtual cloud network (VCN) 2106 (e.g., the VCN 1906 of FIG. 19) and a secure host subnet 2108 (e.g., the secure host subnet 1908 of FIG. 19). The VCN 2106 can include an LPG 2110 (e.g., the LPG 1910 of FIG. 19) that can be communicatively coupled to an SSH VCN 2112 (e.g., the SSH VCN 1912 of FIG. 19) via an LPG 2110 contained in the SSH VCN 2112. The SSH VCN 2112 can include an SSH subnet 2114 (e.g., the SSH subnet 1914 of FIG. 19), and the SSH VCN 2112 can be communicatively coupled to a control plane VCN 2116 (e.g., the control plane VCN 1916 of FIG. 19) via an LPG 2110 contained in the control plane VCN 2116 and to a data plane VCN 2118 (e.g., the data plane 1918 of FIG. 19) via an LPG 2110 contained in the data plane VCN 2118. The control plane VCN 2116 and the data plane VCN 2118 can be contained in a service tenancy 2119 (e.g., the service tenancy 1919 of FIG. 19).
[0256] The control plane VCN 2116 can include a control plane DMZ tier 2120 (e.g., the control plane DMZ tier 1920 of FIG. 19) that can include load balancer (LB) subnet(s) 2122 (e.g., LB subnet(s) 1922 of FIG. 19), a control plane app tier 2124 (e.g., the control plane app tier 1924 of FIG. 19) that can include app subnet(s) 2126 (e.g., similar to app subnet(s) 1926 of FIG. 19), a control plane data tier 2128 (e.g., the control plane data tier 1928 of FIG. 19) that can include DB subnet(s) 2130. The LB subnet(s) 2122 contained in the control plane DMZ tier 2120 can be communicatively coupled to the app subnet(s) 2126 contained in the control plane app tier 2124 and to an Internet gateway 2134 (e.g., the Internet gateway 1934 of FIG. 19) that can be contained in the control plane VCN 2116, and the app subnet(s) 2126 can be communicatively coupled to the DB subnet(s) 2130 contained in the control plane data tier 2128 and to a service gateway 2136 (e.g., the service gateway of FIG. 19) and a network address translation (NAT) gateway 2138 (e.g., the NAT gateway 1938 of FIG. 19). The control plane VCN 2116 can include the service gateway 2136 and the NAT gateway 2138.

[0257] The data plane VCN 2118 can include a data plane app tier 2146 (e.g., the data plane app tier 1946 of FIG. 19), a data plane DMZ tier 2148 (e.g., the data plane DMZ tier 1948 of FIG. 19), and a data plane data tier 2150 (e.g., the data plane data tier 1950 of FIG. 19). The data plane DMZ tier 2148 can include LB subnet(s) 2122 that can be communicatively coupled to trusted app subnet(s) 2160 and untrusted app subnet(s) 2162 of the data plane app tier 2146 and the Internet gateway 2134 contained in the data plane VCN 2118. The trusted app subnet(s) 2160 can be communicatively coupled to the service gateway 2136 contained in the data plane VCN 2118, the NAT gateway 2138 contained in the data plane VCN 2118, and DB subnet(s) 2130 contained in the data plane data tier 2150. The untrusted app subnet(s) 2162 can be communicatively coupled to the service gateway 2136 contained in the data plane VCN 2118 and DB subnet(s) 2130 contained in the data plane data tier 2150. The data plane data tier 2150 can include DB subnet(s) 2130 that can be communicatively coupled to the service gateway 2136 contained in the data plane VCN 2118.
[0258] The untrusted app subnet(s) 2162 can include one or more primary VNICs 2164(1)-(N) that can be communicatively coupled to tenant virtual machines (VMs) 2166(1)-(N). Each tenant VM 2166(1)-(N) can be communicatively coupled to a respective app subnet 2167(1)-(N) that can be contained in respective container egress VCNs 2168(1)-(N) that can be contained in respective customer tenancies 2170(1)-(N). Respective secondary VNICs 2172(1)-(N) can facilitate communication between the untrusted app subnet(s) 2162 contained in the data plane VCN 2118 and the app subnet contained in the container egress VCNs 2168(1)-(N). Each container egress VCN 2168(1)-(N) can include a NAT gateway 2138 that can be communicatively coupled to public Internet 2154 (e.g., public Internet 1954 of FIG. 19).
[0259] The Internet gateway 2134 contained in the control plane VCN 2116 and contained in the data plane VCN 2118 can be communicatively coupled to a metadata management service 2152 (e.g., the metadata management system 1952 of FIG. 19) that can be communicatively coupled to public Internet 2154. Public Internet 2154 can be communicatively coupled to the NAT gateway 2138 contained in the control plane VCN 2116 and contained in the data plane VCN 2118. The service gateway 2136 contained in the control plane VCN 2116 and contained in the data plane VCN 2118 can be communicatively coupled to cloud services 2156.

[0260] In some embodiments, the data plane VCN 2118 can be integrated with customer tenancies 2170. This integration can be useful or desirable for customers of the IaaS provider in some cases, such as a case in which support may be desired while executing code. The customer may provide code to run that may be destructive, may communicate with other customer resources, or may otherwise cause undesirable effects. In response to this, the IaaS provider may determine whether to run code given to the IaaS provider by the customer.
[0261] In some examples, the customer of the IaaS provider may grant temporary network access to the IaaS provider and request a function to be attached to the data plane app tier 2146. Code to run the function may be executed in the VMs 2166(1)-(N), and the code may not be configured to run anywhere else on the data plane VCN 2118. Each VM 2166(1)-(N) may be connected to one customer tenancy 2170. Respective containers 2171(1)-(N) contained in the VMs 2166(1)-(N) may be configured to run the code. In this case, there can be a dual isolation (e.g., the containers 2171(1)-(N) running code, where the containers 2171(1)-(N) may be contained in at least the VMs 2166(1)-(N) that are contained in the untrusted app subnet(s) 2162), which may help prevent incorrect or otherwise undesirable code from damaging the network of the IaaS provider or from damaging a network of a different customer. The containers 2171(1)-(N) may be communicatively coupled to the customer tenancy 2170 and may be configured to transmit or receive data from the customer tenancy 2170. The containers 2171(1)-(N) may not be configured to transmit or receive data from any other entity in the data plane VCN 2118. Upon completion of running the code, the IaaS provider may kill or otherwise dispose of the containers 2171(1)-(N).
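The dual isolation described above can be pictured with the following hypothetical sketch, in which each customer function runs in a disposable container bound to exactly one tenancy (all class and function names are illustrative assumptions, not part of the disclosed system):

```python
import uuid


class Container:
    """Hypothetical disposable container bound to exactly one customer tenancy."""

    def __init__(self, tenancy_id: str):
        self.container_id = uuid.uuid4().hex
        self.tenancy_id = tenancy_id
        self.disposed = False

    def exchange_data(self, peer_tenancy_id: str, payload: bytes) -> bytes:
        # The container may transmit or receive data only with its own tenancy.
        if peer_tenancy_id != self.tenancy_id:
            raise PermissionError("container is isolated from other tenancies")
        return payload


def run_customer_function(tenancy_id: str, code):
    """Run customer code inside a fresh container, then dispose of the container."""
    container = Container(tenancy_id)
    try:
        return code(container)
    finally:
        container.disposed = True  # the provider kills/disposes of the container after the run
```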
[0262] In some embodiments, the trusted app subnet(s) 2160 may run code that may be owned or operated by the IaaS provider. In this embodiment, the trusted app subnet(s) 2160 may be communicatively coupled to the DB subnet(s) 2130 and be configured to execute CRUD operations in the DB subnet(s) 2130. The untrusted app subnet(s) 2162 may be communicatively coupled to the DB subnet(s) 2130, but in this embodiment, the untrusted app subnet(s) may be configured to execute read operations in the DB subnet(s) 2130. The containers 2171(1)-(N) that can be contained in the VMs 2166(1)-(N) of each customer and that may run code from the customer may not be communicatively coupled with the DB subnet(s) 2130.
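By way of illustration only, the operation gating described in this paragraph might be expressed as the following sketch; the subnet-kind and operation names are assumptions rather than any actual gateway or database interface:

```python
# Trusted app subnets may issue full CRUD against the DB subnet(s);
# untrusted app subnets are limited to read operations.
ALLOWED_DB_OPERATIONS = {
    "trusted": {"create", "read", "update", "delete"},
    "untrusted": {"read"},
}


def is_db_operation_allowed(subnet_kind: str, operation: str) -> bool:
    """Return True if a subnet of the given kind may issue the DB operation."""
    return operation in ALLOWED_DB_OPERATIONS.get(subnet_kind, set())


assert is_db_operation_allowed("untrusted", "read")
assert not is_db_operation_allowed("untrusted", "update")
```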
[0263] In other embodiments, the control plane VCN 2116 and the data plane VCN 2118 may not be directly communicatively coupled. In this embodiment, there may be no direct communication between the control plane VCN 2116 and the data plane VCN 2118. However, communication can occur indirectly through at least one method. An LPG 2110 may be established by the IaaS provider that can facilitate communication between the control plane VCN 2116 and the data plane VCN 2118. In another example, the control plane VCN 2116 or the data plane VCN 2118 can make a call to cloud services 2156 via the service gateway 2136. For example, a call to cloud services 2156 from the control plane VCN 2116 can include a request for a service that can communicate with the data plane VCN 2118.
[0264] FIG. 22 is a block diagram 2200 illustrating another example pattern of an IaaS architecture, according to at least one embodiment. Service operators 2202 (e.g., service operators 1902 of FIG. 19) can be communicatively coupled to a secure host tenancy 2204 (e.g., the secure host tenancy 1904 of FIG. 19) that can include a virtual cloud network (VCN) 2206 (e.g., the VCN 1906 of FIG. 19) and a secure host subnet 2208 (e.g., the secure host subnet 1908 of FIG. 19). The VCN 2206 can include an LPG 2210 (e.g., the LPG 1910 of FIG. 19) that can be communicatively coupled to an SSH VCN 2212 (e.g., the SSH VCN 1912 of FIG. 19) via an LPG 2210 contained in the SSH VCN 2212. The SSH VCN 2212 can include an SSH subnet 2214 (e.g., the SSH subnet 1914 of FIG. 19), and the SSH VCN 2212 can be communicatively coupled to a control plane VCN 2216 (e.g., the control plane VCN 1916 of FIG. 19) via an LPG 2210 contained in the control plane VCN 2216 and to a data plane VCN 2218 (e.g., the data plane 1918 of FIG. 19) via an LPG 2210 contained in the data plane VCN 2218. The control plane VCN 2216 and the data plane VCN 2218 can be contained in a service tenancy 2219 (e.g., the service tenancy 1919 of FIG. 19).
[0265] The control plane VCN 2216 can include a control plane DMZ tier 2220 (e.g., the control plane DMZ tier 1920 of FIG. 19) that can include LB subnet(s) 2222 (e.g., LB subnet(s) 1922 of FIG. 19), a control plane app tier 2224 (e.g., the control plane app tier 1924 of FIG. 19) that can include app subnet(s) 2226 (e.g., app subnet(s) 1926 of FIG. 19), a control plane data tier 2228 (e.g., the control plane data tier 1928 of FIG. 19) that can include DB subnet(s) 2230 (e.g., DB subnet(s) 2130 of FIG. 21). The LB subnet(s) 2222 contained in the control plane DMZ tier 2220 can be communicatively coupled to the app subnet(s) 2226 contained in the control plane app tier 2224 and to an Internet gateway 2234 (e.g., the Internet gateway 1934 of FIG. 19) that can be contained in the control plane VCN 2216, and the app subnet(s) 2226 can be communicatively coupled to the DB subnet(s) 2230 contained in the control plane data tier 2228 and to a service gateway 2236 (e.g., the service gateway of FIG. 19) and a network address translation (NAT) gateway 2238 (e.g., the NAT gateway 1938 of FIG. 19). The control plane VCN 2216 can include the service gateway 2236 and the NAT gateway 2238.
[0266] The data plane VCN 2218 can include a data plane app tier 2246 (e.g., the data plane app tier 1946 of FIG. 19), a data plane DMZ tier 2248 (e.g., the data plane DMZ tier 1948 of FIG. 19), and a data plane data tier 2250 (e.g., the data plane data tier 1950 of FIG. 19). The data plane DMZ tier 2248 can include LB subnet(s) 2222 that can be communicatively coupled to trusted app subnet(s) 2260 (e.g., trusted app subnet(s) 2160 of FIG. 21) and untrusted app subnet(s) 2262 (e.g., untrusted app subnet(s) 2162 of FIG. 21) of the data plane app tier 2246 and the Internet gateway 2234 contained in the data plane VCN 2218. The trusted app subnet(s) 2260 can be communicatively coupled to the service gateway 2236 contained in the data plane VCN 2218, the NAT gateway 2238 contained in the data plane VCN 2218, and DB subnet(s) 2230 contained in the data plane data tier 2250. The untrusted app subnet(s) 2262 can be communicatively coupled to the service gateway 2236 contained in the data plane VCN 2218 and DB subnet(s) 2230 contained in the data plane data tier 2250. The data plane data tier 2250 can include DB subnet(s) 2230 that can be communicatively coupled to the service gateway 2236 contained in the data plane VCN 2218.
[0267] The untrusted app subnet(s) 2262 can include primary VNICs 2264(1)-(N) that can be communicatively coupled to tenant virtual machines (VMs) 2266(1)-(N) residing within the untrusted app subnet(s) 2262. Each tenant VM 2266(1)-(N) can run code in a respective container 2267(1)-(N), and be communicatively coupled to an app subnet 2226 that can be contained in a data plane app tier 2246 that can be contained in a container egress VCN 2268. Respective secondary VNICs 2272(1)-(N) can facilitate communication between the untrusted app subnet(s) 2262 contained in the data plane VCN 2218 and the app subnet contained in the container egress VCN 2268. The container egress VCN can include a NAT gateway 2238 that can be communicatively coupled to public Internet 2254 (e.g., public Internet 1954 of FIG. 19).
[0268] The Internet gateway 2234 contained in the control plane VCN 2216 and contained in the data plane VCN 2218 can be communicatively coupled to a metadata management service 2252 (e.g., the metadata management system 1952 of FIG. 19) that can be communicatively coupled to public Internet 2254. Public Internet 2254 can be communicatively coupled to the NAT gateway 2238 contained in the control plane VCN 2216 and contained in the data plane VCN 2218. The service gateway 2236 contained in the control plane VCN 2216 and contained in the data plane VCN 2218 can be communicatively coupled to cloud services 2256.
[0269] In some examples, the pattern illustrated by the architecture of block diagram 2200 of FIG. 22 may be considered an exception to the pattern illustrated by the architecture of block diagram 2100 of FIG. 21 and may be desirable for a customer of the IaaS provider if the IaaS provider cannot directly communicate with the customer (e.g., a disconnected region). The respective containers 2267(1)-(N) that are contained in the VMs 2266(1)-(N) for each customer can be accessed in real-time by the customer. The containers 2267(1)-(N) may be configured to make calls to respective secondary VNICs 2272(1)-(N) contained in app subnet(s) 2226 of the data plane app tier 2246 that can be contained in the container egress VCN 2268. The secondary VNICs 2272(1)-(N) can transmit the calls to the NAT gateway 2238 that may transmit the calls to public Internet 2254. In this example, the containers 2267(1)-(N) that can be accessed in real-time by the customer can be isolated from the control plane VCN 2216 and can be isolated from other entities contained in the data plane VCN 2218. The containers 2267(1)-(N) may also be isolated from resources from other customers.
[0270] In other examples, the customer can use the containers 2267(1)-(N) to call cloud services 2256. In this example, the customer may run code in the containers 2267(1)-(N) that requests a service from cloud services 2256. The containers 2267(1)-(N) can transmit this request to the secondary VNICs 2272(1)-(N) that can transmit the request to the NAT gateway that can transmit the request to public Internet 2254. Public Internet 2254 can transmit the request to LB subnet(s) 2222 contained in the control plane VCN 2216 via the Internet gateway 2234. In response to determining the request is valid, the LB subnet(s) can transmit the request to app subnet(s) 2226 that can transmit the request to cloud services 2256 via the service gateway 2236.
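A request issued from one of the containers therefore traverses a fixed sequence of hops. The sketch below merely enumerates that path for readability and is not an actual networking API:

```python
from typing import List


def cloud_service_call_path(request_is_valid: bool) -> List[str]:
    """Ordered hops for a container's call to cloud services 2256 (illustrative only)."""
    path = [
        "container 2267(i)",
        "secondary VNIC 2272(i)",
        "NAT gateway 2238 (container egress VCN 2268)",
        "public Internet 2254",
        "Internet gateway 2234",
        "LB subnet(s) 2222 (control plane VCN 2216)",
    ]
    if request_is_valid:
        # Only validated requests are forwarded onward to the cloud services.
        path += ["app subnet(s) 2226", "service gateway 2236", "cloud services 2256"]
    return path
```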
[0271] It should be appreciated that IaaS architectures 1900, 2000, 2100, 2200 depicted in the figures may have other components than those depicted. Further, the embodiments shown in the figures are only some examples of a cloud infrastructure system that may incorporate an embodiment of the disclosure. In some other embodiments, the IaaS systems may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration or arrangement of components.

[0272] In certain embodiments, the IaaS systems described herein may include a suite of applications, middleware, and database service offerings that are delivered to a customer in a self-service, subscription-based, elastically scalable, reliable, highly available, and secure manner. An example of such an IaaS system is the Oracle Cloud Infrastructure (OCI) provided by the present assignee.
[0273] FIG. 23 illustrates an example computer system 2300, in which various embodiments may be implemented. The system 2300 may be used to implement any of the computer systems described above. As shown in the figure, computer system 2300 includes a processing unit 2304 that communicates with a number of peripheral subsystems via a bus subsystem 2302. These peripheral subsystems may include a processing acceleration unit 2306, an I/O subsystem 2308, a storage subsystem 2318 and a communications subsystem 2324. Storage subsystem 2318 includes tangible computer-readable storage media 2322 and a system memory 2310.
[0274] Bus subsystem 2302 provides a mechanism for letting the various components and subsystems of computer system 2300 communicate with each other as intended. Although bus subsystem 2302 is shown schematically as a single bus, alternative embodiments of the bus subsystem may utilize multiple buses. Bus subsystem 2302 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. For example, such architectures may include an Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, which can be implemented as a Mezzanine bus manufactured to the IEEE P1386.1 standard.
[0275] Processing unit 2304, which can be implemented as one or more integrated circuits (e.g., a conventional microprocessor or microcontroller), controls the operation of computer system 2300. One or more processors may be included in processing unit 2304. These processors may include single core or multicore processors. In certain embodiments, processing unit 2304 may be implemented as one or more independent processing units 2332 and/or 2334 with single or multicore processors included in each processing unit. In other embodiments, processing unit 2304 may also be implemented as a quad-core processing unit formed by integrating two dual-core processors into a single chip.

[0276] In various embodiments, processing unit 2304 can execute a variety of programs in response to program code and can maintain multiple concurrently executing programs or processes. At any given time, some or all of the program code to be executed can be resident in processor(s) 2304 and/or in storage subsystem 2318. Through suitable programming, processor(s) 2304 can provide various functionalities described above. Computer system 2300 may additionally include a processing acceleration unit 2306, which can include a digital signal processor (DSP), a special-purpose processor, and/or the like.
[0277] I/O subsystem 2308 may include user interface input devices and user interface output devices. User interface input devices may include a keyboard, pointing devices such as a mouse or trackball, a touchpad or touch screen incorporated into a display, a scroll wheel, a click wheel, a dial, a button, a switch, a keypad, audio input devices with voice command recognition systems, microphones, and other types of input devices. User interface input devices may include, for example, motion sensing and/or gesture recognition devices such as the Microsoft Kinect® motion sensor that enables users to control and interact with an input device, such as the Microsoft Xbox® 360 game controller, through a natural user interface using gestures and spoken commands. User interface input devices may also include eye gesture recognition devices such as the Google Glass® blink detector that detects eye activity (e.g., ‘blinking’ while taking pictures and/or making a menu selection) from users and transforms the eye gestures as input into an input device (e.g., Google Glass®). Additionally, user interface input devices may include voice recognition sensing devices that enable users to interact with voice recognition systems (e.g., Siri® navigator), through voice commands.
[0278] User interface input devices may also include, without limitation, three dimensional (3D) mice, joysticks or pointing sticks, gamepads and graphic tablets, and audio/visual devices such as speakers, digital cameras, digital camcorders, portable media players, webcams, image scanners, fingerprint scanners, barcode readers, 3D scanners, 3D printers, laser rangefinders, and eye gaze tracking devices. Additionally, user interface input devices may include, for example, medical imaging input devices such as computed tomography, magnetic resonance imaging, positron emission tomography, and medical ultrasonography devices. User interface input devices may also include, for example, audio input devices such as MIDI keyboards, digital musical instruments, and the like.
[0279] User interface output devices may include a display subsystem, indicator lights, or non-visual displays such as audio output devices, etc. The display subsystem may be a cathode ray tube (CRT), a flat-panel device, such as that using a liquid crystal display (LCD) or plasma display, a projection device, a touch screen, and the like. In general, use of the term "output device" is intended to include all possible types of devices and mechanisms for outputting information from computer system 2300 to a user or other computer. For example, user interface output devices may include, without limitation, a variety of display devices that visually convey text, graphics and audio/video information such as monitors, printers, speakers, headphones, automotive navigation systems, plotters, voice output devices, and modems.
[0280] Computer system 2300 may comprise a storage subsystem 2318 that provides a tangible non-transitory computer-readable storage medium for storing software and data constructs that provide the functionality of the embodiments described in this disclosure. The software can include programs, code modules, instructions, scripts, etc., that when executed by one or more cores or processors of processing unit 2304 provide the functionality described above. Storage subsystem 2318 may also provide a repository for storing data used in accordance with the present disclosure.
[0281] As depicted in the example in FIG. 23, storage subsystem 2318 can include various components including a system memory 2310, computer-readable storage media 2322, and a computer readable storage media reader 2320. System memory 2310 may store program instructions that are loadable and executable by processing unit 2304. System memory 2310 may also store data that is used during the execution of the instructions and/or data that is generated during the execution of the program instructions. Various different kinds of programs may be loaded into system memory 2310 including but not limited to client applications, Web browsers, mid-tier applications, relational database management systems (RDBMS), virtual machines, containers, etc.
[0282] System memory 2310 may also store an operating system 2316. Examples of operating system 2316 may include various versions of Microsoft Windows®, Apple Macintosh®, and/or Linux operating systems, a variety of commercially-available UNIX® or UNIX-like operating systems (including without limitation the variety of GNU/Linux operating systems, the Google Chrome® OS, and the like) and/or mobile operating systems such as iOS, Windows® Phone, Android® OS, BlackBerry® OS, and Palm® OS operating systems. In certain implementations where computer system 2300 executes one or more virtual machines, the virtual machines along with their guest operating systems (GOSs) may be loaded into system memory 2310 and executed by one or more processors or cores of processing unit 2304.
[0283] System memory 2310 can come in different configurations depending upon the type of computer system 2300. For example, system memory 2310 may be volatile memory (such as random access memory (RAM)) and/or non-volatile memory (such as read-only memory (ROM), flash memory, etc.). Different types of RAM configurations may be provided including a static random access memory (SRAM), a dynamic random access memory (DRAM), and others. In some implementations, system memory 2310 may include a basic input/output system (BIOS) containing basic routines that help to transfer information between elements within computer system 2300, such as during start-up.
[0284] Computer-readable storage media 2322 may represent remote, local, fixed, and/or removable storage devices plus storage media for temporarily and/or more permanently containing and storing computer-readable information for use by computer system 2300, including instructions executable by processing unit 2304 of computer system 2300.
[0285] Computer-readable storage media 2322 can include any appropriate media known or used in the art, including storage media and communication media, such as but not limited to, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage and/or transmission of information. This can include tangible computer-readable storage media such as RAM, ROM, electronically erasable programmable ROM (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disk (DVD), or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible computer readable media.
[0286] By way of example, computer-readable storage media 2322 may include a hard disk drive that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive that reads from or writes to a removable, nonvolatile magnetic disk, and an optical disk drive that reads from or writes to a removable, nonvolatile optical disk such as a CD ROM, DVD, and Blu-Ray® disk, or other optical media. Computer-readable storage media 2322 may include, but is not limited to, Zip® drives, flash memory cards, universal serial bus (USB) flash drives, secure digital (SD) cards, DVD disks, digital video tape, and the like. Computer-readable storage media 2322 may also include solid-state drives (SSD) based on non-volatile memory such as flash-memory-based SSDs, enterprise flash drives, solid state ROM, and the like, SSDs based on volatile memory such as solid state RAM, dynamic RAM, static RAM, DRAM-based SSDs, magnetoresistive RAM (MRAM) SSDs, and hybrid SSDs that use a combination of DRAM- and flash-memory-based SSDs. The disk drives and their associated computer-readable media may provide non-volatile storage of computer-readable instructions, data structures, program modules, and other data for computer system 2300.
[0287] Machine-readable instructions executable by one or more processors or cores of processing unit 2304 may be stored on a non-transitory computer-readable storage medium. A non-transitory computer-readable storage medium can include physically tangible memory or storage devices that include volatile memory storage devices and/or non-volatile storage devices. Examples of non-transitory computer-readable storage medium include magnetic storage media (e.g., disk or tapes), optical storage media (e.g., DVDs, CDs), various types of RAM, ROM, or flash memory, hard drives, floppy drives, detachable memory drives (e.g., USB drives), or other type of storage device.
[0288] Communications subsystem 2324 provides an interface to other computer systems and networks. Communications subsystem 2324 serves as an interface for receiving data from and transmitting data to other systems from computer system 2300. For example, communications subsystem 2324 may enable computer system 2300 to connect to one or more devices via the Internet. In some embodiments communications subsystem 2324 can include radio frequency (RF) transceiver components for accessing wireless voice and/or data networks (e.g., using cellular telephone technology, advanced data network technology, such as 3G, 4G or EDGE (enhanced data rates for global evolution), WiFi (IEEE 802.11 family standards), or other mobile communication technologies, or any combination thereof), global positioning system (GPS) receiver components, and/or other components. In some embodiments communications subsystem 2324 can provide wired network connectivity (e.g., Ethernet) in addition to or instead of a wireless interface.
[0289] In some embodiments, communications subsystem 2324 may also receive input communication in the form of structured and/or unstructured data feeds 2326, event streams 2328, event updates 2330, and the like on behalf of one or more users who may use computer system 2300.
[0290] By way of example, communications subsystem 2324 may be configured to receive data feeds 2326 in real-time from users of social networks and/or other communication services such as Twitter® feeds, Facebook® updates, web feeds such as Rich Site Summary (RSS) feeds, and/or real-time updates from one or more third party information sources.
[0291] Additionally, communications subsystem 2324 may also be configured to receive data in the form of continuous data streams, which may include event streams 2328 of real-time events and/or event updates 2330, that may be continuous or unbounded in nature with no explicit end. Examples of applications that generate continuous data may include, for example, sensor data applications, financial tickers, network performance measuring tools (e.g., network monitoring and traffic management applications), clickstream analysis tools, automobile traffic monitoring, and the like.
[0292] Communications subsystem 2324 may also be configured to output the structured and/or unstructured data feeds 2326, event streams 2328, event updates 2330, and the like to one or more databases that may be in communication with one or more streaming data source computers coupled to computer system 2300.
[0293] Computer system 2300 can be one of various types, including a handheld portable device (e.g., an iPhone® cellular phone, an iPad® computing tablet, a PDA), a wearable device (e.g., a Google Glass® head mounted display), a PC, a workstation, a mainframe, a kiosk, a server rack, or any other data processing system.
[0294] Due to the ever-changing nature of computers and networks, the description of computer system 2300 depicted in the figure is intended only as a specific example. Many other configurations having more or fewer components than the system depicted in the figure are possible. For example, customized hardware might also be used and/or particular elements might be implemented in hardware, firmware, software (including applets), or a combination. Further, connection to other computing devices, such as network input/output devices, may be employed. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the various embodiments.
[0295] Embodiments may be implemented by using a computer program product, comprising computer program/instructions which, when executed by a processor, cause the processor to perform any of the methods described in the disclosure.
[0296] Although specific embodiments have been described, various modifications, alterations, alternative constructions, and equivalents are also encompassed within the scope of the disclosure. Embodiments are not restricted to operation within certain specific data processing environments, but are free to operate within a plurality of data processing environments. Additionally, although embodiments have been described using a particular series of transactions and steps, it should be apparent to those skilled in the art that the scope of the present disclosure is not limited to the described series of transactions and steps. Various features and aspects of the above-described embodiments may be used individually or jointly.
[0297] Further, while embodiments have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are also within the scope of the present disclosure. Embodiments may be implemented only in hardware, or only in software, or using combinations thereof. The various processes described herein can be implemented on the same processor or different processors in any combination. Accordingly, where components or services are described as being configured to perform certain operations, such configuration can be accomplished, e.g., by designing electronic circuits to perform the operation, by programming programmable electronic circuits (such as microprocessors) to perform the operation, or any combination thereof. Processes can communicate using a variety of techniques including but not limited to conventional techniques for inter-process communication, and different pairs of processes may use different techniques, or the same pair of processes may use different techniques at different times.
[0298] The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that additions, subtractions, deletions, and other modifications and changes may be made thereunto without departing from the broader spirit and scope as set forth in the claims. Thus, although specific disclosure embodiments have been described, these are not intended to be limiting. Various modifications and equivalents are within the scope of the following claims.
[0299] The use of the terms “a” and “an” and “the” and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. The term “connected” is to be construed as partly or wholly contained within, attached to, or joined together, even if there is something intervening. Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate embodiments and does not pose a limitation on the scope of the disclosure unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the disclosure.
[0300] Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is intended to be understood within the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
[0301] Preferred embodiments of this disclosure are described herein, including the best mode known for carrying out the disclosure. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the foregoing description. Those of ordinary skill should be able to employ such variations as appropriate and the disclosure may be practiced otherwise than as specifically described herein. Accordingly, this disclosure includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the disclosure unless otherwise indicated herein.
[0302] All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.
[0303] In the foregoing specification, aspects of the disclosure are described with reference to specific embodiments thereof, but those skilled in the art will recognize that the disclosure is not limited thereto. Various features and aspects of the above-described disclosure may be used individually or jointly. Further, embodiments can be utilized in any number of environments and applications beyond those described herein without departing from the broader spirit and scope of the specification. The specification and drawings are, accordingly, to be regarded as illustrative rather than restrictive.
Clause 1: A computer-implemented method, comprising: generating, by a computing system, a snapshot of a source file system in a source region; performing, by the computing system, a first cross-region replication and a second cross-region replication between the source file system in the source region and a target file system in a target region, the source region and the target region being in different regions; receiving, by the computing system, a snapshot deletion request in the source file system to delete the snapshot; determining, by the computing system, a timing of the snapshot deletion request in the source file system; performing, by the computing system, a first operation in accordance with the timing of the snapshot deletion request being determined to be during the first cross-region replication; and performing, by the computing system, a second operation in accordance with the timing of the snapshot deletion request being determined to be between the first and the second cross-region replications.
Clause 2: The method of clause 1, wherein the first operation comprises withholding the snapshot deletion request by the source file system until an end of the first cross-region replication.
Clause 3: The method of clause 1 or clause 2, wherein withholding the snapshot deletion request comprises storing metadata information of the snapshot by the source file system in a database for communicating to the target file system.
Clause 4: The method of any of clauses 1 to 3, wherein withholding the snapshot deletion request comprises: transferring the snapshot by the source file system to the target file system to complete the first cross-region replication; and deleting the snapshot by the target file system at the end of the second cross-region replication.
Clause 5: The method of any of clauses 1 to 4, wherein the second operation comprises deleting the snapshot by the source file system without transferring the snapshot to the target file system.
Clause 6: The method of any of clauses 1 to 5, further comprising determining a second timing of generating the snapshot of the source file system.
Clause 7: The method of clause 6, further comprising transferring the generated snapshot to the target file system during the second cross-region replication in accordance with the second timing of the snapshot generation being determined to be during the first cross-region replication and in accordance with the timing of the snapshot deletion request being determined to be after the second cross-region replication.
Clause 8: A non-transitory computer-readable medium storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: generating, by a computing system, a snapshot of a source file system in a source region; performing, by the computing system, a first cross-region replication and a second cross-region replication between the source file system in the source region and a target file system in a target region, the source region and the target region being in different regions; receiving, by the computing system, a snapshot deletion request in the source file system to delete the snapshot; determining, by the computing system, a timing of the snapshot deletion request in the source file system; performing, by the computing system, a first operation in accordance with the timing of the snapshot deletion request being determined to be during the first cross-region replication; and performing, by the computing system, a second operation in accordance with the timing of the snapshot deletion request being determined to be between the first and the second cross-region replications.

Clause 9: The non-transitory computer-readable medium of clause 8, wherein the first operation comprises withholding the snapshot deletion request by the source file system until an end of the first cross-region replication.
Clause 10. The non-transitory computer-readable medium of clause 8 or clause 9, wherein withholding the snapshot deletion request comprises storing metadata information of the snapshot by the source file system in a database for communicating to the target file system.
Clause 11. The non-transitory computer-readable medium of any of clauses 8 to 10, wherein withholding the snapshot deletion request comprises: transferring the snapshot by the source file system to the target file system to complete the first cross-region replication; and deleting the snapshot by the target file system at the end of the second cross-region replication.
Clause 12. The non-transitory computer-readable medium of any of clauses 8 to 11, wherein the second operation comprises deleting the snapshot by the source file system without transferring the snapshot to the target file system.
Clause 13. The non-transitory computer-readable medium of any of clauses 8 to 12, the operations further comprising determining a second timing of generating the snapshot of the source file system.
Clause 14: The non-transitory computer-readable medium of clause 13, the operations further comprising transferring the generated snapshot to the target file system during the second cross-region replication in accordance with the second timing of the snapshot generation being determined to be during the first cross-region replication and in accordance with the timing of the snapshot deletion request being determined to be after the second cross-region replication.
Clause 15. A system, comprising: one or more processors; and one or more computer readable media storing computer-executable instructions that, when executed by the one or more processors, cause the system to: create a snapshot of a source file system in a source region; perform a first cross-region replication and a second cross-region replication between the source file system in the source region and a target file system in a target region, the source region and the target region being in different regions; receive a snapshot deletion request in the source file system to delete the snapshot; determine a timing of the snapshot deletion request in the source file system; perform a first operation in accordance with the timing of the snapshot deletion request being determined to be during the first cross-region replication; and perform a second operation in accordance with the timing of the snapshot deletion request being determined to be between the first and the second cross-region replications.
Clause 16. The system of clause 15, wherein the first operation comprises withholding the snapshot deletion request by the source file system until an end of the first cross-region replication.
Clause 17. The system of clause 15 or clause 16, wherein withholding the snapshot deletion request comprises: transferring the snapshot by the source file system to the target file system to complete the first cross-region replication; and deleting the snapshot by the target file system at the end of the second cross-region replication.
Clause 18. The system of any of clauses 15 to 17, wherein the second operation comprises deleting the snapshot by the source file system without transferring the snapshot to the target file system.
Clause 19. The system of any of clauses 15 to 18, wherein the system is further caused to determine a second timing of generating the snapshot of the source file system.
Clause 20. The system of clause 19, wherein the system is further caused to transfer the generated snapshot to the target file system during the second cross-region replication in accordance with the second timing of the snapshot generation being determined to be during the first cross-region replication and in accordance with the timing of the snapshot deletion request being determined to be after the second cross-region replication.
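The timing-dependent handling recited in Clauses 1-20 can be summarized with the following minimal sketch; the class and function names are hypothetical and the sketch is illustrative only, not the claimed implementation:

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class DeletionTiming(Enum):
    DURING_FIRST_REPLICATION = auto()
    BETWEEN_REPLICATIONS = auto()
    AFTER_SECOND_REPLICATION = auto()


@dataclass
class SourceFileSystem:
    snapshots: dict = field(default_factory=dict)   # snapshot id -> metadata
    withheld: dict = field(default_factory=dict)    # deletions deferred until replication ends

    def handle_deletion_request(self, snapshot_id: str, timing: DeletionTiming) -> str:
        if timing is DeletionTiming.DURING_FIRST_REPLICATION:
            # First operation: withhold the deletion and keep the snapshot's
            # metadata so it can still be transferred to the target file system.
            self.withheld[snapshot_id] = self.snapshots[snapshot_id]
            return "withheld until the end of the first cross-region replication"
        if timing is DeletionTiming.BETWEEN_REPLICATIONS:
            # Second operation: the target never needed this snapshot, so it is
            # deleted at the source without being transferred.
            self.snapshots.pop(snapshot_id, None)
            return "deleted at source without transfer"
        # After the second replication the snapshot has already been transferred,
        # so deletion can proceed normally in the source region.
        self.snapshots.pop(snapshot_id, None)
        return "deleted after replication completed"
```

Consistent with Clause 4, a withheld snapshot is still transferred to complete the first cross-region replication and is dropped by the target file system only at the end of the second cross-region replication.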

Claims

WHAT IS CLAIMED IS:
1. A method, comprising: generating, by a computing system, a first snapshot and a second snapshot in a source file system in a source region; assigning, by the computing system, a first provenance identification to the first snapshot and a second provenance identification to the second snapshot in the source file system, the first provenance identification being unique among all snapshots in all regions and the second provenance identification being unique among all snapshots in all regions; receiving, by the computing system, a request to perform a replication between the source file system in the source region and a target file system in a target region, the source region and the target region being in different regions; comparing, by the computing system, the first provenance identification in the source file system to provenance identification of existing snapshots in the target region at least in response to the request; identifying, by the computing system, a matched snapshot with the first provenance identification in the target region to use as a base snapshot for the replication based at least in part on the comparison; and performing, by the computing system, the replication using deltas between the second snapshot and the base snapshot in the source file system.
2. The method of claim 1, further comprising selecting the matched snapshot as the base snapshot at least in response to the matched snapshot with the first provenance identification in the target region being in the target file system.
3. The method of claim 1 or claim 2, wherein the target region comprises a non-target file system having a snapshot associated with the first provenance identification.
4. The method of any one of claims 1 to 3, further comprising: performing an in-region copying of the matched snapshot with the first provenance identification from the non-target file system to the target file system at least in response to the matched snapshot with the first provenance identification in the target region not being in the target file system; and selecting the in-region copy of the matched snapshot in the target file system as the base snapshot.
5. The method of any one of claims 1 to 4, wherein the in-region copy of the matched snapshot in the target file system has the same first provenance identification but different resource identification from the matched snapshot in the non-target file system.
6. The method of any of the preceding claims, further comprising selecting the first snapshot with the first provenance identification in the source file system as the base snapshot at least in response to no matched snapshot with the first provenance identification being found in the target region.
7. The method of any one of claims 1 to 6, further comprising performing a cross-region copying of the first snapshot with the first provenance identification from the source file system to the target file system before generating the deltas between the second snapshot and the base snapshot in the source file system.
8. A non-transitory computer-readable medium storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: generating, by a computing system, a first snapshot and a second snapshot in a source file system in a source region; assigning, by the computing system, a first provenance identification to the first snapshot and a second provenance identification to the second snapshot in the source file system, the first provenance identification being unique among all snapshots in all regions and the second provenance identification being unique among all snapshots in all regions; receiving, by the computing system, a request to perform a replication between the source file system in the source region and a target file system in a target region, the source region and the target region being in different regions; comparing, by the computing system, the first provenance identification in the source file system to provenance identifications of existing snapshots in the target region at least in response to the request; identifying, by the computing system, a matched snapshot with the first provenance identification in the target region to use as a base snapshot for the replication based at least in part on the comparison; and performing, by the computing system, the replication between the source file system and the target file system by using deltas between the second snapshot and the base snapshot in the source file system.
9. The non-transitory computer-readable medium of claim 8, the operations further comprising selecting the matched snapshot as the base snapshot at least in response to the matched snapshot with the first provenance identification in the target region being in the target file system.
10. The non-transitory computer-readable medium of claim 8 or claim 9, wherein the target region comprises a non-target file system having a snapshot associated with the first provenance identification.
11. The non-transitory computer-readable medium of claim 10, the operations further comprising: performing an in-region copying of the matched snapshot with the first provenance identification from the non-target file system to the target file system at least in response to the matched snapshot with the first provenance identification in the target region not being in the target file system; and selecting the in-region copy of the matched snapshot in the target file system as the base snapshot.
12. The non-transitory computer-readable medium of any of claims 8 to 11, the operations further comprising selecting the first snapshot with the first provenance identification in the source file system as the base snapshot at least in response to no matched snapshot with the first provenance identification being found in the target region.
13. The non-transitory computer-readable medium of claim 12, the operations further comprising performing a cross-region copying of the first snapshot with the first provenance identification from the source file system to the target file system before generating the deltas between the second snapshot and the base snapshot in the source file system.
14. The non-transitory computer-readable medium of claim 13, wherein the cross-region copy of the first snapshot in the target file system has the same first provenance identification but different resource identification from the first snapshot in the source file system.
15. A system, comprising: one or more processors; and one or more computer readable media storing computer-executable instructions that, when executed by the one or more processors, cause the system to: generate a first snapshot and a second snapshot in a source file system in a source region; assign a first provenance identification to the first snapshot and a second provenance identification to the second snapshot in the source file system, the first provenance identification being unique among all snapshots in all regions and the second provenance identification being unique among all snapshots in all regions; receive a request to perform a replication between the source file system in the source region and a target file system in a target region, the source region and the target region being in different regions; compare the first provenance identification in the source file system to provenance identifications of existing snapshots in the target region at least in response to the request; identify a matched snapshot with the first provenance identification in the target region to use as a base snapshot for the replication based at least in part on the comparison; and perform the replication between the source file system and the target file system by using deltas between the second snapshot and the base snapshot in the source file system.
16. The system of claim 15, wherein the system is further caused to select the matched snapshot as the base snapshot at least in response to the matched snapshot with the first provenance identification in the target region being in the target file system.
17. The system of claim 15 or 16, wherein the target region comprises a non-target file system having a snapshot associated with the first provenance identification.
18. The system of claim 17, wherein the system is further caused to: perform an in-region copying of the matched snapshot with the first provenance identification from the non-target file system to the target file system at least in response to the matched snapshot with the first provenance identification in the target region not being in the target file system; and select the in-region copy of the matched snapshot in the target file system as the base snapshot.
19. The system of any of claims 15 to 18, wherein the system is further caused to select the first snapshot with the first provenance identification in the source file system as the base snapshot at least in response to no matched snapshot with the first provenance identification being found in the target region.
20. The system of claim 19, wherein the system is further caused to perform a cross-region copying of the first snapshot with the first provenance identification from the source file system to the target file system before generating the deltas between the second snapshot and the base snapshot in the source file system.
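Purely as a non-limiting illustration of the base-snapshot selection recited in claims 1-7 above, the following sketch compares provenance identifications in the target region; every name in it is hypothetical and it is not the claimed implementation:

```python
from typing import Dict, Tuple


def select_base_snapshot(first_prov_id: str,
                         target_fs_snapshots: Dict[str, str],
                         other_target_region_snapshots: Dict[str, str]) -> Tuple[str, str]:
    """Return (strategy, provenance id) used to pick the base snapshot.

    Both dictionaries map a provenance identification to a resource identification;
    the first covers the target file system, the second covers non-target file
    systems elsewhere in the target region.
    """
    if first_prov_id in target_fs_snapshots:
        # Matched snapshot already sits in the target file system: use it as-is.
        return ("use-target-file-system-snapshot", first_prov_id)
    if first_prov_id in other_target_region_snapshots:
        # Matched snapshot lives in a non-target file system: copy it in-region.
        # The copy keeps the provenance identification but gets a new resource id.
        return ("in-region-copy-then-use", first_prov_id)
    # No match anywhere in the target region: copy the first snapshot cross-region
    # from the source file system before generating and applying the deltas.
    return ("cross-region-copy-from-source", first_prov_id)
```

The replication itself then transfers only the deltas between the second snapshot and whichever base snapshot was selected.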
PCT/US2023/024236 2022-06-16 2023-06-02 Techniques for efficient replication and recovery WO2023244447A1 (en)

Applications Claiming Priority (12)

Application Number Priority Date Filing Date Title
US202263352992P 2022-06-16 2022-06-16
US63/352,992 2022-06-16
US202263357526P 2022-06-30 2022-06-30
US63/357,526 2022-06-30
US202263412243P 2022-09-30 2022-09-30
US63/412,243 2022-09-30
US202263378486P 2022-10-05 2022-10-05
US63/378,486 2022-10-05
US18/169,121 2023-02-14
US18/169,124 US20230409539A1 (en) 2022-06-16 2023-02-14 Techniques for maintaining snapshot data consistency during file system cross-region replication
US18/169,124 2023-02-14
US18/169,121 US20230409538A1 (en) 2022-06-16 2023-02-14 Techniques for efficient replication and recovery

Publications (1)

Publication Number Publication Date
WO2023244447A1 true WO2023244447A1 (en) 2023-12-21

Family

ID=87136620

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/024236 WO2023244447A1 (en) 2022-06-16 2023-06-02 Techniques for efficient replication and recovery

Country Status (1)

Country Link
WO (1) WO2023244447A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090307277A1 (en) * 2008-06-04 2009-12-10 Microsoft Corporation Generation of database deltas and restoration
US20120317079A1 (en) * 2011-06-08 2012-12-13 Kurt Alan Shoens Systems and methods of data replication of a file system
US10459632B1 (en) * 2016-09-16 2019-10-29 EMC IP Holding Company LLC Method and system for automatic replication data verification and recovery

Similar Documents

Publication Publication Date Title
US10691716B2 (en) Dynamic partitioning techniques for data streams
WO2019154394A1 (en) Distributed database cluster system, data synchronization method and storage medium
CA2929777C (en) Managed service for acquisition, storage and consumption of large-scale data streams
US9858322B2 (en) Data stream ingestion and persistence techniques
CA2930026C (en) Data stream ingestion and persistence techniques
US10482104B2 (en) Zero-data loss recovery for active-active sites configurations
US9276959B2 (en) Client-configurable security options for data streams
US10635644B2 (en) Partition-based data stream processing framework
US20200026786A1 (en) Management and synchronization of batch workloads with active/active sites using proxy replication engines
KR20140014268A (en) Cloud storage
US20240015143A1 (en) Cross-regional replication of keys
WO2023244491A1 (en) Techniques for replication checkpointing during disaster recovery
US20230409538A1 (en) Techniques for efficient replication and recovery
US20240061814A1 (en) Techniques for maintaining snapshot key consistency involving garbage collection during file system cross-region replication
US20240094937A1 (en) Concurrent and non-blocking object deletion for cross-region replications
CN117643015A (en) Snapshot-based client-side key modification of log records manages keys across a series of nodes
US20240104062A1 (en) Techniques for resolving snapshot key inter-dependency during file system cross-region replication
US20240134828A1 (en) Techniques for efficient encryption and decryption during file system cross-region replication
WO2023244447A1 (en) Techniques for efficient replication and recovery
US20240086417A1 (en) Techniques for replication-aware resource management and task management of file systems
WO2023244601A1 (en) End-to-end restartability of cross-region replication using a new replication
WO2023244449A1 (en) Hierarchical key management for cross-region replication
WO2023244446A1 (en) Scalable and secure cross-region and optimized file system replication for cloud scale
US11188522B2 (en) Streamlined database commit for synchronized nodes
KR101929948B1 (en) Method and system for data type based multi-device synchronization

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23738250

Country of ref document: EP

Kind code of ref document: A1