CN117693743A - Memory controller and method for shared memory storage - Google Patents

Memory controller and method for shared memory storage

Info

Publication number
CN117693743A
CN117693743A
Authority
CN
China
Prior art keywords
file
memory
server
backup
files
Legal status
Pending
Application number
CN202180100656.9A
Other languages
Chinese (zh)
Inventor
伊塔玛·菲克
迈克尔·赫希
阿萨夫·纳塔逊
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Publication of CN117693743A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/18 File system types
    • G06F 16/182 Distributed file systems
    • G06F 16/1824 Distributed file systems implemented using Network-attached Storage [NAS] architecture
    • G06F 16/1827 Management specifically adapted to NAS
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/17 Details of further file system functions
    • G06F 16/178 Techniques for file synchronisation in file systems


Abstract

A memory controller for use in a shared memory storage system that includes a source memory server and a target memory server for storing files. The memory controller is configured to connect to the source memory server and initiate a session that is synchronized using a high-precision clock, wherein the high-precision clock has a precision that exceeds a timing threshold. The memory controller also receives a request associated with the source memory server, enters the request into a log file, and initiates a backup of the source memory server on the target memory server, in which one or more files of the source memory server are copied to the target memory server; the copied files are backup files. The memory controller further analyzes the backup files, adjusts the log file, and replays the requests in the log file on the target memory server. An initial consistent synchronization point is thereby created for a shared memory storage system with multiple clients.

Description

Memory controller and method for shared memory storage
Technical Field
The present disclosure relates generally to the field of shared storage and data replication; and more particularly to a memory controller and method for use in a shared memory storage system.
Background
Shared storage (e.g., network-attached storage (NAS)) is widely used as a convenient way to store and share data files. Network-attached storage stores data received from multiple clients at a source site and is therefore also commonly referred to as a NAS share. This data is further stored as backup data at a target site, e.g., a target share. Typically, a data backup is used to protect and restore data when source-site data is lost. Examples of data loss events may include, but are not limited to, data corruption, hardware or software failures at the source site, accidental deletion of data, hacking, or malicious attacks. Thus, for security reasons, a separate backup storage or target share is widely used to store backups of the data present at a source site.
Traditionally, a NAS share is in continuous use by multiple clients for storing new or updated data. A data replication solution is needed to store this data from the NAS to the target share as a backup. Some NAS manufacturers offer solutions for data replication between their own storage devices, i.e., they require that the NAS share and the target share belong to the same product manufacturer or a compatible manufacturer. These schemes force users to use hardware and software products from the same manufacturer (or vendor), resulting in an undesirable vendor lock-in situation. In addition, some conventional data replication schemes are based on continuously replicating snapshot differences through an application programming interface (API). However, conventional snapshot-API-based solutions involve scanning the entire file system and only detect changes at the granularity of whole files. This is very inefficient and impractical when applications (e.g., databases) modify specific data within very large files. For example, if a small change is made in a very large file, the snapshot difference will indicate only that the file has been modified. Thus, the entire file will be replicated rather than just the small incremental change made within it, making the data replication solution inefficient and impractical. In addition, some conventional data replication solutions monitor changed files, either by periodically scanning the entire shared storage or by using facilities on the client to monitor changed files. However, such conventional data replication schemes introduce race conditions, because there may be input/output (IO) to a file while that file is being replicated. Thus, conventional data replication schemes for shared memory storage suffer from the technical problems of inefficiency and unreliability.
Thus, in light of the above discussion, there is a need to overcome the above-described drawbacks associated with conventional data replication schemes.
Disclosure of Invention
The present disclosure provides a memory controller and method for use in a shared memory storage system. The present disclosure provides a solution to the existing problem of unreliability and inefficiency in conventional data replication solutions for shared memory storage. The problem is compounded by the fact that, in existing systems, the source site and the target site depend on compatible vendor services: users are bundled into, or forced to employ, hardware and software solutions from the same manufacturer (or vendor) in both the source and target shared storage systems, which increases the difficulty of solving the problem of unreliable data replication and data recovery. It is an object of the present disclosure to provide a solution that at least partly overcomes the problems encountered in the prior art and provides an improved data replication solution by creating an initial consistent synchronization point with active shared storage serving multiple clients, while introducing minimal delay in the data stream and requiring no additional reads of shared content. In addition, the disclosed solution eliminates the vendor lock-in problem because it does not rely on compatible vendor or manufacturer services at the source and target sites.
One or more objects of the present disclosure are achieved by the solutions provided in the appended independent claims. Advantageous embodiments of the present disclosure are further defined in the dependent claims.
In one aspect, the present disclosure provides a memory controller. The memory controller is for use in a shared memory storage system that includes a source memory server and a target memory server for storing files having metadata and data. The memory controller is configured to: connect to the source memory server; initiate a session that is synchronized with a high-precision clock, wherein the high-precision clock has a precision that exceeds a timing threshold; receive a request related to the source memory server; enter the request into a log file; initiate a backup of the source memory server at the target memory server, wherein one or more files of the source memory server are copied to the target memory server, the copied files being backup files; analyze the backup files and adjust the log file; and replay the requests in the log file on the target memory server.
The memory controller of the present disclosure provides an improved data replication solution for the shared memory storage system that is independent of the device manufacturer (or vendor). Thus, the source memory server and the target memory server need not belong to the same product manufacturer or a compatible manufacturer. This eliminates the problem of vendor lock-in. In addition, since the memory controller is configured to synchronize the file operations of multiple clients using the log file, race conditions are avoided and any possibility of reading overwritten data is eliminated. Furthermore, the data replication scheme provided by the memory controller is implemented in a fully distributed manner, as each client is responsible for its own log records. Further, by adjusting the log file, an initial synchronization point is established that places the target memory server in a known state consistent with the state of the source memory server at the beginning of a given sequence of incremental changes. Moreover, adjusting the log file does not require any additional reads of shared content. Therefore, the memory controller provides a reliable and efficient data replication solution for the shared memory storage system.
In one implementation, the memory controller is further configured to input the request to a log file along with a timestamp of the request generated by the high precision clock.
The timestamp of the request in the log file enables the memory controller to efficiently synchronize file operations of multiple clients, thereby avoiding single point failure problems without introducing any significant latency in the data stream.
In another implementation, the memory controller is further configured to analyze the backup files and adjust the log file by: generating a map of the backup files, wherein each file of the source memory server is mapped to a file in the target memory server; determining whether the metadata of a backup file has been changed and, if so, marking the backup file as an orphaned file; determining, for each orphaned file, which other backup files are affected by the changes in the orphaned file's metadata and linking the orphaned file to those other backup files in the map; and deleting the orphaned file from the log file.
Advantageously, the adapted log file includes all metadata that is important to the log record, which in turn establishes an initial synchronization point to place the target memory server in a known state that is consistent with the state of the source memory server at the beginning of certain incremental change sequences.
In another implementation, the memory controller is further configured to determine that a request in the log file relates to a file that is not a backup file, determine whether there is a request in the log file that creates the file, and, if not, read the file from the source memory server and copy it to the target memory server before replaying the log file.
Advantageously, duplicate copies of the backup file may be avoided.
In another implementation, the memory controller is further configured to replay the log file by executing all requests in the log file in the order of the time stamps.
Advantageously, the memory controller ensures an improved data replication solution, without requiring programming access to the source memory server, nor re-reading all data written to the source memory server, as the log file includes the complete file operations and metadata sets required to replay them at a remote location.
In another implementation, the memory controller includes a client controller. The client controller is configured to: connecting to the source memory server; initiating the session synchronized by the high-precision clock; receiving the request related to the source memory server; inputting the request into the log file; initiating the backup.
By the client controller, file operations of one or more clients are efficiently tracked and coordinated to ensure that files present on the source memory server are reliably backed up on the target memory server.
In another implementation, the client controller is further configured to execute a replicator sequencer.
The replicator sequencer solves the initial log synchronization problem of an active shared file system (e.g., a shared memory storage system) without requiring a complete suspension of production IO operations.
In another implementation, the memory controller includes a target controller. The target controller is configured to analyze the backup files, adjust the log file, and replay the requests in the log file on the target memory server.
The backup files created by the client controller are efficiently analyzed by the target controller to ensure that all files present at the source memory server are reliably backed up at the target memory server.
In another implementation, the target controller is further configured to execute a replicator receiver.
The data backup or replication process of the target site (e.g., the target memory server) is successfully completed by the replicator receiver.
In another aspect, the present disclosure provides a method for use in a shared memory storage system that includes a source memory server and a target memory server for storing files having metadata and data. The method comprises: initiating a session that is synchronized with a high-precision clock, wherein the high-precision clock has a precision that exceeds a timing threshold; receiving a request related to the source memory server; entering the request into a log file; initiating a backup of the source memory server at the target memory server, wherein one or more files of the source memory server are copied to the target memory server, the copied files being backup files; analyzing the backup files and adjusting the log file; and replaying the requests in the log file on the target memory server.
The method of the present disclosure provides an improved data replication solution for the shared memory storage system that is independent of the device manufacturer (or vendor). Thus, the source memory server and the target memory server need not belong to the same product manufacturer or a compatible manufacturer, which eliminates the vendor lock-in issue. Furthermore, since the method synchronizes the file operations of multiple clients using the log file, race conditions are avoided and any possibility of reading overwritten data is eliminated. Furthermore, the data replication solution provided by the method is implemented in a fully distributed manner, since each client is responsible for its own log records. Therefore, the method provides a reliable and efficient data replication solution for the shared memory storage system.
In yet another aspect, the present disclosure provides a computer-readable medium comprising instructions that, when loaded into and executed by a memory controller, enable the memory controller to implement the method of the above aspect.
The computer readable medium realizes all the advantages and effects of the corresponding method of the present disclosure.
It should be noted that all devices, elements, circuits, units, and modules described in this application may be implemented in software elements or hardware elements or any combination thereof. The steps performed by the various entities described in this application, and the functions described as being performed by those entities, are intended to mean that the respective entities are adapted to perform those steps and functions. Even where, in the description of specific embodiments below, a specific function or step performed by an external entity is not reflected in the description of a specific detailed element of the entity performing that step or function, it should be clear to a person skilled in the art that these methods and functions may be implemented in respective software or hardware elements, or in any combination thereof. It will be appreciated that features of the present disclosure are susceptible to being combined in various combinations without departing from the scope of the present disclosure as defined by the appended claims.
Other aspects, advantages, features and objects of the present disclosure will become apparent from the accompanying drawings and the detailed description of illustrative implementations explained in conjunction with the following appended claims.
Drawings
The foregoing summary, as well as the following detailed description of illustrative embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the disclosure, there is shown in the drawings exemplary constructions of the disclosure. However, the present disclosure is not limited to the specific methods and instrumentalities disclosed herein. Moreover, those skilled in the art will appreciate that the drawings are not drawn to scale. Identical elements are denoted by the same numerals, where possible.
Embodiments of the present disclosure will now be described, by way of example only, with reference to the following drawings.
FIG. 1A is a diagram of a network environment for a shared memory storage system provided by an embodiment of the present disclosure;
FIG. 1B is a block diagram of various exemplary components of a memory controller provided by embodiments of the present disclosure;
FIG. 2 is a flow chart of a method for use in a shared memory storage system provided by an embodiment of the present disclosure;
FIG. 3 is an exemplary sequence diagram of a data replication solution provided by an embodiment of the present disclosure;
FIG. 4 is an exemplary timing diagram of a data replication scheme provided by an embodiment of the present disclosure.
In the drawings, the underlined numbers are used to denote items where the underlined numbers are located or items adjacent to the underlined numbers. The non-underlined number is associated with the item identified by the line linking the non-underlined number to the item. When a number is not underlined but with an associated arrow, the number without the underline is used to identify the general item to which the arrow refers.
Detailed Description
The following detailed description describes embodiments and implementations of the present disclosure. While some embodiments of the present disclosure have been disclosed, those skilled in the art will recognize that other embodiments for practicing or practicing the present disclosure can be implemented as well.
Fig. 1A is a network environment diagram of a shared memory storage system according to an embodiment of the present disclosure. Referring to FIG. 1A, a network environment diagram 100 of a shared memory storage system 102 is shown. The shared memory storage system 102 includes a source memory server 104, a memory controller 106, and a target memory server 108. In one implementation, memory controller 106 also includes a client controller 110 and a target controller 112.
Shared memory storage system 102 refers to a computer data storage system for storing and sharing data files. The shared memory storage system 102 provides faster data access, easier management, and simple configuration. In addition, the shared memory storage system 102 stores new or updated data received from multiple clients simultaneously in order to provide communication between them and avoid storing redundant copies of data files. The shared memory storage system 102 includes one or more source memory servers (e.g., source memory server 104), one or more processors (e.g., the memory controller 106), and one or more target memory servers (e.g., the target memory server 108). Examples of the shared memory storage system 102 include, but are not limited to, a network-attached-storage (NAS) system, a cloud server, a file storage system, a block storage system, an object storage system, or a combination thereof.
The source memory server 104 refers to a file-level computer data store that is connected to a computer network, such as a low-latency communications network, that provides data access to a heterogeneous group of one or more clients. The source memory server 104 may comprise suitable logic, circuitry, and/or interfaces that may be operable to receive and store data from one or more clients. The source memory server 104 may also be referred to as a NAS share. The source memory server 104 supports multiple file service protocols and may enable clients to share (i.e., receive or transmit) data files across different operating environments (e.g., UNIX or Windows). In one example, the source memory server 104 may be a data center that may include one or more hard disk drives, solid-state drives, or persistent memory modules operating as logical storage, redundant storage containers, or a redundant array of inexpensive disks (RAID).
The memory controller 106 refers to a central sequencer that inserts synchronization points into its log stream. The memory controller 106 coordinates these synchronization points to initiate backups of the data files with high accuracy. The synchronization points define dataset boundaries, where a dataset is a log segment that is coordinated among multiple clients. The memory controller 106 may comprise suitable logic, circuitry, interfaces, and/or code that may be operable to perform memory control procedures within the shared memory storage system 102. Examples of implementations of the memory controller 106 may include, but are not limited to, a central sequencer, a central data processing device, a NAS file oplog merge device, and the like. The various components of the memory controller 106 are explained in detail in FIG. 1B.
Memory controller 106 includes a client controller 110 and a target controller 112. Client controller 110 refers to a source site replication agent that records file operations at timed intervals and transmits the data as a data set to a replicator sequencer. Client controller 110 may also be referred to as an IO splitter. The client controller 110 may comprise suitable logic, circuitry, interfaces and/or code that may be operable to connect to the source memory server 104; initiating a session synchronized with a high precision clock; receiving a request associated with a source memory server 104; inputting the request into a log file; a backup is initiated. In addition, target controller 112 refers to a target site replication agent that executes a replicator receiver. The target controller 112 may comprise suitable logic, circuitry, interfaces and/or code that may enable analysis of the backup file, adjustment of the log file, and replay of requests in the log file on the target memory server 108.
The target memory server 108 refers to a file-level computer data store that is connected to a computer network (e.g., a low-latency communication network) to provide a data backup for the data files stored in the source memory server 104. The target memory server 108 may comprise suitable logic, circuitry, and/or interfaces that may be operable to back up the source memory server 104. The target memory server 108 may also be referred to as the target share. In the event of a data loss at the source site (i.e., the source memory server 104), the target memory server 108 is used to protect and recover the data. Examples of data loss events may include, but are not limited to, data corruption, hardware or software failures at the source site, accidental deletion of data, hacking, or malicious attacks. Thus, for security reasons, a separate backup storage or target share, such as the target memory server 108, is widely used to store backups of the data present in the source memory server 104. Examples of the target memory server 108 include, but are not limited to, a data storage system, a cloud server, a network-attached storage (NAS) system, a file storage system, a block storage system, an object storage system, or a combination thereof.
FIG. 1B is a block diagram of various exemplary components of a memory controller provided by embodiments of the present disclosure. FIG. 1B is described in connection with the elements of FIG. 1A. Referring to FIG. 1B, a block diagram of the memory controller 106 is shown. In one implementation, the memory controller 106 includes a client controller 110, a target controller 112, a network interface 114, a memory 116 (local memory), a clock 118, and control circuitry 120. In addition, the memory 116 may store a log file 122.
The network interface 114 includes software or hardware interfaces that may be used to establish communications between the source memory server 104, the memory controller 106, and the target memory server 108. Examples of network interface 114 may include, but are not limited to, a computer port, a network socket, a network interface controller (network interface controller, NIC), and any other network interface device.
The memory 116 may comprise suitable logic, circuitry, and/or interfaces that may be operable to store machine code and/or instructions executable by the memory controller 106. Examples of implementations of the memory 116 may include, but are not limited to, random access memory (RAM), a hard disk drive (HDD), flash memory, a solid-state drive (SSD), and/or a CPU cache.
The clock 118 of the memory controller 106 may refer to a high precision clock used to synchronize file operations of the memory controller 106.
The control circuitry 120 comprises suitable logic that may be used to send a plurality of dataset synchronization messages to synchronize the logging operations of one or more clients. Examples of the control circuitry 120 may include, but are not limited to, microprocessors, microcontrollers, complex instruction set computing (CISC) processors, application-specific integrated circuit (ASIC) processors, reduced instruction set computing (RISC) processors, very long instruction word (VLIW) processors, central processing units (CPUs), state machines, data processing units, and other processors or control circuits. In one implementation, the operations performed by the memory controller 106 may be performed and controlled by the control circuitry 120.
The log file 122 of the memory controller 106 uses the data structure of a journaling file system, which is a fault-tolerant file system. In the event of a system failure, the log file 122 ensures that the data can be restored to its pre-crash configuration. It can also recover unsaved data and store it in the location it would have occupied had the computer not crashed. Because the log file 122 is captured at the level of individual file operations on each client, rather than at one central device, individual writes to a file are captured. These writes can therefore be replicated, avoiding the replication of entire large files when users on one or more clients apply only small updates. The information in the log file 122 is useful for synchronizing file operations received from one or more clients and for avoiding race conditions, since it eliminates any need to re-read data that may have been overwritten.
In operation, the memory controller 106 is used to connect to the source memory server 104. The memory controller 106 is operable to connect to the source memory server 104 via a wired or wireless network using known protocols including, but not limited to, LAN, WLAN, internet protocol (Internet Protocol, IP), etc. As shown in fig. 1A, the memory controller 106 is directly connected to the source memory server 104 in the normal network, without using a gateway or increasing the number of hops, so as to avoid the problem of single point failure caused by the gateway, and minimize the delay in the data stream. In one example, the source memory server 104 provides scalable shared storage for multiple clients and may act as the primary storage for storing data.
The memory controller 106 is also configured to initiate a session that is synchronized using the high-precision clock 118, the high-precision clock 118 having a precision that exceeds a timing threshold. The timing threshold is defined to be finer than the operating system's timing granularity, better than a microsecond (e.g., per IEEE 1588), i.e., finer than the time it takes to process an IO request. Thus, the memory controller 106 ensures that the clocks of one or more clients are synchronized accurately, to at least the time resolution of IO operations. The high-precision clock 118 of the memory controller 106 thereby synchronizes the initial known consistent crash backup point used to initiate a continuous replication session.
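As a rough illustration of this precision requirement, the following minimal Python sketch (not part of the patent text) checks the reported resolution of the local real-time clock against a sub-microsecond threshold before a session is started. The names TIMING_THRESHOLD_NS and start_session are illustrative assumptions, and clock_getres is POSIX-only.

```python
import time

TIMING_THRESHOLD_NS = 1_000  # assumed sub-microsecond bound, in the spirit of IEEE 1588

def clock_is_precise_enough() -> bool:
    # Resolution of the real-time clock as reported by the OS (POSIX only).
    res_s = time.clock_getres(time.CLOCK_REALTIME)
    return res_s * 1e9 <= TIMING_THRESHOLD_NS

def start_session() -> int:
    if not clock_is_precise_enough():
        raise RuntimeError("clock precision does not exceed the timing threshold")
    # Record the session start with nanosecond resolution.
    return time.time_ns()
```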
The memory controller 106 is also configured to receive requests associated with the source memory server 104. The memory controller 106 receives a request to copy a data file stored in the source memory server 104 to the target memory server 108. The request is managed by the client controller 110. The request includes metadata (e.g., inodes) related to all data files to be replicated. A data file refers to a file written by one or more clients on source memory server 104.
The memory controller 106 is also configured to enter the request into a log file 122. Each of the one or more clients is responsible for independently logging its own file operations. In other words, since the log file 122 is captured at the level of individual file operations on each client, individual writes to a file are also captured. These writes can therefore be replicated, avoiding the replication of entire large files when one or more clients apply only small incremental changes. The log file 122 includes all the data that the one or more clients send to the source memory server 104. Thus, there is no need to re-read the data from the source memory server 104 at a later time, which avoids any possibility of race conditions and any possibility of reading overwritten data. In addition, logging begins before the backup process, and log files 122 are sent to the memory controller 106 periodically by the one or more clients, independent of the backup process.
According to one embodiment, the memory controller 106 is also configured to enter the request into the log file 122 together with a timestamp of the request generated by the high-precision clock 118. Each request in the log file 122 is timestamped to record when it occurred, and requests are entered into the log file 122 in the chronological order in which they occurred. The order of requests in the log file 122 thus reflects the time series of file operations at each of the one or more clients, which enables the memory controller 106 to efficiently synchronize the data of multiple clients, avoid single-point-of-failure problems, and reduce latency in the data stream.
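A minimal sketch of such a timestamped journal entry is shown below, assuming a simple append-only in-memory log; the class and field names (LogEntry, ClientLog, ts_ns, op, path, offset, payload) are illustrative and not taken from the patent.

```python
import time
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LogEntry:
    ts_ns: int                 # timestamp from the synchronized high-precision clock
    op: str                    # e.g., "write", "create", "link", "unlink", "rename"
    path: str                  # path on the source memory server
    offset: Optional[int] = None
    payload: bytes = b""       # the data written, captured at the client

class ClientLog:
    """Append-only journal kept independently by each client."""

    def __init__(self) -> None:
        self.entries: List[LogEntry] = []

    def record(self, op: str, path: str,
               offset: Optional[int] = None, payload: bytes = b"") -> None:
        # Entries are appended in the chronological order the client issued
        # them, each stamped with a nanosecond-resolution timestamp.
        self.entries.append(LogEntry(time.time_ns(), op, path, offset, payload))
```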
The memory controller 106 is further configured to initiate a backup of the source memory server 104 on the target memory server 108, wherein one or more files of the source memory server 104 are copied to the target memory server 108, the copied files being backup files. The memory controller 106 initiates a backup process that copies one or more data files from a source site, such as the source memory server 104, to a target site, such as the target memory server 108. The files replicated at the target memory server 108 are referred to as backup files. In the event of a loss of data in the source site (i.e., source memory server 104), a backup process is required to protect and restore the data. Examples of data loss events may include, but are not limited to, data corruption, hardware or software failures in the source site, accidental deletion of data, hacking, or malicious attacks. Thus, for security reasons, a separate backup storage or target shared server 108 is widely used to store backups of data present in the source memory server 104.
According to one embodiment, memory controller 106 includes a client controller 110. The client controller 110 is configured to: connect to source memory server 104; initiate a session synchronized with the high precision clock 118; receiving a request associated with a source memory server 104; inputting the request into log file 122; a backup is initiated. Accordingly, it is the responsibility of the client controller 110 to efficiently track and coordinate file operations for one or more clients in order to ensure that files present at the source memory server 104 are reliably backed up at the target memory server 108.
According to one embodiment, the client controller 110 is also configured to execute a replicator sequencer. The replicator sequencer refers to an agent that gathers the records of IO operations transmitted by one or more clients in the form of datasets, processes the records, and transmits the incremental changes in them. Thus, the replicator sequencer solves the initial log synchronization problem of an active shared file system (e.g., the shared memory storage system 102) without requiring any complete suspension of production IO operations.
The memory controller 106 is also used to analyze the backup files and adjust the log file 122. The memory controller 106 analyzes the backup files to create a translation table that maps the source metadata (e.g., inodes) to the actual restored metadata, and adjusts the log file 122 accordingly. A backup file is a file copied from the source memory server 104 to the target memory server 108. The analysis is performed by the target controller 112. Analyzing the backup files means processing them so as to capture every potential instance of the small, incremental changes that may have occurred, without scanning the entire backup file. In addition, the memory controller 106 processes the data received from the one or more clients in the form of log files 122. This processing is performed in the background, without requiring the one or more clients to wait for it to complete, and it enables reliable and efficient synchronization of data among the one or more clients without introducing any delay into the production data path (i.e., the data stream). Further, the log file 122 is adjusted based on the result of the analysis. Adjusting the log file 122 means modifying it so that only the small incremental changes need to be applied, establishing an initial synchronization point that places the target memory server 108 in a known state consistent with the state of the source memory server 104 at the beginning of a given sequence of incremental changes. The initial synchronization point is the point at which the IO operations performed by the one or more clients to store data files in the source memory server 104 and the logged operations replayed by the memory controller 106 to back up those files at the target memory server 108 become consistent. In addition, adjusting the log file 122 does not require any additional reads of shared content. This improves the efficiency and reliability of the data replication solution when establishing a disaster recovery scheme.
According to one embodiment, the memory controller 106 is further configured to analyze the backup files and adjust the log file 122 by: generating a map of the backup files, wherein each file of the source memory server 104 is mapped to a file in the target memory server 108; determining whether the metadata of a backup file has been changed and, if so, marking the backup file as an orphaned file; determining, for each orphaned file, which other backup files are affected by the changes in the orphaned file's metadata and linking the orphaned file to those other backup files in the map; and deleting the orphaned file from the log file 122. The map is an inode map that maps each source inode to a target inode and includes the names (files or directories) associated with the source inode. If the metadata of an inode has been touched (i.e., a hard link was used to create an additional reference to the inode, a reference was deleted, the inode's parent directory changed (rename), or the inode's filename changed), the backup file is marked as an orphaned file. In addition, the log file 122 is used to compile, for each path at which the inode is found, a call graph of the relevant namespace operations (create, link, rename, unlink, and delete). This determines all the paths at which the inode should exist on the target site (i.e., the target memory server 108). The inode is then linked to all of its live paths (if any) and unlinked, or deleted, from the orphan directory. Thus, the adjusted log file 122 includes all the inodes that matter and should be journaled.
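The following Python sketch illustrates this analysis step under simplifying assumptions (it is not the patent's implementation): log entries are assumed to carry inode and old_path fields in addition to those sketched earlier, the inode map is taken as the identity for brevity, and adjust_log is an illustrative name.

```python
from collections import defaultdict

def adjust_log(backup_inodes, log_entries):
    # backup_inodes: {source_inode: {"meta_touched": bool, "paths": set}}
    inode_map = {src: src for src in backup_inodes}  # source->target map (identity here)
    live_paths = defaultdict(set)

    # Replay only the namespace operations to find every path at which an
    # inode should exist on the target.
    for e in log_entries:
        if e.op in ("create", "link"):
            live_paths[e.inode].add(e.path)
        elif e.op in ("unlink", "delete"):
            live_paths[e.inode].discard(e.path)
        elif e.op == "rename":
            live_paths[e.inode].discard(e.old_path)
            live_paths[e.inode].add(e.path)

    kept = []
    for src, info in backup_inodes.items():
        if not info["meta_touched"]:
            kept.append(src)                      # untouched: keep as-is
        elif live_paths[src]:
            info["paths"] = set(live_paths[src])  # relink orphan to its live paths
            kept.append(src)
        # else: orphan with no surviving path, dropped from the log
    return inode_map, kept
```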
According to one embodiment, the memory controller 106 is further configured to determine that a request in the log file 122 relates to a file that is not a backup file, determine whether there is a request in the log file 122 that creates the file, and, if not, read the file from the source memory server 104 and copy it to the target memory server 108 before replaying the log file. Inodes that exist before the backup and remain in the backup after it completes are kept in the log file 122 as they are. Such inodes have metadata that was not touched (i.e., not renamed, hard-linked, or deleted) during logging. For an inode that appears in the log file 122 but has no create record there and is not in the backup copy, the file is re-read from the source site (i.e., the source memory server 104).
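A minimal sketch of this fallback, assuming the structures above; read_from_source and copy_to_target are hypothetical helpers that read a file from the source share and seed it on the target.

```python
def resolve_missing_files(log_entries, backup_inodes,
                          read_from_source, copy_to_target):
    created = {e.inode for e in log_entries if e.op == "create"}
    referenced = {e.inode for e in log_entries}
    # Inodes referenced in the log, never created by a logged request,
    # and absent from the backup must be re-read from the source share.
    for inode in referenced - created - set(backup_inodes):
        data = read_from_source(inode)   # one extra read from the source
        copy_to_target(inode, data)      # seed the target before replay
```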
The memory controller 106 is also configured to replay the requests in the log file 122 on the target memory server 108. The log file 122 includes the requests from the one or more clients, which contain all the data associated with their file operations. For efficient and reliable replication, these requests are replayed by the memory controller 106 on the target memory server 108 to ensure that no client's data is lost. Thus, the memory controller 106 ensures an improved data replication solution without requiring programmatic access to the source memory server 104 and without re-reading all the data written to the source memory server 104, since the log file 122 includes the complete set of file operations and metadata required to replay them at a remote location.
According to one embodiment, the memory controller 106 is further configured to replay the log file 122 by executing all the requests in the log file 122 in timestamp order. For efficient and reliable replication, all the requests in the log file 122 are replayed by the memory controller 106 on the target memory server 108 in timestamp order, ensuring that the replicated data is synchronized with the data present at the source memory server 104.
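A minimal sketch of this replay loop, reusing the LogEntry fields assumed earlier; apply_on_target is a hypothetical helper that issues the equivalent file operation against the target memory server.

```python
def replay(log_entries, apply_on_target):
    # Execute every logged request in the order of its high-precision timestamp.
    for entry in sorted(log_entries, key=lambda e: e.ts_ns):
        apply_on_target(entry)
```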
According to one embodiment, the memory controller 106 includes a target controller 112. Target controller 112 is configured to analyze the backup file and adjust log file 122 and replay the request in log file 122 on target memory server 108. Accordingly, it is the responsibility of the target controller 112 to efficiently analyze the backup files created by the client controller 110 to ensure that all files present at the source memory server 104 are reliably backed up at the target memory server 108.
According to one embodiment, the target controller 112 is also configured to execute a replicator receiver. The replicator receiver refers to a replicator receiver on a target site (e.g., target memory server 108). The replicator receiver replays all requests in the log file 122 transmitted by the target controller 112 and successfully completes the data backup or replication process at the target site (e.g., the target memory server 108).
Accordingly, the memory controller 106 of the present disclosure provides an improved data replication solution for the shared memory storage system 102 that is independent of the device manufacturer (or vendor). Thus, the source memory server 104 and the target memory server 108 need not belong to the same product manufacturer or a compatible manufacturer. This eliminates the problem of vendor lock-in. In addition, since the memory controller 106 is configured to synchronize the file operations of multiple clients using the log file 122, race conditions are avoided and any possibility of reading overwritten data is eliminated. Furthermore, the data replication scheme provided by the memory controller 106 is implemented in a fully distributed manner, as each client is responsible for its own log records. Further, by adjusting the log file 122, an initial synchronization point is established that places the target memory server 108 in a known state consistent with the state of the source memory server 104 at the beginning of a given sequence of incremental changes. In addition, adjusting the log file 122 does not require any additional reads of shared content. Thus, the memory controller 106 provides a reliable and efficient data replication solution for the shared memory storage system 102 that captures all incremental changes in the log file 122.
Fig. 2 is a flowchart of a method for use in a shared memory storage system according to an embodiment of the present disclosure. Referring to fig. 2, a method 200 is shown. Fig. 2 is described in conjunction with the elements of fig. 1A and 1B. For example, method 200 is used in shared memory storage system 102 depicted in FIG. 1A. Method 200 includes steps 202 through 212. For example, the method 200 is performed by the memory controller 106 described in fig. 1A and 1B.
In step 202, the method 200 includes initiating a session that is synchronized with the high-precision clock 118, the high-precision clock 118 having a precision that exceeds a timing threshold. The timing threshold is defined to be finer than the operating system's timing granularity, better than a microsecond (e.g., per IEEE 1588), i.e., finer than the time it takes to process an IO request. Thus, the method 200 ensures that the clocks of one or more clients are synchronized accurately, to at least the time resolution of IO operations. The high-precision clock 118 of the memory controller 106 thereby synchronizes the initial known consistent crash backup point used to initiate a continuous replication session.
In step 204, the method 200 further includes receiving a request associated with the source memory server 104. The memory controller 106 receives a request to copy a data file stored in the source memory server 104 to the target memory server 108. The request is managed by the client controller 110. The request includes metadata (e.g., inodes) related to all data files to be replicated. A data file refers to a file written by one or more clients on source memory server 104.
In step 206, the method 200 further includes entering the request into the log file 122. Each of the one or more clients is responsible for independently logging its own file operations. In other words, since the log file 122 is captured at the level of individual file operations on each client, individual writes to a file are also captured. These writes can therefore be replicated, avoiding the replication of entire large files when one or more clients apply only small incremental changes. The log file 122 includes all the data that the one or more clients send to the source memory server 104. Thus, the method 200 eliminates the need to re-read data from the source memory server 104 at a later time, avoids any possibility of race conditions, and avoids any possibility of reading overwritten data. In addition, logging begins before the backup process, and log files 122 are sent to the memory controller 106 periodically by the one or more clients, independent of the backup process.
In step 208, the method 200 further includes initiating a backup of the source memory server 104 on the target memory server 108, wherein one or more files of the source memory server 104 are copied to the target memory server 108, the copied files being backup files. The method 200 initiates a backup process that copies one or more data files from a source site, such as the source memory server 104, to a target site, such as the target memory server 108. The files replicated at the target memory server 108 are referred to as backup files. In the event of a loss of data in the source site (i.e., source memory server 104), a backup process is required to protect and restore the data.
In step 210, the method 200 further includes analyzing the backup file and adjusting the log file 122. The memory controller 106 analyzes the backup file to create a translation table to map the source metadata (e.g., inodes) to the actual restored metadata and adjusts the log file 122 accordingly. The analysis is performed by the target controller 112. In addition, the memory controller 106 processes data received from one or more clients in the form of log files 122. The process is performed in the background without requiring one or more clients to wait for the process to complete. This process facilitates reliable and efficient synchronization of data between one or more clients without introducing any delay in the production data path (i.e., data stream).
In step 212, the method 200 further includes replaying the requests in the log file 122 on the target memory server 108. The log file 122 includes the requests from the one or more clients, which contain all the data associated with their file operations. For efficient and reliable replication, these requests are replayed by the memory controller 106 on the target memory server 108 to ensure that no client's data is lost. Thus, the method 200 ensures an improved data replication solution without requiring programmatic access to the source memory server 104 and without re-reading all the data written to the source memory server 104, since the log file 122 includes the complete set of file operations and metadata required to replay them at a remote location.
Steps 202 through 212 are merely illustrative and other alternatives may be provided in which one or more steps are added, one or more steps are deleted, or one or more steps are provided in a different order without departing from the scope of the claims herein.
The method 200 uses the memory controller 106 to provide an improved data replication solution for the shared memory storage system 102 that is independent of the device manufacturer (or vendor). Thus, the source memory server 104 and the target memory server 108 need not belong to the same product manufacturer or a compatible manufacturer, which eliminates the vendor lock-in issue. Furthermore, since the method 200 synchronizes the file operations of multiple clients using the log file 122, race conditions are avoided and any possibility of reading overwritten data is eliminated. Furthermore, the data replication solution provided by the method 200 works in a fully distributed manner, as each client is responsible for its own log records. Further, by adjusting the log file 122, an initial synchronization point is established that places the target memory server 108 in a known state consistent with the state of the source memory server 104 at the beginning of a given sequence of incremental changes. Thus, the method 200 provides a reliable and efficient data replication solution for the shared memory storage system 102.
In yet another aspect, the present disclosure provides a computer-readable medium comprising instructions that, when loaded into and executed by the memory controller 106, enable the memory controller 106 to implement the method 200. The computer-readable medium refers to a non-transitory computer-readable storage medium. Examples of implementations of the computer-readable medium may include, but are not limited to, electrically erasable programmable read-only memory (EEPROM), random access memory (RAM), read-only memory (ROM), hard disk drive (HDD), flash memory, and secure digital (SD) cards.
Fig. 3 is an exemplary sequence diagram of a data replication scheme provided by an embodiment of the present disclosure. Fig. 3 is described in conjunction with the elements of fig. 1A, 1B, and 2. Referring to fig. 3, a sequence diagram 300 depicting a data replication scheme is shown. Also shown are a source memory server 302, a replicator sequencer 304, one or more clients 306, a replicator receiver 308, a backup API 310, and a target memory server 312. A series of exemplary operations 314 through 336 are further illustrated. The source memory server 302 and the target memory server 312 correspond to the source memory server 104 and the target memory server 108 (of fig. 1A), respectively.
The replicator sequencer 304 refers to a proxy that gathers records of IO operations transmitted by one or more clients 306 in the form of data sets, processes the records, and transmits incremental changes therein.
Each of the one or more clients 306 refers to a client communicatively coupled to the source memory server 302 for data access and storage. One or more clients 306 are operatively connected to each other and to replicator sequencer 304 through a low-latency communications network, such as a LAN or WLAN. The one or more clients 306 may be heterogeneous groups of clients, wherein each of the one or more clients 306 comprises suitable logic, circuitry, and interfaces for remotely accessing data from the source memory server 302. Each of the one or more clients 306 may be associated with a user that may perform particular file operations and further store data associated with such file operations to the source memory server 302. Examples of one or more clients 306 include, but are not limited to, thin clients, laptop computers, desktop computers, smartphones, wireless modems, or other computing devices.
The replicator receiver 308 refers to a replicator receiver at the target site (e.g., the target memory server 312). The backup API 310 refers to a backup application programming interface that provides an intermediate interface for storing backup files at the target site (i.e., the target memory server 312).
In operation 314, one or more clients 306 connect to source memory server 302. One or more clients 306 may store one or more data files in source memory server 302 through one or more file operations.
In operation 316, the replicator sequencer 304 joins the high-precision clock session. A high precision clock is required to accurately synchronize the log operations of one or more clients 306.
In operation 318, one or more clients 306 begin sending IO logs. Logging is initiated by the one or more clients 306 before the backup process, and the logs are sent to the replicator sequencer 304 periodically, independent of the backup process. An IO log uses the data structure of a journaling file system, which is a fault-tolerant file system. In the event of a system failure, the IO log ensures that the data can be restored to its pre-crash configuration; it can also recover unsaved data and store it in the location it would have occupied had the computer not crashed. Because the IO logs are captured at the level of individual file operations on each client (e.g., the one or more clients 306), individual writes to a file are also captured. These writes can therefore be replicated, avoiding the replication of entire large files when users on the one or more clients 306 apply only small updates. The IO log information is useful for synchronizing the file operations received from the one or more clients 306 and for avoiding race conditions, since it eliminates any need to re-read data that may have been overwritten.
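A minimal sketch of this client-side behavior is given below, assuming the ClientLog from earlier; the IOSplitter class, send_to_share, and ship_dataset are illustrative assumptions showing how each write can be journaled on the production path and shipped to the sequencer at timed intervals.

```python
import time

class IOSplitter:
    """Hypothetical client-side agent: journals each write and ships log
    segments (datasets) to the replicator sequencer at timed intervals."""

    def __init__(self, log, ship_dataset, interval_s: float = 5.0):
        self.log = log                    # a ClientLog instance
        self.ship_dataset = ship_dataset  # sends a log segment to the sequencer
        self.interval_s = interval_s
        self._last_ship = time.monotonic()

    def write(self, send_to_share, path: str, offset: int, payload: bytes):
        send_to_share(path, offset, payload)             # production IO path
        self.log.record("write", path, offset, payload)  # journal the same op
        if time.monotonic() - self._last_ship >= self.interval_s:
            self.ship_dataset(list(self.log.entries))    # dataset boundary
            self.log.entries.clear()
            self._last_ship = time.monotonic()
```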
In operation 320, the replicator sequencer 304 creates a backup and restores the backup files at the backup API 310. The replicator sequencer 304 creates a backup of IO logs received from one or more clients 306. The backup file is then sent to the backup API 310 for restoration at the target memory server 312.
In operation 322, the backup API 310 restores the backup file at the target memory server 312. The backup file may be directly piped to the restore session on the target storage server 312 or may be initiated through the backup API 310.
In operation 324, the replicator receiver 308 receives an IO log from the replicator sequencer 304. The IO log received by the replicator receiver 308 corresponds to inodes that exist before the backup and remain in the backup after it completes. Such inodes have metadata that was not touched during logging, i.e., not renamed, hard-linked, or deleted.
In operation 326, the backup API 310 sends an acknowledgement to the replicator sequencer 304 that the backup was successfully completed. Since the IO log received by replicator receiver 308 is not touched during logging, the IO log is backed up as is at target memory server 312 and an acknowledgement of a successful backup is sent to replicator sequencer 304.
In operation 328, the one or more clients 306 again send the log to the replicator sequencer 304. In operation 330, the replicator sequencer 304 sends the log to the replicator receiver 308.
In operation 332, the replicator receiver 308 analyzes the log and prunes orphan inode files. The replicator receiver 308 analyzes the log to create a translation table that maps the source metadata (e.g., inodes) to the actual recovered metadata, and adjusts the log accordingly. If the metadata of an inode has been touched (i.e., a hard link was used to create an additional reference to the inode, a reference was deleted, the inode's parent directory changed (rename), or the inode's filename changed), the inode is marked as an orphan. In addition, the log is used to compile, for each path at which the inode is found, a call graph of the relevant namespace operations (create, link, rename, unlink, and delete). This determines all the paths at which the inode should exist on the target site (i.e., the target memory server 312). The inode is then linked to all of its live paths (if any) and unlinked, or deleted, from the orphan directory.
In operation 334, the replicator receiver 308 replays the logged log operations at the target memory server 312 for each inode. For efficient and reliable data replication, the logged operations of each inode are replayed by replicator receiver 308 on target memory server 312 to ensure that data of any client (e.g., one or more clients 306) is not lost.
In operation 336, the replicator receiver 308 sends an acknowledgement of the initial synchronization to the replicator sequencer 304. When all the inodes that matter have been found, the replicator receiver 308 sends an acknowledgement to the replicator sequencer 304 indicating that the initial synchronization has been established.
Operations 314 through 336 are merely illustrative and other alternatives may be provided in which one or more steps are added, one or more steps are deleted, or one or more steps are provided in a different order without departing from the scope of the claims herein.
Fig. 4 is an exemplary timing diagram of a data replication scheme provided by an embodiment of the present disclosure. Fig. 4 is described in conjunction with the elements of fig. 1A, 1B, 2 and 3. Referring to fig. 4, a timing diagram 400 depicting a data replication scheme is shown. Also shown are a desired consistency point 402 and a target backup completion point 404.
Also shown are seven log time points, j0 through j6. The target backup completion point 404 refers to the point in time at which the backup process completes. When j5 is selected as the desired consistency point 402, the backup completed at the target backup completion point 404 must contain the j1 operations and may contain the j2 to j4 operations. Further, the times of the log points are assumed to be consistent, and the backup process is not atomic. In addition, the target memory server 108 compares the log with the actual state of each inode (and parent directory inode) in the backup system. All j0 operations must be present (or overridden) in the backup, and the state up to and including j5 constitutes the desired consistency point 402.
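Under the stated assumptions (consistent log point times, non-atomic backup), the window of log points to be verified against the target can be sketched as follows, with j0 taken as the guaranteed point so that the entries of j1 through j5 are checked, matching Tables 3 through 7 below. The integer indexing of the log points and the entry layout are illustrative assumptions.

```python
def verification_window(journal, guaranteed_point, consistency_point):
    # Operations at or before the guaranteed point are certainly in the
    # backup; operations after the desired consistency point are out of
    # scope. Everything in between may or may not have been captured by
    # the non-atomic backup and must be checked per inode.
    return [op
            for point, ops in sorted(journal.items())
            if guaranteed_point < point <= consistency_point
            for op in ops]

# Illustrative use: j0..j6 as integer keys, j5 chosen as consistency point.
journal = {p: [f'op-j{p}'] for p in range(7)}
assert verification_window(journal, 0, 5) == [
    'op-j1', 'op-j2', 'op-j3', 'op-j4', 'op-j5']
```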
Table 1 shows the backup state at the target memory server 108: the inodes (or metadata) associated with the backup files, together with their sizes and access paths. Table 2 shows the final state after processing. As can be seen from Table 2, the creation of files is not skipped during the backup process. Further, Tables 3 through 7 illustrate the replay actions performed while marking log entries, starting from the desired consistency point 402 (i.e., j5). Thus, Table 3 shows the j5 processing according to the target inode, Table 4 the j4 processing, Table 5 the j3 processing, Table 6 the j2 processing, and Table 7 the j1 processing. It can be observed that if the same data already exists on the target, no replay operation is performed.
Table 1: backup state
Table 2: final state after processing
Index node Path Size and dimensions of
100 /A
102 /A/a2
200 /B
201 /B/b1 1024
300 /C
103 /C/c1 1024
104 /C/a1
102 /C/c2
400 /D
104 /D/d1
Table 3: j5 processing according to the target inode

Inode   Path     Operation   Replay
104     /A/a1    Add         Yes (content does not exist)
301     /C/c1    Add         Yes (content does not exist)
201     /B/b1    Add         Yes (content does not exist)
Table 4: j4 processing according to the target inode

Inode   Path              Operation                     Replay
104     /A/a1 => /D/d1    Link                          Yes (does not exist)
400     /D/d1             Add                           Yes
100     /A                Delete a1                     Yes
104     /A/a1             Unlink                        Yes
201     /B/b1             Write 512 B at offset 256 …   Yes (content does not exist)
Table 5: j3 processing according to the target inode

Table 6: j2 processing according to the target inode

Inode   Path     Operation
103     /A/a1    Unlink
100     /A       Remove a1
301     /C/c1    Append 512 bytes …
201     /B/b1    Append 1024 bytes …
Table 7: j1 processing according to the target inode

Inode   Path              Operation
102     /A/a2 => /C/c2    Link
300     /C/c2             Add
103     /A/a1             Create
100     /A/a1             Add
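The replay decisions of Tables 3 through 7 reduce to checking each log entry against the actual state on the target. The sketch below captures the flavor of that rule for the add and unlink operations, with a target_state dict (path to inode) mirroring the backup state of Table 1; the layout is otherwise assumed, and the content-level verification implied by the "Replay" column is omitted here.

```python
def needs_replay(entry, target_state):
    # Replay an operation only when the target does not already reflect it.
    # This sketch checks path/inode presence only; the scheme described
    # above additionally verifies content, which is omitted here.
    current = target_state.get(entry['path'])
    if entry['op'] == 'add':
        return current != entry['inode']   # absent, or held by another inode
    if entry['op'] == 'unlink':
        return current is not None         # still present, so remove it
    return True

# Table 1 backup state as path -> inode; Table 3's add of /A/a1 (inode 104)
# is replayed because /A/a1 is absent from the backup state.
backup_state = {'/A/a2': 102, '/B/b1': 201, '/C/c1': 103,
                '/C/a1': 104, '/C/c2': 102, '/D/d1': 104}
assert needs_replay({'op': 'add', 'path': '/A/a1', 'inode': 104}, backup_state)
```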
Accordingly, various embodiments of the present disclosure provide a memory controller 106. The memory controller 106 is for use in a shared memory storage system 102, the shared memory storage system 102 including a source memory server 104 and a target memory server 108 for storing files having metadata and data. The memory controller 106 is configured to: connect to the source memory server 104; initiate a session synchronized using the high precision clock 118, the high precision clock 118 having a precision exceeding a timing threshold; receive a request associated with the source memory server 104; input the request into the log file 122; initiate a backup of the source memory server 104 on the target memory server 108, wherein one or more files of the source memory server 104 are copied to the target memory server 108, the copied files being backup files; analyze the backup files and adjust the log file 122; and replay the request in the log file 122 on the target memory server 108.
Accordingly, various embodiments of the present disclosure also provide a method 200 for use in a shared memory storage system 102, the shared memory storage system 102 including a source memory server 104 and a target memory server 108 for storing files having metadata and data. The method 200 includes: initiating a session synchronized using a high precision clock 118, the high precision clock 118 having a precision exceeding a timing threshold; receiving a request associated with the source memory server 104; inputting the request into a log file 122; initiating a backup of the source memory server 104 on the target memory server 108, wherein one or more files of the source memory server 104 are copied to the target memory server 108, the copied files being backup files; analyzing the backup files and adjusting the log file 122; and replaying the requests in the log file 122 on the target memory server 108.
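A compact, runnable sketch of the journaling-and-replay core of the method 200 is given below. The Journal class, its fields, and the dict standing in for the target memory server 108 are illustrative assumptions, and time.monotonic() merely stands in for the high precision clock 118.

```python
import time
from dataclasses import dataclass, field

@dataclass
class JournalEntry:
    timestamp: float   # produced by the high precision clock stand-in
    op: str            # 'write' or 'delete' in this toy model
    path: str
    data: bytes = b""

@dataclass
class Journal:
    entries: list = field(default_factory=list)

    def log(self, op, path, data=b""):
        # Each intercepted request is entered into the log file together
        # with its timestamp, as in the disclosed method.
        self.entries.append(JournalEntry(time.monotonic(), op, path, data))

    def replay(self, target):
        # Replay all requests in timestamp order (cf. claim 5).
        for e in sorted(self.entries, key=lambda e: e.timestamp):
            if e.op == 'write':
                target[e.path] = e.data
            elif e.op == 'delete':
                target.pop(e.path, None)

# Requests logged during the backup are replayed on the target afterwards,
# yielding the initial consistent synchronization point.
journal = Journal()
journal.log('write', '/A/a1', b'payload')
target_files = {}                 # stands in for target memory server 108
journal.replay(target_files)
assert target_files['/A/a1'] == b'payload'
```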
Modifications may be made to the embodiments of the disclosure described above without departing from the scope of the disclosure as defined in the appended claims. The terms "comprising", "including", "incorporating", "having", "being" and the like used to describe and claim the present disclosure should be interpreted in a non-exclusive manner, i.e., to allow items, components, or elements not explicitly described to be present. Reference to the singular is also to be construed to relate to the plural. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration". Any "exemplary" embodiment is not necessarily to be construed as preferred or advantageous over other embodiments, and/or as excluding combinations of features from other embodiments. The word "optionally" as used herein means "provided in some embodiments and not provided in other embodiments". It will be appreciated that certain features of the disclosure, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the disclosure that are, for brevity, described in the context of a single embodiment, may also be provided separately, in any suitable combination, or in any other described embodiment of the disclosure.

Claims (11)

1. A memory controller (106) for use in a shared memory storage system (102), wherein the shared memory storage system (102) includes a source memory server (104) and a target memory server (108) for storing files having metadata and data, the memory controller (106) being configured to:
connecting to the source memory server (104);
initiating a session synchronized with a high precision clock (118), wherein the high precision clock (118) has a precision exceeding a timing threshold;
receiving a request associated with the source memory server (104);
inputting the request into a log file (122);
initiating a backup of the source memory server (104) on the target memory server (108), wherein one or more files of the source memory server (104) are copied to the target memory server (108), the copied files being backup files;
analyzing the backup files and adjusting the log file (122);
replaying the request in the log file (122) on the target memory server (108).
2. The memory controller (106) of claim 1, wherein the memory controller (106) is further configured to input a request into the log file (122) along with a timestamp of the request generated by the high precision clock (118).
3. The memory controller (106) of claim 1 or 2, wherein the memory controller (106) is further configured to analyze the backup file and adjust the log file (122) by:
generating a map of the backup files, wherein each file of the source memory server (104) is mapped to a file in the target memory server (108);
determining whether metadata of the backup file has changed, and if so, indicating that the backup file is an orphaned file;
for each orphan file, determining which other backup files are affected by changes in metadata of the orphan file and linking the orphan file to those other backup files in the map;
deleting the orphan file from the log file (122).
4. A memory controller (106) according to claim 1, 2 or 3, wherein the memory controller (106) is further configured to determine that the request in the log file (122) relates to a file that is not a backup file, determine whether there is a request in the log file (122) to generate the file, and if not, read the file from the source memory server (104) and copy the file to the target memory server (108) prior to replaying the log file (122).
5. The memory controller (106) of any of the preceding claims, wherein the memory controller (106) is further configured to replay the log file (122) by executing all requests in the log file (122) in the order of the time stamps.
6. The memory controller (106) of any of the preceding claims, wherein the memory controller (106) comprises a client controller (110), the client controller (110) to:
connecting to the source memory server (104);
initiating the session synchronized with the high precision clock (118);
receiving the request associated with the source memory server (104);
entering the request into the log file (122);
initiating the backup.
7. The memory controller (106) of claim 6, wherein the client controller (110) is further configured to execute a replicator sequencer.
8. The memory controller (106) of any of the preceding claims, wherein the memory controller (106) comprises a target controller (112), the target controller (112) to analyze backup files and adjust log files (122) and replay requests in the log files (122) on the target memory server (108).
9. The memory controller (106) of claim 8, wherein the target controller (112) is further configured to execute a replicator receiver.
10. A method (200) for use in a shared memory storage system (102), wherein the shared memory storage system (102) includes a source memory server (104) and a target memory server (108) for storing files having metadata and data, the method (200) comprising:
initiating a session synchronized with a high precision clock (118), wherein the high precision clock (118) has a precision exceeding a timing threshold;
receiving a request associated with the source memory server (104);
inputting the request into a log file (122);
initiating a backup of the source memory server (104) on the target memory server (108), wherein one or more files of the source memory server (104) are copied to the target memory server (108), the copied files being backup files;
analyzing the backup files and adjusting the log file (122);
replaying the request in the log file (122) on the target memory server (108).
11. A computer readable medium comprising instructions that, when loaded into a memory controller (106) and executed by the memory controller (106), enable the memory controller (106) to perform the method (200) of claim 10.
CN202180100656.9A 2021-10-12 2021-10-12 Memory controller and method for shared memory storage Pending CN117693743A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2021/078128 WO2023061557A1 (en) 2021-10-12 2021-10-12 Memory controller and method for shared memory storage

Publications (1)

Publication Number Publication Date
CN117693743A true CN117693743A (en) 2024-03-12

Family

ID=78134966

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180100656.9A Pending CN117693743A (en) 2021-10-12 2021-10-12 Memory controller and method for shared memory storage

Country Status (2)

Country Link
CN (1) CN117693743A (en)
WO (1) WO2023061557A1 (en)

Also Published As

Publication number Publication date
WO2023061557A1 (en) 2023-04-20

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination