CN110737716B - Data writing method and device - Google Patents

Info

Publication number
CN110737716B
Authority
CN
China
Prior art keywords
sub
log
logs
node
stored
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810706826.8A
Other languages
Chinese (zh)
Other versions
CN110737716A (en)
Inventor
戴冠群
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201810706826.8A
Publication of CN110737716A
Application granted
Publication of CN110737716B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1446Point-in-time backing up or restoration of persistent data
    • G06F11/1448Management of the data involved in backup or backup restore

Abstract

The application provides a data writing method and device. The method includes the following steps: a master node stores N sub-logs of a log locally, where the log is generated according to an atomic write request, the atomic write request includes N sub-writes, each sub-log records information of a sub-write, and N is a positive integer greater than 0; the master node backs up the N sub-logs to a standby node; and the master node marks the log as valid when it determines that all of the N sub-logs have been stored in the standby node. The technical solution provided by the application ensures the atomicity of a group of write requests through a log mechanism during data backup, avoiding the file-system semantic lock used in the prior art to achieve atomicity, thereby reducing process complexity and system overhead.

Description

Data writing method and device
Technical Field
The present application relates to the field of storage, and more particularly, to a method and apparatus for data writing.
Background
In a storage system, locally stored data can be backed up to a standby node to prevent data loss caused by an operating system failure, a hardware failure, or a storage medium failure. A file system is the software mechanism in an operating system that manages and stores file information; it is mainly responsible for creating files, storing, reading, and dumping files, and controlling access to files on behalf of the user. When providing a storage service for data backup, a file system mainly relies on atomic operations (that is, a group of write requests must either all succeed or all fail) to prevent data inconsistency or corruption caused by power failure or system crash. For example, network attached storage (NAS) provides an efficient and stable file system service for storage devices.
In the prior art, to guarantee atomic operation among multiple nodes in the file system during data backup, a semantic lock is added to the file system, and the atomicity of a group of write requests during backup is guaranteed through that semantic lock. However, adding a semantic lock to guarantee the atomicity of write requests complicates the system flow, and the semantic lock can only be released after the whole group of write requests has been written and backed up successfully, which increases system overhead.
Therefore, how to reduce process complexity and system overhead while still guaranteeing the atomicity of write requests during data backup is a problem that needs to be solved.
Disclosure of Invention
The application provides a data writing method and device that can ensure the atomicity of a group of write requests through a log mechanism during data backup, avoiding the file-system semantic lock used to achieve atomicity in the prior art, thereby reducing process complexity and system overhead.
In a first aspect, a data writing method is provided. The method includes: a master node stores N sub-logs of a log locally; the master node backs up the N sub-logs to a standby node; and the master node marks the log as valid when it determines that all of the N sub-logs have been stored in the standby node.
In this embodiment of the application, a group of write requests may include N sub-writes, where N is a positive integer greater than 0. A group of write requests that must all succeed or all fail (that is, the group forms one operation, or a series of operations, that cannot be interrupted) may be referred to as an atomic write, and the group may also be said to have atomicity.
It should be understood that the series of sub-writes in an atomic write generates one log, and the N sub-writes in the atomic write generate N sub-logs (Sublogs). A sub-log records the information of one sub-write (that is, one sub-write corresponds to one sub-log), and the recorded information may include, but is not limited to: the address (start address) at which the data is to be stored, the data to be written, the length of the data to be written, and so on.
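To make this bookkeeping concrete, the following is a minimal Go sketch of the data structures implied by the description above. The type and field names (Sublog, Log, StartAddr, and so on) are illustrative assumptions, not identifiers taken from the patent.

```go
// Sketch of the log / sub-log records described above (illustrative only).
package main

import "fmt"

// Sublog records the information of one sub-write: the start address at which
// the data is to be stored, the data itself, and its length.
type Sublog struct {
	LogID     uint64 // identifier of the parent log, shared by all N sub-logs
	StartAddr uint64 // start address of the sub-write
	Data      []byte // data to be written
	Length    int    // length of the data to be written
}

// Log is generated for one atomic write request and groups its N sub-logs.
type Log struct {
	ID      uint64
	Sublogs []*Sublog
	Valid   bool // flag bit: set only after all sub-logs are stored locally and backed up
}

// newLog builds a log (and one sub-log per sub-write) for an atomic write request.
func newLog(id uint64, subWrites []Sublog) *Log {
	l := &Log{ID: id}
	for i := range subWrites {
		sw := subWrites[i]
		sw.LogID = id
		sw.Length = len(sw.Data)
		l.Sublogs = append(l.Sublogs, &sw)
	}
	return l
}

func main() {
	l := newLog(311, []Sublog{
		{StartAddr: 0x1000, Data: []byte("aaaa")},
		{StartAddr: 0x2000, Data: []byte("bb")},
	})
	fmt.Printf("log %d has %d sub-logs, valid=%v\n", l.ID, len(l.Sublogs), l.Valid)
}
```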
In this embodiment of the application, the atomicity of a group of write requests is guaranteed through a log mechanism during data backup, avoiding the file-system semantic lock used to achieve atomicity in the prior art, so that process complexity and system overhead can be reduced.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: if the N sub-logs fail to be stored in the standby node, the master node deletes the N sub-logs it has stored.
Optionally, in some embodiments, the opposite end may fail to perform the backup operation because of resource constraints or other reasons. In that case, this embodiment of the application can use a rollback mechanism: the log stored at the local end is rolled back, which guarantees the atomicity of the data in the whole system (the group of write requests all succeed or all fail).
In this embodiment of the application, the atomicity of write requests between nodes is guaranteed through a rollback mechanism instead of relying on retries as in the prior art, which improves the serviceability of the system.
With reference to the first aspect, in some implementation manners of the first aspect, the N sub-logs carry the same log identification number ID, and the master node sequentially stores the N sub-logs into a log storage container of the master node according to the ID.
In this embodiment of the application, during data backup the standby node serves as a disaster recovery backup of the master node, and the order in which data is written on the standby node must follow the order on the master node; that is, the write order on the standby node must be consistent with the write order on the master node. As an example, this embodiment of the application may generate an identification number (ID) for each log, and the multiple Sublogs under that log carry the same log ID. The master node and the standby node can attach the Sublogs to the corresponding chunks according to the log IDs, and the Sublogs in a chunk's Sublog linked list can be ordered by log ID.
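The ordered chunk linked list just described can be sketched as follows. This is a simplified Go illustration under the assumption that keeping ascending log-ID order is enough to reproduce the master node's write order on the standby node; the Chunk and Insert names are hypothetical.

```go
// Sketch of a chunk's Sublog linked list kept in log-ID order (illustrative only).
package main

import "fmt"

// Sublog holds only the fields needed for ordering in this sketch.
type Sublog struct {
	LogID uint64
	next  *Sublog
}

// Chunk indexes sub-logs in a linked list ordered by log ID.
type Chunk struct {
	head *Sublog
}

// Insert places s into the chunk's list so that log IDs stay in ascending
// order, which keeps the standby node's write order consistent with the
// master node's write order.
func (c *Chunk) Insert(s *Sublog) {
	if c.head == nil || s.LogID < c.head.LogID {
		s.next = c.head
		c.head = s
		return
	}
	cur := c.head
	for cur.next != nil && cur.next.LogID <= s.LogID {
		cur = cur.next
	}
	s.next = cur.next
	cur.next = s
}

func main() {
	c := &Chunk{}
	for _, id := range []uint64{312, 310, 311} {
		c.Insert(&Sublog{LogID: id})
	}
	for s := c.head; s != nil; s = s.next {
		fmt.Println("log ID", s.LogID) // prints 310, 311, 312
	}
}
```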
With reference to the first aspect, in some implementations of the first aspect, the master node receives a notification message, where the notification message is used to indicate that all of the N sub-logs are stored in the standby node; the master node marks the log as valid.
In a second aspect, a data writing method is provided. The method includes: a standby node stores, into the standby node, N sub-logs of a log backed up by a master node; when all of the N sub-logs have been stored in the standby node, the standby node marks the log as valid; and the standby node notifies the master node that all of the N sub-logs have been stored in the standby node.
It should be understood that the log is generated according to one atomic write request, the one atomic write request includes N sub-writes, the sub-log records information of the sub-writes, and N is a positive integer greater than 0.
In this embodiment of the application, the atomicity of a group of write requests is guaranteed through a log mechanism during data backup, avoiding the file-system semantic lock used to achieve atomicity in the prior art, so that process complexity and system overhead can be reduced.
With reference to the second aspect, in some implementation manners of the second aspect, the N sub-logs carry the same log identification number ID, and the standby node sequentially stores the N sub-logs into a log storage container of the standby node according to the ID.
With reference to the second aspect, in some implementation manners of the second aspect, the standby node sends a notification message to the master node, where the notification message is used to indicate that all of the N sub-logs are stored in the log storage container of the standby node.
In a third aspect, an apparatus for writing data is provided, the apparatus including: a storage module, configured to store N sub-logs of a log locally; a sending module, configured to back up the N sub-logs to a standby node; and a first processing module, configured to mark the log as valid when it is determined that all of the N sub-logs have been stored in the standby node.
It should be understood that the log is generated according to one atomic write request, the one atomic write request includes N sub-writes, each sub-log of the N sub-logs records information of each sub-write, and N is a positive integer greater than 0.
The data writing device provided by this embodiment of the application can ensure the atomicity of a group of write requests through a log mechanism during data backup, avoiding the file-system semantic lock used to achieve atomicity in the prior art, thereby reducing process complexity and system overhead.
With reference to the third aspect, in certain implementations of the third aspect, the apparatus further includes: a second processing module, configured to delete the stored N sub-logs when the N sub-logs fail to be stored in the standby node.
With reference to the third aspect, in some implementations of the third aspect, the N sub-logs carry the same log identification number ID, and the storage module is specifically configured to: store the N sub-logs in order into a log storage container of the master node according to the ID.
With reference to the third aspect, in some implementations of the third aspect, the first processing module is specifically configured to: receiving a notification message, wherein the notification message is used for indicating that all the N sub-logs are stored in the standby node; marking the log as valid.
It should be noted that, in the entity apparatus shown in the third aspect, the processor may implement the steps executed by the respective modules by calling the computer program in the memory. For example, computer instructions stored in the cache may be called by the processor to perform the steps required to be performed by the respective modules (the storage module, the transmission module, the first processing module, and the second processing module).
In a fourth aspect, an apparatus for writing data is provided, the apparatus including: a storage module, configured to store, into the standby node, N sub-logs of a log backed up by the master node; a processing module, configured to mark the log as valid when all of the N sub-logs have been stored in the standby node; and a sending module, configured to notify the master node that all of the N sub-logs have been stored in the standby node.
It should be understood that the log is generated according to one atomic write request, the one atomic write request includes N sub-writes, each sub-log of the N sub-logs records information of each sub-write, and N is a positive integer greater than 0.
The data writing device provided by this embodiment of the application can ensure the atomicity of a group of write requests through a log mechanism during data backup, avoiding the file-system semantic lock used to achieve atomicity in the prior art, thereby reducing process complexity and system overhead.
With reference to the fourth aspect, in some implementations of the fourth aspect, the N sub-logs carry the same log identification number ID, and the storage module is specifically configured to: store the N sub-logs in order into a log storage container of the standby node according to the ID.
With reference to the fourth aspect, in some implementations of the fourth aspect, the sending module is specifically configured to: and sending a notification message to the master node, wherein the notification message is used for indicating that all the N sub-logs are stored in a log storage container of the standby node.
It should be noted that, in the entity apparatus shown in the fourth aspect, the processor may implement the steps executed by the respective modules by calling the computer program in the memory. For example, computer instructions stored in the cache may be called by the processor to perform steps required by the various modules (storage module, processing module, sending module).
In a fifth aspect, a master node is provided, which includes an input/output interface, a processor and a memory, wherein the processor is configured to control the input/output interface to send and receive information, the memory is configured to store a computer program, and the processor is configured to call and execute the computer program from the memory, so that the master node executes the method in the first aspect or any one of the possible implementation manners of the first aspect.
Alternatively, the processor may be a general-purpose processor, and may be implemented by hardware or software. When implemented in hardware, the processor may be a logic circuit, an integrated circuit, or the like; when implemented in software, the processor may be a general-purpose processor implemented by reading software code stored in a memory, which may be integrated with the processor, located external to the processor, or stand-alone.
In a sixth aspect, a standby node is provided, which includes an input/output interface, a processor, and a memory, where the processor is configured to control the input/output interface to send and receive information, the memory is configured to store a computer program, and the processor is configured to call and execute the computer program from the memory, so that the standby node executes the method described in the second aspect or any one of the possible implementation manners of the second aspect.
Alternatively, the processor may be a general-purpose processor, and may be implemented by hardware or software. When implemented in hardware, the processor may be a logic circuit, an integrated circuit, or the like; when implemented in software, the processor may be a general-purpose processor implemented by reading software code stored in a memory, which may be integrated with the processor, located external to the processor, or stand-alone.
In a seventh aspect, a computer program product is provided, the computer program product comprising: computer program code which, when run on a computer, causes the computer to perform the method of the above-mentioned aspects.
In an eighth aspect, a computer-readable medium is provided, which stores program code, which, when run on a computer, causes the computer to perform the method of the above-mentioned aspects.
In a ninth aspect, a system is provided, which includes the aforementioned master node and standby node.
Drawings
Fig. 1 is a schematic block diagram of a scenario in which an embodiment of the present application is applicable.
Fig. 2 is a schematic flow chart of a data writing method provided in an embodiment of the present application.
Fig. 3 is a schematic structural diagram of a data writing system according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a data writing system according to another embodiment of the present application.
Fig. 5 is a schematic structural diagram of a data writing system according to another embodiment of the present application.
Fig. 6 is a schematic flow chart of a data writing method according to another embodiment of the present application.
Fig. 7 is a schematic structural diagram of a data writing system according to another embodiment of the present application.
Fig. 8 is a schematic block diagram of a data writing device 800 according to an embodiment of the present application.
Fig. 9 is a schematic block diagram of a data writing apparatus 900 according to an embodiment of the present application.
Fig. 10 is a schematic block diagram of a master node 1000 provided in an embodiment of the present application.
Fig. 11 is a schematic block diagram of a standby node 1100 according to an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
In a distributed storage system, availability is one of the key metrics, and it must be ensured that availability is not affected when a machine fails. To ensure system availability, data needs to be backed up (that is, multiple copies of the stored data need to be kept consistent), so that when a machine fails and one copy becomes unavailable, another copy can still provide service.
Fig. 1 is a schematic block diagram of a scenario to which an embodiment of the present application is applicable. The scenario shown in fig. 1 may include at least two nodes. The source node of replication is generally referred to as the master node 110, and the destination node of replication is referred to as the standby node 120.
The master node 110 is the management node of the entire file system and can receive operation requests from users. As an example, after receiving a write request, the master node 110 may write the data locally and back up the written data to the standby node 120. The master node 110 may also be referred to as the primary replica.
The standby node 120 serves as a disaster recovery backup based on disk data, so that when the host undergoes regular scheduled maintenance or fails unexpectedly, the standby node 120 can quickly take over the application and the application data; user access is therefore not disrupted by scheduled or unplanned downtime. As an example, the standby node 120 may back up the data written by the master node 110. The standby node 120 may also be referred to as the secondary replica.
The file system is used as a software mechanism for managing and storing file information in an operating system and is mainly responsible for establishing files, storing, reading and dumping the files, controlling the access of the files and the like for a user.
For example, network attached storage (NAS) is a computer data storage device connected to a computer network, and is therefore also referred to as "network storage". A NAS device is a dedicated data storage server; it is data-centric, completely separates the storage device from the application server, and manages data centrally. NAS, as a form of service widely used in storage systems, mainly provides an efficient and stable file system for storage devices. Following typical file-sharing protocols, NAS devices can generally be shared among hosts to provide storage services for a variety of industries.
In services that provide a file system, there are many scenarios (for example, a data storage system involving two or more nodes) in which it must be ensured that a group of write requests all succeed or all fail; such a group of write requests may be referred to as an atomic write. For example, to ensure that each update either fully succeeds or fully fails even when an unexpected power failure occurs, a data write request to the storage system needs to be an atomic write, that is, it must satisfy the requirement of atomicity.
It should be understood that an atomic write may also be referred to as a group of write requests being atomic or the group of write requests being an atomic operation (i.e., the group of write requests being one or a series of operations that are not interruptible).
In the prior art, the atomicity of a group of write requests is guaranteed by a semantic lock of the file system. However, adding semantic locks results in a complex system flow and increased system overhead.
To guarantee the atomicity of write requests among two or more nodes during data backup while reducing process complexity and system overhead, the embodiments of the present application provide a data writing method that avoids the complex flow and high system overhead caused by achieving atomicity through a file-system semantic lock in the prior art.
The data writing method according to the embodiment of the present application is described in detail below with reference to fig. 2, based on the architecture shown in fig. 1. Fig. 2 is a schematic flow chart of a data writing method provided in an embodiment of the present application. The method shown in fig. 2 may include steps 210 to 230, which are described in detail below.
In step 210, the master node stores N sub-logs of the log locally.
In this embodiment of the present application, a group of write requests may include N sub-writes, where N is a positive integer greater than 0. To ensure that the set of write requests all succeed or all fail (i.e., the set of write requests is one or a series of operations that cannot be interrupted), the set of write requests may be referred to as an atomic write, which may also be referred to as having atomicity.
A series of sub-writes in an atomic write may generate one log, and the N sub-writes in an atomic write may generate N sub-logs (Sublogs). A sub-log records the information of one sub-write (that is, one sub-write corresponds to one sub-log), and the recorded information may include, but is not limited to: the address (start address) at which the data is to be stored, the data to be written, the length of the data to be written, and so on.
The master node may store the generated N sub-logs locally. As an example, the master node may store the N sub-logs into the corresponding log storage containers according to the recorded start addresses; for example, the master node may store the N sub-logs into the corresponding chunks according to the recorded start addresses. The writing process of an atomic write request is described in detail below with reference to fig. 3 and is not repeated here.
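As one way to picture the "store into the corresponding chunk according to the recorded start address" step, here is a hedged Go sketch that maps a start address to a chunk using a fixed chunk granularity. The chunkSize value and all identifiers are assumptions made for illustration; the patent does not specify how addresses map to chunks.

```go
// Sketch of routing a sub-log to a chunk by its recorded start address (illustrative only).
package main

import "fmt"

const chunkSize = 4096 // assumed chunk granularity; not specified in the patent

type Sublog struct {
	LogID     uint64
	StartAddr uint64
	Data      []byte
}

// Chunk is a logically divided block of memory keyed by its base address.
type Chunk struct {
	base    uint64
	sublogs []*Sublog // kept in log-ID order in the full scheme
}

// Node holds the chunks of one storage node, indexed by base address.
type Node struct {
	chunks map[uint64]*Chunk
}

// Store places the sub-log into the chunk that covers its start address,
// creating the chunk on first use.
func (n *Node) Store(s *Sublog) *Chunk {
	base := s.StartAddr / chunkSize * chunkSize
	c, ok := n.chunks[base]
	if !ok {
		c = &Chunk{base: base}
		n.chunks[base] = c
	}
	c.sublogs = append(c.sublogs, s)
	return c
}

func main() {
	n := &Node{chunks: map[uint64]*Chunk{}}
	for _, s := range []*Sublog{
		{LogID: 311, StartAddr: 0x0100, Data: []byte("a")},
		{LogID: 311, StartAddr: 0x2100, Data: []byte("b")},
	} {
		c := n.Store(s)
		fmt.Printf("sub-log at 0x%x stored in chunk with base 0x%x\n", s.StartAddr, c.base)
	}
}
```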
In step 220, the primary node backs up the N sub-logs into the standby node.
In the embodiment of the application, the master node may back up the N sub-logs generated to the standby node. As an example, the standby node may store the N sub-logs written by the master node in corresponding log storage containers according to the recorded start addresses, and for example, the standby node may store the N sub-logs in corresponding chunks according to the recorded start addresses. The following will describe the writing process of an atomic write request in detail with reference to fig. 3, and will not be described herein again.
It should be noted that the chunk addresses in the standby node are the same as those in the master node, and the standby node may store the N sub-logs into the corresponding chunks according to the recorded start addresses, following the same operation flow as in the master node.
In step 230, the primary node marks the journal as valid if it determines that all of the N sub-journals have been stored to the backup node.
In this embodiment of the application, after the master node has stored all of the N sub-logs locally, it may determine whether the standby node has stored all of the N sub-logs in the corresponding log storage containers. If the master node determines that the standby node has stored all of the N sub-logs in the corresponding log storage containers, the master node can mark the log as valid. As an example, the master node may add a flag bit to the log and mark the log as valid by modifying the state of that flag bit.
It should be understood that, in the embodiment of the present application, when the master node marks the log as valid, it may indicate that N sub-logs of the log have been stored locally, and the N sub-logs have also been backed up to the standby node, and one atomic write request may be considered to have been completely written into the master node and the standby node, which may ensure atomicity of a group of write requests between nodes.
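The flag-bit handling can be sketched as below: the master only sets the flag after its own store count and the standby node's acknowledgment both cover all N sub-logs. This is a minimal Go illustration under those assumptions; the counters and method names are not from the patent.

```go
// Sketch of setting the log's flag bit only after local store and standby ack (illustrative only).
package main

import (
	"errors"
	"fmt"
)

type Log struct {
	ID        uint64
	total     int  // N, the number of sub-logs
	storedLoc int  // sub-logs already stored locally on the master
	Valid     bool // the flag bit discussed above
}

// OnStandbyAck is called when the master receives the notification that the
// standby node has stored sub-logs; the flag bit is set only when both sides
// hold all N sub-logs.
func (l *Log) OnStandbyAck(storedOnStandby int) error {
	if l.storedLoc != l.total || storedOnStandby != l.total {
		return errors.New("not all sub-logs stored; log stays invalid")
	}
	l.Valid = true // atomic write is now visible to read requests
	return nil
}

func main() {
	l := &Log{ID: 311, total: 3, storedLoc: 3}
	if err := l.OnStandbyAck(3); err == nil {
		fmt.Printf("log %d marked valid: %v\n", l.ID, l.Valid)
	}
}
```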
In the embodiment of the application, in the data backup process, the atomicity of the write request between the main node and the standby node is realized through a log mechanism, so that the semantic lock of a file system in the prior art can be avoided, and the complexity of a system flow and the system overhead can be reduced.
A specific implementation of the write process of the atomic write request in the embodiment of the present application is described in more detail below with reference to a specific example. It should be noted that the example of fig. 3 is only for assisting the skilled person in understanding the embodiments of the present application, and is not intended to limit the embodiments of the application to the specific values or specific scenarios illustrated. It will be apparent to those skilled in the art that various equivalent modifications or variations are possible in light of the given example of fig. 3, and such modifications and variations are intended to be included within the scope of the embodiments of the present application.
Fig. 3 is a schematic structural diagram of a data writing system according to an embodiment of the present application. The system shown in fig. 3 may include chunk 310, chunk 320, and chunk 330.
The Sublogs stored in chunk 310 in the form of a linked list may include: Sublog 312, Sublog 314, and Sublog 316.
The Sublogs stored in chunk 320 in the form of a linked list may include: Sublog 322, Sublog 324, and Sublog 326.
The Sublogs stored in chunk 330 in the form of a linked list may include: Sublog 332, Sublog 334, and Sublog 336.
It should be understood that, in this embodiment of the application, a series of sub-writes in an atomic write request may generate one log, N sub-writes in an atomic write may generate N sub-logs (Sublogs), and one sub-log may record the address of one sub-write, the data written, the length of the data written, and so on. That is, an atomic write request corresponds to one log, with one Sublog for each sub-write in the atomic write.
In this embodiment of the application, a chunk may be a logically divided block of memory and is the basic data index unit; it may be used to store Sublogs. A node may store a Sublog, in the form of a linked list, inside the chunk of the corresponding address according to the start address of the sub-write recorded in the Sublog.
A Sublog generated by a write request can locate the chunk corresponding to the sub-write's address according to the address recorded in the Sublog, and can be added to the Sublog linked list on that chunk. As an example, a read request may find the chunk corresponding to the address and offset recorded in a Sublog. Likewise, a write request may find the chunk corresponding to the address and offset recorded in the Sublog.
It should be understood that a linked list is a non-contiguous, non-sequential storage structure on physical storage units; the logical order of the data elements is realized through the pointer links in the list. A linked list consists of a series of nodes, and nodes can be inserted or removed anywhere in the list. As an example, the multiple Sublogs on a chunk's linked list may be regarded as a series of nodes on that list, into which Sublogs may be inserted or from which they may be removed.
The following description will be made in detail by taking a write request as an example.
Referring to fig. 3, a log 311 generated by a group of write requests may include Sublog 316, Sublog 324, and Sublog 334, where Sublog 316, Sublog 324, and Sublog 334 are organized within log 311 as a linked list.
A Sublog can locate the chunk corresponding to its recorded address and offset. For example, Sublog 316 may be added to the linked list of chunk 310, Sublog 324 may be added to the linked list of chunk 320, and Sublog 334 may be added to the linked list of chunk 330.
It should be noted that the solid-line boxes in the figure indicate that all of the Sublogs included in a log have been stored in the corresponding chunks and the log has been marked as valid (the group of write requests has been completely written into memory, so the atomicity of the group is guaranteed); those Sublogs can be read by read requests. For example, Sublog 312, Sublog 314, Sublog 322, and Sublog 332 have all been written into their corresponding chunks and can be read by read requests. The dashed boxes in the figure indicate that not all of the Sublogs included in a log have been stored in the corresponding chunks; the log has not been marked as valid, and the Sublogs in that log cannot be read by read requests. For example, Sublog 316, Sublog 324, and Sublog 334 included in log 311 have not been marked as valid and cannot be read by read requests.
It should be understood that Sublog 322 on the linked list of chunk 320 indicates data that was already stored in the chunk before log 311 was written, and Sublog 326 indicates that other write data may be stored on the linked list of chunk 320 while log 311 is being written.
A node may mark log 311 as valid only after Sublog 316, Sublog 324, and Sublog 334 in log 311 have all been stored on their corresponding chunk linked lists. For example, a flag bit may be added to log 311; the flag bit is set to valid after all the Sublogs in log 311 have been stored on the corresponding chunk linked lists, and all the Sublogs in log 311 can then be read by read requests. Referring to the system structure diagram shown in fig. 4, log 311 in fig. 4 has been marked as valid (the group of write requests has been completely written into memory, so its atomicity is guaranteed), and Sublog 316, Sublog 324, and Sublog 334 of log 311 can be read by read requests.
Optionally, an atomic write request can be considered complete after its log is marked as valid (that is, all the Sublogs of the log have been stored on the corresponding chunk linked lists). The memory space occupied by the log can then be released and recycled. Referring to the system structure diagram shown in fig. 5, compared with fig. 4, Sublog 316, Sublog 324, and Sublog 334 of log 311 in fig. 5 have all been stored in the corresponding chunks, and the memory space occupied by log 311 can be released.
It should be understood that if the log of an atomic write request has been released, it can also be concluded that all the Sublogs of that log have been stored in the corresponding chunk linked lists and that the atomic write request has completed. For example, if the pointer to the log is null, the atomic write request can be considered complete and its data can be read by read requests.
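The completion and release rule just described (all Sublogs on their chunk lists, then the log can be freed, and a null log pointer means the atomic write is done) can be sketched in Go as follows. The onChunk field and the tryRelease function are illustrative assumptions.

```go
// Sketch of releasing a log once every sub-log is on its chunk list (illustrative only).
package main

import "fmt"

type Sublog struct{ onChunk bool }

type Log struct {
	Sublogs []*Sublog
	Valid   bool
}

// tryRelease frees the log once every sub-log has been attached to its chunk
// linked list; a nil log pointer afterwards means the atomic write completed.
func tryRelease(l *Log) *Log {
	for _, s := range l.Sublogs {
		if !s.onChunk {
			return l // still in progress, keep the log
		}
	}
	l.Valid = true
	return nil // memory occupied by the log can be reclaimed
}

func main() {
	l := &Log{Sublogs: []*Sublog{{onChunk: true}, {onChunk: true}}}
	if tryRelease(l) == nil {
		fmt.Println("atomic write complete; log released, data readable")
	}
}
```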
In this embodiment of the application, process-level atomic writes can be realized through the log and sub-log mechanism, ensuring the atomicity of a group of write requests and avoiding the file-system semantic lock used to achieve atomicity in the prior art, so that process complexity and system overhead can be reduced.
Optionally, in some embodiments, during data backup the standby node serves as a data backup for the master node, and the data written on the standby node must follow the write order of the master node; that is, the write order on the standby node must be consistent with the write order on the master node. As an example, this embodiment of the application may generate an identification number (ID) for each log, and the multiple Sublogs under that log carry the same log ID. The master node and the standby node can attach the Sublogs to the corresponding chunks according to the log IDs, and the Sublogs in a chunk's Sublog linked list can be ordered by log ID.
A specific implementation manner for ensuring atomicity of a group of write requests between the primary node and the secondary node in the embodiment of the present application is described in more detail below with reference to a specific example. It should be noted that the example of fig. 6 is only for assisting the skilled person in understanding the embodiments of the present application, and is not intended to limit the embodiments of the present application to the specific values or specific scenarios illustrated. It will be apparent to those skilled in the art that various equivalent modifications or variations are possible in light of the given example of fig. 6, and such modifications and variations are intended to be included within the scope of the embodiments of the present application.
Referring to fig. 6, it should be understood that the local end in fig. 6 corresponds to the master node described above, and the opposite end corresponds to the standby node described above. The method of fig. 6 may include steps 610 to 670, which are described in detail below.
Step 610, the local end writes input/output (I/O).
The local end (which may be understood as a controller) may receive a write input/output (I/O) instruction from the operating system along with the data to be written. In this embodiment, a series of sub-writes in a group of write requests may generate a log, and each sub-write may generate a Sublog, where the Sublog records the data to be written, the length of the data, and the address at which the data is to be stored.
Step 620, the local end generates a log ID.
After receiving the write I/O instruction from the operating system and generating the log and the multiple Sublogs, the local end can generate a log ID for the log. The multiple Sublogs of the log carry the same log ID as the log, so that the opposite end can write data in the same order as the local end, and the data written by the opposite end remains consistent with the data written by the local end.
Step 630, the local end writes locally.
The local end may write the received Sublogs locally. For example, the local end may write the Sublogs into the corresponding log storage containers (for example, chunks) according to the log ID of each Sublog, and the Sublog linked lists in the chunks can be ordered by log ID. For the local write processing flow, refer to the data writing method described in fig. 3, which is not repeated here.
Step 640, the mirror is written to the opposite end.
The opposite end (which may be understood as a controller) may receive the multiple Sublogs to be backed up that are sent by the local end; these Sublogs carry log IDs. The opposite end can write each Sublog into the corresponding log storage container (for example, a chunk) according to its log ID, and the Sublog linked list within a chunk can be ordered by log ID.
The process of writing the Sublogs to the opposite end according to the log IDs can also be called writing a mirror; that is, the multiple Sublogs stored by the local end are backed up to the opposite end. For the processing flow of writing the mirror on the opposite end, refer to the data writing method described in fig. 3, which is not repeated here.
It should be noted that the local end may send only the multiple Sublogs to be backed up to the opposite end, or it may send both the log and the multiple Sublogs of the log; this is not specifically limited in this embodiment of the application. If the opposite end receives only the multiple Sublogs sent by the local end, it can generate the corresponding log itself.
Step 650, the opposite-end data is marked valid.
After the opposite end has stored all of the received Sublogs into the corresponding chunks according to the log IDs, it can set the flag bit of the log to valid. For the specific processing flow, refer to the data writing method described with reference to fig. 3, which is not repeated here.
After storing the Sublogs into the corresponding chunks, the opposite end may send a notification message to the local end to report the result of the data backup. As an example, if the opposite end has stored all of the Sublogs in the corresponding chunks and has set the log flag bit to valid, the notification message sent to the local end may indicate that the atomic write request has completed on the opposite end. As another example, if the opposite end fails to write the mirror because of resource constraints or other reasons, the notification message sent to the local end may indicate that the write mirror on the opposite end failed.
Step 660, the local-end data is marked valid.
After the local end determines that the opposite end has stored all of the Sublogs into the corresponding chunks, the local end may set the flag bit of the log to valid. For the specific processing flow, refer to the data writing method described in fig. 3, which is not repeated here.
Step 670, the local end completes the write.
Once the log generated by the local end for the atomic write request has been set to valid, it indicates that the atomic write request has been successfully written on both nodes and can be read by read requests.
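Putting steps 610 to 670 together, the following Go sketch models the local-end/opposite-end exchange with two in-memory nodes: write locally, write the mirror, mark the opposite-end data valid, then mark the local-end data valid, and roll back the local sub-logs if the mirror fails. All types and the failure simulation are assumptions made for illustration, not an implementation taken from the patent.

```go
// Sketch of the primary/standby atomic write flow of steps 610-670 (illustrative only).
package main

import (
	"errors"
	"fmt"
)

type Sublog struct {
	LogID     uint64
	StartAddr uint64
	Data      []byte
}

type node struct {
	name   string
	stored map[uint64][]*Sublog // log ID -> sub-logs held by this node
	valid  map[uint64]bool      // log ID -> flag bit
	fail   bool                 // simulates a backup failure on the standby
}

func newNode(name string) *node {
	return &node{name: name, stored: map[uint64][]*Sublog{}, valid: map[uint64]bool{}}
}

// store corresponds to steps 630/640: write the sub-logs into the node.
func (n *node) store(id uint64, subs []*Sublog) error {
	if n.fail {
		return errors.New(n.name + ": write mirror failed")
	}
	n.stored[id] = append(n.stored[id], subs...)
	return nil
}

// atomicWrite follows steps 610-670: write locally, mirror to the standby,
// mark the standby data valid, then mark the local data valid; roll back the
// local sub-logs if the mirror fails.
func atomicWrite(master, standby *node, id uint64, subs []*Sublog) error {
	if err := master.store(id, subs); err != nil { // step 630
		return err
	}
	if err := standby.store(id, subs); err != nil { // step 640
		delete(master.stored, id) // rollback: drop the local sub-logs
		return err
	}
	standby.valid[id] = true // step 650
	master.valid[id] = true  // step 660
	return nil               // step 670
}

func main() {
	m, s := newNode("master"), newNode("standby")
	subs := []*Sublog{{LogID: 311, StartAddr: 0x100, Data: []byte("x")}}
	if err := atomicWrite(m, s, 311, subs); err == nil {
		fmt.Println("atomic write 311 committed on both nodes")
	}
}
```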
Optionally, in some embodiments, if the local end fails to set the flag bit of the log to valid (the process fails) after the opposite end has completed its part, the local end may be considered abnormal. The system may then take the opposite end's data as the reference: after all the local-end data has been invalidated, all the data of the opposite end is synchronized back to the local end.
Optionally, in some embodiments, the opposite end may fail to perform the backup operation because of resource constraints or other reasons. In that case, this embodiment of the application can use a rollback mechanism: the log stored at the local end is rolled back, which guarantees the atomicity of the data in the whole system (the group of write requests all succeed or all fail).
The following describes a specific implementation manner of the rollback mechanism in the embodiment of the present application in more detail with reference to a specific example. It should be noted that the example of fig. 7 is only for assisting the person skilled in the art in understanding the embodiments of the present application, and is not intended to limit the embodiments of the present application to the specific values or specific scenarios illustrated. It will be apparent to those skilled in the art that various equivalent modifications or variations are possible in light of the given example of fig. 7, and such modifications and variations are intended to be included within the scope of the embodiments of the present application.
If the storage system needs to roll back, the local end can determine, from the log that needs to be rolled back and the Sublog linked list within that log, which Sublogs are involved, and delete those Sublogs from the chunk linked lists in sequence to complete the rollback.
As an example, if the opposite end fails to store the Sublogs of log 311 shown in fig. 3, the local end may delete Sublog 316, Sublog 324, and Sublog 334 from the chunk linked lists according to the Sublog linked list in log 311 to complete the rollback.
The system after the local rollback is shown in fig. 7; the Sublogs of log 311 have been deleted from the chunk linked lists.
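A minimal Go sketch of this rollback path is given below: each Sublog remembers the chunk it was attached to, and the rollback walks the log's Sublogs and unlinks each from its chunk list. The chunk back-pointer is an assumption made to keep the example short.

```go
// Sketch of rolling back a log by unlinking its sub-logs from the chunk lists (illustrative only).
package main

import "fmt"

type Sublog struct {
	LogID uint64
	next  *Sublog // position in the chunk's linked list
	chunk *Chunk  // chunk the sub-log was attached to (illustrative back-pointer)
}

type Chunk struct {
	id   int
	head *Sublog
}

// remove unlinks s from the chunk's sub-log linked list.
func (c *Chunk) remove(s *Sublog) {
	if c.head == s {
		c.head = s.next
		return
	}
	for cur := c.head; cur.next != nil; cur = cur.next {
		if cur.next == s {
			cur.next = s.next
			return
		}
	}
}

// rollback walks the sub-logs belonging to one log and deletes each from the
// chunk linked list it was attached to, undoing the partially written log.
func rollback(logSublogs []*Sublog) {
	for _, s := range logSublogs {
		if s.chunk != nil {
			s.chunk.remove(s)
			s.chunk = nil
		}
	}
}

func main() {
	c := &Chunk{id: 310}
	s := &Sublog{LogID: 311, chunk: c}
	c.head = s
	rollback([]*Sublog{s})
	fmt.Println("chunk 310 head after rollback is nil:", c.head == nil)
}
```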
In this embodiment of the application, the atomicity of write requests between nodes is guaranteed through a rollback mechanism instead of relying on retries as in the prior art, which improves the serviceability of the system.
The data writing method provided by the embodiment of the present application is described in detail above with reference to fig. 1 to 7, and the embodiment of the data writing apparatus of the present application is described in detail below with reference to fig. 8 to 11. It is to be understood that the description of the method embodiments corresponds to the description of the apparatus embodiments, and therefore reference may be made to the preceding method embodiments for parts not described in detail.
Fig. 8 is a schematic block diagram of a data writing device 800 according to an embodiment of the present application. The data writing apparatus 800 may include:
a storage module 810, configured to store N sub-logs of the log locally;
a sending module 820, configured to backup the N sub-logs to a standby node;
a first processing module 830, configured to mark the log as valid when it is determined that all the N sub-logs are stored in the standby node.
It should be understood that the log is generated according to an atomic write request, the atomic write request includes N sub-writes, each sub-log in the N sub-logs records information of each sub-write, and N is a positive integer greater than 0.
Optionally, in some embodiments, the data writing apparatus 800 may further include:
the second processing module 840 is configured to delete the N stored sub-logs when the N sub-logs in the standby node fail to be stored.
Optionally, in some embodiments, the N sub-logs carry the same log identification number ID, and the storage module 810 is specifically configured to: store the N sub-logs in order into a log storage container of the master node according to the ID.
Optionally, in some embodiments, the first processing module 830 is specifically configured to: receiving a notification message, wherein the notification message is used for indicating that all the N sub-logs are stored in the standby node; marking the log as valid.
It should be noted that, in the entity apparatus 800 shown in fig. 8, the processor may implement the steps executed by the respective modules by calling the computer program in the memory. For example, computer instructions stored in the cache may be called by the processor to perform steps required to be performed by the respective modules (the storage module 810, the transmission module 820, the first processing module 830, and the second processing module 840).
Fig. 9 is a schematic block diagram of a data writing apparatus 900 according to an embodiment of the present application. The data writing apparatus 900 may include:
a storage module 910, configured to store, into the standby node, N sub-logs of the log backed up by the master node;
a processing module 920, configured to mark the log as valid when all the N sub-logs are stored in the standby node;
a sending module 930, configured to notify the master node that all of the N sub-logs have been stored in the standby node.
It should be understood that the log is generated according to one atomic write request, the one atomic write request includes N sub-writes, each sub-log of the N sub-logs records information of each sub-write, and N is a positive integer greater than 0.
Optionally, in some embodiments, the N sub-logs carry the same log identification number ID, and the storage module 910 is specifically configured to: store the N sub-logs in order into a log storage container of the standby node according to the ID.
Optionally, in some embodiments, the sending module 930 is specifically configured to: and sending a notification message to the master node, wherein the notification message is used for indicating that all the N sub-logs are stored in a log storage container of the standby node.
It should be noted that, in the entity apparatus 900 shown in fig. 9, the processor may implement the steps executed by the respective modules by calling the computer program in the memory. For example, computer instructions stored in the cache may be called by the processor to perform the steps required by the respective modules (the storage module 910, the processing module 920, and the sending module 930).
Fig. 10 is a schematic block diagram of a master node 1000 provided in an embodiment of the present application. The master node 1000 may include: a memory 1001, a processor 1002, and an input/output interface 1003.
The memory 1001, the processor 1002 and the input/output interface 1003 are connected via an internal connection path, the memory 1001 is used for storing program instructions, and the processor 1002 is used for executing the program instructions stored in the memory 1001 to control the input/output interface 1003 to receive input data and information and output data such as operation results.
It should be understood that, in the embodiment of the present application, the processor 1002 may adopt a Central Processing Unit (CPU), and the processor may also be other general-purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. Or the processor 1002 is implemented by one or more integrated circuits, and is configured to execute the relevant programs to implement the technical solutions provided in the embodiments of the present application.
The memory 1001 may include both read-only memory and random access memory, and provides instructions and data to the processor 1002. A portion of the processor 1002 may also include non-volatile random access memory. For example, the processor 1002 may also store information of the device type.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 1002. The method disclosed in the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in the processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 1001, and the processor 1002 reads the information in the memory 1001 and completes the steps of the above method in combination with its hardware. To avoid repetition, this is not described in detail here.
It should be understood that the master node 1000 according to the embodiment of the present application may correspond to the apparatus 800 in the embodiment of the present application and may be configured to perform the corresponding processes of the methods in fig. 2 to fig. 7. The operations and/or functions of the modules in the master node 1000 described above are respectively intended to implement the corresponding processes of the methods in fig. 2 to fig. 7, and are not described here again for brevity.
Fig. 11 is a schematic block diagram of a standby node 1100 according to an embodiment of the present application. The standby node 1100 may include: a memory 1101, a processor 1102, and an input/output interface 1103.
The memory 1101, the processor 1102 and the input/output interface 1103 are connected through an internal connection path, the memory 1101 is used for storing program instructions, and the processor 1102 is used for executing the program instructions stored in the memory 1101 to control the input/output interface 1103 to receive input data and information and output data such as operation results.
It should be understood that in the embodiment of the present application, the processor 1102 may be a Central Processing Unit (CPU), and the processor may also be other general-purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. Or the processor 1102 may employ one or more integrated circuits for executing related programs to implement the technical solutions provided in the embodiments of the present application.
The memory 1101 may include both read-only memory and random access memory and provides instructions and data to the processor 1102. A portion of the processor 1102 may also include non-volatile random access memory. For example, the processor 1102 may also store information of the device type.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 1102. The method disclosed in the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in the processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 1101, and the processor 1102 reads the information in the memory 1101 and completes the steps of the above method in combination with its hardware. To avoid repetition, this is not described in detail here.
It should be understood that the standby node 1100 according to the embodiment of the present application may correspond to the apparatus 900 in the embodiment of the present application, and may be configured to perform corresponding processes of the methods in fig. 2 to fig. 7 in the embodiment of the present application, and the above-mentioned and other operations and/or functions of the modules in the standby node 1100 are respectively for implementing the corresponding processes of the methods in fig. 2 to fig. 7 in the embodiment of the present application, and are not repeated herein for brevity.
Through the above description, the main node 1000 and the standby node 1100 provided in the embodiment of the present application can ensure atomicity of a group of write requests through a log mechanism in a data backup process, and avoid implementing atomicity through a file system semantic lock in the prior art, thereby reducing process complexity and system overhead.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the system, the apparatus and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and in actual implementation, there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may also be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, or the portion thereof that substantially contributes over the prior art, may be embodied in the form of a software product stored in a storage medium, which includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (18)

1. A method of data writing, the method comprising:
a master node stores N sub-logs of a log locally, wherein the log is generated according to an atomic write request, the atomic write request is one operation or a series of operations that cannot be interrupted, and the log comprises a marking bit, wherein the atomic write request comprises N sub-writes, each sub-log records information of one sub-write, and N is a positive integer;
the master node backs up the N sub-logs to a standby node;
and the master node, when determining that all of the N sub-logs are stored in the standby node, modifies the state of the marking bit in the log, so that the log is marked as valid.
2. The method of claim 1, further comprising:
when storage of the N sub-logs in the standby node fails, the master node deletes the stored N sub-logs.
3. The method according to claim 1 or 2, wherein the N sub-logs carry the same log identification number ID,
and the storing, by the master node, the N sub-logs of the log locally comprises:
the master node stores the N sub-logs, in order according to the ID, into a log storage container of the master node.
4. The method according to any one of claims 1 to 3, wherein the marking, by the master node, the log as valid when it is determined that all of the N sub-logs are stored in the standby node comprises:
the master node receives a notification message, wherein the notification message indicates that all of the N sub-logs are stored in the standby node;
the master node marks the log as valid.
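For illustration only, the following Go sketch shows one way the master-node method of claims 1 to 4 could be realized. The names SubLog, Log, BackupClient, MasterNode, NewMasterNode and WriteAtomic are assumptions of this sketch and do not appear in the patent; persistence, concurrency and error handling are simplified.

package sketch

import "sort"

// SubLog records the information of one sub-write of an atomic write request.
type SubLog struct {
    LogID   uint64 // log identification number shared by all N sub-logs (claim 3)
    Seq     int    // position of the sub-write inside the atomic write request
    Payload []byte // information of the sub-write
}

// Log groups the N sub-logs of one atomic write request together with the
// marking bit that indicates whether the log is valid.
type Log struct {
    ID      uint64
    SubLogs []SubLog
    Valid   bool // the marking bit; false until the standby node confirms storage
}

// BackupClient abstracts the link to the standby node; Backup returns nil
// only after the standby node reports that all N sub-logs are stored.
type BackupClient interface {
    Backup(subLogs []SubLog) error
}

// MasterNode holds the local log storage container and the backup link.
type MasterNode struct {
    store   map[uint64]*Log
    standby BackupClient
}

// NewMasterNode returns a master node with an empty log storage container.
func NewMasterNode(standby BackupClient) *MasterNode {
    return &MasterNode{store: make(map[uint64]*Log), standby: standby}
}

// WriteAtomic stores the N sub-logs locally in order, backs them up to the
// standby node, and flips the marking bit to valid only when the backup
// succeeds (claims 1, 3 and 4); if the backup fails, the locally stored
// sub-logs are deleted (claim 2).
func (m *MasterNode) WriteAtomic(logID uint64, subLogs []SubLog) error {
    // Store the N sub-logs locally, ordered inside the log storage container.
    sort.Slice(subLogs, func(i, j int) bool { return subLogs[i].Seq < subLogs[j].Seq })
    m.store[logID] = &Log{ID: logID, SubLogs: subLogs}

    // Back up the N sub-logs to the standby node.
    if err := m.standby.Backup(subLogs); err != nil {
        // Storage on the standby node failed: delete the stored sub-logs.
        delete(m.store, logID)
        return err
    }

    // Notification received: all N sub-logs are stored on the standby node,
    // so modify the state of the marking bit and the log becomes valid.
    m.store[logID].Valid = true
    return nil
}

In this sketch the marking bit stays false until the standby node's confirmation arrives, so a half-replicated atomic write is never treated as valid.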
5. A method of data writing, the method comprising:
a standby node stores N sub-logs of a log backed up by a master node into the standby node, wherein the log is generated according to an atomic write request, the atomic write request is one operation or a series of operations that cannot be interrupted, and the log comprises a marking bit, wherein the atomic write request comprises N sub-writes, each sub-log records information of one sub-write, and N is a positive integer;
when all of the N sub-logs are stored in the standby node, the standby node modifies the state of the marking bit in the log, so that the log is marked as valid;
and the standby node notifies the master node that all of the N sub-logs are stored in the standby node.
6. The method of claim 5, wherein the N sub-logs carry the same log identification number ID,
and the storing, by the standby node, the N sub-logs of the log backed up by the master node into the standby node comprises:
the standby node stores the N sub-logs, in order according to the ID, into a log storage container of the standby node.
7. The method of claim 5 or 6, wherein the notifying, by the standby node, the master node that all of the N sub-logs are stored in the standby node comprises:
the standby node sends a notification message to the master node, wherein the notification message indicates that all of the N sub-logs are stored in a log storage container of the standby node.
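Likewise for illustration, a minimal Go sketch of the standby-node method of claims 5 to 7; StandbyNode, NewStandbyNode, StoreBackup and the notify callback are illustrative assumptions (SubLog mirrors the type in the master-node sketch), and a real standby node would also persist the container and handle retries.

package standbysketch

import "sort"

// SubLog records the information of one sub-write; all N sub-logs of one
// atomic write request carry the same log identification number.
type SubLog struct {
    LogID   uint64
    Seq     int
    Payload []byte
}

// StandbyNode holds the log storage container, the per-log marking bit, and
// a callback for sending the notification message to the master node.
type StandbyNode struct {
    store  map[uint64][]SubLog
    valid  map[uint64]bool
    notify func(logID uint64) // sends the notification message of claim 7
}

// NewStandbyNode returns a standby node with empty containers.
func NewStandbyNode(notify func(uint64)) *StandbyNode {
    return &StandbyNode{
        store:  make(map[uint64][]SubLog),
        valid:  make(map[uint64]bool),
        notify: notify,
    }
}

// StoreBackup stores the sub-logs backed up by the master node, in order
// according to the ID (claim 6); once all N sub-logs are stored it modifies
// the marking bit so the log is valid and notifies the master node (claim 5).
func (s *StandbyNode) StoreBackup(logID uint64, subLogs []SubLog, n int) {
    // Store the sub-logs in the log storage container, ordered by sequence.
    s.store[logID] = append(s.store[logID], subLogs...)
    sort.Slice(s.store[logID], func(i, j int) bool {
        return s.store[logID][i].Seq < s.store[logID][j].Seq
    })

    if len(s.store[logID]) == n && !s.valid[logID] {
        // All N sub-logs are stored: mark the log as valid and tell the master.
        s.valid[logID] = true
        s.notify(logID)
    }
}

Counting stored sub-logs against N before flipping the marking bit and sending the notification message keeps the validity of the log tied to the completeness of the backup.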
8. An apparatus for writing data, the apparatus comprising:
a storage module, configured to store N sub-logs of a log locally, wherein the log is generated according to an atomic write request, the atomic write request is one operation or a series of operations that cannot be interrupted, and the log comprises a marking bit, wherein the atomic write request comprises N sub-writes, each sub-log of the N sub-logs records information of one sub-write, and N is a positive integer;
a sending module, configured to back up the N sub-logs to a standby node;
and a first processing module, configured to modify the state of the marking bit in the log when all of the N sub-logs are stored in the standby node, so that the log is marked as valid.
9. The apparatus of claim 8, further comprising:
a second processing module, configured to delete the stored N sub-logs when storage of the N sub-logs in the standby node fails.
10. The apparatus according to claim 8 or 9, wherein the N sub-logs carry the same log identification number ID,
the storage module is specifically configured to:
store the N sub-logs, in order according to the ID, into a log storage container of the master node.
11. The apparatus according to any one of claims 8 to 10, wherein the first processing module is specifically configured to:
receive a notification message, wherein the notification message indicates that all of the N sub-logs are stored in the standby node;
mark the log as valid.
12. An apparatus for writing data, the apparatus comprising:
a storage module, configured to store, into a standby node, N sub-logs of a log backed up by a master node, wherein the log is generated according to an atomic write request, the atomic write request is one operation or a series of operations that cannot be interrupted, and the log comprises a marking bit, wherein the atomic write request comprises N sub-writes, each sub-log of the N sub-logs records information of one sub-write, and N is a positive integer;
a processing module, configured to modify the state of the marking bit in the log when all of the N sub-logs are stored in the standby node, so that the log is marked as valid;
and a sending module, configured to notify the master node that all of the N sub-logs are stored in the standby node.
13. The apparatus of claim 12, wherein the N sub-logs carry a same log identification number (ID),
the storage module is specifically configured to:
store the N sub-logs, in order according to the ID, into a log storage container of the standby node.
14. The apparatus according to claim 12 or 13, wherein the sending module is specifically configured to:
send a notification message to the master node, wherein the notification message indicates that all of the N sub-logs are stored in a log storage container of the standby node.
15. A master node comprising an input output interface, a processor for controlling the input output interface to transceive information, and a memory for storing a computer program, the processor being configured to retrieve from the memory and execute the computer program such that the master node performs the operational steps of the method of any one of claims 1 to 4.
16. A standby node comprising an input/output interface, a processor for controlling the input/output interface to transceive information, and a memory for storing a computer program, the processor being configured to retrieve and execute the computer program from the memory so that the standby node performs the operation steps of the method of any one of claims 5 to 7.
17. A storage system comprising a master node according to claim 15 and a backup node according to claim 16.
18. A computer-readable storage medium, comprising a computer program which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 7.
CN201810706826.8A 2018-07-02 2018-07-02 Data writing method and device Active CN110737716B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810706826.8A CN110737716B (en) 2018-07-02 2018-07-02 Data writing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810706826.8A CN110737716B (en) 2018-07-02 2018-07-02 Data writing method and device

Publications (2)

Publication Number Publication Date
CN110737716A CN110737716A (en) 2020-01-31
CN110737716B true CN110737716B (en) 2022-09-23

Family

ID=69233325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810706826.8A Active CN110737716B (en) 2018-07-02 2018-07-02 Data writing method and device

Country Status (1)

Country Link
CN (1) CN110737716B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113609150B (en) * 2021-10-08 2022-03-08 阿里云计算有限公司 Hardware-based atomic writing method, equipment and system
CN115993940B (en) * 2023-03-23 2023-07-25 青岛鼎信通讯股份有限公司 Electric quantity loss prevention method and device, electric energy meter equipment and storage medium
CN116107516B (en) * 2023-04-10 2023-07-11 苏州浪潮智能科技有限公司 Data writing method and device, solid state disk, electronic equipment and storage medium


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102609479B (en) * 2012-01-20 2015-11-25 北京思特奇信息技术股份有限公司 A kind of memory database node clone method
CN103793291B (en) * 2012-11-01 2017-04-19 华为技术有限公司 Distributed data copying method and device
CN104346373B (en) * 2013-07-31 2017-12-15 华为技术有限公司 Subregion journal queue synchronization management method and equipment
CN105426439B (en) * 2015-11-05 2022-07-05 腾讯科技(深圳)有限公司 Metadata processing method and device
CN106855822A (en) * 2015-12-08 2017-06-16 阿里巴巴集团控股有限公司 For the method and apparatus of distributing real time system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101251814A (en) * 2008-02-04 2008-08-27 浙江大学 Method for implementing credible recovery system in operating system
US9785510B1 (en) * 2014-05-09 2017-10-10 Amazon Technologies, Inc. Variable data replication for storage implementing data backup
CN104571956A (en) * 2014-12-29 2015-04-29 成都致云科技有限公司 Data writing method and splitting device
CN106648959A (en) * 2016-09-07 2017-05-10 华为技术有限公司 Data storage method and storage system

Also Published As

Publication number Publication date
CN110737716A (en) 2020-01-31

Similar Documents

Publication Publication Date Title
US8464101B1 (en) CAS command network replication
CN101706802B (en) Method, device and sever for writing, modifying and restoring data
US7778975B2 (en) Mirroring method, mirroring device, and computer product
US20150213100A1 (en) Data synchronization method and system
CN108509462B (en) Method and device for synchronizing activity transaction table
CN110737716B (en) Data writing method and device
CN111753013B (en) Distributed transaction processing method and device
JP4715774B2 (en) Replication method, replication system, storage device, program
US10628298B1 (en) Resumable garbage collection
CN107656834A (en) Recover main frame based on transaction journal to access
CN105049258A (en) Data transmission method of network disaster-tolerant system
US9330153B2 (en) System, method, and computer readable medium that coordinates between devices using exchange of log files
CN110121694B (en) Log management method, server and database system
CN115145697A (en) Database transaction processing method and device and electronic equipment
CN111309799A (en) Method, device and system for realizing data merging and storage medium
CN111159156B (en) Backup method and device for SQLite database
US10620872B2 (en) Replicating data in a data storage system
CN110121712B (en) Log management method, server and database system
CN110196788B (en) Data reading method, device and system and storage medium
CN115348276A (en) Data storage method and device, computer equipment and storage medium
US9471409B2 (en) Processing of PDSE extended sharing violations among sysplexes with a shared DASD
CN112596959A (en) Distributed storage cluster data backup method and device
CN107305582B (en) Metadata processing method and device
JP6044363B2 (en) Computer, NAS access method and NAS access program
CN104220982A (en) Transaction processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant