CN114244859B - Data processing method and device and electronic equipment - Google Patents


Info

Publication number
CN114244859B
Authority
CN
China
Prior art keywords
data
sequencing
copies
access request
data copy
Prior art date
Legal status
Active
Application number
CN202210168540.5A
Other languages
Chinese (zh)
Other versions
CN114244859A (en)
Inventor
严祥光
朱云锋
鞠进涛
张冠华
Current Assignee
Alibaba Cloud Computing Ltd
Original Assignee
Alibaba Cloud Computing Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Cloud Computing Ltd
Priority to CN202210168540.5A
Publication of CN114244859A
Application granted
Publication of CN114244859B
Status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1097: Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00: Arrangements for detecting or preventing errors in the information received
    • H04L1/22: Arrangements for detecting or preventing errors in the information received using redundant apparatus to increase reliability

Abstract

The embodiment of the specification provides a data processing method and device and electronic equipment. The method is applied to a distributed storage system, where the distributed storage system comprises at least two data copies; the method comprises the following steps: receiving concurrent access requests of a client; in response to the concurrent access requests, recording, by the data copies, the concurrent access requests; sequencing each access request in the concurrent access requests among the data copies based on a preset sequencing protocol so as to reach agreement on the order of the access requests; and sequentially executing, by the data copies, each access request in the concurrent access requests according to the agreed sequencing order.

Description

Data processing method and device and electronic equipment
Technical Field
The embodiment of the specification relates to the technical field of computers, in particular to a data processing method and device and electronic equipment.
Background
A distributed storage system generally refers to a system that stores data in a distributed manner on a plurality of independent devices and provides a unified access mode to the outside. To improve the reliability of a distributed storage system, the same piece of data is generally stored as multiple data copies, so that even if one data copy fails, the other valid data copies can still provide data services.
In addition, since the same data is stored in a plurality of different data copies, data consistency among the plurality of data copies needs to be ensured. To ensure data consistency, all copies of the same piece of data need to process all write operations in the same order.
In practical applications, multiple clients may issue access requests concurrently, so the distributed storage system needs to sequence these concurrent access requests so that each data copy executes them in the agreed order.
Disclosure of Invention
The embodiment of the specification provides a data processing method and device and electronic equipment.
According to a first aspect of embodiments of the present specification, there is provided a data processing method applied to a distributed storage system, where the distributed storage system includes at least two data copies; the method comprises the following steps:
receiving concurrent access requests of a client;
in response to the concurrent access requests, recording, by the data copies, the concurrent access requests;
sequencing each access request in the concurrent access requests among the data copies based on a preset sequencing protocol so as to reach the agreement on the sequence of each access request;
and sequentially executing each access request in the concurrent access requests by the data copy according to the sequencing sequence.
Optionally, the sequencing, based on a preset sequencing protocol, of each access request in the concurrent access requests among the data copies to reach agreement on the order of the access requests includes:
electing a Leader data copy among the data copies;
acquiring, by the Leader data copy, the receiving order of each access request from all available data copies;
determining, by the Leader data copy, a sequencing order to be agreed upon by consensus according to the receiving orders corresponding to all available data copies;
and synchronizing, by the Leader data copy, the sequencing order to all data copies for consensus, and determining the agreed sequencing order as the result of sequencing the access requests.
Optionally, the determining of the sequencing order to be agreed upon by consensus according to the receiving orders corresponding to all available data copies includes:
screening out, from the receiving orders corresponding to all available data copies, the target receiving order shared by the largest number of data copies;
determining the target receiving order as the sequencing order to be agreed upon by consensus.
Optionally, the method further includes:
and if no target receiving order with the largest count exists among the receiving orders corresponding to all the available data copies, the Leader data copy determines the sequencing order by itself.
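The majority screening and tie-breaking logic above can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the function name and request labels are assumptions.

```python
from collections import Counter

def decide_sequencing_order(receive_orders, leader_order):
    """Sketch of the Leader's decision step (hypothetical names).

    receive_orders: the receive order reported by each available data copy,
    e.g. [("W1", "W2"), ("W2", "W1"), ("W1", "W2")].
    leader_order: the Leader copy's own receive order, used as the fallback
    when no single order has a strictly largest count.
    """
    counts = Counter(receive_orders).most_common()
    if len(counts) == 1 or counts[0][1] > counts[1][1]:
        return counts[0][0]   # unique most-frequent receive order wins
    return leader_order       # no strict maximum: Leader decides by itself

orders = [("W1", "W2"), ("W2", "W1"), ("W1", "W2")]
print(decide_sequencing_order(orders, ("W2", "W1")))  # -> ('W1', 'W2')
```

In the real protocol the chosen order would then be synchronized to all data copies for consensus, as described above.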
Optionally, the method further includes:
when any data copy is missing an access request, the Leader data copy notifies a normal data copy that holds the missing access request to fill it in on the abnormal data copy that lacks it.
Optionally, the sequencing protocol is provided with execution conditions for fault tolerance; and the sequencing of each access request in the concurrent access requests among the data copies based on a preset sequencing protocol to reach agreement on the order of the access requests includes:
when the data copies meet the execution condition, sequencing each access request in the concurrent access requests among the data copies based on a preset sequencing protocol so as to reach the agreement on the sequence of each access request;
wherein the execution condition of the sequencing protocol comprises:
N ≥ 3F + 1;
W ≥ 2F + 1;
R ≥ F + 1;
wherein N represents the total number of data copies, F represents the number of data copies that can tolerate simultaneous failures, W represents the number of data copies that successfully execute the access request, and R represents the number of data copies that successfully read the execution result.
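The three execution conditions can be checked mechanically. The following helper is an illustrative assumption, not part of the patent:

```python
def satisfies_sequencing_conditions(n, w, r, f):
    """Check the fault-tolerance execution conditions of the sequencing
    protocol: N >= 3F + 1, W >= 2F + 1, R >= F + 1, where N is the total
    number of data copies, F the tolerated simultaneous failures, W the
    write quorum, and R the read quorum."""
    return n >= 3 * f + 1 and w >= 2 * f + 1 and r >= f + 1

# With N = 4 copies, tolerating F = 1 failure requires W >= 3 and R >= 2.
print(satisfies_sequencing_conditions(4, 3, 2, 1))  # True
print(satisfies_sequencing_conditions(3, 2, 2, 1))  # False: N < 3F + 1
```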
Optionally, the method further includes:
when a member change occurs on the data copies, switching the old member configuration to the joint member configuration;
after the joint member configuration is committed, switching the joint member configuration to the new member configuration;
wherein the old member configuration comprises a data copy before member change; the new member configuration comprises a data copy after member change; the federated member configuration comprises a combination of the old member configuration and the new member configuration.
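The old-to-joint-to-new switching can be sketched as a small state machine. All class and method names below are illustrative assumptions; a real joint-consensus change additionally requires decisions to reach quorums in both the old and new configurations while in the joint phase:

```python
class ReplicaMembership:
    """Hypothetical two-phase membership-change sketch:
    old -> joint (old ∪ new) -> new."""

    def __init__(self, old_members):
        self.old = set(old_members)
        self.new = None
        self.phase = "old"

    def begin_change(self, new_members):
        """Enter the joint configuration combining old and new members."""
        self.new = set(new_members)
        self.phase = "joint"

    def active_members(self):
        if self.phase == "joint":
            return self.old | self.new   # combination of both configurations
        return self.new if self.phase == "new" else self.old

    def commit_joint(self):
        """After the joint configuration is committed, switch to the new one."""
        assert self.phase == "joint"
        self.old, self.phase = self.new, "new"

m = ReplicaMembership({"copy1", "copy2", "copy3"})
m.begin_change({"copy2", "copy3", "copy4"})
print(sorted(m.active_members()))  # ['copy1', 'copy2', 'copy3', 'copy4']
m.commit_joint()
print(sorted(m.active_members()))  # ['copy2', 'copy3', 'copy4']
```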
Optionally, there is no master data copy among the at least two data copies.
Optionally, the distributed storage system comprises a distributed shared block storage system.
According to a second aspect of embodiments of the present specification, there is provided a data processing apparatus applied to a distributed storage system, the distributed storage system including at least two data copies; the device comprises:
the receiving unit is used for receiving concurrent access requests of the client;
the response unit is used for responding to the concurrent access request and recording the concurrent access request by the data copy;
the sequencing unit is used for sequencing each access request in the concurrent access requests among the data copies based on a preset sequencing protocol so as to reach the agreement on the sequence of each access request;
and the processing unit is used for sequentially executing each access request in the concurrent access requests by the data copy according to the sequencing sequence.
Optionally, the sequencing unit comprises:
the electing subunit, which elects a Leader data copy among the data copies;
the obtaining subunit, which obtains, through the Leader data copy, the receiving order of each access request from all available data copies;
the determining subunit, which determines, through the Leader data copy, a sequencing order to be agreed upon by consensus according to the receiving orders corresponding to all available data copies;
and the consensus subunit, which synchronizes the sequencing order to all the data copies for consensus through the Leader data copy, and determines the agreed sequencing order as the result of sequencing the access requests.
Optionally, the determining subunit is further configured to screen out, from the receiving orders corresponding to all available data copies, the target receiving order shared by the largest number of data copies, and determine the target receiving order as the sequencing order to be agreed upon by consensus.
Optionally, the determining subunit is further configured to determine, by the Leader data copy, the sequencing order by itself when no target receiving order with the largest count exists among the receiving orders corresponding to all available data copies.
Optionally, the method further includes:
and the filling subunit is configured to, when any data copy is missing an access request, notify, through the Leader data copy, a normal data copy holding the missing access request to fill it in on the abnormal data copy that lacks it.
Optionally, the sequencing protocol is provided with an execution condition for fault tolerance;
the sequencing unit is further configured to sequence each access request in the concurrent access requests among the data copies based on a preset sequencing protocol when the data copies satisfy the execution condition, so as to agree on a sequence of each access request; wherein the execution condition of the sequencing protocol comprises:
N ≥ 3F + 1;
W ≥ 2F + 1;
R ≥ F + 1;
wherein, N represents the total number of data copies, F represents the number of data copies that can tolerate simultaneous failures, W represents the number of data copies that successfully execute the access request, and R represents the number of data copies that successfully read the execution result.
Optionally, the method further includes:
a changing unit for switching the old member configuration to the joint member configuration when a member change occurs on the data copies, and, after the joint member configuration is committed, switching the joint member configuration to the new member configuration;
wherein the old member configuration comprises a data copy before member change; the new member configuration comprises a data copy after member change; the federated member configuration comprises a combination of the old member configuration and the new member configuration.
Optionally, there is no master data copy among the at least two data copies.
Optionally, the distributed storage system comprises a distributed shared block storage system.
According to a third aspect of embodiments herein, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform any one of the data processing methods described above.
According to a fourth aspect of embodiments herein, there is provided a computer-readable storage medium comprising:
the instructions in the computer readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform any of the data processing methods described above.
In this specification, a data processing scheme is provided, where a preset sequencing protocol is used to agree on a sequence of concurrent access requests among data copies, so that each data copy may sequentially execute the concurrent access requests according to the same execution sequence, thereby ensuring data consistency among the data copies.
Drawings
Fig. 1 is a schematic diagram of a read/write operation of a client according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of sequential consistency computation for multiple write requests provided by an embodiment of the present description;
FIG. 3 is a flow chart of a data processing method provided by an embodiment of the present specification;
FIG. 4 is a schematic diagram of a data copy provided by an embodiment of the present description;
FIG. 5 is a schematic diagram of another data copy provided by an embodiment of the present description;
FIG. 6 is a schematic diagram of a sequencing protocol provided by an embodiment of the present description;
FIG. 7 is a schematic diagram of a simplified sequencing protocol provided by an embodiment of the present description;
fig. 8 is a schematic diagram of a member change timing provided in an embodiment of the present specification;
FIG. 9 is a schematic diagram of a member change protocol provided by an embodiment of the present specification;
fig. 10 is a hardware configuration diagram of a data processing apparatus provided in an embodiment of the present specification;
fig. 11 is a block diagram of a data processing apparatus according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the specification, as detailed in the appended claims.
The terminology used in the description herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present description. The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination", depending on the context.
As mentioned above, to improve the reliability of a distributed storage system, a certain fault tolerance is required. The same piece of data is generally stored as multiple data copies in the distributed storage system, so that even if one data copy fails, the other valid data copies can still provide data services.
Since the same data needs to be stored on multiple different data copies, these data copies must be kept consistent. To ensure data consistency, all copies of the same piece of data need to process all write operations in the same order.
Generally, the distributed storage system can be divided into a distributed storage system with a master data copy and a distributed storage system without a master data copy according to whether a master-slave relationship exists between the data copies.
In a distributed storage system with a master data copy, the data copies are not completely equal: there is a master-slave relationship. Specifically, among the plurality of data copies one is the master data copy and the rest are slave data copies. When the distributed storage system responds to concurrent access requests from multiple clients, the master data copy performs the sequencing, and the slave data copies only need to execute the concurrent access requests in the order determined by the master data copy.
In a distributed storage system without a master data copy, the data copies are completely equal and there is no master-slave relationship. When such a system responds to concurrent access requests from multiple clients, no single data copy can sequence the concurrent access requests on its own. A sequencing scheme for concurrent access requests is therefore needed to ensure that all data copies process the requests in the same order, thereby ensuring data consistency among the data copies.
Since a distributed storage system without a master data copy does not need to designate a master, it offers higher availability than one with a master data copy.
The embodiments in this specification can be applied to block storage (EBS) technology, especially in shared block storage scenarios.
Block storage is a block-level random storage technology that provides low latency, durability, and high reliability for cloud servers (ECS, Elastic Compute Service). Block storage supports automatic replication of data within an availability zone and prevents data unavailability caused by unexpected hardware failures, thereby protecting business from the threat of hardware failure.
In a public cloud, a cloud disk is usually mounted on a single ECS instance to provide block storage service for customers. However, in some practical business scenarios (e.g., the Oracle RAC database HA architecture), one cloud disk needs to be mounted on multiple ECS instances. The multiple ECS instances then perform read-write operations on the same cloud disk at the same time, which requires shared block storage, i.e., block storage supporting concurrent read-write access from multiple ECS instances.
Further, to achieve low-latency, high-throughput shared block storage, read and write operations must be point-to-point and cannot be forwarded via intermediate nodes. In addition, to provide a certain fault tolerance, an architecture without a master data copy is required. In such an architecture, the client reads and writes the multiple data copies directly, which reduces latency but raises the problem of data consistency among the data copies.
For the data consistency problem in the architecture without a master data copy, this specification provides a data processing scheme in which a peer-to-peer sequencing protocol is used among the data copies to consistently sequence concurrent access requests, ensuring that all data copies execute the concurrent access requests in the same order.
Fig. 1 is a schematic diagram illustrating a client initiating an access request, as provided in this specification. Generally, access requests can be divided into write requests, corresponding to write operations, and read requests, corresponding to read operations. The client broadcasts the read-write request to multiple data copies, and each data copy processes the request locally; when the client receives enough reply messages (usually from at least half of the data copies), the read-write request is considered complete. Accordingly, after a data copy receives a read-write request from the client, it processes the request locally without further network communication, so the latency is only the one network hop from client to data copy, achieving low latency.
Taking the diagram on the left of fig. 2 as an example, multiple clients (at least two) write concurrently to multiple data copies: for example, client 1 and client 2 concurrently issue write request W1 and write request W2, both of which must be written to data copy 1 and data copy 2. Data copy 1 and data copy 2 may receive W1 and W2 in different orders (in fig. 2, data copy 1 receives them in the order W1/W2, i.e., W1 first and W2 second, while data copy 2 receives them in the order W2/W1, i.e., W2 first and W1 second). If data copy 1 and data copy 2 execute W1 and W2 in different orders, the data becomes inconsistent.
To this end, with continued reference to the intermediate diagram shown in FIG. 2, the data copies may be consistently sequenced between themselves by sequencing the order of write requests in each data copy via a sequencing protocol; so that all copies of data agree on the order of all write requests. That is, as shown in the right diagram of FIG. 2, the data copy 1 and the data copy 2 can achieve the agreed sequencing result of W2/W1. In this manner, since data copy 1 and data copy 2 can execute write request W1 and write request W2 in the same order, data consistency between different data copies can be guaranteed.
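The effect of agreeing on a single order can be illustrated with a toy model; the function, request labels, and values below are hypothetical, and the "agreed" order is simply taken from the consensus result in the figure:

```python
def apply_writes(agreed_order, writes):
    """Apply the writes to a single-value state in the agreed order;
    the last write in the order determines the final state."""
    state = None
    for wid in agreed_order:
        state = writes[wid]
    return state

writes = {"W1": "value-from-client-1", "W2": "value-from-client-2"}
agreed = ("W2", "W1")                 # consensus result of fig. 2 (W2 first)
copy1_state = apply_writes(agreed, writes)
copy2_state = apply_writes(agreed, writes)
print(copy1_state == copy2_state)     # True: both replicas end up identical
```

Because every copy replays the same agreed order, the final states coincide regardless of the order in which the requests originally arrived at each copy.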
Please refer to fig. 3 for describing an embodiment of a data processing method provided in the present specification, which is applied to a distributed storage system including at least two data copies; the method comprises the following steps:
step 310: receiving concurrent access requests of a client;
step 320: and responding to the concurrent access requests, and recording the concurrent access requests by the data copy.
After receiving the concurrent access requests, each data copy records the access requests locally, returns an acceptance message to each client, and then executes the access requests, i.e., performs the subsequent steps 330 to 340.
When the client receives acceptance messages from more than half of the data copies, the access request is considered complete. The system therefore responds very quickly from the client's perspective.
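The "more than half" completion rule on the client side can be sketched as follows (an illustrative helper, not from the patent):

```python
def request_completed(acks, total_copies):
    """Client-side sketch: a request counts as complete once more than
    half of the data copies have returned an acceptance message."""
    return acks > total_copies // 2

print(request_completed(2, 3))  # True: 2 of 3 is a majority
print(request_completed(2, 4))  # False: 2 of 4 is only half
```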
Step 330: sequencing each access request in the concurrent access requests among the data copies based on a preset sequencing protocol so as to reach the agreement on the sequence of each access request.
Step 340: and sequentially executing each access request in the concurrent access requests by the data copy according to the sequencing sequence.
As mentioned above, through the sequencing protocol, the execution order of the access requests can be commonly known among the data copies, and after agreement is reached, the access requests are executed according to the uniform execution order.
The sequencing protocol in this specification is a consistency protocol designed for the read-write request scenario. Although consistency protocols exist in the related art, they cannot meet the performance requirements of this scenario.
As shown in table 1, comparing the sequencing protocol with existing consistency protocols shows that the sequencing protocol provided in this specification offers high performance and low latency, and can therefore meet the performance requirements of the read-write request scenario, especially the low-latency requirement of concurrent read-write requests.
[Table 1: comparison of the sequencing protocol with existing consistency protocols]
Direct writing means the request is written point-to-point to multiple data copies; generally, the client writes to the data copies directly. Y-type writing first sends the request to one data copy, which then forwards it to the other data copies; the path resembles the letter Y, hence the name. Comparing the two in terms of response speed, direct writing responds faster and has lower latency because no forwarding is needed. Direct writing is therefore more suitable for latency-sensitive scenarios.
As shown in table 1, the lowest latency is achieved by RWN and the sequencing protocol of this specification (only one network hop from client to server, while the other consistency protocols add at least one extra network hop between data copies). Comparing RWN with the sequencing protocol, RWN supports only a single client and not concurrent multi-client access, so it is heavily restricted; the sequencing protocol of this specification supports concurrent multi-client requests and offers high performance, good universality, and practicality.
In implementation, since the data copy has not actually executed the access request when it returns the acceptance message to the client, the request, although it will normally be executed later, may fail to execute because of an exception.
This can lead to an anomaly in which the client regards an access request as completed when it has not actually been completed. Therefore, when designing the sequencing protocol, the fault tolerance of the data copies must be considered: when one or several data copies fail, the remaining data copies must still be able to complete the access request and provide data service normally.
The following describes in detail the execution conditions set in the sequencing protocol to achieve this fault tolerance. Only when these conditions are satisfied among the data copies can consistent sequencing be achieved with the sequencing protocol; the access request is then guaranteed to complete, so the acceptance message in step 320 can be returned to the client before the access request is actually executed.
In the following, taking write requests as an example, assume the total number of data copies is N, up to F data copies may fail simultaneously, a write request must write W data copies to be considered successful, and a read request must read from R data copies to read correct data.
Referring to FIG. 4, assuming concurrent write request W1 writes the data copies before write request W2, the sequencing result must guarantee that W1 comes before W2.
Taking the data copy diagram on the left of fig. 4 as an example, N = 3, F = 1, and W = 2 (among 3 data copies, at least 2 must be written for the write to be considered successful). W1 writes data copy 1 and data copy 2, and W2 writes data copy 2 and data copy 3. If data copy 2 fails at this point, the order of W1 and W2 is lost and cannot be recovered, because neither data copy 1 nor data copy 3 holds both writes and hence neither knows the order.
Taking the data copy diagram on the right of fig. 4 as an example, N = 4, F = 1, and W = 3 (among 4 data copies, at least 3 must be written for the write to be considered successful). W1 writes data copies 1, 2, and 3, and W2 writes data copies 2, 3, and 4. If any single data copy fails, sequencing is still possible, since at least one data copy always retains the order of the write requests. But if two data copies fail (e.g., data copy 2 and data copy 3), the order can no longer be recovered.
Considering generality: if a write is considered successful only after writing W data copies, then to ensure that the order of any two write requests can still be recovered after F data copies fail, at least one surviving data copy must hold the order of the two write requests. The data copies must therefore satisfy the following Formula 1 as an execution condition of the sequencing protocol:
2W − N ≥ F + 1 (Formula 1)
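Formula 1 can be sanity-checked by brute force for small N: enumerate every placement of two writes on W copies each and every failure of F copies, and verify that some surviving copy holds both writes and hence their order. The helper below is an illustrative assumption, not part of the patent:

```python
from itertools import combinations

def order_recoverable(n, w, f):
    """Worst-case check: for every placement of two writes on w copies each
    and every failure of f copies, at least one surviving copy must hold
    both writes (and therefore their relative order)."""
    copies = range(n)
    for s1 in combinations(copies, w):
        for s2 in combinations(copies, w):
            both = set(s1) & set(s2)          # copies holding both writes
            for failed in combinations(copies, f):
                if not (both - set(failed)):  # no survivor knows the order
                    return False
    return True

# N=3, W=2, F=1 violates 2W - N >= F + 1 (1 < 2) and is not recoverable;
# N=4, W=3, F=1 satisfies it (2 >= 2) and is recoverable.
print(order_recoverable(3, 2, 1))  # False
print(order_recoverable(4, 3, 1))  # True
```

This matches the two examples of fig. 4 discussed above.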
Of course, in practice satisfying Formula 1 is still insufficient in some cases. In the data copy diagram shown in fig. 5, write request W2 is sent only after write request W1 succeeds (i.e., W1 precedes W2); but W1's transmission may take long enough that it arrives at data copy 4 after W2, so the wrong order (W2 before W1) is recorded on data copy 4. If data copy 2 then fails, the true order of W1 and W2 cannot be determined, because data copy 3 and data copy 4 give different orders and data copy 1 holds only W1.
It can be seen that among all the data copies, some may give no order (e.g., data copy 2) and some may give an incorrect order (e.g., data copy 4). To ensure that the order of any two write requests can still be recovered after F data copies fail, the number of data copies giving the correct order must be greater than the number giving the wrong order, i.e., the following Formula 2 must hold:
2W − N − F ≥ N − W + 1 (Formula 2)
In addition, in the data copy schematic shown in fig. 5, data copy 1 holds only write request W1 and not write request W2, so it does not know the order of W1 and W2 — but neither does it give an incorrect order. Based on this observation, an optimization can be made: by assuming that data copy 1 will eventually receive the not-yet-received write request W2, data copy 1 can also give an order. If all not-yet-received write requests are ordered last in this way, then for any two write requests, not only can the normal data copies give an order, but the abnormal data copies (those that received only one of the two write requests) can give an order as well. Such an abnormal copy may give the correct or the wrong order, but the number of copies giving the correct order will still be greater than or equal to the number giving the wrong order. With this optimization, the possibility shown in fig. 5 that a data copy gives the wrong order need not be considered, and Formula 2 above is no longer needed.
Finally, to ensure that write requests can still be processed after F data copies fail, the number of data copies that must be written successfully cannot exceed the number of remaining data copies, i.e., the following Formula 3 must be satisfied:
W ≤ N − F (Formula 3)
Further, from Formula 1 and Formula 3 it can be derived that:
N ≥ 3F + 1 (Formula 4)
W ≥ 2F + 1 (Formula 5)
That is, Formula 4 and Formula 5 are the conditions that must be satisfied for a write request to be considered successfully written.
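For completeness, the derivation of Formula 4 and Formula 5 from Formula 1 and Formula 3 can be written out step by step:

```latex
% From Formula 1 (2W - N >= F + 1) and Formula 3 (W <= N - F):
\begin{align*}
2W - N &\ge F + 1 \quad\text{and}\quad W \le N - F \\
\Rightarrow\; 2(N - F) - N &\ge F + 1
  \;\Rightarrow\; N \ge 3F + 1 \quad\text{(Formula 4)} \\
\Rightarrow\; 2W &\ge N + F + 1 \ge (3F + 1) + F + 1
  \;\Rightarrow\; W \ge 2F + 1 \quad\text{(Formula 5)}
\end{align*}
```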
After the conditions for a successful write are determined, we continue by deriving the conditions that must be satisfied for a read request to be considered successful.
First, to ensure that the latest data can be read, the W data copies constituting a successful write and the R data copies constituting a successful read must intersect:
R + W − N ≥ 1 (Formula 6)
In addition, to ensure that read requests can still be processed after F data copies fail, the number of data copies that must be read successfully cannot exceed the number of remaining data copies, i.e., the following Formula 7 must be satisfied:
R ≤ N − F (Formula 7)
Further, from Formula 6 and Formula 7 it can be deduced that:
R ≥ F + 1 (Formula 8)
In summary, when the total number of data copies is N and F simultaneous data copy failures are to be tolerated, a write request must write W data copies and a read request must read from R data copies, where N, F, W and R satisfy the following relationships:
N ≥ 3F + 1 (Formula 9)

W ≥ 2F + 1 (Formula 10)

R ≥ F + 1 (Formula 11)
In summary, the sequencing protocol may set the execution conditions shown in equations 9, 10, and 11 above; correspondingly, the sequencing, based on a preset sequencing protocol, of each access request in the concurrent access requests among the data copies in step 330 to reach an agreement on the sequence of each access request may include:
when the data copies meet the execution condition, sequencing each access request in the concurrent access requests among the data copies based on a preset sequencing protocol so as to reach an agreement on the sequence of each access request.
Applying the above example, the execution condition of the sequencing protocol is satisfied only when the data copies satisfy Formulas 9, 10 and 11 above, thereby achieving fault tolerance in the sequencing protocol.
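As a sketch (the function name is illustrative, not from the patent), the execution condition of Formulas 9–11 can be checked programmatically:

```python
def satisfies_execution_condition(n: int, f: int, w: int, r: int) -> bool:
    """Check Formulas 9-11: N >= 3F + 1, W >= 2F + 1, R >= F + 1."""
    return n >= 3 * f + 1 and w >= 2 * f + 1 and r >= f + 1

# Minimal configuration tolerating F = 1 simultaneous failure:
print(satisfies_execution_condition(n=4, f=1, w=3, r=2))  # True
# Three data copies cannot tolerate even one failure under this protocol:
print(satisfies_execution_condition(n=3, f=1, w=3, r=2))  # False
# A larger configuration for F = 2:
print(satisfies_execution_condition(n=7, f=2, w=5, r=3))  # True
```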
Table 2 below illustrates N, F, W and R for several common data copy configurations:

[Table 2 image]
Having described the fault tolerance of the sequencing protocol, we now describe its execution flow. As previously described, the data copies may run a sequencing protocol to agree on the order of received access requests and then execute the access requests in that order. Referring to the schematic diagram of the sequencing protocol in fig. 6: to improve sequencing efficiency, a stable Leader data copy may be elected among the data copies; the Leader data copy determines the order of access requests and then reaches agreement on that order with the other Follower data copies.
It should be noted that although a Leader is elected inside the sequencing protocol, there is still no master data copy among the data copies; availability therefore remains higher than that of a distributed storage system with a master data copy.
The sequencing protocol as shown in FIG. 6 may be divided into the following sections:
1. Leader election. A Leader may be elected using an election algorithm (e.g., Raft). After a Leader is elected it is maintained as Leader, and when the Leader is found to have failed, a new Leader is elected.
2. Collection. The Leader obtains from N − F data copies the receiving order of access requests (write requests in the examples below) that have been received from the Client but not yet sequenced. The N − F data copies are the available copies remaining after excluding the maximum number of copies that may fail simultaneously. If the receiving orders provided by the N − F data copies are consistent, that order can be adopted directly as the sequencing order — agreement has already been reached — and the subsequent sequencing and consensus stages can be skipped. If a write request exists on fewer than F + 1 data copies, the data must first be completed.
3. Sequencing. The Leader determines the final sequencing order from the receiving orders obtained from the N − F data copies. The final order may be decided by majority vote (the receiving order reported by the largest number of copies); if, for two write requests, no receiving order holds the majority, the two writes are concurrent and the Leader decides their final order itself.
4. Consensus. The Leader synchronizes the determined order to a majority of the data copies to reach agreement, and then asynchronously notifies each data copy. The synchronization may be implemented using, for example, Raft's AppendEntries.
5. Data completion. When the Leader notifies each data copy, it also indicates where the missing write requests can be found, and each data copy fetches its missing write requests from the copies indicated by the Leader. After completing its data, each data copy performs the write operations in the determined order.
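The sequencing step above can be sketched as follows — a simplified model under assumed names; a real implementation must also handle timeouts, membership and persistence:

```python
from collections import Counter

def decide_order(receiving_orders, leader_order):
    """Pick the sequencing order for the unsequenced requests.

    receiving_orders: one tuple of request ids per available copy (N - F of
    them). If one receiving order has strictly the most votes it wins;
    otherwise the requests are concurrent and the Leader decides itself.
    """
    votes = Counter(receiving_orders).most_common()
    if len(votes) == 1 or votes[0][1] > votes[1][1]:
        return votes[0][0]   # a unique most-voted receiving order exists
    return leader_order      # concurrent requests: the Leader decides

# Three copies report (W1, W2); one slow copy reports (W2, W1):
print(decide_order([("W1", "W2")] * 3 + [("W2", "W1")], ("W1", "W2")))
# -> ('W1', 'W2')
```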
Based on this, in an exemplary embodiment, the step 330 may sequence, based on a preset sequencing protocol, each access request in the concurrent access requests among the data copies to reach an agreement on a sequence of each access request, and may include:
electing a Leader data copy among the data copies;
acquiring the receiving sequence of each access request from all available data copies by the Leader data copy;
determining a sequencing sequence to be identified by the Leader data copy according to the receiving sequence corresponding to all available data copies;
synchronizing the sequencing sequence to all data copies for consensus by the Leader data copy, and determining the consensus-achieved sequencing sequence as a result of sequencing the access requests.
Wherein, the determining the sequencing order to be commonly recognized according to the receiving order corresponding to all the available data copies comprises:
screening out a target receiving sequence with the maximum number of the same receiving sequences from the receiving sequences corresponding to all available data copies;
determining the target receiving sequence as a sequencing sequence to be commonly recognized.
And if the receiving sequence corresponding to all the available data copies does not have the maximum number of target receiving sequences, the Leader data copy determines the sequencing sequence to be identified by itself.
Corresponding to the data completion stage, when any data copy is missing an access request, the Leader data copy can notify the abnormal data copy that is missing the access request, so that it completes the missing access request from a normal data copy that holds it.
Applying the above example, through the sequencing protocol the data copies can sequence each access request in the concurrent access requests and reach consensus on the sequencing result.
In an exemplary embodiment, if fault tolerance is not required, the sequencing protocol can be simplified, and the number of data copies that must be written for a successful write can equal the total number of data copies. The simplified sequencing protocol is shown in fig. 7; it only needs to satisfy the condition of the election algorithm (e.g., Raft), i.e., N ≥ 2F + 1. Compared with the sequencing protocol shown in fig. 6, the simplified protocol has the following advantages:
1. Because fault tolerance is not considered, the order in which each data copy records write requests can be taken as valid; the Leader can therefore treat its local write-request order as authoritative, without collecting the orders from other data copies, and complete the sequencing locally.
2. Each data copy holds all write requests, so no data completion is needed.
3. For a read request, reading data from any one data copy is sufficient for the read to be considered successful.
As can be seen from the above example, if the sequencing protocol does not need to account for fault tolerance, it can be further simplified so that sequencing completes faster and access requests execute sooner, further shortening latency.
Having introduced the sequencing protocol, we next describe how data consistency is guaranteed when members of the distributed storage system change.
A member change is a change to the data copies in the distributed storage system, such as adding, removing or replacing a data copy. The data copies need to be changed dynamically while the sequencing protocol is running, so that all data copies agree on the member configuration. But member changes are special, because the members participating in consensus are themselves changing during the change. During a member change, the moments at which the individual data copies switch from the old member configuration C_old to the new member configuration C_new may differ, so at some moment C_old and C_new may coexist with different N, F, W, R parameters, which breaks data consistency. To solve this problem, this specification introduces the two-stage member change method Joint Consensus into the sequencing protocol.
Joint Consensus introduces a joint member configuration C_old,new as a transitional member configuration, where C_old,new is the combination of C_old and C_new. When a member change occurs, the member configuration first switches from the old member configuration C_old to the joint member configuration C_old,new; after the joint member configuration C_old,new is committed, it switches from C_old,new to the new member configuration C_new. This guarantees that the old member configuration C_old and the new member configuration C_new are never in effect at the same time, avoiding divergent N, F, W, R parameters and ensuring safety. Because the joint member configuration C_old,new is a more conservative member configuration, it can remain in effect for a longer period without compromising safety.
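Under Joint Consensus semantics, the conservative joint configuration requires a request to reach the quorum of both the old and the new configuration. A hedged sketch (names and the `(members, W)` shape are assumptions for illustration):

```python
def joint_write_ok(acks, c_old, c_new):
    """A write under the joint configuration C_old,new succeeds only if it
    reaches the write quorum of BOTH the old and the new configuration.

    acks: set of copy ids that acknowledged the write.
    c_old, c_new: (set of member ids, write quorum W) per configuration.
    """
    def quorum_ok(cfg):
        members, w = cfg
        return len(acks & members) >= w
    return quorum_ok(c_old) and quorum_ok(c_new)

# Old config: copies 1-4 with W = 3; new config: copies 2-5 with W = 3.
old = ({1, 2, 3, 4}, 3)
new = ({2, 3, 4, 5}, 3)
print(joint_write_ok({2, 3, 4}, old, new))  # True: quorum in both configs
print(joint_write_ok({1, 2, 3}, old, new))  # False: only 2 acks fall in new
```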
In the data processing scheme provided by this specification, the client writes directly to the data copies, so the client must perceive member changes of the distributed storage system (hereinafter, the server) in time and keep its member configuration synchronized with the server's, to ensure that parameters such as N, F, W, R stay consistent. Because the joint member configuration C_old,new is a more conservative member configuration, the period during which the client uses C_old,new may be longer than the server's, but must not be shorter, as shown in the member change timing diagram in fig. 8.
To ensure that the period during which the client uses the joint member configuration C_old,new is not shorter than the server's, and thus that the member change is safe, member configuration versions can be identified by a monotonically increasing epoch: a stable member configuration has epoch 2N, and a transitional member configuration has epoch 2N + 1. When the server's member configuration epoch is 2N, requests from clients whose member configuration epoch is 2N, 2N + 1 or 2N − 1 are accepted. When the server's member configuration epoch is 2N + 1, only requests from clients whose epoch is 2N or 2N + 1 are accepted.
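The epoch compatibility rule above can be expressed directly (a sketch; the function name is illustrative):

```python
def server_accepts(server_epoch: int, client_epoch: int) -> bool:
    """Stable configurations use even epochs (2N), transitional ones odd
    (2N + 1). A stable server also tolerates clients one epoch behind."""
    if server_epoch % 2 == 0:                      # stable: 2N
        return client_epoch in (server_epoch - 1,  # 2N - 1
                                server_epoch,      # 2N
                                server_epoch + 1)  # 2N + 1
    return client_epoch in (server_epoch - 1,      # transitional 2N + 1:
                            server_epoch)          # accepts 2N and 2N + 1

print(server_accepts(4, 3))  # True  (stable server, client one behind)
print(server_accepts(5, 3))  # False (transitional server, client too old)
```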
Referring to the schematic diagram of the member change protocol shown in fig. 9, a member change is initiated by a client: the client sends a member change request to the server as a write request, and the member change is complete once the member change request is applied. If a new member is to join, the new member is prepared before the member change request is sent, so its data can be synchronized in advance.
During the member change, the moments at which the client and the server switch member configurations can be determined as follows. Let C be a member change request; the server and the client switch member configurations at the following times.
the server member configures switching time:
1、
Figure 128014DEST_PATH_IMAGE003
switch to
Figure 126057DEST_PATH_IMAGE005
: number ofWhen the order of the member change request C is determined according to the copy, the order of the member change request C is determined from
Figure 351502DEST_PATH_IMAGE003
Switch to
Figure 409588DEST_PATH_IMAGE006
Access request usage before Member Change request C
Figure 720483DEST_PATH_IMAGE003
Access request usage after member change request C
Figure 381272DEST_PATH_IMAGE005
2、
Figure 336589DEST_PATH_IMAGE005
Switch to
Figure 955789DEST_PATH_IMAGE004
: when the data copy agrees on the order of the member change requests C, the member change request C is selected from
Figure 894927DEST_PATH_IMAGE005
Switch to
Figure 765931DEST_PATH_IMAGE004
Later access request usage
Figure 965968DEST_PATH_IMAGE004
The client switches member configurations at the following times:

1. C_old → C_old,new: when the client initiates an access request, if F + 1 data copies feed back that they have received the member change request C, then starting from this access request the client switches from C_old to C_old,new.

2. C_old,new → C_new: when the client initiates an access request, if any data copy feeds back that it has switched to C_new, then starting from this access request the client switches from C_old,new to C_new.
In practical applications, a "ghost request" problem may also arise.
Suppose an access request is issued to fewer than F data copies, after which the corresponding client crashes and all data copies that received the access request also fail; at this point the access request is effectively lost. When those data copies later recover, the access request reappears, and under the maximum-commit recovery principle it would have to be restored even though it had been absent for some time.
An access request that reappears after disappearing for a long time is usually unacceptable in practice. When such an access request reappears, all the access requests ahead of it must already have been sequenced. Therefore, if during sequencing the Leader finds that the order of an access request is inconsistent with the order already determined, it can conclude that the request is a ghost request and ignore it.
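The ghost-request check can be modeled loosely as follows — a hypothetical sketch, not the patent's implementation: a request is a ghost if it claims a slot inside the already-sequenced prefix without actually being part of it.

```python
def is_ghost(request_id, reported_position, sequenced):
    """sequenced: list of request ids whose order is already fixed.
    A reappearing request that claims a position inside the already
    determined prefix, without being in it, conflicts with the fixed
    order and must be a ghost request."""
    return reported_position < len(sequenced) and request_id not in sequenced

sequenced = ["W1", "W2", "W3"]
print(is_ghost("W9", 1, sequenced))  # True: slot 1 is already decided
print(is_ghost("W4", 3, sequenced))  # False: appended after the fixed prefix
```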
Corresponding to the foregoing data processing method embodiments, this specification also provides embodiments of a data processing apparatus. The apparatus embodiments may be implemented by software, by hardware, or by a combination of the two. Taking a software implementation as an example, as a logical apparatus it is formed by the processor of the device it resides on reading a corresponding computer program from non-volatile memory into memory and executing it. In terms of hardware, fig. 10 shows a hardware structure diagram of the device in which the data processing apparatus of this specification resides; besides the processor, network interface, memory and non-volatile memory shown in fig. 10, the device may include other hardware according to the actual functions of the data processing, which is not described again here.
Referring to fig. 11, a block diagram of a data processing apparatus according to an embodiment of the present disclosure is provided, where the apparatus corresponds to the embodiment of the method shown in fig. 3 and is applied to a distributed storage system, where the distributed storage system includes at least two data copies; the device comprises:
a receiving unit 1010, which receives concurrent access requests from clients;
a response unit 1020, configured to record, by the data copy, the concurrent access request in response to the concurrent access request;
the sequencing unit 1030 is used for sequencing each access request in the concurrent access requests among the data copies based on a preset sequencing protocol so as to reach an agreement on the sequence of each access request;
the processing unit 1040, according to the sequencing order, sequentially executes each access request in the concurrent access requests by the data copy.
Optionally, the sequencing unit 1030 comprises
The electing subunit elects a Leader data copy among the data copies;
the obtaining subunit obtains the receiving sequence of each access request from all available data copies by the Leader data copy;
the determining subunit determines a sequencing order to be identified by the Leader data copy according to the receiving order corresponding to all available data copies;
and the consensus subunit synchronizes the sequencing sequence to all the data copies for consensus through the Leader data copy, and determines the consensus-achieved sequencing sequence as a result of sequencing the access requests.
Optionally, the determining subunit is further configured to screen out, from the receiving orders corresponding to all available data copies, a target receiving order with the largest number of the same receiving orders, and determine the target receiving order as an ordering order to be identified.
Optionally, the determining subunit is further configured to determine, by the Leader data copy, an ordering order to be identified by itself when there is no target receiving order with the largest number in the receiving orders corresponding to all available data copies.
Optionally, the method further includes:
and the filling subunit is used for notifying the abnormal data copy of the missing access request by the Leader data copy to fill the missing access request to the normal data copy with the missing access request when any data copy is in the missing access request.
Optionally, the sequencing protocol is provided with an execution condition for fault tolerance;
the sequencing unit 1030 is further configured to sequence, based on a preset sequencing protocol, each access request in the concurrent access requests among the data copies when the data copies meet the execution condition, so as to agree on a sequence of each access request; wherein the execution condition of the sequencing protocol comprises:
n is greater than or equal to 3F + 1;
w is more than or equal to 2F + 1;
r is more than or equal to F + 1;
wherein, N represents the total number of data copies, F represents the number of data copies that can tolerate simultaneous failures, W represents the number of data copies that successfully execute the access request, and R represents the number of data copies that successfully read the execution result.
Optionally, the method further includes:
a changing unit for switching the old member configuration to the joint member configuration when the member of the data copy is changed; after the configuration of the joint member is submitted, switching the configuration of the joint member into a new configuration of the member;
wherein the old member configuration comprises a data copy before member change; the new member configuration comprises a data copy after member change; the federated member configuration comprises a combination of the old member configuration and the new member configuration.
Optionally, there is no master data copy between the at least two data copies.
Optionally, the distributed storage system comprises a distributed shared block storage system.
The systems, apparatuses, modules or units described in the above embodiments may be specifically implemented by a computer chip or an entity, or implemented by a product with certain functions. A typical implementation device is a computer, which may be in the form of a personal computer, laptop, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution in the specification. One of ordinary skill in the art can understand and implement without inventive effort.
Fig. 11 above describes the internal functional modules and the structural schematic of the data processing apparatus, and the substantial execution subject of the data processing apparatus may be an electronic device, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform any of the embodiments of the data processing method described above.
In the above embodiments of the electronic device, it should be understood that the Processor may be a Central Processing Unit (CPU), other general-purpose processors, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), etc. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor, and the aforementioned memory may be a read-only memory (ROM), a Random Access Memory (RAM), a flash memory, a hard disk, or a solid state disk. The steps of a method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in the processor.
In an exemplary embodiment, there is also provided a computer-readable storage medium, in which instructions, when executed by a processor of an electronic device, may enable the electronic device to perform the data processing method described in any of the above embodiments.
All the embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from other embodiments. In particular, for the embodiment of the electronic device, since it is substantially similar to the embodiment of the method, the description is simple, and for the relevant points, reference may be made to part of the description of the embodiment of the method.
Other embodiments of the present disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It will be understood that the present description is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.

Claims (11)

1. A data processing method is applied to a distributed storage system, wherein the distributed storage system comprises at least two data copies; the method comprises the following steps:
receiving concurrent access requests of a client;
in response to the concurrent access requests, recording, by the data replica, the concurrent access requests;
when the data copies meet the execution condition of a preset sequencing protocol, sequencing each access request in the concurrent access requests among the data copies based on the sequencing protocol so as to reach the agreement on the sequence of each access request; wherein the execution condition of the sequencing protocol comprises: n is more than or equal to 3F +1, W is more than or equal to 2F +1, and R is more than or equal to F + 1; wherein, N represents the total number of the data copies, F represents the number of the data copies which can tolerate the simultaneous failure, W represents the number of the data copies which successfully execute the access request, and R represents the number of the data copies which successfully read the execution result;
and sequentially executing each access request in the concurrent access requests by the data copy according to the sequencing sequence.
2. The method of claim 1, the sequencing each of the concurrent access requests among the data replicas based on the sequencing protocol to agree on a precedence order of each access request, comprising:
electing a Leader data copy among the data copies;
acquiring the receiving sequence of each access request from all available data copies by the Leader data copy;
determining a sequencing sequence to be identified by the Leader data copy according to the receiving sequence corresponding to all available data copies;
synchronizing the sequencing sequence to all data copies for consensus by the Leader data copy, and determining the consensus-achieved sequencing sequence as a result of sequencing the access requests.
3. The method of claim 2, wherein determining an ordering order to be commonly recognized according to the receiving order corresponding to all available data copies comprises:
screening out a target receiving sequence with the maximum number of the same receiving sequences from the receiving sequences corresponding to all available data copies;
and determining the target receiving sequence as a sequencing sequence to be identified.
4. The method of claim 3, further comprising:
and if the receiving sequence corresponding to all the available data copies does not have the maximum number of target receiving sequences, the Leader data copy determines the sequencing sequence to be identified by itself.
5. The method of claim 2, further comprising:
when any data copy is missing an access request, the Leader data copy informs the abnormal data copy that is missing the access request, so that it completes the missing access request from a normal data copy that holds it.
6. The method of claim 1, further comprising:
when the data copy is subjected to member change, switching the old member configuration into the combined member configuration;
after the configuration of the joint member is submitted, switching the configuration of the joint member into a new configuration of the member;
wherein the old member configuration comprises a data copy before member change; the new member configuration comprises a data copy after member change; the federated member configuration comprises a combination of the old member configuration and the new member configuration.
7. The method of claim 1, there being no primary data replica between data replicas in the distributed storage system.
8. The method of claim 1, the distributed storage system comprising a distributed shared block storage system.
9. A data processing device is applied to a distributed storage system, wherein the distributed storage system comprises at least two data copies; the device comprises:
the receiving unit is used for receiving concurrent access requests of the client;
the response unit is used for responding to the concurrent access request and recording the concurrent access request by the data copy;
the sequencing unit is used for sequencing each access request in the concurrent access requests among the data copies based on a preset sequencing protocol when the data copies meet the execution condition of the sequencing protocol so as to reach the agreement on the sequence of each access request; wherein the execution condition of the sequencing protocol comprises: n is more than or equal to 3F +1, W is more than or equal to 2F +1, and R is more than or equal to F + 1; wherein, N represents the total number of the data copies, F represents the number of the data copies which can tolerate the simultaneous failure, W represents the number of the data copies which successfully execute the access request, and R represents the number of the data copies which successfully read the execution result;
and the processing unit is used for sequentially executing each access request in the concurrent access requests by the data copy according to the sequencing sequence.
10. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any of the preceding claims 1-8.
11. A computer-readable storage medium, comprising:
the instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of any of claims 1-8 above.
CN202210168540.5A 2022-02-23 2022-02-23 Data processing method and device and electronic equipment Active CN114244859B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210168540.5A CN114244859B (en) 2022-02-23 2022-02-23 Data processing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN114244859A CN114244859A (en) 2022-03-25
CN114244859B true CN114244859B (en) 2022-08-16

Family

ID=80747977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210168540.5A Active CN114244859B (en) 2022-02-23 2022-02-23 Data processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN114244859B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115357600B (en) * 2022-10-21 2023-02-03 鹏城实验室 Data consensus processing method, system, device, equipment and readable storage medium
CN117149097B (en) * 2023-10-31 2024-02-06 苏州元脑智能科技有限公司 Data access control method and device for distributed storage system

Citations (1)

Publication number Priority date Publication date Assignee Title
CN105577763A (en) * 2015-12-16 2016-05-11 浪潮(北京)电子信息产业有限公司 Dynamic duplicate consistency maintenance system and method, and cloud storage platform

Family Cites Families (16)

Publication number Priority date Publication date Assignee Title
US7360032B2 (en) * 2005-07-19 2008-04-15 International Business Machines Corporation Method, apparatus, and computer program product for a cache coherency protocol state that predicts locations of modified memory blocks
GB2519157A (en) * 2013-10-14 2015-04-15 Ibm Robust data replication
US9569459B1 (en) * 2014-03-31 2017-02-14 Amazon Technologies, Inc. Conditional writes at distributed storage services
US10235404B2 (en) * 2014-06-25 2019-03-19 Cohesity, Inc. Distributed key-value store
US10073902B2 (en) * 2014-09-24 2018-09-11 Microsoft Technology Licensing, Llc Snapshot and replication of a multi-stream application on multiple hosts at near-sync frequency
US9971692B2 (en) * 2015-11-17 2018-05-15 International Business Machines Corporation Supporting concurrent operations at fine granularity in a caching framework
CN106445409A (en) * 2016-09-13 2017-02-22 郑州云海信息技术有限公司 Distributed block storage data writing method and device
CN106603645A (en) * 2016-12-02 2017-04-26 广东电网有限责任公司电力科学研究院 Large-scale cloud storage copy server consistency processing method and system
CN108234630B (en) * 2017-12-29 2021-03-23 北京奇虎科技有限公司 Data reading method and device based on distributed consistency protocol
CN110535680B (en) * 2019-07-12 2020-07-14 中山大学 Byzantine fault-tolerant method
US11327688B2 (en) * 2020-01-13 2022-05-10 Cisco Technology, Inc. Master data placement in distributed storage systems
CN111277636A (en) * 2020-01-15 2020-06-12 成都理工大学 Consensus algorithm for improving conventional PBFT (basic particle beam Fourier transform)
CN111368002A (en) * 2020-03-05 2020-07-03 广东小天才科技有限公司 Data processing method, system, computer equipment and storage medium
CN113253924A (en) * 2021-04-28 2021-08-13 百果园技术(新加坡)有限公司 Data processing method and device, electronic equipment and computer readable storage medium
CN113535656B (en) * 2021-06-25 2022-08-09 中国人民大学 Data access method, device, equipment and storage medium
CN113645295B (en) * 2021-08-09 2023-04-07 东南大学 Block chain network security setting method based on Paxos algorithm


Also Published As

Publication number Publication date
CN114244859A (en) 2022-03-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant