CN116107706A - Transaction processing method and device of distributed database, electronic equipment and storage medium - Google Patents

Transaction processing method and device of distributed database, electronic equipment and storage medium

Info

Publication number
CN116107706A
CN116107706A (application CN202211660353.5A)
Authority
CN
China
Prior art keywords: node, participant, coordinator, preparation, cluster
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211660353.5A
Other languages
Chinese (zh)
Inventor
李磊
陆天炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinzhuan Xinke Co Ltd
Original Assignee
Jinzhuan Xinke Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinzhuan Xinke Co Ltd filed Critical Jinzhuan Xinke Co Ltd
Priority to CN202211660353.5A
Publication of CN116107706A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/466 - Transaction processing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the present application relate to a transaction processing method and apparatus for a distributed database, an electronic device, and a storage medium. The method includes: a coordinator node receives a commit instruction sent by a terminal node, where the coordinator node is configured to schedule a corresponding participant node cluster and is deployed in that participant node cluster; the coordinator node sends the commit instruction to each participant node in the participant node cluster; in response to the commit instruction, each participant node performs a preparation operation to generate a corresponding preparation result, where the preparation result indicates whether the copy of the participant node successfully completed the preparation operation; and, in response to the generated preparation results meeting a preset condition, a corresponding transaction operation is performed, where the preset condition is used to determine the processing state of the transaction. The speed and accuracy of transaction processing are thereby improved.

Description

Transaction processing method and device of distributed database, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and apparatus for transaction processing of a distributed database, an electronic device, and a storage medium.
Background
In the field of distributed databases, the two-phase commit protocol (2PC) is a widely used distributed transaction protocol. It guarantees strong data consistency, and many distributed relational database management systems employ it to complete distributed transactions. It is a distributed algorithm that coordinates all the participants of a distributed atomic transaction and decides whether to commit or cancel (roll back); in other words, it is a consistency algorithm for solving the consistency problem.
However, the two-phase commit protocol behaves uncertainly when processing times out or a node goes down abnormally: the nodes cannot communicate and coordinate their information when such abnormal conditions occur, which further leads to abnormal situations in which the protocol cannot continue to execute because information cannot be synchronized and node resources remain locked, so transaction processing speed and accuracy are low.
Disclosure of Invention
In view of this, in order to solve some or all of the above technical problems, embodiments of the present application provide a transaction processing method, apparatus, electronic device, and storage medium for a distributed database.
In a first aspect, an embodiment of the present application provides a method for transaction processing of a distributed database, where the method includes:
The coordinator node receives a commit instruction sent by the terminal node, wherein the coordinator node is used for scheduling the corresponding participant node cluster, and the coordinator node is deployed in the corresponding participant node cluster;
the coordinator node sends the commit instruction to each participant node in the participant node cluster;
in response to the commit instruction, each participant node performs a preparation operation to generate a corresponding preparation result, wherein the preparation result indicates whether the copy of the participant node successfully completes the preparation operation;
and executing corresponding transaction operation in response to the generated preparation result meeting a preset condition, wherein the preset condition is used for determining the processing state of the transaction.
In one possible implementation manner, the executing the corresponding transaction operation in response to the generated preparation result meeting the preset condition includes:
in response to generating a preparation result indicating that the copy of the participant node did not successfully complete the preparation operation, performing a transaction rollback operation; or
in response to the number of generated target preparation results being greater than a target number, performing a transaction commit operation, wherein the target preparation result indicates that the copy of the participant node successfully completed the preparation operation.
In one possible implementation manner, the coordinator node receives a commit instruction sent by the terminal node, and the method includes:
a coordinator node in a coordinator node cluster receives a commit instruction sent by a terminal node;
the coordinator node sending the commit instruction to each participant node in a cluster of participant nodes, comprising:
the coordinator node in the coordinator node cluster receiving the commit instruction sends the commit instruction to each participant node in each participant node cluster and other coordinators except the coordinator node in the coordinator node cluster; and
and executing corresponding transaction operation in response to the generated preparation result meeting a preset condition, wherein the method comprises the following steps:
responsive to the participant node generating a corresponding preparation result, the participant node sending the preparation result to each coordinator node in the coordinator node cluster;
in response to a single coordinator node in the coordinator node cluster receiving a target preparation result sent by each participant node in the respective participant node cluster, the single coordinator node performs a transaction commit operation, wherein the target preparation result indicates that a copy of a participant node successfully completed the preparation operation.
In one possible implementation, after the sending of the preparation result to each coordinator node in the coordinator node cluster, the method further includes:
in response to a single coordinator node of the coordinator node cluster receiving a preparation result sent by each participant node of the single participant node cluster, the single coordinator node sends the received preparation result to each participant node of the each participant node cluster.
In one possible implementation manner, the coordinator node in the coordinator node cluster receives a commit instruction sent by the terminal node, including:
determining coordinator nodes for receiving a commit instruction sent by a terminal node from a coordinator node cluster by adopting a load balancing algorithm;
and receiving the commit instruction by adopting the determined coordinator node.
In one possible embodiment, the method further comprises:
and in response to the occurrence of the abnormality of the coordinator node in the coordinator node cluster, adopting a new coordinator node in the coordinator node cluster to replace the coordinator node with the abnormality to execute the operation.
In one possible implementation, after the performing the corresponding transaction operation, the method further includes:
And sending response information for indicating whether the commit instruction is successfully executed or not to the terminal node.
In a second aspect, an embodiment of the present application provides a transaction processing apparatus of a distributed database, the apparatus including:
the first sending unit is used for receiving a submitting instruction sent by the terminal node by the coordinator node, wherein the coordinator node is used for scheduling the corresponding participant node cluster, and the coordinator node is deployed in the corresponding participant node cluster;
a second sending unit, configured to send the commit instruction to each participant node in the participant node cluster by using the coordinator node;
the first execution unit is used for responding to the commit instruction, and each participant node respectively executes a preparation operation to generate a corresponding preparation result, wherein the preparation result indicates whether the copy of the participant node successfully completes the preparation operation;
and the second execution unit is used for responding to the generated preparation result to meet the preset condition, and executing the corresponding transaction operation, wherein the preset condition is used for determining the processing state of the transaction.
In one possible implementation manner, the executing the corresponding transaction operation in response to the generated preparation result meeting the preset condition includes:
In response to generating a preparation result indicating that the copy of the participant node did not successfully complete the preparation operation, performing a transaction rollback operation; or
in response to the number of generated target preparation results being greater than a target number, performing a transaction commit operation, wherein the target preparation result indicates that the copy of the participant node successfully completed the preparation operation.
In one possible implementation manner, the coordinator node receives a commit instruction sent by the terminal node, and the method includes:
a coordinator node in a coordinator node cluster receives a commit instruction sent by a terminal node;
the coordinator node sending the commit instruction to each participant node in a cluster of participant nodes, comprising:
The coordinator node in the coordinator node cluster receiving the commit instruction sends the commit instruction to each participant node in each participant node cluster and other coordinators except the coordinator node in the coordinator node cluster; and
and executing corresponding transaction operation in response to the generated preparation result meeting a preset condition, wherein the method comprises the following steps:
responsive to the participant node generating a corresponding preparation result, the participant node sending the preparation result to each coordinator node in the coordinator node cluster;
in response to a single coordinator node in the coordinator node cluster receiving a target preparation result sent by each participant node in the respective participant node cluster, the single coordinator node performs a transaction commit operation, wherein the target preparation result indicates that a copy of a participant node successfully completed the preparation operation.
In one possible implementation, after the sending of the preparation result to each coordinator node in the coordinator node cluster, the apparatus further includes:
and a third sending unit, configured to respond to a single coordinator node in the coordinator node cluster receiving a preparation result sent by each participant node in the single participant node cluster, where the single coordinator node sends the received preparation result to each participant node in the each participant node cluster.
In one possible implementation manner, the coordinator node in the coordinator node cluster receives a commit instruction sent by the terminal node, including:
determining coordinator nodes for receiving a commit instruction sent by a terminal node from a coordinator node cluster by adopting a load balancing algorithm;
and receiving the commit instruction by adopting the determined coordinator node.
In one possible embodiment, the apparatus further comprises:
and the third execution unit is used for responding to the abnormal occurrence of the coordinator node in the coordinator node cluster, and adopting a new coordinator node in the coordinator node cluster to replace the abnormal coordinator node to execute the operation.
In one possible implementation, after the performing the corresponding transaction operation, the apparatus further includes:
and the fourth sending unit is used for sending response information for indicating whether the commit instruction is successfully executed or not to the terminal node.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a memory for storing a computer program;
a processor, configured to execute a computer program stored in the memory, where the computer program is executed to implement a method according to any embodiment of the transaction method of the distributed database according to the first aspect of the present application.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method as in any of the embodiments of the transaction method of the distributed database of the first aspect described above.
In a fifth aspect, embodiments of the present application provide a computer program comprising computer readable code which, when run on a device, causes a processor in the device to implement a method as in any of the embodiments of the transaction method of the distributed database of the first aspect described above.
According to the transaction processing method of the distributed database provided by the embodiments of the present application, a coordinator node can receive a commit instruction sent by a terminal node, where the coordinator node is configured to schedule a corresponding participant node cluster and is deployed in that cluster; the coordinator node then sends the commit instruction to each participant node in the participant node cluster. Next, in response to the commit instruction, each participant node performs a preparation operation to generate a corresponding preparation result, where the preparation result indicates whether the copy of the participant node successfully completed the preparation operation. Finally, in response to the generated preparation results meeting a preset condition, a corresponding transaction operation is performed, where the preset condition is used to determine the processing state of the transaction. Because the coordinator node is deployed in the corresponding participant node cluster, it can obtain the preparation results of the preparation operations executed by the participant nodes in that cluster more quickly, which improves transaction processing speed; moreover, the processing state of the transaction can be determined without waiting for all participant nodes in the cluster to generate preparation results, which improves both the speed and the accuracy of transaction processing.
Drawings
Fig. 1 is a flow chart of a transaction processing method of a distributed database according to an embodiment of the present application;
FIG. 2 is a flow chart of a transaction method of another distributed database according to an embodiment of the present application;
FIG. 3A is a schematic diagram of a deployment mode of a transaction processing method of a distributed database according to an embodiment of the present application;
FIG. 3B is a flow chart of the deployment method of FIG. 3A according to an embodiment of the present application;
FIG. 3C is a schematic diagram of a deployment mode of a transaction processing method of another distributed database according to an embodiment of the present application;
FIG. 3D is a flow chart of the deployment method of FIG. 3C according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a transaction processing device of a distributed database according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the parts and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present application unless it is specifically stated otherwise.
It will be appreciated by those skilled in the art that terms such as "first," "second," and the like in the embodiments of the present application are used merely to distinguish between different steps, devices, or modules, and do not represent any particular technical meaning or logical sequence therebetween.
It should also be understood that in this embodiment, "plurality" may refer to two or more, and "at least one" may refer to one, two or more.
It should also be appreciated that any component, data, or structure referred to in the embodiments of the present application may be generally understood as one or more without explicit limitation or the contrary in the context.
In addition, the term "and/or" in this application merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate that A exists alone, that A and B exist together, or that B exists alone. In this application, the character "/" generally indicates that the associated objects are in an "or" relationship.
It should also be understood that the description of the embodiments herein emphasizes the differences between the embodiments, and that the same or similar features may be referred to each other, and for brevity, will not be described in detail.
The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. For an understanding of the embodiments of the present application, the present application will be described in detail below with reference to the drawings in conjunction with the embodiments. It will be apparent that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Fig. 1 is a flow chart of a transaction processing method of a distributed database according to an embodiment of the present application. The method can be applied to electronic equipment such as a server and the like. The main execution body of the method may be hardware or software. When the execution subject is software, the method may be implemented as a plurality of software or software modules. The present invention is not particularly limited herein.
As shown in fig. 1, the method specifically includes:
step 101, a coordinator node receives a commit instruction sent by a terminal node, where the coordinator node is configured to schedule a corresponding participant node cluster, and the coordinator node is deployed in the corresponding participant node cluster.
In this embodiment, a coordinator (coordinator) node may receive a commit (commit) instruction transmitted by an application installed at the terminal node.
Here, coordinator nodes may be integrated in the participant node cluster.
Step 102, the coordinator node sends the commit instruction to each participant node in the participant node cluster.
Step 103, responding to the commit instruction, and respectively executing preparation operations by the participant nodes to generate corresponding preparation results, wherein the preparation results represent whether copies of the participant nodes successfully complete the preparation operations.
In this embodiment, once a single participant node in the participant node cluster receives a commit instruction sent by the coordinator node, the participant node begins performing a preparation operation. During or after the participant node performs the preparation operation, the participant node may generate a preparation result indicating whether the copy of the participant node successfully completed the preparation operation.
In some cases, each copy of the participant node may correspond to a preparation result. Each participant node may have multiple copies, e.g., each participant node may have 3 to 5 copies.
The preparation operation may include operations such as persisting logs and data and transitioning the commit status.
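As a purely illustrative sketch (not the claimed implementation), the preparation operation of a single copy could be modeled as below; the `Copy` and `PrepareResult` names, the in-memory write-ahead-log list, and the exception handling are assumptions introduced here for clarity.

```python
# Hypothetical sketch only: a participant copy persists the transaction's log and
# pending writes, transitions its local commit status to "prepared", and reports
# whether the preparation operation succeeded. All names are illustrative.
from dataclasses import dataclass, field


@dataclass
class PrepareResult:
    copy_id: str
    txn_id: str
    success: bool


@dataclass
class Copy:
    copy_id: str
    wal: list = field(default_factory=list)    # stand-in for a persistent write-ahead log
    state: dict = field(default_factory=dict)  # transaction id -> local commit status

    def prepare(self, txn_id: str, writes: dict) -> PrepareResult:
        try:
            self.wal.append(("PREPARE", txn_id, dict(writes)))  # persist log and data
            self.state[txn_id] = "prepared"                     # transition of commit status
            return PrepareResult(self.copy_id, txn_id, success=True)
        except Exception:
            self.state[txn_id] = "aborted"
            return PrepareResult(self.copy_id, txn_id, success=False)


if __name__ == "__main__":
    print(Copy("copy-1").prepare("txn-42", {"balance": 100}))
    # PrepareResult(copy_id='copy-1', txn_id='txn-42', success=True)
```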
And step 104, executing corresponding transaction operation in response to the generated preparation result meeting a preset condition, wherein the preset condition is used for determining the processing state of the transaction.
In this embodiment, once the generated preparation result satisfies the preset condition, the corresponding transaction operation may be executed.
In some optional implementations of this embodiment, the preset conditions may also include: a prepare result is generated indicating that the copy of the participant node did not successfully complete the prepare operation.
Thus, the step 104 may specifically be: once a preparation result is generated that indicates that a copy of a participant node did not successfully complete the preparation operation, the participant node performs the corresponding transaction operation. In particular, the participant node may send (e.g., by way of point-to-point communication or broadcast) feedback results to the coordinator indicating that the preparation operation failed to execute. Further, the coordinator node may perform a transaction rollback (rollback) operation.
It will be appreciated that in the alternative implementation described above, once a preparation result is generated indicating that a copy of a participant node did not successfully complete the preparation operation, the participant node may send a feedback result to the coordinator indicating that the preparation operation failed to perform. Further, the coordinator node may perform a transaction rollback operation. Thus, the transaction processing efficiency is improved.
In some optional implementations of this embodiment, the preset conditions may also include: the number of target preparation results generated is greater than the target number.
Thus, the step 104 may specifically be: once the number of generated target preparation results is greater than the target number, the participant node begins performing the corresponding transaction operation. In particular, the participant node may perform a transaction commit (commit) operation; for example, the participant node may send feedback results to the coordinator (e.g., in a point-to-point communication or broadcast manner) indicating that the preparation operation was successfully completed.
Wherein the target preparation result indicates that the copy of the participant node successfully completed the preparation operation.
The target number may be a preset number; further, the target number may also be set based on the total number of copies of the participant node, e.g., the target number may be half the total number of copies of the participant node.
It will be appreciated that in the alternative implementations described above, once the number of generated target preparation results is greater than the target number, the participant node begins performing the corresponding transaction operation; in particular, the participant node may perform a transaction commit operation. Transaction processing efficiency is thus improved.
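A minimal sketch of the preset-condition check described above, assuming (as in the example given earlier) that the target number is half of the total copy count; the function and type names are hypothetical.

```python
# Illustrative decision rule only: roll back as soon as any copy reports a failed
# preparation; commit once the number of successful results exceeds the target
# number, here assumed to be half of the total number of copies.
from collections import namedtuple

PrepareResult = namedtuple("PrepareResult", "copy_id success")


def decide_transaction(prepare_results, total_copies):
    """Return 'rollback', 'commit', or 'wait' from the generated preparation results."""
    results = list(prepare_results)
    if any(not r.success for r in results):
        return "rollback"                      # any failed copy triggers rollback
    target_number = total_copies // 2          # assumed: half of the copy total
    if sum(1 for r in results if r.success) > target_number:
        return "commit"
    return "wait"                              # not enough results generated yet


print(decide_transaction([PrepareResult("c1", True), PrepareResult("c2", True)], 3))  # commit
```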
According to the transaction processing method of the distributed database provided by the embodiments of the present application, a coordinator node can receive a commit instruction sent by a terminal node, where the coordinator node is configured to schedule a corresponding participant node cluster and is deployed in that cluster; the coordinator node then sends the commit instruction to each participant node in the participant node cluster. Next, in response to the commit instruction, each participant node performs a preparation operation to generate a corresponding preparation result, where the preparation result indicates whether the copy of the participant node successfully completed the preparation operation. Finally, in response to the generated preparation results meeting a preset condition, a corresponding transaction operation is performed, where the preset condition is used to determine the processing state of the transaction. Because the coordinator node is deployed in the corresponding participant node cluster, it can obtain the preparation results of the preparation operations executed by the participant nodes in that cluster more quickly, which improves transaction processing speed; moreover, the processing state of the transaction can be determined without waiting for all participant nodes in the cluster to generate preparation results, which improves both the speed and the accuracy of transaction processing.
In some optional implementations of this embodiment, after performing step 104 described above, the coordinator node may further send response information to the end node, where the response information indicates whether the commit instruction is successfully executed.
Specifically, in the case where the transaction operation performed in step 104 is a transaction rollback operation, response information indicating that the commit instruction was not successfully performed may be sent to the terminal node; in the event that the transaction operation performed in step 104 is a transaction commit operation, response information may be sent to the end node indicating that the commit instruction was successfully performed.
It will be appreciated that in the above alternative implementation, the coordinator node may send the response information to the terminal node more timely.
Fig. 2 is a flow chart of another transaction processing method of a distributed database according to an embodiment of the present application. As shown in fig. 2, the method specifically includes:
in step 201, a coordinator node in a coordinator node cluster receives a commit instruction sent by a terminal node, where the coordinator node is configured to schedule a corresponding participant node cluster, and the coordinator node is deployed in the corresponding participant node cluster.
In this embodiment, a preset coordinator node (e.g., a master coordinator node) in the coordinator node cluster, or a coordinator node dynamically determined by adopting a preset rule, may receive a commit instruction sent by the terminal node.
The coordinator node is used for scheduling the corresponding participant node cluster, and is deployed on the corresponding participant node cluster.
In some alternative implementations of the present embodiment, a load balancing algorithm may also be employed to determine a coordinator node from the coordinator node cluster for receiving the commit instruction sent by the terminal node, and to receive the commit instruction using the determined coordinator node.
It will be appreciated that in the alternative implementations described above, a load balancing algorithm may be employed to determine the coordinator node for receiving the commit instruction, whereby the processing power of the coordinator node may be enhanced.
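One simple way to picture the load-balancing selection mentioned here is round-robin over the coordinator node cluster; this is only an illustration, since the embodiment does not fix a particular load-balancing algorithm.

```python
# Illustrative only: round-robin selection of the coordinator node that will receive
# the commit instruction; the actual load-balancing algorithm is left unspecified.
import itertools


def round_robin(coordinator_nodes):
    return itertools.cycle(coordinator_nodes)


picker = round_robin(["coord-1", "coord-2", "coord-3"])
print(next(picker))  # coord-1 receives this commit instruction
print(next(picker))  # coord-2 receives the next one
```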
Here, one or more coordinator nodes may be deployed in each of the participant node clusters.
Step 202, the coordinator node in the coordinator node cluster that receives the commit instruction sends the commit instruction to each participant node in each participant node cluster and other coordinators in the coordinator node cluster except for the coordinator node.
Step 203, in response to the commit instruction, the respective participant nodes perform a preparation operation to generate corresponding preparation results, wherein the preparation results indicate whether the copy of the participant node successfully completes the preparation operation.
In this embodiment, step 203 is substantially identical to step 103 in the corresponding embodiment of fig. 1, and will not be described herein.
Step 204, in response to the participant node generating the corresponding preparation result, the participant node sends the preparation result to each coordinator node in the coordinator node cluster.
In this embodiment, once the participant node generates the corresponding preparation result, the participant node may send (e.g., send in a point-to-point communication manner or a broadcast manner) the preparation result to each coordinator node in the coordinator node cluster.
In step 205, in response to a single coordinator node in the coordinator node cluster receiving a target preparation result sent by each participant node in the respective participant node cluster, the single coordinator node performs a transaction commit operation, wherein the target preparation result indicates that a copy of a participant node successfully completes the preparation operation.
In this embodiment, once there is a coordinator node in the coordinator node cluster that has received the target preparation result sent by every participant node in every participant node cluster, that coordinator node may perform a transaction commit operation.
Wherein the target preparation result indicates that the copy of the participant node successfully completed the preparation operation.
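A sketch of the condition in step 205, simplified so that a coordinator tracks which participant clusters have already returned a successful preparation result (rather than every individual node); the class and identifiers are hypothetical.

```python
# Illustrative only: a coordinator records successful preparation results per
# participant cluster and may begin the transaction commit operation as soon as
# every expected cluster has reported success.
class CoordinatorView:
    def __init__(self, participant_clusters):
        self.expected = set(participant_clusters)
        self.prepared = set()

    def on_target_result(self, cluster_id: str) -> bool:
        """Record a successful preparation; return True when the commit may start."""
        self.prepared.add(cluster_id)
        return self.prepared >= self.expected


view = CoordinatorView(["cluster-A", "cluster-B"])
view.on_target_result("cluster-A")         # still waiting for cluster-B
print(view.on_target_result("cluster-B"))  # True -> perform the transaction commit operation
```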
It should be noted that, in addition to the above descriptions, the present embodiment may further include the corresponding technical features described in the embodiment corresponding to fig. 1, so as to further achieve the technical effects of the transaction processing method of the distributed database shown in fig. 1, and the detailed description with reference to fig. 1 is omitted herein for brevity.
According to the transaction processing method of the distributed database, once a coordinator node which receives target preparation results sent by all participant nodes in all participant node clusters is generated, the coordinator node can execute transaction commit operation. This may further increase the efficiency of the transaction.
In some alternative implementations of the present embodiment, after performing step 204 described above, the following steps may also be performed:
Once a single coordinator node of the coordinator node cluster receives a preparation result that is sent (e.g., by way of point-to-point communication or broadcast) by each participant node of a single participant node cluster, the single coordinator node sends (e.g., by way of point-to-point communication or broadcast) the received preparation result to each participant node of the respective participant node cluster.
It may be appreciated that in the above implementation, the coordinator node that receives the preparation result sent by each participant node in the single participant node cluster may synchronize the preparation result to each coordinator node, and may further improve transaction processing efficiency.
In some alternative implementations of the present embodiment, once an anomaly occurs in a coordinator node of the coordinator node cluster, a new coordinator node in the coordinator node cluster replaces the abnormal coordinator node to perform its operations.
It can be appreciated that in the above alternative implementation manner, a new coordinator node may be adopted in time to replace an abnormal coordinator node, so as to improve the robustness of the transaction processing.
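A minimal failover sketch for the alternative implementation above, assuming coordinator health can be observed and any healthy member of the coordinator node cluster may take over; the health-check callback is a hypothetical stand-in.

```python
# Illustrative failover only: when the acting coordinator is abnormal, the next
# healthy node in the coordinator node cluster replaces it to continue the work.
def pick_replacement(coordinator_nodes, is_healthy, failed_id):
    for node_id in coordinator_nodes:
        if node_id != failed_id and is_healthy(node_id):
            return node_id                     # new coordinator takes over the operation
    raise RuntimeError("no healthy coordinator node available")


cluster = ["coord-1", "coord-2", "coord-3"]
print(pick_replacement(cluster, lambda n: n != "coord-1", failed_id="coord-1"))  # coord-2
```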
The following exemplary description of the embodiments of the present application is provided, but it should be noted that the embodiments of the present application may have the features described below, and the following description should not be construed as limiting the scope of the embodiments of the present application.
The transaction processing method of the distributed database may be executed cooperatively through a coordinator module, a participant module, a consistency protocol module, and a point-to-point communication module.
The coordinator module may receive a commit instruction (i.e., the commit instruction) sent by an application (e.g., an application running on the terminal node) and perform two-phase commit coordination with the participants (i.e., the participant nodes). The specific flow is as follows: after the coordinator (i.e., the coordinator node) receives the commit instruction, it may send a prepare instruction (instructing the participants to execute the preparation operation) to all participants and wait for every participant to reply with a message (i.e., the preparation result). A participant's reply is one of two types: if the preparation operation completes successfully, the reply message type may be "yes"; if the preparation operation fails, the reply message type may be "no".
When the coordinator has collected the participants' replies, it takes the next step according to the message content. If the reply messages of all participants are "yes", the coordinator sends a commit instruction (corresponding to performing the transaction commit operation) to all participants. If any participant replies "no", a rollback instruction (corresponding to performing a transaction rollback operation) is sent to all participants. After receiving the responses of all participants, the coordinator replies to the application with a message (i.e., the response information) indicating that the commit succeeded or failed.
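The two-phase flow of the coordinator module might be sketched as follows; the `send` and `replies` callables stand in for the real communication layer and are assumptions of this illustration, not part of the described modules.

```python
# Hypothetical sketch of the coordinator module's flow: send "prepare" to all
# participants, collect the "yes"/"no" replies, then send "commit" or "rollback",
# and report the outcome that would be returned to the application.
def run_two_phase_commit(participants, send, replies):
    """send(participant, message); replies() yields (participant, 'yes' or 'no') pairs."""
    for p in participants:
        send(p, "prepare")

    votes = dict(replies())                    # e.g. {"p1": "yes", "p2": "no"}
    decision = "commit" if all(v == "yes" for v in votes.values()) else "rollback"

    for p in participants:
        send(p, decision)
    return decision                            # reported back as commit success or failure


log = []
outcome = run_two_phase_commit(
    ["p1", "p2"],
    send=lambda p, m: log.append((p, m)),
    replies=lambda: [("p1", "yes"), ("p2", "yes")],
)
print(outcome)  # commit
```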
The participant module can receive instructions sent by the coordinator and perform the related operations of the two-phase commit protocol. In the first phase, the participant may receive a prepare instruction (for performing the preparation operation) sent by the coordinator. After obtaining the instruction, the participant can lock the resources related to the transaction at that node and perform the related persistence operations. If the preparation operation succeeds, a reply message ("yes") indicating that the preparation was successful is returned to the coordinator; if an exception occurs during the preparation operation, a reply message ("no") indicating that the preparation failed is returned. The participant may persist the reply message before sending it.
Here, a consistency protocol is used among the participant copies, and the participants synchronize through it. Specifically, if the number of preparation results indicating that copies of the participant node successfully completed the preparation operation is greater than the target number, it may be determined that every participant in the participant cluster prepared successfully. In addition, since the coordinator is deployed in the participant cluster, the coordinator may also take part in the consistency-protocol confirmation as part of the participant cluster. Once a certain state of a participant is confirmed by the consistency protocol, the coordinator can broadcast that participant's state as quickly as possible to notify all coordinators; the remaining coordinators likewise broadcast and synchronize each participant's status via the protocol. When a coordinator has obtained the status of all participant clusters, the second phase, commit (corresponding to the transaction commit operation described above), may be entered; alternatively, any coordinator that obtains a participant status of "no" may immediately enter the second phase with a rollback (corresponding to the transaction rollback operation described above).
The consistency protocol module may adopt several types of protocols, including algorithms such as Paxos or Raft. It mainly implements data synchronization inside the participants and provides highly reliable control within each participant cluster. Here, the coordinator may also join the participant status synchronization as part of the participant cluster, but it does not join the election and does not become the master node of the participant cluster carrying the participant's traffic. The advantage is that the coordinator can obtain the participants' state-synchronization information the fastest, and information synchronization between coordinators is also achieved.
The point-to-point communication module is used to realize information synchronization about the multiple participants. Here, the multiple coordinators need to synchronize the status information of the respective participants: a broadcast occurs immediately after a coordinator has acquired a participant's status, and all coordinators receive this status information.
When the information received from all participants is "yes", the information of all participants needs to be collected at the same time. If any coordinator receives a "no" message replied by a participant, it can immediately send a rollback to all participants and synchronize the "no" message to all coordinators through the point-to-point protocol.
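As an illustration of this point-to-point synchronization (not the patented module itself), a coordinator that learns a participant's status rebroadcasts it to its peers, and a single "no" triggers an immediate rollback; the broadcast and rollback callbacks are assumed stand-ins.

```python
# Illustrative only: a coordinator broadcasts any participant status it learns to
# all coordinators; a "no" status causes an immediate rollback to all participants.
class PeerSync:
    def __init__(self, broadcast, send_rollback):
        self.broadcast = broadcast             # delivers a status to every coordinator
        self.send_rollback = send_rollback     # sends a rollback to every participant
        self.statuses = {}

    def on_participant_status(self, participant_id, status):
        self.statuses[participant_id] = status
        self.broadcast(participant_id, status)  # synchronize the status immediately
        if status == "no":
            self.send_rollback()                # a single failure aborts the transaction


events = []
sync = PeerSync(lambda p, s: events.append(("broadcast", p, s)),
                lambda: events.append(("rollback",)))
sync.on_participant_status("participant-1", "no")
print(events)  # [('broadcast', 'participant-1', 'no'), ('rollback',)]
```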
Specifically, as shown in fig. 3A, fig. 3A is a schematic diagram of a deployment mode of a transaction processing method of a distributed database according to an embodiment of the present application. The method mainly comprises the following steps:
First step: the application client (i.e., the terminal) sends a commit instruction (i.e., the commit instruction) to the coordinator's master node.
Second step: after receiving the commit instruction, the coordinator's master node confirms the preparation state through the consistency protocol within the coordinator module cluster; after the confirmation succeeds, it sends the preparation instruction to the participants related to the transaction.
Third step: after receiving the preparation instruction, the participant performs a preparation operation (i.e., the preparation operation described above) for the related transaction. The data to be persisted that each preparation operation generates is synchronized and confirmed among the participants through the consistency protocol.
Fourth step: if the participants successfully complete all preparation operations, the "yes" response message is synchronized to the copies of all participants through the consistency protocol; if the synchronization succeeds, a "yes" reply is sent to the coordinator. If any of the preceding steps is not executed successfully, or the "yes" reply message cannot be synchronized successfully, a "no" reply is returned and a rollback operation (i.e., the transaction rollback operation) is performed.
Fifth step: when the messages received from all participants are "yes", the coordinator synchronizes the commit instruction to all coordinator copies through the consistency protocol. If the synchronization succeeds, the coordinator sends the commit instruction to the master node of every participant. If any participant's reply message is "no", or the wait times out, or synchronizing the commit instruction fails, a rollback instruction is sent to all participants.
Sixth step: after receiving the commit instruction, all participants release the resource locks, mark the records as committed, and reply with a commit-success message. If a rollback instruction is received, the rollback operation is performed, the persisted data is deleted, the resource locks are released, and a rollback-success acknowledgement is returned.
Finally, after the coordinator receives commit-success replies from all participants, it replies to the application with a commit-success response; otherwise it replies with a commit-failure message.
With continued reference to fig. 3B, fig. 3B is a flowchart of the deployment mode of fig. 3A according to an embodiment of the present application. Here, the software modules include a coordinator cluster built with a consistency protocol and participant modules built with a consistency module; the coordinator cluster and the participant clusters are connected over a network. The main processing flow is shown in fig. 3B and proceeds as follows:
First, the coordinator's master node (i.e., the coordinator node described above) interfaces with the application (i.e., the application running on the terminal). When the distributed transaction reaches the commit phase, the application sends a commit instruction (i.e., the commit instruction described above) to the coordinator's master node.
Then, after receiving the commit instruction, the coordinator cluster sends a prepare instruction to all participants (i.e., the participant nodes described above) and starts timing. During the preparation state, the reply messages received from all participants are promptly synchronized to the other coordinator copies through the consistency protocol.
Next, when a participant receives the prepare instruction, the resources at that participant are locked and the persistence operations (i.e., the preparation operation described above) are performed. All persisted data is synchronized to the copies of all participant clusters through the consistency protocol. After all preparation operations have been executed, a preparation-success message is sent; this message must itself be persisted and synchronized to all of the participant's copies before it is sent. If any preparation operation fails, or the data synchronization cannot complete successfully, preparation-failure information must be sent to the coordinator; before the preparation-failure information is sent, the reply message is synchronized and the rollback operation is performed.
Then, if the coordinator does not receive the replies of all participants within a specified time, or receives information that any participant cluster failed to prepare, a rollback instruction is sent to all participant clusters; otherwise a commit instruction is sent. The instruction state is synchronized to all copies in the cluster by the consistency protocol before the relevant instruction is sent.
Next, after receiving the commit instruction, all participant clusters release all locked resources, change the state of all relevant records to the committed state, and synchronize among all participants through the consistency protocol. After successful synchronization, a commit-success reply is returned to the coordinator.
Finally, if the coordinator receives completion-success replies from all participant clusters within the specified time, it replies success to the application; otherwise, it replies failure to the application.
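A sketch of the timed wait used by the coordinator in the flow above, under the assumption that participant replies arrive through a queue with a bounded wait; the queue-based transport is an assumption of this illustration.

```python
# Illustrative timeout handling only: wait a bounded time for each participant
# cluster's reply; on timeout or any reported failure, fall back to rollback.
import queue


def collect_decision(reply_queue, expected_clusters, timeout_s=5.0):
    remaining = set(expected_clusters)
    try:
        while remaining:
            cluster_id, ok = reply_queue.get(timeout=timeout_s)
            if not ok:
                return "rollback"              # any preparation failure aborts
            remaining.discard(cluster_id)
    except queue.Empty:
        return "rollback"                      # some replies missed the deadline
    return "commit"


q = queue.Queue()
q.put(("cluster-A", True))
q.put(("cluster-B", True))
print(collect_decision(q, ["cluster-A", "cluster-B"], timeout_s=0.1))  # commit
```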
With continued reference to fig. 3C, fig. 3C is a schematic diagram illustrating a deployment mode of a transaction processing method of another distributed database according to an embodiment of the present application.
Here, the coordinator performs data-consistency synchronization inside the participant cluster as part of that cluster. By the nature of the consistency protocol, the participant's status can be acquired immediately. Once any coordinator obtains the status of a certain participant cluster, that status can be synchronized to all coordinators through a point-to-point protocol. Once a node in the coordinator cluster has acquired the states of all participants, the second phase can be entered, and a commit instruction (corresponding to the transaction commit operation) or a rollback instruction (corresponding to the transaction rollback operation) is sent to all participants or coordinators. As a result, message exchange between the protocol participants and the coordinators is faster, and the message-loss and timeout problems caused by node downtime are reduced.
The method comprises the following steps:
In the first step, the application sends a commit instruction (i.e., the commit instruction described above) to all coordinators through load balancing.
In a second step, any coordinator that receives a commit instruction (i.e., the coordinator node) synchronizes the commit instruction to all coordinators and simultaneously sends the commit instruction to all participants (i.e., the participant nodes).
Third, after receiving the preparation command, all the participants execute the preparation operation (i.e. the preparation operation described above), and synchronize the state of the preparation to the coordinator through the consistency protocol. Once the coordinator in the participant cluster determines that the preparation job was successful, the state of the participant cluster is synchronized to all coordinators.
Fourth, the first coordinator that has gathered preparation-success messages from all coordinators can send the commit instruction to all participants. If any coordinator gathers that a participant's status is a preparation failure, a rollback instruction is sent immediately, and that state is synchronized to all coordinators.
Fifth, after the commit instruction is sent to the participants, the two-phase protocol enters the second phase. All coordinators synchronize with one another after obtaining the commit-success messages of the participants; once the first coordinator appears that has obtained the commit-success messages of all participants, a commit-success message can be returned to the application.
Sixth, regarding abnormal-condition handling: if any participant goes down or encounters similar problems, the highly reliable consistency protocol ensures that the persisted data is not lost and that a new master is elected in time to take over the participant cluster's transactions. Because the coordinator is deployed on all participants, the remaining coordinators in the participant cluster take over the business when a downtime occurs and synchronize the messages of the two phases so that the two-phase protocol can be completed.
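A sketch of the race described in this deployment mode, assuming each coordinator independently accumulates the per-cluster preparation states and the first one to hold a complete set issues the commit; the class and callback names are illustrative.

```python
# Illustrative only: every coordinator tracks which participant clusters reported a
# successful preparation; the first coordinator with a complete set issues the
# commit, while any reported failure triggers an immediate rollback.
class ClusterTracker:
    def __init__(self, clusters, send_commit, send_rollback):
        self.pending = set(clusters)
        self.send_commit = send_commit
        self.send_rollback = send_rollback
        self.decided = False

    def on_cluster_state(self, cluster_id, prepared_ok):
        if self.decided:
            return
        if not prepared_ok:
            self.decided = True
            self.send_rollback()               # any failed cluster aborts at once
            return
        self.pending.discard(cluster_id)
        if not self.pending:
            self.decided = True
            self.send_commit()                 # first coordinator with all states commits


actions = []
tracker = ClusterTracker(["A", "B"],
                         lambda: actions.append("commit"),
                         lambda: actions.append("rollback"))
tracker.on_cluster_state("A", True)
tracker.on_cluster_state("B", True)
print(actions)  # ['commit']
```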
Referring to fig. 3D, fig. 3D is a flowchart of the deployment mode of fig. 3C according to an embodiment of the present application. Here, the software modules include participant clusters formed from participant modules built with the consistency module. Each participant module integrates a coordinator, and the coordinators form a coordinator cluster using a point-to-point communication protocol or a broadcast protocol. Data in the coordinator cluster is likewise synchronized using a consistency protocol. The main processing flow includes:
First, the application sends a commit instruction (i.e., the commit instruction) to any coordinator module (i.e., the coordinator node) through load balancing. When the coordinator module receives the commit instruction, it immediately sends a preparation instruction to the master participant of every participant cluster, and the state is synchronized within the coordinator cluster through the consistency protocol.
Then, after receiving the preparation instruction, all participant modules perform the preparation operation (i.e., the preparation operation described above). All operations require data synchronization; that is, the state data are synchronized into all participant copies through the consistency protocol, so that the coordinator module can promptly judge the transaction execution state of the current participant cluster.
Next, when the coordinator in a participant cluster determines that the sub-transaction preparation succeeded, the message is synchronized to all coordinator modules through the consistency protocol and the point-to-point communication protocol.
Then, when the first coordinator appears that has collected preparation-success states from all participant modules, a commit instruction is immediately sent to all participants and the state is synchronized into all coordinator modules using the consistency protocol.
In addition, when preparation fails in any participant cluster, the coordinator in that cluster can immediately send a rollback instruction to all participant clusters and synchronize the state to all coordinator modules in the cluster through the consistency protocol.
When the commit instruction has been executed in all participant clusters, the coordinator module can promptly sense the operation state through the consistency protocol; if the execution succeeds, this is synchronized to all coordinators, and if it does not succeed, the coordinator synchronizes the execution-failure message to all coordinators.
When the first coordinator appears that has collected commit-success messages from all participants, a commit-success message is sent to the application; otherwise, if any participant's execution is unsuccessful, a commit-failure message is sent to the application.
It should be noted that, in addition to the above descriptions, the present embodiment may further include the technical features described in the above embodiments, so as to achieve the technical effects of the transaction processing method of the distributed database shown above, and the detailed description is referred to above, and is omitted herein for brevity.
The method organically combines the two-phase commit protocol with technologies such as the consistency protocol and point-to-point communication, solves the data-inconsistency problems caused by timeouts, downtime, and similar failures during two-phase commit, and improves the success rate of two-phase commit. The high reliability and high availability of each component in the two-phase commit process are effectively guaranteed.
Fig. 4 is a schematic structural diagram of a transaction processing device of a distributed database according to an embodiment of the present application. The apparatus specifically includes:
a first sending unit 401, configured to receive a commit instruction sent by a terminal node, where the coordinator node is configured to schedule a corresponding participant node cluster, and the coordinator node is deployed in the corresponding participant node cluster;
A second sending unit 402, configured to send the commit instruction to each participant node in the participant node cluster by using a coordinator node;
a first execution unit 403, configured to, in response to the commit instruction, perform a preparation operation by each of the participant nodes, so as to generate a corresponding preparation result, where the preparation result indicates whether the copy of the participant node successfully completes the preparation operation;
and the second execution unit 404 is configured to execute a corresponding transaction operation in response to the generated preparation result meeting a preset condition, where the preset condition is used to determine a processing state of the transaction.
In one possible implementation manner, the executing the corresponding transaction operation in response to the generated preparation result meeting the preset condition includes:
in response to generating a preparation result indicating that the copy of the participant node did not successfully complete the preparation operation, performing a transaction rollback operation; or
in response to the number of generated target preparation results being greater than a target number, performing a transaction commit operation, wherein the target preparation result indicates that the copy of the participant node successfully completed the preparation operation.
In one possible implementation manner, the coordinator node receiving a commit instruction sent by the terminal node includes:
a coordinator node in a coordinator node cluster receives a commit instruction sent by a terminal node;
the coordinator node sending the commit instruction to each participant node in a cluster of participant nodes, comprising:
the coordinator node in the coordinator node cluster that received the commit instruction sends the commit instruction to each participant node in each participant node cluster and to the other coordinators in the coordinator node cluster except itself; and
the executing of a corresponding transaction operation in response to the generated preparation result meeting a preset condition includes:
responsive to the participant node generating a corresponding preparation result, the participant node sending the preparation result to each coordinator node in the coordinator node cluster;
In response to a single coordinator node in the coordinator node cluster receiving a target preparation result sent by each participant node in the respective participant node cluster, the single coordinator node performs a transaction commit operation, wherein the target preparation result indicates that a copy of a participant node successfully completed the preparation operation.
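The per-coordinator aggregation of target preparation results described above could look roughly like the following Go sketch; it assumes each coordinator independently tracks the (cluster, participant) pairs that still have to report a successful prepare, and the identifiers PrepareAggregator and OnPrepareResult are assumptions made for illustration.

package coordcluster

type key struct{ cluster, participant string }

// PrepareAggregator is kept by each coordinator node in the coordinator cluster.
type PrepareAggregator struct {
	pending map[key]bool // participants still expected to report a successful prepare
	commit  func()       // invoked exactly once when all prepares have succeeded
	done    bool
}

// NewPrepareAggregator registers every participant in every participant cluster.
func NewPrepareAggregator(clusters map[string][]string, commit func()) *PrepareAggregator {
	a := &PrepareAggregator{pending: map[key]bool{}, commit: commit}
	for cluster, participants := range clusters {
		for _, p := range participants {
			a.pending[key{cluster, p}] = true
		}
	}
	return a
}

// OnPrepareResult is called for every preparation result this coordinator
// receives; participants broadcast their results to every coordinator node.
func (a *PrepareAggregator) OnPrepareResult(cluster, participant string, ok bool) {
	if a.done || !ok {
		return // failed prepares are handled by the rollback path
	}
	delete(a.pending, key{cluster, participant})
	if len(a.pending) == 0 {
		a.done = true
		a.commit() // this coordinator performs the transaction commit operation
	}
}

Because every coordinator receives every preparation result, any single coordinator can reach the commit decision on its own, which is what allows the cluster to tolerate the loss of individual coordinators.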
In one possible implementation, after the sending of the preparation result to each coordinator node in the coordinator node cluster, the apparatus further includes:
a third sending unit (not shown in the figure), configured to, in response to a single coordinator node in the coordinator node cluster receiving the preparation result sent by each participant node in a single participant node cluster, cause the single coordinator node to send the received preparation result to each participant node in each participant node cluster.
In one possible implementation manner, the coordinator node in the coordinator node cluster receives a commit instruction sent by the terminal node, including:
determining, by using a load balancing algorithm, a coordinator node in the coordinator node cluster to receive the commit instruction sent by the terminal node; and
receiving the commit instruction by the determined coordinator node.
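The load balancing algorithm is not specified in the patent; as one hedged example, a simple round-robin selector over the coordinator node cluster could be used to decide which coordinator receives the commit instruction from the terminal node. The RoundRobin type below is purely illustrative.

package coordselect

import "sync/atomic"

// RoundRobin cycles through the coordinator nodes in the coordinator cluster.
type RoundRobin struct {
	coordinators []string
	next         atomic.Uint64
}

func NewRoundRobin(coordinators []string) *RoundRobin {
	return &RoundRobin{coordinators: coordinators}
}

// Pick returns the coordinator node that will receive the next commit
// instruction sent by a terminal node.
func (r *RoundRobin) Pick() string {
	n := r.next.Add(1)
	return r.coordinators[int(n-1)%len(r.coordinators)]
}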
In one possible embodiment, the apparatus further comprises:
and a third executing unit (not shown in the figure), configured to, in response to an abnormality occurring in a coordinator node in the coordinator node cluster, use a new coordinator node in the coordinator node cluster to replace the abnormal coordinator node to perform the operation.
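A rough sketch of this replacement step, assuming the cluster keeps a health flag per coordinator; the Coordinator type and Replace function are hypothetical names introduced only for this illustration. In practice the consistency protocol among coordinators would be used to agree on the replacement.

package coordfailover

type Coordinator struct {
	ID      string
	Healthy bool
}

// Replace returns a healthy coordinator from the cluster to take over the
// work of the abnormal one, or false if none is available.
func Replace(cluster []Coordinator, failedID string) (Coordinator, bool) {
	for _, c := range cluster {
		if c.ID != failedID && c.Healthy {
			return c, true
		}
	}
	return Coordinator{}, false
}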
In one possible implementation, after the performing the corresponding transaction operation, the apparatus further includes:
a fourth sending unit (not shown in the figure), configured to send, to the terminal node, response information indicating whether the commit instruction is successfully executed.
The transaction processing device of the distributed database provided in this embodiment may be the transaction processing device of the distributed database shown in Fig. 4, and may perform all the steps of the transaction processing method of the distributed database described above, thereby achieving the corresponding technical effects; for details, reference is made to the related description above, which is not repeated here for brevity.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device 500 shown in Fig. 5 includes: at least one processor 501, a memory 502, at least one network interface 504, and other user interfaces 503. The various components in the electronic device 500 are coupled together by a bus system 505. It can be understood that the bus system 505 is used to enable connection and communication between these components. In addition to a data bus, the bus system 505 includes a power bus, a control bus, and a status signal bus. However, for clarity of illustration, the various buses are all labeled as the bus system 505 in Fig. 5.
The user interface 503 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, a trackball, a touch pad, or a touch screen, etc.).
It is to be appreciated that the memory 502 in the embodiments of the present application may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which is used as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 502 described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some implementations, the memory 502 stores the following elements, executable units or data structures, or a subset thereof, or an extended set thereof: an operating system 5021 and application programs 5022.
The operating system 5021 includes various system programs, such as a framework layer, a core library layer, and a driver layer, for implementing various basic services and processing hardware-based tasks. The application 5022 includes various application programs, such as a media player and a browser, for implementing various application services. A program for implementing the method of the embodiments of the present application may be included in the application 5022.
In this embodiment, the processor 501 is configured to execute the method steps provided in the method embodiments by calling a program or an instruction stored in the memory 502, specifically, a program or an instruction stored in the application 5022, for example, including:
the coordinator node receives a commit instruction sent by the terminal node, wherein the coordinator node is used for scheduling the corresponding participant node cluster, and the coordinator node is deployed in the corresponding participant node cluster;
the coordinator node sends the commit instruction to each participant node in the participant node cluster;
In response to the commit instruction, each participant node performs a preparation operation to generate a corresponding preparation result, wherein the preparation result indicates whether the copy of the participant node successfully completes the preparation operation;
and executing corresponding transaction operation in response to the generated preparation result meeting a preset condition, wherein the preset condition is used for determining the processing state of the transaction.
The method disclosed in the embodiments of the present application may be applied to the processor 501 or implemented by the processor 501. The processor 501 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be completed by an integrated logic circuit of hardware in the processor 501 or by instructions in the form of software. The processor 501 may be a general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be embodied as being directly executed by a hardware decoding processor, or executed by a combination of hardware and software units in a decoding processor. The software unit may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 502, and the processor 501 reads information from the memory 502 and completes the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing units may be implemented within one or more application specific integrated circuits (Application Specific Integrated Circuit, ASIC), digital signal processors (Digital Signal Processor, DSP), digital signal processing devices (Digital Signal Processing Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field programmable gate arrays (Field-Programmable Gate Array, FPGA), general purpose processors, controllers, microcontrollers, microprocessors, other electronic units configured to perform the functions described in the present application, or a combination thereof.
For a software implementation, the techniques described herein may be implemented by means of units that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
The electronic device provided in this embodiment may be an electronic device as shown in fig. 5, and may perform all the steps of the above-described transaction processing method of each distributed database, so as to achieve the technical effects of the above-described transaction processing method of each distributed database, and specific reference should be made to the above-described related description, which is omitted herein for brevity.
The embodiment of the present application also provides a storage medium (a computer readable storage medium). The storage medium stores one or more programs. The storage medium may include a volatile memory, such as a random access memory; it may also include a nonvolatile memory, such as a read-only memory, a flash memory, a hard disk, or a solid state disk; it may also include a combination of the above types of memory.
When the one or more programs in the storage medium are executed by one or more processors, the transaction processing method of the distributed database executed on the electronic device side is implemented.
The above processor is configured to execute a transaction program of a distributed database stored in a memory, so as to implement the following steps of a transaction method of the distributed database executed on an electronic device side:
the coordinator node receives a commit instruction sent by the terminal node, wherein the coordinator node is used for scheduling the corresponding participant node cluster, and the coordinator node is deployed in the corresponding participant node cluster;
the coordinator node sends the commit instruction to each participant node in the participant node cluster;
In response to the commit instruction, each participant node performs a preparation operation to generate a corresponding preparation result, wherein the preparation result indicates whether the copy of the participant node successfully completes the preparation operation;
and executing corresponding transaction operation in response to the generated preparation result meeting a preset condition, wherein the preset condition is used for determining the processing state of the transaction.
Those of skill in the art would further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate the interchangeability of hardware and software, the various illustrative units and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. The software module may be disposed in a Random Access Memory (RAM), a memory, a Read-Only Memory (ROM), an electrically programmable ROM, an electrically erasable programmable ROM, a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing embodiments have been provided for the purpose of illustrating the general principles of the present application and are not intended to limit the scope of the invention. Moreover, while the various embodiments described above have been presented as a series of acts for simplicity of explanation, those skilled in the art will appreciate that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are alternative embodiments, and that the acts and modules referred to are not necessarily required by the present invention.

Claims (10)

1. A method of transaction processing for a distributed database, the method comprising:
the coordinator node receives a commit instruction sent by the terminal node, wherein the coordinator node is used for scheduling the corresponding participant node cluster, and the coordinator node is deployed in the corresponding participant node cluster;
The coordinator node sends the commit instruction to each participant node in the participant node cluster;
in response to the commit instruction, each participant node performs a preparation operation to generate a corresponding preparation result, wherein the preparation result indicates whether the copy of the participant node successfully completes the preparation operation;
and executing corresponding transaction operation in response to the generated preparation result meeting a preset condition, wherein the preset condition is used for determining the processing state of the transaction.
2. The method of claim 1, wherein the performing the corresponding transaction in response to the generated preparation result satisfying a preset condition comprises:
in response to generating a preparation result indicating that the copy of the participant node did not successfully complete the preparation operation, performing a transaction rollback operation; or
in response to the number of generated target preparation results being greater than a target number, performing a transaction commit operation, wherein the target preparation results indicate that the copy of the participant node successfully completed the preparation operation.
3. The method of claim 1, wherein the coordinator node receives a commit instruction sent by a terminal node, comprising:
A coordinator node in a coordinator node cluster receives a commit instruction sent by a terminal node;
the coordinator node sending the commit instruction to each participant node in a cluster of participant nodes, comprising:
the coordinator node in the coordinator node cluster that received the commit instruction sends the commit instruction to each participant node in each participant node cluster and to the other coordinators in the coordinator node cluster except itself; and
the executing of a corresponding transaction operation in response to the generated preparation result meeting a preset condition comprises:
responsive to the participant node generating a corresponding preparation result, the participant node sending the preparation result to each coordinator node in the coordinator node cluster;
in response to a single coordinator node in the coordinator node cluster receiving a target preparation result sent by each participant node in the respective participant node cluster, the single coordinator node performs a transaction commit operation, wherein the target preparation result indicates that a copy of a participant node successfully completed the preparation operation.
4. A method according to claim 3, wherein after said sending of the preparation result to each coordinator node in the coordinator node cluster, the method further comprises:
in response to a single coordinator node in the coordinator node cluster receiving the preparation result sent by each participant node in a single participant node cluster, the single coordinator node sends the received preparation result to each participant node in each participant node cluster.
5. A method according to claim 3, wherein the coordinator node in the coordinator node cluster receives a commit instruction sent by a terminal node, comprising:
determining, by using a load balancing algorithm, a coordinator node in the coordinator node cluster to receive the commit instruction sent by the terminal node; and
receiving the commit instruction by the determined coordinator node.
6. A method according to claim 3, characterized in that the method further comprises:
in response to an abnormality occurring in a coordinator node in the coordinator node cluster, using a new coordinator node in the coordinator node cluster to replace the abnormal coordinator node to perform the operation.
7. The method according to one of claims 1 to 6, wherein after said performing the corresponding transaction operation, the method further comprises:
and sending response information for indicating whether the commit instruction is successfully executed or not to the terminal node.
8. A transaction processing apparatus for a distributed database, the apparatus comprising:
a first sending unit, configured to enable a coordinator node to receive a commit instruction sent by a terminal node, wherein the coordinator node is used for scheduling a corresponding participant node cluster, and the coordinator node is deployed in the corresponding participant node cluster;
a second sending unit, configured to send the commit instruction to each participant node in the participant node cluster by using the coordinator node;
the first execution unit is used for responding to the commit instruction, and each participant node respectively executes a preparation operation to generate a corresponding preparation result, wherein the preparation result indicates whether the copy of the participant node successfully completes the preparation operation;
and the second execution unit is used for responding to the generated preparation result to meet the preset condition, and executing the corresponding transaction operation, wherein the preset condition is used for determining the processing state of the transaction.
9. An electronic device, comprising:
a memory for storing a computer program;
a processor for executing a computer program stored in said memory, and which, when executed, implements the method of any of the preceding claims 1-7.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method of any of the preceding claims 1-7.
CN202211660353.5A 2022-12-21 2022-12-21 Transaction processing method and device of distributed database, electronic equipment and storage medium Pending CN116107706A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211660353.5A CN116107706A (en) 2022-12-21 2022-12-21 Transaction processing method and device of distributed database, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211660353.5A CN116107706A (en) 2022-12-21 2022-12-21 Transaction processing method and device of distributed database, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116107706A true CN116107706A (en) 2023-05-12

Family

ID=86258943

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211660353.5A Pending CN116107706A (en) 2022-12-21 2022-12-21 Transaction processing method and device of distributed database, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116107706A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118245553A (en) * 2024-05-23 2024-06-25 成都茗匠科技有限公司 Method for implementing two-stage submitting 2PC distributed transaction by using relational database


Similar Documents

Publication Publication Date Title
CN111258822B (en) Data processing method, server, and computer-readable storage medium
US8140623B2 (en) Non-blocking commit protocol systems and methods
WO2018103318A1 (en) Distributed transaction handling method and system
JP4976661B2 (en) CHEAPAXOS
CN111368002A (en) Data processing method, system, computer equipment and storage medium
US6823355B1 (en) Synchronous replication of transactions in a distributed system
JP2005196763A (en) Simplified paxos
US20050149609A1 (en) Conflict fast consensus
US6823356B1 (en) Method, system and program products for serializing replicated transactions of a distributed computing environment
JP2007518195A (en) Cluster database using remote data mirroring
CN110413687B (en) Distributed transaction fault processing method and related equipment based on node interaction verification
CN115550384B (en) Cluster data synchronization method, device and equipment and computer readable storage medium
CN112527759B (en) Log execution method and device, computer equipment and storage medium
CN110635941A (en) Database node cluster fault migration method and device
US6873987B1 (en) Method, system and program products for recovering from failures within a shared nothing distributed computing environment
CN115794499B (en) Method and system for dual-activity replication data among distributed block storage clusters
CN116107706A (en) Transaction processing method and device of distributed database, electronic equipment and storage medium
CN112596801B (en) Transaction processing method, device, equipment, storage medium and database
van Renesse et al. Replication techniques for availability
CN116232893A (en) Consensus method and device of distributed system, electronic equipment and storage medium
CN101329670B (en) Method and system for keep consistency of data under copy database environment
EP2009557A1 (en) Method and device for data processing and system comprising such device
CN115145715A (en) Distributed transaction processing method, system and related equipment
CN112511359B (en) Method, system and computer readable storage medium for configuration change in service system
CN109582288B (en) Method, system and storage medium for producing configuration reflux

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination