CN112882870A - Restoration method and device of distributed database system and computer readable storage medium

Info

Publication number: CN112882870A
Application number: CN202110312905.2A
Applicant/Assignee: China Unionpay Co Ltd
Inventors: 周家晶, 苗浩, 韩韬
Original language: Chinese (zh)
Legal status: Pending

Classifications

    • G06F11/1448: Management of the data involved in backup or backup restore (under G06F11/1446, point-in-time backing up or restoration of persistent data)
    • G06F11/1464: Management of the backup or restore process for networked environments
    • G06F16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; distributed database system architectures therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a restoration method and device for a distributed database system and a computer-readable storage medium. The method comprises the following steps: acquiring a restore instruction, where the restore instruction instructs each data node of the distributed database system to be restored to a target time point; acquiring the full backup data and incremental logs of each data node according to the target time point; determining a trusted time period and an untrusted time period of the target time point; taking the full backup data of each data node as the restore starting point, and rolling back or executing, according to the incremental log of each data node, the transactions whose commit each data node initiated within the trusted time period; and matching the transactions to be matched based on a preset matching policy. With this method, data can be backed up without blocking global transactions, consistency can be restored to any time point within the backup period, and dividing time into a trusted period and an untrusted period significantly reduces the computational cost of consistent restoration.

Description

Restoration method and device of distributed database system and computer readable storage medium
Cross Reference to Related Applications
This application claims priority from Chinese patent application No. CN2020108596225, filed on August 24, 2020 and entitled "a method, apparatus and computer readable storage medium for restoring a distributed database system," the entire contents of which are incorporated herein by reference.
Technical Field
The invention belongs to the technical field of distributed database systems, and in particular relates to a restoration method and device for a distributed database system and a computer-readable storage medium.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
A distributed database stores data across multiple nodes (generally called data nodes), so the transaction information of a global transaction may be spread over different data nodes. In a distributed database scenario it is therefore difficult to restore data stored in this distributed manner to a single time point simultaneously: because the actual commit times of a global transaction at the individual data nodes differ, restoring each data node independently to some time point may restore only part of a global transaction. For example, for a global transaction that operates on node A and node B, restoring the data nodes independently may recover only the operation on A or only the operation on B, creating an inconsistency.
In the prior art, consistent restoration of a distributed database is generally achieved by obtaining a consistent snapshot through a GTM (Global Transaction Manager) component. However, an implementation relying on the GTM component is complex, the GTM may degrade the processing performance of the distributed system, and a GTM-based approach can only take full snapshot backups of a consistent restore point; it does not support incremental backup. In other words, distributed databases have long lacked a practical consistency-restore mechanism.
Disclosure of Invention
In view of the problems in the prior art, a restoration method and device for a distributed database system and a computer-readable storage medium are provided.
The present invention provides the following.
In a first aspect, a restoration method for a distributed database system is provided, where the distributed database system includes a coordinator node and a plurality of data nodes, and the method includes: acquiring a restore instruction, where the restore instruction instructs each data node of the distributed database system to be restored to a target time point; acquiring the full backup data and incremental logs of each data node according to the target time point; determining a trusted time period and an untrusted time period of the target time point; taking the full backup data of each data node as the restore starting point, and rolling back or executing, according to the incremental log of each data node, the transactions whose commit each data node initiated within the trusted time period; and matching the transactions to be matched based on a preset matching policy, where the transactions to be matched are the transactions whose commit each data node initiated within the untrusted time period.
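For orientation, the division of labor between the last two steps can be sketched in Python; the data shapes and names below are invented for illustration and are not part of the application:

```python
from typing import List, Tuple

LogRecord = Tuple[float, str, str]  # (timestamp, txn_id, event)

def split_log(delta_log: List[LogRecord],
              t1: float) -> Tuple[List[LogRecord], List[LogRecord]]:
    """Partition one node's incremental log at the trust divergence point t1:
    records before t1 can be replayed directly (fourth step), while records
    from t1 on belong to transactions that must first be matched (fifth step)."""
    trusted = [r for r in delta_log if r[0] < t1]
    to_match = [r for r in delta_log if r[0] >= t1]
    return trusted, to_match

# Toy usage: transaction "a" commits inside the trusted period and replays
# directly; "b" only prepares after t1 and is deferred to the matching step.
log = [(10.0, "a", "prepare"), (10.2, "a", "commit"), (11.9, "b", "prepare")]
trusted, to_match = split_log(log, t1=11.5)
assert [r[1] for r in trusted] == ["a", "a"]
assert to_match == [(11.9, "b", "prepare")]
```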
In one possible embodiment, determining the trusted time period and the untrusted time period of the target time point comprises: determining the transaction commit durations of a plurality of distributed transactions according to the incremental logs of the data nodes; recording, into a distributed-transaction elapsed-time table, the transaction start-commit time point and transaction finish-commit time point of every transaction whose commit duration exceeds a preset value; determining a first time point according to the elapsed-time table, where the first time point is the trust divergence time point of the target time point; and dividing the trusted and untrusted time periods of the target time point according to the target time point and the first time point.
In one possible implementation, determining the first time point according to the distributed-transaction elapsed-time table includes: searching the table for distributed transactions meeting a first preset condition, namely that the transaction start-commit time point is before t - s and the corresponding transaction finish-commit time is after t, where t is the target time point and s is the preset value; and determining the first time point as the earliest transaction start-commit time among the distributed transactions meeting the first preset condition, or a time point before it.
In a possible implementation, determining the first time point according to the elapsed-time table further includes: if no distributed transaction meeting the first preset condition is found in the table, determining the first time point as t - s or any time point before t - s, where t is the target time point and s is the preset duration.
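Combining the two embodiments above, the first time point can be computed in a few lines; the following Python sketch is illustrative only (names invented here). The same routine, applied to t1 in place of t, also yields the second time point t2 introduced later:

```python
from typing import Iterable, Tuple

Span = Tuple[str, float, float]  # (txn_id, start_commit, finish_commit)

def trust_divergence_point(elapsed_table: Iterable[Span],
                           t: float, s: float) -> float:
    """First time point t1 of target t: the earliest start-commit time among
    recorded transactions that began committing before t - s but finished
    after t; when none is recorded, t - s itself is a valid divergence point."""
    candidates = [start for _txid, start, finish in elapsed_table
                  if start < t - s and finish > t]
    return min(candidates) if candidates else t - s

# Toy usage with s = 0.5: "x" straddles the target point t = 100.0, so the
# divergence point is pulled back to its start-commit time.
table = [("x", 99.2, 100.2), ("y", 99.8, 100.1)]
assert trust_divergence_point(table, t=100.0, s=0.5) == 99.2
assert trust_divergence_point([], t=100.0, s=0.5) == 99.5
```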
In one possible implementation, determining the transaction commit durations of a plurality of distributed transactions from the incremental log of each data node includes: if the distributed transaction consists of a global transaction and a plurality of two-phase branch transactions, computing through the coordinator node, as the commit duration of the distributed transaction, the time difference between the first prepare record of the branch transactions and the log record of the global transaction; or, if the distributed transaction is executed by general participants and a last participant, where the general participants use a two-phase commit protocol and the last participant uses a one-phase commit protocol, computing through the coordinator node, as the commit duration, the time difference from the first prepare record of the general participants to the last participant's commit response.
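A compact way to picture the two alternatives is to treat the coordinator's view of one distributed transaction as a list of timestamped events and measure from the first prepare record to whichever end event applies. The following sketch is illustrative only; the event names are invented here:

```python
from typing import List, Tuple

Event = Tuple[float, str, str]  # (timestamp, participant, event_type)

def commit_duration(events: List[Event]) -> float:
    """Time from the first general participant's 'prepare' record to the end
    event: the global-log record when no last participant is used, or the
    last participant's commit response under the last-participant policy."""
    first_prepare = min(ts for ts, _p, ev in events if ev == "prepare")
    end = max(ts for ts, _p, ev in events
              if ev in ("global_log", "last_commit_response"))
    return end - first_prepare

# Last-participant case as in FIG. 2: general participants prepare at 1.0
# and 1.1; the last participant's commit response is observed at 1.4.
evts = [(1.0, "A", "prepare"), (1.1, "B", "prepare"),
        (1.4, "last", "last_commit_response")]
assert abs(commit_duration(evts) - 0.4) < 1e-9
```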
In a possible implementation, matching the transactions to be matched based on the preset matching policy includes: if a first transaction among the transactions to be matched is a non-distributed transaction, replaying the first transaction; if all branch transactions corresponding to a second transaction among the transactions to be matched were persisted within the trusted time period of the target time point, replaying the second transaction; if both the global log and all branch logs corresponding to a third transaction among the transactions to be matched exist, replaying the third transaction; and not replaying the remaining transactions to be matched.
In a possible implementation, the method further comprises a step of determining the third transaction from the transactions to be matched, which includes: determining a second time point according to the elapsed-time table, where the second time point is the trust divergence time point of the first time point; judging whether the global log and all branch logs corresponding to any one or more distributed transactions exist in each data node's incremental logs between the second time point and the target time point; and if so, determining those distributed transactions to be third transactions.
In one possible embodiment, determining the second time point according to the elapsed-time table includes: searching the table for second distributed transactions, namely transactions whose start-commit time point is before t1 - s and whose finish-commit time is after t1, and determining the second time point as the earliest transaction start-commit time among them, or a time point before it; and if no second distributed transaction is found in the table, determining the second time point as t1 - s or a time point before t1 - s, where t1 is the first time point and s is the preset value.
In a possible implementation, determining the second time point according to the elapsed-time table further includes: replacing t with t - a when determining the first time point; and/or replacing t1 with t1 - a when determining the second time point; where t is the target time point, a is a preset time deviation, and t1 is the first time point.
In one possible embodiment, the method further comprises: in the matching process, two or more transactions having a dependency relationship are subjected to constraint processing.
In one possible implementation, before restoring the distributed database system, the method further includes: constructing a full backup of each data node to obtain full backup data; and acquiring the incremental logs of the data nodes.
In one possible embodiment, constructing a full backup of each data node comprises: locking each data node after it completes all two-phase transactions of the distributed transactions currently being processed, and taking the full backup of each data node after locking.
In one possible embodiment, constructing a full backup of each data node further comprises: taking a first full backup of each data node at a first full-backup time point; determining the untrusted time period of the first full-backup time point, and acquiring a first incremental-log set of each data node within that untrusted time period; analyzing the first incremental-log set to obtain pending transactions that are in the prepared state and neither committed nor rolled back; and replaying the pending transactions on the data obtained from the first full backup, thereby constructing the full backup of each data node.
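As a sketch of the analysis step, under the assumption (made here for illustration, not in the application) that the incremental-log slice can be reduced to (transaction id, event) pairs:

```python
from typing import Iterable, List, Tuple

def pending_transactions(log_slice: Iterable[Tuple[str, str]]) -> List[str]:
    """Scan a delta-log slice taken over the untrusted period of the backup
    point and return transactions that reached 'prepare' but were neither
    committed nor rolled back -- the pending transactions described above."""
    state = {}
    for txn_id, event in log_slice:
        if event == "prepare":
            state.setdefault(txn_id, "prepared")
        elif event in ("commit", "rollback"):
            state[txn_id] = event
    return [txn for txn, st in state.items() if st == "prepared"]

# Toy usage: "p" stays prepared, "q" commits, so only "p" is pending.
slice_ = [("p", "prepare"), ("q", "prepare"), ("q", "commit")]
assert pending_transactions(slice_) == ["p"]
```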
In one possible embodiment, constructing a full backup of each data node comprises: taking a second full backup of each data node at a second full-backup time point; determining the untrusted time period of the second full-backup time point, and acquiring a second incremental-log set covering the span from the starting time point of that untrusted time period to any time point after every data node has completed the second full backup; and performing fault-tolerant replay of the second incremental-log set on the data obtained from the second full backup.
In one possible embodiment, the method further comprises: determining the trust divergence time point of the full-backup time point, where every transaction in the prepared state before the trust divergence time point can be confirmed, from the incremental log before the full-backup time point, to have been committed or rolled back; and determining the untrusted time period of the full-backup time point according to its trust divergence time point.
In one possible implementation, determining the first time point according to the elapsed-time table includes: determining a coordinator time t' = t - m from a preset time deviation m and the target time point t, where m indicates the maximum clock deviation between any two data nodes of the distributed database system; determining the trust divergence time point t1' of the coordinator time t' according to the elapsed-time table; and determining the first time point t1 = t1' - m from the time deviation m and the trust divergence time point t1' of the coordinator time t'.
In one possible implementation, determining the trust divergence time point t1' of the coordinator time t' according to the elapsed-time table includes: searching the table for distributed transactions meeting a third preset condition, namely that the transaction start-commit time point is before t' - s and the corresponding finish-commit time is after t', where s is a preset value; if such distributed transactions exist, determining the earliest transaction start-commit time among them, or a time point before it, as the trust divergence time point t1' of the coordinator time t'; and if none exists, determining t' - s, or a time point before it, as the trust divergence time point t1' of the coordinator time t'.
In a possible implementation, dividing the trusted and untrusted time periods of the target time point according to the target time point and the first time point further includes: dividing out the untrusted time period [t1, t + m] of the target time point according to the target time point t, the time deviation m and the first time point t1.
In a possible implementation, matching the transactions to be matched based on the preset matching policy includes: if a fourth transaction among the transactions to be matched is a non-distributed transaction, replaying the fourth transaction; if a fifth transaction among the transactions to be matched is a distributed transaction and its corresponding global log was persisted before the target time point, replaying the fifth transaction; and not replaying the remaining transactions to be matched.
In one possible implementation, replaying the fifth transaction further comprises: if all branch transactions corresponding to the fifth transaction finished committing within the untrusted time period of the target time point, replaying the fifth transaction directly; and if one or more branch transactions corresponding to the fifth transaction did not finish committing within the untrusted time period of the target time point, committing those unfinished branch transactions at their corresponding data nodes after replaying the global transaction and the branch transactions that did finish committing.
In one possible implementation, determining the transaction commit durations of a plurality of distributed transactions from the incremental log of each data node includes: computing through the coordinator node, as the commit duration of the distributed transaction, the time difference from the persistence of the first general participant's prepare record to the persistence of the last general participant's commit response.
In a second aspect, a restoration apparatus for a distributed database system is provided, where the distributed database system includes a coordinator node and a plurality of data nodes, and the apparatus includes: an instruction unit configured to acquire a restore instruction that instructs each data node of the distributed database system to be restored to a target time point; an acquisition unit configured to acquire the full backup data and incremental logs of each data node according to the target time point; a determining unit configured to determine the trusted and untrusted time periods of the target time point; a first restoring unit configured to take the full backup data of each data node as the restore starting point and, according to the incremental log of each data node, roll back or execute the transactions whose commit each data node initiated within the trusted time period; and a second restoring unit configured to match the transactions to be matched based on a preset matching policy, the transactions to be matched being the transactions whose commit each data node initiated within the untrusted time period.
In a possible embodiment, the determining unit is further configured to: determine the transaction commit durations of a plurality of distributed transactions according to the incremental logs of the data nodes; record, into a distributed-transaction elapsed-time table, the transaction start-commit and finish-commit time points of every transaction whose commit duration exceeds a preset value; determine a first time point according to the elapsed-time table, the first time point being the trust divergence time point of the target time point; and divide the trusted and untrusted time periods of the target time point according to the target time point and the first time point.
In a possible embodiment, the determining unit is further configured to: search the elapsed-time table for distributed transactions meeting a first preset condition, namely that the transaction start-commit time point is before t - s and the corresponding finish-commit time is after t, where t is the target time point and s is the preset value; and determine the first time point as the earliest transaction start-commit time among those transactions, or a time point before it.
In a possible embodiment, the determining unit is further configured to: if no distributed transaction meeting the first preset condition is found in the elapsed-time table, determine the first time point as t - s or a time point before t - s, where t is the target time point and s is the preset duration.
In a possible embodiment, the determining unit is further configured to: if the distributed transaction consists of a global transaction and a plurality of two-phase branch transactions, compute through the coordinator node, as the commit duration of the distributed transaction, the time difference between the first prepare record of the branch transactions and the log record of the global transaction; or, if the distributed transaction is executed by general participants and a last participant, where the general participants use a two-phase commit protocol and the last participant uses a one-phase commit protocol, compute through the coordinator node, as the commit duration, the time difference from the first prepare record of the general participants to the last participant's commit response.
In one possible embodiment, the second restoring unit is further configured to: replay a first transaction among the transactions to be matched if it is a non-distributed transaction; replay a second transaction if all of its corresponding branch transactions were persisted within the trusted time period of the target time point; replay a third transaction if both its corresponding global log and all its branch logs exist; and not replay the remaining transactions to be matched.
In one possible embodiment, the second restoring unit is further configured to: determine a second time point according to the elapsed-time table, the second time point being the trust divergence time point of the first time point; judge whether the global log and all branch logs corresponding to any one or more distributed transactions exist in each data node's incremental logs between the second time point and the target time point; and if so, determine those distributed transactions to be third transactions.
In one possible embodiment, the second restoring unit is further configured to: search the elapsed-time table for second distributed transactions, namely transactions whose start-commit time point is before t1 - s and whose finish-commit time is after t1, and determine the second time point as the earliest transaction start-commit time among them, or a time point before it; and if no second distributed transaction is found in the table, determine the second time point as t1 - s or a time point before t1 - s, where t1 is the first time point and s is the preset value.
In one possible embodiment, the second restoring unit is further configured to: replace t with t - a when determining the first time point; and/or replace t1 with t1 - a when determining the second time point; where t is the target time point, a is a preset time deviation, and t1 is the first time point.
In one possible embodiment, the apparatus is further configured to: apply constraint processing, during the matching process, to two or more transactions that have dependency relationships.
In a possible embodiment, the apparatus further comprises a backup unit for: constructing a full backup of each data node to obtain full backup data; and acquiring the incremental logs of the data nodes.
In a possible embodiment, the backup unit is further configured to: lock each data node after it completes all two-phase transactions of the distributed transactions currently being processed, and take the full backup of each data node after locking.
In a possible embodiment, the backup unit is further configured to: take a first full backup of each data node at a first full-backup time point; determine the untrusted time period of the first full-backup time point, and acquire a first incremental-log set of each data node within that untrusted time period; analyze the first incremental-log set to obtain pending transactions that are in the prepared state and neither committed nor rolled back; and replay the pending transactions on the data obtained from the first full backup, thereby constructing the full backup of each data node.
In a possible embodiment, the backup unit is further configured to: take a second full backup of each data node at a second full-backup time point; determine the untrusted time period of the second full-backup time point, and acquire a second incremental-log set covering the span from the starting time point of that untrusted time period to any time point after every data node has completed the second full backup; and perform fault-tolerant replay of the second incremental-log set on the data obtained from the second full backup.
In one possible embodiment, the backup unit is further configured to: determine the trust divergence time point of the full-backup time point, where every transaction in the prepared state before the trust divergence time point can be confirmed, from the incremental log before the full-backup time point, to have been committed or rolled back; and determine the untrusted time period of the full-backup time point according to its trust divergence time point.
In a possible embodiment, the determining unit is further configured to: determine a coordinator time t' = t - m from a preset time deviation m and the target time point t, where m indicates the maximum clock deviation between any two data nodes of the distributed database system; determine the trust divergence time point t1' of the coordinator time t' according to the elapsed-time table; and determine the first time point t1 = t1' - m from the time deviation m and t1'.
In a possible embodiment, the determining unit is further configured to: search the elapsed-time table for distributed transactions meeting a third preset condition, namely that the transaction start-commit time point is before t' - s and the corresponding finish-commit time is after t', where s is a preset value; where,
if distributed transactions meeting the third preset condition exist, the earliest transaction start-commit time among them, or a time point before it, is determined as the trust divergence time point t1' of the coordinator time t';
if no such distributed transaction exists, t' - s, or a time point before it, is determined as the trust divergence time point t1' of the coordinator time t'.
In a possible embodiment, the determining unit is further configured to: divide out the untrusted time period [t1, t + m] of the target time point according to the target time point t, the time deviation m and the first time point t1.
In one possible embodiment, the second restoring unit is further configured to: replay a fourth transaction among the transactions to be matched if it is a non-distributed transaction; replay a fifth transaction if it is a distributed transaction and its corresponding global log was persisted before the target time point; and not replay the remaining transactions to be matched.
In one possible embodiment, the second restoring unit is further configured to: replay the fifth transaction directly if all of its corresponding branch transactions finished committing within the untrusted time period of the target time point;
and, if one or more branch transactions corresponding to the fifth transaction did not finish committing within the untrusted time period of the target time point, commit those unfinished branch transactions at their corresponding data nodes after replaying the global transaction and the branch transactions that did finish committing.
In one possible implementation, the determining unit is further configured to: compute through the coordinator node, as the commit duration of the distributed transaction, the time difference from the persistence of the first general participant's prepare record to the persistence of the last general participant's commit response.
In a third aspect, a restoring apparatus for a distributed database system is provided, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform: the method of the first aspect.
In a fourth aspect, there is provided a computer readable storage medium storing a program which, when executed by a multicore processor, causes the multicore processor to perform the method of the first aspect.
The embodiments of the present application adopt at least one technical solution capable of achieving the following beneficial effects: a consistency method is provided that supports distributed restoration to any time point within the backup cycle without blocking global transactions, and dividing time into a trusted period and an untrusted period significantly reduces the computational cost of consistent restoration.
It should be understood that the above description is only an overview of the technical solutions of the present invention, so as to clearly understand the technical means of the present invention, and thus can be implemented according to the content of the description. In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
The advantages and benefits described herein, as well as other advantages and benefits, will be apparent to those of ordinary skill in the art upon reading the following detailed description of the exemplary embodiments. The drawings are only for purposes of illustrating exemplary embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like elements throughout. In the drawings:
FIG. 1 is a schematic diagram of a distributed database system;
FIG. 2 is a schematic diagram of a distributed transaction;
FIG. 3 is a flowchart illustrating a restoration method of a distributed database system according to an embodiment of the invention;
FIG. 4 is a timing diagram of the restoration of a distributed database system according to an embodiment of the invention;
FIG. 5 is a schematic structural diagram of a restoration apparatus of a distributed database system according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a restoration apparatus of a distributed database system according to another embodiment of the present invention.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In the present invention, it is to be understood that terms such as "including" or "having," or the like, are intended to indicate the presence of the disclosed features, numbers, steps, behaviors, components, parts, or combinations thereof, and are not intended to preclude the possibility of the presence of one or more other features, numbers, steps, behaviors, components, parts, or combinations thereof.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
A transaction is a unit of concurrency control: a user-defined sequence of operations that either all complete or none do, making it an indivisible unit of work. In distributed transaction processing, a transaction may involve operations on multiple databases; the key is that there must be a way to know, wherever they occur, all the actions the transaction involves, so that the commit-or-rollback decision produces a uniform result for all of them (either all commit or all roll back).
FIG. 1 shows a distributed database system comprising a coordinator node 101, a plurality of data nodes 102, and a plurality of clients 103. The coordinator node 101 (Coordinator) of distributed transaction processing handles a transaction as follows. If a global transaction involves only one data node, that data node may commit the transaction using a one-phase protocol; a two-phase protocol could also be used, but would degrade processing performance and increase the failure probability. If the global transaction involves multiple data nodes, the branch transactions opened on those data nodes use a two-phase commit (2PC) protocol (a three-phase protocol could also be used): the coordinator drives each branch transaction into the prepare state, and once all branches are prepared it records a global transaction log (containing the branch-transaction information of the global transaction). If the global log is recorded successfully (the global transaction is then considered certain to succeed), the prepared branch transactions are committed until commit succeeds; if recording the global log fails (the global transaction is considered to have finally failed), all branch transactions are rolled back.
In practice, transaction commit is optimized as follows: one data node is selected from the participants as the last participant, and the others serve as general participants. When a global transaction is committed, the general participants first bring their branch transactions into the prepared state using the two-phase protocol; the last participant records (but does not yet commit) the global transaction log locally, inside its own branch transaction, before committing that branch; the last participant then commits its branch using the one-phase protocol. If the last participant commits successfully (the global transaction is considered successful), the prepared branch transactions of the general participants are committed until they succeed; if the last participant's commit fails (the global transaction is considered failed), the branch transactions are rolled back.
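The commit flow with the last-participant optimization can be pictured with a toy in-memory model. The sketch below is illustrative only (class and log names are invented here) and elides failures, timeouts and actual persistence:

```python
class Participant:
    def __init__(self, name: str):
        self.name, self.log = name, []

    def prepare(self, txn: str) -> bool:      # phase one of 2PC
        self.log.append((txn, "prepare"))
        return True

    def commit(self, txn: str) -> bool:       # phase two, or one-phase commit
        self.log.append((txn, "commit"))
        return True

    def rollback(self, txn: str) -> None:
        self.log.append((txn, "rollback"))


def commit_global(txn: str, generals: list, last: Participant) -> bool:
    """General participants prepare (two-phase); the last participant records
    the global transaction log inside its own branch and commits one-phase;
    its outcome decides whether the prepared branches commit or roll back."""
    if not all(p.prepare(txn) for p in generals):
        for p in generals:
            p.rollback(txn)
        return False
    last.log.append((txn, "global_log"))       # global log lands locally
    if last.commit(txn):                       # one-phase commit
        for p in generals:                     # now certain to succeed
            p.commit(txn)
        return True
    for p in generals:
        p.rollback(txn)
    return False

a, b, last = Participant("A"), Participant("B"), Participant("last")
assert commit_global("g1", [a, b], last)
assert ("g1", "global_log") in last.log and ("g1", "commit") in a.log
```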
FIG. 2 shows an example of a distributed transaction (the arrow direction indicates time). A user working through a client sees only the global transaction: the user first (explicitly or implicitly) opens the global transaction and later commits it. Opening a global transaction can be considered to take effect immediately, while committing one is a process. Between the start of the commit and its completion, the coordinator node coordinates general participant A and general participant B using the two-phase protocol while the last participant commits using the one-phase protocol; the two-phase logs (prepare log and commit log) of the general participants and the one-phase log (branch transaction plus global transaction log) of the last participant appear in the participants' incremental logs.
The difficulty of consistent restoration of a distributed database arises because a global transaction involves multiple data nodes, a two-phase protocol runs on those nodes, and in some scenarios the transaction cannot be recorded and observed at the same real time on all of them.
FIG. 3 is a flowchart illustrating a restoration method 300 of a distributed database system according to an embodiment of the present application. From a device perspective, the execution subject may be one or more electronic devices; from a program perspective, it may accordingly be the program hosted on those devices.
As shown in fig. 3, the method 300 may include:
Step 301: acquiring a restore instruction, where the restore instruction instructs each data node of the distributed database system to be restored to a target time point;
Step 302: acquiring the full backup data and incremental logs of each data node according to the target time point;
When the data of the distributed database system needs to be restored to a target time point, the latest full backup of each data node taken before the target time point must be acquired, together with each data node's incremental log between the full-backup time point and the target time point. Full backup data is a complete data image; in the distributed database every data node has its own full backup data. That is, at a particular time, the full backup data of a data node contains all of that node's data at that time. Full backups need not be taken in real time; a periodic backup mechanism is typically used. The full backup data plus the incremental logs form the basic inputs for rolling the distributed database back.
For example, if the target time point of the restore instruction is 12:00 and full backups are taken periodically, with the latest at 11:30, then the 11:30 full backup data and the incremental logs between 11:30 and 12:00 are acquired. That is, the data is first restored to the 11:30 full backup, which serves as the restore starting point of each data node.
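A minimal sketch of this selection step (helper name and data shapes invented for illustration):

```python
from typing import Iterable, Tuple

def pick_restore_inputs(backup_points: Iterable[float],
                        t: float) -> Tuple[float, Tuple[float, float]]:
    """Choose the latest full backup not after the target point t, and the
    incremental-log window that must be replayed on top of it."""
    base = max(bp for bp in backup_points if bp <= t)
    return base, (base, t)

# The 11:30 / 12:00 example above, with times as minutes since midnight
# (backups at 10:00, 11:00 and 11:30; target t = 12:00).
base, window = pick_restore_inputs([600, 660, 690], t=720)
assert base == 690 and window == (690, 720)
```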
Step 303, determining a trusted time period and an untrusted time period of the target time point;
In step 303, the trusted and untrusted time periods of the target time point are divided. For a target time point t, the trusted time period means that every transaction in that period that will finally commit has already completed its global-log persistence before time t, and is therefore certain to be committed. The untrusted time period of t means that the period may contain distributed transactions for which the log before time t cannot determine whether they will be committed or rolled back. The two periods are separated by the trust divergence time point of the target time point: the trusted period lies before the divergence point and the untrusted period after it.
In a possible implementation, step 303 may specifically include: determining the transaction commit durations of a plurality of distributed transactions according to the incremental logs of the data nodes; recording, into a distributed-transaction elapsed-time table, the transaction start-commit and finish-commit time points of every transaction whose commit duration exceeds a preset value; determining a first time point according to the elapsed-time table, where the first time point is the trust divergence time point of the target time point; and dividing the trusted and untrusted time periods of the target time point according to the target time point and the first time point.
In a possible implementation, determining the transaction commit durations of the plurality of distributed transactions according to the incremental logs of the data nodes specifically includes the following. When the last-participant policy is not used, a distributed transaction consists of a global transaction and the branch transactions of several general participants; in this case the coordinator node can compute, as the transaction commit duration, the time difference between the first prepare record of the two-phase branch transactions (i.e., the persistence time of the first general participant's prepare log) and the log record of the global transaction (i.e., the persistence time of the global transaction log). Alternatively, when the last-participant policy is used (see FIG. 2), the distributed transaction is executed by the general participants and the last participant corresponding to the data nodes, the general participants using the two-phase commit protocol and the last participant the one-phase commit protocol; the coordinator node can then compute, as the commit duration, the time difference from the first prepare record of the general participants (i.e., the persistence time of general participant A's prepare log) to the moment the last participant's commit response arrives.
In a possible implementation, when the last-participant policy is used, the coordinator node may instead compute, as an observable transaction commit duration, the time difference from the start of committing the global transaction to the last participant's one-phase log commit response. This observable duration is longer than the actual commit duration but is easier to obtain.
In one example, Table 1 shows a distributed-transaction elapsed-time table. Empirically, the commit duration of most distributed transactions is within s1 seconds (e.g., s1 = 0.3). If a distributed transaction's commit duration is greater than s1 seconds, its start-commit time and finish-commit time are recorded in the elapsed-time table of Table 1; if its commit duration is less than or equal to s1 seconds, it need not be recorded.
Table 1:

Sequence number (distributed transaction id) | Transaction start commit time | Transaction completion commit time
111111 | 2020-5-28 12:22:11.61 | 2020-5-28 12:22:11.92
222222 | 2020-5-29 12:22:11.61 | 2020-5-29 12:22:11.92
333333 | 2020-5-30 12:22:11.61 | 2020-5-30 12:22:11.92
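Populating such a table reduces to a filter over commit spans. A minimal sketch (names invented here), with s1 defaulting to the 0.3-second example above:

```python
from typing import Iterable, List, Tuple

Span = Tuple[str, float, float]  # (txn_id, start_commit, finish_commit)

def build_elapsed_table(commit_spans: Iterable[Span],
                        s1: float = 0.3) -> List[Span]:
    """Keep only transactions whose commit took longer than s1 seconds,
    mirroring how Table 1 is populated."""
    return [(txid, start, finish) for txid, start, finish in commit_spans
            if finish - start > s1]

# A transaction that commits within s1 is not recorded; a slower one is.
spans = [("fast", 0.0, 0.2), ("slow", 1.0, 1.5)]
assert build_elapsed_table(spans) == [("slow", 1.0, 1.5)]
```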
In another possible implementation, determining the transaction commit durations of the plurality of distributed transactions according to the incremental logs of the data nodes may further include: regardless of whether the last-participant policy is used, the coordinator node may compute, as the commit duration of a distributed transaction, the time difference from the persistence of the first general participant's prepare record (i.e., the time the first prepare log lands) to the persistence of the last general participant's commit response (i.e., the time the last commit log lands). For example, in FIG. 2 the commit duration of the distributed transaction corresponds to the persistence-time difference between the prepare log and the commit log in general participant A's incremental log.
In this embodiment, the transaction start-commit time point of a distributed transaction is the persistence time of the first general participant's prepare record, and the transaction finish-commit time point is the time of the last general participant's commit response.
In one example, Table 2 shows another elapsed-time table, for the embodiment in which the commit duration is the time difference from the first general participant's prepare record to the commit responses of all general participants. Empirically, the commit duration of most distributed transactions is within s2 seconds (e.g., s2 = 3). If a distributed transaction's commit duration is greater than s2 seconds, its start-commit and finish-commit times are recorded in the elapsed-time table of Table 2; if its commit duration is less than or equal to s2 seconds, it need not be recorded.
Table 2:

Sequence number (distributed transaction id) | Transaction start commit time | Transaction completion commit time
444444 | 2020-5-28 12:22:11.61 | 2020-5-28 12:22:14.62
555555 | 2020-5-29 12:22:11.61 | 2020-5-29 12:22:14.62
666666 | 2020-5-30 12:22:11.61 | 2020-5-30 12:22:14.62
In a possible implementation, determining the first time point from the elapsed-time table obtained above may include: searching the table for distributed transactions meeting the first preset condition, namely that the transaction start-commit time point is before t - s and the corresponding finish-commit time is after t, where t is the target time point and s is the preset value; and determining the first time point as the earliest transaction start-commit time among those transactions, or a time point before it.
In another possible implementation, determining the first time point according to the elapsed-time table may further include: if no distributed transaction meeting the first preset condition is found in the table, determining the first time point as t - s or a time point before t - s, where t is the target time point and s is the preset duration.
In another possible implementation, the time error inevitably present in a distributed system must be taken into account, and the scheme above is modified accordingly. Determining the first time point from the elapsed-time table may then further include: determining a coordinator time t' = t - m from a preset time deviation m and the target time point t, where m indicates the maximum clock deviation between any two data nodes of the distributed database system; determining the trust divergence time point t1' of the coordinator time t' according to the elapsed-time table; and determining the first time point t1 = t1' - m from the time deviation m and the trust divergence time point t1' of the coordinator time t'.
In this embodiment, given the unavoidable clock error in a distributed system, it may first be stipulated that the maximum clock error between any two data nodes does not exceed m seconds (for example, m = 3). Based on this error, the analysis is as follows. 1) For data nodes whose clocks run behind the coordinator, setting the coordinator time t' = t - m guarantees that the coordinator time is earlier than that of all data nodes; computing the trust divergence time point t1' of t' from the elapsed-time table, by the divergence-point method above, then satisfies the requirement for the slow nodes. 2) For data nodes whose clocks run ahead of the coordinator, subtracting the error m from the computed t1' guarantees the result is also early enough, i.e., t1 = t1' - m satisfies the fast nodes as well. Any time before t1 is likewise a divergence time point of t; in summary, the first time point is t1 = t1' - m.
In a possible implementation, obtaining the trust divergence time point t1' of the coordinator time t' using the elapsed-time table and the divergence-point method above specifically includes: searching the table for distributed transactions meeting a third preset condition, namely that the transaction start-commit time point is before t' - s and the corresponding finish-commit time is after t', where s is a preset value. If such distributed transactions exist, the earliest transaction start-commit time among them, or a time point before it, is determined as the trust divergence time point t1' of the coordinator time t'; conversely, if no such transaction exists, t' - s, or a time point before it, is determined as t1'.
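Putting the skew handling together, a hypothetical sketch (names invented here) that works at coordinator time t' = t - m, finds t1' over the elapsed-time table, returns t1 = t1' - m, and also reports the untrusted window [t1, t + m] described below:

```python
from typing import Iterable, Tuple

Span = Tuple[str, float, float]  # (txn_id, start_commit, finish_commit)

def divergence_with_skew(elapsed_table: Iterable[Span], t: float,
                         s: float, m: float) -> Tuple[float, Tuple[float, float]]:
    """Clock-skew-aware first time point and untrusted window."""
    t_prime = t - m                                  # coordinator time t'
    candidates = [start for _txid, start, finish in elapsed_table
                  if start < t_prime - s and finish > t_prime]
    t1_prime = min(candidates) if candidates else t_prime - s
    t1 = t1_prime - m                                # correct for fast nodes
    return t1, (t1, t + m)

# With no straddling transaction recorded, m = 3 and s = 0.5:
t1, window = divergence_with_skew([], t=100.0, s=0.5, m=3.0)
assert t1 == 93.5 and window == (93.5, 103.0)
```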
In addition, because each data node has a clock deviation relative to the target time point, the local times corresponding to the target time point t differ between nodes: the global log of a distributed transaction may be persisted before the target time point t on one data node while the corresponding prepare record of that distributed transaction on another data node is persisted after the replay point t.
In another possible embodiment, to avoid the above influence of the time deviation, step 303 may further include: dividing out the untrusted time period [t1, t + m] of the target time point according to the target time point t, the time deviation m and the first time point t1. In other words, during consistent restoration it is necessary to take the participant information recorded in the global logs and, from each branch participant's incremental log within the interval [t1, t + m], obtain the branch transactions to be processed and perform the matching process.
Optionally, the preset value s may correspondingly be set to s1 or s2 depending on which distributed-transaction elapsed-time table is used (such as Table 1 or Table 2 above); of course, other values consistent with operational experience may also be set.
It will be appreciated that, since the trust divergence time point of the target time point t is the first time point t1, any time before t1 can also be regarded as a divergence time point of t. This embodiment looks for the divergence time closest to the target time point t, which reduces the cost of matching the subsequent distributed transactions.
Step 304: taking the full backup data of each data node as the restore starting point, and rolling back or executing, according to the incremental log of each data node, the transactions whose commit each data node initiated within the trusted time period;
In the restoration scheme of this embodiment, as shown in FIG. 4, the trust divergence time point of the target time point t can be computed as the first time point t1. This means that for distributed transactions before t1, the incremental log before time t determines whether they will finally be rolled back or committed, while after t1 there may be distributed transactions for which the log before time t cannot determine commit or rollback. The incremental log can therefore be replayed directly at each data node up to the first time point t1, with no need to match distributed transactions.
Step 305: matching the transactions to be matched based on a preset matching strategy, where the transactions to be matched are the transactions initiated for commit by each data node within the untrusted time period.
In a possible implementation of step 305, the transactions to be matched are matched based on a preset matching policy, where the preset matching policy includes: (1) if a first transaction among the transactions to be matched is a non-distributed transaction, that is, it does not involve a distributed transaction (its one-phase transaction commit log contains no global transaction log), replaying the first transaction; (2) if the global log corresponding to a second transaction among the transactions to be matched is persisted within the trusted time period of the target time point, replaying the second transaction; (3) if the global log and all branch logs corresponding to a third transaction among the transactions to be matched exist, replaying the third transaction; and (4) not replaying the remaining transactions to be matched.
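The four branches of this policy could be encoded as a single predicate, as in the following illustrative sketch; the Txn record and its fields are hypothetical stand-ins for the global and branch logs described above, not an API of this disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Txn:
    """Hypothetical view of one to-be-matched transaction."""
    is_distributed: bool
    global_log_time: Optional[float] = None   # persist time of its global log
    global_log_present: bool = False
    branch_logs_present: List[bool] = field(default_factory=list)

def should_replay(txn: Txn, trusted_end: float) -> bool:
    if not txn.is_distributed:
        return True                           # (1) non-distributed: replay
    if txn.global_log_time is not None and txn.global_log_time <= trusted_end:
        return True                           # (2) global log persisted in trusted period
    if txn.global_log_present and txn.branch_logs_present \
            and all(txn.branch_logs_present):
        return True                           # (3) global log and all branch logs exist
    return False                              # (4) otherwise, do not replay
```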
In a possible implementation, preset matching policy (3) further includes: determining a second time point t2 according to the distributed transaction time consumption table, where, as shown in fig. 4, the second time point is the trust divergence time point of the first time point; judging whether the global log and all branch logs corresponding to any one or more distributed transactions exist in the incremental logs of each data node between the second time point and the target time point; and, if so, determining those distributed transactions to be third transactions.
Here, the trust divergence time point t2 of t1 is calculated. Since transactions before t2 are completed by t1, the global log and branch logs of a so-called "third transaction", if they exist, must exist within the period from t2 to t. Therefore, when replaying the incremental log of each data node from t2 to t1, it is also recorded which global transaction logs and which participant logs exist, for use in the matching process of step 305.
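As a hedged illustration of that bookkeeping, the replay pass could index the logs it encounters roughly as follows; the (time, kind, gtid, branch_id) record shape is an assumption of the sketch.

```python
# Hypothetical sketch: while replaying a node's incremental log from t2
# to t1, record which global-transaction logs and which participant
# (branch) logs were seen, for use by the later matching step.
def index_logs(records, t2, t1):
    seen_global, seen_branch = set(), set()
    for time, kind, gtid, branch_id in records:
        if t2 <= time <= t1:
            if kind == "global":
                seen_global.add(gtid)
            elif kind == "branch":
                seen_branch.add((gtid, branch_id))
    return seen_global, seen_branch
```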
In a possible implementation, determining the second time point according to the distributed transaction time consumption table includes: searching the distributed transaction time consumption table for a second distributed transaction, whose transaction start commit time is before t1-s and whose transaction finish commit time is after t1, and determining that the second time point is the earliest transaction start commit time among such transactions, or a time point before it; and, if no second distributed transaction can be found in the distributed transaction time consumption table, determining that the second time point is t1-s, where t1 is the first time point and s is the preset value.
In a possible implementation, considering the clock inconsistency of the data nodes, in practical applications the trust divergence time point may be moved slightly earlier: with the preset time deviation of the data nodes observed as a, t-a may replace t to determine the first time point; and/or t1-a may replace t1 to determine the second time point; where t is the target time point, a is the preset time deviation, and t1 is the first time point.
In another possible implementation, the preset matching policy of step 305 may instead include: (4) if a fourth transaction among the transactions to be matched is a non-distributed transaction, replaying the fourth transaction; (5) if a fifth transaction among the transactions to be matched is a distributed transaction and the global log corresponding to the fifth transaction is persisted before the target time point, replaying the fifth transaction; and (6) not replaying the remaining transactions to be matched.
For preset matching policy (5), if the global log corresponding to the fifth transaction is persisted before the target time point, then whether the fifth transaction is finally rolled back or committed can be determined from its global log within the untrusted time period, so the fifth transaction can be replayed.
In a possible implementation, preset matching policy (5) may further include the following: if all branch transactions corresponding to the fifth transaction finished committing within the untrusted time period of the target time point, which indicates that the global transaction and all branch transactions of the fifth transaction finished executing before the target time point, the fifth transaction can be replayed directly; if one or more branch transactions corresponding to the fifth transaction did not finish committing within the untrusted time period of the target time point, those branch transactions are committed on the corresponding data nodes after the finished global transaction and branch transactions corresponding to the fifth transaction have been replayed, so that the restoration remains consistent.
In one possible implementation, to further guarantee consistent restoration of distributed transactions, constraint processing is performed during matching on two or more transactions that have a dependency relationship: if an earlier transaction that the current transaction depends on cannot be replayed, the current transaction cannot be replayed either. If a non-replayed fourth transaction is a non-distributed transaction, it is skipped only on the current branch, and all subsequent transactions on the current branch that have a dependency constraint on it are also not replayed; if a non-replayed fifth transaction is a distributed transaction, it must not be replayed on any involved branch, and all subsequent transactions on the involved branches that have a dependency constraint on it must also not be replayed, and so on, until no replayed transaction depends on a non-replayed one. Typical inter-transaction dependencies include time dependence (the order in which transactions occur), row-record dependence, and so on. A sketch of this exclusion propagation follows below.
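A minimal sketch of the exclusion propagation, assuming transactions are visited in occurrence order and dependencies are supplied as a mapping from a transaction to the earlier transactions it depends on (all names hypothetical):

```python
# Hypothetical sketch: once a transaction is excluded from replay, every
# later transaction depending on it (directly or transitively) is also
# excluded.
def propagate_exclusions(txns_in_order, initially_excluded, deps):
    excluded = set(initially_excluded)
    for txn in txns_in_order:
        if txn in excluded:
            continue
        if any(dep in excluded for dep in deps.get(txn, ())):
            excluded.add(txn)   # for a distributed transaction, this
                                # exclusion applies on every involved branch
    return excluded
```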
In one possible implementation, before restoring the distributed database system, the method further includes: constructing a full backup of each data node to obtain the full backup data; and acquiring the incremental logs of the data nodes.
This embodiment relies on performing a full backup of each data node as the starting point for restoring that node. Incremental data backup relies on the logical log (branch transaction information records) on each data node, and consistent transaction matching and replay are performed against the global transaction logical log, thereby achieving consistent restoration of the distributed database. The global transaction logical log includes the global commit records and the association between global transactions and branch transactions.
In a possible implementation, to address the problem that transactions in the prepared state are generally not recorded when each data node takes a full backup, constructing the full backup of each data node may adopt any of the following three alternatives (a sketch of alternative (2) follows the list):
(1) Each data node locks after completing all two-phase transactions of the currently processed distributed transactions, and a full backup of each data node is taken after locking.
(2) Take a first full backup of each data node at a first full backup time point; determine the untrusted time period of the first full backup time point, and acquire a first incremental log set of each data node within that untrusted time period; analyze the first incremental log set to obtain the pending transactions that are in the prepared state and neither committed nor rolled back; and replay the pending transactions on the data obtained from the first full backup, thereby constructing the full backup of each data node.
(3) Take a second full backup of each data node at a second full backup time point; determine the untrusted time period of the second full backup time point, and acquire a second incremental log set from the starting time point of that untrusted time period to any time point after each data node completes the second full backup; and, using the second incremental log set, perform fault-tolerant replay on the data obtained from the second full backup. Fault-tolerant replay avoids the effects of applying the same data repeatedly, for example by ignoring duplicate inserts.
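For alternative (2), identifying the pending transactions from the first incremental log set might be sketched as follows; the (gtid, phase) record shape is an assumption of the sketch. The transactions returned are then replayed on the data obtained from the first full backup.

```python
# Hypothetical sketch: find transactions left in the prepared state,
# i.e. prepared but neither committed nor rolled back in the log set.
def pending_transactions(increment_records):
    prepared, ended = set(), set()
    for gtid, phase in increment_records:
        if phase == "prepare":
            prepared.add(gtid)
        elif phase in ("commit", "rollback"):
            ended.add(gtid)
    return prepared - ended
```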
In one embodiment, for alternative (2) or (3) above, the following steps may further be included to determine the untrusted time period of the first or second full backup time point: determining a trust divergence time point of the full backup time point, before which all transactions in the prepared state can be determined, from the incremental log preceding the full backup time point, to be committed or rolled back; and determining the untrusted time period of the full backup time point according to that trust divergence time point.
In this embodiment, the trust divergence time point of the full backup time point is defined per data node and differs from the trust divergence time point for global transactions. For the full backup time point i of a data node, the node calculates and analyses, from the preparation-phase times and commit-phase times of its branch transactions, a trust divergence time point i1 such that all transactions in the prepared state before i1 can be determined, from the incremental log before the full backup time point i, to be committed or rolled back, while a transaction in the prepared state between i1 and i cannot have its end state determined from the incremental log before time i. If alternative (2) or (3) is chosen, a branch transaction time consumption table, similar to the distributed transaction time consumption table, needs to be maintained on each data node (or jointly by all data nodes) for calculating the trust divergence time point on that node.
In a possible implementation manner, based on the same technical concept, an embodiment of the present invention further provides a recovery apparatus for a distributed database system, where the distributed database system includes a coordinator node and a plurality of data nodes, and the apparatus is configured to execute the recovery method for the distributed database system provided in any of the above embodiments. Fig. 5 is a schematic structural diagram of a recovery apparatus of a distributed database system according to an embodiment of the present invention.
As shown in fig. 5, the apparatus 500 includes:
an instruction unit 501, configured to obtain a restore instruction, where the restore instruction is used to instruct to restore each data node of the distributed database system to a target time point;
an obtaining unit 502, configured to obtain full backup data and incremental logs of each data node according to a target time point;
a determining unit 503, configured to determine a trusted time period and an untrusted time period of the target time point;
a first restoring unit 504, configured to take the full backup data of each data node as a recovery starting point, and roll back or execute, according to the incremental log of each data node, the transactions initiated for commit by each data node within the trusted time period;
and a second restoring unit 505, configured to match the transactions to be matched based on a preset matching policy, where the transactions to be matched are the transactions initiated for commit by each data node within the untrusted time period.
In a possible embodiment, the determining unit is further configured to: determine the transaction commit durations of a plurality of distributed transactions according to the incremental logs of the data nodes; record the transaction start commit time points and transaction finish commit time points corresponding to transaction commit durations greater than a preset value into a distributed transaction time consumption table; determine a first time point according to the distributed transaction time consumption table, where the first time point is the trust divergence time point of the target time point; and divide the trusted time period and the untrusted time period of the target time point according to the target time point and the first time point.
In a possible embodiment, the determining unit is further configured to: search the distributed transaction time consumption table for a distributed transaction meeting a first preset condition, wherein the first preset condition is that the transaction start commit time point is before t-s and the corresponding transaction finish commit time is after t, t being the target time point and s being the preset value; and determine that the first time point is the earliest transaction start commit time, or a time point before it, among the distributed transactions meeting the first preset condition.
In a possible embodiment, the determining unit is further configured to: if the distributed transaction meeting the first preset condition cannot be found in the distributed transaction time consumption table, determine that the first time point is t-s or a time point before t-s, wherein t is the target time point and s is the preset value.
In a possible embodiment, the determining unit is further configured to: if the distributed transaction consists of a plurality of two-phase branch transactions and a global transaction, calculate, through the coordinator node, the time difference between the first preparation record of the two-phase branch transactions of the distributed transaction and the log record of the global transaction, and take the time difference as the commit duration of the distributed transaction; or, if the distributed transaction is executed by general participants and a last participant, the general participants adopting a two-phase commit protocol and the last participant adopting a one-phase commit protocol, calculate, through the coordinator node, the time difference from the first preparation record of the general participants to the last participant's commit response as the commit duration of the distributed transaction.
In one possible embodiment, the second restoring unit is further configured to: if a first transaction among the transactions to be matched is a non-distributed transaction, replay the first transaction; if all branch transactions corresponding to a second transaction among the transactions to be matched are persisted within the trusted time period of the target time point, replay the second transaction; if the global log and all branch logs corresponding to a third transaction among the transactions to be matched exist, replay the third transaction; and not replay the remaining transactions to be matched.
In one possible embodiment, the second restoring unit is further configured to: determine a second time point according to the distributed transaction time consumption table, where the second time point is the trust divergence time point of the first time point; judge whether the global log and all branch logs corresponding to any one or more distributed transactions exist in the incremental logs of each data node between the second time point and the target time point; and, if so, determine those distributed transactions to be third transactions.
In one possible embodiment, the second restoring unit is further configured to: search the distributed transaction time consumption table for a second distributed transaction, whose transaction start commit time point is before t1-s and whose transaction finish commit time is after t1, and determine that the second time point is the earliest transaction start commit time in the second distributed transaction, or a time point before it; and, if no second distributed transaction can be found in the distributed transaction time consumption table, determine that the second time point is t1-s or a time point before t1-s, where t1 is the first time point and s is the preset value.
In one possible embodiment, the second restoring unit is further configured to: replace t with t-a to determine the first time point; and/or replace t1 with t1-a to determine the second time point; where t is the target time point, a is a preset time deviation, and t1 is the first time point.
In one possible embodiment, the apparatus is further configured to: in the matching process, two or more transactions having a dependency relationship are subjected to constraint processing.
In a possible embodiment, the apparatus further comprises a backup unit for: constructing a full backup of each data node to obtain full backup data; and acquiring the incremental logs of the data nodes.
In a possible embodiment, the backup unit is further configured to: lock after each data node completes all two-phase transactions of the currently processed distributed transactions, and perform a full backup of each data node after locking.
In a possible embodiment, the backup unit is further configured to: carrying out first full backup on each data node at a first full backup time point; determining an untrusted time period of a first full-volume backup time point, and acquiring a first incremental log set of each data node in the untrusted time period of the first full-volume backup time point; analyzing the first incremental log set to obtain pending transactions that are in a ready state and not committed or rolled back; the pending transactions are played back on the data obtained from the first full backup, thereby constructing a full backup of the respective data nodes.
In a possible embodiment, the backup unit is further configured to: performing second full backup on each data node at a second full backup time point; determining an untrusted time period of a second full-volume backup time point, and acquiring a second incremental log set from a starting time point of the untrusted time period of the second full-volume backup time point to any time point after the second full-volume backup is completed by each data node; and performing fault-tolerant playback on the data obtained according to the second full backup by using the second incremental log set.
In one possible embodiment, the apparatus is further configured to: determine a trust divergence time point of the full backup time point, before which all transactions in the prepared state can be confirmed, through the incremental log preceding the full backup time point, to be committed or rolled back; and determine the untrusted time period of the full backup time point according to the trust divergence time point of the full backup time point.
In a possible embodiment, the determining unit is further configured to: determine a coordinator time t' = t-m according to a preset time deviation m and the target time point t, where the preset time deviation m indicates the maximum clock deviation between any two data nodes of the distributed database system; determine the trust divergence time point t1' of the coordinator time t' according to the distributed transaction time consumption table; and determine the first time point t1 = t1'-m according to the time deviation value m and the trust divergence time point t1' of the coordinator time t'.
In a possible embodiment, the determining unit is further configured to: search the distributed transaction time consumption table for a distributed transaction meeting a third preset condition, namely that the transaction start commit time point is before t'-s and the corresponding transaction finish commit time is after t', where s is a preset value; wherein:
if a distributed transaction meeting the third preset condition exists, the earliest transaction start commit time, or a time point before it, is determined to be the trust divergence time point t1' of the coordinator time t';
if no distributed transaction meeting the condition exists, t'-s, or a time point before t'-s, is determined to be the trust divergence time point t1' of the coordinator time t'.
In a possible embodiment, the determining unit is further configured to: divide the untrusted time period [t1, t+m] of the target time point according to the target time point t, the time deviation value m and the first time point t1.
In one possible embodiment, the second restoring unit is further configured to: if a fourth transaction among the transactions to be matched is a non-distributed transaction, replay the fourth transaction; if a fifth transaction among the transactions to be matched is a distributed transaction and the global log corresponding to the fifth transaction is persisted before the target time point, replay the fifth transaction; and not replay the remaining transactions to be matched.
In one possible embodiment, the second restoring unit is further configured to: if all branch transactions corresponding to the fifth transaction finished committing within the untrusted time period of the target time point, replay the fifth transaction directly; and, if one or more branch transactions corresponding to the fifth transaction did not finish committing within the untrusted time period of the target time point, commit those branch transactions on the corresponding data nodes after the finished global transaction and branch transactions corresponding to the fifth transaction have been replayed.
In one possible implementation, determining the transaction commit duration of a plurality of distributed transactions from the incremental log of each data node includes: calculating, through the coordinator node, the time difference between the persistence of the first general participant's preparation record and the last general participant's commit response of the distributed transaction, as the commit duration of the distributed transaction.
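As an illustrative sketch with timestamps as plain numbers (field names assumed):

```python
# Hypothetical sketch: commit duration measured from the persistence of
# the first general participant's preparation record to the last general
# participant's commit response.
def commit_duration(prepare_record_times, commit_response_times):
    return max(commit_response_times) - min(prepare_record_times)
```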
Fig. 6 is a recovery apparatus of a distributed database system according to an embodiment of the present application, configured to execute a recovery method of the distributed database system shown in fig. 3, where the apparatus includes: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the restoration method of the distributed database system according to the embodiments.
According to some embodiments of the present application, there is provided a non-volatile computer storage medium for the above data processing method, having stored thereon computer-executable instructions which, when executed by a processor, perform the restoration method of the distributed database system illustrated in the above embodiments.
The embodiments in the present application are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus, device, and computer-readable storage medium embodiments, the description is simplified because they are substantially similar to the method embodiments, and reference may be made to some descriptions of the method embodiments for their relevance.
The apparatus, the device, and the computer-readable storage medium provided in the embodiment of the present application correspond to the method one to one, and therefore, the apparatus, the device, and the computer-readable storage medium also have advantageous technical effects similar to those of the corresponding method.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. Moreover, while the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into a single step, and/or one step may be broken down into multiple steps.
While the spirit and principles of the invention have been described with reference to several particular embodiments, it is to be understood that the invention is not limited to the disclosed embodiments; nor does the division into aspects imply that features in those aspects cannot be combined to advantage, the division being adopted for convenience of presentation only. The invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (44)

1. A method of restoring a distributed database system, the distributed database system comprising a coordinator node and a plurality of data nodes, the method comprising:
acquiring a restoring instruction, wherein the restoring instruction is used for indicating that each data node of the distributed database system is restored to a target time point;
acquiring full backup data and incremental logs of each data node according to the target time point;
determining a trusted time period and an untrusted time period of the target time point;
taking the full backup data of each data node as a recovery starting point, and rolling back or executing, according to the incremental log of each data node, the transactions initiated for commit by each data node within the trusted time period;
and matching the transactions to be matched based on a preset matching strategy, wherein the transactions to be matched are the transactions initiated for commit by each data node within the untrusted time period.
2. The method of claim 1, wherein determining the trusted time period and the untrusted time period for the target point in time comprises:
determining transaction commit durations of a plurality of distributed transactions according to the incremental logs of the data nodes;
recording a transaction start commit time point and a transaction finish commit time point corresponding to the transaction commit duration which is greater than a preset value into a distributed transaction time consumption table;
determining a first time point according to the distributed transaction time consumption table, wherein the first time point is a trust divergence time point of the target time point;
and dividing the trusted time period and the untrusted time period of the target time point according to the target time point and the first time point.
3. The method of claim 2, wherein determining the first time point according to the distributed transaction time consumption table comprises:
searching a distributed transaction meeting a first preset condition according to the distributed transaction time consumption table, wherein the first preset condition is that the transaction starting submission time point is before t-s and the corresponding transaction finishing submission time is after t, the t is the target time point, and the s is the preset value;
and determining that the first time point is the earliest transaction start and commit time in the distributed transactions meeting the first preset condition or a time point before the earliest transaction start and commit time.
4. The method of claim 3, wherein determining the first time point according to the distributed transaction time consumption table further comprises:
if the distributed transaction meeting the first preset condition cannot be found in the distributed transaction time consumption table, determining that the first time point is t-s or any time point before the t-s, wherein the t is the target time point, and the s is the preset value.
5. The method of claim 2, wherein determining the transaction commit durations for a plurality of distributed transactions from the incremental logs of the respective data nodes comprises:
if the distributed transaction consists of a plurality of two-phase branch transactions and a global transaction, calculating the time difference between the first preparation record of the two-phase branch transactions of the distributed transaction and the log record of the global transaction through a coordinator node, and taking the time difference as the transaction commit duration of the distributed transaction; or,
if the distributed transaction is executed by a general participant and a final participant, the general participant adopts a two-phase commit protocol, and the final participant adopts a one-phase commit protocol, calculating, through a coordinator node, the time difference from the first preparation record of the general participant to the final participant's commit response as the transaction commit duration of the distributed transaction.
6. The method of claim 2, wherein the matching the transaction to be matched based on a preset matching policy comprises:
if the first transaction in the transactions to be matched is a non-distributed transaction, playing back the first transaction;
if all branch transactions corresponding to a second transaction in the transactions to be matched are persisted within the trusted time period of the target time point, replaying the second transaction;
if the global log corresponding to the third transaction in the transactions to be matched and all the branch logs exist, playing back the third transaction; and,
and not playing back the rest of the transactions to be matched.
7. The method of claim 6, further comprising a step of determining the third transaction from the transactions to be matched, which comprises:
determining a second time point according to the distributed transaction time consumption table, wherein the second time point is a trust divergence time point of the first time point;
judging whether a global log and all branch logs corresponding to any one or more distributed transactions exist in the incremental logs between the second time point and the target time point of each data node;
and if so, determining that the any one or more distributed transactions are the third transaction.
8. The method of claim 7, wherein determining the second time point according to the distributed transaction time consumption table comprises:
searching a second distributed transaction according to the distributed transaction time consumption table, and determining that the second time point is the earliest transaction start commit time in the second distributed transaction, or a time point before the earliest transaction start commit time, wherein the transaction start commit time point of the second distributed transaction is before t1-s, and the transaction finish commit time is after the t1; and,
if the second distributed transaction cannot be found in the distributed transaction time consumption table, determining that the second time point is t1-s or a time point before the t1-s, wherein the t1 is the first time point and the s is the preset value.
9. The method of claim 4, wherein determining the second time point according to the distributed transaction time consumption table further comprises:
replacing the t with t-a to determine the first time point; and/or replacing the t1 with t1-a to determine the second time point;
wherein the t is the target time point, the a is a preset time deviation, and the t1 is the first time point.
10. The method of claim 1, further comprising:
in the matching process, two or more transactions having a dependency relationship are subjected to constraint processing.
11. The method of claim 1, further comprising, prior to restoring the distributed database system:
constructing a full backup of each data node to obtain the full backup data; and acquiring the incremental logs of the data nodes.
12. The method of claim 11, wherein said constructing a full backup of said respective data nodes comprises:
locking after all two-phase transactions of the currently processed distributed transactions are completed by each data node, and performing a full backup of each data node after locking.
13. The method of claim 11, wherein said constructing a full backup of said respective data nodes further comprises:
carrying out first full backup on each data node at a first full backup time point;
determining an untrusted time period of the first full-volume backup time point, and acquiring a first incremental log set of each data node in the untrusted time period of the first full-volume backup time point;
analyzing the first incremental log set to obtain pending transactions that are in a ready state and not committed or rolled back;
and replaying the pending transactions on the data obtained according to the first full backup, thereby constructing a full backup of the respective data nodes.
14. The method of claim 11, wherein said constructing a full backup of said respective data nodes comprises:
carrying out second full backup on each data node at a second full backup time point;
determining an untrusted time period of the second full-volume backup time point, and acquiring a second incremental log set from a starting time point of the untrusted time period of the second full-volume backup time point to any time point after the second full-volume backup is completed by each data node;
and performing fault-tolerant playback on the data obtained according to the second full-volume backup by using the second incremental log set.
15. The method of claim 13 or 14, further comprising:
determining a trust divergence time point for a full backup time point before which all transactions in a ready state can be determined to be committed or rolled back by an incremental log before the full backup time point;
and determining the untrusted time period of the full backup time point according to the trust divergence time point of the full backup time point.
16. The method of claim 2, wherein determining the first time point according to the distributed transaction time consumption table comprises:
determining a coordinator time t' = t-m according to a preset time deviation m and the target time point t, wherein the preset time deviation m is used for indicating the maximum time deviation between any two data nodes of the distributed database system;
determining a trust divergence time point t1' of the coordinator time t' according to the distributed transaction time consumption table;
determining the first time point t1 = t1'-m according to the time deviation value m and the trust divergence time point t1' of the coordinator time t'.
17. The method of claim 16, wherein determining the trust divergence time point t1 'of the coordinator time t' according to the distributed transaction time consumption table comprises:
searching a distributed transaction meeting a third preset condition according to the distributed transaction time consumption table, wherein the third preset condition is that the transaction start commit time point is before t'-s and the corresponding transaction finish commit time is after t', and s is a preset value; wherein:
if the distributed transaction meeting the third preset condition exists, determining that the earliest transaction start commit time, or a time point before the earliest transaction start commit time, is the trust divergence time point t1' of the coordinator time t';
if no distributed transaction meeting the preset condition exists, determining the t'-s, or a time point before the t'-s, as the trust divergence time point t1' of the coordinator time t'.
18. The method of claim 16, wherein partitioning the trusted time period and the untrusted time period for the target time point according to the target time point and the first time point further comprises:
dividing the untrusted time period [t1, t+m] of the target time point according to the target time point t, the time deviation value m and the first time point t1.
19. The method of claim 1, wherein the matching the transaction to be matched based on a preset matching policy comprises:
if the fourth transaction in the transactions to be matched is a non-distributed transaction, playing back the fourth transaction;
if a fifth transaction in the transactions to be matched is a distributed transaction and the global log corresponding to the fifth transaction is persisted before the target time point, playing back the fifth transaction; and,
and not playing back the rest of the transactions to be matched.
20. The method of claim 19, wherein playing back the fifth transaction further comprises:
if the branch transactions corresponding to the fifth transaction all finished committing within the untrusted time period of the target time point, directly playing back the fifth transaction;
if one or more branch transactions corresponding to the fifth transaction are not completed to commit in the untrusted time period of the target time point, the one or more branch transactions that are not completed to commit are committed at the corresponding data node after the completed committed global transaction and branch transaction corresponding to the fifth transaction are replayed.
21. The method of any of claims 2-4 or 11-20, wherein determining a transaction commit duration for a plurality of distributed transactions from the incremental logs of the respective data nodes comprises:
calculating, by a coordinator node, the time difference between the persistence of the first general participant's preparation record and the last general participant's commit response of the distributed transaction, as the transaction commit duration of the distributed transaction.
22. An apparatus for restoring a distributed database system, the distributed database system including a coordinator node and a plurality of data nodes, the apparatus comprising:
the instruction unit is used for acquiring a restoration instruction, and the restoration instruction is used for indicating that each data node of the distributed database system is restored to a target time point;
the acquisition unit is used for acquiring the full backup data and the incremental logs of each data node according to the target time point;
a determining unit, configured to determine a trusted time period and an untrusted time period of the target time point;
a first restoring unit, configured to take the full backup data of each data node as a recovery starting point, and roll back or execute, according to the incremental log of each data node, the transactions initiated for commit by each data node within the trusted time period;
and a second restoring unit, configured to match the transactions to be matched based on a preset matching strategy, wherein the transactions to be matched are the transactions initiated for commit by each data node within the untrusted time period.
23. The apparatus of claim 22, wherein the determining unit is further configured to:
determining transaction commit durations of a plurality of distributed transactions according to the incremental logs of the data nodes;
recording a transaction start commit time point and a transaction finish commit time point corresponding to the transaction commit duration which is greater than a preset value into a distributed transaction time consumption table;
determining a first time point according to the distributed transaction time consumption table, wherein the first time point is a trust divergence time point of the target time point;
and dividing the trusted time period and the untrusted time period of the target time point according to the target time point and the first time point.
24. The apparatus of claim 23, wherein the determining unit is further configured to:
searching a distributed transaction meeting a first preset condition according to the distributed transaction time consumption table, wherein the first preset condition is that the transaction starting submission time point is before t-s and the corresponding transaction finishing submission time is after t, the t is the target time point, and the s is the preset value;
and determining the first time point as the earliest transaction start and commit time in the distributed transactions meeting the first preset condition, or a time point before the earliest transaction start and commit time.
25. The apparatus of claim 24, wherein the determining unit is further configured to:
if the distributed transaction meeting the first preset condition cannot be found in the distributed transaction time consumption table, determining that the first time point is t-s or a time point before the t-s, wherein t is the target time point, and s is the preset value.
26. The apparatus of claim 23, wherein the determining unit is further configured to:
if the distributed transaction consists of a plurality of two-phase branch transactions and a global transaction, calculating the time difference between the first preparation record of the two-phase branch transactions of the distributed transaction and the log record of the global transaction through a coordinator node, and taking the time difference as the transaction commit duration of the distributed transaction; or,
if the distributed transaction is executed by a general participant and a final participant, the general participant adopts a two-phase commit protocol, and the final participant adopts a one-phase commit protocol, calculating, through a coordinator node, the time difference from the first preparation record of the general participant to the final participant's commit response as the transaction commit duration of the distributed transaction.
27. The apparatus of claim 23, wherein the second restoring unit is further configured to:
if the first transaction in the transactions to be matched is a non-distributed transaction, playing back the first transaction;
if all branch transactions corresponding to a second transaction in the transactions to be matched are persisted within the trusted time period of the target time point, replaying the second transaction;
if the global log corresponding to the third transaction in the transactions to be matched and all the branch logs exist, playing back the third transaction; and,
and not playing back the rest of the transactions to be matched.
28. The apparatus of claim 27, wherein the second restoring unit is further configured to:
determining a second time point according to the distributed transaction time consumption table, wherein the second time point is a trust divergence time point of the first time point;
judging whether a global log and all branch logs corresponding to any one or more distributed transactions exist in the incremental logs between the second time point and the target time point of each data node;
and if so, determining that the any one or more distributed transactions are the third transaction.
29. The apparatus of claim 28, wherein the second restoring unit is further configured to:
searching a second distributed transaction according to the distributed transaction time consumption table, and determining that the second time point is the earliest transaction start commit time in the second distributed transaction, or a time point before the earliest transaction start commit time, wherein the transaction start commit time point of the second distributed transaction is before t1-s, and the transaction finish commit time is after the t1; and,
if the second distributed transaction cannot be found in the distributed transaction time consumption table, determining that the second time point is t1-s or a time point before the t1-s, wherein the t1 is the first time point and the s is the preset value.
30. The apparatus of claim 25, wherein the second restoring unit is further configured to:
replacing the t with t-a to determine the first time point; and/or replacing the t1 with t1-a to determine the second time point;
wherein the t is the target time point, the a is a preset time deviation, and the t1 is the first time point.
31. The apparatus of claim 22, wherein the apparatus is further configured to:
in the matching process, two or more transactions having a dependency relationship are subjected to constraint processing.
32. The apparatus of claim 22, further comprising a backup unit configured to:
constructing a full backup of each data node to obtain the full backup data; and acquiring the incremental logs of the data nodes.
33. The apparatus of claim 32, wherein the backup unit is further configured to:
locking after all two-phase transactions of the currently processed distributed transactions are completed by each data node, and performing a full backup of each data node after locking.
34. The apparatus of claim 32, wherein the backup unit is further configured to:
carrying out first full backup on each data node at a first full backup time point;
determining an untrusted time period of the first full-volume backup time point, and acquiring a first incremental log set of each data node in the untrusted time period of the first full-volume backup time point;
analyzing the first incremental log set to obtain pending transactions that are in a ready state and not committed or rolled back;
and replaying the pending transactions on the data obtained according to the first full backup, thereby constructing a full backup of the respective data nodes.
35. The apparatus of claim 32, wherein the backup unit is further configured to:
carrying out second full backup on each data node at a second full backup time point;
determining an untrusted time period of the second full-volume backup time point, and acquiring a second incremental log set from a starting time point of the untrusted time period of the second full-volume backup time point to any time point after the second full-volume backup is completed by each data node;
and performing fault-tolerant playback on the data obtained according to the second full-volume backup by using the second incremental log set.
36. The apparatus of claim 34 or 35, further comprising:
determining a trust divergence time point for a full backup time point before which all transactions in a ready state can be determined to be committed or rolled back by an incremental log before the full backup time point;
and determining the untrusted time period of the full backup time point according to the trust divergence time point of the full backup time point.
37. The apparatus of claim 23, wherein the determining unit is further configured to:
determining a coordinator time t' = t-m according to a preset time deviation m and the target time point t, wherein the preset time deviation m is used for indicating the maximum time deviation between any two data nodes of the distributed database system;
determining a trust divergence time point t1' of the coordinator time t' according to the distributed transaction time consumption table;
determining the first time point t1 = t1'-m according to the time deviation value m and the trust divergence time point t1' of the coordinator time t'.
38. The apparatus of claim 37, wherein the determining unit is further configured to:
searching a distributed transaction meeting a third preset condition according to the distributed transaction time consumption table, wherein the third preset condition is that the transaction start commit time point is before t'-s and the corresponding transaction finish commit time is after t', and s is a preset value; wherein:
if the distributed transaction meeting the third preset condition exists, determining that the earliest transaction start commit time, or a time point before the earliest transaction start commit time, is the trust divergence time point t1' of the coordinator time t';
if no distributed transaction meeting the preset condition exists, determining the t'-s, or a time point before the t'-s, as the trust divergence time point t1' of the coordinator time t'.
39. The apparatus of claim 37, wherein the determining unit is further configured to:
dividing the untrusted time period [t1, t+m] of the target time point according to the target time point t, the time deviation value m and the first time point t1.
40. The apparatus of claim 23, wherein the second restoring unit is further configured to:
if the fourth transaction in the transactions to be matched is a non-distributed transaction, playing back the fourth transaction;
if a fifth transaction in the transactions to be matched is a distributed transaction and the global log corresponding to the fifth transaction is persisted before the target time point, playing back the fifth transaction; and,
and not playing back the rest of the transactions to be matched.
41. The apparatus of claim 40, wherein the second restoring unit is further configured to:
if the branch transactions corresponding to the fifth transaction all finished committing within the untrusted time period of the target time point, directly playing back the fifth transaction;
if one or more branch transactions corresponding to the fifth transaction are not completed to commit in the untrusted time period of the target time point, the one or more branch transactions that are not completed to commit are committed at the corresponding data node after the completed committed global transaction and branch transaction corresponding to the fifth transaction are replayed.
42. The apparatus of any of claims 23-25 or 32-41, wherein determining a transaction commit duration for a plurality of distributed transactions from the incremental logs of the respective data nodes comprises:
calculating, by a coordinator node, the time difference between the persistence of the first general participant's preparation record and the last general participant's commit response of the distributed transaction, as the transaction commit duration of the distributed transaction.
43. A recovery apparatus for a distributed database system, comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform: the method of any one of claims 1-21.
44. A computer-readable storage medium storing a program that, when executed by a multi-core processor, causes the multi-core processor to perform the method of any of claims 1-21.
CN202110312905.2A 2020-08-24 2021-03-24 Restoration method and device of distributed database system and computer readable storage medium Pending CN112882870A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010859622.5A CN112000522A (en) 2020-08-24 2020-08-24 Restoration method and device of distributed database system and computer readable storage medium
CN2020108596225 2020-08-24

Publications (1)

Publication Number Publication Date
CN112882870A true CN112882870A (en) 2021-06-01

Family

ID=73470710

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010859622.5A Pending CN112000522A (en) 2020-08-24 2020-08-24 Restoration method and device of distributed database system and computer readable storage medium
CN202110312905.2A Pending CN112882870A (en) 2020-08-24 2021-03-24 Restoration method and device of distributed database system and computer readable storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202010859622.5A Pending CN112000522A (en) 2020-08-24 2020-08-24 Restoration method and device of distributed database system and computer readable storage medium

Country Status (1)

Country Link
CN (2) CN112000522A (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112000521B (en) * 2020-08-24 2021-08-27 中国银联股份有限公司 Full backup method and device for distributed database system and computer readable storage medium
CN112463457A (en) * 2020-12-10 2021-03-09 上海爱数信息技术股份有限公司 Data protection method, device, medium and system for guaranteeing application consistency
CN113238892B (en) * 2021-05-10 2022-01-04 深圳巨杉数据库软件有限公司 Time point recovery method and device for global consistency of distributed system

Also Published As

Publication number Publication date
CN112000522A (en) 2020-11-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination