CN114238353A - Method and system for realizing distributed transaction - Google Patents

Method and system for realizing distributed transaction

Info

Publication number
CN114238353A
Authority
CN
China
Prior art keywords
lock
distributed
transaction
distributed transaction
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111570290.XA
Other languages
Chinese (zh)
Inventor
王浩之
张琦
王瀚墨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yunxi Technology Co ltd
Original Assignee
Shandong Inspur Scientific Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Inspur Scientific Research Institute Co Ltd filed Critical Shandong Inspur Scientific Research Institute Co Ltd
Priority to CN202111570290.XA priority Critical patent/CN114238353A/en
Publication of CN114238353A publication Critical patent/CN114238353A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • G06F16/2308Concurrency control
    • G06F16/2336Pessimistic concurrency control approaches, e.g. locking or multiple versions without time stamps
    • G06F16/2343Locking methods, e.g. distributed locking or locking implementation details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • G06F16/2228Indexing structures
    • G06F16/2246Trees, e.g. B+trees
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • G06F16/2228Indexing structures
    • G06F16/2255Hash tables
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • G06F16/2365Ensuring data consistency and integrity

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to a method and a system for implementing distributed transactions. The method and system store the data involved in distributed transaction processing in memory-resident distributed locks, which provides isolation between distributed transactions; the distributed locks are kept in a memory-resident lock table so they can be searched and modified easily. Data and locks are synchronized among multiple server nodes through a consensus algorithm, guaranteeing the consistency of distributed transactions, and a transaction log guarantees their integrity and durability. In addition, to reduce the number of I/O operations, the disk is neither read nor written except for reading values and for the final commit. The method and system make full use of the performance advantages of memory, reduce disk reads and writes, improve the execution efficiency of distributed transactions while keeping them safe, and satisfy the atomicity, consistency, isolation and durability requirements of distributed transactions.

Description

Method and system for realizing distributed transaction
Technical Field
The invention relates to the technical field of distributed databases, in particular to a method and a system for realizing distributed transactions.
Background
The internet today is enormous: large volumes of new data are generated every moment, alongside a large number of requests to access existing data. Newly generated data must be stored in a database promptly while large amounts of data are read out, so the database must support high-frequency writes as well as high-frequency reads.
Data security is equally important. While providing strong read and write capability, the database must guarantee the consistency and integrity of its data. When a fault brings a server down or even destroys it, the data must be protected or recovered so that none is lost. The database also needs good scalability, so that all newly added data can be stored even as data grows rapidly and nothing is lost for lack of capacity. Finally, service continuity must be guaranteed: a failed server should be recovered promptly so that the service is never interrupted or unavailable.
A traditional database is generally deployed on a single server, which is expensive to scale and cannot cope with failures such as downtime. To address these fatal problems, ensure data is not lost, and restore service promptly, master-slave database systems that back up the database appeared, followed by distributed databases. In either case, keeping data safe requires the cooperation of multiple servers that store the same data. To synchronize that data promptly and guarantee its consistency and integrity across servers, distributed transactions must be introduced.
Currently, there are two main implementations of distributed transactions:
1) two-phase commit (2PC)
This approach divides the execution of a distributed transaction into two phases: a prepare phase and a commit phase.
In the prepare phase, each server executes the distributed transaction locally and then writes its transaction log.
In the commit phase, if any server failed during the prepare phase, all servers roll back the distributed transaction; if all servers succeeded in the prepare phase, all servers commit it.
The drawback of this approach is low fault tolerance: every server must succeed for the commit to succeed, and the distributed transaction fails whenever any server fails or goes down.
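The two phases described above can be sketched in a few lines of Python. The `Participant` class and `two_phase_commit` function are illustrative assumptions for exposition only, not part of the patent or of any real 2PC implementation:

```python
class Participant:
    """One server taking part in a two-phase commit (illustrative only)."""

    def __init__(self, name, fail_prepare=False):
        self.name = name
        self.fail_prepare = fail_prepare   # simulate a prepare-phase failure
        self.prepared = False

    def prepare(self, txn):
        # Prepare phase: execute the transaction locally, write the log.
        if self.fail_prepare:
            return False
        self.prepared = True
        return True

    def commit(self, txn):
        self.prepared = False
        return True

    def rollback(self, txn):
        self.prepared = False


def two_phase_commit(participants, txn):
    # Phase 1: every participant must prepare successfully.
    if all(p.prepare(txn) for p in participants):
        # Phase 2: all succeeded, so all servers commit.
        for p in participants:
            p.commit(txn)
        return True
    # Any single failure aborts the whole distributed transaction.
    for p in participants:
        p.rollback(txn)
    return False
```

Note how one failed participant forces every server to roll back, which is exactly the low fault tolerance the text criticizes.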
2) Consensus algorithm and multi-version concurrency control (MVCC)
A consensus algorithm (such as Raft) requires an odd number of servers, divided into two kinds of nodes: a leader and followers. Only the node elected leader executes distributed transactions; it encapsulates the executed operations into log entries and synchronizes them to the follower nodes. Each follower parses the data from the log and applies it locally; an entry can be committed as soon as more than half of the nodes have synchronized it successfully.
The consensus algorithm guarantees the consistency of distributed transactions, is more efficient than two-phase commit, tolerates the failure of a minority of the servers, and therefore has higher fault tolerance. Multi-version concurrency control guarantees the isolation of distributed transactions: data is stored in multiple versions, including data not yet committed while a distributed transaction executes. These versions are synchronized to every node through the consensus algorithm and persisted to disk, so other distributed transactions can observe them, achieving isolation between transactions. Writing to disk promptly also ensures that data is not lost and failures can be recovered in time.
The drawback of this method is that every operation must be written to disk, and other distributed transactions must read the disk to learn whether conflicts exist; this increases the number of I/O operations and slows down execution.
Based on the above situation, the invention provides a method and a system for realizing distributed transactions.
Disclosure of Invention
To make up for the deficiencies of the prior art, the invention provides a simple and efficient method and system for implementing distributed transactions.
The invention is realized by the following technical scheme:
A method for implementing distributed transactions, characterized in that: memory-resident distributed locks are used to store the data involved in distributed transaction processing, providing isolation between distributed transactions; the distributed locks are kept in a memory-resident lock table so they can be searched and modified easily; data and locks are synchronized among multiple server nodes through a consensus algorithm, guaranteeing the consistency of distributed transactions; the integrity and durability of distributed transactions are guaranteed through the transaction log; in addition, to reduce the number of I/O operations, the disk is neither read nor written except for reading values and for the final commit.
A lock is created when the distributed transaction operates on the database, recording the transaction and data information so as to isolate distributed transactions; the specific process is as follows:
firstly, before a distributed transaction operates on the database, the operation type is checked and the lock table is accessed through the concurrency manager, and the lock table is queried by the Key of the data to be operated on; if a corresponding lock is found and is not held by this transaction, the transaction blocks and waits until the distributed transaction holding the lock finishes and releases it; if no corresponding lock is found, or the lock found is held by this distributed transaction, the operation continues;
secondly, after the operation of the distributed transaction has executed, the Key and Value of the operated data are passed to the concurrency manager;
thirdly, the concurrency manager creates a lock object from the data's Key and Value and the distributed transaction's ID, saving the Key, the Value and the transaction information in the lock;
fourthly, the concurrency manager adds the newly created lock to the lock table.
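The four steps above might look like the following Python sketch. The `ConcurrencyManager` and `Lock` classes are hypothetical (the patent gives no code); the lock table is modeled here as a plain in-memory hash table guarded by a condition variable:

```python
import threading


class Lock:
    """Memory-resident lock recording the data's Key, Value and txn ID."""

    def __init__(self, key, value, txn_id):
        self.key, self.value, self.txn_id = key, value, txn_id


class ConcurrencyManager:
    def __init__(self):
        self.lock_table = {}               # hash-table lock table, by Key
        self.cond = threading.Condition()

    def acquire(self, txn_id, key):
        # Step 1: block while another transaction holds a lock on this Key;
        # proceed if there is no lock or this transaction already holds it.
        with self.cond:
            while (key in self.lock_table
                   and self.lock_table[key].txn_id != txn_id):
                self.cond.wait()

    def record(self, txn_id, key, value):
        # Steps 2-4: build a lock from Key/Value/txn ID and add it to the
        # table; dict assignment also covers updating an already-held lock.
        with self.cond:
            self.lock_table[key] = Lock(key, value, txn_id)
```

A transaction would call `acquire` before each write and `record` after it; because the table is resident in memory, neither call touches the disk.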
In the first step, a lock table object is created and kept in memory; the lock table uses a hash table or a B+ tree data structure to store the locks.
In the fourth step, if the distributed transaction already holds a lock on the Key of the data to be operated on, the concurrency manager updates the existing lock in the lock table with the data of the new lock; otherwise, it adds the newly created lock to the lock table.
When the distributed transaction finishes, the locks it holds are deleted from the lock table, and the distributed transactions waiting for those locks are notified so that the blocked transactions can proceed.
The transaction log records the changes a distributed transaction makes to the database; the specific process is as follows:
firstly, before the distributed transaction commits, the final operation result is assembled from the Keys and Values stored in the locks, and a transaction log is constructed from it; the transaction log contains the final operation result of the distributed transaction;
secondly, following the write-ahead-logging principle, the transaction log is written to disk before the data; if writing the transaction log fails, the distributed transaction is rolled back and the memory is cleaned up;
thirdly, after the transaction log has been written successfully, the final operation result of the distributed transaction is written to disk.
If the server node fails after the transaction log was written successfully, so that the final operation result cannot be written to disk or the database data is lost, recovery is performed from the transaction log.
The consistency of distributed transactions is achieved through the consensus algorithm; the specific process is as follows:
firstly, before a new lock is created, the lock's information is synchronized to the follower nodes through the consensus algorithm;
secondly, each follower node creates a lock locally from the received lock information; the follower's lock likewise contains the data's Key and Value and the distributed transaction information;
thirdly, before the distributed transaction commits, the follower nodes are notified through the consensus algorithm;
fourthly, after a follower node receives the commit notification, it runs the commit process locally, including generating the log and writing to disk.
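The follower side of these steps might be sketched as below. `FollowerNode` and its method names are hypothetical, and the log-generation and disk-write work of the fourth step is elided:

```python
class FollowerNode:
    """Follower sketch: mirror the leader's locks, commit on notification."""

    def __init__(self):
        self.lock_table = {}   # key -> (value, txn_id), mirroring the leader
        self.committed = []

    def on_lock_sync(self, key, value, txn_id):
        # Step 2: build a local lock from the synchronized lock information.
        self.lock_table[key] = (value, txn_id)

    def on_commit(self, txn_id):
        # Step 4: run the commit locally (log generation and disk write
        # elided), then drop the committed transaction's locks.
        result = {k: v for k, (v, t) in self.lock_table.items() if t == txn_id}
        self.committed.append((txn_id, result))
        self.lock_table = {k: e for k, e in self.lock_table.items()
                           if e[1] != txn_id}
        return result
```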
The consensus algorithm used is Raft, and the process of synchronizing an operation result to the follower nodes through the consensus algorithm is as follows:
firstly, after one operation of the distributed transaction has executed successfully, the Key and Value of the operated data and the distributed transaction information are returned and passed to the Raft protocol stack;
secondly, the Raft protocol stack encapsulates the received data into a Raft log entry and synchronizes it to the follower nodes;
thirdly, once more than half of the nodes have synchronized successfully, the leader node continues executing the distributed transaction; if synchronization fails, the leader node rolls back the distributed transaction;
fourthly, when the distributed transaction commits, the follower nodes are notified through the Raft protocol stack and begin creating the transaction log and writing the log and the result to disk;
if the leader node fails before the distributed transaction commits, the Raft protocol elects a new leader from the follower nodes.
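The majority check in the third step can be illustrated with a small leader-side sketch. `Follower.append_entry` stands in for the real Raft replication RPC and, like the other names here, is an assumption:

```python
class Follower:
    def __init__(self, healthy=True):
        self.healthy = healthy
        self.log = []

    def append_entry(self, entry):
        # Stand-in for the Raft replication RPC; a failed node never acks.
        if self.healthy:
            self.log.append(entry)
        return self.healthy


def replicate(followers, entry):
    """Leader-side majority check before continuing the transaction."""
    acks = 1                          # the leader counts itself
    for f in followers:
        if f.append_entry(entry):
            acks += 1
    cluster_size = len(followers) + 1
    # Continue only with a strict majority; otherwise roll back.
    return acks > cluster_size // 2
```

In a five-node cluster, for example, two follower failures still leave three acknowledgments, so the leader can continue.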
The lock's information means the data Key and Value and the distributed transaction information corresponding to the distributed lock.
A system based on the above method for implementing distributed transactions, characterized in that: the distributed server nodes are divided, according to the consensus algorithm, into a leader node and follower nodes; the follower nodes serve as standby nodes for the leader, and all operations of a distributed transaction are executed by the leader node and synchronized to the followers through the consensus algorithm;
both the leader node and the follower nodes are provided with a lock table manager and a concurrency manager;
the lock table manager, maintained by the concurrency manager, stores and manages the lock structures of all distributed transactions; a corresponding lock can be found quickly by the Key of the data, so conflicts between distributed transactions are identified effectively and isolation is achieved;
the concurrency manager creates and maintains the distributed locks and the lock table manager, and manages the concurrency relations between distributed transactions through the lock table manager.
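One possible, purely illustrative wiring of these components in Python (all class names are assumptions; the patent specifies the components but not their code):

```python
class LockTableManager:
    """Stores all transactions' lock structures, keyed by data Key."""

    def __init__(self):
        self.table = {}

    def find(self, key):
        # Fast lookup by Key: a hit on another txn's lock means a conflict.
        return self.table.get(key)


class ConcurrencyManager:
    """Creates and maintains the lock table manager and distributed locks."""

    def __init__(self):
        self.lock_table_manager = LockTableManager()

    def add_lock(self, key, value, txn_id):
        self.lock_table_manager.table[key] = (value, txn_id)


class Node:
    def __init__(self, role):
        self.role = role                        # "leader" or "follower"
        self.concurrency_manager = ConcurrencyManager()


def build_cluster(n):
    # One leader executes all operations; the rest stand by as followers.
    return [Node("leader")] + [Node("follower") for _ in range(n - 1)]
```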
The invention has the following beneficial effects: the method and system for implementing distributed transactions combine a lock table, a consensus algorithm and a transaction log, make full use of the performance advantages of memory, reduce disk reads and writes, and improve the execution efficiency of distributed transactions while ensuring their safety, satisfying the atomicity, consistency, isolation and durability of distributed transactions.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic diagram of an architecture for a distributed transaction with multiple server nodes according to the present invention.
FIG. 2 is a schematic diagram of a processing flow of a distributed transaction of a leader node according to the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the technical solution in the embodiment of the present invention will be clearly and completely described below with reference to the embodiment of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The method for implementing distributed transactions uses memory-resident distributed locks to store the data involved in distributed transaction processing, providing isolation between distributed transactions; the distributed locks are kept in a memory-resident lock table so they can be searched and modified easily; data and locks are synchronized among multiple server nodes through a consensus algorithm, guaranteeing the consistency of distributed transactions; the integrity and durability of distributed transactions are guaranteed through the transaction log; in addition, to reduce the number of I/O operations, the disk is neither read nor written except for reading values and for the final commit.
A lock is created when the distributed transaction operates on the database, recording the transaction and data information so as to isolate distributed transactions; the specific process is as follows:
firstly, before a distributed transaction operates on the database, the operation type is checked and the lock table is accessed through the concurrency manager, and the lock table is queried by the Key of the data to be operated on; if a corresponding lock is found and is not held by this transaction, the transaction blocks and waits until the distributed transaction holding the lock finishes and releases it; if no corresponding lock is found, or the lock found is held by this distributed transaction, the operation continues;
secondly, after the operation of the distributed transaction has executed, the Key and Value of the operated data are passed to the concurrency manager;
thirdly, the concurrency manager creates a lock object from the data's Key and Value and the distributed transaction's ID, saving the Key, the Value and the transaction information in the lock;
fourthly, the concurrency manager adds the newly created lock to the lock table.
An operation of a distributed transaction on the database means an operation that can modify data in the database; the Key is an ID that uniquely identifies a row of data in the database; the Value represents a row of data in the database.
In the first step, a lock table object is created and kept in memory; the lock table uses a hash table or a B+ tree data structure to store the locks.
In the fourth step, if the distributed transaction already holds a lock on the Key of the data to be operated on, the concurrency manager updates the existing lock in the lock table with the data of the new lock; otherwise, it adds the newly created lock to the lock table.
When the distributed transaction finishes, the locks it holds are deleted from the lock table, and the distributed transactions waiting for those locks are notified so that the blocked transactions can proceed.
All operation results produced before the distributed transaction commits are kept in memory, and no transaction log is needed at that point; the transaction log is generated and written to disk only when the distributed transaction is about to commit.
The transaction log records the changes a distributed transaction makes to the database; the specific process is as follows:
firstly, before the distributed transaction commits, the final operation result is assembled from the Keys and Values stored in the locks, and a transaction log is constructed from it; the transaction log contains the final operation result of the distributed transaction;
secondly, following the write-ahead-logging principle, the transaction log is written to disk before the data; if writing the transaction log fails, the distributed transaction is rolled back and the memory is cleaned up;
thirdly, after the transaction log has been written successfully, the final operation result of the distributed transaction is written to disk.
If the server node fails after the transaction log was written successfully, so that the final operation result cannot be written to disk or the database data is lost, recovery is performed from the transaction log, because the transaction log contains the complete data.
A follower node is a concept from the consensus algorithm; its main functions are data backup and taking over after the leader node fails. The server nodes are divided into the leader node and follower nodes; there is only one leader, and all operations of a distributed transaction are executed by the leader and synchronized to the followers.
The consistency of distributed transactions is achieved through the consensus algorithm; the specific process is as follows:
firstly, before a new lock is created, the lock's information is synchronized to the follower nodes through the consensus algorithm;
secondly, each follower node creates a lock locally from the received lock information; the follower's lock likewise contains the data's Key and Value and the distributed transaction information;
thirdly, before the distributed transaction commits, the follower nodes are notified through the consensus algorithm;
fourthly, after a follower node receives the commit notification, it runs the commit process locally, including generating the log and writing to disk.
The consensus algorithm used is Raft, and the process of synchronizing an operation result to the follower nodes through the consensus algorithm is as follows:
firstly, after one operation of the distributed transaction has executed successfully, the Key and Value of the operated data and the distributed transaction information are returned and passed to the Raft protocol stack;
secondly, the Raft protocol stack encapsulates the received data into a Raft log entry and synchronizes it to the follower nodes;
thirdly, once more than half of the nodes have synchronized successfully, the leader node continues executing the distributed transaction; if synchronization fails, the leader node rolls back the distributed transaction;
fourthly, when the distributed transaction commits, the follower nodes are notified through the Raft protocol stack and begin creating the transaction log and writing the log and the result to disk.
If the leader node fails before the distributed transaction commits, the Raft protocol elects a new leader from the follower nodes. Because the operation results produced before the commit have been synchronized to all nodes, no data is lost and the distributed transaction can continue executing, which effectively improves the fault tolerance of the database.
The lock's information means the data Key and Value and the distributed transaction information corresponding to the distributed lock. The lock's information identifies the lock and can also serve as the operation record when the distributed transaction needs to roll back or commit.
The system based on the above method for implementing distributed transactions divides the distributed server nodes, according to the consensus algorithm, into a leader node and follower nodes; the follower nodes serve as standby nodes for the leader, and all operations of a distributed transaction are executed by the leader node and synchronized to the followers through the consensus algorithm.
Both the leader node and the follower nodes are provided with a lock table manager and a concurrency manager.
The lock table manager, maintained by the concurrency manager, stores and manages the lock structures of all distributed transactions; a corresponding lock can be found quickly by the Key of the data, so conflicts between distributed transactions are identified effectively and isolation is achieved.
The concurrency manager creates and maintains the distributed locks and the lock table manager, and manages the concurrency relations between distributed transactions through the lock table manager.
The embodiment described above is only one specific embodiment of the invention; ordinary changes and substitutions made by those skilled in the art within the technical scope of the invention fall within its protection scope.

Claims (9)

1. A method for implementing distributed transactions, characterized in that: memory-resident distributed locks are used to store the data involved in distributed transaction processing, providing isolation between distributed transactions; the distributed locks are kept in a memory-resident lock table so they can be searched and modified easily; data and locks are synchronized among multiple server nodes through a consensus algorithm, guaranteeing the consistency of distributed transactions; the integrity and durability of distributed transactions are guaranteed through the transaction log; in addition, to reduce the number of I/O operations, the disk is neither read nor written except for reading values and for the final commit.
2. The method of claim 1, wherein: a lock is created when the distributed transaction operates on the database, recording the transaction and data information so as to isolate distributed transactions; the specific process is as follows:
firstly, before a distributed transaction operates on the database, the operation type is checked and the lock table is accessed through the concurrency manager, and the lock table is queried by the Key of the data to be operated on; if a corresponding lock is found and is not held by this transaction, the transaction blocks and waits until the distributed transaction holding the lock finishes and releases it; if no corresponding lock is found, or the lock found is held by this distributed transaction, the operation continues;
secondly, after the operation of the distributed transaction has executed, the Key and Value of the operated data are passed to the concurrency manager;
thirdly, the concurrency manager creates a lock object from the data's Key and Value and the distributed transaction's ID, saving the Key, the Value and the transaction information in the lock;
fourthly, the concurrency manager adds the newly created lock to the lock table.
3. The method of claim 2, wherein: in the first step, a lock table object is created and kept in memory; the lock table uses a hash table or a B+ tree data structure to store the locks.
4. The method of claim 2, wherein: in the fourth step, if the distributed transaction already holds a lock on the Key of the data to be operated on, the concurrency manager updates the existing lock in the lock table with the data of the new lock; otherwise, it adds the newly created lock to the lock table;
when the distributed transaction finishes, the locks it holds are deleted from the lock table, and the distributed transactions waiting for those locks are notified so that the blocked transactions can proceed.
5. The method of claim 1, wherein: the transaction log records the changes a distributed transaction makes to the database; the specific process is as follows:
firstly, before the distributed transaction commits, the final operation result is assembled from the Keys and Values stored in the locks, and a transaction log is constructed from it; the transaction log contains the final operation result of the distributed transaction;
secondly, following the write-ahead-logging principle, the transaction log is written to disk before the data; if writing the transaction log fails, the distributed transaction is rolled back and the memory is cleaned up;
thirdly, after the transaction log has been written successfully, the final operation result of the distributed transaction is written to disk;
if the server node fails after the transaction log was written successfully, so that the final operation result cannot be written to disk or the database data is lost, recovery is performed from the transaction log.
6. The method for implementing distributed transactions according to claim 1 or 2, characterized in that: consistency of the distributed transactions is achieved through a consensus algorithm, as follows:
in the first step, before a new lock is created, the information of the lock is synchronized to the follower nodes through the consensus algorithm;
in the second step, each follower node creates a lock locally from the received lock information, the follower node's lock likewise containing the Key and Value of the data and the distributed transaction information;
in the third step, the follower nodes are notified through the consensus algorithm before the distributed transaction commits;
in the fourth step, after receiving the commit notification, each follower node executes the commit procedure locally, including generating the log and writing it to disk.
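The follower-side behavior of the steps above can be sketched as follows (illustrative; an in-memory list stands in for the disk, and all names are hypothetical):

```python
class Follower:
    """Sketch of claim 6: a follower mirrors lock info and commits on notification."""
    def __init__(self):
        self.locks = {}      # data Key -> (Value, txn_id), mirrored from the leader
        self.log = []        # committed results; stands in for the on-disk log

    def on_lock_info(self, key, value, txn_id):
        # step two: create the lock locally from the synchronized lock information
        self.locks[key] = (value, txn_id)

    def on_commit(self, txn_id):
        # step four: execute the commit procedure locally, then release the locks
        for key, (value, t) in list(self.locks.items()):
            if t == txn_id:
                self.log.append((key, value))
                del self.locks[key]
```

Because the follower holds the same lock state as the leader, it can take over as a standby node with the uncommitted transactions' state already in memory.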
7. The method of claim 6, wherein: the information of the lock comprises the data Key, the Value and the distributed transaction information corresponding to the distributed lock.
8. The method of claim 6, wherein: the consensus algorithm is the Raft algorithm, and the operation result is synchronized to the follower nodes through the consensus algorithm as follows:
in the first step, after an operation of the distributed transaction has executed successfully, the Key, Value and distributed transaction information of the operated data are returned and passed to the Raft protocol stack;
in the second step, the Raft protocol stack encapsulates the received data into a Raft log entry and synchronizes it to the follower nodes;
in the third step, once more than half of the follower nodes have synchronized successfully, the leader node continues executing the distributed transaction; if synchronization fails, the leader node rolls back the distributed transaction;
in the fourth step, when the distributed transaction commits, the follower nodes are notified through the Raft protocol stack and begin creating the transaction log and writing the log and the result to disk;
if the leader node fails before the distributed transaction commits, the Raft protocol elects a new leader node from among the follower nodes.
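The majority-acknowledgement rule in the third step can be sketched as follows (a deliberate simplification of Raft log replication, not a full implementation; all names are hypothetical):

```python
class Leader:
    """Sketch of claim 8's majority rule: proceed only after more than half ack."""
    def __init__(self, followers):
        self.followers = followers   # each follower is a callable returning True on ack
        self.pending = []            # entries replicated but not yet committed

    def append(self, key, value, txn_id):
        entry = {"key": key, "value": value, "txn": txn_id}   # the "Raft log" entry
        acks = sum(1 for f in self.followers if f(entry))
        if acks > len(self.followers) // 2:   # more than half synchronized: continue
            self.pending.append(entry)
            return True
        self.pending.clear()                  # synchronization failed: roll back
        return False
```

Real Raft counts the leader itself toward the majority and handles term changes and log matching; this sketch only captures the ack-then-continue / fail-then-rollback decision the claim describes.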
9. A system based on the method for implementing distributed transactions of any one of claims 1-8, characterized in that: the distributed server nodes are divided into a leader node and follower nodes according to the consensus algorithm, the follower nodes serving as standby nodes for the leader node; all operations of distributed transactions are executed by the leader node and synchronized to the follower nodes through the consensus algorithm;
both the leader node and the follower nodes are provided with a lock table manager and a concurrency manager;
the lock table manager is maintained by the concurrency manager and stores and manages the lock structures of all distributed transactions; the lock corresponding to a piece of data can be looked up quickly by its Key, so that conflicts between distributed transactions are identified effectively and isolation of distributed transactions is achieved;
the concurrency manager creates and maintains the distributed locks and the lock table manager, and manages the concurrency relations among distributed transactions through the lock table manager.
CN202111570290.XA 2021-12-21 2021-12-21 Method and system for realizing distributed transaction Pending CN114238353A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111570290.XA CN114238353A (en) 2021-12-21 2021-12-21 Method and system for realizing distributed transaction

Publications (1)

Publication Number Publication Date
CN114238353A true CN114238353A (en) 2022-03-25

Family

ID=80760186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111570290.XA Pending CN114238353A (en) 2021-12-21 2021-12-21 Method and system for realizing distributed transaction

Country Status (1)

Country Link
CN (1) CN114238353A (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101706811A (en) * 2009-11-24 2010-05-12 中国科学院软件研究所 Transaction commit method of distributed database system
KR101296778B1 (en) * 2012-09-18 2013-08-14 (주)카디날정보기술 Method of eventual transaction processing on nosql database
US20180276269A1 (en) * 2015-01-27 2018-09-27 Clusterpoint Group Limited Transaction processing in distributed database management system
CN106033437A (en) * 2015-03-13 2016-10-19 阿里巴巴集团控股有限公司 Method and system for processing distributed transaction
US9904722B1 (en) * 2015-03-13 2018-02-27 Amazon Technologies, Inc. Log-based distributed transaction management
US20180232412A1 (en) * 2017-02-10 2018-08-16 Sap Se Transaction commit protocol with recoverable commit identifier
CN112241400A (en) * 2020-10-21 2021-01-19 衡阳云汇科技有限公司 Method for realizing distributed lock based on database
CN113391885A (en) * 2021-06-18 2021-09-14 电子科技大学 Distributed transaction processing system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU HAIYANG et al.: "Research on Implementation Methods of the Two-Phase Commit Protocol for Distributed Transactions", 《信息技术与信息化》 (Information Technology and Informatization), no. 02, 15 April 2011 (2011-04-15), pages 72 - 75 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116225724A (en) * 2023-05-09 2023-06-06 云筑信息科技(成都)有限公司 Method for realizing distributed retry scheduling based on memory
CN116225724B (en) * 2023-05-09 2023-08-22 云筑信息科技(成都)有限公司 Method for realizing distributed retry scheduling based on memory
CN117435574A (en) * 2023-12-21 2024-01-23 北京大道云行科技有限公司 Improved two-stage commit transaction implementation method, system, device and storage medium

Similar Documents

Publication Publication Date Title
US7330859B2 (en) Database backup system using data and user-defined routines replicators for maintaining a copy of database on a secondary server
US7779295B1 (en) Method and apparatus for creating and using persistent images of distributed shared memory segments and in-memory checkpoints
CN101567805B (en) Method for recovering failed parallel file system
CN106598762B (en) Message synchronization method
US20120254120A1 (en) Logging system using persistent memory
US6934877B2 (en) Data backup/recovery system
CN102024016B (en) Rapid data restoration method for distributed file system (DFS)
US20120246116A1 (en) System and method for data replication between heterogeneous databases
US8099627B1 (en) Persistent images of distributed shared memory segments and in-memory checkpoints
CN111858629B (en) Implementation method and device for two-stage submitting distributed transaction update database
CN109992628B (en) Data synchronization method, device, server and computer readable storage medium
CN114238353A (en) Method and system for realizing distributed transaction
JP6220851B2 (en) System and method for supporting transaction recovery based on strict ordering of two-phase commit calls
US20060095478A1 (en) Consistent reintegration a failed primary instance
US9471622B2 (en) SCM-conscious transactional key-value store
CN109582686B (en) Method, device, system and application for ensuring consistency of distributed metadata management
US20090063807A1 (en) Data redistribution in shared nothing architecture
KR101296778B1 (en) Method of eventual transaction processing on nosql database
CN105574187A (en) Duplication transaction consistency guaranteeing method and system for heterogeneous databases
US20120278429A1 (en) Cluster system, synchronization controlling method, server, and synchronization controlling program
US11003550B2 (en) Methods and systems of operating a database management system DBMS in a strong consistency mode
CN113905054B (en) RDMA (remote direct memory access) -based Kudu cluster data synchronization method, device and system
US8818943B1 (en) Mirror resynchronization of fixed page length tables for better repair time to high availability in databases
CN114385755A (en) Distributed storage system
CN115658245B (en) Transaction submitting system, method and device based on distributed database system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20221202

Address after: Room 305-22, Building 2, No. 1158 Zhangdong Road and No. 1059 Dangui Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai, 200120

Applicant after: Shanghai Yunxi Technology Co.,Ltd.

Address before: Building S02, 1036 Gaoxin Langchao Road, Jinan, Shandong 250100

Applicant before: Shandong Inspur Scientific Research Institute Co.,Ltd.
