CN115017168A - Distributed transaction processing system and method - Google Patents

Distributed transaction processing system and method

Info

Publication number
CN115017168A
CN115017168A
Authority
CN
China
Prior art keywords
server
transaction
incremental
data
servers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210640508.2A
Other languages
Chinese (zh)
Inventor
朱思文
庄元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yunyao Technology Zhejiang Co ltd
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202210640508.2A priority Critical patent/CN115017168A/en
Publication of CN115017168A publication Critical patent/CN115017168A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • G06F16/2308Concurrency control
    • G06F16/2315Optimistic concurrency control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a distributed transaction processing system and method. The system comprises a client-facing fusion server and a plurality of incremental servers connected to the fusion server, wherein each incremental server is provided with a plurality of corresponding backup servers and is used for storing incremental data, and the incremental data stored by the incremental servers do not overlap. The fusion server is configured to: when a data update request sent by the client is received, acquire a data partitioning rule; determine the incremental servers involved in the data update according to the data information involved in the request and the data partitioning rule; if N incremental servers are involved, split the physical update plan corresponding to the data update into N update sub-plans; and control the N involved incremental servers to execute their respective update sub-plans to complete the data update. The invention can improve the processing efficiency of incremental data.

Description

Distributed transaction processing system and method
Technical Field
The invention relates to the technical field of databases, in particular to a distributed transaction processing system and a distributed transaction processing method.
Background
With the further development of technologies such as cloud computing and Web 2.0, traditional relational databases struggle to process massive data. NoSQL databases give up the strict transactional consistency and normal-form constraints of conventional relational databases, adopt a weak consistency model, and support distributed, horizontally scalable deployment that meets the requirements of massive data management. They have therefore received wide attention and adoption in the field of big data processing, for example Google BigTable and Amazon Dynamo. Compared with traditional relational databases, NoSQL databases offer high cost-effectiveness and scalability, which has made them a preferred choice for domestic financial enterprises dealing with massive data. However, besides processing massive data, traditional financial business also requires strong transactional consistency during data processing, and most existing NoSQL databases do not support strongly consistent transactions, so they cannot meet the demands of financial services.
The distributed massive relational database YaoBase is a relational database system that provides the high reliability, high availability, strong consistency and high scalability of a traditional distributed data management system while also supporting SQL and database transactions. YaoBase implements cross-row and cross-table transactions over huge data volumes, and designs and implements mechanisms such as multiple transaction nodes, multi-copy redundancy, fault tolerance and load balancing, so that the whole system can continuously provide high-quality data read-write services.
Currently, the transaction logic of YaoBase is designed around a single incremental server (i.e., YaotxnSvr, TS for short). In YaoBase, the incremental server stores incremental update data; a master server and a slave server are deployed, and the master and slave are synchronized through the operation log. Only the master YaotxnSvr in a cluster is allowed to provide write service. When processing a write transaction, the YaotxnSvr writes incremental data into memory; when memory usage reaches a certain level, the data in memory is dumped to SSD; during the daily merge, the data in the incremental server is merged into the baseline data server YaodataSvr. The YaotxnSvr ensures reliability by writing operation logs.
Under this single-incremental-server architecture, although the existing YaoBase supports strongly consistent transactions, the YaotxnSvr easily becomes a performance bottleneck when highly concurrent read-write transactions arrive together. In particular, most of the incremental data in the YaotxnSvr is stored in memory, which prolongs transaction response time and prevents the read-write performance of the database from being fully exploited. An optimized design for the single incremental server is therefore urgently needed.
Disclosure of Invention
Embodiments of the present invention provide a distributed transaction processing system and method, which can reduce the single-point load of a single incremental server and improve the processing efficiency of incremental data.
In a first aspect, a distributed transaction processing system provided in an embodiment of the present invention includes: a client-facing fusion server and a plurality of incremental servers communicatively connected to the fusion server, wherein each incremental server is provided with a plurality of corresponding backup servers and is used for storing incremental data, and the incremental data stored by the incremental servers do not overlap; wherein:
the fusion server is configured to: when a data update request sent by the client is received, acquire a data partitioning rule; determine the incremental servers involved in the data update according to the data information involved in the request and the data partitioning rule; if N incremental servers are involved, split the physical update plan corresponding to the data update into N update sub-plans; and control the N involved incremental servers to execute their respective update sub-plans to complete the data update; wherein the data partitioning rule includes a first mapping relationship between a plurality of data tables and a plurality of groups, and a second mapping relationship between the plurality of groups and the plurality of incremental servers, N being a positive integer greater than 1.
In a second aspect, a distributed transaction processing method provided in an embodiment of the present invention is implemented based on the distributed transaction processing system provided in the first aspect, and the method includes:
when receiving a data update request sent by the client, the fusion server acquires a data partitioning rule; determines the incremental servers involved in the data update according to the data information involved in the request and the data partitioning rule; if N incremental servers are involved, splits the physical update plan corresponding to the data update into N update sub-plans; and controls the N involved incremental servers to execute their respective update sub-plans to complete the data update; wherein the data partitioning rule includes a first mapping relationship between a plurality of data tables and a plurality of groups, and a second mapping relationship between the plurality of groups and the plurality of incremental servers, and N is a positive integer greater than 1.
The distributed transaction processing system of the embodiment of the invention comprises a fusion server and a plurality of incremental servers, wherein each incremental server stores incremental data and the incremental data stored by the incremental servers do not overlap. When the fusion server receives a data update request sent by a client, it acquires the data partitioning rule; determines the incremental servers involved in the data update according to the data information involved in the request and the data partitioning rule; splits the physical update plan corresponding to the data update into multiple update sub-plans if multiple incremental servers are involved; and controls the involved incremental servers to execute their corresponding update sub-plans to complete the data update. Because the embodiment of the invention is provided with a plurality of incremental servers, data can be partitioned among them in an orderly manner through the data partitioning rule. When highly concurrent read-write transactions arrive together, the single-point load on any single incremental server is reduced, the processing efficiency of incremental data is improved, and the overall performance of the distributed database is improved.
Drawings
FIG. 1 is a block diagram of a distributed transaction processing system in accordance with an embodiment of the present invention;
FIG. 2 is a mapping relationship diagram of data partitioning rules in an embodiment of the present invention;
FIG. 3a is a schematic diagram of a distributed transaction processing system in accordance with one embodiment of the present invention;
FIG. 3b is a schematic diagram of the interaction between a coordinator server and 2 participant servers according to one embodiment of the present invention;
FIG. 4 is a schematic flow chart illustrating a split of a physical update plan performed by a fusion server according to an embodiment of the present invention;
FIG. 5 is a timing diagram illustrating the processing of transactions within an incremental server according to one embodiment of the present invention;
FIG. 6 is a flowchart illustrating the processing performed after a coordinator server timeout in an embodiment of the present invention;
FIG. 7 is a flowchart illustrating the processing performed after a participant server timeout in an embodiment of the present invention.
Detailed Description
In a first aspect, the present invention provides a distributed transaction processing system.
Referring to fig. 1, the system includes: a client-facing fusion server and a plurality of incremental servers communicatively connected to the fusion server, wherein each incremental server is provided with a plurality of corresponding backup servers, each incremental server is used for storing incremental data, and the incremental data stored by the incremental servers do not overlap.
The system is in fact part of a YaoBase system architecture. The fusion server is the YaosqlSvr, SS for short. The incremental server is the YaotxnSvr, TS for short. In the new system architecture, each incremental server has a plurality of backup servers, that is, a plurality of copies: one master server and several backups. The incremental server and its copies form a Paxos group, and data consistency and reliability among them are guaranteed by the Paxos protocol.
The fusion server faces the client: it receives requests sent by the client, controls the plurality of incremental servers to perform the corresponding processing, and then returns the corresponding feedback to the client.
YaoBase is a typical distributed database architecture. Data are distributed on different nodes and are divided into baseline data and incremental data, which are stored on a baseline server (namely YaodataSvr, DS for short) and an incremental server, respectively. When receiving a query request sent by a client, the fusion server sends the request to the baseline server and the incremental server; the baseline server performs the query operation to obtain the static data and returns it to the fusion server, the incremental server performs the query operation to obtain the incremental data and returns it to the fusion server, and the fusion server merges the static data with the incremental data and returns the merged result to the client.
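The merging step can be pictured with the following minimal C++ sketch; the row representation, the use of an empty optional to mark an incremental delete, and the function name MergeResults are assumptions made for illustration and are not YaoBase's actual interfaces.

```cpp
// Hypothetical sketch of merging baseline and incremental query results.
// Types and names are illustrative only; they are not YaoBase's actual API.
#include <iostream>
#include <map>
#include <optional>
#include <string>

using Row = std::string;

std::map<std::string, Row> MergeResults(
    const std::map<std::string, Row>& baseline,                    // from YaodataSvr
    const std::map<std::string, std::optional<Row>>& increments) { // from YaotxnSvr; nullopt = deleted
  std::map<std::string, Row> merged = baseline;
  for (const auto& [key, value] : increments) {
    if (value) merged[key] = *value;   // insert or overwrite with the newer incremental value
    else       merged.erase(key);      // an incremental delete removes the baseline row
  }
  return merged;
}

int main() {
  std::map<std::string, Row> baseline{{"k1", "v1"}, {"k2", "v2"}};
  std::map<std::string, std::optional<Row>> inc{{"k1", std::nullopt}, {"k2", "v2'"}, {"k3", "v3"}};
  for (const auto& [k, v] : MergeResults(baseline, inc))
    std::cout << k << " -> " << v << "\n";  // k2 -> v2', k3 -> v3
}
```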
The above is the query service provided by YaoBase. YaoBase also provides a data writing service, i.e., a data update service. Because the embodiment of the invention is provided with a plurality of incremental servers and the incremental data is distributed over different incremental servers, the embodiment of the invention mainly addresses how to carry out the related operations on the basis of this multi-incremental-server arrangement.
To solve this problem, referring to fig. 2, in an embodiment of the present invention a data partitioning rule is maintained on a server. The data partitioning rule consists of a first-level partition and a second-level partition: the first mapping relationship is obtained through the first-level partition, and the second mapping relationship is obtained through the second-level partition. First, the data is divided at the first level into a plurality of groups; the groups are then divided at the second level, which maps the groups to different incremental servers.
The first-level partition can be performed in multiple ways, for example at the table level or at the record level, to obtain the first mapping relationship; wherein:
in a first mapping relationship obtained by partitioning at the table level, one data table corresponds to one group;
in a first mapping relationship obtained by partitioning at the record level, at least one row of a data table corresponds to one group, and one data table corresponds to at least one group.
It can be understood that partitioning at the table level means that an entire data table is mapped to one group, so the whole table corresponds to only one group. Partitioning at the record level means that one or more rows of a data table correspond to one group, so one data table may correspond to multiple groups. In particular, when record-level partitioning is adopted, the data involved in a given data update operation may be distributed over different incremental servers.
When data is written, the data to be written may be written to different incremental servers according to the data partitioning rule. When a data query is performed, query operations may need to be executed on different incremental servers, and the query results returned by the incremental servers are merged into the final query result. The incremental data stored on the respective incremental servers do not duplicate each other, i.e., they do not overlap. It can be seen that, compared with the system architecture of a single incremental server, the system architecture of multiple incremental servers introduces a data partitioning problem, and the embodiment of the present invention adopts a two-level data partitioning rule to solve it.
In the second-level partition, the groups can be mapped to incremental servers according to factors such as load. Because the data partitioning rule does not change frequently once determined, the load across the incremental servers may become unbalanced; the second mapping relationship can be adjusted appropriately to rebalance the data.
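A minimal sketch of such a two-level rule is given below in C++; the hash-bucket scheme for record-level routing, the PartitionRule structure and the Route() helper are illustrative assumptions, since the patent does not specify how record-level keys are mapped to groups.

```cpp
// Illustrative sketch of the two-level data partitioning rule described above.
// All names are hypothetical; the real rule is maintained by the AS server.
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>

struct PartitionRule {
  // First level: table-level mapping (whole table -> group) ...
  std::unordered_map<std::string, int> table_to_group;
  // ... or record-level mapping (table + row-key hash bucket -> group).
  std::unordered_map<std::string, std::unordered_map<uint32_t, int>> record_to_group;
  // Second level: group -> incremental server, adjustable for load balancing.
  std::unordered_map<int, std::string> group_to_ts;

  std::string Route(const std::string& table, const std::string& row_key) const {
    int group;
    if (auto it = table_to_group.find(table); it != table_to_group.end()) {
      group = it->second;                               // table-level partition
    } else {
      uint32_t bucket = std::hash<std::string>{}(row_key) % 16;
      group = record_to_group.at(table).at(bucket);     // record-level partition
    }
    return group_to_ts.at(group);                       // second-level mapping
  }
};

int main() {
  PartitionRule rule;
  rule.table_to_group["accounts"] = 0;
  for (uint32_t b = 0; b < 16; ++b) rule.record_to_group["orders"][b] = 1 + b % 2;
  rule.group_to_ts = {{0, "TS-1"}, {1, "TS-2"}, {2, "TS-3"}};
  std::cout << rule.Route("accounts", "u42") << "\n";   // always TS-1 (table-level)
  std::cout << rule.Route("orders", "o1001") << "\n";   // TS-2 or TS-3 depending on the bucket
}
```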
It can be understood that the system architecture adopted in the embodiment of the present invention is a distributed system architecture. Reasonable data partitioning can effectively reduce the proportion of distributed transactions, and because the incremental data is partitioned across a plurality of incremental servers, reasonable and even partitioning also improves the efficiency of data processing.
In the above system architecture, the fusion server is configured to: when a data update request sent by the client is received, acquire a data partitioning rule; determine the incremental servers involved in the data update according to the data information involved in the request and the data partitioning rule; if N incremental servers are involved, split the physical update plan corresponding to the data update into N update sub-plans; and control the N involved incremental servers to execute their respective update sub-plans to complete the data update; wherein the data partitioning rule includes a first mapping relationship between a plurality of data tables and a plurality of groups, and a second mapping relationship between the plurality of groups and the plurality of incremental servers, and N is a positive integer greater than 1.
For example, referring to fig. 3a and 3b, the data partitioning rule is stored on the AS server (i.e., AdminServer, the server through which YaoBase manages data distribution). When the fusion server SS receives a data write request sent by a client, it obtains the data partitioning rule from the AS. Suppose the first mapping relationship of the rule is partitioned at the record level, and three incremental servers are determined according to the data update request and the data partitioning rule; that is, the data to be written needs to be written to three incremental servers. To implement the write operation, the physical update plan corresponding to the write operation is determined first; since the write operation must be executed by three incremental servers, the physical update plan is split into three update sub-plans, and the three incremental servers are then controlled to execute their corresponding update sub-plans, so that each performs its own data write and the data to be written is written to the three incremental servers in a distributed manner.
When a data update request is received, the fusion server can tell which data needs to be written or read, and can therefore determine, according to the data partitioning rule, which incremental servers the data should be stored on or fetched from.
Of course, if only one incremental server is involved, the physical update plan does not need to be split; it is sent directly to that incremental server, which executes the whole plan.
It can be understood that, for an update statement that involves multiple sets of data, such as "insert test values (1,2), (3,4)", if the data partitioning rule is partitioned at the table level, all update data in the statement is sent to the same incremental server for execution, whereas if the rule is partitioned at the record level, the update data may be sent to different incremental servers. Therefore, before the update statement is executed, the incremental servers that need to perform the update operation are computed from the data involved via the data partitioning rule. For record-level partitioning, it is determined whether the data involved is divided across different incremental servers; if so, the physical update plan needs to be split into multiple update sub-plans.
It can be understood that, if the fusion server receives a data query request, the process is similar, specifically: acquire the data partitioning rule; determine the incremental servers involved in the data query according to the data information involved in the query request and the data partitioning rule; if N incremental servers are involved, with N greater than 1, split the physical query plan corresponding to the data query into N query sub-plans; and control the N involved incremental servers to execute their corresponding query sub-plans to complete the data query.
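The splitting of one physical plan into per-server sub-plans can be sketched as follows; the Mutation/UpdatePlan structures and the routing callback are hypothetical stand-ins for the fusion server's internal plan representation.

```cpp
// Sketch of splitting one physical update plan into per-server update sub-plans,
// assuming a record-level partition. Names and structures are illustrative only.
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Mutation { std::string table, row_key, value; };
struct UpdatePlan { std::vector<Mutation> mutations; };

// routing callback: (table, row_key) -> incremental server id
using RouteFn = std::string (*)(const std::string&, const std::string&);

std::map<std::string, UpdatePlan> SplitPlan(const UpdatePlan& plan, RouteFn route) {
  std::map<std::string, UpdatePlan> sub_plans;           // one sub-plan per involved TS
  for (const auto& m : plan.mutations)
    sub_plans[route(m.table, m.row_key)].mutations.push_back(m);
  return sub_plans;
}

std::string DemoRoute(const std::string&, const std::string& key) {
  return (key.back() % 2 == 0) ? "TS-1" : "TS-2";        // toy record-level rule
}

int main() {
  UpdatePlan plan{{{"test", "r1", "(1,2)"}, {"test", "r2", "(3,4)"}}};
  auto subs = SplitPlan(plan, DemoRoute);
  // If only one TS is involved, the plan is sent whole; otherwise N sub-plans are dispatched.
  for (const auto& [ts, sp] : subs)
    std::cout << ts << " gets " << sp.mutations.size() << " mutation(s)\n";
}
```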
In a specific implementation, controlling the involved N incremental servers to execute their respective update sub-plans may specifically include:
determining whether each involved incremental server has already opened a local transaction;
if so, sending the corresponding update sub-plan to that incremental server so that it executes the corresponding update sub-plan;
otherwise, sending a transaction-start flag together with the corresponding update sub-plan to the incremental server, so that the incremental server opens a local transaction, sends the local transaction ID to the fusion server, and then executes the corresponding update sub-plan.
That is, the fusion server first determines whether each involved incremental server has already opened a local transaction. If an incremental server has already opened a local transaction, the fusion server has already recorded the local transaction ID corresponding to that server, and it directly sends the corresponding update sub-plan; the incremental server applies the update operation to the corresponding local transaction according to the local transaction ID, thereby executing the update sub-plan. If no local transaction has been opened, the update sub-plan and a transaction-start flag are sent to the incremental server together, so that the incremental server opens a local transaction, returns the local transaction ID to the fusion server for recording, and then executes the corresponding update sub-plan.
For example, referring to fig. 4, when the fusion server receives a data update request, it first determines whether the data partitioning rule is partitioned at the table level. If not, it determines whether the physical update plan needs to be split. If splitting is needed, it splits the physical update plan into multiple update sub-plans according to the data partitioning rule, and then determines whether a local transaction has been opened on each involved incremental server. If a local transaction has been opened, the update sub-plan is sent directly to that incremental server. If not, the update sub-plan and the transaction-start flag are sent to the incremental server, which opens the transaction and executes the update sub-plan, and the fusion server records the local transaction ID returned by the incremental server. If the data partitioning rule is partitioned at the table level, or if only one incremental server is involved even under record-level partitioning, the physical update plan does not need to be split and is sent directly to the corresponding incremental server for execution.
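The dispatch decision of fig. 4 can be summarized by the following sketch, in which SessionInfo stands in for the session bookkeeping on the fusion server and SendToTS() is an assumed RPC stub rather than a real YaoBase call.

```cpp
// Minimal sketch of the fusion server's dispatch logic from FIG. 4:
// reuse an already-opened local transaction, or open one with a start flag.
// All types, fields and the SendToTS() call are hypothetical.
#include <iostream>
#include <map>
#include <optional>
#include <string>

struct SubPlan { std::string ops; };
struct TsRequest { bool begin_flag; std::optional<long> local_txn_id; SubPlan plan; };

// assumed RPC stub: returns the TS's local transaction ID when a transaction is opened
long SendToTS(const std::string& ts, const TsRequest& req) {
  std::cout << "send to " << ts << (req.begin_flag ? " [begin]" : "")
            << " ops=" << req.plan.ops << "\n";
  return req.begin_flag ? 1001 : *req.local_txn_id;
}

class SessionInfo {                        // stands in for the session info on the SS
  std::map<std::string, long> opened_;     // TS id -> recorded local transaction ID
 public:
  void Dispatch(const std::string& ts, const SubPlan& plan) {
    auto it = opened_.find(ts);
    if (it != opened_.end()) {             // a local transaction is already open on this TS
      SendToTS(ts, {false, it->second, plan});
    } else {                               // send the start flag, then record the returned ID
      long id = SendToTS(ts, {true, std::nullopt, plan});
      opened_[ts] = id;
    }
  }
};

int main() {
  SessionInfo s;
  s.Dispatch("TS-1", {"insert r1"});   // opens a transaction on TS-1
  s.Dispatch("TS-1", {"insert r2"});   // reuses the recorded local transaction ID
}
```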
The session information between the fusion server and the client is stored in the YaoSQLSessionInfo class, which holds the user information, query result information, generated physical execution plan, transaction state, transaction start time, temporary variables and other information of the session. All session information on the fusion server is managed through the YaoSQLSessionMgr class.
The system architecture supports two ways of opening a transaction: explicit opening and implicit opening. Explicit opening uses begin and commit statements. After a transaction is explicitly opened, the fusion server records the start time and timeout of the transaction and sends a transaction-open command to the incremental server; the incremental server initializes the transaction and returns the transaction ID to the fusion server. The fusion server stores the received transaction ID in the YaoSQLSessionInfo class and then returns success to the client; subsequent update operations are routed to the corresponding incremental server via this transaction ID, so that they execute within the transaction uniquely identified by it. Implicit opening is triggered by setting the environment variable autocommit. After implicit opening, no transaction-open instruction is sent to the incremental server immediately; instead, the value of the environment variable is recorded in the YaoSQLSessionInfo class and success is returned directly to the client. When the first data update statement needs to be executed later, the physical update plan and the transaction-start flag are sent together to the incremental server, which opens the transaction and executes the physical update plan; instead of committing directly, the incremental server returns the transaction ID to the fusion server, which records the transaction ID, start time and timeout. It will be appreciated that these two opening modes apply to the case where only one incremental server is involved.
In practice, an update operation is likely to involve different incremental servers, so the local transaction IDs of multiple incremental servers need to be recorded. To reduce the proportion of distributed transactions, one of the involved incremental servers is selected as the coordinator server and the others serve as participant servers. If explicit opening is used, the involved incremental servers are not yet known, so a transaction-open command cannot be sent directly to a single incremental server; only a transaction-start flag can be set, in the same way as when setting the environment variable. After the fusion server parses the concrete update statement, it computes the involved incremental servers from the data referenced by the statement via the data partitioning rule, sends the split update sub-plans and the transaction-start flag to the corresponding incremental servers, and receives the local transaction IDs they return.
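The difference between the two opening modes can be illustrated as follows; the Session fields and the OpenOnTS() stub are assumptions and only outline when the open command is actually sent to the incremental server.

```cpp
// Sketch contrasting explicit (begin/commit) and implicit (autocommit-driven)
// transaction opening. All names here are hypothetical stand-ins.
#include <iostream>
#include <optional>
#include <string>

long OpenOnTS(const std::string& ts) {          // assumed RPC: the TS initializes a transaction
  std::cout << "open transaction on " << ts << "\n";
  return 42;                                    // local transaction ID returned to the SS
}

struct Session {                                // stands in for the fusion server's session info
  bool explicit_txn = false;                    // set by a BEGIN statement
  bool begin_flag = false;                      // set when autocommit is turned off
  std::optional<long> txn_id;

  void Begin(const std::string& ts) {           // explicit open: the command is sent immediately
    explicit_txn = true;
    txn_id = OpenOnTS(ts);
  }
  void SetAutocommit(bool on) {                 // implicit open: only a flag is recorded locally
    begin_flag = !on;
  }
  void FirstUpdate(const std::string& ts) {     // implicit case: open together with the first update
    if (!txn_id && begin_flag) txn_id = OpenOnTS(ts);
    if (txn_id) std::cout << "update runs in transaction " << *txn_id << "\n";
    else        std::cout << "update auto-committed as a single-statement transaction\n";
  }
};

int main() {
  Session explicit_s;
  explicit_s.Begin("TS-1");            // open command sent right away, ID recorded
  Session implicit_s;
  implicit_s.SetAutocommit(false);     // nothing is sent to the TS yet
  implicit_s.FirstUpdate("TS-2");      // transaction opened together with the first update plan
}
```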
In a practical scenario, each incremental server may execute its associated update sub-plan using a two-phase commit protocol. For this purpose, one of the involved incremental servers is selected as the coordinator server and the involved incremental servers act as participant servers. The coordinator server records the list formed by the identifiers of the participant servers and the local transaction ID corresponding to each participant server; each participant server records the identifier of the coordinator server. Because the coordinator server records the list of participant servers and their local transaction IDs, it can send related information to each participant server, and because each participant server records the coordinator's information, the participant servers can likewise send information to the coordinator server.
In a specific implementation, the coordinator server is configured to: send a preprocessing request to each participant server;
the participant server is configured to: after receiving the preprocessing request, execute the corresponding local transaction according to its own update sub-plan, and send confirmation information to the coordinator server after the transaction has been executed;
the coordinator server is further configured to: after receiving the confirmation information returned by all the participant servers, send a transaction commit notification to all the participant servers;
the participant server is further configured to: when a transaction commit notification sent by the coordinator server is received, send the local transaction commit information corresponding to the participant server to the fusion server.
That is, the coordinator server sends a preprocessing request to each participant server; after receiving it, each participant server executes its corresponding local transaction according to its own update sub-plan and, once execution is complete, does not commit but sends confirmation information to the coordinator server. After receiving the confirmation information from all participants, the coordinator server sends a transaction commit notification to all participant servers. When the participant servers receive the commit notification, each commits the transaction locally, in the same way as the standalone transaction described below.
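The message exchange described above is sketched below with plain objects standing in for the coordinator and participant servers; the actual protocol runs over RPC and also persists its decision in the system table, which the comments only hint at.

```cpp
// Toy sketch of the two-phase commit exchange described above. Real YaoBase uses
// RPCs between servers; here coordinator and participants are plain objects.
#include <iostream>
#include <string>
#include <vector>

struct Participant {
  std::string id;
  bool Prepare() {                     // execute the local update sub-plan, write a prepare log
    std::cout << id << ": prepared\n";
    return true;                       // confirmation back to the coordinator, no commit yet
  }
  void Finish(bool commit) {           // commit makes the changes visible; otherwise roll back
    std::cout << id << (commit ? ": committed\n" : ": rolled back\n");
  }
};

struct Coordinator {
  std::vector<Participant*> participants;
  void Run() {
    bool all_ok = true;
    for (auto* p : participants) all_ok &= p->Prepare();   // phase 1: prepare requests
    // phase 2: the decision (commit or rollback) is first recorded in the system table,
    // then broadcast to every participant.
    std::cout << "coordinator decision: " << (all_ok ? "commit" : "rollback") << "\n";
    for (auto* p : participants) p->Finish(all_ok);
  }
};

int main() {
  Participant a{"TS-2"}, b{"TS-3"};
  Coordinator c{{&a, &b}};
  c.Run();
}
```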
When an update operation involves only one incremental server, the incremental server's processing logic for the transaction is roughly as follows. Referring to fig. 5, after the incremental server receives the task sent by the fusion server, the transaction executor TransExecutor on the incremental server dispatches the task to the TransHandlePool thread pool, whose queue temporarily stores transactions waiting to be processed. The TransHandlePool then allocates a thread, opens a transaction, initializes the transaction lock, and updates the memory table that stores the incremental data. If the incremental data recorded for a row reaches a certain amount, it is compressed. After this processing, the task is handed to the TransCommitThread thread, which handles transaction commits: it computes the checksum of the changed portion of the transaction to ensure the change is correct, records the change as a log entry, and places it in the flush_queue. The transaction module RWSessionCtx, which manages the read-write session context, is set to the FROZEN state; because the transaction is finished but its log may not yet have been flushed, the FROZEN state prevents the timeout thread from killing transaction tasks whose logs have already been written. A separate log synchronization thread synchronizes the logs in the flush_queue to the standby machine; when it is idle or the number of synchronized logs reaches a certain amount, the handle_flush_log thread responsible for flushing logs flushes the synchronized logs, and the current transaction is then handed to the CommitEndHandlePool thread pool for final processing. CommitEndHandlePool makes the changes of the transaction effective, releases the locks held by the transaction, and finally returns the execution result.
To improve the processing efficiency of distributed transactions and reduce their influence on single-machine transactions, the processing in the two-phase commit flow uses threads from a separate thread pool, DistributedTransHandlePool. In distributed transaction processing, when the coordinator server sends a preprocessing request, i.e., a prepare command, to a participant server, the participant's handling of the prepare command differs slightly from the flow shown in fig. 5: the TransExecutor pushes the task to the DistributedTransHandlePool thread pool for processing, but the prepare log is still written by the single TransCommitThread thread, which guarantees the ordering of log writes. After the log is written, a prepare-complete message is returned directly to the coordinator server; the task is not processed by the CommitEndHandlePool thread pool and the changes do not take effect, because at this point it is not yet known whether the transaction will commit. When the coordinator server has received the prepare-success messages, i.e., the confirmation messages, from all participant servers, it records the transaction state in the system table as commit; if it has not received confirmations from all participant servers after the timeout, it records the transaction state as rollback. Before recording the transaction state in the system table, the memory table version information must be checked to ensure that the transaction being recorded and the currently executing transaction refer to the same version; if the versions differ, a rollback record is written to the system table. The coordinator server then sends a transaction commit command or a rollback command to the participant servers according to the transaction state recorded in the system table. After receiving the commit or rollback command, a participant server records the final state of the transaction, rolls back the local transaction or makes its modifications effective through the CommitEndHandlePool thread pool, and releases the relevant locks. Since the isolation level is read committed, to guarantee consistency within a read-write transaction, read operations must take explicit locks using a select ... for update statement.
According to the two-phase commit protocol, in the first phase the coordinator server sends a preprocessing request (i.e., a prepare instruction) to the participant servers; in the second phase, the coordinator server decides whether to send a transaction commit notification or a rollback notification to the participant servers based on the transaction state recorded in the system table. In the first phase, each participant server also writes its prepare log; in the second phase, each participant server also writes a commit log or a rollback log. It should be noted that, in the embodiment of the present invention, the local transaction ID of the coordinator is recorded in the prepare log and used as the ID of the current update transaction. When the commit log or rollback log is written later, the local transaction ID of the coordinator is written again. The commit or abort operation can only be performed after the commit log or rollback log has been written.
In a specific implementation, if an incremental server has written a prepare log, a prepared-state flag is added to the local transaction state. A local transaction that has written a prepare log cannot be killed directly after it times out; the transaction state must first be obtained by looking up the system table or querying the coordinator server, and the transaction can only be killed after its validation fails.
In addition, after a transaction is opened, the memory table version information corresponding to the transaction is added to the information returned to the fusion server. The fusion server uses the version number in this information to verify that every incremental server executes its local transaction on the memory table of the same version, that is, to ensure that all local transactions are executed against the same memory table version.
That is, each incremental server involved in the update is further configured to: return the memory table version information corresponding to its local transaction to the fusion server; correspondingly, the fusion server is further configured to: determine, according to the memory table version information sent by each incremental server, whether the local transactions of the incremental servers are executed against the memory table of the same version.
As can be seen from the above description, if the versions are the same, the local transaction state can be written into the system table; if not, a rollback record is written into the system table.
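The version check can be expressed as a small decision function; the structures below are illustrative and merely show that any mismatch in memory table versions (or a missing confirmation) forces a rollback record.

```cpp
// Sketch of the memory-table version check: the transaction state is only recorded
// as "commit" if every involved TS executed on the same memory table version.
// The structures below are illustrative, not YaoBase's actual metadata.
#include <iostream>
#include <map>
#include <string>

enum class TxnState { kCommit, kRollback };

TxnState DecideState(const std::map<std::string, long>& versions_by_ts, bool all_acked) {
  if (!all_acked || versions_by_ts.empty()) return TxnState::kRollback;
  long expected = versions_by_ts.begin()->second;
  for (const auto& [ts, version] : versions_by_ts)
    if (version != expected) return TxnState::kRollback;  // version mismatch -> rollback
  return TxnState::kCommit;
}

int main() {
  std::map<std::string, long> ok{{"TS-1", 7}, {"TS-2", 7}};
  std::map<std::string, long> bad{{"TS-1", 7}, {"TS-2", 8}};   // a merge happened in between
  std::cout << (DecideState(ok, true) == TxnState::kCommit ? "commit" : "rollback") << "\n";
  std::cout << (DecideState(bad, true) == TxnState::kCommit ? "commit" : "rollback") << "\n";
}
```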
The above describes the execution process of the system architecture under normal conditions, but abnormal conditions may occur in practice, for example a coordinator server timeout, a participant server timeout, or a restart after a crash.
Under abnormal conditions there may be pending transactions, which are mainly caused by node failures, network delays or interruptions during the two-phase commit process. A simple timeout detection mechanism is currently employed to detect pending transactions: a separate timeout detection thread in the incremental server automatically detects and handles timed-out transactions at a fixed interval. At each interval the thread traverses all transaction nodes through the SessionMgr, where the transaction nodes are the incremental servers participating in the update operation. If a transaction has timed out and no log has been written, the related change is aborted and rolled back directly. If the prepare log has been written, the transaction is handled according to whether the node is a coordinator server or a participant server. Fig. 6 shows the general processing flow in the case of a coordinator server timeout.
(1) Processing for a coordinator server timeout:
For a coordinator server timeout, the coordinator server is further configured to perform the following steps:
if the coordinator server times out, detect whether the coordinator server has a pending transaction;
if a pending transaction exists, determine whether log information has been written for the detected pending transaction;
if no log has been written, abort and perform a rollback operation;
if the prepare log has been written, stop receiving confirmation information returned by the participant servers, and determine whether confirmation information has been received from all participant servers;
if confirmation information has been received from all participant servers, write a commit transaction state into the system table, and send a transaction commit notification to each participant server so that each participant server sends its local transaction commit information to the fusion server.
That is, upon a timeout it is determined whether the transaction on the coordinator server has completed; if not, the coordinator server has a pending transaction. When a pending transaction is found, it is determined whether log information has been written for it. If no log has been written, the transaction can be aborted directly and rolled back. If the prepare log has been written, the coordinator server has already carried out the first phase; it then stops receiving confirmation messages returned by participant servers and checks whether the confirmation messages of all participants have been received. If the confirmation messages of all participants have been received, and the coordinator server, as one of the participants, has also finished the first phase, the transaction state written into the system table may be commit, and each participant server is notified so that it commits its local transaction.
Of course, the coordinator server may also be configured to: if confirmation information has not been received from all participant servers, write a rollback transaction state into the system table and send a rollback notification to each participant server so that each participant server sends its rollback information to the fusion server.
That is, if confirmation messages have not been received from all participant servers, the transaction state written into the system table is rollback and the participant servers are notified accordingly.
It can be seen that only after all participant servers confirm that the update sub-plan executed successfully does the coordinator server record the overall transaction state of the current update into the system table. A commit state indicates that every participant server executed its update sub-plan successfully; a rollback state indicates that at least one participant server failed to execute its update sub-plan, or another condition such as inconsistent memory table versions.
In summary, when the coordinator server times out, it first stops receiving confirmation messages returned by the other participants and then checks whether all confirmation messages have been received. If the confirmation messages of all participant servers have been received, the transaction state is recorded in the system table as commit; otherwise it is recorded as rollback. Because the memory table version is also validated before writing to the system table, if the recorded transaction state and the transaction's update portion do not refer to the same version, the transaction state is recorded as rollback. Finally, a commit or rollback command is sent to all participant servers according to the record in the system table. In this process, the coordinator does not first check whether a record already exists in the system table, since that would add two network communications. To ensure that records written multiple times are identical, the coordinator first stops receiving confirmation messages from the participant servers, thereby avoiding the inconsistent situation in which not all replies had been received at one moment but all replies are received the next.
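The coordinator-side timeout handling can be condensed into the following sketch, where the PendingTxn flags and helper names are assumptions that mirror the steps above rather than the real timeout thread.

```cpp
// Sketch of the coordinator-side timeout handling in FIG. 6. The flags below are
// placeholders for the behaviour described in the text, not real YaoBase state.
#include <iostream>

struct PendingTxn {
  bool prepare_log_written = false;
  bool all_acks_received = false;
};

enum class Decision { kCommit, kRollback };

Decision HandleCoordinatorTimeout(const PendingTxn& txn) {
  if (!txn.prepare_log_written) {
    // No log yet: the transaction can be aborted and rolled back directly.
    return Decision::kRollback;
  }
  // A prepare log exists: stop accepting further confirmations, then decide on
  // what has arrived. The decision is written to the system table before
  // notifying the participants.
  return txn.all_acks_received ? Decision::kCommit : Decision::kRollback;
}

int main() {
  PendingTxn no_log;                                  // timed out before writing any log
  PendingTxn prepared{true, true};                    // prepared and every participant confirmed
  PendingTxn partial{true, false};                    // prepared but confirmations are missing
  auto show = [](Decision d) {
    std::cout << (d == Decision::kCommit ? "commit" : "rollback") << "\n";
  };
  show(HandleCoordinatorTimeout(no_log));
  show(HandleCoordinatorTimeout(prepared));
  show(HandleCoordinatorTimeout(partial));
}
```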
(2) Referring to fig. 7, processing for a participant server timeout:
The participant server is further configured to perform the following steps:
if the participant server times out, or after the local log information has been fully replayed following a restart after a crash, detect whether the participant server has a pending transaction;
if there is a pending transaction, query the transaction state of the pending transaction from the system table;
if the corresponding transaction state is found, perform the corresponding processing according to the queried transaction state;
if the corresponding transaction state is not found, determine whether the coordinator server is in a normal state;
if it is in a normal state, wait for the next timeout processing;
if it is in an abnormal state, check again whether the corresponding transaction state is recorded in the system table;
if the corresponding transaction state is not recorded, write a rollback transaction state into the system table and finish the corresponding local transaction;
if the corresponding transaction state is recorded, perform the corresponding processing according to the queried transaction state.
That is, when a participant server times out, it detects whether its local transaction is complete; if not, a pending transaction is considered to exist. For the pending transaction, the overall transaction state of the update operation is queried from the system table. If the transaction state can be found, the transaction is committed or rolled back according to the queried state. If the transaction state is not found, the participant checks whether the coordinator server is normal; if so, it waits for the next timeout processing, during which a commit or rollback command sent by the coordinator server may arrive and can then be executed. If the coordinator server is not normal, the participant checks again whether the transaction state is recorded in the system table; if not, it writes a rollback into the system table, and if a transaction state exists at this point, it commits or rolls back according to that state.
It can be seen that in this process the coordinator's record takes precedence, and if the coordinator is absent, a rollback of the whole transaction can be written into the system table, which avoids re-electing a coordinator.
The participant server may perform the above steps not only upon a timeout but also after the local log information has been replayed following a restart after a crash.
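A compact sketch of this decision flow is given below; the SystemTable stand-in and the back-to-back lookups compress the "query, wait, re-check" sequence described above, and all names are hypothetical.

```cpp
// Sketch of the participant-side timeout handling in FIG. 7, using a hypothetical
// in-memory SystemTable in place of the real system table and coordinator RPCs.
#include <iostream>
#include <map>
#include <optional>
#include <string>

enum class TxnState { kCommit, kRollback };
enum class Action { kCommit, kRollback, kWaitForNextTimeout };

struct SystemTable {                                  // keyed by the distributed transaction id
  std::map<std::string, TxnState> rows;
  std::optional<TxnState> Lookup(const std::string& txn) const {
    auto it = rows.find(txn);
    return it == rows.end() ? std::nullopt : std::optional<TxnState>(it->second);
  }
};

Action HandleParticipantTimeout(const std::string& txn, bool coordinator_alive, SystemTable& table) {
  if (auto st = table.Lookup(txn))                    // the coordinator already recorded a decision
    return *st == TxnState::kCommit ? Action::kCommit : Action::kRollback;
  if (coordinator_alive)                              // coordinator may still send commit/rollback
    return Action::kWaitForNextTimeout;
  if (auto st = table.Lookup(txn))                    // coordinator down: re-check the system table
    return *st == TxnState::kCommit ? Action::kCommit : Action::kRollback;
  table.rows[txn] = TxnState::kRollback;              // nothing recorded: write rollback ourselves
  return Action::kRollback;
}

const char* Name(Action a) {
  switch (a) {
    case Action::kCommit:   return "commit";
    case Action::kRollback: return "rollback";
    default:                return "wait for next timeout";
  }
}

int main() {
  SystemTable table{{{"t1", TxnState::kCommit}}};
  std::cout << Name(HandleParticipantTimeout("t1", true, table)) << "\n";   // commit
  std::cout << Name(HandleParticipantTimeout("t2", true, table)) << "\n";   // wait
  std::cout << Name(HandleParticipantTimeout("t2", false, table)) << "\n";  // rollback, now recorded
}
```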
For the situation after a restart following a crash, the processing may also be as follows:
any incremental server may also be configured to: after restarting from a crash, replay the local log information; if a prepare log is encountered during replay, perform the corresponding update operation according to the prepare log; if a commit log appears after the prepare log, commit the transaction locally; if a rollback log appears after the prepare log, roll back the transaction locally.
That is, if a prepare log appears during replay, the corresponding update operation is performed according to the prepare log, but no commit is performed. When a commit log or a rollback log appears after the prepare log, the transaction is committed or rolled back. After all logs have been replayed, if unmatched prepare logs remain, each such transaction can be handled with the participant-server timeout flow of step (2) above.
Accordingly, when replaying logs, the incremental server uses multi-threaded replay and single-threaded commit. For a distributed transaction, the log is divided into two parts. When a prepare log is replayed, only the related data is written into the uncommitted linked list; the subsequent commit portion is not performed, and the log information is kept for later processing. When the corresponding commit log or rollback log is replayed, the corresponding commit or abort operation is performed. If unmatched prepare logs still exist after replay is complete, the pending-transaction handling after log replay is entered.
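The replay logic for distributed-transaction logs can be sketched as follows; the log record format is invented for illustration, and the sketch only shows how unmatched prepare records end up as pending transactions.

```cpp
// Sketch of replaying operation logs after a restart: prepare records are parked
// until a matching commit/rollback record arrives; leftovers become pending
// transactions handled by the participant-timeout flow. The log format is invented.
#include <iostream>
#include <set>
#include <vector>

struct LogRecord { enum Kind { kPrepare, kCommit, kRollback } kind; long txn_id; };

void Replay(const std::vector<LogRecord>& log) {
  std::set<long> uncommitted;                         // prepared but not yet decided
  for (const auto& rec : log) {
    switch (rec.kind) {
      case LogRecord::kPrepare:  uncommitted.insert(rec.txn_id); break;
      case LogRecord::kCommit:   uncommitted.erase(rec.txn_id);
                                 std::cout << rec.txn_id << " committed\n"; break;
      case LogRecord::kRollback: uncommitted.erase(rec.txn_id);
                                 std::cout << rec.txn_id << " rolled back\n"; break;
    }
  }
  for (long id : uncommitted)                         // unmatched prepare logs remain pending
    std::cout << id << " pending -> resolve via system table / coordinator\n";
}

int main() {
  Replay({{LogRecord::kPrepare, 1}, {LogRecord::kCommit, 1}, {LogRecord::kPrepare, 2}});
}
```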
When a version is frozen, which is equivalent to establishing a log replay point, the processing of new transactions can be blocked and the transactions in execution can be ended quickly. Ending the executing transactions at this point is essentially the same as the timeout processing, except that no timeout check is made: any transaction that has not written a log is aborted directly, while transactions that have written operation logs are handled exactly as by the timeout thread, and distributed transactions are handled in the same way.
The system table mentioned above is set up specifically for distributed transactions. Under abnormal conditions a participant server needs to obtain the transaction state, so the state of a distributed transaction must be recorded globally. To facilitate write and read operations by the coordinator server and the participant servers of a distributed transaction, the final state of the distributed transaction is recorded in a new system table. The brief structural design of the system table is shown in table 1 below:
TABLE 1
Field         Type            Nullable (0 = not nullable)   Default
start_time    int             0                             null
server_ip     varchar(32)     0                             null
server_port   int             0                             null
commit_stat   bool            0                             null
group_info    varchar(1024)   0                             null
As shown in table 1, transaction states are recorded in the system table one record per transaction. The timestamp at which the transaction started executing on the coordinator server, the IP address of the coordinator server and its port together form the joint primary key of a record; the commit_stat field marks whether the transaction committed or rolled back; and group_info identifies the groups involved in the distributed transaction, since groups may need to be migrated later to reduce the proportion of distributed transactions.
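Expressed in C++, one record of this system table might look like the following sketch; the field names follow table 1, while the struct layout and the joint-primary-key helper are illustrative assumptions.

```cpp
// Sketch of one record of the distributed-transaction system table from Table 1.
// Field names follow the table; the struct and key layout are illustrative only.
#include <cstdint>
#include <iostream>
#include <string>
#include <tuple>

struct DistTxnStateRecord {
  int64_t     start_time;    // timestamp of the transaction start on the coordinator
  std::string server_ip;     // coordinator IP,   varchar(32)
  int32_t     server_port;   // coordinator port
  bool        commit_stat;   // true = commit, false = rollback
  std::string group_info;    // groups involved,  varchar(1024)

  // (start_time, server_ip, server_port) acts as the joint primary key.
  auto PrimaryKey() const { return std::tie(start_time, server_ip, server_port); }
};

int main() {
  DistTxnStateRecord rec{1654500000123, "10.0.0.12", 2600, true, "g1,g3"};
  std::cout << "txn@" << rec.start_time << " " << rec.server_ip << ":" << rec.server_port
            << " -> " << (rec.commit_stat ? "commit" : "rollback") << "\n";
}
```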
It can be understood that, in the embodiment of the present invention, the distributed transaction processing system includes a fusion server and a plurality of incremental servers, each incremental server stores incremental data, and the incremental data stored by the incremental servers do not overlap. When the fusion server receives a data update request sent by a client, it acquires the data partitioning rule; determines the incremental servers involved in the data update according to the data information involved in the request and the data partitioning rule; if multiple incremental servers are involved, splits the physical update plan corresponding to the data update into multiple update sub-plans; and controls the involved incremental servers to execute their corresponding update sub-plans to complete the data update. Because the embodiment of the invention is provided with a plurality of incremental servers, data can be partitioned among them in an orderly manner through the data partitioning rule. When highly concurrent read-write transactions arrive together, the single-point load on any single incremental server is reduced, the processing efficiency of incremental data is improved, and the overall performance of the distributed database is further improved.
The following experiments were performed for the distributed processing system.
The experimental environment is as follows: the YaoBase servers are deployed as a cluster, each server consisting of a 1 TB SSD, 256 GB of memory, a 64-core CPU and a network card. The operating system of each server is Kylin Linux Advanced Server release V10 (Sword). The configuration information is shown in table 2 below:
TABLE 2
Six servers, node1 to node6 in table 2, provide database services and form a YaoBase distributed cluster. Distributed clusters with three architectures are built: architecture I is the single-incremental-server architecture, while architectures II and III are distributed architectures with 3 and 5 incremental servers respectively; node7 serves as an independent test node that runs the test program. The role assignment of each node under the three architectures is shown in table 3 below.
TABLE 3
Experimental data: sysbench is adopted as the benchmark, and the relevant experimental parameters are set as follows. Test table size: table_size = 1000000 (1,000,000 rows). Number of test tables: tables = 30 (the data volume is therefore estimated as: 10,000 rows ≈ 2.4 MB, so 100 × 30 × 2.4 MB ≈ 7 GB). Number of test threads: 32/64/128/256/512/1024. Report interval: report-interval = 10 (results are reported every 10 seconds). Test duration: time = 60 (60 seconds per run).
Experiment one
Experiment one performs a non-transactional stress test in a read-only (read_only) scenario on the YaoBase clusters under the three architectures using the sysbench test tool. As the number of threads increases, the TPS (transactions per second), QPS (queries per second) and latency of YaoBase under the three architectures are shown in table 4 below.
TABLE 4
Experimental analysis: the results show that with the multi-TS architecture, under the same number of threads the TPS and QPS of YaoBase are greatly improved and the latency is greatly reduced, so performance in the non-transactional scenario improves markedly. Because the single TS is transformed into a multi-TS architecture, the incremental data is divided among different TSs and multiple TSs can provide read service simultaneously, which removes the single-point bottleneck of the single TS and clearly improves the read-write performance of the system.
Experiment two
Experiment two performs a non-transactional stress test in a write-only (write_only) scenario on the YaoBase clusters under the three architectures using the sysbench test tool. As the number of threads increases, the TPS, QPS and latency of YaoBase under the three architectures are shown in table 5 below.
TABLE 5
Experimental analysis: the results show that with the multi-TS architecture, under the same number of threads the TPS and QPS of YaoBase decrease to some extent and the latency increases. This is because, once the single TS is transformed into a multi-TS architecture and the incremental data is divided among different TSs, the extreme write-only scenario generates a large number of distributed transactions, and compared with the single-TS architecture, processing distributed transactions incurs extra costs such as additional network transmission. Although the data partitioning strategy reduces distributed transactions as much as possible, multi-node distributed transactions are unavoidable, so the performance of the distributed YaoBase in the write-only scenario is somewhat lower than that of the single TS.
Experiment three
Experiment three performs a non-transactional stress test in a mixed read-write (read_write) scenario on the YaoBase clusters under the three architectures using the sysbench test tool. As the number of threads increases, the TPS, QPS and latency of YaoBase under the three architectures are shown in table 6 below.
TABLE 6 (TPS, QPS, and latency of YaoBase under the three architectures in the read-write mixed scenario; provided as an image in the original publication)
Experimental analysis: the results show that, with the multi-TS architecture, for the same number of threads the TPS and QPS of YaoBase increase substantially and the latency decreases markedly, so performance in the read-write mixed scenario, which is closer to real application workloads, improves significantly. Because the single TS is transformed into a multi-TS architecture, incremental data is divided among different TSs and multiple TSs can serve reads at the same time; the single-point bottleneck of the single TS is removed and the read/write performance of the system improves noticeably.
The above experiments show that the multi-TS transaction strategy provided by the invention improves the write efficiency of incremental data by partitioning the data across multiple incremental servers in an orderly way, thereby improving the overall performance of the distributed database.
In a second aspect, an embodiment of the present invention provides a distributed transaction processing method, which is implemented based on the distributed transaction processing system provided in the first aspect, and the method includes:
the fusion server acquires a data partitioning rule when receiving a data update request sent by the client; determines the incremental servers involved in the data update according to the data information involved in the request and the data partitioning rule; if the number of involved incremental servers is N, splits the physical update plan corresponding to the data update into N update sub-plans; and controls the N involved incremental servers to execute their respective update sub-plans to carry out the data update; wherein the data partitioning rule includes a first mapping relationship between a plurality of data tables and a plurality of groups, and a second mapping relationship between the plurality of groups and the plurality of incremental servers, and N is a positive integer greater than 1.
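The routing step can be illustrated with a short sketch. The names below (UpdateOperation, PartitionRule, the table and server identifiers) are invented for the example and are not identifiers from the actual implementation; the sketch only shows how a physical update plan could be split into per-server sub-plans using the two mapping relationships.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class UpdateOperation:
    table: str        # target data table
    row_key: str      # primary key of the affected row
    new_values: dict  # column -> new value

class PartitionRule:
    """Hypothetical holder for the two mapping relationships of the partitioning rule."""
    def __init__(self, table_to_group, group_to_server):
        self.table_to_group = table_to_group    # first mapping: data table -> group
        self.group_to_server = group_to_server  # second mapping: group -> incremental server

    def server_for(self, op):
        return self.group_to_server[self.table_to_group[op.table]]

def split_update_plan(ops, rule):
    """Group the operations of one physical update plan by target incremental server;
    each group is one update sub-plan, so N equals the number of keys in the result."""
    sub_plans = defaultdict(list)
    for op in ops:
        sub_plans[rule.server_for(op)].append(op)
    return dict(sub_plans)

# Example with invented table/group/server names: two operations on two servers -> N = 2.
rule = PartitionRule({"orders": "g1", "users": "g2"}, {"g1": "TS-1", "g2": "TS-2"})
ops = [UpdateOperation("orders", "17", {"status": "paid"}),
       UpdateOperation("users", "8", {"last_login": "2022-06-08"})]
sub_plans = split_update_plan(ops, rule)   # {"TS-1": [...], "TS-2": [...]}
```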
In some embodiments, the fusion server's controlling of the involved N incremental servers to execute their respective update sub-plans specifically includes (a sketch of this dispatch follows the list below):
determining whether each incremental server involved has already opened a local transaction;
if so, sending the corresponding updating sub-plan to the incremental server so that the incremental server executes the corresponding updating sub-plan;
otherwise, a transaction-open flag and the corresponding update sub-plan are sent to the incremental server, so that the incremental server returns its local transaction ID to the fusion server after opening the transaction, and then executes the corresponding update sub-plan.
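A minimal sketch of this dispatch step, assuming a hypothetical session object that remembers which incremental servers have already opened a local transaction; the RPC helper names (send_sub_plan, begin_and_send_sub_plan) are placeholders, not identifiers from the actual implementation.

```python
def dispatch_sub_plans(session, sub_plans, rpc):
    """Send every update sub-plan to its incremental server, opening a local
    transaction there first if this session has not opened one yet."""
    for server_id, sub_plan in sub_plans.items():
        if server_id in session.local_txn_ids:
            # A local transaction is already open on this server: just send the sub-plan.
            rpc.send_sub_plan(server_id, session.local_txn_ids[server_id], sub_plan)
        else:
            # No local transaction yet: send a transaction-open flag together with the
            # sub-plan; the server opens the transaction, returns its local transaction ID,
            # and then executes the sub-plan.
            session.local_txn_ids[server_id] = rpc.begin_and_send_sub_plan(server_id, sub_plan)
```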
In some embodiments, the first mapping relationship is obtained by dividing at a table level or at a record level; wherein (illustrative mapping data follows the list below):
in a first mapping relation obtained by dividing according to the table level, one data table corresponds to one group;
in a first mapping relation obtained by dividing according to record levels, at least one row in one data table corresponds to one group, and one data table corresponds to at least one group.
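The difference between the two division granularities can be illustrated with some invented mapping data (the table names, group names, and the row-range rule below are examples, not values from the actual system).

```python
# Table-level division: one data table corresponds to exactly one group.
table_level_first_mapping = {
    "orders":   "group_1",
    "users":    "group_2",
    "payments": "group_3",
}

# Record-level division: rows of one data table may fall into different groups
# (here by a range over the primary key), so one table corresponds to one or more groups.
def record_level_group(table, row_key):
    if table == "orders":
        return "group_1" if int(row_key) < 500000 else "group_4"
    return table_level_first_mapping[table]

# Second mapping relationship: group -> incremental server.
second_mapping = {
    "group_1": "TS-1",
    "group_2": "TS-2",
    "group_3": "TS-3",
    "group_4": "TS-1",
}
```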
In some embodiments, one of the plurality of incremental servers serves as a coordinator server, and each involved incremental server serves as a participant server; the coordinator server records list information consisting of the identifier of each participant server and the local transaction ID corresponding to each participant server; and each participant server records the identifier of the coordinator server;
correspondingly, the process of the incremental servers executing the update sub-plans comprises the following steps (a sketch of this exchange follows the list below):
the coordinator server sends a preprocessing request to each participant server;
after receiving the preprocessing request, the participant server executes a corresponding local transaction according to the updating sub-plan corresponding to the participant server, and sends confirmation information to the coordinator server after the transaction is executed;
after receiving the confirmation information returned by all the participant servers, the coordinator server sends a transaction submission notice to all the participant servers;
and when receiving the transaction submission notification sent by the coordinator server, the participant server sends local transaction submission information corresponding to the participant server to the fusion server.
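The exchange above follows the shape of a two-phase commit. The sketch below is a simplified, failure-free illustration; the class and method names (prepare_and_execute, commit, on_local_commit) are placeholders for the messages described in the preceding steps, not real API calls.

```python
class CoordinatorSketch:
    """Failure-free sketch of the prepare/commit exchange between the coordinator
    server and the participant servers."""
    def __init__(self, participants, fusion_server):
        # List information kept by the coordinator: participant identifier -> local transaction ID.
        self.participant_list = {p.server_id: p.local_txn_id for p in participants}
        self.participants = participants
        self.fusion_server = fusion_server

    def run(self):
        # Phase 1: send the preprocessing (prepare) request; each participant executes
        # its local transaction for its update sub-plan and returns a confirmation.
        confirmations = [p.prepare_and_execute() for p in self.participants]

        # Phase 2: only after confirmations from all participants have been received
        # does the coordinator notify every participant to commit; each participant
        # then reports its local transaction commit to the fusion server.
        if all(confirmations):
            for p in self.participants:
                p.commit()
                self.fusion_server.on_local_commit(p.server_id, p.local_txn_id)
```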
In some embodiments, the method further comprises the following steps performed by the coordinator server:
if the coordinator server times out, detecting whether the coordinator server has any pending transaction;
if there is a pending transaction, determining, for the detected pending transaction, whether prepare-log information has been written;
if no prepare log has been written, terminating the transaction and performing a rollback operation;
if the prepare log has been written, stopping waiting for further confirmation information from the participant servers, and determining whether confirmation information has been received from all the participant servers;
and if the confirmation information returned by all the participant servers has been received, writing a committed transaction state into the system table, and sending a transaction commit notification to each participant server so that each participant server sends its local transaction commit information to the fusion server.
In some embodiments, the method further comprises: if the coordinator server has not received the confirmation information returned by all the participant servers, writing a rolled-back transaction state into the system table, and sending a rollback notification to each participant server so that each participant server sends rollback information to the fusion server.
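A compact sketch combining the two preceding embodiments (the coordinator's timeout handling, with both the commit branch and the rollback branch); pending_transactions, the system-table helpers, and the notification calls are illustrative placeholders rather than names from the actual system.

```python
def coordinator_on_timeout(coordinator, system_table):
    """Sketch of the coordinator server's handling of pending transactions on timeout."""
    for txn in coordinator.pending_transactions():
        if not txn.prepare_log_written:
            # No prepare log yet: terminate this transaction and roll back.
            txn.abort_and_rollback()
            continue

        # Prepare log written: stop waiting for further confirmations and check
        # whether confirmations from all participant servers have already arrived.
        coordinator.stop_waiting(txn)
        if txn.all_confirmations_received():
            system_table.write_state(txn.id, "COMMITTED")
            for p in txn.participants:
                p.notify_commit()    # participant then reports its local commit to the fusion server
        else:
            system_table.write_state(txn.id, "ROLLED_BACK")
            for p in txn.participants:
                p.notify_rollback()  # participant then reports the rollback to the fusion server
```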
In some embodiments, the method further comprises the following steps performed by the participant server (a sketch follows the list below):
if the participant server times out, or has finished replaying its local log information after restarting from a crash, detecting whether the participant server has pending transactions;
if there is a pending transaction, then for the pending transaction, querying a transaction state of the pending transaction from the system table;
if the corresponding transaction state is inquired, corresponding processing is carried out according to the inquired transaction state;
if the corresponding transaction state is not inquired, judging whether the coordinator server is in a normal state or not;
if the coordinator server is in a normal state, waiting for the next timeout handling;
if the coordinator server is in an abnormal state, checking again whether the corresponding transaction state is recorded in the system table;
if the corresponding transaction state is not recorded, writing the rolled-back transaction state into the system table, and ending the corresponding local transaction;
and if the corresponding transaction state is recorded, performing corresponding processing according to the inquired transaction state.
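A sketch of the participant-side procedure listed above; query_state, coordinator_alive, apply_state, and end_local_transaction are illustrative placeholders for the operations described in the steps.

```python
def participant_on_timeout_or_restart(participant, system_table):
    """Sketch of how a participant server resolves pending transactions after a
    timeout, or after finishing local log replay following a crash restart."""
    for txn in participant.pending_transactions():
        state = system_table.query_state(txn.id)
        if state is not None:
            txn.apply_state(state)            # handle according to the recorded transaction state
        elif participant.coordinator_alive(txn):
            continue                          # coordinator normal: wait for the next timeout handling
        else:
            # Coordinator abnormal: confirm once more whether a state was recorded meanwhile.
            state = system_table.query_state(txn.id)
            if state is None:
                system_table.write_state(txn.id, "ROLLED_BACK")
                txn.end_local_transaction()
            else:
                txn.apply_state(state)        # handle according to the recorded transaction state
```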
In some embodiments, the method further comprises:
after restarting from a crash, any incremental server replays its local log information; if a prepare log is encountered during replay, the corresponding update operation is performed according to the prepare log; if a commit log appears after the prepare log, the local transaction commit information is sent to the fusion server; and if a rollback log appears after the prepare log, the rollback information of the local transaction is sent to the fusion server.
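The replay rule of this embodiment can be summarized in a few lines; the log-record fields (kind, txn_id, update) are invented names for the sketch.

```python
def replay_local_log(server, fusion_server, log_records):
    """Sketch of crash-restart replay: a prepare log re-applies its update, and the
    outcome reported to the fusion server depends on whether a commit or a rollback
    log follows that prepare log."""
    prepared = set()                            # transaction IDs whose prepare log was replayed
    for rec in log_records:                     # records are replayed in log order
        if rec.kind == "PREPARE":
            server.apply_update(rec.update)     # perform the corresponding update operation
            prepared.add(rec.txn_id)
        elif rec.kind == "COMMIT" and rec.txn_id in prepared:
            fusion_server.on_local_commit(server.id, rec.txn_id)
        elif rec.kind == "ROLLBACK" and rec.txn_id in prepared:
            fusion_server.on_local_rollback(server.id, rec.txn_id)
```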
In some embodiments, the method further comprises:
each incremental server involved in the update returns the version information of the memory table corresponding to its local transaction to the fusion server;
and the fusion server determines, according to the memory-table version information sent by each incremental server, whether the local transactions of the incremental servers were executed against the same version of the memory table (see the sketch below).
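A short sketch of this version consistency check on the fusion-server side; versions_by_server is assumed to map each involved incremental server to the memory-table version it reported for its local transaction.

```python
def memtable_versions_consistent(versions_by_server):
    """True only if every involved incremental server executed its local transaction
    against the same memory-table version."""
    return len(set(versions_by_server.values())) == 1

# Example with invented values: all three servers report version 42 -> consistent.
assert memtable_versions_consistent({"TS-1": 42, "TS-2": 42, "TS-3": 42})
assert not memtable_versions_consistent({"TS-1": 42, "TS-2": 41})
```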
It is to be understood that the method provided by the second aspect corresponds to the system provided by the first aspect, and for the explanation, implementation, example, and beneficial effects of the method, reference may be made to corresponding parts in the first aspect, and details are not described herein.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A distributed transaction processing system, comprising: a client-oriented fusion server and a plurality of incremental servers in communication connection with the fusion server, wherein each incremental server is provided with a plurality of corresponding backup servers and is used for storing incremental data, and the incremental data stored by the incremental servers do not overlap; wherein:
the fusion server is configured to: when a data update request sent by the client is received, acquire a data partitioning rule; determine the incremental servers involved in the data update according to the data information involved in the request and the data partitioning rule; if the number of involved incremental servers is N, split the physical update plan corresponding to the data update into N update sub-plans; and control the N involved incremental servers to execute their respective update sub-plans to carry out the data update; wherein the data partitioning rule includes a first mapping relationship between a plurality of data tables and a plurality of groups, and a second mapping relationship between the plurality of groups and the plurality of incremental servers, and N is a positive integer greater than 1.
2. The system of claim 1,
the fusion server is configured to perform the steps of: controlling the involved N incremental servers to execute respective corresponding update sub-plans, specifically comprising:
determining whether each incremental server involved has already opened a local transaction;
if so, sending the corresponding updating sub-plan to the incremental server so that the incremental server executes the corresponding updating sub-plan;
otherwise, a transaction-open flag and the corresponding update sub-plan are sent to the incremental server, so that the incremental server returns its local transaction ID to the fusion server after opening the transaction, and then executes the corresponding update sub-plan.
3. The system according to claim 1, wherein the first mapping relationship is a mapping relationship divided according to a table level or a record level; wherein:
in a first mapping relation obtained by dividing according to the table level, one data table corresponds to one group;
in a first mapping relation obtained by dividing according to record levels, at least one row in one data table corresponds to one group, and one data table corresponds to at least one group.
4. The system of claim 1, wherein one of the plurality of incremental servers is a coordinator server and each of the involved incremental servers is a participant server; the coordinator server records list information formed by the identification of each participant server and local transaction ID corresponding to each participant server; the identifier of the coordinator server is recorded in each participant server;
the coordinator server is to: sending a preprocessing request to each participant server;
the participant server is to: after receiving the preprocessing request, executing a corresponding local transaction according to the updating sub-plan corresponding to the participant server, and sending confirmation information to the coordinator server after the transaction is executed;
the coordinator server is further configured to: after receiving the confirmation information returned by all the participant servers, sending a transaction submission notice to all the participant servers;
the participant server is further configured to: and when a transaction submission notice sent by the coordinator server is received, sending local transaction submission information corresponding to the participant server to the fusion server.
5. The system of claim 4,
the coordinator server is further configured to perform the following steps:
if the coordinator server times out, detecting whether the coordinator server has any pending transaction;
if there is a pending transaction, determining, for the detected pending transaction, whether prepare-log information has been written;
if no prepare log has been written, terminating the transaction and performing a rollback operation;
if the prepare log has been written, stopping waiting for further confirmation information from the participant servers, and determining whether confirmation information has been received from all the participant servers;
and if the confirmation information returned by all the participant servers has been received, writing a committed transaction state into the system table, and sending a transaction commit notification to each participant server so that each participant server sends its local transaction commit information to the fusion server.
6. The system of claim 5,
the coordinator server is further configured to: if the confirmation information returned by all the participant servers is not received, write the rolled-back transaction state into the system table, and send a rollback notification to each participant server so that each participant server sends the rollback information to the fusion server.
7. The system of claim 5,
the participant server is further configured to perform the following steps:
if the participant server times out, or has finished replaying its local log information after restarting from a crash, detecting whether the participant server has pending transactions;
if there is a pending transaction, then for the pending transaction, querying a transaction state of the pending transaction from the system table;
if the corresponding transaction state is inquired, corresponding processing is carried out according to the inquired transaction state;
if the corresponding transaction state is not inquired, judging whether the coordinator server is in a normal state or not;
if the coordinator server is in a normal state, waiting for the next timeout handling;
if the coordinator server is in an abnormal state, checking again whether the corresponding transaction state is recorded in the system table;
if the corresponding transaction state is not recorded, writing the rolled-back transaction state into the system table, and ending the corresponding local transaction;
and if the corresponding transaction state is recorded, performing corresponding processing according to the inquired transaction state.
8. The system of claim 1,
any one of the incremental servers is further configured to: after restarting from a crash, replay its local log information; if a prepare log is encountered during replay, perform the corresponding update operation according to the prepare log; if a commit log appears after the prepare log, send the local transaction commit information to the fusion server; and if a rollback log appears after the prepare log, send the rollback information of the local transaction to the fusion server.
9. The system of claim 1,
each incremental server involved in the update is further configured to: return the version information of the memory table corresponding to its local transaction to the fusion server;
correspondingly, the fusion server is further configured to: determine, according to the memory-table version information sent by each incremental server, whether the local transactions of the incremental servers were executed against the same version of the memory table.
10. A distributed transaction processing method implemented based on the distributed transaction processing system according to any one of claims 1 to 9, the method comprising:
the fusion server acquires a data partitioning rule when receiving a data update request sent by the client; determines the incremental servers involved in the data update according to the data information involved in the request and the data partitioning rule; if the number of involved incremental servers is N, splits the physical update plan corresponding to the data update into N update sub-plans; and controls the N involved incremental servers to execute their respective update sub-plans to carry out the data update; wherein the data partitioning rule includes a first mapping relationship between a plurality of data tables and a plurality of groups, and a second mapping relationship between the plurality of groups and the plurality of incremental servers, and N is a positive integer greater than 1.
CN202210640508.2A 2022-06-08 2022-06-08 Distributed transaction processing system and method Pending CN115017168A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210640508.2A CN115017168A (en) 2022-06-08 2022-06-08 Distributed transaction processing system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210640508.2A CN115017168A (en) 2022-06-08 2022-06-08 Distributed transaction processing system and method

Publications (1)

Publication Number Publication Date
CN115017168A true CN115017168A (en) 2022-09-06

Family

ID=83073996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210640508.2A Pending CN115017168A (en) 2022-06-08 2022-06-08 Distributed transaction processing system and method

Country Status (1)

Country Link
CN (1) CN115017168A (en)


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20221221

Address after: Room 660, Building 5, No. 16, Zhuantang Science and Technology Economic Block, Xihu District, Hangzhou, Zhejiang, 310012

Applicant after: Yunyao Technology (Zhejiang) Co.,Ltd.

Address before: School of computer science, Northwest University of technology, 127 Youyi West Road, Xi'an, Shaanxi 710129

Applicant before: Northwestern Polytechnical University

TA01 Transfer of patent application right