CN113064768B - Method and device for switching fragment nodes in block chain system - Google Patents


Publication number
CN113064768B
CN113064768B
Authority
CN
China
Prior art keywords
node
shard
cross
transaction
information
Prior art date
Legal status
Active
Application number
CN202110419687.2A
Other languages
Chinese (zh)
Other versions
CN113064768A (en)
Inventor
陈序
徐泉清
郑子彬
闫莺
张辉
Current Assignee
Alipay Hangzhou Information Technology Co Ltd
Ant Blockchain Technology Shanghai Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Ant Blockchain Technology Shanghai Co Ltd
Priority date
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd and Ant Blockchain Technology Shanghai Co Ltd
Priority to CN202110419687.2A
Publication of CN113064768A
Application granted
Publication of CN113064768B
Legal status: Active
Anticipated expiration

Classifications

    • G06F 11/2028: Failover techniques eliminating a faulty processor or activating a spare
    • G06F 11/1675: Temporal synchronisation or re-synchronisation of redundant processing components
    • G06F 16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; distributed database system architectures therefor
    • G06F 16/278: Data partitioning, e.g. horizontal or vertical partitioning
    • G06Q 40/04: Trading; exchange, e.g. stocks, commodities, derivatives or currency exchange
    • G06F 2201/80: Indexing scheme relating to error detection, error correction and monitoring; database-specific techniques

Abstract

An embodiment of the present specification provides a method for switching a shard node in a blockchain system. The blockchain system includes a first shard and at least one second shard, and the first shard includes a first node that became a node of the first shard after a node in the first shard failed and that stores first data acquired after the switch. The method is applied to the first node and includes: receiving online information from a second node; synchronizing the first data to the second node; and, after the synchronization ends, sending synchronization-end information to the second node and to each second shard, so that the second node replaces the first node as a node of the first shard.

Description

Method and device for switching shard nodes in a blockchain system
Technical Field
The embodiments of the present disclosure relate to the field of blockchain technologies, and in particular to a method and an apparatus for switching shard nodes in a blockchain system.
Background
A blockchain is a novel application of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and cryptographic algorithms. In essence, a blockchain is a shared database whose stored data or information is difficult to tamper with and is decentralized. Blockchain technology has great application potential in scenarios such as finance, public welfare, judicial evidence preservation, and trading.
One of the most important factors currently limiting the large-scale application of blockchain technology is performance, chiefly throughput, which is generally measured in transactions per second (TPS). Developers have proposed various schemes to increase blockchain throughput, a process known as "capacity expansion" (scaling). Blockchain sharding is one such scaling scheme. Its basic idea is to divide the nodes of a blockchain network into a plurality of relatively independent sub-networks, each sub-network forming its own blockchain; such a sub-network is called a shard. Through parallel processing by multiple shards, the throughput of the whole network can be improved.
However, current implementations of sharding are, on the one hand, hard-pressed to meet the growing demands of practical applications. On the other hand, because existing blockchain sharding schemes are generally applied to public chains, where the number of participating nodes is large and, absent a malicious attack, the number of faulty nodes is essentially never enough to affect consensus, recovery of faulty nodes is rarely addressed.
Disclosure of Invention
Embodiments of the present disclosure aim to provide a more efficient scheme for switching shard nodes in a blockchain system that contains shards, so that a failed node can be recovered more quickly.
To achieve the above object, one aspect of the present specification provides a method for switching a shard node in a blockchain system, where the blockchain system includes a first shard and at least one second shard, the first shard includes a first node, the first node became a node of the first shard after a node failure in the first shard and stores first data acquired after the switch, and the method is applied to the first node and includes:
receiving online information from a second node;
synchronizing the first data to the second node;
after the synchronization ends, sending synchronization-end information to the second node and to each second shard, so that the second node replaces the first node as a node of the first shard.
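The three claimed steps can be sketched as follows. This is a minimal illustration assuming a simple message-passing interface; all class and method names (`FirstNode`, `PeerStub`, `on_online_info`, `receive_state`, `receive_sync_end`) are hypothetical and are not part of the claims.

```python
class PeerStub:
    """Minimal stand-in for the second node or for a node of a second shard."""
    def __init__(self):
        self.state = None
        self.sync_ended = False
    def receive_state(self, state):
        self.state = state
    def receive_sync_end(self):
        self.sync_ended = True

class FirstNode:
    """The first node: holds the first data acquired after the earlier switch."""
    def __init__(self, first_data, second_shard_nodes):
        self.first_data = first_data                # first data stored after the switch
        self.second_shard_nodes = second_shard_nodes  # one node per second shard
    def on_online_info(self, second_node):
        # Step 1: online information has been received from the second node.
        # Step 2: synchronize the first data to the second node.
        second_node.receive_state(dict(self.first_data))
        # Step 3: after synchronization ends, send synchronization-end
        # information to the second node and to each second shard, so that
        # the second node replaces this node as a node of the first shard.
        second_node.receive_sync_end()
        for node in self.second_shard_nodes:
            node.receive_sync_end()
```

A usage sketch: `FirstNode({"balance": 7}, [peer]).on_online_info(second)` pushes the state to `second` and then notifies both `second` and `peer` that synchronization has ended.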
Another aspect of the present specification provides an apparatus for switching a shard node in a blockchain system, the blockchain system including a first shard and at least one second shard, the first shard including a first node that became a node of the first shard after a node failure in the first shard and that stores first data acquired after the switch, the apparatus being applied to the first node and including:
a receiving unit, configured to receive online information from a second node;
a synchronization unit configured to synchronize the first data to the second node;
and a sending unit, configured to send synchronization-end information to the second node and to each second shard after the synchronization ends, so that the second node replaces the first node as a node of the first shard.
Another aspect of the present specification provides a computer readable storage medium having a computer program stored thereon, which, when executed in a computer, causes the computer to perform any one of the above methods.
Another aspect of the present specification provides a computing device comprising a memory having stored therein executable code, and a processor that, when executing the executable code, implements any of the methods described above.
In the embodiments of the present specification, after a higher-performance second node comes online for a lower-performance first node in a shard, the state information generated during block execution is synchronized to the second node, so that the second node need not re-execute transactions and cross-shard sub-transactions; state recovery and shard-node switching can thus be accomplished more quickly.
Drawings
The embodiments of the present specification may be made clearer by describing them with reference to the accompanying drawings:
fig. 1 shows an architecture diagram of a blockchain system according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating a process in which multiple shards execute blocks in a blockchain system;
fig. 3 is a flowchart illustrating a method for switching a shard node in a blockchain system according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating a possible process of switching a shard node according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram illustrating another possible process of switching a shard node according to an embodiment of the present disclosure;
fig. 6 illustrates an architecture diagram of an apparatus for switching a shard node in a blockchain system according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present specification will be described below with reference to the accompanying drawings.
The information processing method provided by the embodiments of the present specification is mainly used for recovering a failed node in a blockchain, and can be applied to a blockchain system that includes shards. Blockchain sharding is a scheme for scaling a blockchain. Its basic idea is to divide the nodes of a blockchain network into a plurality of relatively independent sub-networks, each sub-network forming its own blockchain; such a sub-network is called a shard. Through parallel processing by multiple shards, the throughput of the whole network can be improved. Specifically, sharding schemes can be divided into three types according to what is sharded: network sharding, transaction sharding, and state sharding. Network sharding is the most basic form: the entire blockchain network is divided into multiple sub-networks, i.e., multiple shards, so that the shards can process different transactions in the network in parallel. Transaction sharding distributes transactions to different shards according to certain rules, which achieves parallel processing while avoiding the double-spending problem; transaction sharding presupposes that network sharding has been performed first. The key to state sharding is to partition the entire storage, so that different shards store the states of different accounts: each shard is responsible for storing only the world state of the subset of accounts belonging to it, rather than the world state of all accounts in the blockchain. State sharding can solve the storage-capacity bottleneck.
Hereinafter, the information processing scheme provided by an embodiment of the present specification will be described taking a blockchain system that includes multiple shards as an example. The scheme is described for state sharding, but it can be understood that it may also be applied to other types of sharding.
Fig. 1 shows an architecture diagram of a blockchain system according to an embodiment of the present disclosure.
As shown in fig. 1, the blockchain system may include a shard 1, a shard 2, and a shard 3, where a plurality of mutually communicating nodes of different shards may form a set, the set including 3 shard nodes respectively belonging to the 3 shards. A shard node may be implemented as any device, server, or device cluster with computing and processing capabilities. For example, the blockchain system may be a consortium (federation) chain system including a federation party A, a federation party B, a federation party C, and a federation party D, each federation party being such a set of mutually trusted nodes belonging to different shards. It will be appreciated that fig. 1 merely illustrates three shards and four federation parties; in fact, a blockchain system may include any number of shards and federation parties. The embodiments of the present disclosure are not limited to consortium chain systems and may be applied to any blockchain system.
From the viewpoint of shards, shard node 1A, shard node 1B, shard node 1C, and shard node 1D all belong to shard 1; shard nodes 2A, 2B, 2C, and 2D all belong to shard 2; and shard nodes 3A, 3B, 3C, and 3D all belong to shard 3. For example, nodes 1A, 1B, and 1C in shard 1 may maintain blockchain 1, nodes 2A, 2B, and 2C in shard 2 may maintain blockchain 2, and nodes 3A, 3B, and 3C in shard 3 may maintain blockchain 3. Consensus is carried out among the participant nodes within each shard.
From the perspective of the federation parties, shard node 1A, shard node 2A, and shard node 3A all belong to federation party A; shard nodes 1B, 2B, and 3B all belong to federation party B; shard nodes 1C, 2C, and 3C all belong to federation party C; and shard nodes 1D, 2D, and 3D all belong to federation party D. Nodes belonging to the same federation party trust one another and may be connected through the federation party's internal network.
Each shard node is connected to a backup node, and a backup node belongs to the same shard and the same federation party as the shard node it is connected to. For example, backup node 1A is connected to shard node 1A and likewise belongs to shard 1 and federation party A; backup node 2B is connected to shard node 2B and belongs to shard 2 and federation party B; backup node 3C is connected to shard node 3C and belongs to shard 3 and federation party C; and so on for the other backup nodes.
In the blockchain system shown in fig. 1, one federation party includes a plurality of nodes respectively belonging to different shards, and these nodes may be connected, for example, through the federation party's intranet. When one shard in the system needs to send a cross-shard sub-transaction to another shard, the cross-shard communication can take place between the two shards' nodes within a single federation party, providing a faster communication path. In addition, because the nodes of one federation party trust each other, no authentication is needed during cross-shard communication, which further improves the processing efficiency of the system.
It is to be understood that the solution provided by the embodiments of the present specification is not limited to the blockchain system shown in fig. 1, but can be applied to any blockchain system that includes shards.
FIG. 2 is a schematic diagram of multiple shards executing blocks in a blockchain system. In fig. 2, the shard nodes included in federation party A are taken as representatives of their respective shards; it is to be understood that the block execution process shown in fig. 2 is not limited to the nodes of one federation party and may be performed by any other nodes of the respective shards.
As shown in fig. 2, during block execution, shard node 1A to shard node 3A simultaneously execute the blocks of their respective shards at the same height. For example, shard node 1A executes the block of shard 1 at height M (i.e., shard-1 block M), while shard node 2A executes shard-2 block M and shard node 3A executes shard-3 block M.
Having shards 1 to 3 execute their respective blocks at the same height at the same time serves to guarantee the atomicity of the execution results of the sub-transactions that a cross-shard transaction places in different shards. Specifically, shard 1, shard 2, and shard 3 each begin by executing their respective genesis blocks (hereinafter block 0), which are stored in each shard after consensus. A genesis block generally includes configuration information, such as the shard nodes included in the shard, the IP address and port number of each shard node, the public-key list of each shard node, and the identifiers of the consensus nodes in the shard. After executing the genesis blocks, shard 1, shard 2, and shard 3 begin executing their respective blocks 1. Specifically, after each shard determines by consensus the transactions belonging to its block 1 and their execution order, it begins executing those transactions. During execution, shard 1 may generate a cross-shard sub-transaction to be executed by shard 2. If that sub-transaction derives from a given transaction, shard 1 can determine that the transaction executed successfully only after determining that its cross-shard sub-transaction executed successfully. Accordingly, shard 1 can determine that the transactions in its block 1 executed successfully only once it has determined that the cross-shard sub-transactions sent to all other shard nodes executed successfully.
In the description above and below, a "transaction" denotes one of the transactions agreed by consensus within a shard as belonging to a block, and a "cross-shard sub-transaction" denotes a sub-transaction generated while executing a transaction or another cross-shard sub-transaction.
Similarly, shard node 2A and shard node 3A may also send cross-shard sub-transactions to the other shard nodes in federation party A, and likewise can determine that the transactions in block 1 of their shard executed successfully only once the cross-shard sub-transactions sent to all other shard nodes have executed successfully. Therefore, to guarantee correct execution of the cross-shard transactions in each shard's block 1, shard nodes 1A to 3A each wait until all shard nodes in federation party A have finished executing their cross-shard sub-transactions and all cross-shard sub-transactions are determined to have succeeded, and only then perform the following operations: update the world state of their own shard, i.e., store the world state corresponding to block 1; generate block 1 of their own shard (i.e., the block body and block header of block 1); and store block 1. This ends the execution of block 1 of the shard and guarantees the atomicity of the execution of the sub-transactions that the cross-shard transactions in each shard's block 1 comprise.
Shard node 1A to shard node 3A each begin executing block 2 only after determining that all shard nodes have finished executing block 1, and so on; they each begin executing block 3 only after determining that all shard nodes have finished executing block 2. The block execution of shard nodes 1A to 3A (i.e., of the different shards) is therefore synchronized, and every shard node has the same block height at any given time.
It is to be understood that although fig. 2 illustrates shard nodes 1A to 3A simultaneously executing blocks of the same height in their respective shards, the embodiments of the present specification are not limited thereto; they may also be combined with other techniques for guaranteeing transaction atomicity, in which different shards may execute blocks of different heights, that is, shard nodes 1A to 3A may simultaneously execute blocks of different heights.
As can be seen from the above, in a blockchain system with multiple shards as provided by the embodiments of the present specification, executing a block includes the following steps performed in order: the shards substantially simultaneously determine, by consensus, the transactions belonging to the block of a given height (say block M); each shard executes its own transactions belonging to block M; each shard executes the cross-shard sub-transactions received from other shards; and, after determining that every shard has finished executing its cross-shard sub-transactions and that no new cross-shard sub-transaction has been generated, each shard, based on the state information corresponding to the execution results of the transactions and cross-shard sub-transactions, updates the world state (i.e., stores the world state corresponding to block M), generates the block body and block header of block M, and stores block M. It will be appreciated that if no shard generates a cross-shard sub-transaction while executing the transactions belonging to block M, the process will not include the cross-shard sub-transaction execution step described above.
Specifically, as shown in fig. 2, taking the execution of block M by the shard nodes in federation party A as an example: after shard 1, shard 2, and shard 3 each determine by consensus the n transactions belonging to their respective block M, shard node 1A, shard node 2A, and shard node 3A each begin executing the n transactions belonging to block M of their shard (shown in fig. 2 by the rectangular box labeled "n"; each such box represents the execution of a batch of transactions or cross-shard sub-transactions). This execution of the n transactions may be regarded as the 1st round of transaction execution within the execution of block M. Although fig. 2 shows shard nodes 1A, 2A, and 3A each executing n transactions in round 1 for shard-1 block M, shard-2 block M, and shard-3 block M respectively, the embodiments of the present specification are not limited thereto: shard 1, shard 2, and shard 3 may each set their own number of transactions for their respective blocks M, and these numbers may be unequal. It is understood that shard-1 block M denotes the block of height M in shard 1, shard-2 block M denotes the block of height M in shard 2, and so on; shard-1 block M, shard-2 block M, and shard-3 block M are therefore different blocks.
As shown in fig. 1, federation party A further includes a backup node corresponding to each shard node. The operation of the backup nodes is described below taking backup node 1A as an example; the operation of backup node 2A and backup node 3A in fig. 1 can be understood by reference to that description. Backup node 1A corresponds to shard node 1A and is configured to back up the data that shard node 1A generates while executing blocks, so that backup node 1A can work in place of shard node 1A when shard node 1A fails. Specifically, after receiving the consensus result for the n transactions of block M, shard node 1A may send the n transactions and the consensus result to backup node 1A for backup. After executing the n transactions belonging to block M, shard node 1A generates state information for changing the world state based on their execution results and stores that state information, while also sending it to backup node 1A for backup.
After the 1st round of transaction execution for block M, each shard generates one or more cross-shard sub-transactions and sends them to the shard nodes of the corresponding shards. For example, after executing its n transactions, shard node 1A generates one or more cross-shard sub-transactions for shard 2 and sends them to shard node 2A, and generates one or more cross-shard sub-transactions for shard 3 and sends them to shard node 3A. Shard node 2A and shard node 3A similarly carry out cross-shard communication. As a result, shard node 1A receives, for example, q cross-shard sub-transactions in total from shard nodes 2A and 3A and executes them; shard node 2A receives, for example, m cross-shard sub-transactions in total from shard nodes 1A and 3A and executes them; and shard node 3A receives, for example, p cross-shard sub-transactions in total from shard nodes 1A and 2A and executes them. As shown in fig. 2, this may be regarded as the 2nd round of transaction execution by each shard node within the execution of block M. Likewise, after completing the 2nd round, each shard node backs up the state information corresponding to that round to its backup node.
It can be understood that, since the 2nd round of transaction execution is performed on the basis of the state information corresponding to the 1st round, the state information corresponding to the 2nd round includes the content of the previous state information that was not updated in the 2nd round. Meanwhile, each shard node may generate new cross-shard sub-transactions after completing the 2nd round and may send them to the corresponding shard nodes, which then start a new round of transaction execution.
A cross-shard sub-transaction is a transaction generated during the execution of a block; specifically, it may be generated while executing the transactions belonging to the block, or while executing another cross-shard sub-transaction. A cross-shard sub-transaction includes, for example, an operation instruction indicating the operation to be performed in the corresponding shard. A sub-transaction may take the same form as a transaction, i.e., include a sending account, a receiving account, a data field, and so on; unlike a transaction, however, since sub-transactions are sent between the mutually trusted shard nodes of one federation party, those shard nodes do not need to verify them, and a sub-transaction therefore need not carry a digital signature. Where the blockchain system uses state sharding, a cross-shard sub-transaction is used to query or alter the state (i.e., the world state) of accounts in the corresponding shard, for example by invoking a contract in that shard. In one embodiment, a cross-shard sub-transaction may include the hash value of the corresponding original transaction. In another embodiment, it may include execution-round information indicating in which round of transaction execution it was generated.
That is, the execution of a block by each shard node may include multiple rounds of transaction execution, and after each round, state information (i.e., information for changing the world state) corresponding to that round is generated and sent to the corresponding standby node for backup. For the 1st-round transaction execution process of block M, the corresponding state information is a record of the values of the variables written in that round; for each round starting from the 2nd, the corresponding state information records the values of the variables written in the current round on top of the state information corresponding to the previous round. Each round of transaction execution may be followed by a cross-shard communication process, in which cross-shard sub-transactions are sent across shards so that the nodes receiving them perform the next round of transaction execution, carrying out the operations corresponding to those sub-transactions. The multiple rounds of transaction execution performed by the respective shard nodes are thus kept essentially synchronized.
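The layering of per-round state information can be illustrated as follows: the state information of round k is the state information of round k-1 overlaid with the variable values written in round k, so unwritten variables carry forward unchanged. This is a sketch under assumed dictionary representations, not the embodiment's storage format.

```python
def merge_state_info(prev_state: dict, round_writes: dict) -> dict:
    """State info of round k = state info of round k-1 overlaid with the
    values of the variables written during round k."""
    merged = dict(prev_state)   # carry forward content not updated this round
    merged.update(round_writes) # apply this round's written values
    return merged

round1 = {"x": 1, "y": 2}                    # writes of the 1st round
round2 = merge_state_info(round1, {"y": 5})  # 2nd round rewrites only y
```

Here `round2` contains `x` unchanged from round 1, matching the observation that round-2 state information includes the content the previous round left un-updated.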
As shown in fig. 2, after each shard node completes the ith round of transaction execution, it is assumed that shard node 1A fails, so that shard node 2A and shard node 3A cannot connect to shard node 1A to send it the cross-shard sub-transactions corresponding to shard 1 generated in the ith round. In this case, shard node 2A and shard node 3A each connect to standby node 1A and send their cross-shard sub-transactions to standby node 1A. After receiving cross-shard sub-transactions from both shard node 2A and shard node 3A, standby node 1A may determine that shard node 1A has failed, and begins to operate in place of shard node 1A. Specifically, standby node 1A executes the received cross-shard sub-transactions (as shown by the rectangular frame in the row corresponding to standby node 1A) based on the backed-up state information corresponding to shard node 1A's ith round of execution, that is, it performs the (i+1)th round of transaction execution, generates cross-shard information according to the executed block, and sends that information to shard node 2A and shard node 3A, respectively.
In one case, standby node 1A generates one or more new cross-shard sub-transactions during the (i+1)th round of transaction execution and sends them to the corresponding shard nodes. In another case, standby node 1A generates no new cross-shard sub-transaction during the (i+1)th round, and therefore sends information indicating that no cross-shard sub-transaction was generated to shard node 2A and shard node 3A, respectively. Meanwhile, after completing the (i+1)th round, standby node 1A generates and stores the state information corresponding to that round. That is, standby node 1A performs the same operations as shard node 1A would. In the case where standby node 1A, shard node 2A, and shard node 3A all send one another information indicating that no cross-shard sub-transaction was generated, the three nodes may store the world state corresponding to block M, then generate and store block M, thereby ending the execution of block M and starting the execution of block M+1.
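The takeover trigger described above admits a compact sketch: the standby node concludes that the primary shard node has failed once every other shard has redirected its cross-shard sub-transactions to the standby node. This detection rule and the class shape are assumptions drawn from the fig. 2 narrative, not a definitive implementation.

```python
class StandbyNodeMonitor:
    """Tracks which shards have redirected their cross-shard
    sub-transactions to the standby node; once all other shards have
    done so, the standby node takes over for the failed shard node."""

    def __init__(self, other_shards):
        self.other_shards = set(other_shards)  # e.g. {"shard2", "shard3"}
        self.redirected_from = set()
        self.active = False                    # operating in place of primary?

    def on_sub_tx(self, from_shard, sub_tx):
        """Called when a cross-shard sub-transaction arrives at the standby."""
        self.redirected_from.add(from_shard)
        if self.redirected_from == self.other_shards:
            self.active = True  # primary deemed failed; begin (i+1)th round
```

For example, the standby node stays passive after hearing from shard 2 alone, and activates only when shard 3 has also redirected to it.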
In another case, when performing the (i+1)th round of execution, standby node 1A fails to execute one of the cross-shard sub-transactions. Standby node 1A then stops the (i+1)th round and, according to the hash value of the original transaction included in that cross-shard sub-transaction, notifies shard node 2A and shard node 3A that the original transaction is an error transaction. Assume that the error transaction belongs to block M of shard 2. After receiving the notification, shard node 2A therefore stops its (i+1)th round of execution and waits for the result of shard node 3A's (i+1)th round. If shard node 3A finds no other error transaction after completing the (i+1)th round, it notifies the other two shard nodes; after receiving this information, shard node 2A removes the error transaction from the plurality of transactions belonging to block M, rolls back the state information in its cache, and re-executes block M based on the world state corresponding to block M-1 of shard 2. Shard node 1A and shard node 3A likewise roll back and re-execute block M. If shard node 3A also finds an error transaction after completing the (i+1)th round, it notifies the other two shard nodes, and after receiving this information, shard node 2A rolls back its state and re-executes block M after removing the error transactions from the plurality of transactions belonging to block M.
If, through the same process, it is determined that there are error transactions among the transactions belonging to block M of shard 1 or shard 3, shard node 1A and shard node 3A likewise roll back their state as described above and re-execute the transactions belonging to block M with the error transactions eliminated.
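The rollback-and-re-execute step above can be sketched as follows: discard the cached per-round state information, restart from the world state of block M-1, and execute only the transactions that were not flagged as error transactions. The transaction field names (`hash`, `var`, `val`) and the single-assignment "execution" are illustrative assumptions standing in for real contract execution.

```python
def reexecute_block(block_txs, error_tx_hashes, world_state_prev):
    """Re-execute block M after a rollback: start again from the world
    state of block M-1 and skip the error transactions (sketch)."""
    state = dict(world_state_prev)  # rollback: cached round states discarded
    kept = [tx for tx in block_txs if tx["hash"] not in error_tx_hashes]
    for tx in kept:
        state[tx["var"]] = tx["val"]  # stand-in for real transaction execution
    return state, kept

txs = [{"hash": "t1", "var": "a", "val": 1},
       {"hash": "t2", "var": "b", "val": 2}]
new_state, kept_txs = reexecute_block(txs, {"t2"}, {"a": 0})
```

In this example transaction `t2` was reported as an error transaction, so the re-execution applies only `t1` on top of block M-1's world state.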
However, in the system shown in fig. 1, due to cost considerations, a standby node is typically a lower-performance device than a shard node, intended only to keep the service uninterrupted at a reduced performance level. Therefore, after a standby node takes over for a failed shard node and that shard node recovers, the system needs to switch back from the standby node to the shard node to provide service to users at normal performance. The scheme for switching back from the standby node to the shard node provided by the embodiments of the present specification is described below with reference to the flowchart of fig. 3.
Fig. 3 is a flowchart illustrating a method for switching a shard node in a blockchain system according to an embodiment of the present specification. The method is performed by a standby node that has been switched in as a node of a shard in the sharded blockchain system, and includes:
step S301, receiving online information from the shard node;
step S303, synchronizing data to the shard node;
step S305, after the synchronization is completed, sending synchronization completion information to the shard node and to the other shards in the blockchain system, respectively.
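The three steps above can be sketched as a short driver routine. The `Peer` and `Standby` classes and their method names are hypothetical scaffolding for illustration; the sketch only shows the ordering of steps S301, S303, and S305.

```python
class Peer:
    """Minimal stand-in for a shard node or another shard's node."""
    def __init__(self):
        self.online = True          # S301: online information already received
        self.synced_data = None
        self.sync_complete = False

    def receive_sync(self, data):
        self.synced_data = data

    def notify_sync_complete(self):
        self.sync_complete = True


class Standby:
    """Minimal stand-in for the standby node holding the first data."""
    def __init__(self, data):
        self._data = data

    def first_data(self):
        return self._data


def switch_back(standby, shard_node, other_shards):
    """Steps of fig. 3 (sketch): check online info, synchronize the first
    data, then broadcast synchronization completion information."""
    assert shard_node.online                        # step S301
    shard_node.receive_sync(standby.first_data())   # step S303
    for peer in [shard_node, *other_shards]:        # step S305
        peer.notify_sync_complete()
```

Broadcasting completion to the other shards (not only to the recovering node) is what lets them redirect subsequent cross-shard sub-transactions back to the shard node.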
Hereinafter, the method shown in fig. 3 is described with reference to the process diagrams shown in fig. 4 and fig. 5. In fig. 4 and fig. 5, the case where each shard is represented by its shard node belonging to federation party A is described as an example; it is to be understood that the embodiments of this specification are not limited thereto. For example, shard node 2A in fig. 4 may be replaced by any other node in shard 2, and shard node 3A in fig. 4 may be replaced by any other node in shard 3.
Fig. 4 is a schematic diagram of a possible procedure for switching a shard node. The method shown in fig. 3 will first be described with reference to the first case shown in fig. 4.
First, in step S301, online information is received from the shard node. Referring to fig. 4, it is assumed that standby node 1A, after switching in for shard node 1A as shown in fig. 2, performs the method shown in fig. 3 in order to switch back to the recovered shard node 1A. As shown in fig. 4, it is assumed that after standby node 1A, shard node 2A, and shard node 3A have all performed the jth round of transaction execution of block M and performed cross-shard communication with one another, standby node 1A receives online information from shard node 1A. This online information is sent by shard node 1A after failure recovery and indicates that shard node 1A can operate as a shard node again.
In step S303, data is synchronized to the shard node.
As shown in fig. 4, before receiving the online information, standby node 1A performs the jth round of transaction execution and carries out cross-shard communication with shard node 2A and shard node 3A. After completing the jth round, standby node 1A generates and stores the state information corresponding to that round, and receives a plurality of cross-shard sub-transactions from shard node 2A and shard node 3A. Thus, after determining that shard node 1A is online, standby node 1A synchronizes the most recent state information (i.e., the state information corresponding to the jth round of transaction execution) to shard node 1A, together with the one or more cross-shard sub-transactions received from shard node 2A and/or shard node 3A that remain to be executed, so that shard node 1A can continue the execution of block M directly on the basis of the synchronized data without restarting the execution of block M. Meanwhile, standby node 1A may also send shard node 1A the state information corresponding to each round of transaction execution from the (i+1)th round through the (j-1)th round. It is to be understood that the jth round may be any round in the execution of block M: for example, it may be the 1st round, i.e., the process of executing the plurality of transactions belonging to block M, or it may be any round in which cross-shard sub-transactions are executed.
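The synchronization payload described above can be sketched as a small builder. The field names are assumptions for illustration; the point is which pieces travel together: the round-j state information, the pending cross-shard sub-transactions for round j+1, and the intermediate state information for rounds i+1 through j-1.

```python
def build_sync_payload(state_by_round, pending_sub_txs, i, j):
    """Assemble the data the standby node hands to the recovered shard
    node so it can resume block M at round j+1 without re-executing
    earlier rounds (sketch; field names are assumed)."""
    return {
        "latest_state": state_by_round[j],            # round-j state info
        "pending_sub_txs": list(pending_sub_txs),     # to run in round j+1
        "intermediate_states": {r: state_by_round[r]  # rounds i+1 .. j-1
                                for r in range(i + 1, j)},
    }

states = {1: {"a": 1}, 2: {"a": 2}, 3: {"a": 3}}
payload = build_sync_payload(states, [{"sub": "tx"}], 0, 3)
```

With i = 0 and j = 3, the payload carries the round-3 state as the latest state and rounds 1 and 2 as intermediate state information.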
It is to be understood that fig. 4 shows only the case where standby node 1A synchronizes state information and cross-shard sub-transactions to shard node 1A. Depending on the execution results of the jth round at standby node 1A, shard node 2A, and shard node 3A, or on when the online information is received, standby node 1A synchronizes different data to shard node 1A, as described in detail below.
In step S305, after the synchronization is completed, synchronization completion information is sent to the shard node and to the other shards in the blockchain system, respectively.
Referring to fig. 4, after completing the synchronization of data to shard node 1A, standby node 1A sends synchronization completion information to shard node 1A, shard node 2A, and shard node 3A, respectively. After receiving the synchronization completion information, shard node 1A may execute the plurality of cross-shard sub-transactions received from standby node 1A based on the state information corresponding to the jth round of transaction execution also received from standby node 1A, thereby performing the (j+1)th round of transaction execution of block M; after completing the (j+1)th round, shard node 1A may again back up its state information to standby node 1A. Depending on when the synchronization completion information is received, shard node 2A and shard node 3A may, as shown in fig. 4, perform the (j+1)th round of transaction execution after receiving the synchronization completion information and send the newly generated cross-shard sub-transactions to shard node 1A, so that shard node 1A, shard node 2A, and shard node 3A can subsequently continue the execution of block M.
Alternatively, shard node 2A and shard node 3A may perform the (j+1)th round of execution before receiving the synchronization completion information and send the newly generated cross-shard sub-transactions to standby node 1A. In this case, after receiving the synchronization completion information, shard node 2A and shard node 3A learn that shard node 1A is online and may resend to shard node 1A the cross-shard sub-transactions newly generated in the (j+1)th round, so that shard node 1A, shard node 2A, and shard node 3A can likewise continue the execution of block M.
In a second case, differing from the case shown in fig. 4, standby node 1A generates new cross-shard sub-transactions after the jth round of transaction execution and sends them to shard node 2A and/or shard node 3A, while receiving from shard node 2A and shard node 3A information indicating that no cross-shard sub-transaction was generated; standby node 1A then receives the online information from shard node 1A after the cross-shard communication of the jth round. In this case, standby node 1A synchronizes to shard node 1A its state information corresponding to the jth round together with the received information indicating that no cross-shard sub-transaction was generated, and shard node 1A, after receiving the synchronization completion information, does not execute any cross-shard sub-transaction in the (j+1)th round of transaction execution.
In a third case, after the jth round of transaction execution, standby node 1A generates no new cross-shard sub-transaction and therefore sends information indicating that no cross-shard sub-transaction was generated to shard node 2A and shard node 3A, respectively, and likewise receives such information from shard node 2A and shard node 3A; after the cross-shard communication of the jth round, standby node 1A receives the online information from shard node 1A. In this case, standby node 1A synchronizes to shard node 1A its state information corresponding to the jth round together with the information that none of standby node 1A, shard node 2A, and shard node 3A has generated a cross-shard sub-transaction, so that shard node 1A, after determining that the synchronization is completed, can start generating and storing block M, and can update the world state based on the state information corresponding to the jth round, i.e., store the world state corresponding to block M.
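The block-finalization condition running through the second and third cases reduces to a simple predicate: block M's execution ends only when every shard node reports that it generated no cross-shard sub-transaction in the latest round. The report encoding below is an assumption for illustration.

```python
def block_finished(round_reports: dict) -> bool:
    """Block M ends when every shard node's latest cross-shard report
    indicates that no cross-shard sub-transaction was generated."""
    return all(report == "no_sub_tx" for report in round_reports.values())

# All three nodes reported no new sub-transactions: block M can be stored.
done = block_finished({"1A": "no_sub_tx", "2A": "no_sub_tx", "3A": "no_sub_tx"})
# Shard node 2A still produced a sub-transaction: another round is needed.
not_done = block_finished({"1A": "no_sub_tx", "2A": "sub_tx", "3A": "no_sub_tx"})
```

This is why the "no cross-shard sub-transaction generated" messages are sent explicitly rather than omitted: each node needs a positive signal from every peer before it can finalize the block.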
In a fourth case, during the jth round of transaction execution, standby node 1A receives information about an error transaction, such as the hash value of the error transaction, from shard node 2A and/or shard node 3A. Standby node 1A determines, based on this information, whether the error transaction is a transaction in block M of shard 1; if so, it stores the error transaction information and ends the (j+1)th round of transaction execution. If not, it continues the (j+1)th round to determine whether any cross-shard sub-transaction fails to execute in that round, and if so, sends the hash value of the transaction corresponding to that sub-transaction to the other shard nodes as error transaction information. After completing the (j+1)th round of transaction execution and the cross-shard communication, standby node 1A receives the online information from shard node 1A, and therefore synchronizes to shard node 1A the error transaction information of block M of shard 1 and/or of the other shards' blocks M. Thus, after the synchronization is completed, if block M of shard 1 contains an error transaction, shard node 1A re-executes the plurality of transactions belonging to block M of shard 1, with the error transaction eliminated, based on the world state of block M-1 of shard 1; if block M of shard 1 contains no error transaction, shard node 1A re-executes the plurality of transactions belonging to block M of shard 1 based on the world state of block M-1 of shard 1. Shard node 2A and shard node 3A likewise roll back their state and re-execute block M with the error transactions rejected.
In a fifth case, as shown in the possible process diagram for switching a shard node in fig. 5, standby node 1A receives the online information from shard node 1A after completing the jth round of transaction execution but before sending its cross-shard information. Standby node 1A therefore synchronizes to shard node 1A both the state information corresponding to the jth round and the cross-shard information to be sent to the other shard nodes. The cross-shard information may be one or more cross-shard sub-transactions, information indicating that no cross-shard sub-transaction was generated, or error transaction information, as described above. After completing the synchronization, standby node 1A sends synchronization completion information to each shard node. After receiving the synchronization completion information, shard node 1A sends the cross-shard information synchronized from standby node 1A to shard node 2A and/or shard node 3A; shard node 2A and shard node 3A, after receiving the synchronization completion information, may resend to shard node 1A the cross-shard information they had already sent to standby node 1A, so that each shard node can continue executing block M. For example, as shown in fig. 5, where shard node 2A and shard node 3A send cross-shard sub-transactions to one another after shard node 1A comes online, shard nodes 1A to 3A may each perform the (j+1)th round of transaction execution of block M.
In a sixth case, standby node 1A has already finished executing block M when it receives the online information from shard node 1A, and has completed the kth round of transaction execution for block N, a block after block M. In this case, in addition to synchronizing the state information and cross-shard information corresponding to the kth round to shard node 1A as described above, standby node 1A needs to synchronize the following data to shard node 1A: the block data and corresponding world states of blocks M through N-1, the plurality of transactions belonging to block N, and the execution results of those transactions. That is, standby node 1A needs to synchronize to shard node 1A both the data generated during shard node 1A's failure that requires persistent storage and the data needed to continue executing block N.
In a seventh case, the failure of shard node 1A is unrecoverable, and shard node 1A is replaced by a new shard node 1A' brought online. In this case, assuming that standby node 1A has already performed the jth round of transaction execution of block M when shard node 1A' comes online, standby node 1A needs to synchronize to shard node 1A', in addition to the state information and cross-shard information corresponding to the jth round as described above, the following data: the block data and corresponding world states of every block from the genesis block through block M-1, the plurality of transactions belonging to block M, and the execution results of those transactions. That is, standby node 1A needs to synchronize to shard node 1A' all data of shard 1 requiring persistent storage, starting from the genesis block, as well as the data needed to continue executing block M. Here, standby node 1A backs up from shard node 1A, during shard node 1A's normal operation, the data generated by shard node 1A that requires persistent storage.
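The sixth and seventh cases differ only in how far back the catch-up synchronization must reach, which can be stated compactly. The scenario labels below are illustrative assumptions; blocks are identified by height, with 0 standing for the genesis block.

```python
def catchup_range(scenario: str, m: int, n: int) -> list:
    """Block heights whose block data and world states the standby node
    must additionally synchronize (sketch).

    "recovered_behind": the recovered node missed blocks M .. N-1
                        while the standby executed ahead (sixth case).
    "new_node":         a brand-new node replaces an unrecoverable one
                        and needs everything from the genesis block
                        through block M-1 (seventh case).
    """
    if scenario == "recovered_behind":
        return list(range(m, n))      # blocks M .. N-1
    if scenario == "new_node":
        return list(range(0, m))      # genesis block .. block M-1
    raise ValueError(f"unknown scenario: {scenario}")
```

In both scenarios the range is synchronized together with the in-flight transactions and execution results of the block currently being executed, as the text above specifies.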
In addition, in any of the above cases, in a scenario where each shard node places received transactions into a transaction pool and obtains the plurality of transactions belonging to the block to be executed from that pool, standby node 1A may further synchronize the transactions currently in its transaction pool to shard node 1A, so that after shard node 1A finishes executing the current block, it can obtain the plurality of transactions belonging to the next block from those transactions.
Fig. 6 illustrates a node switching apparatus for switching a shard node in a blockchain system according to an embodiment of the present specification. The blockchain system includes a first shard and at least one second shard; the first shard includes a first node that was switched in as a node of the first shard after a node failure in the first shard and that stores first data acquired after the switch. The apparatus is applied to the first node and includes:
a receiving unit 61, configured to receive online information from a second node;
a synchronization unit 62, configured to synchronize the first data to the second node;
a sending unit 63, configured to send synchronization completion information to the second node and to each second shard after the synchronization is completed, so that the second node replaces the first node as a node of the first shard.
In one embodiment, each second shard includes a third node that is a trusted node with respect to the first node, and the sending unit is specifically configured to send the synchronization completion information to the second node and to each third node, respectively.
In one embodiment, the first data includes state information generated by the first node during execution of the first block.
In one embodiment, the first data further includes one or more pieces of first cross-shard information generated by the first node during execution of the first block.
In one embodiment, the first data further includes one or more pieces of second cross-shard information received by the first node from one or more of the second shards.
Another aspect of the present specification provides a computer readable storage medium having a computer program stored thereon, which, when executed in a computer, causes the computer to perform any one of the above methods.
Another aspect of the present specification provides a computing device comprising a memory and a processor, the memory having stored therein executable code, the processor implementing any of the above methods when executing the executable code.
In the embodiments of the present specification, after the shard node comes online, the standby node synchronizes the state information generated during block execution to the shard node, so that the shard node does not need to re-execute transactions and cross-shard sub-transactions, and state recovery and switching of the shard node can thus be achieved more quickly.
It is to be understood that the terms "first," "second," and the like are used herein for descriptive purposes only, to distinguish between similar concepts, and not for purposes of limitation.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
It will be further appreciated by those of ordinary skill in the art that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or a combination of both, and that the components and steps of the examples have been described generally in terms of their functionality in the foregoing description in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application. The software modules may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (21)

1. A method for switching a shard node in a blockchain system, the blockchain system comprising a first shard and at least one second shard, the first shard comprising a first node, the first node having been switched in as a node of the first shard after a node failure in the first shard and storing first data acquired after the switch, the method being applied to the first node and comprising:
receiving online information from a second node;
synchronizing the first data to the second node;
after the synchronization is completed, sending synchronization completion information to the second node and to each second shard, respectively, so that the second node replaces the first node as a node of the first shard.
2. The method according to claim 1, wherein each second shard includes a third node that is a trusted node with respect to the first node, and the sending of the synchronization completion information to the second node and each second shard comprises sending the synchronization completion information to the second node and each third node, respectively.
3. The method according to claim 1 or 2, wherein the first data comprises state information generated by the first node in the course of executing the first block.
4. The method of claim 3, wherein the state information comprises first state information generated by the first node after performing a plurality of transactions belonging to the first block.
5. The method of claim 3, wherein the state information comprises second state information generated by the first node after executing one or more cross-shard sub-transactions generated by one or more of the second shards during execution of the second block.
6. The method of claim 3, wherein the first data further comprises one or more first cross-shard information generated by the first node during execution of the first block.
7. The method of claim 6, wherein the one or more first cross-shard information comprises one or more cross-shard sub-transactions generated by the first node during execution of the first block.
8. The method of claim 6, wherein the one or more first cross-shard information comprises information generated by the first node during execution of the first block to indicate that no cross-shard sub-transaction was generated.
9. The method of claim 6, wherein the one or more first cross-shard information comprises error transaction information generated by the first node in the event of a failure to execute a cross-shard sub-transaction generated by any of the second shards during execution of the second block.
10. The method of claim 3, wherein the first data further comprises one or more second cross-shard information received by the first node from one or more of the second shards.
11. The method of claim 10, wherein the one or more second cross-shard information comprises one or more cross-shard sub-transactions received by the first node from one or more of the second shards, the one or more cross-shard sub-transactions being generated by the one or more second shards during execution of the second block.
12. The method of claim 10, wherein the one or more second cross-shard information comprises one or more pieces of information received by the first node from one or more of the second shards indicating that no cross-shard sub-transaction was generated.
13. The method of claim 10, wherein the one or more second cross-shard information comprises one or more erroneous transaction information received by the first node from one or more of the second shards.
14. The method of claim 2, wherein the blockchain system comprises a federation chain system, the first node, the second node, and the third node belonging to the same federation party.
15. An apparatus for switching a shard node in a blockchain system, the blockchain system comprising a first shard and at least one second shard, the first shard comprising a first node that was switched in as a node of the first shard after a node failure in the first shard and that stores first data acquired after the switch, the apparatus being applied to the first node and comprising:
a receiving unit, configured to receive the online information from the second node;
a synchronization unit configured to synchronize the first data to the second node;
and a sending unit, configured to send, after synchronization ends, synchronization end information to the second node and to each second shard, so that the second node replaces the first node as a node of the first shard.
16. The apparatus according to claim 15, wherein each of the second shards includes a third node that is a trusted node with respect to the first node, and the sending unit is specifically configured to send the synchronization end information to the second node and to each of the third nodes, respectively.
17. The apparatus according to claim 15 or 16, wherein the first data includes state information generated by the first node during execution of the first block.
18. The apparatus of claim 17, wherein the first data further comprises one or more first cross-shard information generated by the first node during execution of the first block.
19. The apparatus of claim 17, wherein the first data further comprises one or more second cross-shard information received by the first node from one or more of the second shards.
20. A computer-readable storage medium having a computer program stored thereon which, when executed by a computer, causes the computer to perform the method of any one of claims 1-14.
21. A computing device comprising a memory and a processor, the memory storing executable code which, when executed by the processor, implements the method of any one of claims 1-14.
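The three functional units recited in claims 15-16 (receiving, synchronization, sending) describe a simple handover protocol: the replacement node announces it is online, the outgoing node streams its first data over, and a synchronization-end notice tells the replacement node and the other shards that the switch is complete. A minimal sketch of that flow follows; all class, field, and method names here are illustrative assumptions, not from the patent text.

```python
from dataclasses import dataclass, field

@dataclass
class CrossShardInfo:
    # Claims 7-9 and 11-13 name three kinds of cross-shard information:
    # a cross-shard sub-transaction, a "no sub-transaction generated"
    # notice, and error transaction information.
    kind: str  # "sub_tx" | "no_sub_tx" | "error_tx"
    payload: str = ""

@dataclass
class SecondNode:
    node_id: str
    active: bool = False  # set once it replaces the first node
    state: dict = field(default_factory=dict)
    cross_shard_info: list = field(default_factory=list)

@dataclass
class FirstNode:
    shard_id: str
    state: dict = field(default_factory=dict)             # state from executed blocks (claim 17)
    cross_shard_info: list = field(default_factory=list)  # first/second cross-shard info (claims 18-19)
    log: list = field(default_factory=list)

    def receive_online(self, second_node: SecondNode) -> None:
        # Receiving unit: the replacement node announces it is online.
        self.log.append(("online", second_node.node_id))

    def synchronize(self, second_node: SecondNode) -> None:
        # Synchronization unit: copy the first data to the second node.
        second_node.state = dict(self.state)
        second_node.cross_shard_info = list(self.cross_shard_info)
        self.log.append(("synced", second_node.node_id))

    def send_sync_end(self, second_node: SecondNode, second_shards: list) -> list:
        # Sending unit: after synchronization ends, notify the second node
        # and every second shard so the second node takes over this node's
        # place in the first shard (per claim 16, the per-shard notice would
        # go to each shard's trusted third node).
        second_node.active = True
        notified = [second_node.node_id] + list(second_shards)
        self.log.append(("sync_end", tuple(notified)))
        return notified
```

In this reading, the second node holds a full copy of the first data once `send_sync_end` runs, so the other shards can route subsequent cross-shard sub-transactions to it without a gap in the first shard's service.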
CN202110419687.2A 2021-04-19 2021-04-19 Method and device for switching fragment nodes in block chain system Active CN113064768B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110419687.2A CN113064768B (en) 2021-04-19 2021-04-19 Method and device for switching fragment nodes in block chain system


Publications (2)

Publication Number Publication Date
CN113064768A CN113064768A (en) 2021-07-02
CN113064768B true CN113064768B (en) 2022-08-09

Family

ID=76566966

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110419687.2A Active CN113064768B (en) 2021-04-19 2021-04-19 Method and device for switching fragment nodes in block chain system

Country Status (1)

Country Link
CN (1) CN113064768B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114996350A (en) * 2022-04-28 2022-09-02 蚂蚁区块链科技(上海)有限公司 Block state synchronization method in block chain and first node

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111277645A (en) * 2020-01-16 2020-06-12 深圳市网心科技有限公司 Hot switching method for main and standby nodes, block chain system, block chain node and medium
CN111680050A (en) * 2020-05-25 2020-09-18 杭州趣链科技有限公司 Fragmentation processing method, device and storage medium for alliance link data

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11620152B2 (en) * 2018-07-20 2023-04-04 Ezblock Ltd. Blockchain sharding with parallel threads
CN110868434B (en) * 2018-08-27 2022-07-01 深圳物缘科技有限公司 Block chain consensus method and system of multilayer fragment architecture
CN109885264B (en) * 2019-04-16 2019-12-06 北京艾摩瑞策科技有限公司 Logic slicing method and system for block chain link points
CN111241593A (en) * 2020-01-02 2020-06-05 支付宝(杭州)信息技术有限公司 Data synchronization method and device for block chain nodes
CN111736963B (en) * 2020-06-08 2022-10-11 中国科学院计算技术研究所 Transaction processing system and method for backbone-free multi-partition block chain
CN112087497B (en) * 2020-08-17 2021-04-27 成都质数斯达克科技有限公司 Data synchronization method and device, electronic equipment and readable storage medium
CN112261159B (en) * 2020-12-21 2021-04-20 支付宝(杭州)信息技术有限公司 Method and system for executing cross-slice transaction, main chain node and target slicing node


Also Published As

Publication number Publication date
CN113064768A (en) 2021-07-02

Similar Documents

Publication Publication Date Title
CN110730204B (en) Method for deleting nodes in block chain network and block chain system
JP6382454B2 (en) Distributed storage and replication system and method
Liskov et al. Viewstamped replication revisited
US8140623B2 (en) Non-blocking commit protocol systems and methods
CN107919977B (en) Online capacity expansion and online capacity reduction method and device based on Paxos protocol
US9672244B2 (en) Efficient undo-processing during data redistribution
JP3798661B2 (en) Method for processing a merge request received by a member of a group in a clustered computer system
CN113064764B (en) Method and apparatus for executing blocks in a blockchain system
WO2012071920A1 (en) 2012-06-07 Method, system, token controller and memory database for implementing distribute-type main memory database system
GB2484086A (en) Reliability and performance modes in a distributed storage system
CN111130879B (en) PBFT algorithm-based cluster exception recovery method
CN110784331B (en) Consensus process recovery method and related nodes
US20210320977A1 (en) Method and apparatus for implementing data consistency, server, and terminal
CN107038192B (en) Database disaster tolerance method and device
CN116680256B (en) Database node upgrading method and device and computer equipment
CN115794499B (en) Method and system for dual-activity replication data among distributed block storage clusters
CN115098229A (en) Transaction processing method, device, node equipment and storage medium
CN113064768B (en) Method and device for switching fragment nodes in block chain system
CN113965578A (en) Method, device, equipment and storage medium for electing master node in cluster
CN109726211B (en) Distributed time sequence database
CN114422331A (en) Disaster tolerance switching method, device and system
CN113157450A (en) Method and apparatus for performing blocks in a blockchain system
CN105323271B (en) Cloud computing system and processing method and device thereof
CN113268382B (en) Method and device for switching fragment nodes in block chain system
CN106951443B (en) Method, equipment and system for synchronizing copies based on distributed system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant